Submitted by Victor Lamoine, Institut Maupertuis
The Ensenso N10 is an industrial 3D sensor that uses two cameras and the stereo-vision principle. One interesting feature is that the sensor works at infrared wavelengths: an IR projector casts a random pattern that allows scanning of uniform surfaces, where stereo correspondence would otherwise fail.
I integrated the Ensenso N10 into the Point Cloud Library (version 1.8.0, not yet released), and you can find a tutorial here. The PCL EnsensoGrabber API is available here. This camera is very easy to use, yet not very fast (only 4 FPS through PCL!) for a reason that is still unknown (it should be much faster).
I have made a package allowing fully automatic extrinsic calibration of the sensor mounted on a robot. It should be easy to adapt to a different robot or setup (e.g. a fixed camera): https://github.com/InstitutMaupertuis/ensensoextrinsiccalibration
The extrinsic calibration relies on Ensenso SDK internals; there is no dependency on the industrial_calibration package.
So if you plan on using such a camera on a robot, it should be very easy with this package! Don't hesitate to leave a message here or on the issue tracker if you have problems understanding how it works or getting it to work.
Update from Victor (Nov. 19):
I did not choose the EnsensoSDK over the industrial_calibration package.
In fact I began to work with the EnsensoSDK calibration in early 2014, at that time I didn't even know about ROS-Industrial!
So I just wanted to go to the simplest solution for the Ensenso and use my work (PCL integration of the Ensenso) to automatically calibrate the sensor.
I have begun working (well... understanding first!) with the industrial_calibration package because I want to calibrate the David SLS-2 on the robot (their SDK does not provide any calibration procedure).
So in short, the next package will be sls_2_extrinsic_calibration and it will depend on industrial_calibration.
Source code: https://github.com/swri-robotics/delaunay
At the 5th edition of the ROS-Industrial training workshop, held at Fraunhofer IPA in Stuttgart, Germany, attendees from both industry and academia were given a general introduction to the ROS concept, structure and tools, and then received hands-on training in three separate sessions covering perception, manipulation, and navigation.
The workshop was followed by an overview of current activities of the ROS-Industrial Consortium Europe. We are happy about the feedback that we received, and as ROS(-Industrial) awareness and usage spreads, we are preparing for more in-depth training workshops to be delivered in 2016. You can find the dates of the next training sessions in the calendar of upcoming events.
The ROS-Industrial Initiative had a strong presence at the last edition of ROSCon, the ROS developers' conference, which was held on October 3-4 in Hamburg, Germany, right after IROS. This ROSCon, the fourth since the inaugural edition in 2012, was particularly significant as the available registration slots sold out weeks before the event. The organizers, the OSRF, managed to fit more than 350 people at the conference venue, which spread over two buildings at the University of Hamburg, right in the heart of the city.
Not only was attendance a record high, but a quick "raise your hands" poll among the participants showed that the overwhelming majority was attending ROSCon for the first time. This is a clear sign that ROS is quickly raising interest outside its original development community, which confirms our strong belief that it can be of great value for the broader robotics and automation world. The scheduled and lightning talks showcased a great variety of use cases and technical developments, ranging from autonomous driving to automated warehouses. ROS-Industrial was represented with members from both the Consortia attending and delivering talks, in addition to providing information about the initiative to conference participants visiting our booth at the exhibitor area.
During the second birds-of-a-feather session the 5th ROS-Industrial community meeting was held, during which we discussed driver development, the upcoming FTPs, and the details on how to join the online developers meeting. We would like to thank all participants for contributing to an interesting discussion!
This was probably the ROSCon edition marking the de facto acceptance of ROS as the robotics software "tool of choice" for practitioners in academia and industry. It was very interesting to discuss with participants how the robotics landscape is evolving, and rest assured that we at ROS-Industrial are closely following such developments: please refer to our upcoming events schedule to see for yourself what an exciting year lies ahead.
Originally posted as NSF Awards Abstract #1531065
PI: Malathi Veeraraghavan, Shaun Edwards (Co-PI)
Currently, industrial robots are cost-effective for repetitive and high-volume tasks such as welding and painting, but not for lower-volume, mixed-part production. The need for robotic part handling for unstructured industrial applications is diverse. In manufactured-goods distribution centers, where multiple bins are presented to an operator, a human is required to handle a range of parts that must be boxed and shipped. In the reclamation and recycling industry, humans sort waste streams of mixed products on conveyor belts. Assembly and kitting operations in manufacturing are considered opportunities for robotics, but they require a solution for handling many part types in the same work-cell. This project will research and integrate technologies to enable the use of industrial robots for low-volume mixed-part production tasks. The proposed solution will include 3D image sensors and high-speed flexible networking, cloud computing, and industrial robots. The inclusion of cutting-edge new software such as the Robot Operating System Industrial (ROS-I) and cloud computing platforms offers excellent educational opportunities for both undergraduate and graduate students. The software developed in this project will be widely distributed to enable further innovations by other teams.
The project objective is to develop cloud robotics applications that leverage high-performance computing and high-speed software-defined networks (SDN). Specifically, the target applications combine big-data analytics of sensor data (of the type collected from factory floors) with the control of industrial robots for low-volume, mixed-part production tasks. Cloud computers located at a remote facility relative to the factory floor on which industrial robots operate can be used for compute-intensive applications such as object identification from 3D sensor data, and grasp planning for the robots to perform object manipulation. The project methods will consist of (i) integrating ROS-I components and developing new software as required to transmit the 3D sensor data to remote computers, running the object identification and grasp planning applications, and returning robot instructions to the original site, (ii) running this software on geographically distributed compute clouds, (iii) collecting measurements and enhancing the software to meet real-time delay requirements. The technical challenge lies in meeting these stringent real-time requirements. For example, high-speed networks with the flexibility to connect arbitrary factory floors and datacenters are needed to transfer the 3D sensor data quickly to the remote cloud computers and to deliver the computed robot instructions (hence, SDN).
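As a rough sketch of step (i), the round trip (ship sensor data to a remote planner, receive robot instructions back) can be prototyped with a plain TCP exchange. Everything below is an illustrative stand-in: a loopback socket instead of an SDN link, and a centroid computation instead of real object identification and grasp planning.

```python
import json
import socket
import struct
import threading
import time

def _recv_exact(conn, n):
    """Read exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed early")
        buf += chunk
    return buf

def serve_once(server_sock):
    """Stand-in for the remote cloud service: receive a point cloud,
    'plan', and return robot instructions."""
    conn, _ = server_sock.accept()
    with conn:
        (length,) = struct.unpack("!I", _recv_exact(conn, 4))
        cloud = json.loads(_recv_exact(conn, length))
        # Trivial stand-in for object identification + grasp planning:
        # aim the gripper at the centroid of the received points.
        n = len(cloud["points"])
        centroid = [sum(p[i] for p in cloud["points"]) / n for i in range(3)]
        reply = json.dumps({"move_to": centroid}).encode()
        conn.sendall(struct.pack("!I", len(reply)) + reply)

def round_trip(cloud, host="127.0.0.1"):
    """Send a (toy) point cloud to the service and time the round trip."""
    server = socket.socket()
    server.bind((host, 0))
    server.listen(1)
    t = threading.Thread(target=serve_once, args=(server,))
    t.start()
    start = time.perf_counter()
    with socket.create_connection(server.getsockname()) as s:
        payload = json.dumps(cloud).encode()
        s.sendall(struct.pack("!I", len(payload)) + payload)
        (length,) = struct.unpack("!I", _recv_exact(s, 4))
        instructions = json.loads(_recv_exact(s, length))
    latency = time.perf_counter() - start
    t.join()
    server.close()
    return instructions, latency
```

Measuring `latency` over a real wide-area link, rather than loopback, is exactly where the project's real-time delay requirements come into play.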
For more information about the US Ignite Program refer to: Link
October 21, Stuttgart, Germany
The ROS-Industrial Consortium Europe is hosting a ROS-Industrial Training Class October 21 in Stuttgart, Germany. For details, refer to the event page.
Registration is still open: Link
November 16-18, San Jose area, California, USA
The ROS-Industrial Consortium Americas is coordinating with Flex and Silicon Valley Robotics to host a ROS-Industrial Developers Training Class November 16-18, 2015 in Milpitas, California. For details, refer to the events page.
Registration is now open: Link
The ROS-Industrial team is pleased to present its schedule of events for the upcoming year (below). Please refer to the events page for details as they become available.
|Oct. 21, 2015||RIC-EU||ROS-I Developers Training||Fraunhofer IPA, Stuttgart, Germany|
|Oct. 31, 2015||TU Delft||CAD to ROS Workbench Milestone 1 Launch (Anticipated)|
|Oct. 31, 2015||RIC-Americas||Deadline for FTP Sign-up: Robotic Blending Mi. 4|
|Nov. 16-18, 2015||RIC-Americas, SVR, Flex||ROS-I Developers Training||Flex, Milpitas CA, USA|
|Dec. 4, 2015||RIC-Americas||RIC Roadmapping Meeting||WebMeeting|
|Dec. 14, 2015||RIC-Americas||ROS-I Community Meeting||RoboUniverse in San Diego, CA, also WebMeeting|
|Dec. 15-16, 2015||RIC-Americas||ROS-I Booth, Presenter Panel||RoboUniverse in San Diego, CA|
|Dec. 15, 2015, 2:15 PM||RIC-Americas||Presentation Panel: 3D Sensing for Industrial Robotics||RoboUniverse in San Diego, CA|
|Jan. 7-8, 2016||RIC-EU||RIC-EU Meeting||Fraunhofer IPA, Stuttgart, Germany|
|Jan. 8, 2016||RIC-EU||RIC Roadmapping Meeting||WebMeeting|
|Jan. 31, 2016||RIC-Americas||Deadline for FTP Sign-up: 5D Slicer for Robotic 3D Printing|
|Jan. 31, 2016||RIC-EU||Deadline for FTP Sign-up: CAD to ROS Workbench Mi. 2|
|Feb. 9, 2016||RIC-Americas||RIC Roadmapping Meeting||WebMeeting|
|Mar. 3, 2016||RIC-Americas||RIC-Americas Annual Meeting||SwRI, San Antonio, TX, USA|
|Mar. 21-23, 2016||RIC-EU||ROS-I Presentation||European Robotics Forum, in Ljubljana, Slovenia|
|Apr. 6-8, 2016||RIC-Americas||ROS-I Developers Training Class||SwRI, San Antonio, TX USA|
|Apr. 15, 2016||TBD||(Tentative) ROS-I Workshop - Asia Region||TBD|
|Apr. 20, 2016||RIC-EU||ROS-I Developers Training||Fraunhofer IPA, Stuttgart, Germany|
|Apr. 30, 2016||RIC||Deadline for FTP Sign-up: Topic TBD||Americas, Europe|
|May 16-21, 2016||RIC-EU||ROS-I Community Meeting||ICRA, Stockholm, Sweden|
|May 16-21, 2016||Amazon||Amazon Picking Challenge (Anticipated)||ICRA, Stockholm, Sweden|
|Jun. 1-3, 2016||RIC-EU||RIC-EU Meeting||RoboBusiness Europe, Odense, Denmark|
|Jun. 8, 2016||RIC-EU||ROS-I Conference||Fraunhofer IPA, Stuttgart, Germany|
|Jun. 21-24, 2016||RIC-EU||TBD||Automatica Trade Fair, Munich, Germany|
|Sept. 12-17, 2016||RIC-Americas||TBD||IMTS 2016, Chicago, USA|
|Nov. 6-9, 2016||RIC-Americas||ROS-I Booth||Pack Expo International, Chicago, USA|
We are grateful to Sachin Chitta for hosting the inaugural MoveIt! community meeting on September 3, 2015. During the meeting, Jorge Nicho, an SwRI ROS-Industrial team member, presented his work updating the Stochastic Trajectory Optimization for Motion Planning (STOMP) planner for use in the Indigo release of MoveIt! (refer to the video below). STOMP is particularly useful for generating well-behaved, smooth, collision-free motion plans in reasonable time.
Links to related resources:
Submitted by Mr. Moshe Schwimmer, Product Lifecycle Management, Siemens Digital Factory
Siemens PLM Software is a leading global provider of product lifecycle management (PLM) software. These PLM solutions can help make smarter decisions that lead to better products.
In this post you'll find a demo (below) of a simulated hybrid industrial environment that I presented recently at both the yearly ROS-Industrial Americas meeting and the ROS-Industrial conference in Europe. It includes ROS-operated robots, classical robots (programmed by Process Simulate), a simulated human, and equipment.
In the demo, I use Process Simulate to create an environment with both a UR robot, operated directly by ROS, and a KUKA robot, operated by Process Simulate. You'll see a simulated human and other equipment in use as well, and everything is synchronized by Process Simulate's simulated PLCs.
As you'll see in the demo, the UR robot picks boxes from a conveyor and moves them to the right container, according to color. The color is determined by the OpenCV package, which is loaded by ROS. Once a container is full, the KUKA robot picks it up and moves it to the removal area for the simulated human to take it away.
The simulation uses proximity, light, and vision sensors to provide information that is gathered in Process Simulate. Some of it is processed locally and some of it is sent in real time, on each time interval, to the ROS environment.
In the ROS environment, the only thing that is modeled is the UR robot. The ROS-controlled robot uses the OpenCV and MoveIt! packages to understand its surroundings and plan its path. ROS sends the robot's next location to Process Simulate at each time interval.
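The color check in the demo is done with OpenCV; as a language-agnostic illustration of the idea, a box's mean color can be classified against a set of reference colors and mapped to a container. The reference values below are made up for the example:

```python
# Hypothetical stand-in for the OpenCV color check in the demo: classify a
# box's mean BGR pixel value against reference colors and pick a container.
REFERENCE_COLORS = {            # mean BGR values; illustrative only
    "red":   (40, 40, 200),
    "green": (40, 200, 40),
    "blue":  (200, 40, 40),
}

def classify_box(mean_bgr):
    """Return the container whose reference color is nearest (Euclidean)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(mean_bgr, c))
    return min(REFERENCE_COLORS, key=lambda name: dist2(REFERENCE_COLORS[name]))
```

In the real system this decision would be made per frame on the camera image, with the chosen container name published to the robot's pick-and-place logic.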
The step forward, represented by the demo, is the collaboration between ROS packages and the Process Simulate environment, including use of simulated industrial robotics and equipment.
On a related note, I'd also like to let you know about our Frontier Partner program:
The Frontier Partner program grants leading robotics-focused startups developer licenses for a broad range of Siemens’ PLM software (including Process Simulate), access to its technology partner program, and other development resources. Siemens is looking for partners with new approaches in robotics simulation, motion planning, robot interoperability, deployment and optimization in the field of industrial robotics, and especially startups that are developing on ROS.
More details can be found here: https://www.frontier.spigit.com/Page/Robotics
If you are looking to share interesting view points, use cases and environment challenges which are related to ROS-I, contact me at: moshe.schwimmer (at sign) siemens.com
A team of developers from Southwest Research Institute, The Boeing Company, Caterpillar, Wolf Robotics, and TU Delft have recently completed the third milestone of the Scan-N-Plan for Robotic Blending Focused Technical Project. Sponsors for this milestone include The Boeing Company and Caterpillar, Inc.
Scan-N-Plan technologies are a suite of open-source software tools that enable automated process-planning and execution based on 3-D scan data. The Robotic Blending project is focused on bringing these Scan-N-Plan capabilities to the world of surface finishing.
In many factories today, a part fresh from a CNC machine will require a manual post-processing step to ensure that the surface is sufficiently smooth to eliminate fatigue crack growth factors and/or to prepare the surface for painting. The physical labor of smoothing the surface is frequently accomplished using hand-held power tools and these actions, over time, can cause repetitive stress injuries. Robotic automation is a desirable alternative, but programming for each unique part's geometry and surface defects is not cost effective.
The goal of the Robotic Blending project is to allow an operator to place a part requiring surface processing into a robot work cell and have the robot automatically generate a plan and execute it. Using a simple 4-button software interface, the operator follows a step-by-step procedure to scan, preview, blend, and inspect a part in only a few minutes (see the video). The operator's only input to the system is to instruct the robot where to look for parts, what surfaces should be processed, and to give approval to the generated process plans. An administrator menu is also provided to adjust process parameters (e.g. feeds, speeds, tool geometry, QA tolerances, etc.).
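The four-button procedure can be pictured as a small state machine that only lets the operator advance in order. This sketch is an illustration of the workflow described above, not the project's actual interface code:

```python
# A minimal sketch of the 4-button operator workflow (scan, preview,
# blend, inspect); the step names come from the post, the state-machine
# structure is an illustrative assumption.
WORKFLOW = ["scan", "preview", "blend", "inspect"]

class BlendingWorkflow:
    def __init__(self):
        self.done = []

    def press(self, button):
        """Advance one step; each button is only valid in order, so the
        operator's approval gates each stage."""
        expected = WORKFLOW[len(self.done)]
        if button != expected:
            raise ValueError(f"expected '{expected}', got '{button}'")
        self.done.append(button)
        return button

    @property
    def finished(self):
        return self.done == WORKFLOW
```

Gating each stage on an explicit button press is what keeps the operator's approval in the loop before any process plan is executed.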
The current system can process flat surfaces at arbitrary, but reachable, positions and orientations. Process path generation and robot trajectory planning are very fast: roughly one second per surface. Milestone 3 saw the inclusion of a vastly improved user interface (including the operator interface mentioned previously), an improved laser scanner driver, adjustable process parameters, new support for ABB robots, and robot simulations/previews. In addition, demonstrations of the Milestone 3 software were independently and successfully performed at Boeing and Caterpillar facilities (on different robots) the week of July 27, 2015.
The next milestone will expand the capabilities of the system to deal with more complex/non-flat parts, to perfect the blending process, and to "close the loop" between the QA scoring and process planning by reworking the part until it objectively meets a given quality metric.
Special thanks to the following software developers who contributed to this milestone:
- Adam Clark - Boeing
- Chris Sketch - Caterpillar
- Gijs van der Hoorn - TU Delft
- Jonathan Meyer - SwRI
- Matthew West - Caterpillar
- Zach Bennett - Wolf Robotics
Most robot controllers cannot process large and complex data such as point clouds or meshes. Because of this limitation, it is not possible to automatically generate complex paths on surfaces using a robot controller alone. The Institut Maupertuis decided to create a ROS library that automatically plans complex paths on 3D surfaces. The aim of the Bezier project is to provide a simple, yet generic, library for this purpose. It relies heavily on the Point Cloud Library (PCL) and VTK to generate robot poses. The capability to generate variable height passes (instead of fixed height planar passes) on complex surfaces is key. At the moment the library is oriented towards milling applications. However, we expect it can be used in other processes as well.
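To illustrate what "variable height passes" means in practice, here is a toy sketch (not the Bezier API) in which each pass follows the scanned surface, stepped down by the depth of cut, instead of being a fixed-height plane:

```python
# Illustrative sketch of variable-height pass generation: each pass tracks
# the surface offset by the depth of cut, until the final height is reached.
def generate_passes(surface_heights, depth_of_cut, final_height=0.0):
    """surface_heights: list of Z samples along one pass line.
    Returns a list of passes, each a list of Z values, stepping the tool
    down by depth_of_cut per pass until final_height is reached."""
    passes = []
    current = list(surface_heights)
    while any(z > final_height for z in current):
        current = [max(z - depth_of_cut, final_height) for z in current]
        passes.append(current)
    return passes
```

The real library works on full 3D meshes and produces 6-DOF robot poses, but the principle is the same: the pass geometry adapts to the surface rather than slicing it with flat planes.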
The planning method requires only input meshes (raw scan or CAD) and tool parameters (pass height, width) to generate a full 3D robot trajectory. The project was initiated in March 2015 and a first version was released publicly in July; the Institut Maupertuis demoed an automatic machining application on polystyrene using the Bezier library:
Hardware interfaces are particularly important for any future integration of ROS-Industrial with production systems. With EtherCAT (maintained by Intermodalics) and CANopen (maintained by Fraunhofer IPA) already supported, we are happy to announce support for PROFINET, one of the most widely used fieldbuses in the automation world.
PROFINET, an open automation standard and part of IEC 61158, is not only fully compatible with all the features of standard Ethernet, but is also capable of real-time performance. It enables high-speed data exchange for the entire range of automation applications. PROFINET uses three communication services (Standard TCP/IP, Real Time, Isochronous Real Time), which can be used simultaneously, allowing transfer of input and output data within sub-millisecond cycle times. To access all the features of PROFINET, specialized hardware is necessary. There are various PROFINET PCI cards on the market. We decided on the communication processor CP1616 from Siemens, since it provides not only Linux-compatible drivers, but also support for IO Controller/IO Device modes. This allows ROS-Industrial to act as either a master or a slave on a PROFINET network.
The goal of the current development is to define a ROS-PROFINET abstraction layer and provide a specific implementation for the CP1616. The package release is targeted for September.
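As an illustration of what such an abstraction layer could look like (class and method names here are assumptions, not the released package's API), the device-independent part reduces to a cyclic process-image read/write interface that a CP1616 backend, or a test double, can implement:

```python
# Hedged sketch of a ROS-PROFINET abstraction layer: a device-independent
# interface, with the CP1616 as one possible concrete backend. All names
# here are illustrative assumptions.
from abc import ABC, abstractmethod

class ProfinetDevice(ABC):
    """Cyclic process-image exchange, independent of the PCI card used."""

    @abstractmethod
    def read_inputs(self) -> bytes:
        """Return the current input process image."""

    @abstractmethod
    def write_outputs(self, data: bytes) -> None:
        """Write the output process image for the next cycle."""

class LoopbackDevice(ProfinetDevice):
    """Test double: outputs written in one cycle appear as the next inputs."""
    def __init__(self, size=8):
        self._image = bytes(size)

    def read_inputs(self):
        return self._image

    def write_outputs(self, data):
        self._image = bytes(data)
```

Splitting the interface from the backend this way is what would let the same ROS nodes run against a CP1616, a different PROFINET card, or a simulator.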
Special thanks to the Google Summer of Code program for supporting this effort. Additional thanks to Siemens for technical support. Software development is ongoing and can be found in the ROS-Industrial Siemens experimental repo.
If you follow ROS-related news you probably noticed that packages were contributed back in May to interface ROS systems to the Cognex In-Sight camera and to Siemens S7 PLCs via Modbus TCP communication. Generation Robots, the company behind these contributions, is in fact not new to ROS development, as its CEO Jérôme Laplace told us.
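To give a feel for what the Modbus TCP side involves, here is a minimal sketch that builds a "Read Holding Registers" request frame per the Modbus specification. This is an illustration of the wire format, not code from the contributed packages:

```python
import struct

def read_holding_registers_request(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.
    MBAP header: transaction id, protocol id (always 0), length of the
    remaining bytes, unit id; then the PDU: function code, starting
    address, register count. All multi-byte fields are big-endian."""
    pdu = struct.pack("!BHH", 0x03, start_addr, count)
    mbap = struct.pack("!HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu
```

Sending such a frame to a PLC's port 502 and parsing the response is essentially all a minimal Modbus TCP client has to do, which is why the protocol is such a convenient bridge between ROS and industrial controllers.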
Generation Robots' R&D branch HumaRobotics worked over the years with ROS on platforms such as NAO from Aldebaran, Baxter from Rethink Robotics, Q.bo from Thecorpora and DARwIn-OP / DARwIn-Mini from Robotis. Their staff of cognitive science PhDs and robotics engineers provides ROS-based solutions both on real robots and during simulation. For example, the CEA (French Alternative Energies and Atomic Energy Commission) has sought their expertise on the DARwIn-OP and PhantomX robots for usage in inspection, radioactive material handling and disaster relief scenarios. An outcome of this collaboration has been to provide the community with simulation packages for both robots in the Gazebo simulator, user friendly ROS APIs and custom walking algorithms.
HumaRobotics helps industrial collaborators by sharing their expertise in human-robot interaction to bring collaborative capabilities to the ROS-enabled Baxter: for instance, by enabling it with speech recognition and synthesis, adaptive dialog abilities, human posture detection and natural face-to-face interaction with an operator. They also make use of advanced machine learning techniques to provide fast and natural inverse kinematics for physical interaction between the robot and the human (tool passing, third-hand).
"Industrial scenarios often involve integrating robots with standard industrial devices and protocols. ROS-enabled robots do not always have such capabilities by default, but one of the strengths of ROS is how easy it can be extended with new functionalities", Laplace said. "Due to its community-driven nature and the wide range of existing functionality, ROS is really enabling fast development of advanced robotic systems".
To join HumaRobotics in the fast-growing community of ROS(-Industrial) adopters and speed up the prototyping and development of industrial robot applications, download the code. Contact ROS-I (Americas, Europe) to better understand what the ROS-Industrial Consortia can do for you!
The ROS-Industrial Consortium is tackling a topic that is of interest to the whole ROS community: conversion of CAD data to ROS-interpretable file types (e.g. URDF, SRDF). This work will be conducted over the next three years by the TU Delft Robotics Institute. To help us make ROS even more convenient to use:
- Click to read a public version of the CAD to ROS FTP proposal.
- Submit a letter of intent to participate by the end of July. UPDATE: Now due Aug. 7.
- Participants must be current ROS-I Consortium members. Join now!
- A full proposal is available upon request: email@example.com.
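To illustrate the target output format, here is a minimal sketch that emits a tiny URDF with Python's standard library. It is not the FTP's tooling, just a picture of the kind of XML a CAD-to-ROS workbench must produce:

```python
import xml.etree.ElementTree as ET

def make_urdf(robot_name, links, joints):
    """Build a minimal URDF document.
    links: list of link names; joints: list of (name, parent, child).
    Real URDFs also carry geometry, inertia, and joint limits extracted
    from the CAD model; those are omitted here for brevity."""
    robot = ET.Element("robot", name=robot_name)
    for link in links:
        ET.SubElement(robot, "link", name=link)
    for name, parent, child in joints:
        joint = ET.SubElement(robot, "joint", name=name, type="fixed")
        ET.SubElement(joint, "parent", link=parent)
        ET.SubElement(joint, "child", link=child)
    return ET.tostring(robot, encoding="unicode")
```

The hard part the project tackles is not emitting this XML but extracting the link/joint structure, geometry, and physical properties from CAD data in the first place.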
Over the past 6 months, the SwRI ROS-Industrial team has been executing a Cooperative Research program with the National Institute of Standards and Technology (NIST). From a manufacturing perspective, NIST’s impact is quite diverse. It includes aspects from general process improvement to specific manufacturing processes like nano-manufacturing, and of course robotics.
A core theme of the NIST-supported ROS-Industrial program is agility: the ability of manufacturing systems to perform a diverse set of tasks, with the built-in intelligence to re-task on the fly. Agility is perhaps the greatest unrealized promise of robotics. With the support of NIST, it is this valuable and critical aspect of robotics that ROS-Industrial aims to enable. The research effort is broken down into several sub-tasks, outlined below. The tasks vary, some with immediate impact and others with longer-term goals, but they all share the common theme of enabling robotic agility.
Robot Testing and Evaluation
Testing and evaluation (T&E) are very important for both measuring and comparing the performance of complex systems. Prior collaborative work was focused on test methods for Response Robots (think robots climbing around piles of rubble). Through these efforts a standard test-suite for response robots was developed. This test-suite demonstrably pushed the state of the art in response robots. With the goal in mind of measuring and pushing the state of the art in robotic agility, SwRI is developing test methods for evaluating robots for complex tasks, such as assembly.
Dual Arm Manipulator Development
Dual arm manipulation is an exciting area of research. Such systems mimic human operations, giving robotic systems the ability to both hold an object with one arm and perform an operation with the other. With NIST support, SwRI researchers have developed ROS-Industrial Hilgendorf support software for the robot configuration shown below. The support software was open sourced to jump-start dual arm manipulation research on similar setups. The Hilgendorf system configuration can be easily assembled from off-the-shelf components. ROS-I researchers will utilize Hilgendorf for developing dual arm applications.
Calibration Library Improvements
The ROS-Industrial Calibration Library is a powerful tool for calibrating frame transformations between multiple robots and sensors. Improvements have been made to this library to make the data collection and calibration steps more streamlined. An additional goal of this effort was to evaluate the accuracy of a system calibrated with our library. System evaluation is a key part of the NIST mission. An example system with a network consisting of 6 cameras was calibrated with the ROS-Industrial Calibration Library using a target held by a UR10 robot. The system demonstrated pose variance for each camera better than 1/4 mm and 1/10th degree.
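The repeatability figures quoted above come down to measuring the spread of repeated pose estimates. A toy version of that check (illustrative only, not the calibration library) for the translation component:

```python
import math

def translation_std(poses):
    """poses: repeated (x, y, z) estimates of the same target, in mm.
    Returns the per-axis population standard deviation, a simple measure
    of how repeatable a calibrated camera's pose estimate is."""
    n = len(poses)
    means = [sum(p[i] for p in poses) / n for i in range(3)]
    return [math.sqrt(sum((p[i] - means[i]) ** 2 for p in poses) / n)
            for i in range(3)]
```

A claim like "better than 1/4 mm" corresponds to every axis of this spread staying under 0.25 mm across repeated target observations.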
Ontologies for Agile Planning in Manufacturing
Past (and present) robotic automation is primarily used in high-volume/low-variation production applications, with the exceptions being driven by safety and environmental conditions. The obstacle that steers automation away from low-volume/high-variation production applications is the effort associated with teaching each part. The goal is to develop an ontology structure that represents the assembly process in a way that allows automated planning and assembly tasks to be executed. Current literature approaches the problem in a way similar to how a child learns to perform a new task: there is a low-level skill set (refer to the figure below) that must first be taught, which can then be used to complete complicated tasks. The challenge is to formulate the skill primitives in such a way that they are robot-independent and can store all the information necessary for the robot to execute them efficiently. In the long term, such an ontology could enable highly dynamic and generic functionality within ROS-Industrial.
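The skill-primitive idea can be sketched as data structures: robot-independent primitives that carry their execution parameters, composed into a higher-level task. The primitive set and field names below are illustrative assumptions, not the ontology under development:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """A robot-independent skill primitive carrying its parameters."""
    name: str                      # e.g. "move_to", "grasp", "insert"
    parameters: dict = field(default_factory=dict)

@dataclass
class AssemblyTask:
    """A task composed from an ordered sequence of skill primitives."""
    name: str
    steps: list

    def primitive_names(self):
        return [s.name for s in self.steps]

# Example composition: a peg-in-hole assembly built from three primitives.
peg_in_hole = AssemblyTask("peg_in_hole", [
    Skill("move_to", {"target": "above_hole"}),
    Skill("grasp", {"object": "peg"}),
    Skill("insert", {"hole": "hole_1", "tolerance_mm": 0.1}),
])
```

Because the primitives carry no robot-specific details, the same task description could in principle be dispatched to different robots, each supplying its own execution of each primitive.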
Descartes Joint Trajectory Planner for Semi-Constrained Cartesian Paths
This grant also supported development of the Descartes Path Planner. Please refer to our previous post for a description and video of Descartes.
This work was conducted under NIST contract #70NANB14H226.
Current MoveIt!/ROS path planners are focused on collision-free pick-and-place applications. In the typical pick-and-place application, the start and goal positions and collision models are the only inputs to the planner. By contrast, many industrial applications must follow a pre-defined Cartesian path, where the path in between matters as well. Common examples are blending, painting, machining, sanding, sealing, and welding. Unfortunately, solving the Cartesian path planning problem by simply applying an inverse kinematics solution results in an artificially limited solution set that doesn't take advantage of the process flexibility/tolerance allowances. In reality, Cartesian paths are typically semi-constrained. For example, a machining application requires a five degree-of-freedom (DOF) path, where the sixth DOF, the orientation about the tool axis, is not defined (it doesn't matter). Joint trajectory planners that fail to take advantage of these open constraints, such as inverse kinematics (IK) based planners, limit the likelihood of finding a valid solution, even though one could exist in the semi-constrained space. The Descartes planner library was initiated in Summer 2014 with NIST and ROS-Industrial Consortium Americas support to address semi-constrained Cartesian industrial processes. Descartes has already been demonstrated in robotic routing and blending/sanding applications. Key capabilities of Descartes include path optimization, collision avoidance, near-instantaneous re-planning, and a plug-in architecture.
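The gain from leaving the free DOF open can be sketched as a ladder-graph search: sample the unconstrained DOF at each path point, then pick one sample per point so that total joint motion is minimized. This toy uses a single joint value per candidate (the real planner works in full joint space and C++), and shows the dynamic-programming idea rather than the Descartes API:

```python
def plan(candidates_per_point):
    """candidates_per_point: list of lists of candidate joint values,
    one list per Cartesian path point (e.g. IK solutions for sampled
    tool orientations). Dynamic programming over the ladder graph;
    returns the sequence minimizing total joint motion."""
    costs = [0.0] * len(candidates_per_point[0])
    back = []
    for prev, cur in zip(candidates_per_point, candidates_per_point[1:]):
        new_costs, pointers = [], []
        for c in cur:
            # Cheapest way to reach candidate c from any previous candidate.
            best = min(range(len(prev)),
                       key=lambda i: costs[i] + abs(c - prev[i]))
            new_costs.append(costs[best] + abs(c - prev[best]))
            pointers.append(best)
        costs, back = new_costs, back + [pointers]
    # Walk back from the cheapest final candidate.
    idx = min(range(len(costs)), key=costs.__getitem__)
    path = [candidates_per_point[-1][idx]]
    for point, pointers in zip(reversed(candidates_per_point[:-1]),
                               reversed(back)):
        idx = pointers[idx]
        path.append(point[idx])
    return list(reversed(path))
```

An IK-based planner that fixed the free DOF up front would be stuck with one candidate per point; searching over all sampled candidates is what recovers solutions in the semi-constrained space.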
The Descartes library saw its first use in early 2015, and was alpha-released at the ROS-Industrial Community Meeting at ICRA 2015 on May 26, 2015. The focus of recent development has been on making the library more user friendly, better able to capture process requirements, and more computationally efficient. A recent addition with a strong impact on all of these areas is process velocity consideration. Descartes can use this extra knowledge to improve its search for the optimal process path.
At the time of the ROS-Industrial Community meeting in January, a 6-DOF robot following a semi-constrained (5-DOF) Cartesian path of approximately 800 points took 30 seconds to plan. Today that same path can be solved in a fraction of a second. A specific implementation for the robotic blending application has seen speed increases of a factor of 1000 compared to testing in January. Looking toward the future, Descartes will continue to see improvements to its usability and performance. Active areas of research and development include high degree-of-freedom (>7 DOF) planning for both single and dual arm configurations, and hybrid planning, where free-space motions (such as those found in a pick-and-place application) are combined with well-defined process paths.