Unlocking the Power of ROS 2 for Industrial Users: ROS-I October Training Highlights

The ROS-Industrial Consortium Americas held a hybrid online/in-person ROS 2 training October 22-24. The three-day ROS-I developer training class drew trainees from across the ROS-Industrial Consortium membership.

Whether you are just starting out with ROS 2 or already experienced, the training offered something for all levels. Newcomers learned the ropes, getting a solid foundation in the framework’s core concepts, while experienced developers delved into advanced topics such as motion planning, including tuning optimization constraint parameters.

Training is currently based on ROS 2 Humble, delivered through an AWS EC2 instance. On the first day, the attendees were divided into two groups: beginner and advanced ROS developers. The advanced group learned how to set up a basic motion planning pipeline in Tesseract, then refined the pipeline to increase its robustness. By the end of the day, students were adding their own customizations to the pipeline.

The beginner group focused on the fundamentals of ROS 2, including workspace structure and best practices for adding scripts and building the workspace. They also learned about creating packages and nodes, topics (publishers/subscribers), messages, services, actions, launch files, and command-line parameters.
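For readers new to these concepts, the publish/subscribe pattern behind ROS 2 topics can be illustrated without a ROS installation at all. The toy message bus below is a plain-Python sketch of the idea, not actual rclpy code:

```python
from collections import defaultdict

class MiniBus:
    """Toy in-process message bus illustrating the ROS 2 topic pattern:
    publishers send to a named topic; every subscriber callback fires."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        # Analogous to Node.create_subscription() in rclpy
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # Analogous to Publisher.publish() in rclpy
        for cb in self._subs[topic]:
            cb(msg)

bus = MiniBus()
received = []
bus.subscribe("chatter", received.append)
bus.publish("chatter", "hello")
print(received)  # ['hello']
```

In real ROS 2, the middleware additionally handles discovery, serialization, and quality-of-service settings, but the topic abstraction attendees learned maps directly onto this pattern.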

On Day 2, both groups gathered to learn how to develop URDFs/xacros to describe a robot, how to use TF, and how to create a MoveIt package for motion planning with an industrial robot in a simulation environment.
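As a taste of the Day 2 material, a minimal URDF describing a single-joint arm might look like the following (link and joint names here are illustrative, not from the course material):

```xml
<robot name="mini_arm">
  <link name="base_link"/>
  <link name="upper_arm">
    <visual>
      <geometry><cylinder radius="0.05" length="0.4"/></geometry>
    </visual>
  </link>
  <joint name="shoulder" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <origin xyz="0 0 0.1" rpy="0 0 0"/>
    <axis xyz="0 1 0"/>
    <limit lower="-1.57" upper="1.57" effort="10" velocity="1.0"/>
  </joint>
</robot>
```

xacro adds parameters and macros on top of exactly this XML structure, which is why the two are usually taught together.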

Day 3 started with a tour focused on ROS 2 robotic systems within the Robotics Department at SwRI, where the instructors offered insights into the capabilities of ROS 2 through tangible, working industrial examples.

In-person attendees stepping through the Scan-N-Plan workshop demonstration

Wrapping up the training, participants were given the opportunity to share with the instructors and the group how they intended to use ROS in their projects. Where work was already under way, the instructors provided additional assistance to address issues in their application development back home.

Hybrid training always presents challenges in making sure those online get the same attention as those in the room. Even so, it was rewarding to help attendees get deeper into ROS 2, whether they were just starting out, expanding their solution set, or adding features to an ongoing project. As instructors, we look forward to seeing how they apply these new skills, and to interacting with them through the repositories and the various ROS/ROS-I collaborative events.

If you are interested in an upcoming training class, they are regularly posted at https://rosindustrial.org/events-summary.

An Open Framework for Additive Manufacturing

Mainstreaming Robot-Based Additive Manufacturing

Robotic additive manufacturing, sometimes called robot 3D printing, is evolving from research into applied technology as methodologies (gantry systems and robot arms) and source materials (metal powder, wire, polymer, and concrete) mature.

A conventional gantry system that layers material via a single x-y plane tool path is an established 3D printing solution for certain repeatable applications, while robotic arms can offer more complexity by layering material in multiple planes. However, to date, traditional approaches to planning 3D printing trajectories are not optimized to take advantage of high degree-of-freedom (DOF) systems that include industrial manipulators.

Leveraging advances in planning for high-DOF robotic-arm solutions for complex additive manufacturing (AM) entails planning and execution processes for both the hardware and the work environment. The steps of a process often depend on multiple proprietary software tools, machine vision tools, and drivers for motion planning, end effectors, print heads, and the media used in each 3D printing process.

ROS Additive Manufacturing

Over the years, within the ROS-I open source project and the ROS-Industrial Consortium, creating frameworks that enable new application development has become a standard approach to enabling rapid extensibility from an initially developed application. After numerous conversations with end users and other technical contributors, it became clear there was interest in leveraging capabilities within the ROS and ROS-I ecosystem to create a framework that takes advantage of high degree-of-freedom systems and optimization-based motion planning to offer a one-stop shop for additive manufacturing planning and application.

ROS Additive Manufacturing (RAM) aims to pair the flexibility of additive manufacturing with industrial robotic applications. While looking for an open-source ROS package to slice meshes into configurable trajectories for additive manufacturing using a Yaskawa Motoman welding robot, we became aware of the ROS Additive Manufacturing package developed by the Institut Maupertuis in Bruz, France, and this was used as a starting point.

The RAM package was originally built in ROS Melodic, so it was rebuilt in ROS Noetic from source. Building the application from source in Noetic was mostly straightforward: we followed the installation instructions detailed in the Maupertuis Institute's GitLab repository, replacing the terminal commands using pip with pip3 and all commands specifying ROS Melodic with ROS Noetic. When attempting to build the package, there were clashes between Sphinx 1.7.4 and the latest version of Jinja2 (3.1.2 as of June 2022); installing an older version of Jinja2 (2.10.1) allowed the package to build successfully and the software to launch.

The RAM software features an RViz-based GUI that allows the user to select among various trajectory generation algorithms to create a trajectory from a given mesh or YAML file. Printing parameters such as blend radius, print speed, laser power, and material feed rate can be modified for each individual layer of the print. The entrance and exit trajectories can also be given a different print speed, print angle, print length, and approach type. The exported trajectory is output as a ROS bag file. For our experiment, we used a Yaskawa Motoman welding robot and needed to post-process the results to correctly interface with the robot.

Going from Plans to Robot Motion

Motion was achieved by post-processing trajectories with a customized version of the RoboDK post-processor. Welding parameter files were defined on the robot's teach pendant as usual. A "User Frame" (reference system) was defined at the center of the metal plate to match the ROS environment, and the robot's tool was edited to match the orientation used by ROS. This allowed us to generate robot programs without having to configure the ROS environment to match the workcell. Extra lines were added in the post-processor to start/stop welding. The program files were copied via FTP onto the controller and executed natively as linear moves.

This hybrid ROS/robot-controller setup allowed us to stand up the demonstration quickly. The output of the tool is a list of Cartesian poses, which the post-processor converted to linear moves in the robot's native format. The robot did the work of moving in straight lines at a set velocity, so there was no need for additional planning or joint trajectories. The robodk_postprocessors package available on GitHub has not been maintained or updated in some time; numerous bugs exist, and these needed workarounds.
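As a rough sketch of what such a post-processing step does, the function below converts Cartesian poses into simplified linear-move program lines. The MOVL-style syntax is illustrative only and not the verbatim Motoman JBI format, which has additional headers and position registers:

```python
def poses_to_linear_moves(poses, speed_mm_s=10.0):
    """Convert a list of Cartesian poses (x, y, z in mm; rx, ry, rz in deg)
    into simplified linear-move program lines. Illustrative syntax only --
    not a real JBI job."""
    lines = ["NOP"]
    for x, y, z, rx, ry, rz in poses:
        lines.append(
            f"MOVL X={x:.3f} Y={y:.3f} Z={z:.3f} "
            f"RX={rx:.2f} RY={ry:.2f} RZ={rz:.2f} V={speed_mm_s:.1f}"
        )
    lines.append("END")
    return lines

program = poses_to_linear_moves([(0, 0, 2.5, 0, 0, 0), (5, 0, 2.5, 0, 0, 0)])
print(program[1])  # MOVL X=0.000 Y=0.000 Z=2.500 RX=0.00 RY=0.00 RZ=0.00 V=10.0
```

The real post-processor additionally emitted the welding start/stop lines and matched the controller's User Frame and tool conventions described above.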

Existing ROS drivers are focused entirely on joint trajectories. A different approach that would allow streaming of robot programs would be beneficial, and this is part of future work to be proposed.

Below are two screenshots from the software showing trajectories produced for a star and a rectangle with rounded corners. These shapes were included with the software as YAML files. The Contour generation algorithm was used with a 2.5 mm layer height and a 5.0 mm deposited material width for both shapes. The star shown below had three layers and the rounded rectangle had five; all other parameters were left at their default values.
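The layer-stacking idea behind these prints can be sketched in a few lines. This is a deliberately simplified stand-in for the Contour algorithm, which also handles entrance/exit moves and per-layer parameters:

```python
def stack_layers(contour_xy, layer_height_mm, n_layers):
    """Repeat a closed 2D contour at successive z heights: layer k is the
    same x-y path lifted to z = (k + 1) * layer_height. A very simplified
    version of what a contour-style trajectory generator produces."""
    layers = []
    for k in range(n_layers):
        z = (k + 1) * layer_height_mm
        layers.append([(x, y, z) for (x, y) in contour_xy])
    return layers

square = [(0, 0), (40, 0), (40, 20), (0, 20), (0, 0)]  # closed rectangle, mm
layers = stack_layers(square, layer_height_mm=2.5, n_layers=5)
print(len(layers), layers[-1][0])  # 5 (0, 0, 12.5)
```

With the 2.5 mm layer height used above, five layers place the final pass at z = 12.5 mm; the deposited material width enters as a process parameter (wire feed and travel speed) rather than as path geometry.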

Creation of a star shape set of tool paths via the RAM GUI.

Creation of a rounded corner rectangle within the RAM GUI.

As seen in the video below, the GUI interface provided a convenient and intuitive way to modify trajectory and print parameters. Paired with our post-processor, sending completed trajectories to the robot hardware was efficient.

Screen capture of the process within the RAM software. After clicking "generate trajectory", the post-processor saves the output into a JBI file, which is transferred to the robot via FileZilla.

Test samples were made on a test platform provided by Yaskawa Motoman over the summer of 2022. As can be seen in the initial test samples and more complete builds, the application was able to adjust motion profiles and weld settings, including more advanced waveforms such as Miller Electric’s Regulated Metal Deposition (RMD) process.

Figures Above: In-process and completed RAM-generated tool path sets executed on the Yaskawa testbed system.

To streamline the process of building the RAM package from source, the documentation should be updated to detail building in ROS Noetic instead of ROS Melodic. Additionally, the documentation does not show how to interface with robotic hardware or post-process the exported trajectories for it. Although this is beyond the intended scope of the RAM software project, covering it would improve the software's utility for industrial applications. The documentation for the package is currently in French; an English translation would make the adjustable parameters within the software easier for English speakers to understand.

Future work seeks to incorporate the ability to fit the generated tool paths to an arbitrarily contoured surface within the actual environment, much as is done in the various Scan-N-Plan processes for surface processing currently available within the ROS-I ecosystem. This would enable additional intermediate inspection and processing as the build progresses, or updating the build based on perception/machine vision data.

Furthermore, a new driver approach that enables more efficient tool-path-to-trajectory streaming would improve usability and interfacing with the hardware. Implementing algorithms that ensure consistent profile/acceleration control to manage sharp transitions would also be beneficial; these may be implemented through optimization-based planners such as TrajOpt. Porting to ROS 2 would also be in scope.

A ROS-Industrial Consortium Focused Technical Project proposal is in the works that seeks to address these issues and offer a complete open source framework for facilitating flexible additive manufacturing process planning and execution for high degree of freedom systems. Special thanks to Yaskawa Motoman for making available the robotic welding platform, and thanks to Doug Smith at Southwest Research Institute for working out the interaction between the RAM package and the robotic system.

Editor Note: Additional thanks to David Spielman, an intern this summer at Southwest Research Institute. This work would not have been possible without his diving into the prior RAM repository and getting everything ready relative to testing.

Using Tesseract and Trajopt for a Real-Time Application

The past two years have seen enormous development efforts transform the tesseract-robotics and trajopt_ros packages from highly experimental software into hardened, industrial tools. A recent project offered the opportunity to try out some of the latest improvements in the context of a robot that had to avoid a dynamic obstacle in its workspace. This project built upon previous work in real-time trajectory planning by developing higher level control logic for the system as a whole, and a framework for executing the trajectories that are being modified on the fly. Additionally, this project provided the chance to develop and test new functionality at a lower level in the motion planning pipeline.

One of these improvements was the integration of continuous collision checking throughout robot motions, as opposed to only checking for collisions at discrete steps along the motion. This is a critical component to finding robot motions that avoid colliding with other objects in the environment. Previously, a collision could have occurred if the distance between the steps in the motion plan were larger than the obstacles in the environment (pictured below). With the integration of our new collision evaluators, these edge cases can be avoided.
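The failure mode, and why a swept check fixes it, can be shown with a toy 2D example (purely illustrative; Tesseract's continuous collision evaluators operate on full 3D robot geometry):

```python
import math

def seg_point_dist(p0, p1, c):
    """Minimum distance from point c to the segment p0-p1."""
    (x0, y0), (x1, y1), (cx, cy) = p0, p1, c
    dx, dy = x1 - x0, y1 - y0
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((cx - x0) * dx + (cy - y0) * dy) / L2))
    px, py = x0 + t * dx, y0 + t * dy
    return math.hypot(cx - px, cy - py)

# Thin circular obstacle of radius 0.4 at (5, 0); motion from (0,0) to (10,0)
obstacle, radius = (5.0, 0.0), 0.4
samples = [(i * 2.0, 0.0) for i in range(6)]  # discrete steps at x = 0, 2, ..., 10

# Discrete check: the 2.0 step spacing exceeds the obstacle size, so every
# sampled state is collision-free even though the path passes through it.
discrete_hit = any(math.hypot(sx - obstacle[0], sy - obstacle[1]) < radius
                   for sx, sy in samples)

# Continuous check: test the swept segment between consecutive samples.
continuous_hit = any(
    seg_point_dist(samples[i], samples[i + 1], obstacle) < radius
    for i in range(len(samples) - 1)
)
print(discrete_hit, continuous_hit)  # False True
```

The discrete evaluator reports no collision while the swept check catches the pass-through, which is exactly the edge case described above when step spacing exceeds obstacle size.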

The other major piece of research done on the low-level motion planning was benchmarking the speed at which our trajopt_ros solver can iterate, and therefore how quickly it can find a valid solution. We did not impose hard real-time deadlines on the motion planner; instead, we ran it as fast as possible and took results as soon as they were available, which was adequate for our application. Some of our planning speed test results are pictured below.

The final major development to the robot motion pipeline to enable real-time trajectory control was the creation of a framework for changing the trajectory being executed on a physical robot controller. This was a particularly exciting element of the research because it brought our work out of the simulation environment and proved that this workflow can be effectively implemented on a robot. We are excited to apply these tools to more of our projects and continue improving them in that process.

All of our improvements to the Tesseract and Trajopt-ROS packages have been pushed back out to the open source community. Check out the latest versions here and let us know how they are enabling new capabilities for you!

Documentation updates improve ROS utilization and functionality

Lessons from UR driver updates reinforce importance of documentation

A key strength of the open-source community is the capacity to build on the knowledge of other developers who enable future advancements. Documentation plays a critical role in advancing understanding, which improves ROS utilization globally. This will be increasingly important as we move from ROS to ROS2 and document the various steps, including driver updates, necessary to execute projects.

Recent driver updates for a Universal Robots project helped demonstrate the importance of documentation to our team. In July 2019, Universal Robots updated their software for the e-Series and CB-Series to 5.4 and 3.10, respectively. We were using the 5.4 software on UR10e robots for a few different projects, and the ur_modern_driver [1] for ROS Kinetic and Melodic was no longer compatible after this update. I updated the driver by investigating the release notes for 5.4.x.x [2] and the client_interface document [3].

UR10 E-series in the SwRI collaborative lab

To update the ROS driver for compatibility with software updates, I first identified what changes occurred and located appropriate software documentation for the hardware. The software documentation defined modules and variable types for the changes, which allowed for comparison with equivalent variables and modules in the current driver code. Without proper documentation from Universal Robots this would have been a much more difficult endeavor.

The ur_modern_driver specifically interacts with the client interface. Two variables were added to the 5.4 software client interface: a reserved byte in the Masterboard data sub-package of the Robot State Message, for internal UR use, and a safety status value in the real-time interface. The client interface document gave the types and sizes of these variables, along with the variables used in previous software versions.

The client interface for 5.3 and earlier had an internal UR int in Robot Mode Data I could compare against, and the safety status value could be compared to any of the other double variables in the previous driver’s real-time interface. I used the search feature in Qt Creator to find instances of these variables and added equivalent lines for the new ones. Then I used Qt Creator’s debugger to track where new if statements and functions needed to be added so the driver could detect the new software version being used and access the new variables.
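The general pattern involved, version-gated parsing of a binary sub-package, can be sketched as follows. The field layout and sizes here are hypothetical, chosen only to show the branching; they are not UR's actual wire format:

```python
import struct

def parse_masterboard(data: bytes, software_version: float) -> dict:
    """Unpack a toy 'masterboard' sub-package. Software >= 5.4 appends a
    reserved byte, so the parser must branch on the reported version.
    Field layout is illustrative, not the real UR protocol."""
    fields = {}
    (fields["digital_inputs"],
     fields["digital_outputs"]) = struct.unpack_from(">ii", data, 0)
    offset = 8
    if software_version >= 5.4:
        (fields["reserved"],) = struct.unpack_from(">B", data, offset)
        offset += 1
    return fields

pkt_54 = struct.pack(">iiB", 3, 1, 0)
print(parse_masterboard(pkt_54, 5.4))
# {'digital_inputs': 3, 'digital_outputs': 1, 'reserved': 0}
```

The actual driver change followed the same shape: locate the new fields' types and sizes in the client interface document, then add version-conditional reads alongside the existing ones.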

Working on this upgrade reinforced the necessity of good documentation. In general, the ur_modern_driver had more detailed documentation than many other ROS repos; however, it could still be improved. The README had no mention of the purpose of the use_lowbandwidth_trajectory_follower parameter in the launch files or the urXXe_bringup_joint_limited.launch file; both of these are useful when simulations are being overenthusiastic in their trajectory planning. I added documentation to the README to help others use these features to troubleshoot.

To update drivers, you will need to know what has been changed in the software, and you will ideally have access to a previous version of the driver. Because industrial hardware is intended to be reliable and accessible for multiple clients, there is often plenty of useful documentation if you can search with the correct terminology. Using the known changes and the documentation to compare with the previous driver code allowed me to update the driver fairly quickly so projects could move forward.

[1] https://github.com/ros-industrial/ur_modern_driver

[2] https://www.universal-robots.com/how-tos-and-faqs/faq/ur-faq/release-note-software-version-54xx/

[3] https://www.universal-robots.com/how-tos-and-faqs/how-to/ur-how-tos/remote-control-via-tcpip-16496/

Field notes from Automate 2019, and why we’re bullish on ROS2

What makes a good industrial automation demonstration? When we started preparing for Automate 2019 back in January, a few key points came to mind. Our specialty in SwRI’s Manufacturing and Robotics Technology Department is advanced robotic perception and planning, so we decided that the robot should perform an authentic dynamic scan-and-plan process on a previously-unseen scene – as far away as we could get from a “canned” demo. We also wanted the demo to be an interactive experience to help drive discussion with visitors and entertain onlookers. These goals led us to the tube threading concept: a human would bend a piece of shiny metal tubing into a novel shape, and the robot would perceive it and plan a path to sweep a ring along it.

Michael Ripperger & Joseph Schornak on location at Automate 2019

Developing a demo system presents an opportunity to explore new ideas in a low-risk environment because the schedule and deliverables are primarily internally motivated. Since my group had limited previous exposure to ROS2, we decided that our Automate demo should use ROS2 to the greatest possible extent. The original vision was that the system would be entirely composed of ROS2 nodes. However, due to the practical requirements of getting everything working before the ship date, we decided to use a joint ROS/ROS2 environment, with the ROS motion planning and GUI nodes communicating with the ROS2 perception nodes across the ROS-to-ROS2 bridge.

ROS2 Strengths and Challenges

In contrast to virtually every other robotics project I’ve worked on, the demo system’s perception pipeline worked consistently and reliably. Intel maintains a ROS2 driver for Realsense RGB-D cameras, which allowed us to use the D435 camera without any customization or extra development. Our YAK surface reconstruction library based on the Truncated Signed Distance Field algorithm helped us avoid the interreflection issues that would usually plague perception of shiny surfaces. After a couple afternoons spent learning how to use new-to-me VTK libraries, the mesh-to-waypoint postprocessor could consistently convert tube scans into trajectory waypoints. More information about this software is available from the SwRI press release or the writeup in Manufacturing Automation.

Block Diagram of SwRI ROS-I Automate 2019 Demonstration

Motion planning turned out to be a particularly challenging problem. Compared to a traditional robot motion task like pick-and-place, which involves planning unconstrained paths through open space, the kinematic constraints of the tube threading problem are rather bizarre. While the ring tool is axially underconstrained and can be rotated freely to the most convenient orientation, it is critical that it remain aligned with the axis of the tube to avoid collision. It’s impossible to flip the ring once it’s over the tube, so if the chosen ring orientation causes the robot to encounter a joint limit halfway down the tube, tough luck! Additionally, the robot must avoid collision between the tube and robot hardware during motion. Our initial solution used Trajopt by itself, but it would sometimes introduce unallowable joint flips since it tried to optimize every path waypoint at once without a globally-optimal perspective on how best to transition between those waypoints. We added the Descartes sampling algorithm, which addressed these issues by populating Trajopt’s seed trajectory with an approximate globally-optimal path that satisfied these kinematic and collision constraints. Planning still failed occasionally: even with a kinematically-redundant Kuka iiwa7 arm, solving paths for certain tube configurations simply wasn’t feasible[^1].
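The way Descartes-style seeding helps can be sketched with a toy dynamic program: sample the free ring rotation at each waypoint, then pick one sample per waypoint so total motion is minimized. This is a simplification under stated assumptions (scalar angles stand in for full IK solutions; real Descartes builds a ladder graph over joint states):

```python
def descartes_seed(n_waypoints, candidates_per_wp):
    """candidates_per_wp: list (len n_waypoints) of lists of sampled angles.
    Returns the min-total-motion sequence, one candidate per waypoint,
    found by dynamic programming over the waypoint 'ladder'."""
    INF = float("inf")
    cost = [0.0] * len(candidates_per_wp[0])
    back = []
    for i in range(1, n_waypoints):
        prev, cur = candidates_per_wp[i - 1], candidates_per_wp[i]
        new_cost, choices = [], []
        for a in cur:
            best_j, best_c = 0, INF
            for j, b in enumerate(prev):
                c = cost[j] + abs(a - b)  # motion cost between samples
                if c < best_c:
                    best_j, best_c = j, c
            new_cost.append(best_c)
            choices.append(best_j)
        cost, back = new_cost, back + [choices]
    # Trace back the cheapest path through the ladder
    k = min(range(len(cost)), key=cost.__getitem__)
    path = [k]
    for choices in reversed(back):
        k = choices[k]
        path.append(k)
    path.reverse()
    return [candidates_per_wp[i][path[i]] for i in range(n_waypoints)]

cands = [[0.0, 3.0], [0.5, 2.0], [1.0, 9.0]]
print(descartes_seed(3, cands))  # [0.0, 0.5, 1.0]
```

Because the search is global over the whole path, it avoids the locally attractive but globally infeasible choices (such as an orientation that hits a joint limit halfway down the tube) that a waypoint-at-a-time optimizer can fall into; the result then seeds Trajopt for refinement.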

TrajOpt Path Planning Implementation & Testing

[^1]: The extent of solvable tube configurations could be greatly increased by including the turntable as a controllable motion axis. Given the constraints of the iiwa7’s ROS driver, we decided that this would be, in technical software terms, a whole other can of worms.

We shipped the robot hardware about a week in advance of the exhibit setup deadline. Our reliance on ROS meant we could switch to simulation with minimal hassle, but there were some lingering issues with the controller-side software that had to wait until we were reunited with the robot the Saturday before the show[^2]. This contributed to moderate anxiety on Sunday evening as we worked to debug the system using real-world data. We had to cut some fun peripherals due to time constraints, such as the handheld ring wand that would let visitors race the robot. By Tuesday morning the robot was running consistently, provided we didn’t ask it to solve paths for too-complicated tubes. This freed up some time for me to walk the halls away from our booth and talk to other exhibitors and visitors.

[^2]: Our lunch upon arrival was Chicago-style deep dish pizza, which conveniently doubled as dinner that evening.

More Collaborative Robots

There were collaborative robots of all shapes and sizes on display from many manufacturers. I may have seen nearly the same number of collaborative robots as traditional ones! A handful were programmed to interact with visitors, offering lanyards and other branded largesse to passersby. Most of them were doing “normal robot things,” albeit intermingled with crowds of visitors without any cages or barriers, and generally at a much more sedate pace compared to the traditional robots. Some of the non-collaborative robots were demonstrating safety sensors that let them slow down and stop as visitors approached -- I usually discovered these by triggering them accidentally.

I was surprised by the number of autonomous forklifts and pallet transporters. I’m told that there were more in 2019 than at previous shows, so I’m curious about what recent developments drove growth in this space.

I learned that ROS-Industrial has significant brand recognition. I got pulled into several conversations solely because I was wearing a ROS-I polo! Many of these discussions turned to ROS2, which produced some interesting insights. Your average roboticist-on-the-street is aware of ROS2 (no doubt having read about it on this very blog), but their understanding of its capabilities and current condition might be rather fuzzy. Many weren’t sure how to describe the key differences between ROS and ROS2, and a few weren’t even aware that ROS2 has been out in the wild for three versions! I’ll unscientifically hypothesize that a key challenge blocking wider ROS2 adoption is the lack of demonstrated success on high-visibility projects. Our demo drove some good conversation to alleviate these concerns: I could show a publicly-visible robotic system heavily reliant on ROS2 and point to the open-source native ROS2 device drivers that let it function.

Showcasing Perception and Planning Potential

In terms of demo reception, people who visited our booth were impressed that we were scanning and running trajectories on previously-unseen parts. I usually had to provide additional context to show how our perception and planning pipeline could be extended to other kinds of industrial applications. There’s a tricky balance at play here – an overly abstract demo requires some imagination on the part of the viewer to connect it to an industrial use case, but a highly application-specific demo isn’t easily generalized beyond the task at hand. Since our group specializes in application-generic robot perception and planning, I think that a demo tending towards the abstract better showcases our areas of proficiency. This is a drastically different focus from other exhibits at the show, which generally advertised a specific automation process or turnkey product. I feel like we successfully reached our target audience of people with difficult automation tasks not addressed by off-the-shelf solutions.

Development of the Industrial YAK reconstruction for the Automate Demo in ROS2

While it certainly would have been easier to adapt an already-polished system to serve as a show demo, developing a completely new one from scratch was way more fun. Improvements made to our perception and planning software were pushed back upstream and rolled into other ongoing projects. We’re now much more comfortable with ROS2, to the extent that we’ve decided that from here on out new robotics projects will be developed using ROS2. The show was a lot of fun, a great time was had by all, and I hope to see you at Automate 2021!

ROS Industrial Conference #RICEU2018 (Session 4)

From public funding opportunities to the latest technologies in software and system integration, the combination of robotics and IT to hardware and application highlights: ROS-Industrial Conference 2018 offered a varied and top-class programme to more than 150 attendees. For the sixth time already, Fraunhofer IPA organized a ROS event in Stuttgart to present the status of ROS in Europe and to discuss existing challenges.

This is the fourth instalment of a series of four consecutive blog posts, presenting content and discussions according to the sessions:

  1. EU ROS Updates (watch all talks in this YouTube playlist)
  2. Software and system integration (watch all talks in this YouTube playlist)
  3. Robotics meets IT (watch all but one talk in this YouTube playlist)
  4. Hardware and application highlights (watch all but one talk in this YouTube playlist)

Day 3 - Session “Hardware and Application Highlights”

Georg Heppner (FZI) and Fabian Fürst (Opel) at ROS-Industrial Conference 2018

In the fourth and final session of the ROS-Industrial Conference 2018, the focus was on hardware developments and applications implemented in industrial use cases. Fabian Fürst, Opel, and Georg Heppner, FZI, delivered the session keynote. They presented their solution for flexible automotive assembly with industrial robotic co-workers. The application was developed as part of the EU EuRoC project, a four-year competition in which more than 100 participants initially worked on new robotic solutions for the manufacturing industry. Over several evaluation rounds, the team from FZI, Opel, and MRK Systeme GmbH successfully prevailed through to the end.

During the course of the project, the FZI developed an automated robotic assembly process for flexible polymer door seals on car doors. The seal is a closed ring that has to be fixed with up to 40 plastic pins depending on the model, an ergonomically unfavourable task that could not be automated until now. The developed assembly cell is flexible and open, so the robot can be used without a safety fence. For this purpose, an external force control was developed that can be used easily and directly with numerous other robots as a ROS-Industrial package. The CAD-2-PATH software is used for simple path creation for the robot, enabling quick adjustment to other door models without requiring expert knowledge. This is important because there are different door models and sealing types, and the automation solution must be adaptable accordingly and quickly. Notably, the application received a positive safety assessment from Opel, typically a sensitive topic when applying novel tools such as ROS in automotive applications.

Paul Evans (Southwest Research Institute / ROS-Industrial North America) at ROS-Industrial Conference 2018

The presentation by Paul Evans, Southwest Research Institute and ROS-Industrial Consortium North America, provided current information on the activities of the North American Consortium, such as strategic initiatives, trainings, and networking activities. These focus on the voices of members and include activities for strategy alignment and for greater robustness, flexibility, and agility. There are also collaborations with OEMs who support ROS or develop their own drivers. At the ROS-I Consortium Americas Annual Meeting 2018, different applications were presented, for example an order batch picking robot from Bastian Solutions and a robotic system for agile aerospace applications such as sanding, blending, and drilling for the U.S. Air Force. A last highlight that Evans presented was the ROS-I collaboration with BMW and Microsoft: while RIC-North America supported the evaluation of simulation environments that included physics engines, the RIC-EU partners provided additional navigation support and training for mobile robots at the BMW plant to support assembly logistics. The solution is deployed on Microsoft Azure.

Mobile robots were also the topic of the lecture by Karsten Bohlmann, E&K Automation. He presented solutions for ROS on AGVs, perception-driven load handling, and PLC interfaces.

Arun Damodaran (Denso) at ROS-Industrial Conference 2018

Denso Robotics Europe was present at the conference with Arun Damodaran, who talked about Cobotta, the ROS-enabled collaborative robot. This is a six-axis arm with a reach of 342 mm, a repeatability of 0.05 mm, and a payload of 500 g. It has an inherently safe design, meets the requirements of the corresponding ISO safety standards, and is compliant thanks to safety-rated monitored functions. Another advantage is its easy set-up and use, realized through the robot programming software drag&bot. Developed by the Fraunhofer IPA spin-off of the same name, the software enables robots like Cobotta to be programmed on the drag-and-drop principle; no expert knowledge is needed. The software is also based on ROS, works independently of any robot manufacturer, and can be reused as well as shared via the cloud. Denso has been engaged in the development of ROS components and packages (simulation, control, path creation) for its robots since 2012 and now uses an open platform for controlling the Cobotta.

Felipe Garcia Lopez from Fraunhofer IPA focused on a networked navigation solution for mobile robots in industrial applications. This is particularly useful for changing environments in which mobile robots should independently select free routes. Fraunhofer IPA and Bär Automation, for example, have implemented a navigation solution for agile assembly in automobile production. With this, AGVs can locate themselves robustly and precisely based on sensor data, even without special infrastructure. This makes it possible to easily adapt existing paths or integrate new ones even after commissioning. Since the software's sensor fusion module can process data from almost any sensor, very customer-specific solutions can be implemented.

Another example is the networked navigation for smart transport robots at BMW. Here as well, there were few static landmarks, many dynamic obstacles and sparse sensor data in large-scale environments, and a process reliability of more than 99% had to be achieved. The presented navigation as well as the vehicle control are ROS-based. At the end of the presentation, an outlook on cloud navigation was given: mobile robots and stationary sensors are connected via a cloud-based IT infrastructure, the environment is modelled cooperatively and SLAM is used. This also enables "navigation-as-a-service" solutions, meaning map updates and cooperative path planning for each robot. With cloud navigation, local hardware and computational resources can be reduced and the quality and flexibility of the overall navigation system are enhanced.

Thomas Pilz (Pilz GmbH & Co. KG) at ROS-Industrial Conference 2018

ROS as an appropriate solution both inside and outside of industry – this was the starting point for Thomas Pilz, Managing Partner of the family-owned company Pilz. Drawing on his own career and his experience with the first service robots, lightweight robots and robots outside production environments, he first described how the question of safety standards has changed in recent years. The definition and understanding of a robot is currently changing significantly. For Pilz, systems such as the Care-O-bot® from Fraunhofer IPA are the new upcoming robots: they operate outside of cages, are mobile, and users can easily interact with them and program them using ROS. He sees ROS as a success factor for service robots because of its modular design, its standardization, the additional flexibility offered by its programming languages, and its networked, interoperable system in line with Industry 4.0.

Robots that are to interact with humans are also changing the safety technology required at Pilz in the long term, because previous infrastructure such as fences is no longer needed. This led Pilz to develop its own robot arm with appropriate safety technology. Because they were breaking new ground with the development of the robot arm, they used ROS modules developed at Pilz and could thus fall back on a broad programming knowledge base; with the new product they had nothing to lose. However, in order to meet the safety standards, the modules must no longer be changed in an uncontrolled manner. To improve this, Pilz recommends changing the safety standards so that they also accommodate open source. Last but not least, he believes that the term "robot manufacturer" will also change, because this role will increasingly be fulfilled by those who implement the application rather than by those who produce the robot or its components. In the lively discussion after the presentation, Pilz once again emphasized two arguments in favour of ROS. First: when it is said that ROS is tedious, one should bear in mind that the development of proprietary software is also difficult. Second: ROS is tedious, but fun. Pilz also sees ROS as a decisive factor for employee satisfaction and as an argument for staying with Pilz.

At the end of the conference, Gaël Blondel from the Eclipse Foundation presented the Eclipse Foundation and its robotics activities. The platform, with around 280 corporate members, half of them from Europe, provides a mature, scalable, and business-friendly environment for open source software collaboration and innovation. Eclipse is vendor-neutral and offers a business-friendly ecosystem based on extensible platforms. It offers its own IP management and licensing but also accepts other business-friendly licenses. Several working groups are particularly engaged in development processes for robotics. One example of a robotics project managed with Eclipse is the EU project RobMoSys, which aims to coordinate the whole community's best and concerted efforts to realise a step change towards a European ecosystem for open and sustainable industry-grade software development.

At the end of the event, Mirko Bordignon and Thilo Zimmermann thanked the participants for another great and record-breaking ROS-Industrial Conference. Presentations and videos of the event have been made available on the event website: https://rosindustrial.org/events/2018/12/11/ros-industrial-conference-2018

ROS Industrial Conference #RICEU2018 (Session 3)

From public funding opportunities and the latest technologies in software and system integration to the combination of robotics and IT and hardware and application highlights: the ROS-Industrial Conference 2018 offered a varied and top-class programme to more than 150 attendees. For the sixth time already, Fraunhofer IPA organized a ROS event in Stuttgart to present the status of ROS in Europe and to discuss existing challenges.

This is the third instalment of a series of four consecutive blog posts, presenting content and discussions according to the sessions:

  1. EU ROS Updates (watch all talks in this YouTube playlist)
  2. Software and system integration (watch all talks in this YouTube playlist)
  3. Robotics meets IT (watch all but one talk in this YouTube playlist)
  4. Hardware and application highlights

Day 2 - Session “Robotics meets IT“

Henrik Christensen (UC San Diego) at ROS-Industrial Conference 2018

The third session testified to the growing importance of ROS in supporting the development and deployment of robotic solutions by companies outside the traditional boundaries of this industry. Predominantly software players such as Amazon and Google now offer platforms leveraging ROS, which they described during the session.

Henrik Christensen, from UC San Diego and ROBO Global, gave a very inspiring keynote speech on why robotics is increasingly using cloud technologies and how it will benefit from them. He outlined three business drivers for this development: the increasing demand for flexibility in production, the aging world population and the associated increasing demand for service robots at home, and finally the trend that more and more people live in cities, posing great challenges for logistics. All robot solutions must be cost-efficient and robust at the same time in order to offer the required reliability. If all computing power always had to be on board, the hardware would often be inadequate (e.g. for slim service robots for private use) or the costs for suitable hardware would be too high (e.g. for autonomous cars).

Technologies from or in the cloud can be a solution for this. Christensen presented the value of these ecosystems using extensive market examples and explained how they differ in agility and size. Many successful companies, primarily in the USA and Asia, have shifted their business model from owning things or technologies to orchestrating them and offering services. For robotics, ROS 2.0 can be a decisive door opener here, offering the standardization required for platforms.

Milad Geravand (Bosch Engineering) at ROS-Industrial Conference 2018

The next presentations in the session took up these and similar ideas and presented existing solutions. Milad Geravand from Bosch Engineering presented a modular software platform for mobile systems such as cleaning, off-road and intralogistics robots, and how they can be developed more efficiently. In his experience, the difficulties in the development process are similar in many companies: the applications are usually very different, the software is becoming increasingly complex, and a structured deployment and integration process is lacking. ROS is not yet ready for products, and the leap from prototype to series production is still too big. With the presented software platform, which is based on ROS, Bosch would therefore like to address precisely these challenges and enable use cases to be developed quickly and reliably.

Eric Jensen, of Canonical, the company well known for the Ubuntu Linux distribution, presented the advantages of Ubuntu Core, especially with regard to security, which is still an open issue for ROS. The advantages mentioned are: a minimal, transactional Ubuntu for appliances; safe and reliable updates with tests and rollbacks; app containment and isolation with managed access to resources; a development environment familiar to Linux developers; and the possibility to easily create app stores for all devices needed. Furthermore, Ubuntu has one of the biggest developer communities in the world and is backed by Canonical itself, an important plus for security. Last but not least, the system offers automatic security warnings for "snaps", the special package format in Ubuntu, system audits through package verification, and compliance management – all important features for improved security.

Roger Barga (Amazon AWS) at ROS-Industrial Conference 2018

Only a few weeks before the ROS-Industrial Conference, Amazon, which has long been far more than an e-commerce store, had introduced its new platform AWS RoboMaker, which caused a sensation beyond the ROS community. Roger Barga, General Manager at AWS Robotics & Autonomous Services, kindly presented this new development at the conference. Amazon's commitment to robotics is based on discussions with around 100 companies, during which two main problems in robot development were identified: on the one hand, very high demand for automation solutions combined with difficulties in ROS such as security or performance; on the other hand, a development process that is usually very inefficient.

The RoboMaker platform addresses these requirements with its four main components. It offers a browser-based development environment, which in turn has integrated cloud extensions for ROS, as well as a simulation environment. The cloud extensions range from machine learning tools to monitoring and analytics; concrete capabilities for robots include speech recognition and output, video streaming, image and video analysis, and logging and monitoring with Amazon CloudWatch. The simulation environment allows thousands of simulations to be run in parallel. The fourth component is fleet management, so that robot applications can be deployed over the air. The presentation ended with a short introduction to RoboMaker's learning environment, with which Amazon applies reinforcement learning to robots. The robots learn by trial and error: by merging all errors within a fleet in the cloud, a large knowledge base quickly becomes available, so not every robot has to make a specific error itself in order to learn from it, but instead benefits from the learning experiences of the other robots in the fleet.

Robotics in the cloud was also the focus of the lecture by Christian Henkel from Fraunhofer IPA. In his experience, the deployment of ROS-based applications on distributed systems such as mobile robots is still too great a challenge, which he addresses in his work with Docker containers (dockeROS). His solution makes it possible to simply run ROS nodes in Docker containers on remote robots.
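The core idea can be sketched as assembling a `docker run` invocation that wraps a single ROS node in its own container. The function below is only an illustration of that pattern, not dockeROS's actual API; the image, package, and node names are placeholders.

```python
def dockerized_node_command(image, package, node_type, master_uri):
    """Assemble a `docker run` command that starts one ROS node in its
    own container, sharing the host network so it can reach the ROS
    master. Illustrative sketch, not the real dockeROS interface."""
    return (
        "docker run -d --network=host "      # detach; share host network
        f"-e ROS_MASTER_URI={master_uri} "   # point the node at the master
        f"{image} rosrun {package} {node_type}"
    )

cmd = dockerized_node_command(
    "ros:melodic-ros-core", "rospy_tutorials", "talker",
    "http://10.0.0.1:11311",
)
print(cmd)
```

Running such a command on each remote robot gives every node an isolated, reproducible runtime environment while ROS traffic still flows over the host network.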

Martin Hägele (Fraunhofer IPA) moderates a panel discussion with Henrik Christensen (UC San Diego), Oliver Goetz (SAP), Michael Grupp (magazino), Niels Jul Jacobsen (MiR) and Damon Kohler (Google).

With Damon Kohler, Google Robotics and its recently presented cloud solution were also represented at the conference. In his introductory remarks, Kohler mentioned several challenges related to cloud robotics, including security, connectivity and latency, and distributing work, e.g. partitioning problems. Against these he set advantages such as scalability, collaborative perception and behaviour, and robust change management and monitoring. He sees cloud robotics as an extension of the well-known "sense -> plan -> act" principle to "sense -> share -> plan -> act", and as an interplay of edge and cloud processing.

The aims of cloud robotics are an increased launch cadence, more data and more users, and better resource utilization. This is to be achieved through infrastructure as a service, a design based on small and decoupled components, and tools for automation and orchestration. ROS nodes correspond to Google's micro-services: they are stateless and replicable, and therefore horizontally scalable. The container orchestration engine Kubernetes helps to deploy and release these micro-services, while mature and robust logging and monitoring tools like Stackdriver help manage the system. At the heart of it all is the Cloud Robotics Core, available from the beginning of 2019, which enables Kubernetes to be integrated on robots. Overall, Google's vision is an open platform and a thriving ecosystem where integrators, developers, hardware developers and operators can collaborate with customers efficiently.
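Treating a ROS node as a stateless micro-service means it can be described by a standard Kubernetes Deployment and scaled horizontally by changing a single replica count. The sketch below builds such a manifest as a plain Python dict; the node and image names are illustrative, and this is a generic Kubernetes pattern rather than Google's actual Cloud Robotics API.

```python
def ros_node_deployment(name, image, replicas=3):
    """Build a minimal Kubernetes Deployment manifest (apps/v1) for a
    stateless ROS node, suitable for submission via kubectl or a
    Kubernetes client library."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,                    # horizontal scaling knob
            "selector": {"matchLabels": labels},     # must match pod labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = ros_node_deployment("perception-node", "example.io/perception:1.0")
print(manifest["spec"]["replicas"])  # 3
```

Because each replica is stateless, Kubernetes can restart or reschedule it freely, which is exactly the property that makes the micro-service analogy work for ROS nodes.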

The second day of the conference ended with a panel discussion. The panellists were Henrik Christensen (UC San Diego), Oliver Goetz (SAP), Michael Grupp (magazino), Niels Jul Jacobsen (MiR) and Damon Kohler (Google). Moderated by Martin Hägele (Fraunhofer IPA), they summed up some advantages from their respective company perspectives, but also existing challenges of ROS and the role of open source software and robotics for their corporate strategy.

ROS Industrial Conference #RICEU2018 (Session 2)

From public funding opportunities and the latest technologies in software and system integration to the combination of robotics and IT and hardware and application highlights: the ROS-Industrial Conference 2018 offered a varied and top-class programme to more than 150 attendees. For the sixth time already, Fraunhofer IPA organized a ROS event in Stuttgart to present the status of ROS in Europe and to discuss existing challenges.

This is the second instalment of a series of four consecutive blog posts, presenting content and discussions according to the sessions:

  1. EU ROS Updates (watch all talks in this YouTube playlist)
  2. Software and system integration (watch all talks in this YouTube playlist)
  3. Robotics meets IT
  4. Hardware and application highlights

Day 2 - Session “Software and System Integration Topics“

Dave Coleman (PickNik) at ROS-Industrial Conference 2018

The second day of the conference started with the session "Software and System Integration Topics". Dave Coleman, founder of PickNik Consulting and lead maintainer of MoveIt!, opened the session with a very personal keynote about his commitment to open source software, from his student days to his role as an entrepreneur. He reported how he got in touch with the beginnings of ROS at Willow Garage and highlighted the unique spirit with which the project was incubated. He introduced the successful MoveIt! library and shared his lessons learned and the challenges that many open source projects face. As proof that open source and business can successfully coexist, he described the founding of PickNik and how the company is profitable without investors.

The following presentations were more technical, starting with Víctor Mayoral Vilches, CEO of Acutronic Robotics. He talked about his company's solutions for system integration in modular systems, using the H-ROS SoM (System on Module) device as an example. In his opinion, ROS already addresses many programming needs, but system integration goes far beyond programming and requires extensive resources for each new project; he therefore sees modularity as an essential improvement. Combining a real-time-capable link layer built from an RTOS and the Linux network stack with ROS 2.0, he presented the challenges and the solutions developed to achieve easier system integration. He also gave insights into the use of AI to further reduce programming efforts by training the robot instead, a technology that is still in its infancy. As part of a Focused Technical Project with ROSIN, the company also worked on the interoperability of modules.

Jon Tjerngren (ABB) at ROS-Industrial Conference 2018

Jon Tjerngren presented how ABB robots can be used with ROS. For this purpose, the company developed various ease-of-use ROS packages that simplify and accelerate the setup of ABB robots, all of which are already freely available online: abb_librws can be used to off-load computationally heavy tasks, e.g. image processing, while abb_libegm can be used for motion corrections, and a StateMachine add-in enables remote control.

ROS2 embedded on real-time operating systems was the topic of the presentation by Ingo Lütkebohle from Bosch Corporate Research. He emphasized the importance of integrating ROS into firmware as well, which would better address four challenges: hardware access, latency, power savings, and safety. To this end, he presented a solution developed in the OFERA project that allows ROS2 to be used on microcontrollers.

André Santos from INESC TEC and the University of Minho focused on software quality. More and more robot systems are safety-critical, which places very high demands on the quality of the software, and finding errors in the code early on reduces costs and development time. Although there are various static analysis tools, none offers ROS-specific analysis. This is why the HAROS (High Assurance ROS) framework was developed, which is capable of extracting and, to some extent, reverse-engineering the computation graph. It also provides a visualization of the extracted graph and enables property-based testing for ROS.
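Property-based testing checks that an invariant holds for many randomly generated inputs rather than for a few hand-picked cases. The sketch below illustrates the idea in plain Python with the standard `random` module; it is a generic illustration, not HAROS's actual test interface, and the velocity-limiting function being tested is a hypothetical example of the kind of safety filter a ROS node might implement.

```python
import random

VMAX = 1.0  # hypothetical velocity limit (m/s)

def clamp_velocity(v, vmax=VMAX):
    """Saturate a commanded velocity, as a safety-filter node might."""
    return max(-vmax, min(vmax, v))

def check_property(prop, gen, trials=1000, seed=42):
    """Run a property against many random inputs; return the first
    counterexample found, or None if the property held every time."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x
    return None

# Property: the filtered command always stays within [-VMAX, VMAX].
counterexample = check_property(
    prop=lambda v: abs(clamp_velocity(v)) <= VMAX,
    gen=lambda rng: rng.uniform(-100.0, 100.0),
)
print(counterexample)  # None: the invariant held for all sampled inputs
```

The value of this style for safety-critical robot software is that the generator explores a far wider input space than hand-written unit tests, while any failure comes with a concrete counterexample to debug.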

Anders Billesø Beck (UR) at ROS-Industrial Conference 2018

Anders Billesø Beck from Universal Robots was the last speaker in the second session. He introduced the new UR e-Series (with an integrated force/torque sensor, a 500 Hz controller frequency and other new features) and how ROS supports it. For this, a new driver is being developed together with FZI in a ROSIN Focused Technical Project, and it will remain open source. The goal is to make a UR robot easy to use and enable plug-and-play with ROS. The driver is to support two modes of operation: remote control and ROS URCap embedding. Further supported features are calibration, a new safety system and easier programming. Beck concluded the presentation with some points that he believes need improvement to make ROS ready for industrial applications: easier general use, proper handling of hard and soft real-time boundaries, and support for more control in edge devices.

New Release of ROS Qt Creator 4.8 RC on Xenial and Bionic

We are pleased to announce the release of the ROS Qt Creator Plug-in for Qt Creator 4.8 RC on Xenial and Bionic. The ROS Qt Creator Plug-in creates a centralized location for ROS tools to increase efficiency and simplify tasks.

Picture obtained from Qt Blog

Highlights:

  • Qt Creator 4.8 introduces several new rich features and improvements to existing capabilities.
    • Generic Programming Language Support (Python Support!)
      • To use this feature, you must enable the Language Support Plugin
    • C++
      • Compilation Database Projects!
      • Clang Format Based Indentation!
      • Cppcheck Diagnostics!
      • Simultaneous debugging of multiple executables!
  • ROS Plug-in introducing a few new features and bug fixes
    • Upgraded to Qt Creator 4.8
    • Added catkin_test_results run step
    • Added ROS Settings Page to configure default settings
    • Bug Fixes
      • Issue #284 Package Wizard caused Qt Creator to crash when used while no ROS project was loaded.
      • Issue #289 Clicking Help caused Qt Creator to crash

A ROS-Industrial Collaboration with Microsoft and BMW

ROS-Industrial recently had the opportunity to collaborate with Microsoft, BMW and Open Robotics on an automation solution that was featured in Season 3 of the Decoded show on YouTube. This enabled the ROS-I Consortium to make sustainable gains toward the team's vision of greater efficiency and visibility with respect to logistics and material management challenges in assembly plants.

BMW has set forth a vision where they break down the barriers between the historical automation paradigms and the challenges with interacting with the largely manual operations of their ever-increasing high-mix assembly operations. Historically, materials are delivered to the line for human operators to consume and exact quantities and status are lost at that point of consumption. The idea is to leverage intelligent autonomous operation to give better visibility to what is where, while leveraging cloud technologies to create a tighter loop and connection to the order delivery systems. The goal is to enable a leaner operational buffer within the workflow, reducing carried inventory and driving greater efficiency.

This is where Microsoft came into the picture, with their Azure solutions and client support team to do rapid development sprints to enable tighter coupling between their SAP work order environment to the “to-be deployed” autonomous robotic fleet.

BMW has built a home-grown Smart Transport Robot that runs on the same battery used in the i3 model car they produce. However, these assets needed to be coordinated over the long term, with a richer simulation environment as the platform's capability increased. This is where the Microsoft team came in to deliver.

Originally, the Southwest Research Institute (SwRI) ROS-I team's support was around navigation and the evaluation of Gazebo in manufacturing environments supporting many mobile robots. It became clear as the Microsoft team got to work that Gazebo would not support spinning up multiple mobile robots in one instance; in fact, due to how Gazebo is structured, even a handful of robots brought Gazebo processing to a crawl. This led to the implementation of ARGoS, an open-source simulator that can also include physics at the scale BMW was interested in. A container strategy was developed and, in the end, the ROS-I team learned quite a bit from the Microsoft team through the week-long development hack on SwRI's campus. This development week really furthered the understanding of richer simulation capabilities, including the physics, which supports the ROS-I vision of tighter process performance and management of non-rigid bodies within the planning cycle.

As things got going in Germany, as the Decoded episode shows, the Microsoft team worked closely with BMW's ROS and Manufacturing Execution System (MES) developers, tightly coupling the SAP functionality with the fleet management and navigation tuning functionality that was required alongside the ARGoS implementation. In the end, this led to a functional, if not sustainable, solution for the BMW team as they continued to refine the performance of the specific ROS-based robot, leveraging the Azure environment to link SAP to simulation to robot action within the Azure platform.

We hope those who watch this Decoded episode agree that it demonstrates how collaborations between for-profit entities such as Microsoft and nonprofits such as SwRI, Open Robotics and Fraunhofer IPA, along with open-source projects such as ROS-I, can enable end users such as BMW to create their own sustainable, high-performing solutions. We believe the open-source contributions will enable others to leverage the development and hopefully expand the capability. This idea of a pre-competitive foundation that enables interoperability and flexibility without generating silos is key if we are to move the ball forward on operational efficiency gains at the scale we need.

As we have seen in recent months, robotics development and IT are entering a new phase, where more teams and individuals can grasp their own destiny and ROS-Industrial is excited to be a part of that journey!

Thanks to Microsoft and BMW as well as the Decoded team for making this project possible.

ROS-Industrial Americas 2018 Annual Meeting Review

The ROS-Industrial Consortium Americas (RICA) held its 2018 Annual Meeting in San Antonio, on the campus of Southwest Research Institute (SwRI), on March 7th and 8th, 2018. This was a two-day event: the 7th was open to the public, including tours and demonstrations, followed by a Consortium members' meeting on the 8th with a road-mapping exercise and project idea brainstorming.

This was the first time that RICA held the event over two full days. It was also the best-attended event to date, topping out at over 80 people on the 7th. The talks spanned from the strategic and visionary to the technical with regard to open-source robotics application development, providing an excellent cross-section of the technical development community and organizational decision makers to share ideas and cross-pollinate, taking what they learned back to their organizations.

The morning of the 7th featured:

  • SwRI Introduction - Paul Evans - SwRI
  • ROS-I Consortium/Introduction - Matt Robinson - SwRI
  • Manufacturing in Mixed Reality - Dr. Aditya Das - UTARI
  • Discussion on the Design of a Multiuse Workcell and Incorporation of the Descartes Package - Christina Petlowany - UT Austin Nuclear Robotics Group
  • Integrating ROS into NASA Space Exploration Missions - Dustin Gooding - NASA

The talks touched on a mix of how humans can interact with the technological solutions and also the need for solutions that can work within environments originally designed for people. The common thread is enabling humans and robots to work more efficiently within the same spaces, and leveraging the same tools.

Rick Meyers of the ARM Institute & Air Force Research Laboratory, during the lunchtime keynote, discussed the vision and motivations of Air Force ManTech to drive advancements in automation and robotics in the manufacturing environment. This tied into the motivation of the Advanced Automation for Agile Aerospace Applications (A5) program, and how ROS ties into the realization of the Air Force ManTech vision.

The tours and demonstrations included many different applications, all with a ROS/ROS-Industrial element, though in some cases complementary. ADLINK Neuron focused on coordinated mobile robots and on means to assist their industrial partners in transitioning easily to the ROS2 environment, providing consulting services for DDS implementation and ROS-related algorithm development.

KEBA demonstrated their new ROS RMI interface integrated into their controller, while UTARI demonstrated Manufacturing in Mixed Reality implemented through the Microsoft HoloLens, allowing users to fuse process guidelines, real-time inspection data, and cross reference information to determine adaptive measures and project outcomes.

SwRI and the ROS-I team demonstrated an example of merging SwRI's Human Performance Initiative's Markerless Motion Capture with path planning to retrieve an object from an open grasp. SwRI's Applied Sensing Department showcased their Class 8 truck, enabling all attendees to go for a ride while gaining insights into the vehicle's capabilities. The ROS-I team at SwRI also presented Robotic Blending Milestone 4, Intelligent Part Reconstruction with TSDF implementation, and TrajOpt, a sequential convex optimizer newly fully integrated into ROS. The UT Austin Nuclear Robotics Group demonstrated improved situational awareness for mobile manipulation on their Husky platform, where users could "drive" the system to pick up a presented object.

Finally, the SwRI team presented and demonstrated the A5 platform, which is a mobile manipulation platform designed to perform numerous processes on large aircraft in an unstructured setting. The process demonstrated was sanding of a test panel overhead. Overviews of the localization and planning on the visualization were included.

Talks for the afternoon centered around OEM and Integration service providers, and included:

  • ADLINK Neuron: An industrial oriented ROS2-based platform - Hao-Chih Lin - ADLINK
  • Unique ROS Combination with Safety and PLC - Thomas Linde - KEBA
  • Leveraging ROS-Industrial to Deliver Customer Value - Joe Zoghzoghy - Bastian Solutions

This set of talks brought home innovations by the OEM and service provider communities. Bastian Solutions' story, from concept via working with the ROS-Industrial team, through pilot and into production, demonstrated a real value proposition for mobile, and more broadly ROS-enabled, development for the integrator community.

The morning of the 8th featured:

  • RIC-Americas Highlights and Upcoming Events - Matt Robinson & Levi Armstrong - SwRI
  • RIC-Europe Highlights & ROSIN Update - Mirko Bordignon - Fraunhofer IPA
  • ROS-Industrial Lessons from Bootstrapping in Asia Pacific - Min Ling Chan - ARTC
  • ROS2 is Here - Dirk Thomas - Open Robotics
  • ARM Institute Introduction & Update - Bob Grabowski - ARM Institute
  • Windows IoT & Robotics - Lou Amadio - Microsoft

Matt Robinson covered strategic initiatives for the Consortium, followed by Levi Armstrong covering RICA technical developments, including TrajOpt, Intelligent Part Reconstruction, Noether, the PCL Afront mesher, and Qt Creator updates and the upcoming release.

Mirko Bordignon highlighted for the Americas audience what is happening around the ROSIN initiative, driving awareness and furthering the global nature of ROS-I. Min Ling Chan shared progress within the Asia-Pacific region and the progress and status of the PackML Focused Technical Project, which has a Phase 2 launch coming soon.

Dirk Thomas of Open Robotics presented the latest on ROS2, and for the first time we were happy to welcome Bob Grabowski of the ARM Institute. The ARM Institute is the newest DoD Manufacturing Innovation Institute, and this is the first Annual Meeting since the Institute’s launch. Synergies between the ARM Institute and ROS-I will be important to monitor moving forward.

The morning session concluded with the Windows IoT and Azure teams, represented by Lou Amadio and Ryan Pedersen respectively, presenting their current strategy for ROS support and their plans moving forward, particularly for ROS2.

The featured keynote was presented by Dr. Phil Freeman of Boeing, "Why Boeing is Using ROS-Industrial." Phil offered great insights into the value of ROS-Industrial for Boeing and what it has enabled for their operations in the context of the challenges Boeing faces. The talk featured example applications and conveyed the message that within the robotics space we truly are at a tipping point with regard to capability and accessibility.

A road-mapping session was then conducted, focusing on problems to solve. The idea is to tie problems to projects and then identify the capabilities that need to be developed to meet certain prioritized problems. The problem focus areas were Human Capability, Quality Processes and Execution, Flexibility/Agility, and Strategy/Alignment. Common themes were: standard interfaces, documentation, ROS2 for Industrial applications, ownership and community engagement, simpler recovery means, and real-time diagnostics.

The afternoon speaker session touched on technologies that seek to enable richer and more reliable networking and data sharing/management through the application development/implementation process, and across the value stream.

Now that the dust has settled, these are some observations from this seat:

  1. ROS-Industrial is a big tent, and it is truly global. Each Consortium needs to optimize how it works within its region to meet its members’ needs and best leverage the resources available to it.
  2. As regional resources are optimized, the consortia need to monitor developments, share information, and ensure that everyone within the broader ROS-I organization is aware of what is in flight and which development activities are happening where, to reduce or eliminate redundant efforts.
  3. ROS2 is here, but there is work to do. It will be important to monitor developments and foster awareness to enable developers, solution providers, and end users to leverage ROS2 capability to complement their end solutions when and where appropriate.
  4. There are a number of innovators, solution providers, and end users realizing the value proposition of ROS/ROS-Industrial deployments TODAY, and in some cases they have been for some time. Let’s socialize and share their success stories.
  5. Foster both membership engagement and community engagement in the vision for ROS-Industrial and its execution. We are excited both to enable start-ups to engage and to improve how we leverage our University partners. Effective projects, sponsorships, and roles within the ROS-I organizational structure all help foster a sense of community and subsequent ownership.
  6. There is an inflection point, and for advanced robotics this seems to be the appropriate time. The idea that ROS can span beyond just the robotic process, enabling more intelligent processing by leveraging IoT and other advanced technologies for further end-user value, also seems to be gaining steam.
  7. We advance ROS-Industrial together. Engage, participate, communicate, and we succeed together.

As always, we look forward to feedback on the event and on how to improve this event and events moving forward. We also look forward to bringing back the online quarterly membership meetings, so keep an eye out; coordination and invites are hosted on a rotational basis by the three Consortium managers. ROS-Industrial is an open-source project, and with that we seek to be open and to be a forum for sharing ideas and solving problems for industry in the 21st century.

Public day presentations can be found on the Event Page within the agenda after each speaker line item. Member day presentations are included behind the member portal, and are available for download.

Thanks for your support of open-source automation for industry!

Part 1, Updates and New Strategic Initiatives for the ROS-Industrial Consortium Americas

Recently, ROS-Industrial Consortium Americas leadership, in review and consultation with the global ROS-Industrial leadership, presented to the Americas Consortium Advisory Council a number of proposed changes to the membership agreement. This post summarizes the most meaningful changes and initiatives.

ROS-Industrial Migration to Discourse

Today, February 14th, we notified the ROS-I users Google Group about an upcoming transition to Discourse on March 1. The letter provided to the Google Group members is included below. We are excited to be part of the ecosystem over at Discourse and hope that it drives improved collaboration, synergy, and interaction with the broader ROS Community.

We look forward to this transition, but of course with any change, there can be problems. Please feel free to comment below, or reach out directly if you have questions and/or concerns.


“In recent years, ROS-related discussions, Q&A, and collaboration have been migrating to ROS Discourse (discourse.ros.org). At ROS-Industrial we see this year as the time to move over to Discourse as well and retire the ROS-I Google Group, swri-ros-pkg-dev. This obviously does not come without some consideration and a migration plan. The target date for the transition is March 1. The content currently within the Google Group forum will be kept available for reference, as read-only, and inquiries to swri-ros-pkg will be met with an automatic reply directing them to the ROS-Industrial Discourse category.

For users, the move to Discourse should be quite convenient and efficient. Accounts from GitHub or Google may be used, so no new accounts will be needed in those cases.

We hope that this change is welcomed, as it drives synergy with the broader ROS community and allows for a true “one stop” for discussion and collaboration on all things ROS. To start there will be a ‘ROS-Industrial’ category, with subcategories developed when traffic merits their creation.

We would like to thank our friends over at Open Robotics for helping us out with this change.”

Intelligent Part Reconstruction

It has long been a challenge in industry to use imaging or other non-contact sensors to generate reconstructions of highly specular or featureless surfaces. Shiny parts, dark surfaces, occlusion, and limited resolution all corrupt single-shot scanning for first-look robotic imaging or scanning systems. A whole new class of applications could be addressed if there were an efficient way to reconstruct these surfaces and enable reliable trajectories for subsequent processing.

In the context of autonomous processing of parts, a mesh is the “stitching” together of the points generated by a 3D depth camera, which form a “point cloud.” Algorithms are then applied to derive surfaces from the point cloud, extract edges, and even detect “engineered features” such as drilled holes. The process deteriorates when too few points are returned to the sensor (i.e., sparse data). Smooth surfaces also make it difficult to “stitch” images together or organize points in a way that enables mesh creation. In the example below, there is insufficient data to create the mesh over the full scanned surface. There are techniques to mitigate this phenomenon, such as coating surfaces with a flat finish, but these can be cumbersome, costly, and inefficient.

[Figure: sample part with a highly specular surface]
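To make the “sparse data” failure mode concrete, the sketch below flags under-sampled regions of a point cloud by counting neighbors within a radius. This is purely illustrative, not part of the actual reconstruction pipeline; the function name and thresholds are hypothetical.

```python
import numpy as np

def sparse_points(cloud, radius=0.01, min_neighbors=5):
    """Boolean mask of points whose neighborhood within `radius` (meters)
    holds too few returns to support reliable meshing."""
    cloud = np.asarray(cloud, dtype=float)
    # Brute-force pairwise distances; fine for illustration, use a KD-tree at scale.
    dist = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=-1)
    neighbor_counts = (dist <= radius).sum(axis=1) - 1  # exclude the point itself
    return neighbor_counts < min_neighbors
```

Regions where the mask is true correspond to the “holes” that appear in the mesh; mitigations like flat coatings, or the TSDF fusion described below, aim to fill exactly these areas.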

In recent years, academic research in the field of on-line surface reconstruction has built on the Truncated Signed Distance Field (TSDF). The Kinect Fusion TSDF technique pioneered by Microsoft Research involves probabilistically fusing many organized depth images from 3D cameras into a voxelized distance field to estimate an average, implicit surface. The scanner is manipulated by hand, and each image’s pose is registered relative to the previous images by way of the Iterative Closest Point (ICP) algorithm. While this technique shows promise in fusing partial observations of difficult-to-scan objects, it suffers from the practical constraint that it must scan very quickly to accurately estimate scanner motion, and the surface being scanned must have sufficient features to enable tracking.
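The core of the TSDF fusion step can be sketched as a weighted running average per voxel. The snippet below is a deliberately simplified, hypothetical illustration (one signed-distance observation per voxel per frame, fixed truncation distance), not the Kinect Fusion implementation itself:

```python
import numpy as np

def tsdf_update(tsdf, weights, distances, truncation=0.05, max_weight=50.0):
    """Fuse one frame of signed-distance observations into the voxel grid.

    tsdf, weights : current per-voxel TSDF values and confidence weights
    distances     : this frame's signed distance (meters) from each voxel to
                    the observed surface along the camera ray (same shape)
    """
    # Truncate: only voxels within `truncation` of the surface are informative.
    d = np.clip(distances / truncation, -1.0, 1.0)
    # Voxels far behind the observed surface are occluded; skip them.
    w_new = np.where(distances > -truncation, 1.0, 0.0)
    # Weighted running average: this is the per-voxel "probabilistic fusion".
    fused = (weights * tsdf + w_new * d) / np.maximum(weights + w_new, 1e-9)
    new_weights = np.minimum(weights + w_new, max_weight)  # cap for robustness
    return fused, new_weights
```

The running average is what fuses many noisy frames into a stable implicit surface; a marching-cubes pass over the zero crossing of the fused field then yields the mesh.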

The TSDF-based reconstruction process only produces good results if the sensor gets good views of as much of the surface as possible. This is a fairly intuitive task for a human, since we can look at the partially-reconstructed surface, recognize which areas are incomplete, and move the camera to compensate.

It’s much more difficult for a robot to make these decisions. One way to approach this problem is to track which areas around the surface have and haven’t been seen by the camera. The robot can take an initial measurement, see which areas haven’t been viewed, and pick a new view that looks at these unknown regions. This lets the robot discover that it doesn’t have information about the back side of a wall and decide that it needs to move the camera to the opposite side of the work area to look at the obscured surface.

In this implementation, candidate views around the volume are randomly generated within a range of angles and distances. Rays corresponding to the camera’s field of view are cast from each pose, and the system counts how many of these rays hit unknown voxels. The next best view is the one that hits the most unknowns, and the robot moves to this view to explore more of the part.

[Figure: next-best-view (NBV) candidate selection]
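A minimal sketch of the next-best-view scoring described above, over a voxel grid labeled unknown/free/occupied. All names, the ray-marching step size, and the grid representation are assumptions for illustration, not the project's code:

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def score_view(grid, origin, directions, step=1.0, max_range=20.0):
    """Count the distinct unknown voxels touched by this view's rays."""
    hits = set()
    for d in directions:
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        t = step
        while t <= max_range:
            v = tuple(np.floor(origin + t * d).astype(int))
            if any(i < 0 or i >= s for i, s in zip(v, grid.shape)):
                break                      # ray left the work volume
            if grid[v] == OCCUPIED:
                break                      # blocked by a known surface
            if grid[v] == UNKNOWN:
                hits.add(v)
            t += step
    return len(hits)

def next_best_view(grid, candidates, directions):
    """Pick the candidate camera position whose rays see the most unknowns."""
    scores = [score_view(grid, np.asarray(c, dtype=float), directions)
              for c in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]
```

Stopping a ray at the first occupied voxel is what lets the planner discover that the back side of a wall is hidden and choose a view from the opposite side of the work area.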

The results have been very promising. The combination of TSDF + Next Best View (NBV) in this work has resolved a number of the issues encountered in a prior Robotic Blending Focused Technical Project (FTP). The first of two primary metrics was mesh completeness: a complete part mesh was created where insufficient returns had previously left “holes” in the data. A before-and-after example can be seen below.

[Figure: aluminum bracket mesh, before and after]

The second metric was the ability to generate trajectories within the compliance of the tool used in the robotic blending work, in this case approximately 2 cm. You can see in the video on this aluminum sample that the tool follows the arc and does not bottom out or lift off of the part. While somewhat qualitative, operating within this compliance range was impossible before the development of this TSDF + NBV implementation.

Future work seeks to refine this tool set into a more cohesive set of packages that can then be contributed to the ROS-Industrial community. In the meantime, further testing to understand the limitations of the current implementation, and subsequent performance improvements, are slated in conjunction with other process development initiatives.

Check back here for more information and/or updates, or feel free to inquire directly about this capability: matt.robinson <at> swri.org.

Through 2018 and into 2019 additional developments have taken place, and we look forward to providing an open-source implementation over at github.com/ros-industrial-consortium. See below for some updates on demonstrations and outputs.

An intro to how Intelligent Part Reconstruction, a TSDF-based approach, allows for the creation of improved meshes to facilitate planning over large featureless or highly specular surfaces. https://rosindustrial.org/news/2018/1/3/intelligent-part-reconstruction
Improved dynamic reconstruction on polished stainless steel conduit running at the frame rate of the sensor. This appears in the demonstration within the SwRI booth at Automate 2019.

Robotic Blending Milestone 4 Technology Demonstration at Wolf Robotics

The Robotic Blending project is the first open-source instantiation of what will become a general Scan-N-Plan™ framework (Figure 1). The project has been making steady progress over the past two and a half years.

Figure 1. Execution of surface blending of a complex contour part on Wolf Robotics Demonstration Hardware in Fort Collins, CO.

Starting in earnest at the beginning of 2017, Milestone 4 (M4) sought to extend the technology with functionality of interest to the participating members. These members (3M, Caterpillar, GKN Aerospace, and Wolf Robotics) and the SwRI development team set forth to realize a set of objectives:

  • Closed-loop inspection and retouch: Integrating the process planning and quality assurance steps so that parts are finished with a closed, sensor-driven loop.
  • More Robust Surface Segmentation: Improving the surface segmentation and planning algorithms to accommodate more complex surfaces found on real parts (continuous surfaces with a radius of curvature above a 50 mm threshold, as seen in Figure 1 above).
  • Blending Process Refinement: Improving the quality of the blending process to produce surface finishes that meet engineering requirements.
  • Edge Processing: Processing/chamfering simple 2.5D edges that occur where two surfaces meet.
  • Technology Transfer: Meetings, demonstrations, and sponsor-site visits to support knowledge sharing among project participants and performers.
  • Integration and Testing: Demonstration support.

The intent of the demonstration was to review the capability as developed relative to the processing of provided Caterpillar production parts. Performance was tracked against provided success criteria tied to performance metrics relevant to the target application.

All presented parts were able to be perceived and meshed, and discrete parts could be selected for processing. There were difficulties with GUI interaction relative to selection, but these were considered minor.

Paths were generated for every part presented, including blending surface paths as well as edge paths. Every generated path was simulated without issue.

Execution of the blending paths was performed on 100% of the presented parts, and on a subset of parts for edge processing. Challenges were observed due to the scale of the tools and media relative to the edges, making it difficult to execute the paths without collision or loss of contact with the part. This points to a need for finer calibration techniques for these particular hardware configurations.

Quality assurance (QA) paths were generated and simulated in all cases. False positives were prevalent and related to scatter/reflectivity, particularly for aggressive media combined with edges/corners on the parts. This is a common issue for laser-based sensors and specular (shiny) surfaces, particularly along edges. The root cause was identified in detailed views of the scan data, which showed scatter exceeding the acceptance criterion of 0.5 mm.
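To illustrate why single-sample scatter trips a fixed 0.5 mm threshold, and how a small median filter can suppress isolated spikes while preserving genuine defects, consider the hypothetical sketch below. The function name, window size, and one-dimensional scan-line framing are illustrative assumptions, not the project's QA code:

```python
import numpy as np

def flag_noncompliant(deviations_mm, tol_mm=0.5, window=5):
    """Flag out-of-tolerance regions along a scan line of surface deviations.

    Raw per-point flags are sensitive to single-sample scatter from specular
    returns; a median filter over a small window suppresses isolated spikes
    so only sustained deviations are reported."""
    d = np.asarray(deviations_mm, dtype=float)
    raw = np.abs(d) > tol_mm
    half = window // 2
    padded = np.pad(d, half, mode="edge")
    smoothed = np.array([np.median(padded[i:i + window]) for i in range(len(d))])
    filtered = np.abs(smoothed) > tol_mm
    return raw, filtered
```

The trade-off is that defects narrower than half the window are filtered out along with the scatter, so the window must stay small relative to the smallest anomaly of interest.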

In cases where slag was present, the QA algorithm identified it, and subsequent path plans were generated, displayed, simulated, and executed; see Figure 2. In cases where no slag remained and the finish was not highly specular, the QA passed the part.

Figure 2. Processed Part and Resultant QA that highlights non-compliant regions for re-processing

Overall, the demonstration was considered a success, and follow-on work is in the proposal development phase. The next steps for the team: first, consider establishing two test sites where follow-on development and testing can be performed; second, evaluate functionality around work flow, path planning relative to a perceived and characterized anomaly or feature, human mark/indication and planning, process refinement considering PushCorp functionality and 3M media, and finally Digital Twin elements to enable consistent performance between the two sites.

Additional information and videos highlighting the current capability will be available soon!

Latest updates to the packages can be found here: https://github.com/ros-industrial-consortium

Special thanks to the Robotic Blending M4 team members:

Schoen Schuknecht – 3M

JD Haas – 3M

Leon Adcock – Caterpillar

Prem Chidambaram – Caterpillar

Wajahat Afsar – Caterpillar

Chris Allison – GKN Aerospace

Richard Cheng – GKN Aerospace

Mike McMillen – PushCorp

Jonathan Meyer – SwRI

Austin Deric – SwRI

Alex Goins – SwRI

Lance Guyman – Wolf Robotics

Jason Flamm – Wolf Robotics

Zach Bennett – Wolf Robotics

Nephan Dawson – Wolf Robotics

Global ROS-I Community Meeting

Thanks to our presenters, Paul Evans (host), Paul Hvass, Matt Robinson, Min Ling Chan, Dave Coleman, and Mirko Bordignon for an informative session on ROS-Industrial projects seeking community involvement. This web meeting, held on 16 May 2017, was the second Global ROS-I Community Web Meeting. Scroll down below the video for abstracts.

Recording of the Global ROS-I Community meeting held on 16 May, 2017

  • Paul Evans (ROS-Industrial Americas and SwRI): Welcome and review of the agenda.  The Global Community Web Meeting focused on open source projects seeking broader community participation.
     
  • Paul Hvass (PlusOne Robotics): Outgoing ROS-I Americas Program Manager message to the community and introduction of incoming ROS-I Americas Program Manager.
     
  • Matt Robinson (Transitioning to ROS-Industrial/SwRI): Incoming ROS-I Americas Program Manager greeting to the community.
     
  • Min Ling Chan (ROS-Industrial Asia Pacific and A*STAR): PackML Business Analytics Dashboard
    • Highlighted a PackML (Packaging Machine Language) project focused on creating the ability to run ROS across multiple OEM PLCs in manufacturing plants, enabling communication between PLCs along with increased interoperability, modularity, and efficiency. Also proposed is a new Business Analytics Dashboard to provide users an intuitive display of real-time root-cause analysis and OEE.
  • Paul Hvass (PlusOne Robotics): Sensor Configuration and Calibration Assistant
    • Presented a project to create a graphical user interface for the industrial calibration package with preset configurations for the most common calibration cases to simplify the calibration process.
  • Dave Coleman (PickNik): MoveIt! Code Sprint – Minimum Cycle Time Motion for Bin Picking
    • Introduced the MoveIt! Code Sprint focused on integrating existing academic motion planners into MoveIt! that have the potential to improve cycle time, optimize existing planners, and systematically compare performance for industrial use cases.
  • Mirko Bordignon (ROS-Industrial Europe and Fraunhofer IPA): The ROSIN Project
    • Provided an overview of the new ROSIN European initiative.  ROSIN was launched to bring ROS to the factory floor with a focus on improving software quality.  Included is a targeted investment for ROS-Industrial Focused Technical Projects.  Educational activities are included as a key component of the initiative to support wider adoption.

Recap: Successful ROS-I Consortium Americas Meeting in Chicago

On April 7, the ROS-Industrial Consortium Americas hosted its annual meeting in Chicago, following on the heels of the Automate show. The meeting brought together more than 60 people from across the industrial robotics industry to learn about, discuss, and plan for the future of open-source software for manufacturing automation. The Consortium is now a worldwide organization led by SwRI in the Americas, Fraunhofer IPA in Europe, and A*STAR ARTC in the Asia-Pacific region.

The annual meeting marked a number of milestones for ROS-I:

The ROS-I Consortium Americas meeting brought together representatives from across industry including end users, system integrators, robot OEMs, automation equipment OEMs, and researchers.

The Open Source Robotics Foundation was represented by Tully Foote who took questions during an open mic session, and also led a round table roadmapping discussion about ROS/ROS 2 core.

Matthew Robinson from Caterpillar gave an inspiring keynote presentation on the topic of Flexible Automation for Manufacturing in Heavy Industries.

The ROS-I Consortium is global! Each regional program manager presented an update about the progress and future plans for his/her region. Left to right: Min Ling Chan from RIC-Asia Pacific, Dr. Mirko Bordignon from RIC-Europe, and Paul Hvass from RIC-Americas.

During the afternoon session, Consortium members organized into groups to discuss specific technical roadmapping thrusts.

Meeting attendees also met with Focused Technical Project moderators to talk about one of the five new project topics that were introduced for 2017.

One of the chief benefits of the Consortium is the ability of members to sponsor Focused Technical Projects. These projects expand the capabilities of ROS-I, and costs are shared by participating members, so each member’s resources are multiplied by those of its collaborators. This year, five project topics were announced and then discussed in a round table forum:

  • Collaborative Robotic Fastener Installation
  • Sensor Configuration and Calibration Assistant
  • MoveIt! Code Sprint
  • ROS-I Business Analytics Dashboard
  • Robotic Edge Processing

To learn more about the ROS-I Consortium, please visit the Join Now page.

ROS-I Consortium Annual Meeting to Feature Eight Noted Speakers

Meeting to be held April 7 in Chicago

  • Keynote speaker Matthew Robinson, Caterpillar
  • Brett Hemes, 3M
  • Trent Weiss, The Boeing Company
  • Dr. Steve Turek, Manufacturing USA
  • Tully Foote, OSRF
  • Min Ling Chan, ARTC
  • Mirko Bordignon, Fraunhofer IPA
  • Paul Hvass, SwRI
Click the image above to download a printable flier for the ROS-I Consortium Americas Annual Meeting.
