A Step Forward for Industrial Use Cases with the Intel RealSense D455

Like many others I was saddened by the news that Intel is winding down their RealSense business. There was a time when I struggled to find meaningful uses for their cameras: they were small and affordable, but they had too much noise and not enough accuracy for the specific applications I was involved with. That changed with the D455.

The first application we tried the D455 on was one where the D435 had failed to meet the requirements under benchmark testing. The amount of noise and the waves in the depth image were simply too great and the end-user expressed concern when reviewing the output. After reading the specs of the D455, purchasing a unit for the lab, and sharing the results with the team, the D455 was selected for the application.

Within a few minutes of plugging in a D455 for the first time it was apparent how much more stable the 3D image was compared to the D415/D435. The rippling “quantum foam” I was so used to seeing was greatly reduced. The wider colorized 3D image made it much easier to see what was going on, and the higher accuracy added detail to objects that were just blobs before. These improvements combine to make it a practical option for real robotic projects.

Image output from the D435

Image output from the D455

We have the opportunity to use a significant number of these D455 units in this upcoming application, and it is clear this camera will continue to be a contender for numerous other projects as well. The team here is excited to see this product continue to be supported, with long-term availability, and we eagerly await clarification from Intel about support and future updates for this product and the other stereo-based products they have indicated they will continue to support.

As part of the ROS-Industrial open-source project we continue to provide information and resources around 3D cameras that our team here in the Americas and our partners around the world have tested. You can see updates to this list, which also includes legacy hardware for comparison, over at https://rosindustrial.org/3d-camera-survey

Behavior Cloning for Deep Reinforcement Learning Trajectory Generation

Motion planning for robotic arms is much more computationally intensive than one might initially realize. When you move your arm around, you do not need to actively think about how to move your individual joints to reach an end position while avoiding obstacles, because our brains are very efficient at motion planning. Robotic arms handle motion planning differently than humans. Before the robot moves, the motion planner has already calculated all states between the start and end position. The computational difficulty in this approach comes from the infinite number of possible joint positions the arm can take between the start and end goal, so an exhaustive search over this space would be extremely inefficient. Consequently, motion planners simplify the problem by discretizing the possible arm positions to facilitate efficient planning. By doing this, we limit the arm to positions that fall within the discretization. This approach works well for free-space motion when the arm does not need to plan around a cluttered scene, but it often struggles to compute trajectories for tightly constrained spaces, such as when the arm is in a tight passageway. If the discretization is too coarse, a solution may not be possible for standard motion planners; if we make the discretization finer, computation time grows exponentially.
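To make that scaling concrete, here is a small illustrative calculation. The joint count, joint range, and resolutions below are assumptions chosen only to show the trend, not parameters of any particular planner; the number of grid points a naive discretized search would have to cover grows as the per-joint step count raised to the number of joints.

```python
# Illustrative only: how the size of a discretized joint space explodes as the
# resolution becomes finer. Joint count, range, and resolutions are assumed values.
def num_grid_states(num_joints: int, joint_range_deg: float, resolution_deg: float) -> int:
    """Grid points when every joint is discretized at a fixed angular resolution."""
    steps_per_joint = int(joint_range_deg / resolution_deg) + 1
    return steps_per_joint ** num_joints

for resolution_deg in (10.0, 5.0, 1.0):  # coarse to fine
    states = num_grid_states(num_joints=6, joint_range_deg=360.0, resolution_deg=resolution_deg)
    print(f"{resolution_deg} deg per joint -> {states:.2e} candidate configurations")
```

For a 6-joint arm this jumps from roughly 10^9 candidate configurations at 10-degree resolution to roughly 10^15 at 1-degree resolution, which is why simply refining the grid is not a practical answer for constrained spaces.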

To get around these shortcomings, it is helpful to use a different approach such as reinforcement learning. The reinforcement learning agent plans trajectories by generating intermediate states through prediction of continuous joint increments. The joint trajectory is generated by repeatedly observing the environment, planning a small, continuous joint update, then executing the update. The planning of joint updates is done using a deep neural network that learns, through trial and error, how to navigate the environment. The plans taken by the arm are judged according to a cost function, and a reward is given to the arm in accordance with the optimality of the trajectory taken by the robot. The neural network adjusts its predictions to obtain the best possible reward.
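As a rough sketch of that loop, the agent repeatedly maps an observation to a small, bounded joint increment and executes it. The environment interface, network sizes, and step bound below are assumptions for illustration, not the project's actual code.

```python
# Sketch of the observe -> predict joint increment -> execute loop described above.
# `env` and the network sizes are hypothetical placeholders.
import numpy as np
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Small actor network mapping an observation to continuous joint increments."""
    def __init__(self, obs_dim: int, num_joints: int, max_step_rad: float = 0.02):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_joints), nn.Tanh())
        self.max_step_rad = max_step_rad

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Bound each per-step joint update so the motion stays small and continuous.
        return self.max_step_rad * self.net(obs)

def rollout(env, policy: Policy, max_steps: int = 500):
    """Generate one trajectory by repeatedly observing and applying joint increments."""
    obs = env.reset()
    increments, total_reward = [], 0.0
    for _ in range(max_steps):
        with torch.no_grad():
            delta_q = policy(torch.as_tensor(obs, dtype=torch.float32)).numpy()
        obs, reward, done, _ = env.step(delta_q)  # execute the small joint update
        increments.append(delta_q)
        total_reward += reward
        if done:
            break
    return np.array(increments), total_reward
```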


With a reinforcement learning approach to trajectory generation for 6+ degree-of-freedom arms, training times can often be very long. Therefore, we can apply our existing search-based motion planning capabilities to improve training times for the reinforcement learning algorithm. When the robotic arm starts the learning process, it wiggles around randomly. It receives a reward according to the quality of the trajectories generated from the random wiggling. To find a valid trajectory, the arm has to wiggle in just the right way to reach the waypoint, obtaining a large reward. Due to this random exploration process, the system takes a very long time to train, on the order of several days depending on hardware. However, we have access to a plethora of valid trajectories provided by planners like TrajOpt and OMPL. The reinforcement learning agent attempts to imitate the actions taken by the motion planners, with the addition of some exploratory noise, and reaches a valid path much sooner than it would by chance.

The examples of valid trajectories provided by the motion planners are used to train the actor network of the reinforcement learning agent in a supervised fashion. With the actor network learning from trajectories generated by graph-search techniques, the weaknesses of the motion planners come into play: TrajOpt generally takes a long time to generate valid trajectories, and OMPL algorithms such as RRT are non-deterministic. Due to these weaknesses, we cannot rely on the examples provided by the motion planners to train the actor network entirely in a supervised fashion. Instead, we train the actor network on examples provided by the motion planners and then switch to training the network through standard reinforcement learning exploration methods. Training benefits from the expert motion plans by first imitating the planners' actions and then learning how to improve the imitated trajectories to obtain more reward.
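A minimal sketch of that first, supervised phase might look like the following; the expert observation/action tensors and the later reinforcement learning fine-tuning are placeholders, not the actual training code from this work.

```python
# Sketch of the two-phase scheme described above: first imitate expert motion-planner
# trajectories with a supervised loss, then continue with ordinary RL exploration.
# `expert_obs` / `expert_actions` are assumed (observation, joint-increment) pairs
# extracted from planner trajectories.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def pretrain_actor(policy: nn.Module, expert_obs: torch.Tensor,
                   expert_actions: torch.Tensor, epochs: int = 50) -> nn.Module:
    """Behavior cloning: regress the actor onto expert (state, action) pairs."""
    loader = DataLoader(TensorDataset(expert_obs, expert_actions),
                        batch_size=256, shuffle=True)
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for obs, action in loader:
            optimizer.zero_grad()
            loss = loss_fn(policy(obs), action)   # match the planner's joint update
            loss.backward()
            optimizer.step()
    return policy

# After cloning, exploration noise is added and training switches to the normal
# actor-critic updates (not shown here).
```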

This early work shows promise for leveraging reinforcement learning for motion planning. A number of improvements are slated, both near term and long term, in particular to enable use in meaningful industrial confined-space applications. One such improvement is a general restructuring of the core components of the motion planner: the reward function and visualization logic need to be decoupled from the environment simulator, and some of the hyperparameters should be renamed so that they are more conceptually clear and concise.

Furthermore, training time, which takes approximately 3-5 days of compute, can be dramatically decreased by dividing training into multiple tasks which can be computed in parallel by a specialized deep learning computer with multiple GPUs. By leveraging four GPUs, we estimate that the training process could be cut down to 24 hours.

Farther out, we hope to leverage a different neural network architecture that can handle the arbitrary meshes and point clouds that dynamic systems will inevitably encounter and must explicitly learn from; the current neural network cannot handle these elements of the system.

There remains a large amount of opportunity in this space, and the idea of a hybrid optimization-based and learning-based motion planning framework offers a balanced solution that promises to enable precision motion planning applications without driving excessive planning times or producing invalid solutions.

Stay tuned as we look forward to sharing more on this and other motion planning topics, as we seek to further the state of the art and provide compelling capabilities in the open-source motion planning framework that enable advanced applications every day.

ROS2-Industrial training material has been open-sourced

ROS-Industrial recently open-sourced its ROS2 training material, created with ROSIN (https://www.rosin-project.eu/) funding. Here is the link to the repository: https://github.com/ros-industrial/ros2_i_training. This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/legalcode).

The contents include slides and workshops for the following topics:

  • ROS2 basics:
    • Composed node, publish / subscribe, services, actions, parameters, launch system
    • Managed nodes, Quality of Service (QoS)
    • File system
  • SLAM and Navigation
  • Manipulation basics

More information about this update can be found in the ROS Discourse post: https://discourse.ros.org/t/open-sourcing-our-ros2-industrial-training-material/21179

Get in touch with us if you would like to improve the existing content or contribute new content.

Thanks to ROSIN for making this happen!

Breaking Down the ROS-Industrial Consortium Americas 2021 Annual Meeting

The ROS-Industrial Consortium Americas (RIC Americas) gathered virtually on April 13-15 for the 2021 Annual Meeting. The event was a great opportunity for the diverse ROS community to discuss challenges and progress while laying out new initiatives and fostering relationships. The event demonstrated several ways RIC Americas members and the open-source community are furthering ROS for industry through global consortia initiatives, tech and software company projects, and collaboration among researchers and industry organizations. All the presentations and videos for the public day and the demonstrations may be found over at the event page, or over at the ROS-Industrial YouTube channel playlists page!

Day 1

The first day was open to the public and kicked off with a summary of 2020 RIC Americas activities relative to ROS-Industrial. The topic of training centered on the migration from ROS to ROS2, the move from preconfigured virtual machines to a cloud-based training environment, and the delivery of the training virtually. Overall feedback was positive; however, there are quite a few areas for improvement:

  • Some exercises have still required reaching back to ROS1 to complete.
  • There could be more explanation about how things are done and why.
  • Labs need to be optimized for ROS2 and to be more meaningful in the virtual format.

Tech updates that have been contributed include collision checking improvements, parallel process planners using Taskflow, as well as discrete capability improvements such as kinematic calibration added to robot_cal_tools and heat-raster-based tool path planning on meshes within the toolpath_offline_planner.

Darryl Lee of the ROS-Industrial Consortium Asia-Pacific (RIC Asia-Pacific) shared Focused Technical Project (FTP) developments around Interoperable Robotics with RMF. The Robotics Middleware Framework (RMF) has been a successful program funded by the Singapore government that has sought to implement a unified IoT infrastructure, based on ROS, for the healthcare industry. This new FTP initiative seeks to extend this work to commercialization for the manufacturing industry.

RIC Asia-Pacific also launched ROS2 training, which additionally covered their own Easy_Perception_Deployment (EPD) and Easy_Manipulation_Deployment (EMD) packages. Feedback from their training events highlighted use cases around mobile manipulators, depalletizing, and easy pick-and-place configuration setup. The talk by RIC Asia-Pacific concluded with Glenn Tan going into the development of their EPD and EMD implementations, where to find them, and how they came to be.

Christoph Hellmann Santos of the ROS-Industrial Consortium European Union (RIC EU) shared developments in collaboration with Fraunhofer IPA, including recent advancements as the ROSIN project concludes, and the current work and near-term roadmap for the Cognitive Robotics and AI Innovation Center, which seeks to advance ROS2 capability for industry with an initial focus on hybrid model-driven engineering and a diagnostics and monitoring framework for ROS and ROS-based systems.


RIC EU shared their launch of ROS2 training and application development examples, focused on easy programming for welding robots, including seam detection, collision-free motion planning and execution, optimized path planning, and workpiece pose detection, with easy setup and integration into the URCaps ecosystem.

Following the ROS-Industrial updates, a collaboration between MathWorks and Delft University of Technology was presented on Automated Code Generation of Simulink Models as ros_control Controllers, which showcased the process for moving Simulink-developed controllers into the ros_control framework.

From here partner organizations provided introductions and updates on what has been taking place within their organizations. NIST has been a champion for ease of interoperability and moving to unifying standards to foster more efficient innovation for all users of robotics. NIST’s Craig Schlenoff focused on the move towards agility and the activities related to facilitating agility in robotics. Georgia Tech Research Institute’s Stephen Balakirsky shared their organization’s leverage of ROS to facilitate advanced capability, including how they have enabled robot teams to work together on tasks. ARM Institute CTO Arnie Kravitz shared both the impact of ROS on development within the ARM Institute technology project portfolio and how that portfolio becomes a proving ground for capability within the ROS ecosystem.

Spirit AeroSystems wrapped up the morning by reviewing the outcomes from the ARM Institute-supported Collaborative Robotic Sanding project. This project featured a ROS2 software backbone, but also included ROS-based training woven in with a gap assessment for Spirit and partner Wichita State’s National Institute for Aviation Research. The technical outcomes of the project were notable, and it was interesting to build a fully functional ROS2 system, but for ROS to gain real traction in industry a more thorough educational infrastructure is required to support all facets of the industrial teams that create, deploy, and sustain these future systems on the factory floor.

The afternoon provided a number of advances from the tech company side of the ROS-Industrial Consortium, starting with the case for migrating to ROS2 by Open Robotics. There are now several utilities to assist with migration, and a handy cheat sheet if someone is interested in considering ROS2 migration, with more utilities on the horizon.

PickNik’s Andy Zelenak shared the recent collaboration with Universal Robots and FZI Research to realize a ROS2 driver, which is now available for all UR models. Intel’s Atul Hatalkar shared a vision for ROS++, an evolution of ROS to support industrial autonomy, including non-programmer utilities, active bridges to facilitate autonomous interoperability, and security and data reliability.

Microsoft shared a good portion of their recent work around the Mixed Reality Toolkit and how they are working to enable richer AR/VR applications in ROS2. Canonical’s Sid Faber updated the group on the latest efforts around security, particularly security for legacy ROS implementations via their ESM service and shared their work on the CIS ROS Melodic Benchmark. AWS’ Jack Reilly introduced AWS’ goals around their EDU program seeking to “democratize robotics, support the ROS Community, and create features and resources to support learners and educators.” This is a toolset focused on educational content, including accessible cloud-based content, development environment utilities, and even physical implementations to support educational objectives.

Wrapping up the day, Michael Jaentsch of Siemens and Robert Burridge of TRACLabs shared work that seeks to leverage interoperability to facilitate improved collaboration between disparate devices. Siemens, with funding from the ARM Institute, built on prior NIST-funded work extending MTConnect to ROS to support many-to-many functionality, and brought in RTI’s Connext DDS to create an OPC-UA to DDS bridge. TRACLabs demonstrated using their two tools, PRIDE and CRAFTSMAN, to facilitate dynamic tasking between disparate types of hardware in a NASA-based application.

Day 2

Day two brought together the ROS-Industrial Consortium membership, focused on the Americas but open to global members. Here the focus was on members sharing their experiences, learning from each other, and providing feedback on how the Consortium, as an organization, should leverage its shared resources to advance open source for industry. The panel focused on getting ramped up in ROS development and the decisions to make before going in that direction. There were insights around talent development and how to engage in open source, with a great challenge by PlusOne’s Shaun Edwards to think beyond simply pushing fixes to leveraged repos.

Summary of the Training Flows Needed to Support ROS for Industry

From there, inputs for influence of the roadmap were next. To date, the member feedback has evolved the focus of the Consortium into four main themes:

  • Ease of Use
  • Advanced Capability
  • Interoperability
  • Human Interface and Reaction

Feedback from the 2020 workshop indicated that additional resources for members and the broader community were of great interest. This includes write-ups, simple explainers, and more complete and up-to-date wikis to help bridge the gap between traditional industry decision makers and application builders and the open-source developer community. Feedback from the 2021 workshop centered on educational resources, field service “How Tos,” status dashboard templates, functional safety, ROS2 porting guidance, and security resources. There was significant interest in capability around machining/CAM-style path planning and execution to support processes such as additive manufacturing, as well as additional PLC tools for controlling external devices.

The member speaking portion of Day 2 was highlighted by David Poweleit of Steel Founders’ Society of America (SFSA) sharing the needs of their membership, and how they align with the objectives of ROS-Industrial, relative to driving high-mix intelligent processing, in a way that is easy for end-user/operator interaction.

Boeing, represented by Martin Szarski and William Ko, followed up with their success story around open-sourcing their navigation implementation as they worked toward robust factory mobility. This was an interesting talk, spanning both the technical development and deployment of their navigation solution and the journey to open source this resource for the broader community.

The day wrapped with a workshop on project brainstorming. Here the membership offered up collaborative topics that could become projects or working groups to address challenges relative to industrial leverage of open source. A few key topics:

  1. Hardware Reference Implementation Working Group – The goal of this working group would be to leverage existing standards, but factor in a way that is understandable for the industrial community. Initial starting point would be manipulators, ideally speaking ROS2 out of the box, and consistent across OEMs with a focus on target or desired behavior.
  2. Scan-N-Plan Implementation Generalized for High-Mix Processing – Demonstrated on an SFSA use case, this would be a high-mix surface finishing where the output would be a generalized implementation of Scan-N-Plan that could be adopted by solution providers for various high-mix end user customers.
  3. ROS Workbench – Provide model-driven and GUI-driven utilities to lower the barrier to entry for manufacturing engineers, or those with more traditional industrial automation experience to set up and do preliminary configuration of a ROS-based application.
  4. Calibration – Revisit the calibration toolset and create one toolset with resources to enable all the various forms of calibration, intrinsic, extrinsic, and kinematic, to enable high performance ROS-based applications.
  5. Planning for Additive or Machining – The interest in doing more with manipulators in one system is appealing, but currently there are no easily incorporated tool planning utilities to support additive or subtractive processes as seen in Additive or CNC type machining applications.

The next day, Demo Day!, was a great share of what a number of the different members and community participants are doing with regards to making open source happen in the real world, and the hardware to make interesting applications happen. We encourage you to check out the event page for the full Demo Day! list, which includes the link to the video. Special thanks to all the participants in Demo Day! for sharing their contributions and recent developments.

Now the ball is in the court of the community and the membership that expressed interest in working together to address gaps and move these various topics forward. We are excited about what the next year has to offer and look forward to sharing more outcomes, collaboration opportunities, and resources for the community and industry to continue making open source software a reality on production floors around the world.

NIST Grant Awarded to SwRI for Agile Robotic Assembly

A grant from the National Institute of Standards and Technology (NIST) has been awarded to Southwest Research Institute (SwRI) with the goal of accelerating development of agile, robotic assembly processes for manufacturing. This complements internal research at Southwest Research Institute (SwRI) for developing robotic machine assembly capabilities (Figure 1).

Figure 1: A peg-in-hole assembly task performed with a collaborative robot at SwRI. The parts used were from the Siemens Robot Learning Challenge, which involves assembly of a set of gears and shafts.

The grant is inspired by the goals of NIST to promote agility within industrial robotics. Their recent efforts to promote agility, such as the Agile Robotics for Industrial Automation Competition (ARIAC), have extensively addressed challenges associated with robotic kitting. Previous ARIAC competitions have had teams competing in a simulation environment with judging metrics focused on efficiency, performance, and cost of sensors used. However, the new challenge for ARIAC 2021 will also involve assembly operations. In alignment with the competition, the goal of this new grant is to develop a software framework that will allow plug-and-play development of assembly algorithms, which will help users reach the hardware testing and implementation phase faster. The framework will consist of a software visualization to verify assembly processes and metrics to evaluate capabilities of the overall assembly solution. The Robot Operating System (ROS) will be heavily utilized, with primary development in ROS 2 to future-proof the software framework.

For the eventual end users, we strive to enable manufacturers to dynamically reprogram robots with agility that meets the user’s needs. This may include handling a variety of part types, adjusting to changing part size and scale, and adapting to task failures in real-time. To standardize the evaluation of assembly algorithm performance, the NIST Assembly Task Board #1 (Figure 2) will be used to generate metrics that allow manufacturers to compare performance of different strategies. Therefore, end users can focus on improving their assembly strategy rather than building infrastructure to define robot capabilities, sensors, and evaluation methods.

Figure 2: NIST Task Board #1 that will be used for testing and development of a robotic assembly framework.

Collaboration with industry and research partners will be important to understand the needs of end users and the desired features that would enable agile, robotic assembly implementations. Consequently, we want the framework to be evaluated on specific tasks of interest to these partners. Soliciting industry interest and engaging in formal collaboration, such as a ROS-I Focused Technical Project, is the eventual desired outcome.

What happened at ROS-Industrial Conference 2020?

This year was a tough year for event organizers. Around the world, events needed to be moved online in reaction to the COVID-19 pandemic. The pandemic also led to the ROS-Industrial Conference being held as a virtual event, but this did not affect its success. With more than 250 attendees, the conference grew by 66% in attendance compared with previous years.

This year’s conference featured four main activities. There were topic-specific, one-hour technical presentation sessions, in which 2-4 speakers presented their newest developments and experiences with ROS. A small virtual exhibitor area enabled attendees to get in contact with organisations active in the ROS-Industrial community. In networking sessions, attendees had the opportunity to meet and get to know each other. The ROS-Industrial Video Competition accompanied the conference, and the winners’ ceremony took place on the second day.

Conquering the industry

The conference kicked off with a session about ROS-Industrial and its current state. In this session, Christoph Hellmann Santos (Program Manager of the European branch of the ROS-Industrial Consortium) gave a motivating presentation about ROS and the mission of the ROS-Industrial Consortium. He explained that low-volume, high-variance production with robotics is a “final frontier of robotics” and that pioneering in robotics is hard and lonely. According to Christoph Hellmann Santos, ROS and ROS-Industrial have a unique community, which helps on the one hand with reusing robot software and on the other with being less lonely and getting support. With more than 80,000 developers who have published more than 200,000 ROS packages on GitHub, ROS is also the biggest open-source robotics ecosystem that has ever existed. For a long time industry said ROS was not ready for industry; today thousands of robots controlled by ROS/ROS2 are running 24/7 in factories (BMW, Audi, MiR, and others). Another opportunity, Christoph Hellmann Santos stated, is that ROS/ROS2 is a prime platform for AI-based algorithm deployment in robotics.

As the second presenter in this session, Carlos Hernandez Corbato (project manager of the H2020 ROSIN project and assistant professor at TU Delft) presented the results of the H2020 ROSIN project. The project was established in 2017 as a support project for the ROS-Industrial initiative. It had three major missions: making ROS better, business-friendly, and accessible. The ROSIN project itself was a great success: more than 70 technical and educational projects around the ROS-Industrial initiative were financed. In total, ROSIN generated 9M€ of investment into new ROS packages. This was also visible throughout the conference, as a number of the ROSIN FTP projects presented their results.

The ecosystem is vibrant

During the conference, many new and recently developed packages were presented. This started off with a session on visualization tools, in which Rafael Luque (FADA) presented the integration of laser projectors into ROS. Next, Darko Lukic (Cyberbotics) gave details about ROS2 and Webots integration. Levente Tamas (Technical University of Cluj-Napoca) and Francisco Marin (Rey Juan Carlos University) went on to explain how to enable augmented reality in ROS using different tools.

In the industrial tools session, Johannes Meyer (Intermodalics) explained how the real-time robotics framework OROCOS can be integrated into ROS. Rafael Arais (INESC TEC) explained the robin package, which provides a ROS-CODESYS bridge. Luca Muratore (IIT) showed the ROS End-Effector framework, which abstracts end-effector control, and finally Alejandro Mosteo (Centro Universitario de la Defensa) presented RCLAda, an Ada implementation for ROS2.

In the session about planners, Kristofer Bengtsson (Chalmers University of Technology) presented a sequence planner for intelligent industrial automation using ROS. Allessandro Umbrico presented ROXANNE, a ROS package aiming to facilitate the integration of Artificial Intelligence automated planning and execution techniques with robotic platforms; it specifically supports the development of ROS-based deliberative architectures integrating timeline-based planning and execution capabilities. Finally, César López (Nobleo Technology) showed new implementations for coverage path planning and control. In the control and path planning session, Jordan Palacios and Victor López (Pal Robotics) explained the new ros2_control developments and showed a practical example. The next speaker was Henning Kayser (PickNik Robotics), who presented the newest developments around MoveIt for ROS2. Finally, Gilles Chabert (IRT Jules Verne) talked about trajectory validation using interval computation. This session showed that ROS2 is getting ready for manipulation.

Model-driven robotics and development solutions available

ROS is also becoming a prime platform for model-driven robot development. In the session about model-driven robotics, Ricardo Sanz (Polytechnic University of Madrid) explained how systems engineering knowledge can be used at runtime by their framework called mROS. Then Ansgar Rademacher (CEA List) presented the integration of ROS/ROS2 into Papyrus for Robotics, a model-driven development IDE. Finally, Shashank Sharma and YJ Lim (MathWorks) presented how MATLAB and Simulink can be leveraged for model-driven automated driving system development.

In the full-stack solutions session, speakers presented solutions for developing robot software using ROS. Pablo Quilez (Drag & Bot GmbH) showed how their software makes developing industrial robot applications easy and robot-manufacturer independent. Herb Kuta (LGE) talked about OpenEmbedded, meta-ros, and webOS, which make developing custom Linux distributions for embedded systems that package ROS easy and automatable. Mathew Hansen (AWS) talked about AWS RoboMaker, which is a cloud-based solution for robot development and lifecycle management.

Software quality and Security are improving

Industrial deployment of robot software requires high quality code. In the software quality session, Bainian Chen (ARTC) explained new features of industrial_ci, which is a continuous integration solution for ROS and ROS2. Zhoulai Fu and Francisco Martinez (ITU) presented their experiences with fuzz testing ROS components. Increased connectivity in automation leads to higher productivity but also to higher vulnerability for cyber-attacks. Therefore, security is a major factor for robot systems. In the security session, Victor Mayoral (Alias Robotics) presented how robot end-points can be protected against cyber-threats. Federico Maggi (Trend Micro) explained how legacy programming languages in robotics endanger robot security. Finally, Ulrich Seldeslachts (LSEC) gave a broader perspective on hardening industrial robotics installations.

ROS 2-based real-time systems are in sight

Real-time is becoming a more and more pressing topic for spreading ROS in industry. ROS 2 now has real-time capable middleware and schedulers. Ralph Lange (Bosch Corporate Research) presented their implementation of a real-time and deterministic scheduler for ROS 2. Francesca Finocchiaro and Pablo Garrido (eProsima) presented how ROS 2 can be run on microcontrollers using µROS. Finally, Lennart Puck (FZI) presented how real-time systems can be created using ROS 2 as well as a benchmark of these systems. Lennart Puck stated that based on their benchmarks ROS 2 can meet real-time requirements. Katherine Scott (Open Robotics) talked about the transition from ROS 1 to ROS 2 and the general design decisions. The conclusion in general is that now is the time to switch to ROS 2.

Professional applications are expanding

Another part of the conference was a set of three sessions on applications of ROS in professional scenarios or products. This was kicked off with a session on industrial applications on the first day of the conference, where ABB Corporate Research presented how ABB robots can be controlled with ROS and Tecnalia showed how scan-and-plan applications can be implemented on industrial robots using ROS. On the second day, another session on industrial applications featured presentations from Bosch, Sewts, and Pilz. Timo Steinhagen (Bosch) presented the Locator, which was developed using ROS. Sewts presented their ROS-based robot application for handling textiles. Pilz talked about their ROS-based service robotics portfolio. Another session focused on applications in agriculture. Here, Heiko Engemann (FH Aachen) presented their robot, the ETAROB, which runs ROS. Felipe Neves dos Santos (INESC TEC) explained how they use ROS for robots for woody crops. Finally, Wilco Bonestroo (Saxion University of Applied Sciences) talked about using ROS to develop drones for agriculture.

ROS-Industrial Video Competition

Further proof of ROS in application came from the ROS-Industrial Video Competition, which solicited videos in the categories of professional applications and cloud robotics. The cloud robotics category was sponsored by AWS. In total, 33 videos were submitted. In the cloud robotics category, INESC TEC won with the following submission.

The professional application category was won by the company QuadSat, which produces drones for antenna testing.

Links & Videos

All competition videos can be found here: https://rosindustrial.org/rosindustrial-video-competition-2020

The conference videos can be found here:

ROS-Industrial Asia Pacific Workshop 2020

The annual ROS-Industrial Asia Pacific Workshop took place on 29 October 2020, this year in a one-day digital webinar format. The workshop was opened by our Guest-of-Honor, Prof. Quek Tong Boon, Chief Executive of the Singapore National Robotics Programme. After the opening, Erik Unemyr, Consortium Manager for ROS-Industrial Asia Pacific, shared updates on the topic of “Industry Ready ROS 2 – Easy to Adopt Modules with Quality”, covering the current technology focus the team has been developing in-house, including:

  1. easy_perception_deployment – a ROS2 package that aims to accelerate the training and deployment of computer vision models for industry use (now in beta release; you can find it here)
  2. easy_manipulation_deployment – a ROS2 package that has a user-friendly Graphical User Interface (GUI) to create a robotic workcell and supports a variety of commonly used industrial end-effectors using a flexible grasp implementation approach. This package will be released soon and will be made available on the ROS-Industrial GitHub.

Next, we had the opportunity to invite Roger Barga, General Manager at AWS Robotics, to present on “The Role of the Cloud in the Future of Robotics”. During his presentation, he addressed the importance of cloud computing for robotics, such as using it for the development of robotic applications in simulation, testing, and deployment. AWS also currently supports ROS, ROS2, and Gazebo within their services.

Matt Robinson, Programme Manager for our ROS-Industrial counterpart in the Americas at the Southwest Research Institute (SwRI), presented on “Enabling Production Performance in ROS-Based Systems”, where he brought up the value of ROS2 for various industrial use cases and also showcased some of the developments happening at SwRI.

Sharing more details about the activities at the Advanced Remanufacturing and Technology Centre, Bai Fengjun, Technical Lead from the Advanced Robotics Applications team at ARTC, presented development on the Next Generation Hyper-Personalization Line and how ROS has played a part in the development of such applications for the Fast Moving Consumer Goods sector.

Michael Sayre, CEO & Co-Founder of Cognicept Systems, one of our Consortium Members in the Asia Pacific Region, then presented on the importance of error handling and remote management for robotic fleets, and their latest development of the ROS2 Listener agent that was developed together with the ROS-Industrial Team at ARTC. You can find the repository here.

Shortly after, Albertus Hendrawan Adiawahono, Head of the Mobility Group at the A*STAR Institute for Infocomm Research (I2R), presented on their current efforts with the local healthcare ecosystem to develop modules that would help robots be more resilient in the hospital ward setting, where the environment rapidly changes. They have completed proofs of concept in which the robots are able to adapt to lifts, curtains, and even a simulated code blue emergency drill.

After the lunch break, we invited Jack Sheng Kee, Lab Director of the Delta Research Centre, a ROS-Industrial Consortium Member, to share on “Reconfigurable and Flexible Automation in Manufacturing” where he presented some of the existing solutions Delta has developed, and how they are all ROS supported.

We also had the team from Open Robotics, Marco Gutierrez and Grey, present roadmap updates with new features and future plans for Ignition Gazebo, ROS2, and the Robotics Middleware Framework (RMF). The development of RMF has become a key effort in driving the integration and deployment of wide-scale smart robotics systems, including the communication between robots, building infrastructure, and other edge devices.

Christoph Hellmann Santos, Consortium Manager for ROS-Industrial Europe at Fraunhofer IPA presented on the latest updates and success stories of both the ROSIN and ROS-Industrial Projects, such as the toolbox for automated delivery for the DHL Streetscooter and the real-time mapping project with Bosch Rexroth.

Prof. Trygve Thomessen, Managing Director of PPM Robotics AS, also presented ROSIN updates with the ROSWELD project, an application and success story of ROS being deployed in heavy industrial applications such as robotic welding. Last but not least, we had Andrei Kholodnyi, Principal Technologist at Wind River, present on “A Mixed-Critical ROS2 Implementation on VxWorks RTOS, WRLinux & Hypervisor”, where he highlighted the use and importance of safety-compliant and real-time solutions for ROS2 applications.

A summarized table of all the speakers, including presentation slides and recording, is now available here!

To conclude this year’s ROS-Industrial Workshop Asia Pacific, Dr. Zhang Jing Bing, Technical Division Director for Smart Robotics and Automation (SRA) at ARTC gave his closing remarks.

The ROS-Industrial Consortium Asia Pacific @ ARTC continues with a multi-pronged approach to bridging the gaps between industry and the community in the adoption of ROS and robotics: working closely with our industry partners to develop modules that cater to industrial needs, and providing training opportunities for aspiring roboticists as well as companies embarking on leveraging ROS to scale their robotics adoption.

On behalf of the ROS-Industrial Team at ARTC, we hope that you enjoyed the webinar as much as we did, and we look forward to meeting each other in 2021 for future ROS-Industrial activities!


Perception-Based Region Selection for Human to Robot Collaboration

Background

Robotic systems that can collaborate with humans on the factory floor are in demand by the manufacturing community, but collaborative robotic solutions are still lacking in many respects. One such problem appears in quality control for subtractive manufacturing applications, such as sanding, grinding, and deburring, where material is removed from a part with an abrasive tool until a desired surface condition is obtained. In such a scenario, the quality of the finish can be assessed by an expert human operator, and it would therefore be very advantageous to leverage this expertise to guide semi-automated robotic systems to work on the regions that need further attention until the desired quality is achieved. Given this challenge, this research focused on enhanced human-robot collaboration by producing a capability that allows a human operator to guide the process by physically drawing a closed selection region on the part itself. This region is then sensed by a vision system coupled with an algorithmic solution to crop out sections of the nominal process toolpaths that fall outside the confines of this region.

Approach

Initially, a small dataset of hand-drawn closed-region images was produced in order to aid the initial development of the 2D contour detection method and its projection into 3D. These images were made with a dark marker on white paper lying on a flat surface and imaged with the Framos D435 camera. The 2D contour method that resulted from this dataset was implemented with the OpenCV open-source library and comprised the following filters/methods: grayscaling, thresholding, dilation, Canny edge detection, and contour finding. The output of this operation was the 2D pixel coordinates of the detected contours (Figures 1.a and 1.b).
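A minimal OpenCV sketch of that pipeline is shown below; the threshold values, kernel size, and input filename are illustrative assumptions rather than the tuned values used in the project.

```python
# Sketch of the 2D contour pipeline named above: grayscale, threshold, dilate,
# Canny edge detection, and contour finding. Parameter values are assumptions.
import cv2
import numpy as np

def detect_drawn_contours(bgr_image: np.ndarray):
    """Return pixel coordinates of contours drawn with a dark marker on a light part."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Dark marker on white paper: invert so the drawn line becomes foreground.
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(binary, kernel, iterations=2)   # close small gaps in the line
    edges = cv2.Canny(dilated, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return contours  # list of Nx1x2 arrays of (u, v) pixel coordinates

contours = detect_drawn_contours(cv2.imread("hand_drawn_region.png"))  # hypothetical file
```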

Figure 1a. Amoeba 2D detection

Figure 1b. Box 2D detection

The following stage used the 2D pixel coordinates to locate the corresponding 3D points in the point cloud associated with the image; this was possible because the 2D image and the point cloud were of the same size. Following that, some additional filters were applied, and adjacent lines were merged in order to form larger segments. In the final steps, the segments were classified as open or closed contours and normal vectors were estimated. Results are shown in Figures 2.a and 2.b. Additional datasets were collected with varying conditions such as thicker and thinner lines, curved surfaces, and multiple images containing parts of the same closed contour. These datasets allowed refining the method and addressing corner cases that emerged under more challenging conditions, such as regions spanning multiple images (Figures 3.a, 3.b, 3.c).
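Because the color image and the point cloud are organized with the same width and height, the 2D-to-3D lookup can be as simple as indexing the cloud with the contour's pixel coordinates. The sketch below assumes the cloud is an HxWx3 array of XYZ values aligned with the image; it is illustrative, not the project's implementation.

```python
# Sketch: look up 3D points for detected 2D contour pixels in an organized point
# cloud that has the same width and height as the image (assumed HxWx3 XYZ array).
import numpy as np

def contour_pixels_to_3d(contour_uv: np.ndarray, organized_cloud: np.ndarray) -> np.ndarray:
    """contour_uv: Nx2 array of (u, v) pixel coordinates; returns Mx3 XYZ points."""
    u = contour_uv[:, 0].astype(int)
    v = contour_uv[:, 1].astype(int)
    points = organized_cloud[v, u, :]          # row index = v (image y), column = u (image x)
    valid = np.isfinite(points).all(axis=1)    # drop pixels with no depth return (NaNs)
    return points[valid]
```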

Figure 2a. Triangle region detection

Figure 2b. Amoeba region detection

Figure 3a. Box multi-image 2D contour

Figure 3b. Box multi-image 2D contour

Figure 3c. Box multi-image region detection

Accomplishments

This research led to the creation of an open-source C++ library that can be used to detect hand-drawn regions for similar human-robot collaboration needs. The repository can be found here: https://github.com/swri-robotics/Region-Detection.

Furthermore, the work was featured as part of a recent ARM Institute project, with Spirit AeroSystems as the prime investigator, called Collaborative Robotic Sanding. An excerpt of the demonstration video highlighting the region detection is included below.

Collaborative Robot Sanding with ROS2 for Aerospace Applications

Starting in mid-2019, a project led by Spirit AeroSystems and funded by the ARM Institute kicked off around an idea to develop a complete collaborative robotic sanding application. The goal was to have the robot do the 80% of the work that is repetitive sanding, while the process experts performed the detailed work, oversaw the robot’s output, and identified areas that needed additional processing. The objective was to find an effective balance between the benefits of automation and highly skilled manufacturing personnel.

This effort involved multiple organizations; the Southwest Research Institute (SwRI) team, led by Jorge Nicho, sought to leverage a number of the emerging developments around ROS2 and create a complete, functional application in ROS2 that could be a stake in the ground for how to build industrial applications in ROS2. It had been noted at a prior ARM Institute meeting that the stakeholders were interested in the development and maturation of ROS2, so this project became a great opportunity to step forward and build an application in ROS2.

In parallel with this project, the SwRI/ROS-I Americas team also provided ROS-Industrial training and assisted the Wichita State University partner in identifying suitable curriculum elements around ROS to be incorporated into their academic programs. A key outcome would be the pilot of a ROS-based introduction to advanced robotic systems offered as part of the technical program at Wichita State. Such a program would assist in developing technician skills relevant to working with ROS-based systems.

New Capabilities Developed within the Program

As far as the application development, two new features needed to be developed. The first was the ability for the robot to apply a constant force and tool speed during execution of the sanding process trajectories. The selected robot, the Universal Robots UR10e, has a force-feedback feature, but simultaneous control of constant force while executing a trajectory at a constant velocity was not readily available out of the box. A recent blog post over at the SwRI Innovations in Automation blog details that specific development.
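Conceptually, holding a constant contact force while traversing a path at a fixed speed can be thought of as an outer force-regulation loop wrapped around position servoing. The sketch below is only a conceptual illustration with a hypothetical robot interface and made-up gains; it is not the UR10e implementation described in the SwRI blog post.

```python
# Conceptual sketch only: regulate contact force along the tool's normal while
# stepping along a sanding path at a fixed feed rate. `robot` and `path_points`
# are hypothetical interfaces, and all gains/targets are assumed values.
def run_constant_force_pass(robot, path_points, target_force_n=20.0,
                            feed_rate_m_s=0.05, gain_m_per_n=1e-4, dt=0.01):
    for point in path_points:
        measured_n = robot.get_normal_force()            # contact force from wrist F/T sensor
        force_error = target_force_n - measured_n
        # Push slightly deeper (or back off) along the surface normal to hold the force.
        corrected = point.offset_along_normal(gain_m_per_n * force_error)
        robot.servo_to(corrected, speed=feed_rate_m_s, duration=dt)
```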

Second, in order to enable human-to-robot collaboration, it was desired that the operators could mark directly on the part to indicate an area that needed additional processing. The system would then recognize the marks and only process within the marked regions. More details on these two new developments are forthcoming.

Figure 1. Gazebo view of the Collaborative Robotic Sanding application

The ROS2 application leveraged Gazebo, Figure 1, to allow for richer emulation of the depth sensors and robot trajectory execution. This aided with development and verification of the localization technique, as the parts could be easily located within the working envelope of the simulated system. The ros_scxml package was utilized for creation of the application state machine, Figure 2, simplifying the state machine creation process, and allowing for more efficient updating as the application matured.

Figure 2. SCXML node diagram

Other features, such as the ability to assess reachability in the context of the part, were included; the output can be seen in the figure below.

Figure 3. Reachability assessment within the application GUI

The development resulted in an application, complete with a graphical user interface (Figure 4), integrated at WSU (Wichita State University) in collaboration with Spirit AeroSystems. Post-integration, the system was tested against the requirements for successful sanding of the part in TRL 6 testing trials. Initially there were some performance limitations. First, when using the manipulator to apply force, it was found that force-application performance degraded as the manipulator extended into a near-singularity configuration. This impacts which regions may be effectively processed, since for optimal force application and execution there is a relationship between the efficiency of the force response and the configuration of the arm, e.g. the arm fully extended outward from the base attempting to process a distant surface area.

Figure 4. Application GUI

Bridging ROS2 with ROS1-Based Manipulator

Additionally, the manipulator currently only has a ROS1 interface. Since the application was built in ROS2, the team had to leverage the ROS1-to-ROS2 bridge. While this works to drive basic functionality, there were issues with the reliability of the communication between the application and the manipulator. This has been established as an area for continuous improvement, and an effort to support the development of a ROS2 interface for Universal Robots is underway, with support from PickNik, Spirit AeroSystems, TU Delft, FZI Research, SwRI, and of course Universal Robots.
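For reference, a minimal ROS2 Python launch file that starts the dynamic bridge might look like the sketch below. It assumes both ROS1 and ROS2 environments are sourced and the ros1_bridge package is installed; it is not the project's actual launch configuration.

```python
# Minimal sketch: start the ROS1<->ROS2 dynamic bridge from a ROS2 launch file so
# ROS2 application nodes can reach a ROS1-only driver. Illustrative only.
from launch import LaunchDescription
from launch.actions import ExecuteProcess

def generate_launch_description():
    return LaunchDescription([
        ExecuteProcess(
            # Bridge every topic that has matching publishers/subscribers on both sides.
            cmd=['ros2', 'run', 'ros1_bridge', 'dynamic_bridge', '--bridge-all-topics'],
            output='screen'),
    ])
```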

Overall, this was a successful program, and the key metrics for the top-level program were realized once it was understood how to best leverage the solution as developed. A final demonstration video, included below, was produced by the Spirit AeroSystems team. We believe the software side of the application will be a complete example of a ROS2-based industrial application, and we hope the community will find it of interest. The application repository may currently be found at: https://github.com/swri-robotics/collaborative-robotic-sanding

Thanks to our partners at Spirit AeroSystems and Wichita State University for their collaboration, feedback, and partnership on this project. Thanks to the ARM Institute as well for their support.

First Impressions Using Tesseract Ignition

I have a robotics application where I am using an ABB robot with a fairly simple environment and no complex motions expected. I decided this would be a good candidate for testing the new Tesseract Ignition tools and the new Tesseract command language feature that is currently in development on GitHub alongside the main master branch of Tesseract (TrajOpt and Descartes are also being updated to work with these new changes). After installing the Tesseract Ignition tools, I was able to load our URDF, which adds a simple end effector to a standard ABB IRB 2400. The Tesseract Setup Wizard allowed me to easily generate an allowed collision matrix with a single click after loading my URDF, and the kinematic groups tool was also very easy to use to add my desired kinematic chain for the motion planner. From here I had an SRDF configuration file that would be usable in the Tesseract motion planning environment.


In software, I was then able to extend the new Tesseract Command Language planning server class and add my own planning profiles for freespace, transition, and raster motions, which allow for parallel processing of motion plans in a variety of pre-made taskflow structures. Once finished, I could easily load in a toolpath and quickly generate paths using the new planning server. The new Tesseract Ignition Visualization tool allowed me to visualize all the target waypoints and the robot motion. (Note: I had to build Tesseract Ignition from source to get the visualization to work for now.)


Overall this integration process has been user friendly and allows me to spend less time having to worry about the details of the motion planning process.

Editor's Note: Tyler Marr is a Research Engineer at Southwest Research Institute and has been developing and deploying ROS-based systems involving autonomous motion planning during his time at SwRI.

Tesseract Setup Wizard Leveraging Ignition

Southwest Research Institute (SwRI) is excited to announce that it has adopted Ignition Robotics Software as the visualization tool set for the Tesseract Motion Planning Framework. Ignition Robotics Software provides a modular set of libraries related to rendering, GUI, math, physics and much more.

Over the past few years SwRI has received several inquiries related to richer visualization, simulation, and ease-of-use tools for industrial automation applications, allowing a user without programming experience to perform tasks that leverage the advanced capabilities provided by ROS. The goal was to first start with something simple that would add value to the open-source community, so we chose to begin by developing a setup wizard for the Tesseract Motion Planning Framework; more details are provided further down.

If you are familiar with the current tools within ROS you may be asking yourself why we chose to leverage Ignition Robotics Software over something like RViz, RobotWeb Tools, etc. In my opinion, the Ignition Robotics Software is more user-experience focused, while the others are more developer focused for a specific platform. The Ignition GUI leverages Qt Quick, which provides several advantages over the legacy Qt Widgets. These advantages allow it to be used not only on a desktop but also on tablets and smartphones, along with multiple methods for web deployment, opening up the possibility of leveraging this tool similar to how you would use an industrial Human Machine Interface (HMI). In addition, Qt Quick allows for a cleaner separation of UI development from business logic, allowing faster development and integration.

Another aspect of the Ignition Robotics Software is its rendering capability, which provides not only Ogre but also Ogre2 and OptiX. Because of its plugin architecture it will most likely see support for other rendering libraries in the future. Lastly, an additional advantage is having direct access to physics provided by Ignition Physics for simulating various industrial processes like sanding, grinding, and painting in the future.

The other component of this exercise was to determine how to deploy the user tools. Since we are talking about deploying applications instead of libraries, which are mostly self-contained, it was key to utilize a method of deployment that allows these tools to be easily accessed by the user, with frequent improvements and support for early access to enable testing before making new features available. For this we have chosen to leverage Snapcraft and the Snap Store, provided by Canonical, for deploying these user-facing tools on Linux, and we are currently investigating using MSIX for deployment on Windows.

Before I move on to providing details on Tesseract Ignition, I would like to recognize two key individuals instrumental throughout the development and decision process. I would like to recognize Louise Poubel, from Open Robotics, for her support related to the Ignition Robotics Software packages, and Kyle Fazzari, from Canonical, for his support related to building and deploying this tool to the Snap Store. Thank you both for your time and guidance on this effort and I look forward to further collaboration.

Tesseract Ignition Overview: This package provides two applications: the first is the Tesseract Setup Wizard and the second is Tesseract Visualization, both outlined below; they can be downloaded from the Snap Store by clicking the Snap Store button below. Please see our video for a walk-through of these tools and how you may start leveraging them now.

  • Tesseract Setup Wizard
    • Loading a URDF and SRDF
    • Defining kinematic groups
    • Defining allowed collision matrix
    • Defining group states
    • Defining group tool center points
    • Defining group opw kinematics parameters
    • Saving SRDF
  • Tesseract Visualization
    • Trajectory Simulation
    • Tool Path Visualization
    • Marker Visualization

First Virtual ROS-Industrial Training focused on ROS2 in the Americas

SwRI recently hosted a training session for ROS-Industrial Americas consortium members, led by instructors Josh Langsfeld, David Merz, and Randall Kliman. As it was held in the middle of the COVID-19 pandemic, the traditional in-person format was replaced with a brand new virtual training method, using videoconferencing and virtual machines running in the cloud instead of students’ own laptops. The overall schedule was similar to previous training sessions, with two days focused on exercises designed to teach the basics of ROS and ROS-Industrial, followed by a more free-form lab day where the students could work on a longer exercise. The class was held in a Zoom meeting that ran all day long, enabling easy interaction between the instructors and students during both the presentations and the exercise times.

Training live on the Zoom feed, while development exercises took place on AWS EC2

To provide students with an Ubuntu environment, we opted for a new approach: we set up virtual machines using Amazon's Elastic Compute Cloud (EC2) service and asked the students to log in using a remote desktop protocol. We were pleased to work with Amazon in setting up this arrangement, and they helped by preparing an Ubuntu 18.04 base image with ROS Melodic preinstalled. This let us start up a whole set of virtual machine instances, one for each student, ready to go for training. These instances were made accessible from the public internet, so all students were able to log in directly using only an IP address and a provided key file. This approach turned out to be quite robust, and no one had any issues accessing the cloud instances. The use of EC2 virtual machines also enabled easy instructor-student interaction, as the instructors could log into the same instances and see exactly what the student was seeing. We used this to great effect along with Zoom breakout rooms to engage in one-on-one troubleshooting with the students, both guiding them on next steps to take and even controlling their machines to help with more complicated problems. Overall, the virtual training experience was quite smooth, and it is likely we will keep it as an option going forward, even when in-person trainings are able to resume.

And if attempting virtual training for the first time was not enough, this training also marked a milestone in the ongoing adoption of ROS2, as the basic material and exercises taught on the first day were updated to use ROS2, without assuming prior knowledge of ROS1. This first day covers the fundamental concepts of ROS packages, messages, topics, services, and parameters, all of which are fully functional and easily demonstrated in ROS2. As ROS-Industrial is in the middle of the transition, however, the full training is not yet available in ROS2. Instead, the second day, which teaches the basic concepts specific to ROS-Industrial, including URDFs, TF, and motion planning with MoveIt, was done in ROS1. We expect this transition to continue, and soon all of this material will be available in ROS2 as well, especially now that MoveIt2 is out and ready for use. Check back on the training website over the next few months to keep an eye out for updates! We'll be looking forward to additional training sessions and expanding what we can do with both ROS2 and the virtual training format.

Real-Time Trajectory Planning for Industrial Robots

Picture an industrial application where we want to do some work on large parts. The work is performed in a series of bays, and the parts to be processed are rolled into the bays on carts. The work bays typically have one or two people in them doing various manual tasks, or perhaps just passing through. However, one of the process tasks is difficult for a human to do. Maybe this task carries a risk of repetitive motion injuries, or maybe it requires reaching an area that is difficult to access from a standing position. It would be very desirable to automate this task using a robot, but there are some challenges that limit the applicability of traditional industrial automation:

  1. There are people in the work area. Maybe this is a small company with only a few bays, and they can’t permanently cordon off the entire bay with light gates and proximity sensors for an industrial robot. They need the robotic system to safely work alongside humans.

  2. Since the parts that need processing are just rolled in on carts, they are positioned inconsistently. Even the parts themselves may have variation that is not reflected in CAD data.

  3. The environment is dynamic and constantly changing. Carts are constantly being rolled in and out, and parts that are currently being processed could be bumped and shifted.

Collaborative robotic hardware does not address all the challenges with this type of application: we need collaborative software as well. Just as a human walking down a hall does not plan every step in advance and then close their eyes to execute the motion, our industrial automation systems need the ability to adapt and plan in an “online” manner. In our application above, the system needs to be constantly perceiving the environment and avoiding (and perhaps predicting the motion of) moving collision obstacles all while tracking the part that it is processing.

To this end, researchers at Southwest Research Institute have added online path planning capability to the ROS-Industrial ecosystem through updates to the Tesseract (https://github.com/ros-industrial-consortium/tesseract) motion planning framework and the TrajOpt (https://github.com/ros-industrial-consortium/trajopt_ros) trajectory optimization library. TrajOpt uses the Sequential Quadratic Programming (SQP) method for motion planning. This algorithm solves the highly nonlinear motion planning problem by repeatedly approximating it as a quadratic and solving the resulting Quadratic Program (QP) until the optimization converges to a local minimum. The online planning capability is implemented by continuously regenerating and solving the QP as the environment is updated with new information about the robot's surroundings.
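
To make the loop structure concrete, here is a deliberately tiny sketch of the receding-horizon idea, not the TrajOpt implementation: a one-dimensional "robot" re-solves a trivial quadratic subproblem with a linearized clearance constraint on every cycle as a simulated obstacle moves, and executes only the first step of each new solution. All names and numbers are illustrative.

```cpp
// Toy 1-D "online replanning" loop: each control cycle rebuilds and solves a
// tiny quadratic subproblem around the current state, then executes only the
// first step of the result. Purely illustrative; not the TrajOpt solver.
#include <algorithm>
#include <cstdio>

int main()
{
  const double target = 1.0;     // goal position
  const double margin = 0.15;    // required clearance from the obstacle
  const double max_step = 0.02;  // per-cycle motion limit (crude velocity bound)

  double x = 0.0;                // current robot position
  for (int cycle = 0; cycle < 120; ++cycle) {
    // 1. "Perceive" the environment: here the obstacle slowly drifts away.
    const double obstacle = 0.5 + 0.01 * cycle;

    // 2. Re-solve the (trivial) QP: minimize (x_new - target)^2 subject to a
    //    clearance constraint linearized on the side of the obstacle we occupy.
    double x_new = target;  // unconstrained minimizer of the quadratic cost
    if (x < obstacle) {
      x_new = std::min(x_new, obstacle - margin);
    } else {
      x_new = std::max(x_new, obstacle + margin);
    }

    // 3. Execute only the first step of the new plan, then repeat.
    x += std::clamp(x_new - x, -max_step, max_step);

    if (cycle % 20 == 0) {
      std::printf("cycle %3d  obstacle %.2f  position %.2f\n", cycle, obstacle, x);
    }
  }
  return 0;
}
```

In the real system, the quadratic subproblem spans the whole trajectory and all joints, and the environment update comes from live sensor data rather than a scripted obstacle.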

The examples below show the results. In these examples, a 6-DOF robot arm is underslung on a 2-DOF gantry. A red cylinder surrounds the robot, representing a boundary that the robot should keep away from humans and other unexpected obstacles. In the first example, the robot dynamically avoids a human that walks into its work area. The second example demonstrates the robot following a moving target pose. Both animations are displayed in real-time, and the system easily achieved a 1000Hz update rate for this example while running on a consumer desktop PC. A simplified version of these examples is available in the tesseract_ros repository (https://github.com/ros-industrial-consortium/tesseract_ros) so that you can run it yourself.

Dynamically avoiding a human entering working space

moving target pose

There is still a lot of work to be done before we could deploy this in an application like the one in our example. One remaining question is what level of on-the-fly flexibility is desirable for a given application. Does the robot have free rein to adapt its path anywhere within its joint limits, or do we constrain it to only deviate by some amount from a pre-planned trajectory? Creating a framework to represent this kind of high-level logic, as well as the infrastructure involved in execution, is the next step in the process. We look forward to deploying a system with this feature set on hardware using something similar to the joint_trajectory_controller within ros_control (http://wiki.ros.org/joint_trajectory_controller).

Editor's Note: This work and subsequent blog entry made possible through contributions by Levi Armstrong and Joseph Schornak. Thanks to Matthew Powelson for his contributions to ROS-Industrial during his time at Southwest Research Institute, we wish him the best on his next robotics adventure!

Lessons from a ROS2 Collaborative Industrial Scan-N-Plan Application

Contributed by Joseph Schornak, a Research Engineer at Southwest Research Institute's Manufacturing and Robotic Technologies Department


In early 2019 my team at Southwest Research Institute swore a solemn oath: any new robotic systems we develop will use ROS2. ROS Noetic will be the last ROS1 release, as the community shifts its focus to building and supporting new ROS2 releases. We agreed that it was crucial to commit early and get substantial hands-on experience with ROS2 well in advance of this transition.

Our first opportunity to apply this philosophy to a system intended for a production environment came in Spring 2019. We began a new project to develop a greenfield collaborative Scan-N-Plan system for an industrial client, which I will refer to here as Project Alpha. Several months ago, we completed and shipped Project Alpha, making it one of the first ROS2 systems to be deployed commercially.

The purpose of this article is to describe some of the discoveries made and lessons learned throughout this project, as we begin to apply this knowledge to the next generation of ROS2 systems.

Why is developing in ROS2 important?

There is a “chicken and egg” problem surrounding developing in ROS2. The most important part of ROS has been the lively and diverse package ecosystem, since the ability to bring in ready-to-ship packages supporting a wide variety of sensors and robots presents a huge advantage for roboticists. While the core rclcpp packages are fully-featured and robust, we need more ROS2 interface packages for sensors and robots commonly used in robotic applications. This gap presents a dilemma: potential users are discouraged from committing to ROS2 due to a lack of support for their hardware, which reduces the incentive for vendors to develop and support new ROS2 packages for their products.

In order to break this cycle, a critical mass of developers needs to commit to ROS2 and help populate the ecosystem. There are certainly benefits for early adopters: Intel’s family of RealSense RGB-D cameras had very early ROS2 support, and as a result, this camera has become a go-to 3D perception solution for ROS2 projects.

Integrating a Robot

We decided to build Project Alpha around the Universal Robots UR10e. Its reach, payload capacity, and collaborative capability satisfied our application-specific requirements. Additionally, we had experience integrating URs with prior projects, and we already had a few on hand in our lab. Fortuitously, the start of the project coincided with the beta release of the excellent Universal_Robots_ROS_Driver package, which has become our driver of choice.

However, there was a substantial immediate challenge: the UR robot driver was a ROS1 package, and we were developing a ROS2 system. There is very little ROS2 driver support for industrial robots, since developing new robot drivers requires significant specialized effort. We encourage the community to invest that effort and develop new ROS2 drivers for industrial robots.

For the time being, the ros1_bridge package was sufficient to relay joint_state topics and robot control services between the ROS1 and ROS2 networks. We also adapted a custom action bridge node to convey FollowJointTrajectory action goals from our ROS2 motion planning and execution nodes to the ROS1 UR driver node. With these pieces in place, we were ready to plan!

Writing New Nodes

While our robot was ready to move, there was no ROS2-native motion planning pipeline available. At the time, MoveIt2 was still in an alpha state and undergoing significant development. To address this gap, we decided to port our Tesseract motion planning library to ROS2. This effort resulted in three repositories: the ROS-independent Tesseract core repository, the ROS1 tesseract_ros repository, and its close ROS2 sibling, tesseract_ros2.

As we worked through the ROS2 port of Tesseract and created new system-specific ROS2 packages for Project Alpha, we found ourselves discovering a new set of best-practice approaches for ROS2. For example, there are two distinct approaches when creating a C++ ROS2 node:

Pass in a Node instance: Create a custom class that takes a generic Node object in its constructor. This is similar to how NodeHandle objects are used in ROS1. These classes are flexible: they can be wrapped in a thin executable as standalone nodes or included as one aspect of a more complex node. The core mechanisms of the class can be exposed both through C++ functions and through ROS2 services.

Extend the Node class: Create a class that inherits from and extends the base ROS2 Node class, adding application-specific member variables and functions. I get the impression that this is more in line with the design intent of ROS2, since key functionality like logging and time measurement is exposed through member functions of the Node class. This approach also exposes new capabilities unique to ROS2, like node lifecycle management.

Ultimately, we used both approaches. We found that the first strategy made it easier to directly port ROS1 nodes, so the nodes in the tesseract_ros2 package use this method. For the newly-developed Project Alpha nodes we used the second strategy, since we had much more freedom to design these new nodes from scratch to make the most of ROS2.
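
For illustration, here is a minimal sketch of both patterns. The class names, topic, and timer period are invented for this example and are not taken from Project Alpha or tesseract_ros2; it assumes a standard rclcpp setup.

```cpp
// Two common ways to structure a C++ ROS2 node, shown side by side.
// Names are made up for illustration.
#include <chrono>
#include <memory>
#include <string>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>

// Approach 1: take a generic Node in the constructor (similar to a ROS1 NodeHandle).
class StatusReporter
{
public:
  explicit StatusReporter(const rclcpp::Node::SharedPtr & node)
  : node_(node),
    pub_(node->create_publisher<std_msgs::msg::String>("status", 10))
  {}

  void report(const std::string & text)
  {
    std_msgs::msg::String msg;
    msg.data = text;
    pub_->publish(msg);
  }

private:
  rclcpp::Node::SharedPtr node_;  // kept so the class can use other node features later
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
};

// Approach 2: inherit from rclcpp::Node and add application-specific members.
class StatusNode : public rclcpp::Node
{
public:
  StatusNode()
  : rclcpp::Node("status_node"),
    pub_(create_publisher<std_msgs::msg::String>("status", 10))
  {
    // Logging and timers are available directly as member functions.
    timer_ = create_wall_timer(std::chrono::seconds(1), [this]() {
      std_msgs::msg::String msg;
      msg.data = "alive";
      pub_->publish(msg);
      RCLCPP_INFO(get_logger(), "published status");
    });
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<StatusNode>());  // spin the inheritance-based node
  rclcpp::shutdown();
  return 0;
}
```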

Working with DDS Middleware

The ROS2 DDS communication middleware layer represents a substantial improvement over the TCP/IP-based system used in previous ROS versions. ROS2 ships with a variety of RMW (ROS MiddleWare) implementations provided by several DDS vendors. Fortunately it is very straightforward to switch between the different RMW implementations: all that is required is to install the packages for the new RMW version, set the RMW_IMPLEMENTATION environment variable to specify the desired version, and rebuild any built-from-source packages in your workspace that provide message definitions.
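
Because a stale environment variable or an unrebuilt workspace can silently leave a node on the previous middleware, a quick runtime check can be useful. The snippet below is just a sketch using the rmw C API (not something shipped with Project Alpha); it prints which RMW implementation the process actually resolved to.

```cpp
// Print the RMW implementation this process is linked against at runtime.
// A handy sanity check after changing RMW_IMPLEMENTATION and rebuilding.
#include <cstdio>
#include <rmw/rmw.h>

int main()
{
  std::printf("Active RMW implementation: %s\n", rmw_get_implementation_identifier());
  return 0;
}
```

Running such a check before and after exporting RMW_IMPLEMENTATION (for example, rmw_cyclonedds_cpp) is an easy way to confirm the switch actually took effect.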

Surprisingly, the choice of which RMW implementation we used had a substantial effect on the performance of Project Alpha, although this did not become clear until relatively late in development.

At the beginning of the project we used FastRTPS, which was the default option for ROS2 Dashing. It worked well for our initial collection of nodes, but when we integrated the ROS driver for the UR10e robot we began experiencing dropped messages and higher latency. Our theory is that the high volume of messages produced by the UR10e's real-time control loop overwhelmed the RMW layer under its default settings. We began exploring alternatives.

Our next option was OpenSplice, which eliminated the issue of dropped messages with the UR10e. However, we discovered several new issues: nodes took several seconds to fully initialize and begin publishing messages, and topics advertised by newly-launched nodes would often not be visible to nodes that were already running. Project Alpha's nodes were designed to all launch together at system startup and stay alive the whole time the system was running, so we were able to work around this issue for some time.

When we updated Project Alpha to use ROS2 Eloquent, we decided to try out the newly-available CycloneDDS RMW implementation. We discovered that it was not susceptible to any of our previous issues: it allowed our nodes to launch quickly on startup, handled high-rate topics as well as large messages like high-resolution point clouds, and could also gracefully manage nodes arbitrarily joining and leaving the network. Project Alpha was shipped to the client configured to use CycloneDDS.

Conclusions

Project Alpha has been a success, and we have been able to leverage our ROS2 development experience to pursue new projects with confidence. We were able to answer some important questions:

Is it better to develop pure-ROS2 systems or hybrid ROS/ROS2 systems? It is preferable to develop and maintain an exclusively ROS2 system. Hybrid systems will be a reality for some time until the development of the ROS2 ecosystem can catch up.

What ROS2 release should be used? We consistently found that there were substantial benefits to using the latest ROS2 release, in the form of new features and functionality.

Is ROS2 "ready for industry?" Resoundingly, yes! Get out there and build ROS2 systems!

How to securely control your robot with ROS-Industrial

Trend Micro and Politecnico di Milano (Polimi) recently brought up a security issue with controlling industrial robots using ROS-Industrial drivers. We have worked quickly to describe a mitigation for the security problem they uncovered. It is actually quite simple: by following basic security guidelines on how to set up your network, you can eliminate the described security risk at the source. Here we show how to set up secure communication between your ROS PC and your industrial robot.

In ROS-Industrial, robots are connected to the ROS PC using so-called motion servers. These are programs, written in the OEM-specific programming language, that run on the robot controller; they receive target values (typically axis positions) from the robot's ROS driver and send back actual values as well as the robot status. The interface used for this communication differs from one robot OEM to another. The problem is that, as of now, robot OEMs do not provide a security layer or authentication methods for these interfaces, and no such measures can be added to the motion servers running on the robot controllers. It is therefore possible for intruders to attack the communication interface between the ROS-Industrial robot driver and the motion server running on the robot controller. Trend Micro and Polimi claim to have succeeded in sending motion commands to a robot controlled by a ROS-Industrial driver from another device connected to the same network as the controlled robot and the ROS-Industrial driver (Figure 1). This behavior can potentially be exploited by malicious network participants.

Figure 1. Typical setup of a ROS-Industrial robot driver and vulnerable communication

To minimize the risk of this attack vector on the interface between the device running ROS and the robot controller, the network needs to be set up correctly. The connection between the ROS PC and the robot controller must be isolated from any other networks the ROS PC is connected to. Figure 2 shows how to set this up correctly, so that a bad actor will have a hard time exploiting this vulnerability. Isolating the connection between the ROS PC and the robot controller means that if you want to connect your ROS PC to another network in a secure way, you will need two network cards: one is used to connect to the robot controller, and the other is used to connect to the outer network. Figure 3 shows an example of a vulnerable network setup that you should avoid at all costs.

Figure 2. Correct network setup to avoid security vulnerabilities

Currently, the vulnerability has only been tested with drivers for KUKA and ABB, but it could also be exploited with other industrial robot drivers. If you isolate the connection between the ROS PC and your robot controller but connect your ROS PC to a network with potentially malicious participants on another network card, we strongly recommend following the instructions at http://wiki.ros.org/Security and, if you use Ubuntu, the instructions provided by Canonical (https://ubuntu.com/engage/securing-ros-on-robotics-platforms-whitepaper) to ensure your ROS PC is protected.

Figure 3. Vulnerable network setup that should be avoided.

Hybrid Perception Systems for Process Feature Detection

In recent years, Southwest Research Institute has used ROS-Industrial tools to integrate numerous Scan-N-Plan surface finishing applications. A typical application involves reconstructing the environment, generating raster paths on some unwieldy curved surface, and then executing that path using an industrial manipulator carrying a process tool such as a sander or spray nozzle. Automating these tasks often adds significant value, as they can be difficult and dangerous for humans to perform.

However, generalizing this concept beyond rasters on curved surfaces, industrial applications abound where some generic process is applied to some feature. Examples include grinding flashing from a steel casting, applying sealant to a seam, or smoothing a wrinkle in a composite layup. While each of these applications involves numerous complications, the first step is the same: some feature must be detected in 3D space.

Example of a flashing removal application where only part of the surface requires grinding

When thinking about how to detect these features, machine learning is often considered. While machine learning sometimes poses a potential solution to these problems, it can be plagued by a few issues. First, machine learning is progressing quickly: an algorithm developed in Caffe, Torch, or Theano only a few years ago may be difficult to maintain or deploy today and may need to be ported to TensorFlow or PyTorch, so any robotics system that uses these approaches should be flexible enough to use any of these frameworks. Second, while semantic segmentation of 2D images is a relatively mature field, performing these operations in 3D space is much newer, requiring more computationally expensive algorithms and annotation of 3D data. While these challenges are certainly known to the machine learning community, they have limited adoption in industrial applications, where dataset availability is limited and robotics integrators have day jobs that don't involve AI research.

Experimental Lab Setup for high mix welding application

To address these challenges, the ROS Industrial team at Southwest Research Institute proposes a hybrid approach to 3D perception systems wherein mature 2D detectors are integrated into a ROS 3D perception pipeline to detect process features and enable the flexibility to upgrade the detector without any modifications to the rest of the system.

The principle is simple. Often in industrial applications, 3D perception data (i.e., point clouds) comes from 3D depth cameras that provide not only depth information but also a 2D video stream (https://rosindustrial.org/3d-camera-survey). By using open-source ROS tools, we can run detection on the 2D video stream and project the detected features back onto the 3D data. We can then aggregate those detected features over the course of a scan to get a semantically labelled 3D mesh. This allows toolpaths to be generated on the mesh, informed by the detected process features. To evaluate this approach, an example high-mix fillet welding application was developed where each part was a set of tack-welded aluminum plates whose exact size and location were unknown.
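
The key mechanic is that an organized point cloud shares its row/column layout with the 2D image, so a per-pixel label mask from the 2D detector can be mapped onto the 3D points by index. The sketch below shows that idea in isolation with invented types and an invented label-to-color convention; the actual pipeline uses the point_cloud_segmentation node and ROS message types instead.

```cpp
// Minimal sketch: apply a per-pixel label mask from a 2D detector to an
// organized point cloud by re-colorizing the matching points. Types and the
// label-to-color mapping are invented for illustration.
#include <cstdint>
#include <vector>

struct Point { float x, y, z; uint8_t r, g, b; };

struct OrganizedCloud {
  int width = 0, height = 0;
  std::vector<Point> points;                 // row-major, width * height
  Point & at(int u, int v) { return points[v * width + u]; }
};

// mask[v * width + u] holds the class label for pixel (u, v); here 1 == "weldable".
void annotateCloud(OrganizedCloud & cloud, const std::vector<uint8_t> & mask)
{
  for (int v = 0; v < cloud.height; ++v) {
    for (int u = 0; u < cloud.width; ++u) {
      Point & p = cloud.at(u, v);
      if (mask[v * cloud.width + u] == 1) {  // mark the "weldable" class green
        p.r = 0; p.g = 255; p.b = 0;
      } else {                               // everything else is left red
        p.r = 255; p.g = 0; p.b = 0;
      }
    }
  }
}

int main()
{
  OrganizedCloud cloud;
  cloud.width = 2; cloud.height = 1;
  cloud.points = {{0.0f, 0.0f, 1.0f, 0, 0, 0}, {0.1f, 0.0f, 1.0f, 0, 0, 0}};
  annotateCloud(cloud, {1, 0});              // first pixel weldable, second not
  return 0;
}
```

Re-colorizing the cloud this way is what lets a standard tool like octomap_server aggregate the semantic labels without any knowledge of the detector.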

Left column shows 2D image and detected weld seam. Right column shows 3D mesh and aggregate 3D detected weld seam. Notice that the 2D detector was trained to avoid welding additional brackets and clamps.

The system makes use of open-source ROS tools and proceeds as follows. First, the camera driver provides a colorized point cloud to the TSDF node (from yak_ros, https://github.com/ros-industrial/yak_ros), which reconstructs the environment geometry. Simultaneously, a point cloud annotation node (https://github.com/swri-robotics/point_cloud_segmentation) extracts a pixel-aligned 2D image from the point cloud and sends it across a ROS service to an arbitrary 2D detector (in this case FCN8), which returns a mask with a label for each pixel in the image. These labels are then used to annotate the point cloud by re-colorizing it, which allows the results to be aggregated using the open-source octomap_server (http://wiki.ros.org/octomap_server). At the end of the scan, YAK provides a 3D mesh of the environment and octomap provides an octomap colorized with the semantic labels. The Tesseract (https://github.com/ros-industrial-consortium/tesseract) collision checking interfaces can then be used to detect the voxels associated with each mesh triangle, allowing the geometric mesh to be annotated with the semantic data. In this case, the areas not marked as "weldable" were removed, and a region-of-interest mesh was passed to the tool path planning application. The final architecture diagram is shown below.

Architecture Diagram of the components of the system

The result can be seen in the video below. While this approach has its limitations, overall it performed well. We look forward to deploying systems like this in the future to further explore its capabilities.

Currently the point cloud segmentation library is still under development, but we believe there is value in making the industrial open-source community aware of it now to gather additional insight and feedback. In the future we anticipate this migrating to the ROS-Industrial family of repositories.

Guest article: A Story of Autonomous Logistics

From rapid robot prototyping to pre-series robot production

a guest article by the Department of Autonomous Logistics of StreetScooter

The robotic delivery arm of Eva was self-constructed at StreetScooter

The vision of a generic toolbox to solve automated delivery challenges was born in the Department of Autonomous Logistics in 2016. ROS was chosen as the framework because it was quite popular among students in robotics and many suitable open-source modules had already been identified. At the same time, a ROS-based software stack for urban autonomous driving called Autoware was released as open source. This was a blessing for the young robotics team, since multiple components could later be adapted for Eva's Follow Me Delivery function. The fresh robotics engineers could learn from the experience gained with this stack without having a senior robotics expert on the team. With the test track next to the developers' office, a short iteration cycle was introduced to gather the knowledge needed.

Eva, Adam and Alfi for Follow Me and Autonomous Parcel Delivery

The first prototype was not Adam but Eva, constructed in partnership with TAS (the Institute for Autonomous Systems Technology of the Bundeswehr University) [1], PEM (Production Engineering of E-Mobility Components) of RWTH Aachen, and Beckhoff Automation. Eva was constructed to demonstrate autonomous parcel delivery.

After showing the first promising in-house developments in software and hardware design, two realistic use-cases for applying robotic technology to logistics were identified in 2017:

  1. Follow Me Delivery
  2. Autonomous Yard Logistics

Both developments were chosen based on the agile mindset of delivering benefits to the customer as early as possible. A maximum speed of 10 km/h was a promising entry point, since an emergency stop was possible under all circumstances. In the meantime, the use of open-source technology for robotics prototyping increased, since it strongly accelerated development. The reuse of components between both use cases and vehicle types was enabled by the modularity of the ROS framework [2]. Two StreetScooter Work vehicles, Adam and Alfi, have been equipped with the Follow Me Delivery system (Adam for rapid prototyping and Alfi to show the next steps in system design with a focus on industrialization).

Most systems were integrated into the rooftop of Alfi. In this way, the Follow Me Delivery system could be integrated into a series StreetScooter Work M vehicle.

Adam and the Demonstration of Follow Me Delivery

Nvidia invited StreetScooter to demonstrate the Follow Me Delivery function on a test track next to the NVIDIA GTC conference in Munich in 2017 [3]. The cooperation announcement at the conference and the immense press feedback revealed the potential of Follow Me Delivery. The software itself was a combination of multiple open-source ROS modules integrated into ROS's basic move_base navigation framework. The team showed that, by combining and adapting multiple ROS modules such as depth_clustering or osm_cartography from different organizations, the development of an autonomous vehicle is possible. Safety was provided by the low-level controller, which supervised the controllability of the system, in combination with a trained safety driver.

The ground was quite uneven, which led to false-positive obstacle detections on the ground.

Obelix for Prototyping in Autonomous Yard Logistics

In 2018, the first prototype system for autonomous yard logistics was installed on an electric Wiesel truck from KAMAG. At that time, the vehicle base itself was also a prototype. The first step was to automate the steering, to reduce the safety risks of the heavy truck with a maximum weight of about 26 tons; that way, the acceleration of the vehicle remained under the control of the safety driver. This level of automation generated a lot of interest at DHL, since the safety concept is much simpler and the system costs are lower in comparison to a fully autonomous system.

Many benefits, such as lower material wear, less noise, and simpler vehicle handling, remained. This worldwide-unique concept was named the Assisted Maneuvering and Positioning System (AMPS). Field-tested software and hardware solutions from Alfi were adapted to the new vehicle, Obelix. Based on the experience with the open-source packages used for Follow Me Delivery, a new in-house development was started. Some powerful packages like robot_localization or Google's Cartographer are still used in our software stacks today. Because of a planned in-field test at a DHL parcel center in Cologne in 2019, much more stringent requirements and quality management had to be introduced. LiDAR was chosen as the main sensor system because high-precision localization and detection, with an error margin smaller than 3 centimeters, is demanded in changing and demanding outdoor conditions.

Obelix at his daily mission on the test track of Avantis.

Snow tracks on Avantis in the LiDAR measurement. Even under those conditions, the system needed to adhere to its requirements.

Asterix at the parcel centers of Cologne and Hanover

In 2019, Asterix and the AMPS system were deployed to the Eifeltor parcel center near Cologne. Operating the new electric Wiesel vehicle from KAMAG in combination with AMPS was possible after minor adaptations. Container loading worked right from the start, but the loading dock approach was not precise enough under all circumstances. Goal poses of the docks were defined only by mapped GNSS measurements, and even with a high-end localization system, metal objects and walls around Asterix disturbed the radio signals of the satellites. These experiences led to the development of an active loading dock detection that is fused with the global goal pose. After three months of daily operation, Asterix and the AMPS system received very positive feedback from evaluations with multiple DHL test drivers. The fenced area of the parcel center was an ideal use case to gather first experience with a robotic transportation system in mixed traffic. Afterward, Asterix was also successfully tested at the DHL freight section of a newly constructed parcel center without any adaptations of the system or environment [3]. The open-source package MARV Robotics supported us in creating a bagfile database.

Datasets from the parcel center operation tests were crucial for further development of AMPS and higher levels of autonomy.

LiDAR data of multiple containers and loading docks on the parcel center. Loading docks and containers can be detected with sufficient accuracy without artificial landmarks.

Simba, Asterix and Columbus for Industrialization of AMPS

Based on customer feedback, data analysis methods, and advanced robotic component test benches, a pre-series version of AMPS is in development. The design focuses on increasing adaptability so that multiple vehicle types can be supported. Drivers at the DHL parcel centers will evaluate the system in daily operation; they will be supported by developers on demand when a driver activates remote access to the AMPS system. Precise absolute and relative localization is required for precise maneuvering of the vehicle. The GNSS and IMU systems used in the prototyping phase were too expensive and nontransparent, so in-house hardware and software designs were created based on state-of-the-art electronic components. The resulting system is called Columbus.

Simba, the virtual vehicle, became quite popular since most developers have been working remotely during the COVID-19 pandemic. The continuous integration testing framework runs multiple scenarios on a virtual parcel center. Since the LiDAR sensors can be simulated in Gazebo, the complete software stack is tested closed-loop, so most errors in the software can be detected at this stage, before deploying to the vehicle. In this way the validation of the software components is done in an automated and reproducible way with every new release.

Vehicle, parcel center, LiDAR and containers are simulated in detail for the test bench.

Idefix, our hardware-in-the-loop test bench, is going to be refactored with industrial-grade hardware. Software and hardware integration aspects such as networking can be analyzed before working directly in the car. In combination with our virtual vehicle, we created a virtual driver seat to drive AMPS inside the simulation on the actual hardware. Asterix is also being used to evaluate ROS2, industrial-grade middlewares, and operating systems. This way, challenges in integrating new software frameworks on the target hardware are identified in an early development phase.

Idefix gets new dresses. Mock-up designs in the simulation allow us to evaluate new functionality with our customer at an early stage.

Using MoveIt2 in an Industrial Open-Source Application

The Motion Planning Framework for ROS known as MoveIt has been successfully used in numerous industrial and research applications where complex collision-free robot motions are needed to complete manipulation tasks. In recent months, a great deal of effort has gone into migrating MoveIt to ROS2, and as a result the new MoveIt2 framework already provides access to many of the core features and functionality available in its predecessor. While some of the very useful setup tools are still a work in progress (mainly the MoveIt Setup Assistant), I was able to integrate MoveIt2 into the Collaborative Robotic Sanding (CRS) application in order to plan trajectories, which were then executed on a Gazebo-simulated UR10 robot arm.

My ROS2 setup involved building the MoveIt2 repository from source as described on GitHub and then overlaying that colcon workspace on top of my existing CRS application workspace. I also built and ran the simple demo, which worked right out of the gate and was very helpful in understanding how to integrate MoveIt2 into my own application.

The C++ integration was very straightforward and only required the use of two new classes, MoveItCpp and PlanningComponent. In this architecture, MoveItCpp is used to load the robot model, configure the planning pipeline from ROS2 parameters, and initialize defaults; the PlanningComponent class is associated with a planning group and is used to set up the motion plan request and call the low-level planner. Furthermore, the PlanningComponent class has an interface similar to the familiar MoveGroupInterface class from MoveIt; however, one of the big changes here is that the methods in the PlanningComponent class aren't just wrappers around the various services and actions provided by the move_group node: they instead make direct function calls to the various motion planning capabilities. I think this is a welcome change, since this architecture will allow creating MoveIt2 planning configurations on the fly that can adapt to varying planning situations that may arise in an application.
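
As a rough sketch of how these two classes fit together, the example below plans to a single pose goal. It is based on the early MoveIt2 demos from this period rather than the CRS application itself; the group name, link names, and the exact headers and signatures are assumptions and may differ between MoveIt2 releases.

```cpp
// Rough sketch of planning with MoveItCpp and PlanningComponent in ROS2.
// Group, link, and node names are illustrative; parameters (robot_description,
// planning pipelines, etc.) are assumed to be supplied via the node's YAML files.
#include <memory>
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <moveit/moveit_cpp/moveit_cpp.h>
#include <moveit/moveit_cpp/planning_component.h>
#include <rclcpp/rclcpp.hpp>

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("motion_planning_demo",
    rclcpp::NodeOptions().automatically_declare_parameters_from_overrides(true));

  // MoveItCpp loads the robot model and planning pipelines from the node's parameters.
  auto moveit_cpp = std::make_shared<moveit_cpp::MoveItCpp>(node);

  // PlanningComponent is tied to one planning group and holds the plan request.
  moveit_cpp::PlanningComponent arm("manipulator", moveit_cpp);

  geometry_msgs::msg::PoseStamped goal;
  goal.header.frame_id = "base_link";
  goal.pose.position.x = 0.4;
  goal.pose.position.z = 0.6;
  goal.pose.orientation.w = 1.0;

  arm.setStartStateToCurrentState();
  arm.setGoal(goal, "tool0");        // goal pose for the named end-effector link

  // plan() calls the configured pipeline directly instead of going through move_group.
  auto solution = arm.plan();
  if (solution) {
    RCLCPP_INFO(node->get_logger(), "Planning succeeded, executing trajectory");
    arm.execute();                   // hand the trajectory off for execution
  } else {
    RCLCPP_ERROR(node->get_logger(), "Planning failed");
  }

  rclcpp::shutdown();
  return 0;
}
```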

On the other hand, the launch/YAML integration wasn't as clean, as many ROS2 concepts were still relatively new to me. In order to properly configure MoveIt2, it is necessary to load a URDF file as well as a number of parameters residing in several YAML files into your MoveIt2 application. Fortunately, most of the YAML files generated by the MoveIt Setup Assistant from the original MoveIt can be used with just minor modifications, so I ran the Setup Assistant in ROS1 and generated the needed config files. Furthermore, the ability to assemble ROS2 launch files in Python really came in handy here, as it allowed me to load a YAML file into a Python dictionary and pass its elements as parameters to my ROS2 application. Beyond learning about MoveIt2, going through this exercise showed me how to reuse the same YAML file for initializing parameters in different applications, which I had thought was a feature no longer available in ROS2.

My overall impression of MoveIt2 was very positive. I feel that the architectural changes aren't at all disruptive to existing MoveIt developers, and they will lead to new and interesting ways in which the framework gets used; I certainly look forward to the porting of other very useful MoveIt components. The branch of the project that integrates MoveIt2 can be found here, and below is a short clip of the planning I was able to do with it. In this application, the robot has to move the camera to three scan positions, so MoveIt2 is used to plan collision-free motions to those positions.

Building Out a ROS2 Mobile Scan-N-Plan Demonstration

As part of the ROS-Industrial Consortium Americas 2020 Annual Meeting, SwRI demonstrated a mobile robot application that bridges ROS2 with ROS1 drivers. The exercise refined our capabilities with ROS2 systems, and many lessons learned along the way will inform later work on collaboration between mobile bases and industrial manipulators.

ROS2 Mobile Scan-N-Plan Node Diagram

Our demonstration leveraged the Clearpath Ridgeback and a KUKA iiwa. Based on previous work with the iiwa, we chose to use the ROS1 driver available for the robot. To integrate this into the system, both the input and output of the robot had to be bridged to the ROS2 system. This was achieved with a modified version of the ROS action bridge, which enabled joint command actions to be streamed to the robot, and a standard implementation of the ROS message bridge to retrieve telemetry information from the iiwa. With the support of Clearpath, we updated the OS of the Ridgeback to support ROS Melodic, an important stepping stone toward future applications bridging the mobile base to a greater ROS2 system.

ROS-I Americas Annual Meeting attendees ask questions after observing demonstration

The demonstration itself was an expansion of the normal Scan-N-Plan framework. The mobile platform was driven to a position of interest; in this case, an aircraft fuselage segment. A scan path, generated in the robot frame, was executed and fed into YAK to create a model of the target. Once the scan was complete, alignment and tool path generation were done using a hybrid perception tool to remove masked features. Hybrid perception leverages machine learning classification on 2D data sets overlaid on 3D data, where both are available from the perception sensor setup, in this case an Intel® RealSense™ D435.

These tool paths were then streamed to the robot in the same manner as the scan path. However, our end effector (in this case, an array of lasers) would toggle on and off based on the segmentation described above, to protect regions of interest.

We hope all present enjoyed this demonstration. We look forward to applying the lessons learned here to future work with mobile robots!

Recapping the 2020 ROS-I Americas Annual Meeting

The 8th ROS-Industrial Consortium Americas Annual Meeting was held March 4-5, 2020, at Southwest Research Institute in San Antonio, Texas. It reminded us both of how far we have come and of how much there is still to be done. As is the tradition for this event, the first day was open to the public, while day two focused on the membership and the mission of the consortium, and what we can do as a body to further leverage open source for industry.

For those who could not attend due to COVID-19 travel restrictions, we offered video conferencing. In some cases, we had to adjust the speaking schedule to accommodate for remote presentations. All of the Day 1 presentations and videos can be found on the event page, while all the Day 2 content is at the ROS-I Consortium Member Portal.

Day 1 – Overview, Panel & Tours

Day one kicked off with an introduction by SwRI's Paul Evans, followed by a Consortium overview by Matt Robinson and a more technically focused talk on deploying ROS for industrial automation by SwRI's ROS-Industrial Americas Technical Lead Levi Armstrong. This talk highlighted recent production deployments, addressed recent challenges, and discussed the development of novel capabilities for the end-user site.

The morning continued with a talk by Roger Barga, GM of AWS Robotics, on the role of the cloud in the future of robotics. Roger highlighted practical applications of cloud computing for robotics application development, testing, and deployment support. Currently AWS supports ROS, ROS2, and Gazebo within its services, and provides additional tools for adding features to applications such as text or speech recognition and security. A case study featuring member Bastian Solutions was shared, and a call for multi-robot simulation use cases was put out to attendees.

Alex Shikany, VP of Membership & Business Intelligence at A3, spoke on the role of robotics in employment. Here the point was made that in times of investment in automation, unemployment also declines, indicating that for the U.S., investment in automation coincides with increased hiring. Essentially, though there is an increase in the amount of work being automated, this doesn't manifest in fewer jobs; the jobs evolve.

A panel discussion featuring Joshua Heppner of Intel, Lou Amadio from Microsoft, and Roger Barga from AWS fielded questions relating to cloud, sustainability, education, and areas for growth. The Americas Consortium has evolved in recent years with more engagement from the tech industry, as they seek to understand industry needs and how ROS and related applications can scale in robust, sustained ways.

To wrap up day one, Shaun Edwards and Paul Hvass of Plus One Robotics shared a perspective from the founders of ROS-Industrial. They challenged the community to think about breaking down barriers with a simple question: “Where is the Download Button?”

The afternoon featured tours and demonstrations within the SwRI organization and in collaboration with partners. SwRI demonstrations included a ROS2 Mobile Scan-N-Plan that featured localization, reconstruction, machine learning-based segmentation, tool trajectory planning, and optimization-based free space motion planning. Other demonstrations included SLED and SwRI's Ranger for autonomous vehicle localization, as well as work in single-camera markerless motion capture by the SwRI Human Performance Initiative, which has broad applicability in manufacturing robotics. MathWorks demonstrated some of the latest features related to ROS and Simulink, including safety, and UT Austin demonstrated improved ease of use in complex systems for inspection and handling tasks in hazardous environments.

The tour and demonstrations included off-campus visits to Xyrec and Plus One Robotics. Shaun Edwards and Paul Hvass offered tours and a behind-the-scenes perspective on how their business has grown and on the need to provide their customers a means to try and test before deciding to implement. Xyrec visitors were treated to a firsthand overview of the largest mobile robot they have produced, designed to apply laser ablation surface treatments to commercial aircraft.

Day one concluded over dinner and networking. It was an excellent opportunity to chat with key partners, stakeholders, and new faces about the landscape of advanced robotics, and how we are seeing more rapid progression from academia to factory floor.

Day 2 – Membership Highlights

Day two, the member-only portion, is a forum for members to share what they are working on and the challenges they face, and to seek out areas and partners for collaboration. The day started with Consortium updates by region, beginning with the Americas and then progressing through the EU and Asia-Pacific. Each region has shown progress in delivering more content and training related to ROS2. The EU Consortium shared updates and outcomes of the ROSin initiative, and Asia-Pacific discussed the Singapore-led healthcare initiative, the Robotics Middleware Framework (RMF), which seeks to disrupt how healthcare adopts both IoT and robotics.

A panel discussed why end-users are looking to advanced robotics solutions via a ROS-based approach, and some of the motivations and challenges these approaches face today. David Leeson of Glidewell Labs, Greg Balandran of Spirit AeroSystems, and Mitch Pryor of the UT Austin Nuclear Robotics Group shared anecdotes and practical examples where there are no solutions in the traditional industrial world. They also discussed challenges their organizations face and shared how an open-source model enables a more efficient spread of emerging capability, hopefully preventing silo formation among a select few solution providers.

From here, Dave Coleman of PickNik shared recent developments around MoveIt2, including capabilities around real-time path planning. Open Robotics Developer Advocate Katherine Scott then shared developments and opportunities to collaborate around ROS2 and the curation of the ROS ecosystem.

The keynote for the member day of the annual meeting featured Greg Balandran, R&T Engineered Factory Automation Manager at Spirit AeroSystems, who discussed ROS-I's influence on automation strategy. Greg shared challenges that his organization has faced and how leveraging ROS and working within ROS-I helped accelerate their automation development and roadmap execution.

The day two afternoon kicked off with a workshop where members broke into groups to discuss elements of the vision for ROS-I, needs, and the areas where membership can focus for advancement. The key takeaways from this workshop were:

  • There is a huge desire for richer industrial-centric training and education resources, beyond the current instructor-led model. These could include cloud-based exercises of the current topics, as well as new topics such as navigation for industrial-centric applications.
  • There was a great discussion around additional vehicles to drive investment into ROS-I and to accelerate the capabilities of interest. This included collaboration with other organizations that fund mid-TRL research as well as other funding vehicles that foster small business engagement and participation.
  • Numerous topics focused on moving ROS-I into a product or something tangible that an end-user can develop an application with tools they have existing familiarity with, such as CAD environments and the physics tools that they currently leverage.
  • A detailed listing of capabilities that are desired to enable member applications, including real-time process planning, high-resolution SLAM, dynamic models, error handling and recovery at the system level, and many others.

This feedback enables the Consortium to update and prioritize collaborations, training topics, proposed project topics, and more, to ensure the focus is appropriate relative to the needs of the membership and the broader ROS-Industrial community.

The remainder of the day focused on member presentations that shared recent developments and contributions. Lou Amadio of Microsoft kicked it off with a presentation on recent developments relative to ROS and Windows, Azure, and Visual Studio. Arnie Kravitz, CTO of the ARM Institute, discussed how ROS and ROS-I play a key role in the ARM Institute’s vision in fostering advanced robot capability from the research domain, to be ready for production.

YJ Lim, Sr. Technical Product Manager of MathWorks, shared what the latest developments by MathWorks have meant for developing functional safety in ROS-enabled systems. NOV’s Justin Kinney, Technical Manager Robotics and Mechatronics, shared the end-user ROS journey, from “What is a ROS” to their own functional full-scale demonstration on a test bed, a powerful story that lays out what is possible by leveraging the resources available in the community and working with the other members to further your vision.

Wrapping up the day, John Wen of Rensselaer Polytechnic Institute and John Wason of Wason Technologies shared their collaborative ARM Institute project on Robot Raconteur, a middleware that enables easy integration of numerous types of hardware components and now has interoperability with ROS, and described how they are seeking to progress the work to enable richer industrial applications. Eugen Solowjow, Research Scientist at Siemens Corporate Technology, shared recent developments by Siemens that build a tighter connection between industry-standard technology, with the safety and reliability that is expected, and ROS-enabled components, along with the edge-AI capability that is currently emerging for advanced applications.

Moving Forward

It was a full two days, and though there remains uncertainty about how much in-person or physical development we can do, the amount of activity in the open-source community shows that a lot of collaborative software work can still happen, even during a challenging time when physical testing and development may have slowed. We hope that those who attended, whether in person or remotely, feel as motivated as we do to see how we can further ROS-I into the project and product we feel it can be. We look forward to stepping up to the challenges put forth by our membership and partners, and we hope you also look forward to joining us on that journey.