Teaching an Old Robot New Tricks

Robotics is increasingly present in our daily lives in one way or another. Although many who hear the word 'Robotics' think of humanoid-type robots or robotic arms used in industry, the reality is that robotics takes many forms and has many applications, from autonomous mobile robots (AMRs) to standard industrial robots. Robots range in size from as small as the palm of your hand to machines capable of reaching the top of an airplane.

XYREC laser coating removal robot (https://www.swri.org/industry/industrial-robotics-automation/blog/laser-coating-removal-robot-earns-2020-rd-100-award)

GAPTER (http://wiki.gaitech.hk/gapter/index.html)

Robots, and in particular industrial robots, are programmed to perform certain functions. The Robot Operating System (ROS) is a very popular framework that facilitates the asynchronous coordination between a robot and other drives and/or devices, and has been a go-to means to enable the development of advanced capability across the robotics sector.

How ROS-I extends ROS and ROS 2 to industrially relevant hardware and applications

Southwest Research Institute (SwRI) and the broader ROS-I community often develop applications in ROS 2, the successor to ROS 1. In many cases, particularly where legacy application code is utilized, bridging back to ROS 1 is still very common, and this is one of the challenges in supporting adoption of ROS for industry. This post does not aim to explain ROS, or the journey of migrating to ROS 2, in detail, but if you are interested in a reference, I invite you to read the blogs by my colleagues and our partners at Open Robotics/Open Source Robotics Foundation.

Giving an Old Robot a New Purpose

Robots have been manufactured since the 1950s, and, logically, newer versions arrive over time with better properties and performance than their ancestors. This is where the question comes in: how can you give new capability to those older but still functional robots? The question has become more important as the circular economy has gained momentum, along with the understanding that the carbon footprint of manufacturing a new robot can be offset by reusing a functional one. Each robot has its own capabilities and limitations, and those must be taken into account. Across the ROS-I community, the question of whether an old robot can be given new life comes up regularly, and this exact use case came up recently here at SwRI.

When an older Fanuc robot was acquired in the lab, it seemed a good candidate for a system that could demonstrate basic Scan-N-Plan capabilities in an easy-to-digest way, with a robot that would be constantly available for testing and demonstrations. The particular system was a demo unit from a former integration company and included an inverted Fanuc robot manufactured in 2008.

The demo envisioned for this system would be a basic Scan-N-Plan implementation that would locate and execute the cleaning of a mobile phone screen. Along the way, we encountered several obstacles which are described below.

Let's talk first about drivers: 'a driver is a software component that lets the operating system and a device communicate with each other'. Each robot has its own drivers to properly communicate with whatever is going to instruct it on how to move. The handling of drivers differs between a computer and a robot, because a computer's driver can be updated faster and more easily than a robot's. When device manufacturers identify errors, they create a driver update to correct them. On a computer you are notified when a new update is available; you accept the update and the computer starts updating. In the world of industrial robots, including the Fanuc in the lab here, you need to manually load the driver and the supporting software options onto the robot controller. Once the driver software and options are installed, a fair amount of testing is needed to understand what the changes you made have impacted elsewhere in the system. In certain situations you may receive your robot with the options needed to facilitate external system communication already installed, but it is always advised to check and confirm functionality.

With the passing of time, the robot will not communicate as fast as newer versions of the same model, so to obtain the best results you will want to update your communication drivers, if updates are available. The Fanuc robot comes with a controller that lets you operate it manually, via a teach pendant that is in the user's hand at all times, or set it to automatic, where it will do what it has been instructed via a simple cycle start, provided all safety systems are functional and in the proper state for the system to operate.

Rapid reporting of the robot state is very important for the computer's software (in this case our ROS application) to know where the robot is and whether it is performing the instructions correctly. This position is commonly known as the robot pose. For robotic arms the information can be separated into joint states, and your laptop will probably have an issue with an old robot reporting these joint states at a slower rate in auto mode than the ROS-based software on the computer expects. One way to solve this slow reporting is to update the drivers or to add the correct configuration for your program to your robot's controller, but that is not always possible or feasible.
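If you suspect this mismatch, a quick first check is to measure how fast joint states actually arrive. Below is a minimal sketch (assuming a ROS 2 setup and the conventional /joint_states topic) that logs the period between consecutive messages:

```python
# Minimal sketch (ROS 2 / rclpy): measure how fast /joint_states arrives.
# Assumes the driver publishes sensor_msgs/JointState on the usual topic name.
import time

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState


class JointStateRateMonitor(Node):
    def __init__(self):
        super().__init__('joint_state_rate_monitor')
        self._last_stamp = None
        self.create_subscription(JointState, '/joint_states', self._callback, 10)

    def _callback(self, msg):
        now = time.monotonic()
        if self._last_stamp is not None:
            period = now - self._last_stamp
            self.get_logger().info(f'joint_states period: {period * 1000:.1f} ms '
                                   f'(~{1.0 / period:.1f} Hz)')
        self._last_stamp = now


def main():
    rclpy.init()
    rclpy.spin(JointStateRateMonitor())


if __name__ == '__main__':
    main()
```

Run it while the robot is in auto mode and compare the observed rate against what your planning and execution stack assumes.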

Updated location of the RGB-D camera in the Fanuc cell

Another way to make the robot move as expected is to calibrate the robot with an RGB-D camera. To accomplish this, you must place the robot in a strategic position so that most of the robot is visible to the camera. Then view the camera's projection and compare it to the URDF, the file that represents the model of the robot in simulation. With both representations visible, in RViz for example, you can adjust the origin of the camera_link until the projection is aligned with the URDF.
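As a rough illustration of that iteration loop, the sketch below (ROS 2; the frame names and offsets are illustrative, not values from our cell) publishes a hand-tuned camera_link origin that can be nudged between runs until the camera's projection lines up with the URDF in RViz:

```python
# Sketch (ROS 2 / rclpy): publish a hand-tuned camera_link origin so the camera's
# point cloud can be compared against the URDF in RViz. Frame names and offsets
# here are illustrative; adjust them until the projection lines up with the model.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros.static_transform_broadcaster import StaticTransformBroadcaster


class CameraCalibrationTF(Node):
    def __init__(self):
        super().__init__('camera_calibration_tf')
        self._broadcaster = StaticTransformBroadcaster(self)
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = 'world'        # fixed frame of the cell (assumed)
        t.child_frame_id = 'camera_link'
        t.transform.translation.x = 1.20   # tweak these values between runs
        t.transform.translation.y = 0.00
        t.transform.translation.z = 2.05
        t.transform.rotation.w = 1.0       # identity rotation as a starting point
        self._broadcaster.sendTransform(t)


def main():
    rclpy.init()
    rclpy.spin(CameraCalibrationTF())


if __name__ == '__main__':
    main()
```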

For the Scan-N-Plan application, the RGB-D camera was originally mounted on part of the robot's end-effector, but when we encountered the joint state delay, the camera was moved to a strategic position on the roof of the robot's enclosure where it could view the base and the Fanuc robot for calibration to the simulation model, as can be seen in the photos below. In addition, we set the robot to manual mode, where the user needed to hold the controller and tell the robot to start the set of instructions given by the developed ROS-based Scan-N-Plan program.

 

Confirming views of camera-to-robot calibration

Where we landed and what I learned

We were able to successfully give this old robot a new purpose, and we have shared how to do it and how to solve some issues you could encounter. While not as easy as a project on "This Old House," you can teach an old robot new tricks. It is very important to know the control platform of your robot; a problem may lie not with your code but with the robot itself, so it is always good to make sure the robot and the associated controller and software work well, and then seek alternatives to enable new functionality within the constraints of your available hardware. Though not always efficient in getting to the solution, older robots can deliver value when you systematically design the approach and work within the constraints of your hardware, taking advantage of the tools available, in particular those in the ROS ecosystem.

Summary of ROS-Industrial Conference 2022

The 10th edition of the ROS-Industrial Conference took place on December 15-16, 2022 in Stuttgart, Germany and remotely. During the conference, 55 participants present in Stuttgart and an online audience of more than 200 people attended 17 talks across six sessions. The goal of the conference was to show and discuss what is currently possible in the ROS2 ecosystem when it comes to industrial applications.

Creating manipulation cells in ROS2 now has stronger tool support

In the first session, titled “Manipulation Workshop”, Ragesh Ramachandran (Fraunhofer IPA) explained how you can currently set up a robotic manipulation cell that is based on ROS2 and controlled via MoveIt2. The workshop showed that you can export a model of your cell from CAD and create a fully functional manipulation cell using off-the-shelf components such as the Universal Robots driver for ROS2. In the second talk of the session, Michal Milkowski (Universal Robots) presented the integration of URScript with ROS2.

Bosch uses ROS in product development

The second session of the conference focused on broader topics. Dr.-Ing. Werner Kraus (Fraunhofer IPA) reported the newest robot sales statistics and pointed out that robot solutions are a key factor in Europe staying economically relevant even with growing demographic issues. Dr. Ralph Lange (Bosch) explained in his talk how ROS and ROS2 are used at Bosch, a company that has been active in the ROS community since the beginning. ROS and ROS2 are used in many research and development departments at Bosch, and are also integrated into a small number of products. Maria Vergo (ARTC) spoke about the current progress of spreading ROS in the Asia Pacific region as well as recent and planned developments of ROS2 components by the ROS-Industrial Consortium Asia & Pacific. She highlighted the recently published packages Easy Perception Deployment and Easy Manipulation Deployment, which simplify the deployment of perception and manipulation in ROS2. Michael A. Ripperger (SwRI) talked about developments currently ongoing in the Americas region of the consortium. A major development there is the industry-friendly motion planner Tesseract, as well as a plugin for FreeCAD that will significantly simplify the integration of ROS and CAD models.

OpenRMF enables hardware-agnostic fleet management with building infrastructure integration

The next session was titled “Navigation and ROS2”. Marcel Schnizler (Pilz) explained in his talk how the PSENscan laser scanner can be used to easily build safe mobile robot systems using ROS or ROS2. The second presenter, Aaron Chong (Open Robotics), introduced OpenRMF to the European audience. OpenRMF is a manufacturer-agnostic fleet management system for robotics that is currently being developed by Open Robotics and others. Next to the fleet management capabilities, it also features integration with building infrastructure such as doors and elevators. The ecosystem currently supports more than 10 different robots for different kinds of purposes, and the system is deployed in multiple hospitals in Singapore at this time. Victor Mayoral-Vilches presented his start-up Acceleration Robotics, which aims at integrating hardware acceleration into the ROS2 ecosystem. The start-up is heavily involved in the development of new ROS2 features that simplify the usage of hardware acceleration such as GPUs and FPGAs. It has also developed a Robot Processing Unit that robot developers can use to leverage the newly developed hardware acceleration features.

Universal Robots and Yaskawa Motoman integrate ROS2 interfaces into their robot controllers

In the last session of the first day, “Manipulation and ROS2”, Rune Søe-Knudsen (Universal Robots) and Felix Exner (FZI) presented the latest updates to the Universal Robots ROS2 robot driver. Liana Bertoni and Davide Torielli (IIT) explained the capabilities of the ROS2 End-Effector framework that aims to facilitate integration, planning, and control of heterogeneous robotic end-effectors. Ted Miller (Yaskawa Motoman) introduced MotoROS2 and ongoing driver development for ROS2. The interesting thing about this driver is that it runs directly on the robot controller using micro-ROS. He announced that the driver will soon be published for public beta testing.

ROS2 fieldbus interfaces are becoming available

Day 2 of the conference started off with the session “ROS2 and Hardware Interfaces”. In the first talk of the session, Dr.-Ing. Denis Stogl (Stogl Robotics) explained the different features of ros2_control. For robot drivers and other hardware interfaces, ros2_control provides the tools to build hardware-agnostic interfaces, as well as an infrastructure for controllers and a collection of ready-to-use controllers for different purposes. Maciej Bednarczyk from ICube at Université de Strasbourg presented their EtherCAT stack, which leverages ros2_control to enable simple integration into ROS2. The stack is available as a v1.0.0 release. Christoph Hellmann Santos from Fraunhofer IPA presented ros2_canopen, a stack to integrate ROS2 with CANopen. The stack provides different interfaces to control CANopen devices from ROS2, one of them being ros2_control.

ROS2 is starting to be embedded into industrial devices such as PLCs

In the embedded platforms session, Andrei Kholodnyi from Wind River presented the integration of ROS2 with the Yocto framework. Yocto is a toolchain that enables companies to create tailor-made Linux distributions for their high-performance industrial computing products. The integration of ROS2 into the Yocto framework is achieved by the meta-layer “meta-ros”. The second talk of the session stressed the importance of Yocto for industrial equipment: Hermann Spies and Özkan Öztürk from Phoenix Contact presented work on integrating ROS2 with their PLCnext device series. The PLCnext devices use Yocto for their operating system. The ROS2 bridge is integrated into the PLCnext devices using Docker containers that run alongside the IEC 61131 execution environment directly on the PLC hardware.

Intrinsic to strongly support the ROS community

On the first day of the conference, it became known that Intrinsic, a company that plans to democratize robotics by providing simpler, hardware-agnostic robot software with AI integration, has bought parts of Open Robotics, the organisation behind the ROS ecosystem. Interestingly, Intrinsic is financed by Alphabet. In a short statement, Torsten Kröger, CTO at Intrinsic, and Brian Gerkey, CEO of Open Robotics, explained their plans and what the acquisition means for the ROS community. From their statement, it seems that the ROS ecosystem has gained a financially strong supporter in Intrinsic and will see a lot of work towards more industrial deployment in the coming years.

The conference has shown that the ROS-Industrial community has been quite active during the pandemic years, and we are looking toward a bright future for open source robotics.

ROSCon 2022 Rewind

ROSCon 2022 Group Photo

This October I was fortunate enough to attend ROSCon with fellow colleagues Jerry Towler and Michael Ripperger in beautiful Kyoto, Japan. By luck, the month-long trip I booked to Japan one year ago happened to line up with Japan's borders opening and the conference's location and dates. Now that I'm back in America and have my work and personal business back in order, I'd like to share my ROSCon 2022 experience.

With an attendance of approximately 800 ROS developers ranging from absolute beginners to seasoned industry and academia experts, there was something for everyone at ROSCon. The panels were particularly useful for better understanding the current state of ROS, ROS 2, future plans, and concerns of the community. I also found the presentations about integrating CANopen with ROS 2 and the development work on a ROS 2 simulator built on Unreal Engine 4 interesting. The full ROSCon program with videos of the presentations can be found here: https://roscon.ros.org/2022/.

Formant Robotics' robot dog being controlled via a Steam Deck roaming around the conference grounds during lunch.

The conference's exhibitors covered a healthy spectrum of the roles necessary to bring robots to the forefront of industry, ranging from companies that specialize in cutting-edge hardware for perception, navigation, and path planning, to companies that regularly deploy hundreds of robots to factory floors with cloud services. I also took this opportunity to visit with our ROS-I Asia Pacific counterparts in person at their booth, which featured an impressive demonstration of two robots working together to pick and place items into bins they jointly manipulated.

Overall, ROSCon was a great success and Open Robotics did a fantastic job putting everything together. We will most certainly make it to New Orleans for ROSCon 2023!

It's a wrap for ROS-Industrial Asia Pacific Workshop 2022!

After two years of running the ROS-Industrial Asia Pacific workshop digitally, this year's edition returned to an in-person format and was held from 9 to 10 Nov 2022. The ROS-AP team saw an overwhelming response, with tickets selling out early in the registration period. Over 100 attendees from the ROS community attended this 2-day event, with 15 esteemed speakers who are leaders in the business, industry, research, and education sectors. Prof Quek Tong Boon, CE of the National Robotics Programme Singapore, graced the event as Guest-of-Honour, and among the keynote speakers were Prof Selina Seah, Assistant CEO of Changi General Hospital, and Prof Giorgio Metta, Scientific Director of IIT.

The entire speaker lineup can be found here.

This workshop provided many insights into ROS's global development and adoption, particularly how ROS has become an essential component for large organisations to achieve their development goals in robotics, and how ROS has supported the early-stage growth of aspiring start-ups such as Lionsbot and Augmentus. Matthew Festo, the General Manager of Open Robotics Singapore, shared that there are more than $6 billion in known acquisitions of ROS-based companies and 2.2 million ROS-based installations and updates per month. With these staggering stats, ROS is making a huge global impact! Similarly, Mr Su Lian Jye, the Research Director of ABI Research, shared that ROS has greatly accelerated the development of robotics solutions, creating a ripple effect that drives higher demand for automated solutions across numerous industries. Lastly, there were multiple sessions on Open-RMF, a ROS-based interoperability framework that enables different fleets of robots to interoperate with infrastructure assets and building management systems. Prof Selina Seah and Dr Pongsak, General Manager of the Panasonic R&D Centre, shared different use cases deployed within a hospital environment. The demand for robotics in the service industry has dramatically increased, and the healthcare sector remains a forerunner in using robotics to enhance services and efficiencies.

The end of this workshop marks another successful year of the ROS-Industrial Asia Pacific team proliferating ROS and cultivating a robust, inclusive ROS ecosystem in Asia. As the effort intensifies, the ROS-Industrial Asia Pacific team will continue to grow and see a new leadership pairing of Maria Vergo and Tan Chang Tat taking the team forward into the next phase. Watch this space!

Darryl Lee, Consortium Manager

ROS-Industrial Asia Pacific

The approved slides and presentations are made available through the links below.

Taking ROS-Industrial and ROS 2 Training to Boston

In mid-October, my colleague, Lily Baye-Wallace, and I made the trip out to Analog Devices in Boston, Massachusetts to provide ROS 2 training to members of our ROS-Industrial community. Analog Devices, a full ROS-Industrial Consortium member, took advantage of the ability to host a ROS-I training event, bringing ROS-I training to the East Coast.

From ROS 2 basics and fundamentals to a new image processing pipeline exercise, our attendees were able to explore the many different aspects and capabilities of both ROS 2 and ROS-Industrial, as well as the beautiful historic city of Boston.

The Analog Devices “Analog Garage”

We had many attendees fly in from across the country this week (and even a few international ones!) to hear us talk about ROS and its importance to the robotics community around the world. Beginning with the basics of how to set up a ROS workspace, we led the students through various exercises designed to guide them through setting up a ROS environment and incorporating it into their projects. Other content included using MoveIt! for motion planning and how to create a URDF. Our recently updated advanced topic, Building a Perception Pipeline, taught the students how to use several tools within the Point Cloud Library (PCL) to filter and segment camera data for use with their robots.
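The exercise itself is written in C++ against PCL; purely to illustrate the same filter-then-segment flow, here is a rough Python equivalent using Open3D, which exposes analogous operations:

```python
# The training exercise uses PCL in C++; this sketch shows the analogous
# filter-then-segment flow using Open3D's Python API.
import open3d as o3d


def filter_and_segment(cloud_path: str):
    cloud = o3d.io.read_point_cloud(cloud_path)  # e.g. a saved camera snapshot

    # Downsample to reduce noise and computation, akin to PCL's VoxelGrid filter.
    down = cloud.voxel_down_sample(voxel_size=0.01)

    # Remove sparse outliers, akin to PCL's StatisticalOutlierRemoval.
    down, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Segment the dominant plane (e.g. the table), akin to PCL's SACSegmentation.
    plane_model, inliers = down.segment_plane(distance_threshold=0.01,
                                              ransac_n=3,
                                              num_iterations=1000)
    table = down.select_by_index(inliers)
    objects = down.select_by_index(inliers, invert=True)
    return table, objects
```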

Overall, this was a great way to engage with members of the robotics community and share some of our knowledge and expertise with them. All of our training material can be found on our ROS-Industrial Training wiki. We hope to see you at our next training event in February 2023 in San Antonio, TX!

Note: If you are interested in learning more about hosting a ROS-Industrial Consortium training event, reach out to a team member!

ROSCon 2022 - ROS-Industrial Consortium Americas Look Back

Last week, my colleagues, Jerry Towler and Fernando Martinez, and I made the grueling 14-hour trans-Pacific flight and 3-hour train ride to attend ROSCon in Kyoto, Japan. All told, we probably would have endured a lot worse for the opportunity to travel to Japan and talk about robots.

Visiting with the ROS-I Asia-Pacific Team

On the first day, we attended the ros2_control workshop. The focus of the workshop was a deep dive into the revised architecture of ros2_control followed by a tutorial for creating a custom hardware interface. The coding exercise helped solidify some of the new ros2_control interface concepts and provided the opportunity to see the components in action via simulation using Gazebo. The Construct also provided access to a robot in their lab in Spain for hardware interface testing. Unfortunately, we did not finish in time to use it, but the Construct showed a live* demo on the hardware using a ros2_control interface.

On to the ROSCon program! As always, the keynote presentations highlighted some interesting applications of ROS, specifically in augmented reality for assistance in surgical applications and in GPS-denied UAV autonomy. A majority of the other presentations focused on software interoperability (OpenRMF), RMW layer development and challenges, AMR/UAV autonomy, and, of course, porting from ROS1 to ROS2. The program this year was light on manipulator-specific content, touching primarily on application deployment using ROS2 and ros2_control. The presentations that stood out the most to us covered changes to the ROS2 navigation stack enabling improved motion planning and support for different types of mobility (e.g., Ackermann, legged, and non-circular diff-drive robots); an improved file format for ROS2 bag files (MCAP); an IDE with visualization for defining kinematic chains in URDF; and updates to the BehaviorTree framework. Our final key takeaway from the presentations: all robotics companies everywhere are hiring. The full ROSCon program is posted online; links to the video presentations and slides are expected to be published within the next few weeks.

ROS-Industrial exhibition booth with Easy Manipulation demo

In addition to the presentation program, ROSCon also hosted an exhibit hall for robotics vendors displaying robots, sensor technology and software/simulation applications. The ROS-I AP consortium also staffed a booth with an impressive demo showcasing an application where two robots pick and place objects into a bin and jointly manipulate the bin based on 3D perception feedback.

Overall, the first in-person ROSCon since 2019 was a success with roughly 800 attendees from more than 30 countries. It was a great opportunity to engage with the open-source community and learn more about what other roboticists around the world are working on. Looking forward to another ROSCon next year in New Orleans!

*live on hardware in Spain, viewing concurrently and remotely in Kyoto…neat!

A turn in the welding robotics community

In early October in Denver, Colorado, the American Welding Society (AWS) held the first Automated Welding & Sensors Conference. This is a follow-up to the prior National Robotic Arc Welding Conference, which last took place in 2019 but was not an official AWS conference. Since then, a lot has changed. One change is the acknowledgment that there is so much going on in robotics, sensors, human-robot interaction, and emerging trends in workforce education and sustainment that it makes sense to have a conference on automation in welding.

The venue and a full house for the first official American Welding Society Automated Welding & Sensors Conference

Full disclosure: I cut my teeth, and bear many scars, implementing robotic welding on large structures where quality was paramount and input variation was prevalent. I also spent many hours on precision sheet metal assemblies that had high quality requirements and interesting material combinations. Robotics enabled repeatability, and a means to ensure quality when we had our processes under control, but the robotic systems of the early to mid-2000s had their limitations. My journey to further robotic welding capability and performance during my time in industry working for Caterpillar led to working with tools such as ROS, and subsequently to getting involved in the ROS-Industrial open-source project and consortium.

Fast forward, and there are still challenges. There are also a lot of new tools, innovations, new voices with new ideas, and plenty of examples of applying technology in a sustainable way to realize gains for small and medium enterprises. First and foremost was the prevalence of collaborative robots, in particular Universal Robots' power- and force-limited manipulators. The conference was organized by Vectis Automation founder and CEO Doug Rhoda and his team, in partnership with Jeff Noruk of Servo-Robot.

One key takeaway from my perspective is that there is a hunger for intelligent yet easy-to-use solutions. Welding work is inherently high mix and, at times, takes place in harsh environments. We are now at the point where collaborative robots – power- and force-limited manipulators – are appearing in job shops and large manufacturers around the world. Caterpillar shared their experience in leveraging collaborative hardware-based systems to realize flexible and agile welding capability.

Caterpillar’s Don Stickel sharing industry technological needs

Furthermore, there have been advances in welding data management, a growth area for those passionate about welding quality and the wealth of "big data" that can be harvested from welding operations. In parallel, there have been significant advances in the ability to monitor the welding process with camera systems that richly visualize the arc, the molten pool, and the filler metal/electrode, and to perform real-time vision analysis to close the loop, going beyond the electrical signals historically leveraged.

Advances in leveraging mobility were shared, and digital technologies making their way into welding equipment are giving the welding industry a lot to pull into their operations. However, there is still further opportunity to support the welding domain through implementation and scaling of AI-based capabilities. This could be low-level real-time control, such as what Novarc shared during their talk, or higher-level weldment process planning that is more robust in AI generalization, scaling across classes of fabrications more efficiently.

The growth in robotics has been well documented in many domains such as warehouse and logistics, on-road autonomy, drone-based inspection, and monitoring through legged robots performing various tasks in dynamic environments.

There is an opportunity for all the contributors in the robotics domain to learn about the needs, challenges, and opportunities within the materials joining community. While welding is often labeled dirty, dull, and dangerous, there is both a need, via the well-documented labor shortage, and a technical challenge to establish advanced approaches to make the products of tomorrow right here where they will be consumed.

It was great to see the progress in the welding community in adopting technology and aligning tech capability with the needs of this end-user community. Let's keep supporting these communities, and extend the same support to the coating, forging, and casting industries, again to support the ability to make products efficiently where they are consumed. The goal is not to advance technology for its own sake, but to build the ecosystem for a more resilient economy and supply chain for the next generation. The ROS-I open-source project aims to provide utilities that serve as building blocks for these sorts of initiatives, and we look forward to seeing how they get onto the shop floor.

Fall 2022 ROS-I Community Meeting Happenings

The ROS-Industrial Community Meeting was held on Wednesday, 9/21, and it brought with it a few interesting updates for the industrial ROS community, and for the broader ROS/open source robotics community as well. I wanted to take this moment to highlight some key bits that came out of this community meeting and provide links to the presentations. Typically, I add these to the event agenda, but due to some changes in how the meeting unfolded, I figured a blog post may be easier to track down.

If you just want the specific decks from the speakers, I have included them here: (YouTube Playlist for your viewing pleasure HERE!)

As is typical for ROS-I meetings, it started with an update on general ROS-I open source project/program activities. A fair amount of time was spent on ROS-Industrial training. It is no secret that during COVID we had to go virtual, and that certainly made training more available to more people. As in-person events started to become possible again, we experimented with delivering training in a "hybrid" fashion, with students able to opt to attend remotely.

The feedback has not been very positive: online attendees feel they do not get the same value as those in the room, and are often left on their own more than those getting in-person guidance from the instructors.

The solution moving forward is to no longer offer hybrid training, but to periodically offer a fully virtual training option for those who do not want to attend an in-person event.

That said, with in-person events broadly the norm now, we have brought back member-hosted training, which provides improved regional opportunities to attend a training event without a near cross-country flight. We look forward to continuing to offer member-hosted events, covering both coasts and the Midwest.

I also previewed some of the data we received at the annual meeting on what we can do to improve adoption of ROS across the industrial robotics community. It isn't quite a Venn diagram, as shown below, but ease of use and training contribute to roadblocks to adoption. We are continuing to line this up with both past workshop feedback data and process-domain-specific feedback to provide an updated roadmap for the ROS-I open source project. Stay tuned!

Initial digestion of workshop feedback from ROS-I Annual Meeting

From here, Michael Ripperger, ROS-I Tech Lead, shared some interesting updates around the REACH repository and some of the new scoring features to assess reachability for a manipulator configuration relative to a surface being targeted for processing. These utilities have been refined to enable richer displays for analysis and improved solution design for various applications, from painting through surface finishing. The updated repository may be found here. Recent improvements to Industrial Reconstruction were also noted, as various users across the community kick the tires and provide feedback.

Brad Suessmith and Oded Dubovsky from Intel provided an update on the Intel RealSense product lineup, what is on the horizon, and when releases will be announced. Stay tuned for a splashy announcement soon in concert with vision shows. Oded covered the roadmap for the ROS 2 driver, with a big push toward a ROS 2 beta by the end of 2022.

Craig Schlenoff reviewed NIST's role in participating in, and even funding, work around open source robotics. The focus of this talk was the ability to realize agile performance within robotic systems. The key idea is to enable easy and rapid reconfiguration and re-tasking of robotic systems. This has been broken into both hardware and software agility, and Craig summarized why this is of interest and how it fits into the NIST vision for dissemination of standards and practices that support sustainable capability development and proliferation. You can also check out the recently launched Manufacturing AI site that seeks to provide a landing page for information around AI development for manufacturing.

NIST high-level view of Measurement Science for Manufacturing Robotics

John Bonnin, from SwRI, then did a dive into the ConnTact framework. Here, the intent is to provide a way to easily evaluate learning policies around various assembly tasks, initially targeting the NIST assembly task board, in a way that abstracts details around specific hardware while enabling easy, or at least simpler, task definition. The framework has been further refactored, and a port to ROS 2 is in progress in parallel. The community is encouraged to check out the repository and engage in the improvement of this framework!

Updated schematic of the ConnTact Framework

PickNik CEO Dave Coleman dug into MoveIt (reviewing the roadmap; great things coming!) and MoveIt Studio. For those who haven't had the chance to see MoveIt Studio in action, the intent is to provide a platform for developing complex yet fault-tolerant robotic applications. While the experience is simpler and reduces reliance on high-end experts, a certain level of expertise is still required; however, there is the potential and ability to build and debug advanced applications and get to the validation phase of your application sooner.

There is also cloud-based/remote monitoring and error recovery. This enables solution developers to think about recovery plans, support systems in the field, and get applications up and running after a fault more efficiently. It also has the benefit of enabling diversely located development staff, all via the PickNik collaboration with Formant.

Summary of MoveIt Studio

Dave also executed a live demo of the MoveIt Supervisor. The MoveIt Supervisor enables operator-in-the-loop applications, suitable for object/task identification in the scene to set up execution in a cluttered or high-noise environment. This is a great example of supervised autonomy, where things are "mostly automated." The demo went off without a hitch: Dave stepped through identifying a door handle and planning the trajectories for operating it, and then the plan executed and the robot opened the door.

MoveIt Supervisor Demonstration

Dave also reviewed the behavior tree user interface. Built on BehaviorTree.CPP, this enables complex behavior development in an environment that makes the task simple to visualize and edit.

Coming soon are updates around trajectory introspection, PRM graph planning caching, and optimal trajectory tuning. MoveIt Studio is available through PickNik via a licensing model. Check in with the PickNik team to learn more about MoveIt Studio and how to try it out for yourself!

This ended up being a great community meeting, and we look forward to the next community update in December 2022. It has been rewarding to see large tech companies, government agencies, and small innovative startups march together in providing resources, tools, and capability to enable new capabilities in manufacturing and industry. That is the goal of the ROS-Industrial open source project, and we look forward to what's next!

An Open Framework for Additive Manufacturing

Mainstreaming Robotic Based Additive Manufacturing

Robotic additive manufacturing, sometimes called robot 3D printing, is evolving from research to applied technology with maturation of methodologies (gantry systems and robot arms) and source materials (metal powder, wire, polymer and concrete).

A conventional gantry system that layers material via a single x-y plane tool path is an established 3D printing solution for certain repeatable applications, while robotic arms can offer more complexity when layering material in multiple planes. However, to date, traditional approaches for planning 3D printing trajectories are not optimized to take advantage of high degree-of-freedom (DOF) systems that include industrial manipulators.

Leveraging advances in planning for high-DOF, robotic-arm-equipped solutions for complex additive manufacturing (AM) entails planning and execution processes for both the hardware and the work environment. The steps of a process often depend on multiple proprietary software tools, machine vision tools, and drivers for motion planning, end effectors, printer heads, and the media used in each 3D printing process.

ROS Additive Manufacturing

Over the years, within the ROS-I open source project and the ROS-Industrial Consortium, the creation of frameworks that enable new application development has become a standard approach to enabling rapid extensibility from an initially developed application. After numerous conversations with end-users and other technical contributors, there was clear interest in looking at capabilities within the ROS and ROS-I ecosystem to create a framework that takes advantage of high-DOF systems and optimization-based motion planning to bring a one-stop shop to additive manufacturing planning and application.

ROS Additive Manufacturing (RAM) aims to pair the flexibility of additive manufacturing with industrial robotic applications. While looking for an open-source ROS package to slice meshes into configurable trajectories for additive manufacturing using a Yaskawa Motoman welding robot, we were aware of the ROS Additive Manufacturing package developed by the Institut Maupertuis in Bruz, France, so this was used as a starting point.

The RAM package was originally built in ROS Melodic, so it was rebuilt in ROS Noetic from source. Building the application from source in Noetic was mostly straightforward. We followed the installation instructions detailed in the Institut Maupertuis's GitLab repository. The terminal commands using pip were replaced with pip3, and all terminal commands specifying ROS Melodic were replaced with ROS Noetic. When attempting to build the package in ROS, there were clashes between Sphinx 1.7.4 and the latest version of Jinja2 (version 3.1.2 as of June 2022). An older version of Jinja2 (version 2.10.1) was installed for the package to build successfully and the software to launch.

The RAM software features an RViz GUI that allows the user to select various trajectory generation algorithms to create a trajectory from a given mesh or YAML file. Printing parameters such as blend radius, print speed, laser power, and material feed rate can be modified for each individual layer of the print. The parameters of the entrance and exit trajectories can also be modified to have a different print speed, print angle, print length, and approach type. The output format of the exported trajectory is a ROS bag file. For our experiment, we used a Yaskawa Motoman welding robot, and we needed to post-process the results to correctly interface with the robot.
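Because the export is a ROS bag, it is easy to inspect before sending anything to hardware. Here is a small sketch (ROS 1 Noetic; the topic layout is whatever the RAM export defines, so list it first rather than assuming names):

```python
# Sketch (ROS 1 Noetic): inspect the trajectory bag exported by the RAM GUI.
# The filename is illustrative; check the bag's contents first with
# get_type_and_topic_info() since the RAM export defines its own topic/type.
import rosbag

with rosbag.Bag('ram_trajectory.bag') as bag:
    print(bag.get_type_and_topic_info())  # list topics and message types
    for topic, msg, stamp in bag.read_messages():
        # Each message carries the generated poses and per-layer parameters.
        print(topic, stamp.to_sec(), type(msg).__name__)
```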

Going from Plans to Robot Motion

Motion was achieved by post-processing trajectories with a customized version of the RoboDK post-processor. Welding parameter files were defined on the robot's teach pendant as usual. A "User Frame" (reference system) was defined at the center of the metal plate to match the ROS environment. The robot's tool was edited to match the orientation used by ROS. This allowed us to generate robot programs without having to configure the ROS environment to match the workcell. Extra lines were added in the post-processor to start/stop welding. The program files were copied via FTP onto the controller and executed natively as linear moves.

This hybrid ROS/robot controller setup allowed us to quickly set up this demonstration. The output of the tool is a list of Cartesian poses. The post-processor converted these to linear moves in the robot's native format. The robot did the work of moving in lines at a set velocity; there was no reason to do additional planning or create joint trajectories. The robodk_postprocessors package available on GitHub has not been maintained or updated in some time; numerous bugs exist, and these needed workarounds.
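To make the shape of that post-processing step concrete, here is a deliberately toy sketch. The real implementation was a customized RoboDK post-processor emitting Motoman's native JBI syntax, which differs from the invented format below; the point is just that each Cartesian pose becomes one linear move at a set speed, bracketed by weld start/stop commands:

```python
# Toy illustration only: the real work used a customized RoboDK post-processor and
# Motoman's native JBI format, whose syntax differs from this invented one. The
# point is the shape of the idea: each Cartesian pose becomes one linear move.
from typing import List, Tuple

Pose = Tuple[float, float, float, float, float, float]  # x, y, z, rx, ry, rz


def poses_to_linear_moves(poses: List[Pose], speed_mm_s: float) -> List[str]:
    lines = ["# hypothetical linear-move program"]
    lines.append("ARC_START")  # extra lines were added to start/stop welding
    for x, y, z, rx, ry, rz in poses:
        lines.append(f"MOVE_LINEAR ({x:.3f}, {y:.3f}, {z:.3f}, "
                     f"{rx:.3f}, {ry:.3f}, {rz:.3f}) V={speed_mm_s:.1f}")
    lines.append("ARC_END")
    return lines


print("\n".join(poses_to_linear_moves([(0, 0, 100, 0, 180, 0),
                                       (50, 0, 100, 0, 180, 0)], 10.0)))
```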

Existing ROS drivers are focused entirely on joint trajectories. A different approach that would allow streaming of robot programs would be beneficial, and this is part of future work to be proposed.

Below are two screenshots from the software for trajectories produced for a star and a rectangle with rounded corners. These shapes were included within the software as YAML files. The Contour generation algorithm was used with a 2.5 mm layer height and a 5.0 mm deposited material width for both shapes. The star shown below had three layers and the rounded rectangle had five layers. All other parameters were left at their default values.

Creation of a star shape set of tool paths via the RAM GUI.

Creation of a rounded corner rectangle within the RAM GUI.

As seen in the video below, the GUI interface provided a convenient and intuitive way to modify trajectory and print parameters. Paired with our post-processor, sending completed trajectories to the robot hardware was efficient.

Screen capture of process within RAM software. After clicking "generate trajectory", the post processor saves the output into a JBI file which is transferred to the robot via FileZilla.

Test samples were made on a test platform provided by Yaskawa Motoman over the 2022 summer period. As can be seen in initial test samples and more complete builds, the application was able to make adjustments to motion profiles and weld settings, including more advanced waveforms such as Miller Electric’s Regulated Metal Deposition (RMD) process.

Figures Above: In-Process and completed RAM generated tool path sets executed on the Yaskawa testbed system.

To streamline the process of building the RAM package from source, the documentation should be updated to detail the process of building in ROS Noetic instead of ROS Melodic. Additionally, the documentation does not show how to interface with and post-process the exported trajectories to work with robotic hardware. Although this is beyond the intended scope of the RAM software project, covering it would improve the utility of this software for industrial applications. The documentation for the package is currently in French; an English translation would make understanding the adjustable parameters within the software easier for English speakers.

Future work seeks to incorporate the ability to fit/apply the generated tool paths to an arbitrarily contoured surface within the actual environment, much as is done in the various Scan-N-Plan processes for surface processing currently available within the ROS-I ecosystem, thus enabling additional intermediate inspection and processing as the build progresses, or updating the build based on perception/machine vision data.

Furthermore, a new driver approach that enables more efficient tool path to trajectory streaming would improve usability and interfacing with the hardware. Implementations of algorithms to ensure consistent profile/acceleration control and manage sharp transitions would also be beneficial, and may be implemented through optimization-based planners such as TrajOpt. Porting to ROS 2 would also be in scope.

A ROS-Industrial Consortium Focused Technical Project proposal is in the works that seeks to address these issues and offer a complete open source framework for facilitating flexible additive manufacturing process planning and execution for high degree of freedom systems. Special thanks to Yaskawa Motoman for making available the robotic welding platform, and thanks to Doug Smith at Southwest Research Institute for working out the interaction between the RAM package and the robotic system.

Editor Note: Additional thanks to David Spielman, an intern this summer at Southwest Research Institute. This work would not have been possible without his diving into the prior RAM repository and getting everything ready relative to testing.

ROS2 Easy-to-Adopt Perception and Manipulation Modules Open Sourced

ROS-Industrial has developed the easy_perception_deployment (EPD) and easy_manipulation_deployment (EMD) ROS2 packages to accelerate industry's efforts in training and deploying custom CV models, and to provide a modular and configurable manipulation pipeline for pick and place tasks, respectively. The overall EPD-EMD pipeline is shown in Figure 1.

Figure 1. Overall EPD-EMD Pipeline

The EPD ROS2 package helps accelerate the training and deployment of Computer Vision (CV) models for industrial use. The package provides a user-friendly graphical user interface (GUI), as shown in Figure 2, to reduce the time and knowledge barrier, so that even end-users with no prior programming experience can use the package. It relies on open-standard ONNX AI models, eliminating overreliance on any given ML library such as TensorFlow, PyTorch, or MXNet.

Figure 2. Startup GUI of EPD

EPD itself runs a deep-learning model as a ROS2 inference engine and outputs object information, such as the object name and location, in a custom ROS2 message; a minimal subscriber sketch follows Figure 3 below. This can be used for use cases such as object classification, localization, and tracking. To train a model for custom object detection, all a user needs to prepare is the following:

  • .jpgs/.pngs Image Dataset of custom objects. (Approx. 30 images per object required)
  • .txt Class Labels List

The expected output will be:

  • .onnx trained AI model

Figure 3. EPD Training to Deployment Input & Output
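To give a feel for consuming EPD's output downstream, here is a minimal subscriber sketch. The message type and topic name below are placeholders, not the package's actual names; substitute the custom message that the EPD documentation specifies for your chosen profile:

```python
# Sketch: consuming EPD's output in a downstream ROS2 node. The message type and
# topic name below are PLACEHOLDERS; replace them with the actual custom message
# and topic documented by the EPD package for your chosen profile.
import rclpy
from rclpy.node import Node
from epd_msgs.msg import EPDObjectLocalization  # placeholder import


class EPDConsumer(Node):
    def __init__(self):
        super().__init__('epd_consumer')
        self.create_subscription(EPDObjectLocalization,
                                 '/epd_localize_output',  # placeholder topic
                                 self._on_detection, 10)

    def _on_detection(self, msg):
        # Typical contents: object names plus estimated locations in the camera frame.
        self.get_logger().info(f'received detection result: {msg}')


def main():
    rclpy.init()
    rclpy.spin(EPDConsumer())


if __name__ == '__main__':
    main()
```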

To cater to different use cases and end-users' requirements, the package also allows for customizability via 3 different profiles.

• Profile 1 (P1) – fastest, but least accurate

• Profile 2 (P2) – mid-tier

• Profile 3 (P3) – slower, but most precise output

EPD caters to 5 common industrial tasks achievable via Computer Vision.

  1. Classification (P1, P2, P3)
  2. Counting (P2, P3)
  3. Color-Matching (P2, P3)
  4. Localization/Measurement (P3)
  5. Localization/Measurement/Tracking (P3)

Figure 4. An output of EPD at Profile 3, with Object Localization, operating at 2 FPS

The EMD ROS2 package is a modular and easy-to-deploy ROS2 manipulation pipeline that integrates perception elements to establish an end-to-end industrial pick and place task. Overall, the pipeline comprises 3 main components:

  1. Workcell Builder

The Workcell Builder, shown in Figure 5, provides a user-friendly graphical user interface (GUI) that allows users to create a representation of their robot task space, providing a simulation of the robot environment as well as the initial state for trajectory planning using motion planning frameworks like MoveIt2.

Figure 5. Workcell Builder from EMD

2. Grasp Planner

The Grasp Planner subscribes to a topic published by a given perception source and outputs a specific grasp pose for the end-effector using a novel, algorithmic depth-based method. The generated pose is then published as a ROS2 topic. As shown in Figure 7, the grasp planner currently supports and provides a 4 degree-of-freedom (DOF) pose for both multi-finger and suction array end effectors, in addition to the traditional two-finger and single suction cup grippers.

Figure 6. Pointcloud to grasp point generation

Figure 6. Grasp Planner

It aims to eliminate the pain points that users face when deploying machine learning-based grasp planners such as:

• Time Taken for Training & Tedious Dataset Acquisition and Labelling

Current datasets available such as the Cornell Grasping Dataset and Jacquard Grasping Dataset generally account for two-finger grippers and training on generic objects. For customized use cases, datasets need to be generated and labeled manually which requires a lot of time. Semantic description of multi-finger grippers and suction arrays may be hard to determine as well.

• Lack of On-The-Fly End Effector Switching

In a high-mix, low-volume pick-and-place task, different end effectors need to be switched in and out to cater to grasping different types of objects. Changing end effectors would mean users have to collect a new dataset, re-label it, and re-train models before deploying them.

3. Grasp Execution

Grasp Execution was developed to allow for a robust path planning process in navigating the robot to the target location for the grasping action. It serves as a simulator that uses path planners from the motion planning framework MoveIt2, as well as the output generated by the Grasp Planner. Figure 7 shows various items being picked successfully.

Figure 7. Grasp Execution on different objects using different end-effectors

Overall, it benefits users by providing seamless integration with the grasp planner, as the grasp execution package communicates with the grasp planner by subscribing to a single ROS2 topic with the GraspTask.msg type. The grasp execution package also takes dynamic safety into account, which is important as collaborative robots often operate closely with human operators and other obstacles in a dynamic environment.

Figure 8. Improved grasp execution - dynamic safety architecture

Figure 9. Dynamic safety zones

There is a need for the robot to be equipped with such capabilities to address safety concerns and detect possible collisions along its trajectory so it can avoid obstacles. Users are provided with a vision-based dynamic collision avoidance capability through the use of Octomaps: when a collision is predicted along the robot's trajectory, the dynamic safety module is triggered to either stop the robot or dynamically replan its trajectory given the new obstacles in the environment.

Both of these packages have been formally released and open sourced on the ROS-Industrial GitHub repository, and the team would also like to acknowledge the Singapore Government for their vision and support in funding this R&D project, “Accelerating Open Source Technologies for Cross Domain Adoption through Robot Operating System (ROS)”, supported by the National Robotics Programme (NRP).

Announcing Industrial Reconstruction Leveraging Open3D

Open3D Industrial Reconstruction of an aerospace radome

Mesh reconstruction is often a critical step in an industrial robotics application. CAD models are not always readily available for all parts, and parts have often warped or changed due to frequent use in the field. Reconstruction allows a robotic system to get mesh information in these scenarios. Once a mesh has been generated, software can be used to generate toolpaths for the robot, either autonomously or with human input. These toolpaths can then be converted into robot trajectories which are subsequently executed.

Many sensors and software packages exist for generating pointclouds or meshes, and RGB-D cameras are becoming increasingly popular and readily available. Previously, ROS-I released yak, which enabled using these cameras mounted on a robot arm to create a mesh. However, yak required CUDA, which can be difficult to set up, and yak would frequently have accuracy issues at the boundaries of meshes. Our new Industrial Reconstruction package still uses these same RGB-D cameras, but makes integrating mesh reconstruction into your industrial robotics application easier than it has ever been before by using the 3D data processing library Open3D.

Figures Above: Creation of a mesh from a highly reflective part

Industrial Reconstruction can be set up by simply running the command “pip3 install open3d”, and then cloning and building the repository like any other ROS 2 package. The TSDF algorithm provided by Open3D appears to be less susceptible to the edge issues seen in yak, and it outputs fully colorized meshes. Having color in the meshes allows for greater ease of use when manually creating toolpaths, and may be used to drive segmentation for toolpath planning, all while giving users more confidence in the accuracy of the generated mesh. On top of this, a live feed of the reconstruction in progress can be visualized in RViz. This enables users to go back and scan areas that are missing before exporting the mesh, rather than discovering the missing parts later and requiring a full rescan.
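For a sense of what the package wraps, here is a condensed sketch of Open3D's TSDF integration API (parameters are illustrative; in the actual package, camera poses come from the ROS transform tree as the robot moves):

```python
# Condensed sketch of the TSDF integration Open3D provides (the Industrial
# Reconstruction package wraps this with ROS 2 plumbing; parameters here are
# illustrative). Each RGB-D frame is fused into a voxel volume using the camera
# pose, and a colorized triangle mesh is extracted at the end.
import open3d as o3d

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.005,       # 5 mm voxels (illustrative)
    sdf_trunc=0.02,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

# frames: (color_path, depth_path, 4x4 world-to-camera pose) tuples gathered
# as the robot scans; in the real package these come from the camera topic + TF.
frames = []

for color_path, depth_path, extrinsic in frames:
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.io.read_image(color_path), o3d.io.read_image(depth_path),
        depth_trunc=1.5, convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, extrinsic)

mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh('reconstruction.ply', mesh)
```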

Live creation of a colorized mesh

Industrial Reconstruction is already in use today on multiple projects including our Scan N Plan workshop. We’re excited to see the projects that this new repository enables.

Securing ROS robotics platforms

How can we apply security principles and best practices to robotics development?

 
 

Let's start with this well-known and relatively old quote, still very relevant today. It reminds us that security is a dynamic process that accompanies any system's lifecycle. This is because new vulnerabilities are constantly being discovered in all kinds of components. The ways software is used, and its platform and libraries, change too. Software that was considered secure in the past may not be secure today.

In the past few years, the number of published vulnerabilities discovered in open source has been steadily increasing. Open source is neither more nor less secure than proprietary code. However, few companies understand the breadth of open source that is being used in their applications. This lack of knowledge translates into a lack of awareness about vulnerable components, and this is a source of risk. ROS is certainly no exception, and collective efforts are needed to keep this great community secure.

As you can imagine, through vulnerable robots, organisations may leak sensitive information about their environment, or even cause physical harm if they are accessed by an unauthorised party.

So, what can you do to keep your robots and the data they handle secure? Let’s dive right into some tips.

Securing your robot’s software

What could go wrong?

You may already be aware of what “CVE” and “CWE” stand for. If not, I strongly encourage you to familiarise yourself with these great sources of information on security issues, published by the MITRE organisation. Common Vulnerabilities and Exposures (CVEs) are a catalogue of publicly disclosed security vulnerabilities in all kinds of software and systems. Common Weakness Enumeration (CWE) is a community-developed list of software and hardware weakness types. The Top 25 most common and dangerous security weaknesses are released every year. You can think of this as a “ranking” of the most prevalent and impactful weaknesses experienced over the past 2 years, organised and ranked for you. An interesting fact: many of the top vulnerabilities in the CWE Top 25 have been the same common kinds of vulnerabilities for decades. This means that while things do change, learning about the most common vulnerabilities will be useful for years to come.

Set your robots up for security

In practice, a good place for your team to start is to embrace a secure software development lifecycle process. It ensures that security is an intrinsic part of every stage in your software development. This approach requires that engineers are trained in secure coding best-practices. This may obviously represent an initial investment, but one that will always pay in the long term. A great place to start is by checking out the CISA life cycle processes guidelines.

 
 

Image source: OpenSense Labs

Your projects will likely have many reused software components, and they will need to be updated occasionally. Eventually, a vulnerability may be found in one, so keep an eye out and be ready to update quickly. To stay on top of things, it is a good idea to use Software Composition Analysis (SCA) tools to identify the open source software components in your code and evaluate elements such as security, licence compliance, and code quality (for example, they will let you know of known vulnerabilities in your components, and warn you if they are out of date or have patches available). This will help you keep any libraries being used as up-to-date as possible, to reduce the likelihood of using components with known vulnerabilities (OWASP Top 10-2017 A9). You can check for any vulnerable components by using free tools, such as OWASP Dependency-Check or GitHub's dependency scanning.
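As a concrete example of what such tooling checks under the hood, the sketch below queries the public OSV vulnerability database (https://osv.dev) for a single package version; real SCA tools automate this across your entire dependency tree:

```python
# One lightweight way to check a single dependency against a public vulnerability
# database: query the OSV API (https://osv.dev). SCA tools automate this across
# a whole dependency tree; this sketch checks just one package/version.
import json
import urllib.request


def check_osv(package: str, version: str, ecosystem: str = 'PyPI'):
    query = {'package': {'name': package, 'ecosystem': ecosystem},
             'version': version}
    req = urllib.request.Request(
        'https://api.osv.dev/v1/query',
        data=json.dumps(query).encode(),
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        vulns = json.load(resp).get('vulns', [])
    for v in vulns:
        print(v['id'], '-', v.get('summary', '(no summary)'))
    return vulns


check_osv('jinja2', '2.10.1')  # example: an old version with known CVEs
```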

It is crucial to keep your OS and software up to date with security updates and CVE fixes. This is a simple and essential practice to avoid becoming an avenue of exploitation. So, apply any security updates as soon as they’re available and it’s feasible for your robots. And remember, you can take further steps to harden your robots. For instance, make sure to close all their unused ports and enable only necessary services. Give the local user the least privileges they need, to prevent privilege escalation should an intruder ever gain access. If you want peace of mind that you are using a hardened OS with built-in security features, check out Ubuntu Core.
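As a small illustration of the port-hardening advice, here is a hedged sketch that scans the robot's own loopback interface for listening TCP ports, so you can compare what is actually open against the services you intended to run. The port range is illustrative; a real audit would also cover external interfaces and UDP.

```python
# Sketch: list listening TCP ports on the robot's loopback interface so
# you can compare what is open against the services you intended to run.
import socket


def open_ports(host: str = "127.0.0.1", ports=range(1, 1025)):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)                    # keep the scan quick
            if s.connect_ex((host, port)) == 0:  # 0 means "accepted"
                found.append(port)
    return found


if __name__ == "__main__":
    for port in open_ports():
        print(f"TCP port {port} is open -- is this service really needed?")
```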

Deep dive into your code

There are different types of code-level analysis to implement as a basic measure. Some analyses can be automated and included in a CI/CD pipeline, so you don’t have to rely on manual scans, and luckily there are a number of open source tools to help you in this task. Each one has pros and cons, so combining them will lead to the most comprehensive results. Below are some suggestions for your ROS applications.

Static Analysis tools (SAST) and Fuzzers

The first good practice is to analyse your code statically – that is, without executing a single line of it. As you’ll see, there are plenty of options, many free and open source. The gcc and clang compilers support sanitizers that instrument your build so that errors in C/C++ code are detected at run time. If your team is working with code in memory-unsafe languages like C and C++, this is a crucial step. Take a special look at the Google Sanitizers: AddressSanitizer, LeakSanitizer, MemorySanitizer, and UndefinedBehaviorSanitizer. Other free tools include LGTM (by GitHub), Coverity, and Reshift.

Fuzz testing, or fuzzing, is a well-known technique for uncovering programming errors in software: it consists of injecting malformed and semi-malformed data in an automated fashion to find implementation bugs. Many of these detectable errors can have serious security implications. It’s a great idea to use this practice to validate your SAST findings. Check out Google’s OSS-Fuzz for your fuzzing needs, and when you’re done, save this list of security tools that are free for open source projects.
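To make this concrete, below is a toy sketch using Atheris, Google's coverage-guided fuzzer for Python (one option among many; OSS-Fuzz supports it alongside libFuzzer for C/C++). The parse_command function is a deliberately buggy placeholder standing in for your own input-handling code.

```python
# Toy fuzzing sketch with Atheris (pip install atheris). parse_command
# is a deliberately buggy stand-in for your own input-handling code.
import sys

import atheris


def parse_command(data: bytes) -> None:
    # Toy "protocol" parser with two lurking bugs.
    if data.startswith(b"MOVE"):
        speed = data[4]          # IndexError when data is exactly b"MOVE"
        _ = 100 // (speed - 42)  # ZeroDivisionError when byte 4 is 42


def test_one_input(data: bytes) -> None:
    parse_command(data)


atheris.instrument_all()  # enable coverage instrumentation
atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```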

Image source: Synopsys

And, just as crucial to the ROS ecosystem, please do report CVEs if you discover any! This will help strengthen the security of ROS code across the whole community. Have a look at the ROS 2 Vulnerability Disclosure Policy when you’re ready to report.

Hey, ROS 2 user

Of course we cannot discuss security in ROS-based robotics without mentioning the ROS 2 security features. The default middleware in ROS 2 is DDS, and security features are baked into the DDS standard itself: Cryptography, which implements encryption, decryption, hashing, digital signatures, and so on; Authentication, for checking the identity of all participants in the network; Access Control, which specifies which resources a given participant can access; and Security Logging, to record all security-relevant events.

The SROS2 utilities provide a set of tools that make it easier to configure and use these security features. If you’re using or planning to use ROS 2, I encourage you to check out this tutorial, and to then try it out.
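As a starting point, the sketch below shows a minimal rclpy publisher together with (in comments) SROS2 command-line steps in the style of the tutorial flow, which generate a keystore and enclave and enable enforcement. The keystore path and enclave name are placeholders; adapt them to your own setup.

```python
# Minimal rclpy publisher to try with SROS2. This is a sketch: the
# keystore path and enclave name are placeholders; adapt as needed.
#
# Typical preparation with the sros2 command-line tools:
#   ros2 security create_keystore demo_keystore
#   ros2 security create_enclave demo_keystore /talker_listener/talker
#   export ROS_SECURITY_KEYSTORE=$(pwd)/demo_keystore
#   export ROS_SECURITY_ENABLE=true
#   export ROS_SECURITY_STRATEGY=Enforce
#
# Then run the node inside that enclave:
#   python3 talker.py --ros-args --enclave /talker_listener/talker
import sys

import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class Talker(Node):
    def __init__(self):
        super().__init__("talker")
        self.pub = self.create_publisher(String, "chatter", 10)
        self.timer = self.create_timer(1.0, self.tick)

    def tick(self):
        msg = String()
        msg.data = "hello, encrypted world"
        self.pub.publish(msg)


def main():
    rclpy.init(args=sys.argv)
    rclpy.spin(Talker())


if __name__ == "__main__":
    main()
```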

Given this clear security benefit of ROS 2 over ROS 1, an obvious step, whenever possible, is to migrate your code to ROS 2. But if you cannot, or simply are not ready to migrate just yet, consider exploring Canonical’s Extended Security Maintenance (ESM) service for deployed robots. One of the benefits of ROS ESM is that it provides up to 10 years of security maintenance for ROS and the Ubuntu base OS distribution. This can be especially critical if you’re still running an unsupported version of ROS.

Join efforts with the larger ROS community

Last but not least, the reason we’re all here: we’re a community interested in sharing, finding, and offering support to others working with ROS.

In case you’re not familiar with it, the ROS 2 Security Working Group focuses on raising awareness of and improving the security around ROS 2. How can you get involved? Track the wg-security tag on the ROS Discourse to get upcoming meeting announcements. Join the monthly meetings, come and share your use cases and any obstacles you’re facing, and pool efforts with the rest of the ROS community to work through them. We hope to see you there.

Learn more:

You may also consider reading the following materials:

This is a guest post by Florencia Cabral Berenfus, Robotics Security Engineer at Canonical, a ROS-Industrial Consortium Americas member. It is a follow-up to Florencia’s presentation to the ROS-I membership at the 2021 4th Quarter Members Meeting: https://rosindustrial.org/events/2021/12/ros-industrial-consortium-americas-community-meeting-dec-2021. You can learn more about robotics at Canonical at https://ubuntu.com/robotics.

Demystifying Robot Interoperability

For decades, robots have been deployed in the manufacturing sector to automate processes. Machine tending, inspection, and pick-and-place are among the most common applications where robots have been utilized. Despite this high utilization, these applications were predominantly confined to fixed industrial settings. However, recent advancements in mobility have driven the rise of mobile robots. In 2021, the IFR reported an estimated 12% growth in service robots worldwide, with sales of personal robots rising by 41%. Asia, in particular, experienced substantial growth in the area.

Among the many applications of mobile robots, autonomous mobile robots (AMRs) for delivery, cleaning, and social interaction have been identified as the most common.

The proliferation of these new types of robots unleashes new possibilities to automate tasks where robots were not traditionally seen as capable. Such utilization has been seen especially in production and warehouses, and in transportation and outdoor facilities. These trends are primarily motivated by improved production flexibility, task optimization, reduced reliance on a limited skilled workforce, and the ability to respond more effectively to dynamic supply and demand fluctuations. For example, in flexible manufacturing, mobile robots can navigate between stations and carry out parts of processes that were once static operations requiring heavy CAPEX investment in fixed equipment.

With strong growth in the number of robots and their applications come challenges, especially in interoperability between different robots and other systems in the facility. Operators and business owners who want to use many robots face two significant technical challenges: the first during the design phase, and the second during the deployment and operation stage.

Fleet Management and Workers in a Warehouse Facility, The Robot Report (2019)

Responding to the interoperability challenges above, the team at ROS-Industrial Asia Pacific is developing technologies to support owners/operators, system integrators, and robot manufacturers through several development activities utilising Robotics Middleware Framework (RMF):

  1. Development of high-fidelity simulation, including scenarios involving environmental factors (such as heavy rain) and network interruptions. This development will support a better sense of realism relative to existing situations, and will enable operators to optimize production through better planning and scheduling.
  2. Development of a Next-Generation Robotics Middleware Framework that will help robots interact with each other and with other systems, such as ERP and MES, or with workcell and facility infrastructure such as doors and lifts. The development will enable traffic deconfliction and task prioritization toward autonomous operations.

At ROS-Industrial, our goal is to develop applications that can help the proliferation of robotics for industrial use. We constantly seek market input to ensure that our developments are highly aligned with industrial needs. If you have used robots or plan to deploy robots in your facility and would like to help influence the development of robotics utilization in your industry, we invite you to take a short 10-minute survey to share your challenges and requirements through the link below.

Reference:

Heer, C., 2022. World Robotics 2021 – Service Robots report released. [online] IFR International Federation of Robotics. Available at: <https://ifr.org/ifr-press-releases/news/service-robots-hit-double-digit-growth-worldwide> [Accessed 7 April 2022].

Using Tesseract and Trajopt for a Real-Time Application

The past two years have seen enormous development efforts transform the tesseract-robotics and trajopt_ros packages from highly experimental software into hardened, industrial tools. A recent project offered the opportunity to try out some of the latest improvements in the context of a robot that had to avoid a dynamic obstacle in its workspace. This project built upon previous work in real-time trajectory planning by developing higher level control logic for the system as a whole, and a framework for executing the trajectories that are being modified on the fly. Additionally, this project provided the chance to develop and test new functionality at a lower level in the motion planning pipeline.

One of these improvements was the integration of continuous collision checking throughout robot motions, as opposed to checking for collisions only at discrete steps along the motion. This is a critical component of finding robot motions that avoid colliding with other objects in the environment. Previously, a collision could slip through undetected if the spacing between steps in the motion plan was larger than an obstacle in the environment (pictured below). With the integration of our new collision evaluators, these edge cases can be avoided.
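To see why the discrete approach can fail, consider the self-contained sketch below (illustrative only; it does not use the tesseract API). A one-dimensional "robot" moves along x past a thin wall: sampling only the endpoints misses the wall, while checking the swept segment catches it.

```python
# Illustrative sketch: why discrete collision checks can miss thin
# obstacles that a continuous (swept) check catches. A 1-DOF "robot"
# moves along x; a thin wall occupies x in [0.495, 0.505].
WALL = (0.495, 0.505)


def in_collision(x: float) -> bool:
    return WALL[0] <= x <= WALL[1]


def discrete_check(start: float, end: float, steps: int) -> bool:
    """Check only sampled states; may step right over the wall."""
    return any(in_collision(start + (end - start) * i / steps)
               for i in range(steps + 1))


def continuous_check(start: float, end: float) -> bool:
    """Check the whole swept segment against the wall's interval."""
    lo, hi = min(start, end), max(start, end)
    return not (hi < WALL[0] or lo > WALL[1])


print(discrete_check(0.4, 0.6, steps=1))  # False: both samples miss the wall
print(continuous_check(0.4, 0.6))         # True: the sweep crosses the wall
```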

The other major piece of research on the low-level motion planning was benchmarking the speed at which our trajopt_ros solver can iterate, and therefore how quickly it can find a valid solution. We did not impose hard real-time deadlines on the motion planner; instead, we ran it as fast as possible and took results as soon as they were available, which was adequate for our application. Some of our planning speed test results are pictured below.

The final major development to the robot motion pipeline to enable real-time trajectory control was the creation of a framework for changing the trajectory being executed on a physical robot controller. This was a particularly exciting element of the research because it brought our work out of the simulation environment and proved that this workflow can be effectively implemented on a robot. We are excited to apply these tools to more of our projects and continue improving them in that process.

All of our improvements to the Tesseract and Trajopt-ROS packages have been pushed back out to the open source community. Check out the latest versions here and let us know how they are enabling new capabilities for you!

ROS-Industrial is buzzing

This year's ROS-Industrial Conference took place on December 01-02, 2021 as a virtual event. With more than 300 registrants it was the largest of the nine conferences organized since 2012. In this article, we try to capture the major developments around the ROS-Industrial initiative presented during the conference.

More and more companies are starting to deploy ROS in their industrial applications.

Operator using drag&bot to program a robot

At the conference, drag&bot explained how their system for easy robot programming is being used by industry and showed a number of example applications. Their software is based on ROS and enables programming different industrial robots using a simple drag-and-drop interface. drag&bot also offers an interface for integrating external ROS programs, and can be used by developers to deploy their ROS-based solutions to industry.

Another company whose ROS-based navigation today runs on more than 450 mobile robots is Node Robotics. Robots running Node Robotics software are deployed in BMW's production facilities and at other companies. The software integrates mobile robots of different types into one fleet, and provides ROS-based components for operating a single AMR, for data sharing between AMRs, and for fleet management. The company's first deployment of freely navigating AMRs was in 2016, in a plant of the automobile producer Audi.

Enabling safety and towards real-time execution with ROS

PSENscan laser scanner developed by Pilz, with ROS integration

The ROS-Industrial ecosystem is producing a number of cutting-edge solutions integrating ROS with industry. Pilz provides a portfolio of safety components that can be used to build a safety system around ROS-based mobile robots. At the center of the portfolio is the PSENscan laser scanner, which can monitor configurable safety zones for the robot. The PSENscan also offers functionality for speed monitoring and integrates with ROS via an open source driver.

Progress has also been made with regard to real-time execution in ROS2 systems. Andrei Kholodnyi (Wind River), co-lead of the ROS2 real-time working group, presented the group's work. Members of the working group have developed a number of new real-time optimized ROS2 executors. Another component provided by the working group is a Raspberry Pi 4-based real-time benchmarking system with the RT_PREEMPT kernel for Ubuntu 20.04. ROS2 developers who are not satisfied with the real-time performance of RT-patched Linux can also choose to switch to Wind River's VxWorks or BlackBerry's QNX, both of which can execute ROS2 applications.

Towards integration into industrial robot software platforms

Universal Robots teach pendant that has been enhanced with ROS support

As previously mentioned, drag&bot's software offers one way to integrate ROS into an industrial robotics platform, but more solutions are becoming available. Another approach was presented by Universal Robots (UR) and Forschungszentrum Informatik (FZI), who have been collaborating on an advanced ROS interface for UR's robots. The work enables direct integration of externally running ROS-based applications into URScript programs running on the robot. During the conference they showed new interfaces that enable Cartesian control and speed scaling for industrial robots from within ROS. UR also showed a prototype of a URCap that enables integration with ROS. The goal of the development is to leverage ROS' capabilities for UR's robots and make them available via URCaps.

Canonical, the publisher of Ubuntu, also presented their solutions for deploying robot software, mainly consisting of snap and Ubuntu Core. snap is a solution for creating containerized and easily deployable software applications. Canonical claims that snaps have advantages over other solutions, such as Docker, for embedded systems because they are more easily integrated with embedded hardware. Ubuntu Core is a minimal operating system based on snap containers, including application packages, which enables a modular and simple architecture for embedded systems. The Canonical software solutions are optimized for ROS and are already used in industry, e.g. in Bosch Rexroth's ctrlX. Snaps are becoming another way of integrating ROS into the industrial control systems and software platforms of the future.

Integration with 5G and hardware acceleration

Board with Hardware Acceleration support in ROS

Ericsson and eProsima presented their work on integrating ROS2 with 5G systems, moving ROS2 toward enabling distributed real-time systems. The two organizations collaborated on an interface for ROS2 and the underlying DDS middleware implementations that enables creating separate IP flows for specified ROS2 communications (previously DDS implementations created only a single IP flow) and setting 5G quality-of-service parameters for those IP flows. The interface has been integrated in eProsima's Fast DDS, which is also the default middleware for the next long-term support release, ROS2 Humble (May 2022), so integrating ROS2 with 5G quality of service is becoming simple.

Another interesting development was presented by Xilinx. Xilinx and AMD are working on providing first-class hardware acceleration support for ROS2. The development of hardware acceleration interfaces is driven by the ROS2 hardware acceleration working group. Xilinx is developing the Kria Robotics Stack based on the open interfaces defined by the working group. Major features of the development are easy integration of embedded targets into the ROS2 build system and tools, and an API for defining which parts of the ROS2 computation graph run on the CPU or, e.g., an FPGA. This development promises to make ROS2 a prime platform for the deterministic and lightning-fast computations needed for future robotics applications. Other companies such as NVIDIA are also targeting ROS2 for their hardware acceleration solutions, as Katherine Scott (Open Robotics) stated in her presentation.

Tackling the industrial security challenge

Manufacturing systems are becoming a target for cyber criminals. As robots are deployed in all kinds of systems that are essential for a country's economy, they can even become a prime target in potential cyberwar scenarios. The ROS-Industrial community is aware of the arising problems, and members Alias Robotics, Trend Micro, and Canonical have presented research findings, solutions, and services for ROS developers. Alias Robotics provides solutions such as the Robot Immune System (endpoint protection) as well as services for identifying potential risks in robotic products, such as threat modeling. Alias Robotics and Trend Micro have together analyzed the DDS standard, which underpins ROS2's prime middleware and is used in a wide range of applications such as medical, automotive, and space. A number of security issues were discovered and reported. Canonical provides long-term support for end-of-life ROS distributions with security updates, and the previously described deployment toolchain based on Ubuntu Core and snap, which simplifies security updates during production.

Advanced motion planning: Mobile manipulation, hybrid planning, collision avoidance and welding

Advanced manipulation has always been a strong suit of the ROS ecosystem. This year's conference made abundantly clear that this is also true for ROS2. PickNik, the main driver behind moveit2, gave an overview of new features currently being developed, notably mobile manipulation and hybrid planning. A mobile manipulation demonstration for moveit2 was developed together with HelloRobot (workshop). A demonstrator for hybrid planning, which integrates a global planner and a local planner, is being developed together with Fraunhofer IPA, targeting multi-layer welding. The goal is to perform scanning, welding, and local planning at the same time to achieve higher process quality. Another talk focusing on robotic welding was contributed by IRT Jules Verne, who presented how they leveraged ROS to build a lightweight welding robot for mobile welding applications from scratch. They were able to design the hardware and controller within a year and create a working prototype. SwRI has developed a ROS2 demonstrator for Scan & Plan applications (workshop).

ARTC is developing software tools for collision avoidance in dynamic environments. Currently, obstacle avoidance during motion is not easily available for robot arms in ROS2. Therefore, ARTC has developed a dynamic safety joint trajectory controller that integrates with motion planning solutions such as moveit2 and tesseract. The controller includes collision checking, speed scaling, and re-planning, and average execution frequencies of more than 200 Hz are possible on commercial off-the-shelf hardware.

Model-driven robotics development with ROS

Software for modern robot systems is becoming more and more complex, and development and testing are becoming more and more difficult. It is time to work on handling the rising complexity of robotics development. Fraunhofer IPA presented their work on a model-driven development toolchain for ROS-based systems. The toolchain enables extracting models from existing handwritten ROS components, defining ROS systems out of existing and new components using a graphical tool, and deploying the defined systems in different fashions, e.g. as a ROS package or a complete Docker container. The toolchain is currently in an alpha state and under active development.

SpaceROS and other news

  • The space industry in the US is on the rise, and with it the interest in robotics solutions for space. ROS is already deployed in a number of non-critical space applications. Open Robotics and PickNik are plotting the next big step for ROS - SpaceROS - qualifying ROS and moveit for mission-critical space applications.
  • TurtleBot4 is coming in 2022, with a base from iRobot.

Summary

This year's conference showed that ROS is being commercially deployed in industry, and industrial robotics platform providers (UR, drag&bot) are opening their platforms to ROS. Additionally, a number of supplier companies are providing key technologies for building safety systems around ROS-based robot applications. Furthermore, ROS2 offers many configuration options for achieving real-time performance, and industrial operating system developers such as BlackBerry and Wind River support ROS2. ROS2 is becoming a prime robotics platform for new technologies such as 5G and hardware acceleration, enabling the robot applications of the future. ROS2's security is now in the focus of the security research community, and a number of specialized security solution providers are available. Combined with ROS2's cutting-edge motion planning capabilities, this means building industrial robotics applications with ROS2 and deploying them to industry is becoming much easier.

Process Planning for Large Systems

Advances in robotic capabilities allow us to tackle bigger problems with autonomous systems. While the extra degrees of freedom in large systems like rail-mounted robots or mobile bases empower cutting-edge work, they can cause challenges in process planning: the creation of the “useful” motion of a robotic system that is constrained by the application at hand. The ROS-Industrial Consortium has addressed this problem by developing new process planners that can quickly plan processes for robots with many degrees of freedom.

A full Dijkstra graph finds the optimal path through the graph by exploring every edge

In the field of robotics, motion planning can be divided into two forms: freespace planning, which finds a collision free path between two points, and process planning, which governs the movement of the robot through its useful operations. When a robot has more ways of moving, the graph of positions that represents the joint positions that can reach a point in space begins to get large. So large, in fact, that it begins to resemble the discretized representations of real space used by freespace motion planning. This project exploited this similarity to speed up process planning using freespace planning algorithms.

Example configuration that benefits from this alternate approach to solution searching

At a high level, the new improvements in process planning allow for branching “depth-first” searches, which quickly find a solution for every position in the trajectory, instead of searching “breadth-first” for the optimal configuration at each pose. This work is especially useful in situations with many valid solutions, where it can find a “good enough” configuration in a tiny fraction of the time used by more traditional, comprehensive searches.
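The toy sketch below illustrates the idea (it is not the descartes_light API): each trajectory point has a "rung" of candidate joint configurations, and a depth-first search returns the first chain of configurations whose consecutive pairs satisfy an edge constraint, backtracking only when it hits a dead end.

```python
# Illustrative sketch of a depth-first ladder-graph search: one "rung"
# of candidate configurations per trajectory point; return the first
# valid chain instead of exhaustively scoring every path.
def depth_first_plan(rungs, edge_ok, path=()):
    """rungs: list of lists of candidate configurations, one per point.
    edge_ok(a, b): True if moving from config a to config b is valid."""
    if len(path) == len(rungs):
        return list(path)  # one valid configuration per trajectory point
    for candidate in rungs[len(path)]:
        if not path or edge_ok(path[-1], candidate):
            result = depth_first_plan(rungs, edge_ok, path + (candidate,))
            if result is not None:
                return result
    return None  # dead end: backtrack


# Toy 1-DOF example: consecutive configurations must stay within 0.5 rad.
rungs = [[0.0, 1.0], [0.4, 2.0], [0.8, 0.1]]
print(depth_first_plan(rungs, lambda a, b: abs(a - b) <= 0.5))
# -> [0.0, 0.4, 0.8]
```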

This is currently implemented in the repository https://github.com/swri-robotics/descartes_light. We encourage interested parties to check this out and provide feedback. We expect to migrate this to the ROS-Industrial GitHub organization in the coming months. Thanks to the community for providing feedback and use cases to support this work.

Hands on with the DreamVu PAL Mini

I’ve been testing an evaluation kit for a new camera from DreamVu. The PAL Mini is a tiny 360-degree 3D camera with a software package for object detection and avoidance. It has ROS support and is intended for use with mobile robots.

They’re not kidding when they say it is small.  Here is the camera compared to an Xbox One controller.

The camera connects to an Nvidia Jetson, offloading the computationally heavy tasks of rectifying the image and generating a ROS navigation-compatible LaserScan message. The Jetson can run ROS and may be sufficient for completely controlling a mobile robot. DreamVu provides samples for mapping and navigation with a TurtleBot.

The camera connects to the Nvidia Jetson with a USB-C cable. There are ample remaining ports for other devices.

The Jetson can be connected to a host PC via Ethernet to stream the resulting laser scans and RGB images. A simple ROS node is provided to convert these to standard cv::Mats and publish them. A remote ROS master is not required for this.

A laser scan is generated along with a panoramic image. The laser scan is overlaid on the image, making it easy to see what an obstacle is and where it is.
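For a feel of how you might consume that output, here is a hedged sketch of a minimal ROS 1 node that reports the nearest obstacle seen in the camera's LaserScan stream. The /scan topic name is an assumption; check which topics DreamVu's node actually publishes on your setup.

```python
# Hedged sketch: report the nearest obstacle from a LaserScan stream.
# The "/scan" topic name is an assumption, not DreamVu's documented API.
import math

import rospy
from sensor_msgs.msg import LaserScan


def on_scan(scan: LaserScan) -> None:
    # Ignore invalid returns outside the sensor's measuring range.
    valid = [(r, i) for i, r in enumerate(scan.ranges)
             if scan.range_min <= r <= scan.range_max]
    if not valid:
        return
    r, i = min(valid)
    bearing = scan.angle_min + i * scan.angle_increment
    rospy.loginfo("nearest obstacle: %.2f m at %.0f deg",
                  r, math.degrees(bearing))


rospy.init_node("nearest_obstacle")
rospy.Subscriber("/scan", LaserScan, on_scan)
rospy.spin()
```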

As with any new product, I ran into some issues during testing. I prefer to use the camera by streaming the results to my host PC, which most likely wasn’t their main focus while developing a camera designed to mount on mobile robots. Their support was able to resolve the issues quickly and took my suggestions on how to improve the reliability of their ROS package.

More information is available on their website. You can also check their GitHub (coming soon). Contact DreamVu for information on object detection and avoidance, and other applications of their 360-degree view cameras and solutions.

Introducing the ConnTact Assembly Framework

This is a cross-post hosted over at the AI for Manufacturing Robotics Blog hosted by NIST as part of their initiative to set up a community hub for researchers and practitioners of AI for Manufacturing Robotics, who are interested in staying current on research trends or finding new projects and collaboration opportunities. You can learn more about this initiative and other activities about AI for Manufacturing Robotics at: https://sites.google.com/view/ai4manufacturingrobotics/

The Challenge of Assembly

When assembling precision-tolerance parts, humans have an ideal set of capabilities. Combining geometrical understanding, visual guidance, tactile confirmation, and compliant force/torque (F/T) control, it is possible for a human to quickly assemble components despite micrometer tolerances. We take this remarkable suite of abilities for granted. To prove it, just recall that the number-one cliché child's toy is all about fitting blocks through matching holes in a box. Our babies can do assembly tasks with ease.

Now, the moment you consider handing the child's block off to a modern industrial robot, you start to encounter some of a robot's greatest disadvantages. A robot is, by design, a highly rigid precision machine. The robot must be fed very precise information about the orientation and position of the goal - the box in this case - and it cannot correct mid-action to comply as nearly-aligned pieces try to slide into place. If provided erroneous measurements - even ones that are just a few millimeters off - it will quite simply crush the box, the block, and possibly itself. Rigidity is inherently a difficult obstacle to performing mechanical assembly when high-accuracy measurement and fixturing are impractical.

ConnTact guiding a UR10e robot through an assembly algorithm. The system can successfully insert a peg into the NIST taskboard despite up to 1cm error in position instructions

Compliant Control

Despite these difficulties, a respected method has been developed for robots to imitate human flexibility and responsiveness, called compliant feedback control. By rapidly controlling the motion of the robot in response to 6DOF F/T information, a robot can imitate the soft compliant behavior of a human grip. This can be achieved with any modern robot using an after-market 6-axis load cell mounted between the tool flange and the gripper.

This feedback enables detection of F/T "wrenches" acting on the gripper, so the control system can smoothly move to comply. Pressing on the front of the gripper makes the robot retract. Twisting the gripper makes the robot turn. The robot very convincingly pretends to be much less rigid. When displaced, it applies a constant gentle force toward its desired goal position and patiently waits until permitted to reach it.
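The behavior described above is commonly realized with an admittance control law: the controller simulates a virtual mass-spring-damper between the measured external force and the commanded motion. The sketch below is a 1-D illustration of that law, not the cartesian_controllers implementation; all gains are made-up example values.

```python
# 1-D admittance control sketch: an external push displaces the
# commanded position away from the goal; when the push ends, the
# virtual spring pulls it back. Gains are illustrative only.
K = 200.0   # virtual spring stiffness [N/m]
D = 40.0    # virtual damping [N*s/m]
M = 2.0     # virtual mass [kg]
dt = 0.001  # control period [s] (a 1 kHz loop)

x_goal = 0.0     # desired position [m]
x, v = 0.0, 0.0  # commanded position and velocity

for step in range(3000):
    f_ext = 10.0 if step < 1000 else 0.0  # push for 1 s, then release
    # Admittance dynamics: M*a = f_ext - D*v - K*(x - x_goal)
    a = (f_ext - D * v - K * (x - x_goal)) / M
    v += a * dt
    x += v * dt
    if step % 500 == 0:
        print(f"t={step * dt:.1f}s  x={x:+.4f} m  f_ext={f_ext:.0f} N")
```

With a 10 N push, the commanded position settles about f/K = 5 cm from the goal, and it returns once the force is removed - exactly the "constant gentle force toward its desired goal position" behavior described above.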

This permits the use of tactile methods of operation - the use of touch to sense the environment and make decisions. The system can correlate data about reaction forces, robot position, and robot velocity to detect the collision mode which best describes the interaction of the robot and the environment. For instance, collision with a flat surface may be identified by sharp resistance in one direction but negligible friction in other directions. The alignment of a cylindrical peg with a receptacle may be detected by resistance in all directions except one, the vector parallel with the receptacle axis. By characterizing these interactions, a reliable understanding of the contact state between the robot and workpiece can be formed.

Researchers have been experimenting with and implementing compliant assembly systems for years. The KUKA iiwa comes with certain compliant features built in. Other companies, such as Franka Emika, have designed robots specifically to achieve high performance in feedback-based assembly. And on the software side, open-source libraries exist that can control hardware through high-level position or velocity interfaces. In particular, our work makes extensive use of the cartesian_controllers libraries, developed at the FZI Research Center for Information Technology.

Compliance example: the robot is configured to respond to forces and torques applied to the gripper. After the disturbance ends, it returns to its assigned task.

The ConnTact Framework

NIST hopes to expand the realm of assembly research to smaller developers with less abundant resources and to permit a much more agile workflow. To that end, NIST is collaborating with SwRI to develop an open-source framework for compliant assembly tasks, called ConnTact. The framework is meant to provide tools that create a bridge directly from hardware configuration to the real algorithmic development required to accomplish difficult tasks. It also standardizes interfaces to permit the community to share solutions and re-implement them with new hardware for new applications. The framework takes in information about the robot and task location and provides an environment where the user can easily develop tactile algorithms. The overall goal is to significantly ease the load on an end user who wants to develop new assembly applications anywhere on the spectrum from straightforward repeatable tasks to complex problems that leverage AI and reinforcement learning methods.

The key to this framework is the simplification of interfaces, which permits any robot with force sensing and compliance control features to be used with any algorithm. To configure for a given robot, a developer must feed live force, torque, and position data into the Assembly Tools package. In addition, for each task, the task frame - the approximate location and rotation of the task relative to the robot - must be imported. With this basic input, the package provides a development framework with three major features: Generalization, Visualization, and Modularity.

The user provides a set of "user configuration" information and connects their preferred robot to the input/output interface. ConnTact then processes this configuration and runs the user's algorithm as configured, providing rich visual/logging feedback.

Main Features

Generalization: The framework seeks to generalize algorithms to any location or task. This is accomplished by transforming all inputs - robot pose, force, and torque inputs - into task space, that is, it converts all spatial measurements to be relative to the task frame. For example, in the case of an Ethernet connector insertion task, given the location of the Ethernet socket relative to the robot, the development environment would provide position data to the algorithm relative to the socket. A command to move the plug to position (x,y,z) = (0,0,0.05) would place the Ethernet plug 5cm from the socket. A force of (0,0,10) always indicates that the gripper is experiencing a force away from the socket and parallel to its axis, even if the socket were upside-down on the underside of a workpiece. This allows the user to focus their efforts on algorithm development with the confidence that their work is applicable to any task location or orientation.
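A small numerical sketch of that idea follows (illustrative only, not ConnTact's actual API): given the task frame's pose in the robot base frame, points transform with the full homogeneous matrix, while force vectors rotate only.

```python
# Sketch of the "task space" idea: express robot-frame measurements
# relative to the task frame, so an algorithm sees the same numbers no
# matter where or how the socket is mounted. Values are illustrative.
import numpy as np


def make_frame(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


# Task frame: a socket 0.5 m ahead of the robot, rotated 180 deg about x
# (i.e. mounted upside-down on the underside of a workpiece).
R_flip = np.diag([1.0, -1.0, -1.0])
T_robot_task = make_frame(R_flip, np.array([0.5, 0.0, 0.3]))

# A tool position and a measured force, both in the robot base frame.
p_robot = np.array([0.5, 0.0, 0.25, 1.0])  # homogeneous point
f_robot = np.array([0.0, 0.0, -10.0])      # force vector [N]

T_task_robot = np.linalg.inv(T_robot_task)
p_task = T_task_robot @ p_robot             # point: full transform
f_task = T_task_robot[:3, :3] @ f_robot     # vector: rotation only

print("position in task frame:", p_task[:3])  # 5 cm "above" the socket
print("force in task frame:   ", f_task)      # +10 N away from the socket
```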

Visualization: The framework provides visualization options using Python plotting libraries familiar to any modern programmer. Input position and forces are mathematically processed to provide reliable speed, force, torque, and position data streams which are easily displayed at runtime. This built-in visualization is meant to equip every program with a greater degree of transparency.

Modularity: Finally, we facilitate modular algorithm development with a state machine-based control loop. The example implementation specifies a finite number of states and provides the conditions needed to transition between them. The user can reuse existing algorithmic motion routines from the open-source repository to rapidly produce useful programs. They can also develop their own algorithm modules and easily add them to existing program structures.
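Below is a minimal sketch of such a state-machine control loop, with hypothetical state names rather than ConnTact's actual classes: each handler runs for one control cycle and returns the next state, so motion routines stay modular and reusable.

```python
# Minimal state-machine sketch in the spirit described above
# (hypothetical states and thresholds; not ConnTact's actual API).
class AssemblyStateMachine:
    def __init__(self):
        self.state = "approach"
        # Map each state to the handler that runs it for one cycle.
        self.handlers = {
            "approach": self.approach,
            "spiral_search": self.spiral_search,
            "insert": self.insert,
            "done": lambda reading: "done",
        }

    def step(self, reading):
        """Run one control cycle; each handler returns the next state."""
        self.state = self.handlers[self.state](reading)
        return self.state

    def approach(self, reading):
        # Move down until a flat surface is felt (force spikes upward).
        return "spiral_search" if reading["fz"] > 5.0 else "approach"

    def spiral_search(self, reading):
        # Slide in a spiral until the peg drops into the hole.
        return "insert" if reading["z"] < -0.002 else "spiral_search"

    def insert(self, reading):
        # Push in compliantly until the insertion depth is reached.
        return "done" if reading["z"] < -0.015 else "insert"


sm = AssemblyStateMachine()
for reading in [{"fz": 1.0, "z": 0.0}, {"fz": 6.0, "z": 0.0},
                {"fz": 6.0, "z": -0.003}, {"fz": 6.0, "z": -0.016}]:
    print(sm.step(reading))
# approach -> spiral_search -> insert -> done
```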

Some sample algorithm modules currently included:

  • Moving in a linear direction until a flat surface is detected.
  • Searching in a spiral pattern across a surface for a low spot, such as a connector falling a short way into its socket.
  • Moving compliantly in a specified direction until colliding with a rigid obstacle.
  • Probing in different directions to determine rigid constraint locations.

Left: Top-down/side-on motion and force readings are displayed live.

Right: The robot streams data to RViz (the standard ROS visualization tool) to show live pose and live F/T vectors.

Code Release

The basic work-in-progress framework is now publicly available at https://github.com/swri-robotics/ConnTact, and work will proceed over the coming months to add features and documentation. NIST and SwRI welcome any feedback to improve the framework. We aim to make a simple, straightforward, and powerful system that brings agile compliant robotic assembly to a wider developer community and tactile robot sensing to the forefront.

NIST Launching New Website Focusing on AI for Manufacturing Robotics

In a recent meeting, NIST's Craig Schlenoff - representing a ROS-I Consortium Americas member and long-time supporter of open source for industry - noted some recent developments at NIST and provided the announcement below for the broader ROS-I community. Thanks to NIST for continuing to keep robotics, AI, and the standards that seek to bring order to the space front and center!

Dear Robotics Enthusiasts,

NIST is pleased to announce the launch of a new community hub for research on AI for Manufacturing Robotics. Link: https://sites.google.com/view/ai4manufacturingrobotics/

On this site you’ll find several resources you may find of interest, including:

  • Curated lists of relevant datasets, papers, and software repositories
  • Learning resources on AI, Robotics, and the Manufacturing Industry
  • Community content such as our on-going Research Spotlight article series
  • Announcements/archives of workshops, conferences and more.

Please be sure to bookmark the site and join our Slack at https://tinyurl.com/ai-mnfg-robotics if you find it helpful; new material is being added regularly.


Thanks to Craig Schlenoff and Pavel Piliptchak of NIST for providing this update.

It's a wrap for ROS-Industrial Asia Pacific Workshop 2021!

The annual ROS-Industrial Asia Pacific Workshop took place on 18 August 2021 as a one-day digital webinar. The workshop was opened by our Guest-of-Honor, Professor Alfred Huan, Assistant Chief Executive of SERC at A*STAR, who gave an overview of the robotics and automation ecosystem in Singapore, as well as how the adoption of ROS has proliferated throughout Asia Pacific.

After the opening, our keynote speaker, Mr Tan Jin Yang, Senior Manager at Changi Airport Group, spoke on “Robotics Middleware Framework (RMF) for Facilities Management”. With the increasing number of robots being used to augment the workforce, effectively managing and ensuring interoperability across disparate fleets of robots is growing in importance. He shared how RMF is being trialled at Changi Airport Terminal 3 to address task scheduling, automation, and infrastructure sharing across the various robot brands the airport operates.

Next, Mr Darryl Lee, Consortium Manager of ROS-Industrial Asia Pacific at the Advanced Remanufacturing and Technology Centre (ARTC), presented “Accelerating Industry Adoption of ROS2 based Technology in Asia Pacific”. He covered the latest developments within the ROS-Industrial Consortium Asia Pacific team, such as the formally released easy_perception_deployment (EPD) and easy_manipulation_deployment (EMD) ROS2 packages, ROS quality badges, and ongoing projects such as the Changi Airport Group trials presented by Jin Yang earlier.

Mr Matt Robinson, Consortium Manager of our ROS-Industrial counterpart in the Americas at Southwest Research Institute (SwRI), presented “Lowering the Barrier for Industry Personnel to Leverage Open Source”, where he laid out the development strategy and shared some of the useful tools and capabilities developed at SwRI, such as the offline toolpath planner, visual programming, ROS Workbench, and many other open-source advancements.

Dr Dave Coleman, CEO and roboticist at PickNik Robotics, presented “MoveIt 2: Land, Sea, Space”, where he gave examples of how robots have been used effectively on land, at sea, and in space. He also covered the diverse applications of MoveIt, the successful migration of MoveIt to ROS2, the hardware integration challenges encountered during the migration, and the upcoming roadmap, with the possibility of MoveIt 3 in the making.

Dr Jan Becker, President, CEO, and Co-Founder of Apex.AI, presented “Apex.OS - A safety-certified software framework based on ROS 2”. He spoke about the Apex.OS software framework based on ROS 2, which has been certified according to ISO 26262 - the automotive functional safety norm - at the highest safety level, ASIL D. He also described the advantages of working with open APIs and outlined the efficient software development process that led to the functional safety certification. Truly a remarkable milestone for the open-source industry!

Shortly after, Nicholas Yeo, Senior Director of Advanced Technology, Asia Pacific, at Johnson & Johnson, shared that they are calling for like-minded partners to collaborate and support their effort to realise the value of advancements in robotics, driving agility and resiliency in their supply chain environment. He then shared their early successes, strategies, and approaches towards developing a successful framework using open-source technologies.

After the intermission break, we invited Marco A. Gutierrez, a software engineer at Open Robotics, to speak on “Roadmap Update: Ignition and ROS2”, where he gave a brief overview of the latest developments in two projects enabling robust robot behaviour across a wide variety of robotic platforms. Ignition, the new and improved simulation tool, has its origins in Gazebo classic. It is currently on the Edifice release, in which Open Robotics addressed improvements to both Mac and Windows support as well as the design for enhanced distributed simulation. The next LTS release, Ignition Fortress, due around September 2021, will focus on improvements to the GUI tools, sensors, SDFormat, rendering, and overall performance. For ROS2, he mentioned that support for ROS1's last LTS release, Noetic Ninjemys, will end in May 2025, and encouraged the audience to migrate to ROS2. ROS2 Galactic Geochelone will address middleware, tooling, quality, performance, and documentation.

We also had Zeng Yadan, a Research Associate in the School of Mechanical and Aerospace Engineering at Nanyang Technological University (NTU), present “Automation of Food Handling Based on 3D Vision Systems Under ROS”. She shared a use case of robotics and automation for meal assembly, in which their system used a delta robot and a SCARA robot together with RGBD cameras and conveyor belts. The automated process assembles a full range of food items for meals served in hospitals, in-flight, fast-food chains, and more, and the technologies applied can minimize the risk of infection and viral contamination during food assembly. A very interesting use case of robotics for food assembly by NTU!

Felix von Drigalski, a Senior Researcher at OMRON SINIC X, presented “Machine Assembly with MoveIt”, where he traced the evolution of their robot system from 2018 to 2020. He shared techniques and advice such as in-hand pose estimation, whereby extrinsic manipulation can be used to align objects, along with L-plate reorientation, dynamic centering, and software structure. He also discussed which features are available (or coming up) in MoveIt and ROS, and how to avoid common pitfalls.

Timothy Tan, Robotics Lead and Senior Systems Engineer at GovTech, presented “Robotics Adoption & RMF”, covering the current Robotics Middleware Framework architecture and how it can be further enhanced in the future for optimal deployment.

Last but not least, we had Harshavardhan Deshpande, a Research Engineer at ROS-Industrial Europe, Fraunhofer IPA, presenting “New Horizons for European Open Source Robotics”, where he summarized the outcomes and Focused Technical Projects (FTPs) of the ROSIN project, and introduced their next funding programme focusing on cognitive robotics and AI innovation, with ROS-I as a lighthouse.

A summarized table of all the speakers, including presentation slides and recordings, is now available here!

To conclude this year’s ROS-Industrial Workshop Asia Pacific, we thank every speaker for their presentation during the webinar. The ROS-Industrial Consortium Asia Pacific at ARTC will continue to work closely with our industry partners, providing training opportunities for aspiring roboticists as well as companies that are embarking on leveraging ROS to scale their robotics adoption.

On behalf of the ROS-Industrial Team at ARTC, we hope that you enjoyed the webinar as much as we did, and we look forward to meeting each other in 2022 for future ROS-Industrial activities!