Field notes from Automate 2019, and why we’re bullish on ROS2

What makes a good industrial automation demonstration? When we started preparing for Automate 2019 back in January, a few key points came to mind. Our specialty in SwRI’s Manufacturing and Robotics Technology Department is advanced robotic perception and planning, so we decided that the robot should perform an authentic dynamic scan-and-plan process on a previously-unseen scene – as far away as we could get from a “canned” demo. We also wanted the demo to be an interactive experience to help drive discussion with visitors and entertain onlookers. These goals led us to the tube threading concept: a human would bend a piece of shiny metal tubing into a novel shape, and the robot would perceive it and plan a path to sweep a ring along it.

Michael Ripperger & Joseph Schornak on location at Automate 2019

Developing a demo system presents an opportunity to explore new ideas in a low-risk environment, because the schedule and deliverables are primarily internally motivated. Since my group had limited previous exposure to ROS2, we decided that our Automate demo should use ROS2 to the greatest possible extent. The original vision was that the system would be composed entirely of ROS2 nodes. However, given the practical requirement of getting everything working before the ship date, we settled on a joint ROS/ROS2 environment, with the ROS motion planning and GUI nodes communicating with the ROS2 perception nodes across the ROS-to-ROS2 bridge.
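
To give a flavor of the hybrid setup, here is a minimal sketch of what the ROS2 side could look like as a Python launch file. The package names, executable names, node names, and parameters below are placeholders rather than our actual launch configuration, and the ROS planning and GUI nodes would talk to these nodes through a separately running ros1_bridge.

```python
# Hypothetical ROS2 launch file for the ROS2 side of the demo (a sketch, not the
# actual Automate launch configuration). Package and executable names are
# placeholders; check each driver's README for the real ones.
from launch import LaunchDescription
from launch_ros.actions import Node  # older distros (Crystal/Dashing) used node_executable/node_name


def generate_launch_description():
    return LaunchDescription([
        # RGB-D driver publishing depth images and camera info
        Node(package='realsense2_camera',            # placeholder package name
             executable='realsense2_camera_node',    # placeholder executable name
             name='d435'),
        # Hypothetical TSDF reconstruction node subscribing to the depth stream
        Node(package='yak_ros2',                     # placeholder name for a YAK wrapper
             executable='tsdf_reconstruction_node',
             name='tsdf_reconstruction',
             parameters=[{'voxel_resolution': 0.001}]),  # assumed parameter
    ])
```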

ROS2 Strengths and Challenges

In contrast to virtually every other robotics project I’ve worked on, the demo system’s perception pipeline worked consistently and reliably. Intel maintains a ROS2 driver for RealSense RGB-D cameras, which allowed us to use the D435 camera without any customization or extra development. Our YAK surface reconstruction library, based on the Truncated Signed Distance Field (TSDF) algorithm, helped us avoid the interreflection issues that would usually plague perception of shiny surfaces. After a couple of afternoons spent learning how to use the (new-to-me) VTK libraries, the mesh-to-waypoint postprocessor could consistently convert tube scans into trajectory waypoints. More information about this software is available in the SwRI press release or the writeup in Manufacturing Automation.
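
As a rough illustration of that last step, the sketch below (plain NumPy, not the actual VTK-based postprocessor) shows one way to turn an ordered tube centerline extracted from a reconstructed mesh into evenly spaced waypoints with tangent directions; the function name and default spacing are made up for the example.

```python
# A minimal sketch (not the actual VTK-based postprocessor) of converting an
# ordered tube centerline into evenly spaced waypoints with tangent directions.
import numpy as np

def centerline_to_waypoints(centerline, spacing=0.01):
    """centerline: (N, 3) ordered points along the scanned tube, in meters."""
    # Cumulative arc length along the polyline
    seg = np.diff(centerline, axis=0)
    arc = np.concatenate(([0.0], np.cumsum(np.linalg.norm(seg, axis=1))))
    # Resample positions at a fixed arc-length spacing
    s = np.arange(0.0, arc[-1], spacing)
    pts = np.column_stack([np.interp(s, arc, centerline[:, i]) for i in range(3)])
    # Approximate the tangent at each waypoint by finite differences
    tans = np.gradient(pts, s, axis=0)
    tans /= np.linalg.norm(tans, axis=1, keepdims=True)
    return pts, tans
```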

Block Diagram of SwRI ROS-I Automate 2019 Demonstration

Motion planning turned out to be a particularly challenging problem. Compared to a traditional robot motion task like pick-and-place, which involves planning unconstrained paths through open space, the kinematic constraints of the tube threading problem are rather bizarre. While the ring tool is axially underconstrained and can be rotated freely to the most convenient orientation, it must remain aligned with the axis of the tube to avoid collision. It’s impossible to flip the ring once it’s over the tube, so if the chosen ring orientation causes the robot to encounter a joint limit halfway down the tube, tough luck! Additionally, the robot must avoid collision between the tube and the robot hardware during motion. Our initial solution used TrajOpt by itself, but it would sometimes introduce unallowable joint flips, since it tried to optimize every path waypoint at once without a globally-optimal perspective on how best to transition between those waypoints. We added the Descartes sampling algorithm, which addressed these issues by populating TrajOpt’s seed trajectory with an approximately globally-optimal path that satisfied the kinematic and collision constraints. Planning still failed occasionally: even with a kinematically-redundant KUKA iiwa7 arm, solving paths for certain tube configurations simply wasn’t feasible[^1].
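
To make the sampling idea concrete, here is a small sketch of how the ring’s free rotation about the tube axis can be discretized into candidate tool poses at each waypoint. It is an illustration under my own assumptions (plain NumPy, a z-axis-along-the-tube tool convention, a 10-degree step), not the actual Descartes or TrajOpt API; a graph search over samples like these then picks the sequence that avoids joint flips and limits.

```python
# Sketch of Descartes-style sampling of the axially-free ring orientation.
# For each waypoint, the tool's z-axis stays aligned with the tube tangent
# while the rotation about that axis is discretized into candidates.
import numpy as np

def candidate_tool_poses(position, tangent, angular_step=np.deg2rad(10)):
    """Yield 4x4 tool poses whose z-axis is aligned with the tube tangent."""
    z = tangent / np.linalg.norm(tangent)
    # Any vector not parallel to z gives a starting x-axis
    ref = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x0 = np.cross(ref, z)
    x0 /= np.linalg.norm(x0)
    y0 = np.cross(z, x0)
    for theta in np.arange(0.0, 2 * np.pi, angular_step):
        # Rotate the x-axis about the tube tangent by the sampled angle
        x = np.cos(theta) * x0 + np.sin(theta) * y0
        y = np.cross(z, x)
        pose = np.eye(4)
        pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x, y, z, position
        yield pose
```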

TrajOpt Path Planning Implementation & Testing

[^1]: The extent of solvable tube configurations could be greatly increased by including the turntable as a controllable motion axis. Given the constraints of the iiwa7’s ROS driver, we decided that this would be, in technical software terms, a whole other can of worms.

We shipped the robot hardware about a week in advance of the exhibit setup deadline. Our reliance on ROS meant we could switch to simulation with minimal hassle, but there were some lingering issues with the controller-side software that had to wait until we were reunited with the robot the Saturday before the show[^2]. This contributed to moderate anxiety on Sunday evening as we worked to debug the system using real-world data. We had to cut some fun peripherals due to time constraints, such as the handheld ring wand that would have let visitors race the robot. By Tuesday morning the robot was running consistently, provided we didn’t ask it to solve paths for too-complicated tubes. This freed up some time for me to walk the halls away from our booth and talk to other exhibitors and visitors.

[^2]: Our lunch upon arrival was Chicago-style deep dish pizza, which conveniently doubled as dinner that evening.

More Collaborative Robots

There were collaborative robots of all shapes and sizes on display from many manufacturers. I may have seen nearly as many collaborative robots as traditional ones! A handful were programmed to interact with visitors, offering lanyards and other branded largesse to passersby. Most of them were doing “normal robot things,” albeit intermingled with crowds of visitors without any cages or barriers, and generally at a much more sedate pace than the traditional robots. Some of the non-collaborative robots were demonstrating safety sensors that let them slow down and stop as visitors approached – I usually discovered these by triggering them accidentally.

I was surprised by the number of autonomous forklifts and pallet transporters. I’m told that there were more in 2019 than at previous shows, so I’m curious about what recent developments drove growth in this space.

I learned that ROS-Industrial has significant brand recognition. I got pulled into several conversations solely because I was wearing a ROS-I polo! Many of these discussions turned to ROS2, which produced some interesting insights. Your average roboticist-on-the-street is aware of ROS2 (no doubt having read about it on this very blog), but their understanding of its capabilities and current condition might be rather fuzzy. Many weren’t sure how to describe the key differences between ROS and ROS2, and a few weren’t even aware that ROS2 has been out in the wild for three versions! I’ll unscientifically hypothesize that a key challenge blocking wider ROS2 adoption is the lack of demonstrated success on high-visibility projects. Our demo drove some good conversation to alleviate these concerns: I could show a publicly-visible robotic system heavily reliant on ROS2 and point to the open-source native ROS2 device drivers that let it function.

Showcasing Perception and Planning Potential

In terms of demo reception, people who visited our booth were impressed that we were scanning and running trajectories on previously-unseen parts. I usually had to provide additional context to show how our perception and planning pipeline could be extended to other kinds of industrial applications. There’s a tricky balance at play here – an overly abstract demo requires some imagination on the part of the viewer to connect it to an industrial use case, but a highly application-specific demo isn’t easily generalized beyond the task at hand. Since our group specializes in application-generic robot perception and planning, I think that a demo tending towards the abstract better showcases our areas of proficiency. This is a drastically different focus from other exhibits at the show, which generally advertised a specific automation process or turnkey product. I feel like we successfully reached our target audience of people with difficult automation tasks not addressed by off-the-shelf solutions.

Development of the Industrial YAK reconstruction for the Automate Demo in ROS2

While it certainly would have been easier to adapt an already-polished system to serve as a show demo, developing a completely new one from scratch was way more fun. Improvements made to our perception and planning software were pushed back upstream and rolled into other ongoing projects. We’re now much more comfortable with ROS2, to the extent that we’ve decided that from here on out new robotics projects will be developed using ROS2. The show was a lot of fun, a great time was had by all, and I hope to see you at Automate 2021!

Intelligent Part Reconstruction

It has long been a challenge in industry to use imaging or other non-contact sensors to generate reconstructions of highly specular or featureless surfaces. Shiny parts, dark surfaces, occlusion, and limited resolution all corrupt single-shot scans from first-look robotic imaging or scanning systems. A whole new class of applications could be addressed efficiently if there were a reliable way to reconstruct these surfaces well enough to enable dependable trajectories for subsequent processing.

In the context of autonomous processing of parts, a 3D depth camera generates a set of points (a “point cloud”), and a mesh is created by “stitching” those points together. Algorithms are then applied to derive surfaces and edges from the point cloud, and even to detect “engineered features” such as drilled holes. The process deteriorates when too few points are returned to the sensor (i.e., sparse data). Smooth, featureless surfaces also make it difficult to register images to each other or to organize points in a way that enables mesh creation. In the example below, there is insufficient data to create a mesh over the full scanned surface. There are techniques to mitigate this, such as coating the surface with a flat (matte) finish, but these can be cumbersome, costly, and inefficient.
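
For readers who want to experiment with this themselves, the toy sketch below uses the open-source Open3D library (not the tooling used in this work) to build a mesh from a point cloud; with a sparse or specular scan, the same steps produce a mesh full of holes. The file names are placeholders.

```python
# Toy illustration (Open3D, not the project's pipeline): derive a mesh from a
# point cloud. Sparse returns from shiny or dark surfaces leave the resulting
# mesh incomplete.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")   # placeholder file name
# Surface normals are required before surface reconstruction
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
# Poisson reconstruction "stitches" the points into a triangle mesh
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)
o3d.io.write_triangle_mesh("scan_mesh.ply", mesh)
```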

Specular sample part

In recent years, academic research in the field of on-line surface reconstruction has built on the Truncated Signed Distance Field (TSDF). The KinectFusion TSDF technique pioneered by Microsoft Research probabilistically fuses many organized depth images from 3D cameras into a voxelized distance field to estimate an average, implicit surface. The scanner is manipulated by hand, and each image’s pose is registered relative to the previous images by way of the Iterative Closest Point (ICP) algorithm. While this technique shows promise in fusing partial observations of difficult-to-scan objects, it suffers from the practical constraint that it must scan very quickly to accurately estimate scanner motion, and the surface being scanned must have sufficient features to enable tracking.
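
The core of the fusion step is a per-voxel weighted running average of truncated signed distances. The sketch below is a simplified, single-voxel version of that update written from the general KinectFusion idea; it is my own minimal formulation, not the YAK or KinectFusion source.

```python
# Simplified per-voxel TSDF update in the spirit of KinectFusion (a sketch, not
# the YAK implementation). Each new depth image nudges the voxel's stored signed
# distance toward the observed surface via a weighted running average.
import numpy as np

def integrate_voxel(voxel_cam, depth_image, K, tsdf, weight,
                    trunc=0.01, max_weight=100.0):
    """voxel_cam: voxel center in the camera frame (meters); K: 3x3 intrinsics."""
    # Project the voxel center into the depth image
    u = K @ voxel_cam
    px, py = int(u[0] / u[2]), int(u[1] / u[2])
    if not (0 <= py < depth_image.shape[0] and 0 <= px < depth_image.shape[1]):
        return tsdf, weight                  # voxel falls outside the camera's view
    measured_depth = depth_image[py, px]
    if measured_depth <= 0.0:
        return tsdf, weight                  # no valid return at this pixel
    # Signed distance from the voxel to the observed surface along the optical axis
    sdf = measured_depth - voxel_cam[2]
    if sdf < -trunc:
        return tsdf, weight                  # voxel is far behind the surface: skip
    sdf = min(sdf, trunc)                    # truncate in front of the surface
    # Weighted running average of the signed distance
    new_weight = min(weight + 1.0, max_weight)
    new_tsdf = (tsdf * weight + sdf) / (weight + 1.0)
    return new_tsdf, new_weight
```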

The TSDF-based reconstruction process only produces good results if the sensor gets good views of as much of the surface as possible. This is a fairly intuitive task for a human, since we can look at the partially-reconstructed surface, recognize which areas are incomplete, and move the camera to compensate.

It’s much more difficult for a robot to make these decisions. One way to approach this problem is to track which areas around the surface have and haven’t been seen by the camera. The robot can take an initial measurement, see which areas haven’t been viewed, and pick a new view that looks at these unknown regions. This lets the robot discover that it doesn’t have information about the back side of a wall and decide that it needs to move the camera to the opposite side of the work area to look at the obscured surface.

In this implementation, candidate views around the volume are randomly generated within a range of angles and distances. From each candidate pose, rays corresponding to the camera’s field of view are cast into the volume, and we count how many of those rays hit unknown voxels. The next best view is the one that hits the most unknowns, and the robot tries to move to this view to explore more of the part.
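
A minimal sketch of that scoring step is below, assuming a dense voxel grid stored as a 3D NumPy array in which -1 marks unknown, 0 free, and positive values occupied; the data layout, step size, and candidate-view structure are my assumptions, not the actual implementation.

```python
# Sketch of next-best-view scoring: march each ray through the voxel grid and
# count rays whose first "interesting" hit is an unknown voxel.
import numpy as np

def score_view(grid, origin, directions, voxel_size=0.005, max_range=1.0):
    """grid: 3D array with -1 = unknown, 0 = free, >0 = occupied (assumed layout)."""
    unknown_hits = 0
    steps = int(max_range / voxel_size)
    for d in directions:                       # unit ray directions in the grid frame
        for k in range(1, steps):
            p = origin + d * (k * voxel_size)
            idx = tuple((p / voxel_size).astype(int))
            if any(i < 0 or i >= n for i, n in zip(idx, grid.shape)):
                break                          # ray left the volume
            if grid[idx] == -1:                # first unknown voxel along this ray
                unknown_hits += 1
                break
            if grid[idx] > 0:                  # occupied voxel blocks the ray
                break
    return unknown_hits

# The next best view is simply the candidate with the highest score, e.g.:
# best = max(candidate_views, key=lambda v: score_view(grid, v.origin, v.directions))
```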

Next-best-view (NBV) candidate evaluation

The results have been very promising. The combination of TSDF and Next Best View (NBV) planning within this work has resolved a number of the issues encountered in a prior Robotic Blending Focused Technical Project (FTP). The first of two primary metrics was mesh completeness: a complete part mesh was created where insufficient returns had previously left “holes” in the data. A before-and-after example can be seen below.

Aluminum bracket: reconstruction before and after

The second metric was the ability to generate trajectories within the compliance of the tool used in the robotic blending work, which in this case was approximately 2 cm. You can see in the video of this aluminum sample that the tool follows the arc and neither bottoms out nor lifts off of the part. While somewhat qualitative, operating within this compliance range was not possible before the development of this TSDF + NBV implementation.

Future work seeks to refine this tool set into a more cohesive set of packages that can then be contributed to the ROS-Industrial community. In the meantime, further testing to understand the limitations of the current implementation, and subsequent performance improvements, are slated in conjunction with other process development initiatives.

Check back here for more information and/or updates, or feel free to inquire directly about this capability: matt.robinson <at> swri.org.

Through 2018 and into 2019, additional developments have taken place, and we look forward to providing an open-source implementation over at github.com/ros-industrial-consortium. See below for some updates on demonstrations and outputs.

An intro to how Intelligent Part Reconstruction, a TSDF-based approach, allows for the creation of improved meshes to facilitate planning over large featureless or highly specular surfaces: https://rosindustrial.org/news/2018/1/3/intelligent-part-reconstruction
Improved dynamic reconstruction on polished stainless steel conduit, running at the frame rate of the sensor. This appears in the demonstration within the SwRI booth at Automate 2019.