Intelligent Part Reconstruction

It has long been a challenge in industry to use cameras, or other non-contact sensors, to generate reconstructions of highly specular or featureless surfaces. Shiny parts, dark surfaces, occlusion, and limited resolution all corrupt single-shot scans in first-look robotic imaging or scanning systems. A whole new class of applications could be addressed if there were an efficient way to reconstruct these surfaces well enough to plan reliable trajectories for subsequent processing.

In the context of autonomous part processing, a mesh is created by “stitching” together the points returned by a 3D depth camera, known collectively as a “point cloud.” Algorithms are then applied to derive surfaces and edges from the point cloud, and even to detect “engineered features” such as drilled holes. The process deteriorates when few points are returned to the sensor (i.e., sparse data). Smooth, featureless surfaces also make it difficult to “stitch” successive images together or to organize points in a way that enables mesh creation. In the example below, there is insufficient data to create a mesh over the full scanned surface. There are techniques to mitigate this phenomenon, such as coating the part with a flat (matte) spray, but these can be cumbersome, costly, and inefficient.

[Image: specular sample part with an incomplete reconstructed mesh]
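To make the conventional pipeline concrete, here is a minimal sketch using the open-source Open3D library. The file name and parameter values are illustrative only, not taken from this project:

```python
import open3d as o3d

# Load the raw point cloud returned by the 3D depth camera.
pcd = o3d.io.read_point_cloud("scan.ply")  # hypothetical file name

# Meshing needs per-point normals, estimated from local neighborhoods.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# Poisson reconstruction "stitches" the points into a surface. Where the
# sensor returned few points (shiny or dark regions), the result has holes
# or spurious geometry: the failure mode shown above.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)
o3d.io.write_triangle_mesh("part_mesh.ply", mesh)
```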

In recent years, academic research in the field of on-line surface reconstruction has built on the Truncated Signed Distance Field (TSDF). The KinectFusion TSDF technique pioneered by Microsoft Research involves probabilistically fusing many organized depth images from 3D cameras into a voxelized distance field to estimate an average, implicit surface. The scanner is manipulated by hand, and each image’s pose is registered relative to the previous images by way of the Iterative Closest Point (ICP) algorithm. While this technique shows promise in fusing partial observations of difficult-to-scan objects, it suffers from the practical constraint that it must scan very quickly to accurately estimate scanner motion, and the surface being scanned must have sufficient features to enable tracking.
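For readers interested in the mechanics, the following is a minimal numpy sketch of the per-image TSDF integration step. The function and variable names are our own, and real implementations such as KinectFusion run this per voxel on the GPU; the idea is that each registered depth image nudges a running weighted average of truncated signed distance stored in every voxel:

```python
import numpy as np

def integrate(tsdf, weight, voxel_centers, depth_img, K, T_cam, trunc=0.01):
    """Fuse one registered depth image into the voxel grid.

    tsdf, weight  : (N,) running distance average and observation count
    voxel_centers : (N, 3) voxel centers in world coordinates
    depth_img     : (H, W) metric depth image
    K             : (3, 3) camera intrinsics
    T_cam         : (4, 4) world-to-camera pose from registration (e.g. ICP)
    """
    # Transform voxel centers into the camera frame and project to pixels.
    p_cam = (T_cam[:3, :3] @ voxel_centers.T + T_cam[:3, 3:4]).T
    z = p_cam[:, 2]
    uv = (K @ p_cam.T).T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)

    H, W = depth_img.shape
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.full(z.shape, np.nan)
    d[ok] = depth_img[v[ok], u[ok]]

    # Signed distance along the ray, truncated to +/- trunc.
    sdf = d - z
    ok &= ~np.isnan(d) & (sdf > -trunc)  # skip voxels far behind the surface
    sdf = np.clip(sdf, -trunc, trunc) / trunc

    # Weighted running average; never-observed voxels keep weight == 0,
    # which is exactly the "unknown space" used for view planning below.
    w_new = weight[ok] + 1.0
    tsdf[ok] = (tsdf[ok] * weight[ok] + sdf[ok]) / w_new
    weight[ok] = w_new
    return tsdf, weight
```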

The TSDF-based reconstruction process produces good results only if the sensor gets good views of as much of the surface as possible. This is a fairly intuitive task for a human: we can look at the partially reconstructed surface, recognize which areas are incomplete, and move the camera to compensate.

It’s much more difficult for a robot to make these decisions. One way to approach this problem is to track which areas around the surface have and haven’t been seen by the camera. The robot can take an initial measurement, see which areas haven’t been viewed, and pick a new view that looks at these unknown regions. This lets the robot discover that it doesn’t have information about the back side of a wall and decide that it needs to move the camera to the opposite side of the work area to look at the obscured surface.

In this implementation, views around the volume are randomly generated within a range of angles and distances. Rays corresponding to the camera’s field of view are cast from each pose, and the algorithm counts how many of these rays hit unknown voxels. The next best view is the one that hits the most unknowns, and the robot tries to move to this view to explore more of the part.
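A simplified sketch of that scoring loop, again in numpy with illustrative names (`rays_for` is a hypothetical helper standing in for whatever generates the camera’s field-of-view rays for a candidate pose):

```python
import numpy as np

def score_view(origin, ray_dirs, is_unknown, grid_origin, voxel_size,
               step=0.005, max_range=1.0):
    """Count distinct unknown voxels hit by rays from one candidate pose."""
    dims = np.array(is_unknown.shape)
    hits = set()
    for d in ray_dirs:                              # one ray per sampled pixel
        for t in np.arange(step, max_range, step):  # march along the ray
            idx = np.floor((origin + t * d - grid_origin) / voxel_size).astype(int)
            if np.any(idx < 0) or np.any(idx >= dims):
                continue
            if is_unknown[tuple(idx)]:
                hits.add(tuple(idx))
    return len(hits)

# The next best view is simply the candidate that sees the most unknown space.
# `candidates` is a list of camera poses sampled at random within the allowed
# range of angles and distances; pose[0] is the camera origin.
def next_best_view(candidates, rays_for, is_unknown, grid_origin, voxel_size):
    return max(candidates, key=lambda pose: score_view(
        pose[0], rays_for(pose), is_unknown, grid_origin, voxel_size))
```

A real implementation would also terminate each ray at the first observed-occupied voxel, since the camera cannot see through surfaces; the sketch omits that for brevity.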

[Image: next-best-view (NBV) candidate selection around the scan volume]

The results have been very promising. The combination of TSDF and Next Best View (NBV) in this work has resolved a number of the issues encountered in a prior Robotic Blending Focused Technical Project (FTP). The first of two primary metrics was mesh completeness: a complete part mesh is now created where insufficient returns previously left “holes” in the data. A before-and-after example can be seen below.

[Image: aluminum bracket, before-and-after reconstruction comparison]

The second metric was the ability to generate trajectories within the compliance of the tool used in the robotic blending work, in this case approximately 2 cm. In the video of this aluminum sample, the tool follows the arc of the part without bottoming out or lifting off of the surface. While somewhat qualitative, operating within this compliance range was not possible before the development of this TSDF + NBV implementation.

Future work seeks to refine these tools into a more cohesive set of packages that can then be contributed to the ROS-Industrial community. In the meantime, further testing to understand the limitations of the current implementation, along with subsequent performance improvements, is slated in conjunction with other process development initiatives.

Check back here for more information and/or updates, or feel free to inquire directly about this capability: matt.robinson <at> swri.org.

Through 2018 and into 2019, additional developments have taken place, and we look forward to providing an open-source implementation over at github.com/ros-industrial-consortium. See below for some updates on demonstrations and outputs.

An intro to how Intelligent Part Reconstruction, a TSDF-based approach, allows for the creation of improved meshes to facilitate planning over large featureless or highly specular surfaces. https://rosindustrial.org/news/2018/1/3/intelligent-part-reconstruction
Improved dynamic reconstruction on polished stainless steel conduit running at the frame rate of the sensor. This appears in the demonstration within the SwRI booth at Automate 2019.
