Perception-Based Region Selection for Human to Robot Collaboration

Background

The manufacturing community needs robotic systems that can collaborate with humans on the factory floor, but collaborative robotic solutions are still lacking in many respects. One such problem appears in quality control for subtractive manufacturing applications such as sanding, grinding, and deburring, where material is removed from a part with an abrasive tool until a desired surface condition is obtained. In these scenarios the quality of the finish can be assessed by an expert human operator, so it would be very advantageous to leverage that expertise to guide semi-automated robotic systems to work on the regions that need further attention until the desired quality is achieved. Given this challenge, this research focused on enhanced human-robot collaboration by producing a capability that allows a human operator to guide the process by physically drawing a closed selection region on the part itself. This region is then sensed by a vision system coupled with an algorithmic solution that crops out the sections of the nominal process toolpaths falling outside the confines of the region.

Approach

A small dataset of hand-drawn closed-region images was first produced to aid the initial development of the 2D contour detection method and its projection into 3D. These images were made with a dark marker on white paper lying on a flat surface and imaged with the Framos D435 camera. The resulting 2D contour method was implemented with the OpenCV open-source library and comprised the following steps: grayscaling, thresholding, dilation, Canny edge detection, and contour finding. The output of this operation was the 2D pixel coordinates of the detected contours (Figures 1.a and 1.b).
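As a rough illustration, a minimal sketch of this pipeline using the OpenCV C++ API might look like the following (the threshold, kernel, and file names are illustrative, not the tuned values from the project):

```cpp
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch of the described pipeline: grayscale -> threshold -> dilate ->
// Canny -> findContours, returning 2D pixel coordinates of each contour.
std::vector<std::vector<cv::Point>> detectContours(const cv::Mat& bgr)
{
  cv::Mat gray, binary, dilated, edges;
  cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);                   // grayscaling
  cv::threshold(gray, binary, 150, 255, cv::THRESH_BINARY_INV);  // dark marker -> white
  cv::dilate(binary, dilated, cv::Mat(), cv::Point(-1, -1), 2);  // thicken strokes
  cv::Canny(dilated, edges, 50, 150);                            // edge detection
  std::vector<std::vector<cv::Point>> contours;
  cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
  return contours;  // 2D pixel coordinates of the detected contours
}

int main()
{
  cv::Mat image = cv::imread("hand_drawn_region.png");  // hypothetical input
  return detectContours(image).empty() ? 1 : 0;
}
```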

Figure 1a. Amoeba 2D detection

Figure 1b. Box 2D detection

The following stage used the 2D pixel coordinates to locate the corresponding 3D points in the point cloud associated with the image; this was possible because the 2D image and the point cloud had the same dimensions. Additional filters were then applied, and adjacent lines were merged to form larger segments. In the final steps, the segments were classified as open or closed contours, and normal vectors were estimated. Results are shown in Figures 2.a and 2.b. Additional datasets were collected under varying conditions, such as thicker and thinner lines, curved surfaces, and multiple images containing parts of the same closed contour. These datasets allowed refining the method and addressing corner cases that emerged under more challenging conditions, such as regions spanning multiple images (Figures 3.a, 3.b, 3.c).
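Because the image and the point cloud are pixel-aligned and share the same dimensions, the 2D-to-3D lookup amounts to indexing the organized cloud with the contour's pixel coordinates. A minimal sketch, with illustrative names and assuming a PCL organized cloud, might look like this:

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Map 2D contour pixels to 3D points; valid only for organized clouds whose
// width/height match the image used for contour detection.
std::vector<pcl::PointXYZ> contourTo3D(
    const std::vector<cv::Point>& contour_px,
    const pcl::PointCloud<pcl::PointXYZ>& organized_cloud)
{
  std::vector<pcl::PointXYZ> contour_3d;
  contour_3d.reserve(contour_px.size());
  for (const cv::Point& px : contour_px)
  {
    const pcl::PointXYZ& p = organized_cloud.at(px.x, px.y);  // at(column, row)
    if (std::isfinite(p.z))  // skip pixels with no depth return
      contour_3d.push_back(p);
  }
  return contour_3d;
}
```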

Figure 2a. Triangle region detection

Figure 2b. Amoeba region detection

Figure 3a. Box multi-image 2D contour

Figure 3b. Box multi-image 2D contour

Figure 3c. Box multi-image region detection

Accomplishments

This research led to the creation of an open-source C++ library for region detection that can be reused in applications with similar human-robot collaboration needs. The repository can be found at https://github.com/swri-robotics/Region-Detection.

Furthermore, the work was featured as part of a recent ARM Institute project called Collaborative Robotic Sanding, with Spirit AeroSystems as the principal investigator. An excerpt of the demonstration video highlighting the region detection is included below.

Collaborative Robot Sanding with ROS2 for Aerospace Applications

Starting in mid-2019, a project led by Spirit AeroSystems and funded by the ARM Institute kicked off around an idea to develop a complete collaborative robotic sanding application. The goal was to have the robot perform roughly 80% of the repetitive sanding tasks, while process experts performed the detailed work, oversaw the robot, and identified areas that needed additional processing. The objective was to find an effective balance between the benefits of automation and highly skilled manufacturing personnel.

This effort involved multiple organizations. The Southwest Research Institute (SwRI) team, led by Jorge Nicho, sought to leverage a number of emerging developments around ROS2 and create a complete, functional ROS2 application that could serve as a stake in the ground for how to build industrial applications in ROS2. It had been noted at a prior ARM Institute meeting that stakeholders were interested in the development and maturation of ROS2, so this project became a great opportunity to step forward and build an application in ROS2.

In parallel, the SwRI/ROS-I Americas team also provided ROS-Industrial training and assisted the Wichita State University partner in identifying suitable ROS curriculum elements to incorporate into their academic programs. A key outcome would be the pilot of a ROS-based introduction to advanced robotics systems offered as part of the technical program at Wichita State. Such a program would help develop technician skills for working with ROS-based systems.

New Capabilities Developed within the Program

On the application development side, two new features were needed. The first was the ability for the robot to apply a constant force and maintain a constant tool speed while executing the sanding process trajectories. The selected robot, the Universal Robots UR10e, has a force-feedback feature, but simultaneous control of constant force while executing a trajectory at a constant velocity was not available out of the box. A recent post on the SwRI Innovations in Automation blog details that specific development.

Second, to enable human-robot collaboration, operators needed to be able to mark directly on the part to indicate an area requiring additional processing; the system would then recognize the marks and only process within the marked regions. More details on these two developments are forthcoming.

Figure 1. Gazebo view of the Collaborative Robotic Sanding application

The ROS2 application leveraged Gazebo, Figure 1, to allow for richer emulation of the depth sensors and robot trajectory execution. This aided with development and verification of the localization technique, as the parts could be easily located within the working envelope of the simulated system. The ros_scxml package was utilized for creation of the application state machine, Figure 2, simplifying the state machine creation process, and allowing for more efficient updating as the application matured.

Figure 2. SCXML node diagram

Additional features, such as the ability to assess reachability in the context of the part, were also included; the output can be seen in Figure 3 below.

Figure 3. Reachability assessment within the application GUI

The development resulted in an application, complete with a graphical user interface (Figure 4), integrated at Wichita State University (WSU) in collaboration with Spirit AeroSystems. Post-integration, the system was tested against the requirements for successful sanding of the part in TRL 6 testing trials. Initially there were some performance limitations. First, when using the manipulator to apply force, it was found that the performance of the force application degraded as the manipulator extended into a near-singularity configuration. This affects which regions can be effectively processed: for optimal force application and execution there is a relationship between the efficiency of the force response and the configuration of the arm, e.g. the arm being fully extended outward from the base while attempting to process a distant surface area.

Figure 4. Application GUI

Bridging ROS2 with ROS1-Based Manipulator

Additionally, the manipulator currently only has a ROS1 interface. Since the application was built in ROS2, the team had to leverage the ROS1-to-ROS2 bridge. While this works for basic functionality, there were issues with the reliability of the communication between the application and the manipulator. This has been identified as an area for continuous improvement, and an effort to develop a ROS2 interface for Universal Robots is underway, with support from PickNik, Spirit AeroSystems, TU Delft, FZI Research, SwRI, and of course Universal Robots.

Overall, this was a successful program, and the key metrics for the top-level program were realized once it was understood how to best leverage the solution as developed. A final demonstration video, included below, was produced by the Spirit AeroSystems team. We believe the software side of the application will serve as a complete example of a ROS2-based industrial application, and we hope the community will find it of interest. The application repository can currently be found at: https://github.com/swri-robotics/collaborative-robotic-sanding

Thanks to our partners at Spirit AeroSystems and Wichita State University for their collaboration, feedback, and partnership on this project. Thanks to the ARM Institute as well for their support.

First Impressions Using Tesseract Ignition

I have a robotics application where I am using an ABB robot in a fairly simple environment with no complex motions expected. I decided this would be a good candidate for testing the new Tesseract Ignition tools and the new Tesseract command language feature that is currently in development on GitHub alongside the main master branch of Tesseract (TrajOpt and Descartes are also being updated to work with these new changes). After installing the Tesseract Ignition tools, I was able to load our URDF, which adds a simple end effector to a standard ABB IRB 2400 model. The Tesseract Setup Wizard allowed me to easily generate an allowed collision matrix with a single click after loading in my URDF, and the kinematic groups tool was also very easy to use to add my desired kinematic chain for the motion planner. From there I had an SRDF configuration file usable in the Tesseract motion planning environment.

In software, I was then able to extend the new Tesseract command language planning server class and add my own planning profiles for freespace, transition, and raster motions, which allow for parallel processing of motion plans in a variety of pre-made taskflow structures. Once finished, I could easily load in a toolpath and quickly generate paths using the new planning server. The new Tesseract Ignition visualization tool allowed me to visualize all the target waypoints and the robot motion. (Note: I had to build Tesseract Ignition from source to get the visualization working for now.)

Overall, this integration process has been user friendly and allows me to spend less time worrying about the details of the motion planning process.

Editor's Note: Tyler Marr is a Research Engineer at Southwest Research Institute and has been developing and deploying ROS-based systems involving autonomous motion planning during his time at SwRI.

Tesseract Setup Wizard Leveraging Ignition

Southwest Research Institute (SwRI) is excited to announce that it has adopted Ignition Robotics Software as the visualization tool set for the Tesseract Motion Planning Framework. Ignition Robotics Software provides a modular set of libraries related to rendering, GUI, math, physics and much more.

Over the past few years SwRI has received several inquiries about richer visualization, simulation, and ease-of-use tools for industrial automation applications, allowing a user without programming experience to perform tasks that leverage the advanced capabilities provided by ROS. The goal was to first start with something simple that would add value to the open-source community, so we chose to begin by developing a setup wizard for the Tesseract Motion Planning Framework; more details are provided further down.

If you are familiar with the current tools within ROS, you may be asking yourself why we chose to leverage Ignition Robotics Software over something like RViz, Robot Web Tools, etc. In my opinion, the Ignition Robotics Software is more user-experience focused, while the others are more developer focused, for a specific platform. The Ignition GUI leverages Qt Quick, which provides several advantages over the legacy Qt Widgets. These advantages allow it to be used not only on a desktop but also on tablets and smartphones, along with multiple methods for web deployment, opening up the possibility of leveraging this tool much like an industrial Human Machine Interface (HMI). In addition, Qt Quick allows for a cleaner separation of the UI development from the business logic, allowing faster development and integration.

Another aspect of the Ignition Robotics Software is its rendering capabilities, which support not only Ogre but also Ogre2 and OptiX. Because of its plugin architecture, it will most likely see support for other rendering libraries in the future. Lastly, an additional advantage is direct access to physics provided by Ignition Physics for simulating various industrial processes like sanding, grinding, and painting in the future.

The other component of this exercise was to determine how to deploy the user tools. Since we are deploying applications rather than mostly self-contained libraries, it was key to use a deployment method that lets users access these tools easily, with frequent improvements and early-access channels for testing before new features are made generally available. For this we have chosen to leverage Snapcraft and the Snap Store, provided by Canonical, for deploying these user-facing tools on Linux, and we are currently investigating MSIX for deployment on Windows.

Before I move on to providing details on Tesseract Ignition, I would like to recognize two key individuals instrumental throughout the development and decision process. I would like to recognize Louise Poubel, from Open Robotics, for her support related to the Ignition Robotics Software packages, and Kyle Fazzari, from Canonical, for his support related to building and deploying this tool to the Snap Store. Thank you both for your time and guidance on this effort and I look forward to further collaboration.

Tesseract Ignition Overview: This package provides two applications, the Tesseract Setup Wizard and Tesseract Visualization, outlined below; both can be downloaded from the Snap Store by clicking the Snap Store button below. Please see our video for a walkthrough of these tools and how you can start leveraging them now.

  • Tesseract Setup Wizard
    • Loading a URDF and SRDF
    • Defining kinematic groups
    • Defining allowed collision matrix
    • Defining group states
    • Defining group tool center points
    • Defining group opw kinematics parameters
    • Saving SRDF
  • Tesseract Visualization
    • Trajectory Simulation
    • Tool Path Visualization
    • Marker Visualization

First Virtual ROS-Industrial Training focused on ROS2 in the Americas

SwRI recently hosted a training session for ROS-Industrial Americas consortium members, led by instructors Josh Langsfeld, David Merz, and Randall Kliman. As it was held in the middle of the COVID-19 pandemic, the traditional in-person format was replaced with a brand-new virtual training method, using videoconferencing and virtual machines running in the cloud instead of students’ own laptops. The overall schedule was similar to previous training sessions, with two days focused on exercises designed to teach the basics of ROS and ROS-Industrial, followed by a more free-form lab day where the students could work on longer exercises. The class was held in a Zoom meeting that ran all day long, enabling easy interaction between the instructors and students during both the presentations and the exercise times.

Training live on the Zoom feed, while development exercises took place on AWS EC2

To provide students with an Ubuntu environment, we opted for a new approach where we set up virtual machines using Amazon’s Elastic Compute Cloud (EC2) service and asked the students to log in using a remote desktop protocol. We were pleased to work with Amazon in setting up this arrangement and they helped by preparing an Ubuntu 18.04 base image with ROS Melodic preinstalled. This let us start up a whole set of virtual machine instances, one for each student, ready to go for training. These instances were made accessible to the public internet and so all students were able to directly log in using only an IP address and a provided key file. This approach turned out to be quite robust and no one had any issues accessing the cloud instances. The use of EC2 virtual machines also enabled easy instructor-student interactions, as the instructors could also log into the same instances and see exactly what the student was seeing. We used this to great effect along with Zoom breakout rooms to engage in one-on-one troubleshooting with the students, both guiding the students with next steps to take and even controlling their machine to help with more complicated problems. Overall, the virtual training experience was quite smooth and it is likely we will keep it as an option going forward, even when in-person trainings are able to resume.

And if attempting virtual training for the first time was not enough, this training also marked a milestone in the ongoing adoption of ROS2, as the basic material and exercises taught on the first day were updated to use ROS2, without assuming prior knowledge of ROS1. The first day covers the fundamental concepts of ROS packages, messages, topics, services, and parameters, all of which are fully functional and easily demonstrated in ROS2. As ROS-Industrial is in the middle of the transition, however, the full training is not yet available in ROS2. Instead, the second day, which teaches the basic concepts specific to ROS-Industrial, including URDFs, TF, and motion planning with MoveIt, was done in ROS1. We expect this transition to continue, and soon all of this material will be available in ROS2 as well, especially now that MoveIt2 is out and ready for use. Check back on the training website over the next few months to keep an eye out for updates! We look forward to additional training sessions and to expanding what we can do with both ROS2 and the virtual training format.

Real-Time Trajectory Planning for Industrial Robots

Picture an industrial application where we want to do some work on large parts. The work is performed in a series of bays, and the parts to be processed are rolled into the bays on carts. The work bays typically have one or two people in them doing various manual tasks, or perhaps just passing through. However, one of the process tasks is difficult for a human to do. Maybe this task carries a risk of repetitive motion injuries, or maybe it requires reaching an area that is hard to reach from a standing position. It would be very desirable to automate this task using a robot, but there are some challenges that limit the applicability of traditional industrial automation:

  1. There are people in the work area. Maybe this is a small company with only a few bays, and they can’t permanently cordon off the entire bay with light gates and proximity sensors for an industrial robot. They need the robotic system to safely work alongside humans.

  2. Since the parts that need processing are just rolled in on carts, they are positioned inconsistently. Even the parts themselves may have variation that is not reflected in CAD data.

  3. The environment is dynamic and constantly changing. Carts are constantly being rolled in and out, and parts that are currently being processed could be bumped and shifted.

Collaborative robotic hardware does not address all the challenges with this type of application: we need collaborative software as well. Just as a human walking down a hall does not plan every step in advance and then close their eyes to execute the motion, our industrial automation systems need the ability to adapt and plan in an “online” manner. In our application above, the system needs to be constantly perceiving the environment and avoiding (and perhaps predicting the motion of) moving collision obstacles all while tracking the part that it is processing.

To this end, researchers at Southwest Research Institute have added an online path planning capability to the ROS-Industrial ecosystem through updates to the Tesseract (https://github.com/ros-industrial-consortium/tesseract) motion planning framework and the TrajOpt (https://github.com/ros-industrial-consortium/trajopt_ros) trajectory optimization library. TrajOpt uses the Sequential Quadratic Programming (SQP) method for motion planning. This algorithm solves the highly nonlinear motion planning problem by approximating it as a quadratic and solving the resulting Quadratic Program (QP) repeatedly until the optimization converges to a local minimum. The online planning capability is implemented by continuously regenerating and solving the QP as the environment is updated with new information about the robot’s surroundings.
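Conceptually, the control flow is a regenerate-and-resolve loop. The sketch below is only schematic, with stubbed, hypothetical helpers (solveQpStep, latestEnvironment, and streamToController are not the actual Tesseract/TrajOpt API); it illustrates the cycle described above, not the real implementation:

```cpp
#include <chrono>
#include <thread>
#include <vector>

// Hypothetical data types standing in for the real planner structures.
struct Trajectory { std::vector<std::vector<double>> waypoints; };
struct Environment { /* latest collision world and target pose */ };

// Hypothetical helpers (stubs): in the real system these would linearize the
// costs/constraints about the seed trajectory, solve the QP, fetch the newest
// environment state, and stream waypoints to the robot controller.
Trajectory solveQpStep(const Trajectory& seed, const Environment&) { return seed; }
Environment latestEnvironment() { return Environment{}; }
void streamToController(const Trajectory&) {}

int main()
{
  Trajectory traj;  // seeded from a preplanned trajectory in practice
  const auto period = std::chrono::milliseconds(1);  // ~1000 Hz update rate

  while (true)
  {
    const auto start = std::chrono::steady_clock::now();
    Environment env = latestEnvironment();  // new obstacle / target data
    traj = solveQpStep(traj, env);          // re-approximate and re-solve the QP
    streamToController(traj);               // execute while replanning continues
    std::this_thread::sleep_until(start + period);
  }
}
```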

The examples below show the results. In these examples, a 6-DOF robot arm is underslung on a 2-DOF gantry. A red cylinder surrounds the robot, representing a boundary that the robot should keep between itself and humans or other unexpected obstacles. In the first example, the robot dynamically avoids a human who walks into its work area. The second example demonstrates the robot following a moving target pose. Both animations are displayed in real time, and the system easily achieved a 1000 Hz update rate for this example while running on a consumer desktop PC. A simplified version of these examples is available in the tesseract_ros repository (https://github.com/ros-industrial-consortium/tesseract_ros) so that you can run it yourself.

Dynamically avoiding a human entering the working space

Moving target pose

There is still a lot of work to be done before we could deploy this in an application like the one in our example. One remaining question is what level of on-the-fly flexibility is desirable for a given application. Does the robot have free rein to adapt its path anywhere within its joint limits, or do we constrain it to deviate only by some amount from a given preplan? Creating a framework to represent this kind of high-level logic, as well as the infrastructure involved in execution, is the next step in the process. We look forward to deploying a system with this feature set on hardware using something similar to the joint_trajectory_controller within ros_control (http://wiki.ros.org/joint_trajectory_controller).

Editor's Note: This work and subsequent blog entry made possible through contributions by Levi Armstrong and Joseph Schornak. Thanks to Matthew Powelson for his contributions to ROS-Industrial during his time at Southwest Research Institute, we wish him the best on his next robotics adventure!

Lessons from a ROS2 Collaborative Industrial Scan-N-Plan Application

Contributed by Joseph Schornak, a Research Engineer at Southwest Research Institute's Manufacturing and Robotic Technologies Department


In early 2019 my team at Southwest Research Institute swore a solemn oath: any new robotic systems we develop will use ROS2. ROS Noetic will be the last ROS release, as the community shifts its focus to building and supporting new ROS2 releases. We agreed that it was crucial to commit early and get substantial hands-on experience with ROS2 well in advance of this upcoming transition.

Our first opportunity to apply this philosophy to a system intended for a production environment came in Spring 2019. We began a new project to develop a greenfield collaborative Scan-N-Plan system for an industrial client, which I will refer to here as Project Alpha. Several months ago, we completed and shipped Project Alpha, making it one of the first ROS2 systems to be deployed commercially.

The purpose of this article is to describe some of the discoveries made and lessons learned throughout this project, as we begin to apply this knowledge to the next generation of ROS2 systems.

Why is developing in ROS2 important?

There is a “chicken and egg” problem surrounding developing in ROS2. The most important part of ROS has been the lively and diverse package ecosystem, since the ability to bring in ready-to-ship packages supporting a wide variety of sensors and robots presents a huge advantage for roboticists. While the core rclcpp packages are fully-featured and robust, we need more ROS2 interface packages for sensors and robots commonly used in robotic applications. This gap presents a dilemma: potential users are discouraged from committing to ROS2 due to a lack of support for their hardware, which reduces the incentive for vendors to develop and support new ROS2 packages for their products.

In order to break this cycle, a critical mass of developers needs to commit to ROS2 and help populate the ecosystem. There are certainly benefits for early adopters: Intel’s family of RealSense RGB-D cameras had very early ROS2 support, and as a result, this camera has become a go-to 3D perception solution for ROS2 projects.

Integrating a Robot

We decided to build Project Alpha around the Universal Robots UR10e. Its reach, payload capacity, and collaborative capability satisfied our application-specific requirements. Additionally, we had experience integrating URs with prior projects, and we already had a few on hand in our lab. Fortuitously, the start of the project coincided with the beta release of the excellent Universal_Robots_ROS_Driver package, which has become our driver of choice.

However, there was a substantial immediate challenge: the UR robot driver was a ROS1 package, and we were developing a ROS2 system. There is very little ROS2 driver support for industrial robots, since developing new robot drivers requires significant specialized effort. We encourage the community to overcome this obstacle and invest the effort to develop new ROS2 drivers for industrial robots.

For the time being, the ros1_bridge package was sufficient to relay joint_state topics and robot control services between the ROS1 and ROS2 networks. We also adapted a custom action bridge node to convey FollowJointTrajectory action goals from our ROS2 motion planning and execution nodes to the ROS1 UR driver node. With these pieces in place, we were ready to plan!

Writing New Nodes

While our robot was ready to move, there was no ROS2-native motion planning pipeline available. At the time, MoveIt2 was still in an alpha state and was undergoing significant development. To address this gap, we decided to port our Tesseract motion planning library to ROS2. This effort resulted in three repositories: the ROS-independent Tesseract core repository, the ROS1 tesseract_ros repository, and its close ROS2 sibling, tesseract_ros2.

As we worked through the ROS2 port of Tesseract and created new system-specific ROS2 packages for Project Alpha, we found ourselves discovering a new set of best-practice approaches for ROS2. For example, there are two distinct approaches when creating a C++ ROS2 node:

Pass in a Node instance: Create a custom class that takes a generic Node object in its constructor. This is similar to how NodeHandle objects are used in ROS1. These classes are flexible: they can be wrapped in a thin executable as standalone nodes or included as one aspect of a more complex node. The core mechanisms of the class can be exposed both through C++ functions and through ROS2 services.

Extend the Node class: Create a class that inherits from and extends the base ROS2 Node class and add application-specific member variables and functions. I get the impression that this is more in line with the design intent of ROS2, since key functionality like logging and time measurement is exposed as member functions of the Node class. This approach also exposes new capabilities unique to ROS2, like node lifecycle management.

Ultimately, we used both approaches, sketched below. We found that the first strategy made it easier to directly port ROS1 nodes, so the nodes in the tesseract_ros2 package use this method. For the newly-developed Project Alpha nodes we used the second strategy, since we had much more freedom to design these new nodes from scratch to make the most of ROS2.
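As a rough sketch of both patterns (the class and topic names here are hypothetical, not from Project Alpha):

```cpp
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>

// Approach 1: accept a generic Node, much like a ROS1 NodeHandle. The class
// can be wrapped in a thin executable or composed into a larger node.
class ScanProcessor
{
public:
  explicit ScanProcessor(rclcpp::Node::SharedPtr node) : node_(std::move(node))
  {
    pub_ = node_->create_publisher<std_msgs::msg::String>("status", 10);
  }

private:
  rclcpp::Node::SharedPtr node_;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
};

// Approach 2: inherit from rclcpp::Node; logging, clocks, and parameters are
// member functions of the class itself.
class ScanProcessorNode : public rclcpp::Node
{
public:
  ScanProcessorNode() : rclcpp::Node("scan_processor")
  {
    pub_ = create_publisher<std_msgs::msg::String>("status", 10);
    RCLCPP_INFO(get_logger(), "scan_processor started");
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
};

int main(int argc, char** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<ScanProcessorNode>());
  rclcpp::shutdown();
  return 0;
}
```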

Working with DDS Middleware

The ROS2 DDS communication middleware layer represents a substantial improvement over the TCP/IP-based system used in previous ROS versions. ROS2 ships with a variety of RMW (ROS MiddleWare) implementations provided by several DDS vendors. Fortunately it is very straightforward to switch between the different RMW implementations: all that is required is to install the packages for the new RMW version, set the RMW_IMPLEMENTATION environment variable to specify the desired version, and rebuild any built-from-source packages in your workspace that provide message definitions.

Surprisingly, the choice of which RMW implementation we used had a substantial effect on the performance of Project Alpha, although this did not become clear until relatively late in development.

At the beginning of the project we used FastRTPS, which was the default option for ROS2 Dashing. It worked well for our initial collection of nodes, but when we integrated the ROS driver for the UR10e robot we began experiencing dropped messages and higher latency. Our theory is that the high volume of messages produced by the UR10e's real-time control loop overwhelmed the RMW layer under its default settings. We began exploring alternatives.

Our next option was OpenSplice, which eliminated the issue of dropped messages with the UR10e. However, we discovered several new issues: nodes took several seconds to fully initialize and begin publishing messages, and topics advertised by newly-launched nodes would often not be visible to nodes that were already running. Project Alpha's nodes were designed to all launch together at system startup and stay alive the whole time the system was running, so we were able to work around this issue for some time.

When we updated Project Alpha to use ROS2 Eloquent, we decided to try out the newly-available CycloneDDS RMW implementation. We discovered that it was not susceptible to any of our previous issues: it allowed our nodes to launch quickly on startup, handled high-rate topics as well as large messages like high-resolution point clouds, and could also gracefully manage nodes arbitrarily joining and leaving the network. Project Alpha was shipped to the client configured to use CycloneDDS.

Conclusions

Project Alpha has been a success, and we have been able to leverage our ROS2 development experience to pursue new projects with confidence. We were able to answer some important questions:

Is it better to develop pure-ROS2 systems or hybrid ROS/ROS2 systems? It is preferable to develop and maintain an exclusively ROS2 system. Hybrid systems will be a reality for some time until the development of the ROS2 ecosystem can catch up.

What ROS2 release should be used? We consistently found that there were substantial benefits to using the latest ROS2 release, in the form of new features and functionality.

Is ROS2 "ready for industry?" Resoundingly, yes! Get out there and build ROS2 systems!

How to securely control your robot with ROS-Industrial

Trend Micro and Politecnico di Milano (Polimi) recently brought up a security issue with controlling industrial robots using ROS-Industrial drivers. We worked quickly to describe a mitigation for the uncovered problem. It is actually quite simple: by following basic security guidelines for setting up your network, you can eliminate the described security risk at the source. Here we show how to set up secure communication between your ROS PC and your industrial robot.

In ROS-Industrial, robots are connected to the ROS PC using so-called motion servers. These are programs, written in the OEM-specific programming language, that run on the robot controller; they receive target values (typically axis positions) from the robot’s ROS driver and send back actual values as well as the robot status. The interface used for this communication differs from one robot OEM to another. The problem is that, as of now, robot OEMs do not provide a security layer or authentication methods for these interfaces, and no such measures can be added to the motion servers running on the robot controllers. Therefore, it is possible for intruders to attack the communication interface between the ROS-Industrial robot driver and the motion server running on the robot controller. Trend Micro and Polimi claim to have succeeded in sending motion commands to a robot controlled by a ROS-Industrial driver from another device connected to the same network as the robot and the driver (Figure 1). This behavior can potentially be exploited by malicious network participants.

Figure 1. Typical setup of a ROS-Industrial robot driver and vulnerable communication

To minimize the risk of this potential attack vector on the interface between the device running ROS and the robot controller, the network needs to be set up correctly. The connection between the ROS PC and the robot controller must be isolated from other networks that might be connected to the ROS PC. Figure 2 shows how to set this up correctly, so that a bad actor will have a hard time exploiting this vulnerability. Isolating the connection between the ROS PC and the robot controller means that if you want to connect your ROS PC to another network in a secure way, you will need two network cards: one to connect to the robot controller and the other to connect to the outer network. Figure 3 shows an example of a vulnerable network setup that you should avoid at all costs.

Figure 2. Correct network setup to avoid security vulnerabilities

Currently, the vulnerability has only been tested with drivers for KUKA and ABB, but it could also be exploited with other industrial robot drivers. If you isolate the connection between the ROS PC and your robot controller but connect your ROS PC to a network with potentially malicious participants on another network card, we strongly recommend following the instructions at http://wiki.ros.org/Security and, if you use Ubuntu, the instructions provided by Canonical (https://ubuntu.com/engage/securing-ros-on-robotics-platforms-whitepaper) to ensure your ROS PC is protected.

Figure 3. Vulnerable network setup that should be avoided

Hybrid Perception Systems for Process Feature Detection

In recent years, Southwest Research Institute has used ROS-Industrial tools to integrate numerous Scan-N-Plan surface finishing applications. A typical application involves reconstructing the environment, generating raster paths on some unwieldy curved surface, and then executing that path using an industrial manipulator carrying a process tool like a sander or spray nozzle. Automating these tasks often adds huge value, as they can be difficult and dangerous for humans to perform.

However, generalizing this concept a bit more beyond rasters on curved surfaces, industrial applications abound where some generic process is applied to some feature. Examples include grinding flashing from a steel casting, applying sealant to a seam, or smoothing a wrinkle in a composite layup. While each of these applications involve numerous complications, the first step is the same. Some feature must be detected in 3D space.

Example of a flashing removal application where only part of the surface requires grinding

When thinking about how to detect these features, machine learning is often considered. While machine learning can offer a potential solution to these problems, it is plagued by a few issues. First, machine learning is progressing quickly: an algorithm developed in Caffe, Torch, or Theano only a few years ago may be difficult to maintain or deploy today and may need to be ported to TensorFlow or PyTorch. Thus, any robotics system that uses these approaches should be flexible enough to use any of these frameworks. Second, while semantic segmentation of 2D images is a relatively mature field, performing these operations in 3D space is much newer, requiring more computationally expensive algorithms and annotation of 3D data. While these challenges are certainly known to the machine learning community, they limit adoption in industrial applications where dataset availability is scarce and robotics integrators have day jobs that don’t involve AI research.

Experimental lab setup for a high-mix welding application

To address these challenges, the ROS-Industrial team at Southwest Research Institute proposes a hybrid approach to 3D perception wherein mature 2D detectors are integrated into a ROS 3D perception pipeline to detect process features, providing the flexibility to upgrade the detector without modifying the rest of the system.

The principle is simple. In industrial applications, 3D perception data (i.e., point clouds) often comes from 3D depth cameras that provide not only depth information but also a 2D video stream (https://rosindustrial.org/3d-camera-survey). Using open-source ROS tools, we can detect the features we want in the 2D video stream and project them back onto the 3D data. We can then aggregate those detected features over the course of a scan to obtain a semantically labelled 3D mesh. This allows toolpaths to be generated on the mesh, informed by the detected process features. To evaluate this approach, an example high-mix fillet welding application was developed where each part was a set of tack-welded aluminum plates whose exact size and location were unknown.

Left column shows 2D image and detected weld seam. Right column shows 3D mesh and aggregate 3D detected weld seam. Notice that the 2D detector was trained to avoid welding additional brackets and clamps.

The system makes use of open-source ROS tools and proceeds as follows. First, the camera driver provides a colorized point cloud to the TSDF node (from yak_ros, https://github.com/ros-industrial/yak_ros), which reconstructs the environment geometry. Simultaneously, a point cloud annotation node (https://github.com/swri-robotics/point_cloud_segmentation) extracts a pixel-aligned 2D image from the point cloud and sends it over a ROS service to an arbitrary 2D detector (in this case FCN8), which returns a mask with a label for each pixel in the image. These labels are then used to annotate the point cloud by re-colorizing it, so the results can be aggregated using the open-source octomap_server (http://wiki.ros.org/octomap_server). At the end of the scan, YAK provides a 3D mesh of the environment and octomap provides an octomap colorized with the semantic labels. Tesseract (https://github.com/ros-industrial-consortium/tesseract) collision-checking interfaces can then be used to find the voxels associated with each mesh triangle, allowing the geometric mesh to be annotated with semantic data. In this case, the areas not marked as “weldable” were removed, and a region-of-interest mesh was passed to the toolpath planning application. The final architecture diagram is shown below.
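As an illustration of the re-colorization step, the 2D label mask returned by the detector can be painted back onto the pixel-aligned organized cloud so that downstream aggregation treats color as a semantic label. The sketch below uses illustrative names and a simple binary “weldable” labeling, not the project’s actual encoding:

```cpp
#include <cstdint>
#include <opencv2/core.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Paint per-pixel labels from the 2D detector back onto the organized,
// pixel-aligned point cloud by re-colorizing each point.
void annotateCloud(const cv::Mat& label_mask,                 // CV_8UC1 label image
                   pcl::PointCloud<pcl::PointXYZRGB>& cloud)  // organized, same size
{
  for (int v = 0; v < label_mask.rows; ++v)
  {
    for (int u = 0; u < label_mask.cols; ++u)
    {
      pcl::PointXYZRGB& p = cloud.at(u, v);        // at(column, row)
      if (label_mask.at<std::uint8_t>(v, u) > 0)   // pixel labeled "weldable"
      { p.r = 0; p.g = 255; p.b = 0; }             // encode label as green
      else
      { p.r = 128; p.g = 128; p.b = 128; }         // background label
    }
  }
}
```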

Architecture diagram of the components of the system

The result can be seen in the video below. While this approach has its limitations, overall it performed well. We look forward to deploying systems like this in the future to further explore its capabilities.

Currently the point cloud segmentation library is still under development, but we believed there was value in making the industrial open-source community aware of it now to gather additional insight and feedback. In the future we anticipate migrating it to the ROS-Industrial family of repositories.

Guest article: A Story of Autonomous Logistics

From rapid robot prototyping to pre-series robot production

a guest article by the Department of Autonomous Logistics of StreetScooter

The robotic delivery arm of Eva was self-constructed at StreetScooter

The vision of a generic toolbox to solve automated delivery challenges was born in the Department of Autonomous Logistics in 2016. ROS was chosen as the framework because it was quite popular among robotics students and many suitable open-source modules had already been identified. At the same time, a ROS-based software stack for urban autonomous driving called Autoware was released as open source. This was a blessing for the young robotics team, since multiple components could later be adapted for Eva’s Follow Me Delivery function. The fresh robotics engineers could learn from the experience gained with this stack without having a senior robotics expert on the team. With the test track next to the developers’ office, a short iteration cycle was introduced to build up the knowledge needed.

Eva, Adam and Alfi for Follow Me and Autonomous Parcel Delivery

The first prototype was not Adam but Eva, constructed in partnership with TAS (Institute for Autonomous Systems Technology of the Bundeswehr University) [1], PEM (Production Engineering of E-Mobility Components) of RWTH Aachen, and Beckhoff Automation. Eva was built to demonstrate autonomous parcel delivery.

After showing the first promising in-house developments in software and hardware design, two realistic use cases for applying robotic technology to logistics were identified in 2017:

  1. Follow Me Delivery
  2. Autonomous Yard Logistics

Both developments were chosen based on an agile mindset: deliver benefits to the customer as early as possible. A maximum speed of 10 km/h was a promising entry point, since an emergency stop was possible under all circumstances. In the meantime, the use of open-source technology for robotics prototyping increased, since it strongly accelerated development. Reuse of components between both use cases and vehicle types was enabled by the modularity of the ROS framework [2]. Two StreetScooter Work vehicles, Adam and Alfi, were equipped with the Follow Me Delivery system (Adam for rapid prototyping and Alfi to show the next steps in system design, with a focus on industrialization).

Most systems were integrated into the rooftop of Alfi, making it possible to integrate the Follow Me Delivery system into a series StreetScooter Work M vehicle.

Adam and the Demonstration of Follow Me Delivery

NVIDIA invited StreetScooter to demonstrate the Follow Me Delivery function on a test track next to the NVIDIA GTC conference in Munich in 2017 [3]. The cooperation announcement at the conference and the immense press feedback revealed the potential of Follow Me Delivery. The software itself was a combination of multiple open-source ROS modules integrated into the basic move_base navigation framework of ROS. The team showed that, by combining and adapting multiple ROS modules like depth_clustering or osm_cartography from different organizations, the development of an autonomous vehicle is possible. Safety was provided by a low-level controller that supervised the controllability of the system, in combination with a trained safety driver.

The ground was quite uneven, which led to false-positive obstacle detections on the ground.

Obelix for Prototyping in Autonomous Yard Logistics

In 2018, the first prototype system for autonomous yard logistics was installed on an electric Wiesel truck from KAMAG. At the time, the vehicle base itself was also a prototype. The first step was to automate the steering, to reduce the safety risks of the heavy truck with its maximum weight of about 26 tons. That way, the acceleration of the vehicle remained under the control of the safety driver. This level of automation generated a lot of interest at DHL, since the safety concept is much simpler and the system costs are lower compared to a fully autonomous system.

Many benefits, like lower material wear, less noise, and simpler vehicle handling, were still realized. This worldwide-unique concept was named the Assisted Maneuvering and Positioning System (AMPS). Field-tested software and hardware solutions from Alfi were adapted to the new vehicle, Obelix. Based on the experience with the open-source packages used for Follow Me Delivery, a new in-house development was started. Some powerful packages like robot_localization or Google’s Cartographer are still used in our software stacks today. Because of a planned in-field test at a DHL parcel center in Cologne in 2019, far more requirements and quality management had to be introduced. LiDAR was chosen as the main sensor system because high precision in localization and detection, with an error margin smaller than 3 centimeters, is demanded in changing and demanding outdoor conditions.

Obelix on his daily mission on the test track of Avantis.

Snow tracks on Avantis in the LiDAR measurement. Even under those conditions, the system needed to adhere to its requirements.

Asterix at the parcel centers of Cologne and Hanover

In 2019, Asterix and the AMPS system were deployed to the Eifeltor parcel center near Cologne. Operating the new electric Wiesel vehicle from KAMAG in combination with AMPS was possible after minor adaptations. Container loading worked right from the start, but the loading dock approach was not precise enough under all circumstances. Goal poses of the docks were defined only by mapped GNSS measurements, and even with a high-end localization system, metal objects and walls around Asterix disturbed the radio signals of the satellites. These experiences led to the development of an active loading dock detection that is fused with the global goal pose. After three months of daily operation, Asterix and the AMPS system received very positive feedback from evaluations with multiple DHL test drivers. The fenced area of the parcel center was an ideal use case to gather first experience with a robotic transportation system in mixed traffic. Afterward, Asterix was also successfully tested at the freight section of DHL at a newly constructed parcel center, without any adaptations to the system or environment [3]. The open-source package MARV Robotics supported us in the creation of a bagfile database.

Datasets from the parcel center operation tests were crucial for further development of AMPS and higher levels of autonomy.

LiDAR data of multiple containers and loading docks at the parcel center. Loading docks and containers can be detected with sufficient accuracy without artificial landmarks.

Simba, Asterix and Columbus for Industrialization of AMPS

Based on customer feedback, data analysis methods, and advanced robotic component test benches, a pre-series version of AMPS is in development. The design focuses on increasing adaptability so that multiple vehicle types can be supported. Drivers at the DHL parcel centers will evaluate the system in daily operation, supported by developers on demand when the driver activates remote access to the AMPS system. Precise absolute and relative localization is required for precise maneuvering of the vehicle. The GNSS and IMU systems used in the prototyping phase were too expensive and nontransparent, so in-house hardware and software designs were developed based on state-of-the-art electronic components. The resulting system is called Columbus.

Simba, the virtual vehicle, became quite popular, since most developers have worked remotely during the COVID-19 pandemic. The continuous integration testing framework runs multiple scenarios on a virtual parcel center. Since the LiDAR sensors can be simulated in Gazebo, the complete software stack is tested closed-loop. Most errors in software development can therefore be detected at this stage, before deploying to the vehicle. In this way, the validation of the software components is done in an automated and reproducible way with every new release.

Vehicle, parcel center, LiDAR, and containers are simulated in detail for the test bench.

Idefix, our hardware-in-the-loop test bench, is going to be refactored with industrial-grade hardware. Software and hardware integration aspects like networking can be analyzed before working directly in the car. In combination with our virtual vehicle, we created a virtual driver seat to drive AMPS inside the simulation on the actual hardware. Asterix is also being used to evaluate ROS2, industrial-grade middlewares, and operating systems. Challenges in integrating new software frameworks on the target hardware are thus identified in an early development phase.

Idefix gets new dresses. Mock-up designs in the simulation allow us to evaluate new functionality with our customer at an early stage.

Using MoveIt2 in an Industrial Open-Source Application

The Motion Planning Framework for ROS known as MoveIt has been successfully used in numerous industrial and research applications where complex collision-free robot motions are needed to complete manipulation tasks. In recent months, a great deal of effort has gone into migrating MoveIt to ROS2, and as a result the new MoveIt2 framework already provides access to many of the core features and functionality available in its predecessor. While some of the very useful setup tools are still a work in progress (mainly the MoveIt Setup Assistant), I was able to integrate MoveIt2 into the Collaborative Robotic Sanding (CRS) application in order to plan trajectories, which were then executed on a Gazebo-simulated UR10 robot arm.

My ROS2 setup involved building the MoveIt2 repository from source as described on GitHub and then overlaying that colcon workspace on top of my existing CRS application workspace. I also built and ran the simple demo, which worked right out of the gate and was very helpful in understanding how to integrate MoveIt2 into my own application.

The C++ integration was very straightforward and only required two new classes, MoveItCpp and PlanningComponent. In this architecture, MoveItCpp is used to load the robot model, configure the planning pipeline from ROS2 parameters, and initialize defaults; the PlanningComponent class is associated with a planning group and is used to set up the motion plan request and call the low-level planner. Furthermore, PlanningComponent has an interface similar to the familiar MoveGroupInterface class from MoveIt; however, one of the big changes is that its methods aren't just wrappers around the various services and actions provided by the move_group node, but instead make direct function calls to the motion planning capabilities. I think this is a welcome change, since this architecture allows creating MoveIt2 planning configurations on the fly that can adapt to the varying planning situations that may arise in an application.
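To make that flow concrete, here is a minimal sketch of the MoveItCpp/PlanningComponent pattern. The group, link, and node names are illustrative, and the MoveIt2 API was still evolving at the time, so details may differ between releases:

```cpp
#include <moveit/moveit_cpp/moveit_cpp.h>
#include <moveit/moveit_cpp/planning_component.h>
#include <rclcpp/rclcpp.hpp>
#include <geometry_msgs/msg/pose_stamped.hpp>

int main(int argc, char** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("crs_motion_planning");

  // MoveItCpp loads the robot model and planning pipeline from ROS2 parameters.
  auto moveit_cpp = std::make_shared<moveit_cpp::MoveItCpp>(node);

  // PlanningComponent is tied to one planning group and calls the planners
  // directly, rather than going through move_group services/actions.
  moveit_cpp::PlanningComponent arm("manipulator", moveit_cpp);

  geometry_msgs::msg::PoseStamped goal;  // e.g. one of the camera scan positions
  goal.header.frame_id = "base_link";
  goal.pose.position.x = 0.5;
  goal.pose.position.z = 0.8;
  goal.pose.orientation.w = 1.0;

  arm.setStartStateToCurrentState();
  arm.setGoal(goal, "tool0");  // end-effector link name (illustrative)

  auto solution = arm.plan();  // direct function call into the planning pipeline
  if (solution)
    moveit_cpp->execute("manipulator", solution.trajectory);

  rclcpp::shutdown();
  return 0;
}
```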

On the other hand, the launch/YAML integration wasn't as clean, as many ROS2 concepts are still relatively new to me. To properly configure MoveIt2, it is necessary to load a URDF file, as well as a number of parameters residing in several YAML files, into your MoveIt2 application. Fortunately, most of the YAML files generated by the MoveIt Setup Assistant from the original MoveIt can be used with just minor modifications, so I ran the Setup Assistant in ROS1 and generated the needed config files. Furthermore, the ability to assemble ROS2 launch files in Python really came in handy here, as it allowed me to build a Python dictionary from a YAML file and pass its elements as parameters to my ROS2 application. Beyond learning about MoveIt2, this exercise showed me how to reuse the same YAML file for initializing parameters in different applications, which I had thought was a feature no longer available in ROS2.

My overall impression of MoveIt2 was very positive, and I feel that the architectural changes aren't at all disruptive to existing MoveIt developers; furthermore, they will lead to new and interesting ways of using the framework. I look forward to the porting of other very useful MoveIt components. The branch of the project that integrates MoveIt2 can be found here, and below is a short clip of the planning I was able to do with it. In this application, the robot has to move the camera to three scan positions, and MoveIt2 is used to plan collision-free motions to those positions.

Building Out a ROS2 Mobile Scan-N-Plan Demonstration

As part of the ROS-Industrial Consortium Americas 2020 annual meeting, SwRI demonstrated a mobile robot application bridging ROS2 with ROS1 drivers. The exercise refined our capabilities with ROS2 systems, and many lessons learned along the way will inform later work on collaboration between mobile bases and industrial manipulators.

ROS2 Mobile Scan-N-Plan node diagram

Our demonstration leveraged the Clearpath Ridgeback and a KUKA iiwa. Based on previous work with the iiwa, we chose to use the ROS1 driver available for the robot. To integrate it into the system, both the input and output of the robot had to be bridged to ROS2. This was achieved with a modified version of the ROS action bridge, which enabled joint command actions to be streamed to the robot, and a standard implementation of the ROS message bridge, to retrieve telemetry information from the iiwa. With the support of Clearpath, we updated the OS of the Ridgeback to support ROS Melodic, an important stepping stone toward future applications bridging the mobile base to a greater ROS2 system.

ROS-I Americas Annual Meeting attendees ask questions after observing the demonstration

The demonstration itself was an expansion of the normal Scan-N-Plan framework. The mobile platform was driven to a position of interest, in this case an aircraft fuselage segment. A scan path, generated in the robot frame, was executed and fed into YAK to create a model of the target. Once the scan was complete, alignment and toolpath generation were done using a hybrid perception tool to remove masked features. Hybrid perception overlays machine learning classification on 2D data onto 3D data, where both are available from the perception sensor setup, in this case an Intel® RealSense™ D435.

These tool paths were then streamed to the robot in the same manner as the scan path. However, our end effector (in this case, an array of lasers) would toggle on and off using the segmentation described above, to protect regions of interest.

We hope all present enjoyed this demonstration. We look forward to applying the lessons learned here to future work with mobile robots!

Recapping the 2020 ROS-I Americas Annual Meeting

The 8th ROS-Industrial Consortium Americas Annual Meeting was held March 4-5, 2020, at Southwest Research Institute in San Antonio, Texas. It reminded us both of how far we have come and of how much there is still to be done. As is the tradition with this event, the first day was open to the public, while day two focused on the membership, the mission of the consortium, and what we can do as a body to further leverage open source for industry.

For those who could not attend due to COVID-19 travel restrictions, we offered video conferencing. In some cases, we had to adjust the speaking schedule to accommodate remote presentations. All of the Day 1 presentations and videos can be found on the event page, while all the Day 2 content is at the ROS-I Consortium Member Portal.

Day 1 – Overview, Panel & Tours

Day one kicked off with an introduction by SwRI's Paul Evans, followed by a Consortium overview by Matt Robinson and a more technically focused talk on deploying ROS for industrial automation by SwRI's ROS-Industrial Americas Technical Lead Levi Armstrong. This talk highlighted recent production deployments, addressed recent challenges, and discussed the development of novel capabilities for the end-user site.

The morning continued with a talk by Roger Barga, GM of AWS Robotics, on the Role of the Cloud in the Future of Robotics. Roger highlighted practical applications of cloud computing in robotics application development, testing, and deployment support. AWS currently supports ROS, ROS2, and Gazebo within its services, and provides additional tools to facilitate adding features to applications, such as text or speech recognition and security. A case study featuring member Bastian Solutions was shared, and a call for multi-robot simulation use cases was put out to attendees.

Alex Shikany, VP of Membership & Business Intelligence of A3, spoke on the impact of robotics on employment. He made the point that in times of investment in automation, unemployment also declines, indicating that for the U.S., investment in automation coincides with increased hiring. Essentially, though there is an increase in the amount of work being automated, this doesn't manifest in fewer jobs; the jobs evolve.

A panel discussion featuring Joshua Heppner of Intel, Lou Amadio from Microsoft, and Roger Barga from AWS fielded questions relating to cloud, sustainability, education, and areas for growth. The Americas Consortium has evolved in recent years with more engagement from the tech industry, as they seek to understand industry needs and how ROS and related applications can scale in robust, sustained ways.

To wrap up day one, Shaun Edwards and Paul Hvass of Plus One Robotics shared a perspective from the founders of ROS-Industrial. They challenged the community to think about breaking down barriers with a simple question: “Where is the Download Button?”

The afternoon featured tours and demonstrations within the SwRI organization and in collaboration with partners. SwRI demonstrations included a ROS2 Mobile Scan-N-Plan that featured localization, reconstruction, machine learning-based segmentation, tool trajectory planning, and optimization-based free-space motion planning. Other demonstrations included SLED and SwRI's Ranger for autonomous vehicle localization, as well as work in single-camera markerless motion capture by SwRI's Human Performance Initiative, which has broad applicability in manufacturing robotics. MathWorks demonstrated some of the latest features related to ROS and Simulink, including safety, and UT Austin demonstrated improved means for ease of use in complex systems for inspection and handling tasks in hazardous environments.

The tour and demonstrations included off-campus visits to Xyrec and Plus One Robotics. Shaun Edwards and Paul Hvass offered tours and a behind-the-scenes perspective on how their business has grown and on the need to give their customers a means to try and test before deciding to implement. Xyrec visitors were treated to a firsthand overview of the largest mobile robot the company has produced, designed to apply laser ablation surface treatments to commercial aircraft.

Day one concluded over dinner and networking. It was an excellent opportunity to chat with key partners, stakeholders, and new faces about the landscape of advanced robotics, and how we are seeing more rapid progression from academia to factory floor.

Day 2 – Membership Highlights

Day two, the member-only portion, was a forum for the membership to share what they are working on and the challenges they face, and to seek out areas and partners for collaboration. The day started with Consortium updates by region, beginning with the Americas and progressing through the EU and Asia-Pacific. Each region has shown progress toward delivering more content and training related to ROS2. The EU Consortium shared updates and outcomes of the ROSIN initiative, and Asia-Pacific discussed the Singapore-led healthcare initiative Robotics Middleware Framework (RMF), which seeks to disrupt how healthcare adopts both IoT and robotics.

A panel discussed why end users are looking to advanced robotics solutions via a ROS-based approach, and what motivations and challenges these approaches face today. David Leeson of Glidewell Labs, Greg Balandran of Spirit AeroSystems, and Mitch Pryor of UT Austin's Nuclear Robotics Group shared anecdotes and practical examples where the traditional industrial world offers no solutions. They also discussed challenges their organizations face and shared how an open-source model enables a more efficient spread of emerging capability, hopefully preventing silo formation among a select few solution providers.

From here, Dave Coleman of PickNik shared recent developments around MoveIt2, including capabilities around real-time path planning. Open Robotics Developer Advocate Katherine Scott then shared developments and opportunities to collaborate around ROS2 and the curation of the ROS ecosystem.

The keynote for the member day of the annual meeting featured Greg Balandran, R&T Engineered Factory Automation Manager at Spirit AeroSystems, who discussed ROS-I's influence on automation strategy. Greg shared challenges that his organization has faced and how leveraging ROS and working within ROS-I helped accelerate their automation development and roadmap execution.

The day two afternoon kicked off with a workshop where members broke into groups to discuss elements of the vision for ROS-I, needs, and the areas where membership can focus for advancement. The key takeaways from this workshop were:

  • There is a huge desire for richer industrial-centric training and education resources, beyond the current instructor-led model. These could include cloud-based exercises covering the current topics, as well as new topics such as navigation for industrial-centric applications.
  • There was a great discussion around additional vehicles to drive investment into ROS-I and accelerate the capabilities of interest. This included collaboration with other organizations that fund mid-TRL research, as well as other funding vehicles that foster small business engagement and participation.
  • Numerous topics focused on moving ROS-I toward a product, or something tangible with which an end user can develop an application using tools they are already familiar with, such as CAD environments and the physics tools they currently leverage.
  • A detailed listing of capabilities desired to enable member applications was produced, including real-time process planning, high-resolution SLAM, dynamic models, system-level error handling and recovery, and many others.

This feedback enables the Consortium to update and prioritize collaborations, training topics, and the project topics that are put forward, to ensure the focus is appropriate relative to the needs of the membership and the broader ROS-Industrial community.

The remainder of the day focused on member presentations sharing recent developments and contributions. Lou Amadio of Microsoft kicked it off with a presentation on recent developments relative to ROS and Windows, Azure, and Visual Studio. Arnie Kravitz, CTO of the ARM Institute, discussed how ROS and ROS-I play a key role in the ARM Institute's vision of fostering advanced robot capability from the research domain to production readiness.

YJ Lim, Sr. Technical Product Manager at MathWorks, shared what the latest developments by MathWorks mean for developing functional safety in ROS-enabled systems. NOV's Justin Kinney, Technical Manager of Robotics and Mechatronics, shared the end-user ROS journey, from "What is a ROS?" to their own functional full-scale demonstration on a test bed: a powerful story that lays out what is possible by leveraging the resources available in the community and working with other members to further your vision.

Wrapping up the day, John Wen of Rensselaer Polytechnic Institute and John Wason of Wason Technologies shared their collaborative ARM Institute project on Robot Raconteur, a middleware that enables easy integration of numerous types of hardware components and now has interoperability with ROS, and discussed how they are seeking to progress the work to enable richer industrial applications. Eugen Solowjow, Research Scientist at Siemens Corporate Technology, shared recent developments by Siemens that tighten the connection between industry-standard technology, with the safety and reliability that is expected, and both interoperability with ROS-enabled components and the edge-AI capability currently emerging for advanced applications.

Moving Forward

It was a full two days, and though there remains uncertainty about how much in-person or physical development we can do, the activity across the open-source community shows that a great deal of collaborative software development can still get done, even while physical testing may have slowed during a challenging time. We hope that those who attended, whether in person or remotely, feel as motivated as we do to further ROS-I into the project and product we feel it can be. We look forward to stepping up to the challenges put forth by our membership and partners, and hope you also look forward to joining in that journey.

ROS-Industrial Training hosted by Americas Member Glidewell Laboratories

SwRI ROS-I trainers Levi Armstrong and Jorge Nicho went on the road for a Glidewell Laboratories-hosted session of ROS-Industrial training at the company's Irvine, California offices, February 12-14. This training featured basic and advanced tracks; the basic track teaches the fundamentals of ROS and builds up to a motion planning application with a robotic arm.

The advanced track focuses on making use of the powerful tools in the Point Cloud Library in order to build advanced perception applications, whose results can then be used to command robot arms to act on the sensed information.

We would like to thank all those who came out to the training and, of course, our hosts Glidewell Laboratories, not just for the space, but also for the use of their lab equipment, which included some different configurations than what has commonly been used for the lab exercises.


Keep an eye on the events listing for the next ROS-I training, slated for summer 2020. We look forward to offering new content, with a focus on ROS2, in the coming months!

ROS Bootcamp 2020 - a learning initiative in collaboration with Singapore Polytechnic

This year's edition of the ROS Bootcamp was held from the 16th to the 20th of March, in collaboration with Singapore Polytechnic. The bootcamp was a 5-day intensive hands-on experience covering topics such as perception and autonomous navigation with the TurtleBot3.

Timelapse of the students coming to class!


The students were first introduced to the basics of Linux commands and writing a ROS subscriber and publisher. Students were then tasked to write their own scripts incorporating open-source ROS packages such as AR tracking and the ROS Navigation Stack. They also learnt the importance of testing their work in simulation, using powerful ROS tools such as RViz and Gazebo, before trying the actual hardware.

On the last day of the bootcamp, a mini navigation competition took place to see whose TurtleBot would be able to navigate through a maze autonomously using what they had learnt.

A successful run of the student’s work!


The ROS-Industrial team is happy to see a handful of very enthusiastic and strong-willed future engineers growing more interested in the development of robotic applications! We look forward to next year's edition!

This was ROS-Industrial Conference #RICEU2019 (Day 3)


ROS-Industrial Conference 2019

December 10-12, 2019

Seven years after the very first public event entirely devoted to discussing the potential benefits of a shared, open-source framework for industrial robotics and automation, Fraunhofer IPA hosted the 7th edition of the ROS-Industrial Conference on December 10 to 12, 2019 (all slides and videos are linked under the program).

This is the third instalment of a series of three consecutive blog posts, presenting content and discussions according to the event days, including the following sessions:

  1. Day 1 with “EU ROS Updates” (watch all talks in this YouTube playlist of Day 1)

  2. Day 2 with “Software and System Integration” & “Robotics meets IT” (watch all talks in this YouTube Playlist of Day 2)

  3. Day 3 with “Hardware and Application highlights” & “Platforms and Community” (watch all talks in this YouTube Playlist of Day 3)

Day 3: Hardware and Application Highlights

On the third and last conference day, speakers came from Universal Robots, KUKA, and Pilz, to name a few. The opening presentation was given by Paul Evans, who talked about ROS-Industrial North America updates and highlights from Southwest Research Institute (SwRI). He summed up ROS-Industrial activities, including membership growth, the ROS2 demo at the Automate 2019 booth, meetings, trainings (focusing on ROS2), initiatives, and community engagement. Technical highlights included the Scan-N-Plan tools, which enable real-time robot trajectory planning from 3D scan data, and project A5, in which robotic applications for the aerospace industry were developed, e.g. collaborative and adaptive solutions.

Aerospace manufacturing was also the key domain of the talk by Rik Tonnaer from the research center "Smart Advanced Manufacturing XL", which cooperates with TU Delft. This industry is challenging for automation solutions because of the variety of processes, almost all of which require human dexterity, craftsmanship, and adaptation to variations. In addition, there are large part sizes, similar but not identical products and processes, and a long legacy of approved and certified processes. Using the example of a drilling process, Tonnaer demonstrated how an automation solution can be developed despite the challenges mentioned. The research center aims at deploying the first drilling systems at the beginning of 2021. Technology transfer to other industries is also on the agenda. ROS is their platform of choice to maximize reuse, collaboration, and separation into functional components.

Paul Evans (Southwest Research Institute / ROS-Industrial North America) presenting ROS-Industrial North America updates & SwRI application highlights


The third contribution came from robot manufacturer KUKA, a new prominent contributor to ROS. Thomas Ruehr started his presentation by explaining how to interface KUKA robots with ROS via the KUKA Robot Controller and KUKA System Software (KSS), as well as via KUKA Sunrise/RoboticsAPI. A product specifically benefiting from ROS is the navigation for the mobile manipulation platform KMR iiwa. Apart from that, there are already many KUKA robots running on ROS. The company is working on a more canonical offering with respect to a catalog of URDF models and official drivers for KSS, Sunrise robots, and the mobile platform. KUKA is actively asking the community for KUKA-specific needs to help strengthen the company's engagement in open-source software.

Another prominent robot manufacturer is working hard on ROS support: Universal Robots. Anders Billesø Beck presented the new UR ROS driver, which addresses pre-existing hurdles like a cluttered landscape of more than 200 ROS driver variants for UR robots and instability in the face of API changes. The new driver, launched in October 2019, is the result of a Focused Technical Project with the FZI Research Center for Information Technology that was funded by the EU project ROSIN (see the presentation on day 1 for more information). The main advantages of the new driver are its ease of use and better performance and stability. The driver will remain open source and rely on future community contributions. It offers a stable control interface, teach-pendant integration, factory calibration in ROS, safety-compliant speed scaling, and e-Series tool communication, with full safety compliance. UR is now working towards industrial-grade performance and stability.

Complementing the first lecture by Paul Evans, Erik Unemyr presented activities and application highlights from the ROS-Industrial Asia Pacific consortium, managed by the Advanced Remanufacturing and Technology Centre (ARTC) of A*STAR (Agency for Science, Technology and Research) in Singapore. As is the case for Europe and North America, the Asia Pacific Consortium continues to support members, provide ROS training and other successful ROS events on a regular basis, and plans to offer ROS2-based training going forward. Unemyr covered a few of ARTC's core robotics technology focus areas, including the mission to lower the technology barrier for adopting robotics for various applications. One way to achieve this is a model-based approach to teaching robots using 3D computer vision. Another concept is using Augmented Reality teaching to enable simplified robotic use cases. The second highlight topic related to providing more flexibility in automating high-mix applications that require toolpath generation. Unemyr also highlighted a key Singapore-sponsored project currently in development, intended for large-scale interoperable robotic deployments using ROS-enabled robots, called the Robotics Middleware Framework (RMF).

Erik Unemyr (Advanced Remanufacturing and Technology Centre (ARTC)) presenting ROS-Industrial Asia Pacific updates & ARTC application highlights


Stefan Doerr from Fraunhofer IPA presented the topic "Towards Plug & Play solutions for autonomous navigation of mobile robots and AGVs". The versatile navigation stack from his team does not need any markers or additional infrastructure, is mostly platform-, hardware- and sensor-independent, and is based on ROS. With its three core components, long-term SLAM, zone-based global route planning, and dynamic local path and trajectory optimization, the software has already been deployed on a variety of systems, ranging from vacuum cleaners to autonomously driving trucks. Its plug & play technologies are the key to the widespread adoption of mobile robots and AGVs in industry. The latest deployment of the software was successfully realized for Smart Transport Robots at BMW production plants. Challenges there were sparse sensor data, a highly variable environment with hardly any static structure, interaction with forklifts, tugger trains, etc., large environments of more than 100,000 square meters, and limited maneuverability. Ongoing research work at IPA concerns machine learning for the navigation stack.

The company Pilz, well known for its safety technologies, talked about its safety-certified ROS-native industrial manipulator PRBT 6. Manuel Schön presented the different modules for service robotics applications and how they benefit from ROS. The robot does not need any proprietary controller or teach pendant but can be used directly with ROS. The safety controller for safety functionality is partially implemented in ROS as well. Since the robot itself, in combination with the safety controller, is certifiable under ISO 10218-1, the system integrator can focus on the application. Applications can be realized with the robot running automatically, manually at reduced speed, or manually at high speed. In 2019, Pilz also presented results of a Focused Technical Project funded by the ROSIN project. The FTP implemented a trajectory generator with a MoveIt interface for easy planning and execution of Cartesian standard paths (LIN, PTP, CIRC) according to industrial requirements. In addition, the blending of multiple sequential motion commands was realized.

ROS is also entering the retail domain. This is happening within the framework of the "REFILLs" project (Robotics Enabling Fully-Integrated Logistics Lines for Supermarkets). Jonathan Cacace from the PRISMA Lab of the University of Naples "Federico II" talked about the project's aim: novel robotic systems in close and smart collaboration with humans will address the main in-store logistics processes for retail stores, leading to smarter self-refilling in supermarkets. Challenges like the changing environment, the variety of objects, and tight workspaces have to be considered. To automate, for example, the depalletizing of heterogeneous and cluttered objects, technologies like image processing, perception, grasping, and manipulation must be realized. Here, the ROS-based motion planner MoveIt plays a key role in the software development.

Manuel Schön (Pilz GmbH & Co. KG) presenting Safety Certified ROS-native Industrial Manipulator


Day 3: Platforms and Community

Coming to the end of the conference, Penny Scully from Jungle gave an overview of the company's offerings. Jungle aims at connecting industrial partners with robotics developers who offer "robofacturing", a software-defined manufacturing process that uses robotics and AI technologies in the production of goods. Flexibility, time to market, and autonomy are the main criteria of the offering. Jungle is developing an online platform to resell solved challenges to a wider portfolio of industrial partners, like an app store for robotics, to extend its developers' reach into the manufacturing industry. Processes like quality inspection, pressing, and trimming have already been implemented. Labelling, battery module assembly, and (un)packing are in the future scope of the robofacturing process.

Benjamin Goldschmidt of Silexica highlighted the company's new SLX Analytics product. The German startup called attention to the sporadic errors that occur in today's complex software systems (e.g. an autonomous driving system). Because the complexity of such systems is so high, developers face the challenge of understanding the overall system behaviour and knowing, in case of an error, which software component failed and why. With SLX Analytics, Silexica addresses these needs by providing automated testing of system metrics, enabling customers to uncover system-level defects early on and thereby reducing the risk of dangerous defects in the product. The first product release took place in January 2020.

The conference concluded with a contribution from the Eclipse Foundation summarizing the importance of open-source communities for robotics. Philippe Krief presented two stories, from Bosch and from MQTT. Their lessons learned: communities not only help in being competitive against "bigger fishes", they are also a great vehicle to gain visibility and recognition. That is why Eclipse is launching an Industrial Robotics Working Group collaborating with different ROS-related research projects like RobMoSys, ROSIN, or SeRoNet. Krief's motto, "Open source is a journey, not a destination", not only describes the work of the Eclipse Foundation very well, but is also a nice closing word for the ROS-Industrial Conference 2019.

All three consortia, their members, and managing institutes like Fraunhofer IPA are happy and proud to be part of this journey. Let us continue it, for example with #RICEU2020! For some more impressions of the whole event, please watch the event video.

This was ROS-Industrial Conference #RICEU2019 (Day 2)


ROS-Industrial Conference 2019

December 10-12, 2019

Seven years after the very first public event entirely devoted to discussing the potential benefits of a shared, open-source framework for industrial robotics and automation, Fraunhofer IPA hosted the 7th edition of the ROS-Industrial Conference on December 10 to 12, 2019.

This is the second instalment of a series of three consecutive blog posts, presenting content and discussions according to the event days, including the following sessions:

  1. Day 1 with “EU ROS Updates” (watch all talks in this YouTube playlist of Day 1)

  2. Day 2 with “Software and System Integration” & “Robotics meets IT” (YouTube Playlist of Day 2)

  3. Day 3 with “Hardware and Application highlights” & “Platforms and Community”

Day 2 Part 1: Software and System Integration

The second day of the ROS-Industrial Conference featured speakers from several well-known companies that have been using ROS for a long time, or that started using it recently, and are developing it further together with the community. While some companies had already spoken at the conference in earlier years, there were also speakers from new contributors. This shows that industrial interest in ROS remains unbroken and is still increasing.

Matt Hansen from Intel opened the second day. He presented his company's ROS developments, which mainly include a Robot Development Kit (RDK) and navigation software. Intel especially focuses on the adoption of ROS2. The RDK supports the development and implementation of software components for mapping and planning, machine vision (point cloud generation, object detection, face and gesture detection, and more), and intelligent handling or grasp detection. The comprehensive navigation2 project pursues the goals of providing customizable (thanks to behavior trees), modular, and extensible software. To ensure quality and maintainability, an automated system test was created. It uses Gazebo and a TurtleBot 3 model to test localization, the transition into the 'active' lifecycle state, and navigation.

Matt Hansen, Intel, giving an overview of the ROS2 Robot Dev Kit and Navigation2


ROS2 tracing was the topic of Ingo Lütkebohle from Bosch. Tracing is important because there are currently various problems in performance analysis and execution monitoring, for example: How long does my system take to react? How many resources is it consuming? Factors like distributed systems or repetitive periodic processing complicate performance analyses. Lütkebohle explained which kinds of information are recordable, which tracepoints exist in ROS2, and how static and dynamic tracing differ. At the end, he gave an example of a tracing installation and implementation process.

ROS as the basis for a framework with which industrial robots no longer need to be programmed but can be intuitively instructed: Pablo Quilez from the startup drag&bot, whose first research activities took place at Fraunhofer IPA, spoke about this framework and first industrial projects. With the help of a graphical user interface, the user creates a robot program via drag & drop of function blocks. drag&bot is manufacturer-independent, offers the same user interface for different robots, and does not require any expert knowledge in robotics. It is an open platform that third parties can extend, e.g. with ROS packages.

Arne Roennau and his colleagues from the FZI Research Center for Information Technology developed ROS-based Cartesian controllers that enable motion, force, and compliance control for robotic manipulators. Cartesian control in task space is necessary for closed-loop force control, direct teaching, contact-rich manipulation, and manual guidance. That is why FZI worked on active Cartesian compliance using three controller modules and implemented them for a car door sealing assembly, a satellite assembly, and many other applications. The goal was to give the robot error-correcting contact skills for autonomous execution that are transferable to different robots.

Steve Peters from Open Robotics, an institution that has been supporting and evolving ROS and Gazebo for many years, presented more contributions to ROS. Open Robotics offers several developer tools to facilitate the use and integration of open-source software and supports ROS releases like Melodic (supported until May 2023) and Noetic (until 2025), the latter probably being the last ROS1 distribution. Together with other companies in the ROS2 Technical Steering Committee, Open Robotics helps manage the roadmap, contributes development effort, and sets developer policies. Last but not least, interfacing ROS and Gazebo has been on the agenda of Open Robotics for quite some years.

Gazebo simulations were also the main topic of the presentation by Musa Morena Marcusso Manhães. She works for Bosch, where she develops a Python library for scripting and rapid prototyping of Gazebo simulations. So far, there are several application-dependent difficulties with respect to simulations (e.g. generating variations of worlds and models, or scripting world layouts and event-based actions). Marcusso's approach is procedural generation, a technique from game development. It enables rapid prototyping of simulation scenarios and abstractions of simulation entities, allows scripting of Gazebo simulations, extends templating options for robot descriptions, and improves the conversion between URDF and SDF.

Musa Morena Marcusso Manhães, Bosch, presenting pcg_gazebo_pkgs, a Python library for scripting and rapid prototyping of simulated Gazebo models and worlds


Nadia Hammoudeh Garcia from Fraunhofer IPA highlighted the potential of combining model-driven engineering (MDE) and ROS. Among other things, ROS developers can benefit from the definition of common design patterns and specifications, model checker techniques, and automated code generators. Since there are already about 4000 hand-written open-source ROS packages, Hammoudeh not only develops tools to generate code but also tools to automatically extract ROS models from existing code. She released Eclipse-based tooling with a graphical interface and model editors, as well as domain-specific languages for the models. All her contributions are also part of the German research project "Service Robotic Network" (see the presentation from day 1 for more information).

Not only can MDE facilitate the deployment of ROS; so can the tools from MathWorks, MATLAB and Simulink. Shashank Sharma presented how this works, which advantages the tools offer, and how they address common challenges of autonomous systems, e.g. multi-domain expertise, complexity, and performance evaluation. He also discussed the key advantages of Model-Based Design and how it can be applied to robotics and autonomous systems. MATLAB helps analyze and visualize ROS data and prototype algorithms. A Simulink model allows automated code (C and C++) and ROS node generation, as well as prototyping new algorithms through the ROS interface. Finally, the company's tools allow incorporating ROS into Model-Based Design workflows.

Day 2 Part 2: Robotics meets IT

Robotics and IT are becoming more and more interlinked. This was already evident at the ROS-Industrial Conference 2018, when Roger Barga from Amazon Web Services (AWS) was invited to present a new service for robot applications, "AWS RoboMaker". Now, one year later, he presented the status of the service. It offers comprehensive functionalities for each of the three stages "design and develop" (storage, logging, metrics, image and video recognition, etc.), "test and verify" (simulation tools for thousands of concurrent simulations and model training), and "deploy and update" (control and multiple deployments, managing robots across multiple brands, deployment over-the-air). In addition, AWS contributed source code to the ROS2 core, along with tools for ROS2 to improve functionality and code quality. The company will continue these efforts.

Roger Barga, Amazon Web Services (AWS), giving a keynote on The Robotic Edge


Andrei Kholodnyi from Wind River addressed the problem that there is an increasing number of robots using ROS but a decreasing number of embedded software engineers. He stated that people do not want to program bits and bytes anymore. That is why Wind River provides the VxWorks SDK as a download for non-commercial usage. ROS2 is built on top of the VxWorks SDK, and developers can deploy and run it on ARM and Intel.

Canonical, the company that publishes Ubuntu, gives developers support for simple, secure, and scalable robotic deployment with ROS. Rhys Davies, a product manager for Canonical's robotics initiatives, presented tools like Snaps, containerized software packages for all Linux distributions, the company's efforts for continued support of Python 2, and Extended Security Maintenance (ESM) for ROS, to maintain a robot past its usual lifetime. With these offerings, Canonical aims to support users moving to ROS2 and to facilitate the transition of robot systems already in practice.

The company eProsima, represented by Jaime Martin Losa, offers the open-source DDS for ROS2, "eProsima Fast RTPS". This DDS offers real-time behavior (static allocations, non-blocking calls, sync and async publishing), intra-process communication, and a discovery server. Martin Losa detailed performance features using the example of an iRobot framework and gave numbers for benchmarking criteria like latency and throughput. More features are currently being developed with the help of project funding from ROSIN (see the presentation from day 1 for more details).

Besides Amazon, Microsoft was one of the most prominent companies presenting at the ROS-Industrial Conference. Gunter Logemann talked about ROS applications with Visual Studio Code and Azure. Since June 2018, the IT giant has been contributing to ROS development, and since then 279 ROS packages have been enabled on Windows. New features are, for example, the Azure IoT Hub connector, the Azure Kinect ROS node, and a Windows ML node. The Visual Studio Code developer environment offers several ROS extensions, including automatic workspace activation; starting, stopping, and viewing the ROS system status; and automatic discovery of build tasks. ROS2 support is also included.

Both offensive and defensive security aspects were presented by Endika Gil Uriarte and Victor Mayoral from Alias Robotics. They explained that the network and transport layers are the most vulnerable points, rather than ROS, ROS2, or the application itself. For them, security is a process and cannot be "finished" with a certain technology. The company offers a variety of security tools and has profound knowledge concerning vendors as well as ROS and ROS2, thanks to its public robot vulnerability database and to a robot security survey done in conjunction with Joanneum Research Robotics. One of its solutions to enhance security is the company's toolbox for robot security, "alurity". Soon there will also be RIS, the Robot Immune System for UR robots, offering defensive robot security.

Audience at the ROS-Industrial Conference 2019 at Fraunhofer IPA in Stuttgart, Germany


Analytics for autonomous driving and large-scale sensor data processing was the topic of Jan Wiegelmann from Autovia. The challenge here is the petabytes of data that are generated daily, both from simulation and from real-world sensors. The Autovia Analytics Platform enables large-scale data processing for these use cases. Autovia IO is a data access layer that enables analytics apps to read ROS bags from local and cloud storage. It works across diverse platforms and does not require a ROS installation. Additionally, Autovia FS is a virtual file system with which all applications can read ROS bags from local and cloud storage. Autovia IDE completes the offering: it is an analytics toolbox running as a managed service and addresses R&D engineers in the automotive and aviation industries.

Christoph Hellmann Santos closed the second conference day and presented the EU initiative agROBOfood, which aims at bringing ROS to the agro-food sector. This domain will have to cope with major changes like an ageing population, a shrinking workforce, climate change, and the lower productivity of ecological food production. New methods like vertical or urban farming and the key topic of sustainability are on the rise. Already, we see quite a few robot platforms in the fields that might help master these changes. In this context, agROBOfood makes three offers. First, ROS will be pushed for use in agricultural applications; main topics are functional safety for mobile agricultural robots, software architecture and development, and organizational structures such as working groups and sponsors. Second, the project addresses SMEs with an open call for funding ROS software developments. Third, service providers can join the agROBOfood network and benefit from access to technology maps, market knowledge, and standardization activities. Finally, they can get in contact with people from all areas of agrofood robotics.

In the evening of this conference day, the social dinner took place in the restaurant "Leonhardts", in the direct neighborhood of Stuttgart's iconic TV tower. For some more impressions of the whole event, please watch the event video.

This was ROS-Industrial Conference #RICEU2019 (Day 1)


ROS-Industrial Conference 2019

December 10-12, 2019

Seven years after the very first public event entirely devoted to discussing the potential benefits of a shared, open-source framework for industrial robotics and automation, Fraunhofer IPA hosted the 7th edition of the ROS-Industrial Conference on December 10 to 12, 2019.

This is the first instalment of a series of three consecutive blog posts, presenting content and discussions according to the event days, including the following sessions:

  1. Day 1 with “EU ROS Updates” (watch all talks of Day 1 in this YouTube playlist)

  2. Day 2 with “Software and System Integration” & “Robotics meets IT”

  3. Day 3 with “Hardware and Application highlights” & “Platforms and Community”

Day 1: Updates from ongoing activities in the EU

For the seventh time already, Fraunhofer IPA hosted a major ROS-Industrial event. The conference, for several years now scheduled shortly before the Christmas break, has established itself as the European event on the topic of ROS, where developers, researchers, companies, and all interested parties can learn about the current status of the free Robot Operating System (ROS) and have the chance to network and exchange information. Last December, some 150 participants came to Stuttgart and benefited from around 40 presentations from academia and industry.

The first day was dedicated to "EU Updates". There is a lot to report here because extensive national and EU funding is flowing into research projects that directly serve further developments of ROS and, in particular, work towards industrial suitability. The projects cooperate closely with the existing community and are strongly networked with each other because they pursue a common goal despite their different approaches: the creation of an EU Digital Industrial Platform for Robotics.

The lectures started with an opening by Thilo Zimmermann, Manager of the ROS-Industrial Consortium Europe, as well as by Christoph Hellmann Santos, Group Leader at Fraunhofer IPA, and Werner Kraus, Head of the Robot and Assistive Systems department, also at Fraunhofer IPA. Kraus gave an overview of the current robotics market and presented growing market figures from the statistics of the International Federation of Robotics on both industrial and service robotics. The latter in particular often relies on ROS, because by utilizing existing open-source software like ROS, first prototypes can be created much more quickly, without having to reinvent the wheel or expend too many of one's own resources.

Werner Kraus, Head of the Robot and Assistive Systems department at Fraunhofer IPA


Updates on EU project ROSIN

The largest funding project for ROS is currently ROSIN, which will come to an end in December 2020. First, Carlos Hernandez Corbato from TU Delft gave an introduction to the project, which has two objectives: 1) assuring the availability of high-quality robot software tools and components, and 2) creating a sufficiently large European user and developer base. To this end, the project is particularly active in three areas: quality assurance of software development and components, continuing education for students and professionals, and financial support for ROS components in the context of "focused technical projects" (FTPs). In more than 50 FTPs, more than three million euros of funding have already been granted.

In order to improve software quality, ROSIN continues to develop technologies and methods like continuous integration, model-driven development and model-in-the-loop, automated test generation, and code scanning. In this context, Andrzej Wasowski presented the topic "[Reactive] Programming with [Rx]ROS". Reactive programming raises the abstraction level compared to the standard ROS API by making the flow of information stand out explicitly in the source code.

As far as further education measures are concerned, 530 students and 436 professionals have already been trained in ROS-I schools and ROS-I academies, as Stephan Kallweit from FH Aachen reported. The project goal of 1000 participants has thus almost been reached, and ROSIN has been very successful as a multiplier of ROS knowledge. In addition, "educational projects" (EPs) provide financial support for formats such as web-based interactive tutorials, ROS training centers, and applied training events by third parties. The last project year is to be used in particular to create ROS2 training content.

Carlos Hernandez Corbato, professor at TU Delft and coordinator of EU project ROSIN


Insights into "Focused Technical Projects"

As mentioned above, ROSIN has already funded more than 50 FTPs addressing specific industry needs. Possible FTP contents were algorithms (e.g. a SLAM algorithm), application templates, improvements of existing components, process-related work (e.g. code security audits), improvement of documentation, or integration with other software frameworks. Five FTPs presented their work at the conference:

  • Rafael Arrais from the research institute INESC TEC talked about "ROBIN: The ROS-CODESYS Bridge". It focuses on developing and releasing a bidirectional, reliable, and structured communication bridge between ROS and CODESYS, a soft PLC that can run on embedded devices and that supports a variety of fieldbuses, and even OPC UA. The developed software will allow the parametrization of ROS modules through IEC 61131-3 programming languages and also streamline the interoperability between ROS and robotic hardware or automation equipment, fully empowering the Industry 4.0 paradigm of Plug'n'Produce.

  • The Norwegian company PPM, represented by Trygve Thommesen, developed BLACKBOX, an automated trigger-based reporting, data recording, and playback unit collecting data from robot and manufacturing systems. It takes error reporting and recovery of industrial robot systems to a new level by developing and utilizing an innovative ROS-based framework. The framework is built upon components from the project partner's previous research and existing ROS modules.

  • The Spanish research center CATEC worked on a robust and reliable GPS-free localization algorithm for aerial robots applied to industrial applications. As Paloma Carrasco explained, it focuses on safety and computational efficiency. This ROS library helps foster the development of drone-based solutions for industrial inspections.

  • Olivier Michel, CEO at Cyberbotics, presented cross-platform ROS simulation for mobile manipulators. This FTP aims at developing a simulation framework for training pilots of robots used for intervention in case of a nuclear incident. These robots include industrial arms and will be controlled using ROS. The project will contribute to the ROS-Industrial community with a new open-source, cross-platform, high-performance simulator compatible with ROS for industrial robots.

  • Finally, Luca Muratore from IIT in Italy talked about ROS End-Effector. It provides a ROS-based set of standard interfaces to command robotic end-effectors in an agnostic fashion. Nowadays, end-effectors are controlled using customized software wrappers and specific frameworks; ROS End-Effector aims to design and implement a universal ROS package capable of communicating with different end-effectors. The project will be ROS2-compatible and will work both in simulation (Gazebo environment) and on real hardware.

The session ended with a lecture by Olivier Stasse, LAAS-CNRS. He reported on the use of ROS in robots that are to be used for partial automation in aircraft construction. This domain requires lightweight, safe, mobile, and versatile manufacturing cells that enable human-robot collaboration. Stasse is developing this within the framework of the "Rob4Fam" lab (Robots For the Future of Aircraft Manufacturing). Implemented technologies are real-time/interactive planning, torque control, SLAM, and balance, so that robots can climb stairs, screw, or drill, all under ROS control. ROS also facilitates the integration between lab and industry.

More projects boosting ROS

Besides the research project ROSIN, there are other national and European research efforts building on or improving ROS components. One of them is the EU project RobMoSys, represented at the conference by Dennis Stampfer from Ulm University of Applied Sciences. It aims at developing composable models and software for robotic systems and uses a model-driven engineering approach as the key enabler for complex software and system integration and for integrating existing technologies. A key component is MROS: Metacontrol for ROS2 systems, a RobMoSys-integrated technical project. Carlos Hernandez Corbato from TU Delft talked about this project, which enables models@runtime to drive architectural adaptation for reliable autonomy.

On the national level, there is the German project "Service Robotic Network" (SeRoNet), which aims at building a community-driven platform for a more efficient development of service robots. With the underlying technology, the design, development, and deployment of service robots in a variety of areas, from logistics, care, and healthcare to assembly support in manufacturing operations, shall become much easier than it is today. Through the online platform "robot.one", users, system integrators, and component manufacturers of service robot solutions will be able to collaborate efficiently and jointly support solutions from requirements analysis to deployment. In 2020, two open calls offer funding opportunities: the first call, open until late March, addresses companies that aim to make their software or hardware components compatible with the SeRoNet platform. The second call, in summer 2020, addresses end users and system integrators proposing and implementing novel service robot solutions using SeRoNet technology.

Last but not least, there is the project RoboPORT. Within this project, an online development platform for service robotics with an extensive library of open-source robotics hardware is being created. New collaborative processes such as open innovation, crowd engineering, and similar methods will be mapped on the platform to support a continuous and distributed development process. The project also supports ideation processes, hackathons, and makeathons, and offers a network of highly motivated domain experts who help realize specific project ideas.

ROS is leaving the shop floor

The EU activities around ROS are not only focused on the use of free software in production environments. The presentation by Gonzalo Casas from ETH Zurich showed how ROS can be used for architecture and digital fabrication in the construction industry. For example, a robot can use the path planning software MoveIt to build a wall with bricks.

Gonzalo Casas of ETH Zurich presented how ROS can be used for architecture and digital fabrication in the construction industry


Finally, it is the agricultural and food sector where robotic applications are increasingly in demand, including those using ROS. Fraunhofer IPA is also very active in this area, for example in the Fraunhofer lead project "Cognitive Agriculture" (COGNAC) for sustainable and at the same time profitable agriculture. At the EU level, the agROBOfood project is currently gaining momentum, with the aim of building the European ecosystem for the effective adoption of robotics technologies in the European agrifood sector. These agricultural activities are not the first ones related to ROS. As Andreas Linz from Hochschule Osnabrück presented, projects like BoniRob and elWObot already built on ROS. The advantages are manifold: ROS is modular, expandable, and reusable, has a huge and active community, and offers over 3000 nodes for nearly all use cases as well as a large collection of helpful tools, e.g. RViz. Also, many companies provide ROS-compatible drivers and software tools for their hardware, and integration with other open-source libraries like PCL (Point Cloud Library), OpenCV, or Gazebo is possible.

The BoniRob system is a modular robot platform for agricultural applications using an app concept. Depending on the task, different hardware and software modules can be installed. So far, there is a phenotyping app and a soil2data app measuring the nutrients in the soil. Furthermore, there is the robot elWObot for maintenance in orchards and vineyards, and there are robots for education and demonstration. For all of these, ROS plays a crucial role.

The first conference day ended with a welcome reception and several ROS demonstrators at the Stuttgart research campus “ARENA2036” for future car manufacturing.

For some more impressions of the whole event please watch the event video.

Hands on with FRAMOS D435e camera featuring Intel® RealSense™ technology for industrial robotics

It has been nearly 10 years since the release of the Xbox Kinect camera, and things have certainly changed. Gone are the days of using reverse-engineered software drivers or soldering on USB cables. We have arrived in a time of 3D perception plenty. Multiple vendor-supported 3D camera options exist utilizing different depth camera technologies, to such an extent that picking one can often be difficult. The ever-popular 3D camera survey has grown to over 20 cameras, and it is still growing. Nevertheless, the ROS-I team is committed to testing as many of these sensors as possible and putting them through their paces.

One option that I have recently become fond of is the Intel RealSense. In my mind it is a sort of industry standard. Released and actively supported by ROS-I consortium member Intel, the RealSense is inexpensive, available off the shelf, and works reliably across a wide variety of operating conditions. Combine that with a stellar ROS (and ROS 2) driver, and you have a winner. The convenience of being able to just plug in the camera, bring up realsense-viewer, and then tune or debug the camera cannot be overstated. It is a camera for the robot masses. However, it does have a problem: the Intel RealSense is USB 3.


For industrial automation tasks, USB cameras have long been an issue. We often set up robotic wrist cameras at Southwest Research Institute. With a properly calibrated depth camera on a robot wrist, we are able to reconstruct large parts and automate complex industrial tasks. USB complicates this setup. Flaky connections can plague it, leading developers to ritualistically disconnect and reconnect USB cables whenever something goes wrong. Further, cable runs quickly exceed the length that USB can handle, leading to the use of sometimes-suspect USB-Ethernet extenders. If only there were a camera option that had the software backing of the Intel RealSense with the industrial readiness of PoE. Meet the FRAMOS Industrial Depth Camera D435e. Based on Intel RealSense technology, FRAMOS has packaged the D435e 3D GigE camera in an IP66 enclosure and replaced the USB-C connector with an industrial-ready M12 GigE connection. While it certainly isn't going to replace the Intel RealSense as the camera for the robot masses, it might be the camera for the industrial robot masses.


The ROS-I team recently got the chance to work with the camera hands-on, deploying it on an industrial scan-and-plan project. The setup for the FRAMOS RealSense is straightforward, if a little quirky. On Linux, one simply installs the FRAMOS CameraSuite software and then FRAMOS's custom version of the librealsense SDK. Then, upon configuring the network settings for the camera, it behaves just like the USB RealSense: ROS doesn't know the difference, and realsense-viewer behaves in the same way. However, using it has not been without its difficulties. ROS support for this camera was released October 29, 2019, and we began integrating it into our system on November 4, 2019. As such, there were some problems. The software suite only allowed a limited set of the parameters to be set and required a custom version of libglfw3. This caused an issue where the camera mysteriously stopped working after an unrelated package was updated, and the only apparent solution was to reinstall the software suite. However, a few weeks later a new version of the software was released that fixed both issues.
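
To give a feel for the payoff of that software compatibility, below is a minimal depth-capture sketch against the standard librealsense2 API. It is our own illustration rather than FRAMOS sample code; the stream profile (848x480 at 30 Hz) is just an illustrative choice, and it assumes the FRAMOS CameraSuite and their librealsense variant are installed so that the D435e enumerates like any other RealSense device.

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
  // Start a depth stream; with the FRAMOS librealsense variant installed,
  // the GigE D435e shows up here just like a USB RealSense would.
  rs2::pipeline pipe;
  rs2::config cfg;
  cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);
  pipe.start(cfg);

  for (int i = 0; i < 30; ++i)
  {
    // Block until a coherent set of frames arrives.
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();

    // Report the range at the center pixel, in meters.
    float dist = depth.get_distance(depth.get_width() / 2, depth.get_height() / 2);
    std::cout << "Center depth: " << dist << " m\n";
  }

  pipe.stop();
  return 0;
}
```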

Overall, using this camera has been troublesome, but to an ever-decreasing degree. To some extent, the convenience and maturity of the Intel RealSense make it easy to complain about any small thing that goes wrong on new hardware, but I personally have high hopes for the FRAMOS RealSense. When debugging, their software support was quick to respond, and the software seems to be in a state of continued improvement. We still run into occasional crashes, but similar issues plagued the Intel RealSense a mere year ago and have since been resolved. The required custom version of the librealsense SDK is its biggest oddity, but with continued collaboration between FRAMOS and Intel this may be resolved someday.

Ultimately, when I think about the ideal camera for a ROS-I system, I imagine a PoE camera with easy-to-use software, rock-solid vendor support, and an active ROS community. While there will always be cameras that excel at one application or another due to depth of field, resolution, or sensing technology, for medium to large scan-and-plan applications the FRAMOS RealSense is well on its way to achieving that goal. Others will undoubtedly join the market, but regardless, it is an exciting time for 3D perception in industrial applications!

New edition of the ROS MOOC from TUDelft for ROS beginners


We are pleased to announce a new edition of the ROS MOOC, Hello (Real) World with ROS. The course will open on 15 January 2020 at 13:00 CET on the edX online learning platform.

You can enrol now at the Course Webpage for a fun ROS learning journey!

This course is a part of the educational activities of the EU project ROSIN and is offered by the TU Delft Cognitive Robotics department with the support of the Online Learning School.

The target audience for the course is beginner-level ROS1 users. The course will be instructor-paced, with a duration of six weeks. A study/work load of about 8-12 hours per week is expected.

See you online from January 15th!

The Delft ROS MOOC team
