Fall 2022 ROS-I Community Meeting Happenings

The ROS-Industrial Community Meeting was held on Wednesday, 9/21, and it brought a few interesting updates for the industrial ROS community as well as for the broader ROS/open source robotics community. I wanted to take this moment to highlight some key items that came out of this community meeting and provide links to the presentations. Typically, I add these to the event agenda, but due to some changes in how the meeting unfolded, I figured a blog post would be easier to track down.

If you just want the specific decks from the speakers, I have included them here: (YouTube Playlist for your viewing pleasure HERE!)

Per typical ROS-I meetings, it started with an update on the ROS-I open source project and program. A fair amount of time was spent on ROS-Industrial training. It is no secret that during COVID we had to go virtual, and that certainly made training available to more people. As in-person events started to become possible again, we experimented with delivering training in a "hybrid" fashion, with students able to opt to attend remotely.

The feedback has not been very positive: online attendees feel they do not get the same value as those in the room, and they are often left on their own more than those getting in-person guidance from the instructors.

The solution moving forward is to no longer offer hybrid training, but to periodically offer a fully virtual training option for those who do not want to attend an in-person event.

That said, with in-person events broadly the norm now, we have brought back member-hosted training, which provides improved regional opportunities to attend a training event without a near cross-country flight. We look forward to continuing to offer member-hosted events, covering both coasts and the Midwest.

I also previewed some of the data we received from the annual meeting on what we can do to improve adoption of ROS across the industrial robotics community. It isn't quite a Venn diagram, as shown below, but ease of use and training both contribute to roadblocks to adoption. We are continuing to line this up with past workshop feedback data and process-domain-specific feedback to provide an updated roadmap for the ROS-I open source project. Stay tuned!

Initial digestion of workshop feedback from ROS-I Annual Meeting

From here, Michael Ripperger, ROS-I Tech Lead, shared some interesting updates around the Reach repository and some of the new scoring features for assessing the reachability of a manipulator configuration relative to a surface targeted for processing. These utilities have been refined to enable richer displays for analysis and improved solution design across applications from painting through surface finishing. The updated repository may be found here. Recent improvements to Industrial Reconstruction were also noted, as various users across the community kick the tires and provide feedback.

Brad Suessmith and Oded Dubovsky from Intel provided an update on the Intel RealSense product lineup, what is on the horizon, and when releases will be announced. Stay tuned for a splashy announcement soon, in concert with upcoming vision shows. Oded covered the roadmap for the ROS 2 driver, with a big push toward a ROS 2 beta by the end of 2022.

Craig Schlenoff reviewed NIST's role in participating in, and even funding, work around open source robotics. The focus of this talk was the ability to realize agile performance within robotic systems. The key idea is to enable easy and rapid reconfiguration and re-tasking of robotic systems. This has been broken into both hardware and software agility, and Craig summarized why this is of interest and how it fits into the NIST vision relative to dissemination of standards and practices that support sustainable capability development and proliferation. You can also check out the recently launched Manufacturing AI site, which seeks to provide a landing page for information around AI development for manufacturing.

NIST high-level view of Measurement Science for Manufacturing Robotics

John Bonnin, from SwRI, then did a dive into the ConnTact framework. Here, the intent is to provide a way to easily evaluate learning policies around various assembly tasks, initially targeting the NIST assembly task board, in a way that abstracts details around specific hardware while enabling easy, or at least simpler, task definition. The framework has been further refactored, and a port to ROS 2 is underway in parallel. The community is encouraged to check out the repository and engage in the improvement of this framework!

Updated schematic of the ConnTact Framework

PickNik CEO Dave Coleman dug into MoveIt (reviewing the roadmap; great things coming!) and MoveIt Studio. For those who haven't had the chance to see MoveIt Studio in action, the intent is to provide a platform for developing complex yet fault-tolerant robotic applications. While the experience is simpler and reduces reliance on high-end experts, a certain level of expertise is still required; however, it provides the ability to build and debug advanced applications and get to the validation phase of your application sooner.

There is also cloud-based/remote monitoring and error recovery. This enables solution developers to think through recovery plans, support systems in the field, and get applications back up and running after a fault more efficiently. It also supports a diversely located development staff, all via the PickNik collaboration with Formant.

Summary of MoveIt Studio

Dave also executed a live demo of the MoveIt Supervisor. The MoveIt Supervisor enables operator-in-the-loop applications, suitable for object/task identification in the scene to set up execution in a cluttered or high-noise environment. This is a great example of supervised autonomy, where things are "mostly automated". The demo went off without a hitch: Dave stepped through identifying a door handle and planning the trajectories for operating it, then the plan executed and the robot opened the door.

MoveIt Supervisor Demonstration

Dave then reviewed the behavior tree user interface. Built on BehaviorTree.CPP, it enables complex behavior development in an environment that makes the task simple to visualize and edit.

Coming soon will be updates around trajectory introspection, PRM graph planning caching, and optimal trajectory tuning. MoveIt Studio is available through PickNik via a licensing model. Check in with the PickNik team to learn more about MoveIt Studio and how to check it out for yourself!

This ended up being a great community meeting, and we look forward to the next community update in December 2022. It has been rewarding to see large tech companies, government agencies, and small innovative startups march together in providing resources, tools, and capability to enable new capabilities in manufacturing and industry. That is the goal of the ROS-Industrial open source project, and we look forward to what's next!

An Open Framework for Additive Manufacturing

Mainstreaming Robotic-Based Additive Manufacturing

Robotic additive manufacturing, sometimes called robot 3D printing, is evolving from research to applied technology with maturation of methodologies (gantry systems and robot arms) and source materials (metal powder, wire, polymer and concrete).

A conventional gantry system that layers material via a single x-y plane tool path is an established 3D printing solution for certain repeatable applications, while robotic arms can offer more complexity by layering material in multiple planes. However, to date, traditional approaches for planning 3D printing trajectories are not optimized to take advantage of high degree-of-freedom (DOF) systems that include industrial manipulators.

Leveraging advances in planning for complex additive manufacturing (AM) with high-DOF, robot-arm-equipped solutions entails planning and execution processes for both the hardware and the work environment. The steps of a process often depend on multiple proprietary software tools, machine vision tools, and drivers for motion planning, end effectors, printer heads, and the media used in each 3D printing process.

ROS Additive Manufacturing

Over the years, within the ROS-I open source project and the ROS-Industrial Consortium, the creation of frameworks that enable new application development has become a standard approach for enabling rapid extensibility from an initially developed application. After numerous conversations with end users and other technical contributors, there appeared to be interest in using some of the capabilities within the ROS and ROS-I ecosystem to create a framework that takes advantage of high-DOF systems and optimization-based motion planning to provide a one-stop shop for additive manufacturing planning and application.

ROS Additive Manufacturing (RAM) aims to bring the flexibility of industrial robots to additive manufacturing applications. While looking for an open-source ROS package to slice meshes into configurable trajectories for additive manufacturing using a Yaskawa Motoman welding robot, we became aware of the ROS Additive Manufacturing package developed by the Institut Maupertuis in Bruz, France, and so this was used as a starting point.

The RAM package was originally built in ROS Melodic, so it was rebuilt from source in ROS Noetic. Building the application from source in Noetic was mostly straightforward. We followed the installation instructions detailed in the Maupertuis Institute's GitLab repository, replacing the pip terminal commands with pip3 and all commands specifying ROS Melodic with ROS Noetic. When attempting to build the package, there were clashes between Sphinx 1.7.4 and the latest version of Jinja2 (version 3.1.2 as of June 2022); installing an older version of Jinja2 (version 2.10.1) allowed the package to build successfully and the software to launch.

The RAM software features an RViz GUI that allows the user to select various trajectory generation algorithms to create a trajectory from a given mesh or YAML file. Printing parameters such as blend radius, print speed, laser power, and material feed rate can be modified for each individual layer of the print. The entrance and exit trajectories can also be given a different print speed, print angle, print length, and approach type. The exported trajectory is output as a ROS bag file. For our experiment with a Yaskawa Motoman welding robot, we needed to post-process the results to correctly interface with the robot, as sketched below.
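
For those curious what consuming that bag file looks like in code, here is a minimal sketch using the standard ROS 1 (Noetic) rosbag Python API. The file name is a placeholder, and the exact topics and message types inside the RAM export should be confirmed with `rosbag info` before writing a real post-processor.

```python
# Minimal sketch: open the exported bag and walk its messages.
# 'trajectory.bag' is a placeholder name.
import rosbag

with rosbag.Bag('trajectory.bag') as bag:
    for topic, msg, stamp in bag.read_messages():
        # Each message carries waypoint data (poses plus layer parameters);
        # a post-processor walks these and emits robot-native motion commands.
        print(topic, type(msg).__name__, stamp.to_sec())
```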

Going from Plans to Robot Motion

Motion was achieved by post-processing trajectories with a customized version of the RoboDK post-processor. Welding parameter files were defined on the robot's teach pendant as normal. A "User Frame" (reference frame) was defined at the center of the metal plate to match the ROS environment, and the robot's tool was edited to match the orientation used by ROS. This allowed us to generate robot programs without having to configure the ROS environment to match the workcell. Extra lines were added in the post-processor to start and stop welding. The program files were copied via FTP onto the controller and executed natively as linear moves.

This hybrid ROS/robot-controller setup allowed us to get the demonstration running quickly. The output of the tool is a list of Cartesian poses, which the post-processor converts to linear moves in the robot's native format, as illustrated below. The robot did the work of moving in lines at a set velocity, so there was no reason to do additional planning or create joint trajectories. Note that the robodk_postprocessors package available on GitHub has not been maintained or updated in some time; numerous bugs exist, and these needed workarounds.
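
As a rough illustration of what that conversion does (and deliberately not a complete Motoman job: real JBI files need headers, proper speed units, and frame declarations that the customized post-processor handles), consider this simplified pose-to-linear-move sketch:

```python
# Simplified pose-to-linear-move conversion; the MOVL-style syntax is a
# stand-in only, not valid INFORM/JBI output.
def poses_to_linear_moves(poses, speed=10.0):
    """poses: iterable of (x, y, z, rx, ry, rz) tuples in the user frame."""
    lines = []
    for i, (x, y, z, rx, ry, rz) in enumerate(poses):
        lines.append(f"MOVL P{i} X={x:.3f} Y={y:.3f} Z={z:.3f} "
                     f"RX={rx:.1f} RY={ry:.1f} RZ={rz:.1f} V={speed:.1f}")
    return "\n".join(lines)

print(poses_to_linear_moves([(0.0, 0.0, 2.5, 0.0, 180.0, 0.0),
                             (50.0, 0.0, 2.5, 0.0, 180.0, 0.0)]))
```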

Existing ROS drivers are focused entirely on joint trajectories. A different approach that would allow streaming of robot programs would be beneficial, and this is part of future work to be proposed.

Below are two screenshots from the software showing trajectories produced for a star and a rectangle with rounded corners. These shapes were included with the software as YAML files. The Contour generation algorithm was used with a 2.5 mm layer height and a 5.0 mm deposited material width for both shapes. The star shown below had three layers and the rounded rectangle had five layers. All other parameters were left at their default values.

Creation of a star shape set of tool paths via the RAM GUI.

Creation of a rounded corner rectangle within the RAM GUI.

As seen in the video below, the GUI provided a convenient and intuitive way to modify trajectory and print parameters. Paired with our post-processor, sending completed trajectories to the robot hardware was efficient.

Screen capture of the process within the RAM software. After clicking "generate trajectory", the post-processor saves the output into a JBI file, which is transferred to the robot via FileZilla.

Test samples were made on a test platform provided by Yaskawa Motoman over the summer of 2022. As can be seen in the initial test samples and more complete builds, the application was able to adjust motion profiles and weld settings, including more advanced waveforms such as Miller Electric's Regulated Metal Deposition (RMD) process.

Figures Above: In-Process and completed RAM generated tool path sets executed on the Yaskawa testbed system.

To streamline the process of building the RAM package from source, the documentation should be updated to detail building in ROS Noetic instead of ROS Melodic. Additionally, the documentation does not show how to interface with robotic hardware or post-process the exported trajectories. Although this is beyond the intended scope of the RAM software project, covering it would improve the utility of this software for industrial applications. The documentation for the package is currently in French; an English translation would make the adjustable parameters within the software easier for English speakers to understand.

Future work seeks to incorporate the ability to fit and apply the generated tool paths to an arbitrarily contoured surface within the actual environment, much as is done in the various Scan-N-Plan processes for surface processing currently available within the ROS-I ecosystem. This would enable additional intermediate inspection and processing as the build progresses, or updating the build based on perception/machine vision data.

Furthermore, a new driver approach that enables more efficient tool-path-to-trajectory streaming would improve usability and interfacing with the hardware. Implementing algorithms that ensure consistent profile and acceleration control through sharp transitions would also be beneficial, and these may be realized through optimization-based planners such as TrajOpt. Porting to ROS 2 would also be in scope.

A ROS-Industrial Consortium Focused Technical Project proposal is in the works that seeks to address these issues and offer a complete open source framework for facilitating flexible additive manufacturing process planning and execution for high degree of freedom systems. Special thanks to Yaskawa Motoman for making available the robotic welding platform, and thanks to Doug Smith at Southwest Research Institute for working out the interaction between the RAM package and the robotic system.

Editor's Note: Additional thanks to David Spielman, an intern this summer at Southwest Research Institute. This work would not have been possible without his diving into the prior RAM repository and getting everything ready for testing.

ROS2 Easy-to-Adopt Perception and Manipulation Modules Open Sourced

ROS-Industrial has developed the easy_perception_deployment (EPD) and easy_manipulation_deployment (EMD) ROS2 packages to accelerate industry's efforts in training and deploying custom CV models and to provide a modular and configurable manipulation pipeline for pick-and-place tasks, respectively. The overall EPD-EMD pipeline is shown in Figure 1.

Figure 1. Overall EPD-EMD Pipeline

The EPD ROS2 package helps accelerate the training and deployment of Computer Vision (CV) models for industrial use. The package provides a user-friendly graphical user interface (GUI), shown in Figure 2, to reduce the time and knowledge barrier, so even end users with no prior programming experience can use the package. It relies on open-standard ONNX AI models, eliminating over-reliance on any single ML library such as TensorFlow, PyTorch, or MXNet.

Figure 2. Startup GUI of EPD

EPD itself runs a deep-learning model behind a ROS2 interface engine and outputs object information, such as the object name and location, in a custom ROS2 message. This can be used for use cases such as object classification, localization, and tracking. To train a model for custom object detection, all a user needs to prepare are the following:

  • .jpg/.png image dataset of custom objects (approx. 30 images per object required)
  • .txt Class Labels List

The expected output will be:

  • .onnx trained AI model

Figure 3. EPD Training to Deployment Input & Output
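
For a sense of what deploying that .onnx file could look like outside the EPD GUI, here is a rough sketch using the onnxruntime package directly. The model file name, input resolution, and output decoding below are assumptions; EPD itself wraps this inference in a ROS2 node that publishes its custom message.

```python
# Rough deployment sketch with onnxruntime; names and shapes are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("trained_model.onnx")
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 480, 640).astype(np.float32)  # placeholder image
outputs = session.run(None, {input_name: frame})
# For a detection model the outputs would typically be boxes/labels/scores.
print([o.shape for o in outputs])
```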

To cater to the different use cases and requirements of different end users, the package also allows for customizability via 3 different profiles:

• Profile 1 (P1) – fastest, but least accurate

• Profile 2 (P2) – mid-tier

• Profile 3 (P3) – slower, but most precise output

EPD caters to 5 common industrial tasks achievable via Computer Vision.

  1. Classification (P1, P2, P3)
  2. Counting (P2, P3)
  3. Color-Matching (P2, P3)
  4. Localization/Measurement (P3)
  5. Localization/Measurement/Tracking (P3)

Figure 4. An output of EPD at Profile 3, with Object Localization, operating at 2 FPS

The EMD ROS2 package is a modular and easy-to-deploy ROS2 manipulation pipeline that integrates perception elements to establish an end-to-end industrial pick-and-place task. Overall, the pipeline comprises 3 main components:

  1. Workcell Builder

The Workcell Builder, shown in Figure 5, provides a user-friendly graphical user interface (GUI) that allows users to create a representation of their robot task space, providing a simulation of the robot environment as well as the initial state for trajectory planning using motion planning frameworks like MoveIt2.

Figure 5. Workcell Builder from EMD

  2. Grasp Planner

The Grasp Planner subscribes to a topic published by a given perception source and outputs a specific grasp pose for the end effector using a novel, algorithmic depth-based method. The generated pose is then published as a ROS2 topic. As shown in Figure 7, the Grasp Planner currently supports and provides a 4-degree-of-freedom (DOF) pose for both multi-finger and suction-array end effectors, in addition to the traditional two-finger and single-suction-cup grippers.

Figure 6. Pointcloud to grasp point generation

Figure 6. Grasp Planner

It aims to eliminate the pain points that users face when deploying machine learning-based grasp planners such as:

• Time Taken for Training & Tedious Dataset Acquisition and Labelling

Currently available datasets, such as the Cornell Grasping Dataset and the Jacquard Grasping Dataset, generally account for two-finger grippers and training on generic objects. For customized use cases, datasets need to be generated and labeled manually, which requires a lot of time. Semantic descriptions of multi-finger grippers and suction arrays may be hard to determine as well.

• Lack of On-The-Fly End Effector Switching

In a high-mix, low-volume pick-and-place task, different end effectors need to be swapped in to cater to the grasping of different types of objects. Changing end effectors would mean users have to collect a new dataset, then re-label and re-train the models before deploying them.

  3. Grasp Execution

Grasp Execution was developed to provide a robust path planning process for navigating the robot to the target location for the grasping action. It serves as a simulator that uses path planners from the motion planning framework MoveIt2, together with the output generated by the Grasp Planner. Figure 7 shows various items being picked successfully.

Figure 7. Grasp Execution on different objects using different end-effectors

Overall, it benefits users by providing seamless integration with the Grasp Planner: the grasp execution package communicates with the grasp planner by subscribing to a single ROS2 topic with the GraspTask.msg type, as sketched below. Beyond this, the grasp execution package also takes dynamic safety into account, which is important as collaborative robots often operate closely with human operators and other obstacles in a dynamic environment.
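
A hedged sketch of that single-topic handoff is below; note that the message import path and the topic name are assumptions here, so check the EMD documentation for the exact names.

```python
# Hedged sketch of a node subscribing to the grasp planner's output.
import rclpy
from rclpy.node import Node
from emd_msgs.msg import GraspTask  # assumed package/message location

class GraspTaskListener(Node):
    def __init__(self):
        super().__init__('grasp_task_listener')
        # 'grasp_tasks' is an assumed topic name.
        self.create_subscription(GraspTask, 'grasp_tasks', self.on_task, 10)

    def on_task(self, msg):
        # Grasp execution would hand the planned pose(s) to MoveIt2 here.
        self.get_logger().info('Received a grasp task from the planner')

def main():
    rclpy.init()
    rclpy.spin(GraspTaskListener())

if __name__ == '__main__':
    main()
```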

Figure 8. Improved grasp execution: dynamic safety architecture

Figure 9. Dynamic safety zones

The robot needs to be equipped with such capabilities to address safety concerns and detect possible collisions along its trajectory so it can avoid obstacles. Users are provided with a vision-based dynamic collision avoidance capability through the use of Octomaps: when a collision is predicted to occur within the robot's trajectory, the dynamic safety module is triggered to either stop the robot or dynamically replan its trajectory given the new obstacles in the environment.
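
Conceptually (and not using the actual EMD API), that stop-or-replan decision can be thought of as a simple timing comparison, as in this illustrative sketch:

```python
# Purely conceptual; all thresholds and names are illustrative.
def dynamic_safety_action(time_to_collision, stop_time, replan_time):
    """All arguments in seconds; returns the action to take."""
    if time_to_collision <= stop_time:
        return "emergency_stop"          # too close: brake immediately
    if time_to_collision > stop_time + replan_time:
        return "replan_trajectory"       # enough margin to route around
    return "slow_down_and_replan"        # scale speed while a new plan runs

print(dynamic_safety_action(time_to_collision=0.4, stop_time=0.1,
                            replan_time=0.2))  # -> replan_trajectory
```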

Both of these packages have been formally released and open sourced on the ROS-Industrial GitHub organization. The team would also like to acknowledge the Singapore Government for their vision and support in funding this R&D project, “Accelerating Open Source Technologies for Cross Domain Adoption through Robot Operating System (ROS)”, supported by the National Robotics Programme (NRP).

Announcing Industrial Reconstruction Leveraging Open3D

Open3D Industrial Reconstruction of an aerospace radome

Mesh reconstruction is often a critical step in an industrial robotics application. CAD models are not always readily available for all parts, and parts have often warped or changed due to frequent use in the field. Reconstruction allows a robotic system to get mesh information in these scenarios. Once a mesh has been generated, software can be used to generate toolpaths for the robot, either autonomously or with human input. These toolpaths can then be converted into robot trajectories, which are subsequently executed.

Many sensors and software packages exist for generating pointclouds or meshes, and RGB-D cameras are becoming increasingly popular and readily available. Previously, ROS-I released yak, which enabled using these cameras mounted on a robot arm to create a mesh. However, yak required CUDA, which can be difficult to set up, and yak would frequently have accuracy issues at the boundaries of meshes. Our new Industrial Reconstruction package still uses these same RGB-D cameras but, by building on the 3D data processing library Open3D, makes integrating mesh reconstruction into your industrial robotics application easier than it has ever been.

Figures Above: Creation of a mesh from a highly reflective part

Industrial Reconstruction can be set up by simply running the command “pip3 install open3d” and then cloning and building the repository like any other ROS 2 package. The TSDF algorithm provided by Open3D appears to be less susceptible to the edge issues seen in yak, and it outputs fully colorized meshes. Having color in the meshes makes it easier to manually create toolpaths, or the color can be used to drive segmentation for toolpath planning, all while giving users more confidence in the accuracy of the generated mesh. On top of this, a live feed of the reconstruction in progress can be visualized in RViz. This lets users go back and scan areas that are missing before exporting the mesh, rather than discovering the missing areas later and requiring a full rescan.
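
As a rough sketch of the Open3D calls underneath this style of pipeline (voxel size, truncation distance, intrinsics, and file names are all placeholder assumptions; the Industrial Reconstruction node feeds these values from ROS 2 topics and TF instead):

```python
# Sketch of an Open3D TSDF integration loop producing a colorized mesh.
import numpy as np
import open3d as o3d

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01,  # 1 cm voxels
    sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

# Repeat for every captured frame:
color = o3d.io.read_image("color_000.png")
depth = o3d.io.read_image("depth_000.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_trunc=1.5, convert_rgb_to_intensity=False)
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
camera_pose = np.eye(4)  # camera pose from robot kinematics / TF
volume.integrate(rgbd, intrinsic, np.linalg.inv(camera_pose))

mesh = volume.extract_triangle_mesh()
mesh.compute_vertex_normals()
o3d.io.write_triangle_mesh("reconstruction.ply", mesh)
```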

Live creation of a colorized mesh

Industrial Reconstruction is already in use today on multiple projects including our Scan N Plan workshop. We’re excited to see the projects that this new repository enables.

Securing ROS robotics platforms

How can we apply security principles and best practices to robotics development?


Let's start with this well-known and relatively old quote, still very relevant today. It reminds us that security is a dynamic process that accompanies any system's lifecycle. This is because new vulnerabilities are constantly being discovered in all kinds of components, and the ways software is used, along with its platform and libraries, change too. Software that was considered secure in the past may not be secure today.

In the past few years, the number of published vulnerabilities discovered in open source has been steadily increasing. Open source is neither more nor less secure than proprietary code. However, few companies understand the breadth of open source that is being used in their applications. This lack of knowledge translates into a lack of awareness about vulnerable components, and this is a source of risk. ROS is certainly no exception, and collective efforts are needed to keep this great community secure.

As you can imagine, vulnerable robots may leak an organisation's sensitive information about its environment, or even cause physical harm, if they are accessed by an unauthorised party.

So, what can you do to keep your robots and the data they handle secure? Let’s dive right into some tips.

Securing your robot’s software

What could go wrong?

You may already be aware of what “CVE” and “CWE” stand for. If not, I strongly encourage you to familiarise yourself with these great sources of information on security issues, published by the MITRE organisation. Common Vulnerabilities and Exposures (CVEs) are a catalogue of publicly disclosed security vulnerabilities in all kinds of software and systems. Common Weakness Enumeration (CWE) is a community-developed list of software and hardware weakness types; the Top 25 most common and dangerous security weaknesses are released every year. You can think of this as a ranking of the most prevalent and impactful weaknesses experienced over the past two years, organised and ranked for you. An interesting fact: many of the entries in the CWE Top 25 have been the same common kinds of vulnerabilities for decades. This means that while things do change, learning about the most common vulnerabilities will be useful for years to come.

Set your robots up for security

In practice, a good place for your team to start is to embrace a secure software development lifecycle process, which ensures that security is an intrinsic part of every stage of your software development. This approach requires that engineers are trained in secure coding best practices. This may represent an initial investment, but one that will always pay off in the long term. A great place to start is by checking out the CISA life cycle process guidelines.


Image source: OpenSense Labs

Your projects will likely have many reused software components, and they will need to be updated occasionally. Eventually, a vulnerability may be found in one, so keep an eye out and be ready to update quickly. To stay on top of things, it is a good idea to use Software Composition Analysis (SCA) tools to identify the open source components in your code and evaluate elements such as security, licence compliance, and code quality (for example, they will let you know of known vulnerabilities in your components, and warn you if they are out of date or have patches available). This will help you keep any libraries being used as up to date as possible, to reduce the likelihood of using components with known vulnerabilities (OWASP Top 10-2017 A9). You can check for any vulnerable components by using free tools, such as OWASP Dependency Check or GitHub's dependency scanning.

It is crucial to keep your OS and software up to date with security updates and CVE fixes. This is a simple and essential practice to avoid becoming an avenue of exploitation. So, apply any security updates as soon as they’re available and it’s feasible for your robots. And remember, you can take further steps to harden your robots. For instance, make sure to close all their unused ports and enable only necessary services. Give the local user the least privileges they need, to prevent privilege escalation should an intruder ever gain access. If you want peace of mind that you are using a hardened OS with built-in security features, check out Ubuntu Core.

Deep dive into your code

There are different types of code-level analysis to implement as a basic measure. Some analyses can be automated and included in a CI/CD pipeline, so you don’t have to rely on manual scans, and luckily there are a number of open source tools to help you in this task. Each one has pros and cons, so combining them will lead to the most comprehensive results. Below are some suggestions for your ROS applications.

Static Analysis tools (SAST) and Fuzzers

The first good practice is to analyse your code statically – that is, without executing even a line of code. As you'll see, there are plenty of such options, many free and open source. The gcc and clang compilers also support sanitizers, which instrument your C/C++ code at build time so that errors like memory misuse are caught when the code runs. If your team is working with code in memory-unsafe languages like C and C++, this is a crucial step. Take a special look at the Google Sanitizers suite: the Address, Leak, Memory, and Undefined Behaviour sanitizers. Other such free tools include LGTM (by GitHub), Coverity, and Reshift.

Fuzz testing, or fuzzing, is a well-known technique for uncovering programming errors in software: it consists of finding implementation bugs by injecting malformed or semi-malformed data in an automated fashion. Many of these detectable errors can have serious security implications. It's a great idea to use this practice to validate your SAST findings. Check out Google's OSS-Fuzz for your fuzzing needs, and when you're done, save this list of security tools that are free for open source projects.


Image source: Synopsys

And, just as crucial to the ROS ecosystem, please do report CVEs if you discover any! This will help strengthen the security of ROS code across the whole community. Have a look at the ROS 2 Vulnerability Disclosure Policy when you’re ready to report.

Hey, ROS 2 user

Of course we cannot discuss security in ROS-based robotics without mentioning the ROS 2 security features. The default middleware in ROS 2 is DDS, and there are security features that are baked into its very standard, such as: Cryptography, which implements encryption, decryption, hashing, digital signatures, and so on; Authentication, for checking the identity of all participants in the network; Access Control, which specifies which resources a given participant can access; and Security logging, to record all security-relevant events.

The SROS2 utilities provide a set of tools that make it easier to configure and use these security features. If you’re using or planning to use ROS 2, I encourage you to check out this tutorial, and to then try it out.

Given this clear security benefit of ROS 2 over ROS 1, an obvious step, whenever possible, is to migrate your code to ROS 2. But if you cannot, or simply are not ready to migrate just yet, consider exploring Canonical's Extended Security Maintenance (ESM) service for deployed robots. One of the benefits of ROS ESM is that it provides up to 10 years of security maintenance for ROS and the Ubuntu base OS distribution. This can be especially critical if you're still running an unsupported version of ROS.

Join efforts with the larger ROS community

Last but not least, the reason we're all here: we're a community interested in sharing, finding, and offering support to others working with ROS.

In case you're not familiar with it, the ROS 2 Security Working Group focuses on raising awareness of and improving security around ROS 2. How can you get involved? Track the wg-security tag on ROS Discourse to get upcoming meeting announcements. Join the monthly meetings, come share your use cases and any obstacles you're facing, and pool efforts with the rest of the ROS community to work through them. We hope to see you there.


This is a guest post by Florencia Cabral Berenfus, Robotics Security Engineer at Canonical, a ROS-Industrial Consortium Americas member. It is a follow-up to Florencia's presentation given to the ROS-I membership at the 2021 4th Quarter Members Meeting: https://rosindustrial.org/events/2021/12/ros-industrial-consortium-americas-community-meeting-dec-2021. You can learn more about Robotics at Canonical at https://ubuntu.com/robotics.

Demystifying Robots Interoperability

For decades, robots have been deployed in the manufacturing sector to automate processes. Machine tending, inspection, and pick and place are among the most common applications where robots have been utilized. Despite the high utilization of robots in the manufacturing environment, these applications were predominantly deployed in a fixed industrial setting. However, recent advancements in mobility have fueled the rise of mobile robots. In 2021, the IFR reported an estimated growth in service robots of 12% worldwide, with sales of personal robots rising by 41%. Asia, in particular, experienced substantial growth in the area.

Among the many applications where mobile robots are being used, Autonomous Mobile Robots (AMRs) for delivery and cleaning, along with social robots, have been identified as the most common.

The proliferation of these new types of robots unleashes new possibilities to automate tasks where robots were not traditionally seen as capable. Trends toward such utilization have been seen especially in production and warehouses, as well as in transportation and outdoor facilities. These trends are primarily motivated by improved production flexibility, task optimization, reduced reliance on a limited skilled workforce, and the ability to respond more effectively to dynamic supply and demand fluctuations. One example is the deployment of robots for flexible manufacturing, where robots can modularly navigate and carry out parts of processes that were once static operations requiring heavy CAPEX investment in fixed equipment.

With strong growth in the number of robots and their applications, there will be challenges, especially in interoperability between different robots and other systems in a facility. If operators and business owners are to use many robots, two significant technical challenges need to be addressed: first during the design phase, and later during the operation and deployment stage.

Fleet Management and Workers in a Warehouse Facility, The Robot Report (2019)

Responding to the interoperability challenges above, the team at ROS-Industrial Asia Pacific is developing technologies to support owners/operators, system integrators, and robot manufacturers through several development activities utilising the Robotics Middleware Framework (RMF):

  1. Development of high-fidelity simulation that includes scenarios involving environmental factors (such as heavy rain) and network interruption. This will support a better sense of realism relative to existing situations and will enable operators to optimize production for better planning and scheduling.
  2. Development of a next-generation Robotics Middleware Framework that will help robots interact with each other and with other systems, such as ERP, MES, or workcell and facility systems such as doors and lifts. This development will enable traffic deconfliction and task prioritization toward autonomous operations.

At ROS-Industrial, our goal is to develop applications that can help the proliferation of robotics for industrial use. We constantly seek market input to ensure that our developments are highly aligned with industrial needs. If you have used robots or plan to deploy robots in your facility and would like to help influence the development of robotics utilization in your industry, we invite you to take a short 10-minute survey to help us understand your challenges and requirements through the link below.

Reference:

Heer, C., 2022. World Robotics 2021 – Service Robots report released. [online] IFR International Federation of Robotics. Available at: <https://ifr.org/ifr-press-releases/news/service-robots-hit-double-digit-growth-worldwide> [Accessed 7 April 2022].

Using Tesseract and Trajopt for a Real-Time Application

The past two years have seen enormous development efforts transform the tesseract-robotics and trajopt_ros packages from highly experimental software into hardened, industrial tools. A recent project offered the opportunity to try out some of the latest improvements in the context of a robot that had to avoid a dynamic obstacle in its workspace. This project built upon previous work in real-time trajectory planning by developing higher level control logic for the system as a whole, and a framework for executing the trajectories that are being modified on the fly. Additionally, this project provided the chance to develop and test new functionality at a lower level in the motion planning pipeline.

One of these improvements was the integration of continuous collision checking throughout robot motions, as opposed to only checking for collisions at discrete steps along the motion. This is a critical component of finding robot motions that avoid colliding with other objects in the environment. Previously, a collision could slip through undetected if the distance between steps in the motion plan was larger than the obstacles in the environment (pictured below). With the integration of our new collision evaluators, these edge cases can be avoided.
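
A toy illustration of that failure mode (not the Tesseract API) is below: sampling only discrete waypoints can step right over a thin obstacle, while a swept check over each segment catches it.

```python
# Toy example: discrete waypoint checks vs. a swept (continuous) check.
import numpy as np

WALL_X, WALL_HALF_WIDTH = 0.5, 0.01  # a thin wall at x = 0.5

def point_in_collision(p):
    return abs(p[0] - WALL_X) < WALL_HALF_WIDTH

def discrete_check(start, end, steps=4):
    samples = (start + (end - start) * t for t in np.linspace(0.0, 1.0, steps))
    return any(point_in_collision(p) for p in samples)

def continuous_check(start, end):
    # The swept segment collides if the endpoints straddle the wall.
    return (start[0] - WALL_X) * (end[0] - WALL_X) <= 0.0

a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(discrete_check(a, b))    # False: the sampled steps miss the wall
print(continuous_check(a, b))  # True: the motion between them does not
```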

The other major piece of research done on the low-level motion planning side was benchmarking the speed at which our trajopt_ros solver can iterate, and therefore how quickly it can find a valid solution. We did not impose hard real-time deadlines on the motion planner; instead, we ran it as fast as possible and took results as soon as they were available, which was adequate for our application. Some of our planning speed test results are pictured below.

The final major development to the robot motion pipeline to enable real-time trajectory control was the creation of a framework for changing the trajectory being executed on a physical robot controller. This was a particularly exciting element of the research because it brought our work out of the simulation environment and proved that this workflow can be effectively implemented on a robot. We are excited to apply these tools to more of our projects and continue improving them in that process.

All of our improvements to the Tesseract and Trajopt-ROS packages have been pushed back out to the open source community. Check out the latest versions here and let us know how they are enabling new capabilities for you!

ROS-Industrial is buzzing

This year's ROS-Industrial Conference took place on December 1-2, 2021 as a virtual event. With more than 300 registrants, it was the largest of the nine conferences organized since 2012. In this article, we try to capture the major developments around the ROS-Industrial initiative presented during the conference.

More and more companies are starting to deploy ROS in their industrial applications.

Operator using drag&bot to program a robot

At the conference, drag&bot explained how their system for easy robot programming is being used by industry and showed a number of example applications. Their software is based on ROS and enables programming different industrial robots using a simple drag-and-drop interface. drag&bot also offers an interface for integrating external ROS programs and can be used by developers to deploy their ROS-based solutions to industry.

Another company with ROS in production is Node Robotics, which today has more than 450 mobile robots running its ROS-based navigation. Robots running Node Robotics software are deployed in BMW's production facilities and at other companies. Node Robotics software offers easy integration of mobile robots of different types into one fleet, and provides software for operating a single AMR, for data sharing between AMRs, and for fleet management; all of these software systems are based on ROS. The company's first deployment of freely navigating AMRs was in 2016 in a plant of the automobile producer Audi.

Enabling safety and towards real-time execution with ROS

PSENscan laser scanner developed by Pilz, with ROS integration

The ROS-Industrial ecosystem is producing a number of cutting-edge solutions integrating ROS with industry. Pilz provides a portfolio of safety components that can be used to build a safety system around ROS-based mobile robots. At the center of the portfolio is the PSENscan laser scanner, which can monitor configurable safety zones around the robot. The PSENscan also offers functionality for speed monitoring, and it integrates with ROS via an open source driver.

Progress has also been made with regard to real-time execution in ROS2 systems. Andrei Kholodnyi (Wind River), co-lead of the ROS2 real-time working group, presented the group's work. Members of the working group have developed a number of new real-time optimized ROS2 executors. Another available component provided by the working group is a Raspberry Pi 4-based real-time benchmarking system with an RT_PREEMPT kernel for Ubuntu 20.04. ROS2 developers who are not satisfied with the real-time performance of RT-patched Linux can also choose to switch to Wind River's VxWorks or Blackberry's QNX, both of which execute ROS2 applications.

Towards integration into industrial robot software platforms

Universal Robots teach pendant that has been enhanced with ROS support

As previously mentioned, drag&bot's software offers one way to integrate ROS into an industrial robotics platform, but more solutions are becoming available. Another approach was presented by Universal Robots (UR) and Forschungszentrum Informatik (FZI), who have been collaborating to develop an advanced ROS interface for UR's robots. The work enables direct integration of externally running ROS-based applications into URScript programs running on the robot. During the conference they showed new interfaces that enable Cartesian control and speed scaling for industrial robots from within ROS. UR also showed a prototype of a URCap that enables integration with ROS. The goal of the development is to leverage ROS' capabilities for UR's robots and make them available via URCaps.

Canonical, the publisher of Ubuntu, also presented their solutions for deploying robot software, consisting mainly of snap and Ubuntu Core. Snap is a solution for creating containerized and easily deployable software applications. Canonical claims that snaps have advantages over other solutions, such as Docker, for embedded systems because they are more easily integrated with embedded hardware. Ubuntu Core is a minimal operating system, including application packages, based on snap containers, which enables a modular and simple architecture for embedded systems. The Canonical software solutions are optimized for ROS and are already used in industry, e.g. in Bosch Rexroth's ctrlX. Snaps are becoming another way of integrating ROS into the industrial control systems and software platforms of the future.

Integration with 5G and hardware acceleration

Board with Hardware Acceleration support in ROS

Ericsson and eProsima presented their work on integrating ROS2 with 5G systems, moving ROS2 toward enabling distributed real-time systems. The two organizations collaborated on developing an interface for ROS2 and the underlying DDS middleware implementations that enables creating separate IP flows for specified ROS2 communications (previously DDS implementations created only a single IP flow) and setting 5G quality-of-service parameters for these IP flows. The interface has been integrated into eProsima's FastDDS, which is also the default middleware for the next long-term release, ROS2 Humble (May 2022), meaning that integrating ROS2 with 5G quality of service is becoming simple.

Another interesting development was presented by Xilinx. Xilinx and AMD are working on providing prime hardware acceleration support for ROS2. The development of hardware acceleration interfaces is driven by the ROS2 Hardware Acceleration Working Group, and Xilinx is developing the Kria Robotics Stack based on the open interfaces defined by that group. Major features of the development are easy integration of embedded targets into the ROS2 build system and tools, and an API for defining which parts of the ROS2 computation graph run on a CPU or, e.g., an FPGA. This development promises to make ROS2 a prime platform for the deterministic and lightning-fast computations needed for future robotics applications. Other companies, such as NVIDIA, are also targeting ROS2 for their hardware acceleration solutions, as Katherine Scott (Open Robotics) noted in her presentation.

Tackling the industrial security challenge

Manufacturing systems are becoming a target for cyber criminals. As robots are deployed in all kinds of systems that are essential to a country's economy, they can even become a prime target in potential cyberwar scenarios. The ROS-Industrial community is aware of the arising problems, and members Alias Robotics, Trend Micro, and Canonical have presented research findings, solutions, and services for ROS developers. Alias Robotics provides solutions such as the Robot Immune System (endpoint protection) as well as services for identifying potential risks in robotic products, such as threat modeling. Alias Robotics and Trend Micro have together analyzed the DDS standard, which is ROS2's prime middleware and is used in a wide range of applications in fields such as medical, automotive, and space; a number of security issues were discovered and reported. Canonical provides long-term support for end-of-life ROS distributions with security updates, plus the previously described deployment toolchain based on Ubuntu Core and snap, which simplifies security updates during production.

Advanced motion planning: Mobile manipulation, hybrid planning, collision avoidance and welding

Advanced manipulation has always been a strong suit of the ROS ecosystem, and this year's conference made abundantly clear that this is also true for ROS2. PickNik, the main driver behind MoveIt2, gave an overview of new features currently being developed, notably mobile manipulation and hybrid planning. A mobile manipulation demonstration for MoveIt2 was developed together with Hello Robot (workshop). A demonstrator for hybrid planning, which integrates a global planner and a local planner, is being developed together with Fraunhofer IPA, targeting multi-layer welding; the goal is to perform scanning, welding, and local planning at the same time to achieve higher process quality. Another talk focusing on robotic welding was contributed by IRT Jules Verne, who presented how they leveraged ROS to build a lightweight robot for mobile welding applications from scratch. They were able to design the hardware and controller within a year and create a working prototype. SwRI has developed a ROS2 demonstrator for Scan & Plan applications (workshop).

ARTC is developing software tools for collision avoidance in dynamic environments. Currently, obstacle avoidance during motion is not easily available for robot arms in ROS2. Therefore, ARTC has developed a dynamic safety joint trajectory controller that integrates with motion planning solutions such as MoveIt2 and Tesseract. The controller includes collision checking, speed scaling, and re-planning, and average execution frequencies of more than 200 Hz are possible on commercial off-the-shelf hardware.

Model-driven robotics development with ROS

Software for modern robot systems is becoming more and more complex, and development and testing are becoming more and more difficult. It is time to work on handling the rising complexity of robotics development. Fraunhofer IPA presented their work on a model-driven development toolchain for ROS-based systems. The toolchain enables extracting models from existing handwritten ROS components, defining ROS systems out of existing and new components using a graphical tool, and deploying the defined systems in different fashions, i.e. as a ROS package or a complete Docker container. The toolchain is currently in alpha state and heavily worked on.

SpaceROS and other news

  • The space industry in the US is on the rise, and with it the interest in robotics solutions for space. ROS is already deployed in a number of non-critical space applications. Open Robotics and PickNik are plotting the next big step for ROS - SpaceROS - qualifying ROS and MoveIt for mission-critical space applications.
  • TurtleBot4 is coming in 2022 with a base from iRobot

Summary

This year's conference showed that ROS is being commercially deployed in industry, and we are seeing industrial robotics platform providers (UR, drag&bot) opening their platforms for ROS. Additionally, a number of supplier companies are providing key technologies for building safety systems around ROS-based robot applications. Furthermore, ROS2 has many configuration options for achieving real-time performance, and industrial operating system developers such as Blackberry and Wind River are supporting ROS2. ROS2 is becoming a prime robotics platform for new technologies such as 5G and hardware acceleration, enabling the robot applications of the future. ROS2's security is moving into the focus of the security research community, and a number of specialized security solution providers are available. Combined with ROS2's cutting-edge motion planning capabilities, this means building industrial robotics applications with ROS2 and deploying them to industry is becoming much easier.

Process Planning for Large Systems

Advances in robotic capabilities allow us to tackle bigger problems with autonomous systems. While the extra degrees of freedom in large robots like rail systems or mobile bases empower cutting-edge work, they can cause challenges in process planning: the creation of the "useful" motion of a robotic system that is constrained by the application at hand. The ROS-Industrial Consortium has addressed this problem by developing new process planners that can quickly plan processes for robots with many degrees of freedom.

A full Dijkstra search finds the optimal path through the graph by exploring every edge

In the field of robotics, motion planning can be divided into two forms: freespace planning, which finds a collision-free path between two points, and process planning, which governs the movement of the robot through its useful operations. When a robot has more ways of moving, the graph of joint positions that can reach a given point in space begins to get large. So large, in fact, that it begins to resemble the discretized representations of real space used by freespace motion planning. This project exploited this similarity to speed up process planning using freespace planning algorithms.

Example configuration that benefits from this alternate approach to solution searching

At a high level, the new improvements in process planning allow for branching "depth-first" searches, which quickly find a solution for every position in the trajectory, instead of searching "breadth-first" for the optimal configuration at each pose. This is especially useful in situations with many valid solutions, where it is able to find a "good enough" configuration in a tiny fraction of the time used by more traditional, comprehensive searches.
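
Here is a toy sketch of the idea. Each "rung" holds the candidate joint configurations for one waypoint of the process path; a depth-first search commits to the first feasible path it completes, while a Dijkstra-style search would weigh every edge before choosing the optimum.

```python
# Toy depth-first search over a "ladder graph" of per-waypoint candidates.
def depth_first_path(rungs, edge_valid, path=()):
    """rungs: list of candidate-configuration lists, one per waypoint."""
    if len(path) == len(rungs):
        return list(path)                      # reached the final waypoint
    for candidate in rungs[len(path)]:
        if not path or edge_valid(path[-1], candidate):
            found = depth_first_path(rungs, edge_valid, path + (candidate,))
            if found:
                return found                   # first complete path wins
    return None                                # dead end; backtrack

# 1-D "configurations"; an edge is valid if the joint jump is small.
rungs = [[0.0, 2.0], [0.1, 3.0], [0.2, 2.9]]
print(depth_first_path(rungs, lambda a, b: abs(a - b) < 0.5))
# -> [0.0, 0.1, 0.2]
```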

This is currently implemented in the repository https://github.com/swri-robotics/descartes_light. We encourage interested parties to check this out and provide feedback. We expect to migrate this to the ROS-Industrial GitHub organization in the coming months. Thanks to the community for providing feedback and use cases to support this work.

Hands on with the DreamVu PAL Mini

I've been testing an evaluation kit for a new camera from DreamVu. The PAL Mini is a tiny 360-degree 3D camera with a software package for object detection and avoidance. It has ROS support and is intended for use with mobile robots.

They’re not kidding when they say it is small.  Here is the camera compared to an Xbox One controller.

The camera connects to an Nvidia Jetson, offloading the computationally heavy tasks of rectifying the image and generating a ROS navigation-compatible LaserScan message. The Jetson can run ROS and may be sufficient for completely controlling a mobile robot. DreamVu provides samples for mapping and navigation with a TurtleBot.

The camera connects to the Nvidia Jetson with a USB-C cable. There are ample remaining ports for other devices.

The Jetson can be connected to a host PC via Ethernet and stream the resulting laser scans and RGB images. A simple ROS node is provided to convert the images to standard cv::Mats and publish them. A remote ROS master is not required for this.

A laser scan is generated, along with a panoramic image. The laser scan is combined with the image, making it easy to see what an obstacle is and where it is; a small example of consuming the scan is sketched below.
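
Here is a small sketch of consuming that scan on the host PC with rospy; the /scan topic name is an assumption, so check `rostopic list` for the names the DreamVu package actually publishes.

```python
# Report the nearest obstacle seen in each 360-degree scan.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    valid = [(r, i) for i, r in enumerate(scan.ranges)
             if scan.range_min < r < scan.range_max]
    if valid:
        rng, idx = min(valid)
        bearing = scan.angle_min + idx * scan.angle_increment
        rospy.loginfo("nearest obstacle: %.2f m at %.2f rad", rng, bearing)

rospy.init_node('pal_mini_listener')
rospy.Subscriber('/scan', LaserScan, on_scan)  # assumed topic name
rospy.spin()
```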

As with any new product I ran into some issues during testing. I prefer to use the camera by streaming the results to my host PC, which most likely wasn’t their main focus while developing a camera designed to mount on mobile robots. Their support was able to resolve the issue quickly and took my suggestions on how to improve the reliability of their ROS package.

More information is available on their website. You can also check their Github (coming soon). Contact DreamVu for information on Object Detection and Avoidance, and other applications of their 360-degree view cameras and solutions.

Introducing the ConnTact Assembly Framework

This is a cross-post hosted over at the AI for Manufacturing Robotics Blog hosted by NIST as part of their initiative to set up a community hub for researchers and practitioners of AI for Manufacturing Robotics, who are interested in staying current on research trends or finding new projects and collaboration opportunities. You can learn more about this initiative and other activities about AI for Manufacturing Robotics at: https://sites.google.com/view/ai4manufacturingrobotics/

The Challenge of Assembly

When assembling precision-tolerance parts, humans have an ideal set of capabilities. Combining geometrical understanding, visual guidance, tactile confirmation, and compliant force/torque (F/T) control, it is possible for a human to quickly assemble components despite micrometer tolerances. We take this remarkable suite of abilities for granted. To prove it, just recall that the number-one cliché child's toy is all about fitting blocks through matching holes in a box. Our babies can do assembly tasks with ease.

Now, the moment you consider handing the child's block off to a modern industrial robot, you start to encounter some of a robot's greatest disadvantages. A robot is, by design, a highly rigid precision machine. The robot must be fed very precise information about the orientation and position of the goal - the box in this case - and it cannot correct mid-action to comply as nearly-aligned pieces try to slide into place. If provided erroneous measurements - even ones that are just a few millimeters off - it will quite simply crush the box, the block, and possibly itself. Rigidity is inherently a difficult obstacle to performing mechanical assembly when high-accuracy measurement and fixturing are impractical.

ConnTact guiding a UR10e robot through an assembly algorithm. The system can successfully insert a peg into the NIST taskboard despite up to 1cm error in position instructions

Compliant Control

Despite these difficulties, a respected method has been developed for robots to imitate human flexibility and responsiveness, called compliant feedback control. By rapidly controlling the motion of the robot in response to 6DOF F/T information, a robot can imitate the soft compliant behavior of a human grip. This can be achieved with any modern robot using an after-market 6-axis load cell mounted between the tool flange and the gripper.

This feedback enables detection of F/T "wrenches" acting on the gripper, so the control system can smoothly move to comply. Pressing on the front of the gripper makes the robot retract. Twisting the gripper makes the robot turn. The robot very convincingly pretends to be much less rigid. When displaced, it applies a constant gentle force toward its desired goal position and patiently waits until permitted to reach it.
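To make the behavior concrete, here is a minimal admittance-control sketch of the idea described above: a virtual spring pulls the tool toward its goal while any measured external force compliantly deflects it. The gains and unit virtual mass are illustrative assumptions, not the tuning of any particular product:

```python
import numpy as np

K_P = 200.0   # virtual spring stiffness toward the goal [N/m] (assumed)
C_D = 50.0    # virtual damping [N*s/m] (assumed)
DT = 0.002    # 500 Hz control loop

def admittance_step(x, v, x_goal, f_external):
    """One step of a unit-mass admittance loop: spring force toward the
    goal plus the measured wrench force, opposed by damping."""
    f_total = K_P * (x_goal - x) + f_external - C_D * v
    v = v + f_total * DT       # unit virtual mass: acceleration = force
    x = x + v * DT
    return x, v

# A 1 N push in +x deflects the commanded position away from its goal;
# when the push ends, the spring term draws it back.
x, v = np.zeros(3), np.zeros(3)
x, v = admittance_step(x, v, np.array([0.1, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
```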

This permits the use of tactile methods of operation - the use of touch to sense the environment and make decisions. The system can correlate data about reaction forces, robot position, and robot velocity to detect the collision mode which best describes the interaction of the robot and the environment. For instance, collision with a flat surface may be identified by sharp resistance in one direction but negligible friction in other directions. The alignment of a cylindrical peg with a receptacle may be detected by resistance in all directions except one, the vector parallel with the receptacle axis. By characterizing these interactions, a reliable understanding of the contact state between the robot and workpiece can be formed.

Researchers have been experimenting with and implementing compliant assembly systems for years. The KUKA iiwa ships with certain compliant features built in. Other companies such as Franka Emika have designed robots specifically to achieve high performance in feedback-based assembly. And on the software side, open-source libraries exist which can control hardware through high-level position or velocity interfaces. In particular, our work makes extensive use of the cartesian_controllers libraries, developed at the FZI Research Center for Information Technology.

Compliance example: the robot is configured to respond to forces and torques applied to the gripper. After the disturbance ends, it returns to its assigned task.

The ConnTact Framework

NIST hopes to expand the realm of assembly research to smaller developers with less abundant resources and permit a much more agile workflow. To that end, NIST is collaborating with SwRI to develop an open-source framework for compliant assembly tasks, called ConnTact. This framework is meant to provide tools which create a bridge directly from hardware configuration to the real algorithmic development required to accomplish difficult tasks. It also standardizes interfaces to permit the community to share solutions and re-implement them with new hardware for new applications. The framework takes in information about the robot and task location and provides an environment where the user can easily develop tactile algorithms. The overall goal is to significantly ease the load on an end-user who wants to develop a new assembly application anywhere on the spectrum from straightforward repeatable tasks to complex problems that leverage AI and reinforcement learning methods.

The key to this framework is the simplification of interfaces. This permits any robot with force sensing and compliance control features to be used with any algorithm. To configure for a given robot, a developer must feed live force, torque, and position data into the Assembly Tools package. In addition, for each task, the task frame, or approximate location and rotation of the task relative to the robot, must be imported. With this basic input, the package provides a development framework with 3 major features: Generalization, Visualization, and Modularity.

The user provides a set of "user configuration" information and connects their preferred robot to the input/output interface. ConnTact then processes this configuration and runs the user's algorithm as configured, providing rich visual/logging feedback.

Main Features

Generalization: The framework seeks to generalize algorithms to any location or task. This is accomplished by transforming all inputs - robot pose, force, and torque inputs - into task space, that is, it converts all spatial measurements to be relative to the task frame. For example, in the case of an Ethernet connector insertion task, given the location of the Ethernet socket relative to the robot, the development environment would provide position data to the algorithm relative to the socket. A command to move the plug to position (x,y,z) = (0,0,0.05) would place the Ethernet plug 5cm from the socket. A force of (0,0,10) always indicates that the gripper is experiencing a force away from the socket and parallel to its axis, even if the socket were upside-down on the underside of a workpiece. This allows the user to focus their efforts on algorithm development with the confidence that their work is applicable to any task location or orientation.
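The mechanics of that generalization are just a rigid-body transform. The sketch below (illustrative only, not ConnTact's actual API) maps a point measured in the robot base frame into task-frame coordinates, reproducing the Ethernet example:

```python
import numpy as np

def to_task_frame(T_base_task, p_base):
    """Express a base-frame point relative to the task frame, where
    T_base_task is the 4x4 homogeneous transform locating the task
    (e.g. the socket) in the robot base frame."""
    T_task_base = np.linalg.inv(T_base_task)
    return (T_task_base @ np.append(p_base, 1.0))[:3]

# Socket 0.5 m in front of the robot base; plug currently 5 cm above it.
T_base_task = np.eye(4)
T_base_task[:3, 3] = [0.5, 0.0, 0.0]
print(to_task_frame(T_base_task, np.array([0.5, 0.0, 0.05])))  # [0. 0. 0.05]
```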

Visualization: The framework provides visualization options using Python plotting libraries familiar to any modern programmer. Input position and forces are mathematically processed to provide reliable speed, force, torque, and position data streams which are easily displayed at runtime. This built-in visualization is meant to equip every program with a greater degree of transparency.

Modularity: Finally, we facilitate modular algorithm development with a state machine-based control loop. The example implementation specifies a finite number of states and provides the conditions needed to transition between them. The user can reuse existing algorithmic motion routines from the open-source repository to rapidly produce useful programs. They can also develop their own algorithm modules and easily add them to existing program structures; a simplified sketch of such a state-machine loop follows the module list below.

Some sample algorithm modules currently included:

  • Moving in a linear direction until a flat surface is detected.
  • Searching in a spiral pattern across a surface for a low spot, such as a connector falling a short way into its socket.
  • Moving compliantly in a specified direction until colliding with a rigid obstacle.
  • Probing in different directions to determine rigid constraint locations.
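The sketch below illustrates the state-machine idea with made-up states and sensor flags; it is a conceptual outline, not ConnTact's actual module interface:

```python
from enum import Enum, auto

class State(Enum):
    APPROACH = auto()  # move toward the surface until contact
    SPIRAL = auto()    # spiral-search the surface for the hole
    INSERT = auto()    # comply downward into the socket
    DONE = auto()

def read_tactile_flags():
    # Stand-in for real contact-state estimation from F/T and pose data.
    return {"surface_contact": True, "height_drop": True, "fully_seated": True}

def step(state, sensors):
    """One loop iteration: each state is a reusable module, and the
    return value encodes its transition condition."""
    if state is State.APPROACH:
        return State.SPIRAL if sensors["surface_contact"] else State.APPROACH
    if state is State.SPIRAL:
        return State.INSERT if sensors["height_drop"] else State.SPIRAL
    if state is State.INSERT:
        return State.DONE if sensors["fully_seated"] else State.INSERT
    return State.DONE

state = State.APPROACH
while state is not State.DONE:
    state = step(state, read_tactile_flags())
```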
Left: Top-down/side-on motion and force readings are displayed live. Right: The robot streams data to RViz (the standard ROS 3D visualization tool) to show live pose and live F/T vectors.

Code Release

The basic WIP framework is being made available publicly now at https://github.com/swri-robotics/ConnTact, and work will proceed over the coming months to add features and documentation. NIST and SwRI welcome any feedback to improve the framework. We aim to make a simple, straightforward, and powerful system to bring agile compliant robotic assembly to a wider developer community and bring tactile robot sensing to the forefront.

NIST Launching New Website Focusing on AI for Manufacturing Robotics

In a recent meeting, NIST's Craig Schlenoff, representing a ROS-I Consortium Americas member and long-time supporter of open source for industry, noted some recent developments at NIST and provided the note below for the broader ROS-I Community. Thanks to NIST for continuing to keep robotics, AI, and standards that seek to bring order to the space front and center!

Dear Robotics Enthusiasts,

NIST is pleased to announce the launch of a new community hub for research on AI for Manufacturing Robotics. Link: https://sites.google.com/view/ai4manufacturingrobotics/

On this site you’ll find several resources you may find of interest, including:

  • Curated lists of relevant datasets, papers, and software repositories
  • Learning resources on AI, Robotics, and the Manufacturing Industry
  • Community content such as our on-going Research Spotlight article series
  • Announcements/archives of workshops, conferences and more.

If you find the site helpful, please bookmark it and join our Slack at https://tinyurl.com/ai-mnfg-robotics, since new material is being added regularly.


Thanks to Craig Schlenoff and Pavel Piliptchak of NIST for providing this update.

It's a wrap for ROS-Industrial Asia Pacific Workshop 2021!

The Annual ROS-Industrial Asia Pacific Workshop took place on 18th August 2021 as a one-day digital webinar. The workshop was opened by our Guest-of-Honor, Professor Alfred Huan, Assistant Chief Executive of SERC at A*STAR, who gave an overview of the robotics and automation ecosystem in Singapore, as well as how the adoption of ROS has proliferated throughout Asia Pacific.

After the opening, our keynote speaker, Mr Tan Jin Yang, Senior Manager at Changi Airport Group, spoke on “Robotics Middleware Framework (RMF) for Facilities Management”. With the increased number of robots being used to augment the workforce, there is a growing need to effectively manage and ensure interoperability across disparate fleets of robots. He then shared how RMF is being trialed at Changi Airport Terminal 3 to address task scheduling, automation, and infrastructure sharing across the various robot brands the airport operates.

Next, we had Mr Darryl Lee, Consortium Manager of ROS-Industrial Asia Pacific at the Advanced Remanufacturing and Technology Centre (ARTC), present on “Accelerating Industry Adoption of ROS2 based Technology in Asia Pacific”. During his presentation, he addressed the latest developments within the ROS-Industrial Consortium Asia Pacific team, such as the formally released easy_perception_deployment (EPD) and easy_manipulation_deployment (EMD) ROS2 packages, ROS Quality Badges, as well as ongoing projects such as the trials at Changi Airport Group presented by Jin Yang earlier.

Mr Matt Robinson, Consortium Manager for our ROS-Industrial counterpart in the Americas at the Southwest Research Institute (SwRI), presented on “Lowering the Barrier for Industry Personnel to Leverage Open Source”, where he outlined the development strategy and shared some of the useful tools and capabilities developed at SwRI, such as the offline toolpath planner, visual programming, ROS Workbench, and many other open-source advancements.

Dr Dave Coleman, CEO & Roboticist of PickNik Robotics, presented “MoveIt 2: Land, Sea, Space”, where he gave examples of how robots have been used effectively on land, at sea, and in space. He also covered the diverse applications of MoveIt, the successful migration of MoveIt to ROS2, and the hardware integration challenges encountered during the migration, alongside the upcoming roadmap, with the possibility of MoveIt 3 in the making.

Dr Jan Becker, President, CEO, and Co-Founder of Apex.AI presented “Apex.OS - A safety-certified software framework based on ROS 2”. He shared about the Apex.OS software framework based on ROS 2, which has been certified according to ISO 26262 - the automotive functional safety norm - to the highest safety level, ASIL D. He also described the advantages of working with open APIs as well as outlining the efficient software development process, which led to the functional safety certification. Truly a remarkable milestone for the open-source industry!

Shortly after, Nicholas Yeo, Senior Director of Advanced Technology, Asia Pacific, at Johnson & Johnson, shared that they are calling on like-minded partners to collaborate and support their effort in realising the value of advancements in robotics to drive agility and resiliency in their supply chain environment. He then shared their early successes, strategies and approaches towards developing a successful framework using open-source technologies.

After the intermission break, we invited Marco A. Gutierrez, a Software Engineer at Open Robotics, to share a “Roadmap Update: Ignition and ROS2”, in which he gave a brief overview of the latest developments of two projects enabling robust robot behaviour across a wide variety of robotic platforms. Ignition, the new and improved simulation tool, has its origins in Gazebo classic. It is currently on its Edifice release, in which Open Robotics addressed improvements to both Mac and Windows support as well as the design for enhanced distributed simulation. The next LTS release, Ignition Fortress, set for around September 2021, will focus on improvements to the GUI tools, sensors, SDFormat, rendering, and overall performance. For ROS2, he also mentioned that support for ROS1's last LTS release, Noetic Ninjemys, will end in May 2025, and encouraged the audience to migrate to ROS2. ROS2 Galactic Geochelone will address middleware, tooling, quality, performance, and documentation.

We also had Zeng Yadan, a Research Associate in the School of Mechanical and Aerospace Engineering at Nanyang Technological University (NTU), present on the “Automation of Food Handling Based on 3D Vision Systems Under ROS”. She shared a use case of robotics and automation for meal assembly, whereby their system utilised both a delta and a SCARA robot with RGBD cameras and conveyor belts. The automated process can assemble a full range of food items for meals served in hospitals, inflight, fast-food chains, etc. The technologies applied can minimize the risk of infection and viral contamination during food assembly. A very interesting sharing on the use case of robotics for food assembly by NTU!

Felix von Drigalski, a Senior Researcher at Omron Sinic X, presented on “Machine Assembly with MoveIt”, where he traced the evolution of their robot system from 2018 to 2020. He shared techniques and advice such as in-hand pose estimation, whereby extrinsic manipulation can be used to align objects. Several other techniques such as L-plate reorientation, dynamic centering and software structure were also mentioned. He also discussed which features are available (or coming up) in MoveIt and ROS, and how to avoid common pitfalls.

Timothy Tan, Robotics Lead & Senior Systems Engineer at GovTech, also presented “Robotics Adoption & RMF”, where he walked through the current Robotics Middleware Framework architecture and how it can be further enhanced in the future for optimal deployment.

Last but not least, we had Harshavardhan Deshpande, a Research Engineer at ROS-Industrial Europe, Fraunhofer IPA, presenting on “New Horizons for European Open Source Robotics”, where he summarized the outcomes and Focused Technical Projects (FTPs) of the ROSIN project, and introduced their next round of funding focusing on Cognitive Robotics & AI innovation with ROS-I as a lighthouse.

A summarized table of all the speakers, including presentation slides and recordings, is now available here!

To conclude this year’s ROS-Industrial Workshop Asia Pacific, we thank every speaker for their presentation during the webinar. The ROS-Industrial Consortium Asia Pacific at ARTC will continue to work closely with our industry partners, providing training opportunities for aspiring roboticists as well as companies that are embarking on leveraging ROS to scale their robotics adoption.

On behalf of the ROS-Industrial Team at ARTC, we hope that you enjoyed the webinar as much as we did, and we look forward to meeting each other in 2022 for future ROS-Industrial activities!

A Step Forward for Industrial Use Cases with the Intel RealSense D455

Like many others I was saddened by the news that Intel is winding down their RealSense business. There was a time where I struggled to find meaningful uses for their cameras. They were small and affordable, but they had too much noise and not enough accuracy for the specific applications I was involved with. That changed with the D455.

The first application we tried the D455 on was one where the D435 had failed to meet the requirements under benchmark testing. The amount of noise and the waves in the depth image were simply too great and the end-user expressed concern when reviewing the output. After reading the specs of the D455, purchasing a unit for the lab, and sharing the results with the team, the D455 was selected for the application.

Within a few minutes of plugging in a D455 for the first time, it was apparent how much more stable the 3D image was compared to the 415/435. The rippling “quantum foam” I was so used to seeing was greatly reduced. The wider colorized 3D image made it much easier to see what was going on. The higher accuracy added more detail to objects that were just blobs before. These improvements combine to make it a practical option for real robotic projects.

Image output from the 435

Image output from the 455

We have the opportunity to use a significant number of these D455 units in this upcoming application, and it is clear this camera will continue to be a contender for numerous other projects as well. The team here is excited to see this product continue to be supported, with long-term availability, and eagerly awaits clarification from Intel about the details around support and future updates to this product and the other stereo-based products they indicated they will continue to support.

As part of the ROS-Industrial open-source project, we continue to provide information and resources around 3D cameras that our team here in the Americas and our partners around the world have tested. You can see updates to this list, which also includes legacy hardware for comparison, over at https://rosindustrial.org/3d-camera-survey

Behavior Cloning for Deep Reinforcement Learning Trajectory Generation

Motion planning for robotic arms is much more computationally intensive than one would initially realize. When you move your arm around, you do not need to actively think about how to move your individual joints to reach an end position while avoiding obstacles, because our brains are very efficient at motion planning. Robotic arms handle motion planning differently than humans. Before the robot moves, the motion planner has already calculated all states between the start and end position. The computational difficulty in this approach comes from the infinite number of possible joint positions for the arm to be in between the start and end goal; searching over this space would be extremely inefficient. Consequently, motion planners simplify the problem by discretizing the possible arm positions to facilitate efficient planning. By doing this, we limit the positions the arm can take to those within the discretization. This approach works well for free-space motion when the arm does not need to plan around a cluttered scene, but often struggles to compute trajectories for tightly constrained spaces, such as a tight passageway. If the discretization is too coarse, a solution may not be possible for standard motion planners. If we make the discretization finer, computation takes exponentially longer.
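A back-of-the-envelope calculation makes the exponential blow-up plain: with b bins per joint, a 6-DOF arm has b^6 grid configurations, so doubling the resolution multiplies the search space by 64:

```python
# Grid size for a 6-DOF arm at increasing discretization resolutions.
for bins in (4, 8, 16, 32):
    print(bins, "bins per joint ->", bins ** 6, "configurations")
# 4 -> 4,096; 8 -> 262,144; 16 -> ~16.8M; 32 -> ~1.07B
```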

To get around these shortcomings, it is helpful to use a different approach such as reinforcement learning. The reinforcement learning agent plans trajectories by generating intermediate states through prediction of continuous joint increments. The joint trajectory is generated by repeatedly observing the environment, planning small, continuous joint updates, then executing the update. The planning of joint updates is done using a deep neural network that learns through trial and error how to navigate the environment. The plans taken by the arm are judged according to a cost function, and a reward is given in accordance with the optimality of the trajectory taken by the robot. The neural network adjusts its predictions to obtain the best possible reward.
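Conceptually, the rollout loop looks like the sketch below. The environment interface (reset, joint limits, goal test) and the policy call are illustrative placeholders, not the project's actual code:

```python
import numpy as np

def generate_trajectory(policy, env, max_steps=500):
    """Roll out small continuous joint increments until the goal or a
    step limit is reached; the path is never computed all up front."""
    joints = env.reset()                    # observe current joint state
    trajectory = [joints.copy()]
    for _ in range(max_steps):
        delta = policy(joints)              # network predicts a small update
        joints = np.clip(joints + delta, env.lower, env.upper)
        trajectory.append(joints.copy())
        if env.at_goal(joints):             # reward assigned per cost function
            break
    return trajectory
```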


With a reinforcement learning approach to trajectory generation for 6+ degree-of-freedom arms, training times can often be very long. Therefore, we can apply our existing search-based motion planners to improve training times for the reinforcement learning algorithm. When the robotic arm starts the learning process, it wiggles around randomly. It is awarded a reward according to the quality of the trajectories generated from the random wiggling. To find a valid trajectory, the arm has to wiggle in just the right way to reach the waypoint, obtaining a large reward. Due to this random exploration process, the system takes a very long time to train, on the order of several days depending on hardware. However, we have access to a plethora of valid trajectories provided by planners like TrajOpt and OMPL. The reinforcement learning agent can instead imitate the actions taken by the motion planners, with the addition of some exploratory noise, and reach a valid path much sooner than by chance.

The examples of valid trajectories provided by the motion planners are used to train the actor network of the reinforcement learning system in a supervised fashion. With the actor network learning from trajectories generated by graph search techniques, the weaknesses of the motion planners come into play: TrajOpt generally takes a very long time to generate valid trajectories, and OMPL algorithms such as RRT are non-deterministic. Due to these weaknesses, we cannot rely on the examples provided by the motion planners to train the actor network entirely in a supervised fashion. Instead, we train the actor network on examples provided by the motion planners, then switch to training the network through standard reinforcement learning exploration methods. Training benefits from the expert motion plans by imitating the planners' actions and then learning how to improve the imitated trajectories to obtain more reward.
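As a rough sketch of the pretraining phase, the actor network can be fit to expert state-action pairs with a simple supervised loss before exploration takes over. Network sizes, names, and the training interface here are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Illustrative actor: maps 6 joint angles to 6 joint increments.
actor = nn.Sequential(nn.Linear(6, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 6))
opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def pretrain(expert_states, expert_actions, epochs=50):
    """Behavior cloning: regress the actor onto TrajOpt/OMPL demonstrations."""
    for _ in range(epochs):
        pred = actor(expert_states)         # imitate the expert increments
        loss = loss_fn(pred, expert_actions)
        opt.zero_grad()
        loss.backward()
        opt.step()

# After pretraining, exploratory noise is added and standard RL updates
# continue from this warm start rather than from random wiggling.
```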

This early work shows promise relative to leveraging reinforcement learning for motion planning. A number of improvements are slated both in the near term and long term, in particular to enable use in meaningful industrial confined-space applications. One such improvement is a general restructuring of the core components of the motion planner: the reward function and visualization logic need to be decoupled from the environment simulator, and some of the hyperparameters should be renamed so that they are more conceptually clear and concise.

Furthermore, training time, which takes approximately 3-5 days of compute, can be dramatically decreased by dividing training into multiple tasks which can be computed in parallel by a specialized deep learning computer with multiple GPUs. By leveraging four GPUs, we estimate that the training process could be cut down to 24 hours.

Farther out, we hope to leverage a different neural network architecture to handle the arbitrary meshes and point clouds that dynamic systems will inevitably encounter; these force the system to learn explicitly, and the current neural network cannot handle these elements.

There remains a large amount of opportunity in this space, and the idea of a hybrid optimization-based and learning-based motion planning framework offers a balanced solution that promises to enable precision motion planning applications without driving excessive planning times or producing invalid solutions.

Stay tuned as we look forward to sharing more on this and other motion planning topics, as we seek to further the state of the art and provide compelling capabilities in the open source motion planning framework that are enabling advanced applications every day.

ROS2-Industrial training material has been open-sourced

ROS-Industrial recently open-sourced its ROS2 training material, created with ROSIN (https://www.rosin-project.eu/) funding. Here is the link to the repository: https://github.com/ros-industrial/ros2_i_training. This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/legalcode).

The contents include slides and workshops for the following topics:

  • ROS2 basics:
    • Composed node, publish / subscribe, services, actions, parameters, launch system
    • Managed nodes, Quality of Service (QoS)
    • File system
  • SLAM and Navigation
  • Manipulation basics

More information about this update can be found in the ROS Discourse post: https://discourse.ros.org/t/open-sourcing-our-ros2-industrial-training-material/21179

Get in touch with us if you would like to improve the existing content or contribute new content.

Thanks to ROSIN for making this happen!

Breaking Down the ROS-Industrial Consortium Americas 2021 Annual Meeting

The ROS-Industrial Consortium Americas (RIC Americas) gathered virtually on April 13-15 for the 2021 Annual Meeting. The event was a great opportunity for the diverse ROS community to discuss challenges and progress while laying out new initiatives and fostering relationships. The event demonstrated several ways RIC Americas members and the open-source community are furthering ROS for industry through global consortia initiatives, tech and software company projects, and collaboration among researchers and industry organizations. All the presentations and videos for the public day and the demonstrations may be found over at the event page, or over at the ROS-Industrial YouTube channel playlists page!

Day 1

The first day was open to the public and kicked off with a summary of 2020 RIC Americas activities relative to ROS-Industrial. The topic of training centered on the migration from ROS to ROS2, the move from preconfigured virtual machines to a cloud-based training environment, and the delivery of the training virtually. Overall feedback was positive; however, there are quite a few areas for improvement:

  • Some exercises have still required reaching back to ROS1 to complete.
  • There could be more explanation about how things are done and why.
  • Labs need to be optimized for ROS2 and to be more meaningful in the virtual format.

Tech updates that have been contributed include collision checking improvements and parallel process planners using Taskflow, as well as discrete capability improvements such as the addition of kinematic calibration to robot_cal_tools and heat-raster-based tool path planning on meshes within the toolpath_offline_planner.

Darryl Lee of the ROS-Industrial Consortium Asia-Pacific (RIC Asia-Pacific) shared Focused Technical Project (FTP) developments around Interoperable Robotics with RMF. The Robotics Middleware Framework (RMF) has been a successful program funded by the Singapore government that has sought to implement a unified IoT infrastructure, based on ROS, for the healthcare industry. This new FTP initiative seeks to extend this work to commercialization for the manufacturing industry.

RIC Asia-Pacific also launched ROS2 training, including training on their own Easy_Perception_Deployment (EPD) and Easy_Manipulation_Deployment (EMD) packages. Of interest via feedback from their training events have been use cases around mobile manipulators, depalletizing, and easy pick-and-place configuration. The talk by RIC Asia-Pacific concluded with Glenn Tan going into the development of their EPD and EMD implementations, where to find them, and how they came to be.

Christoph Hellmann Santos of the ROS-Industrial Consortium European Union (RIC EU) shared developments in collaboration with Fraunhofer IPA, including recent advancements as the ROSIN project concludes, as well as the current work and near-term roadmap for the Cognitive Robotics and AI Innovation Center, which seeks to advance ROS2 capability for industry with an initial focus on hybrid model-driven engineering and a diagnostics and monitoring framework for ROS and ROS-based systems.


RIC EU shared their launch of ROS2 training and application development examples, focused on easy programming for welding robots, including seam detection, collision-free motion planning and execution, optimized path planning, and workpiece pose detection, with easy setup and integration into the URCaps ecosystem.

Following the ROS-Industrial updates, a collaboration between MathWorks and Delft University of Technology was presented on Automated Code Generation of Simulink Models as ros_control Controllers, which showcased the process for moving Simulink-developed controllers into the ros_control framework.

From here, partner organizations provided introductions and updates on what has been taking place within their organizations. NIST has been a champion for ease of interoperability and for moving to unifying standards to foster more efficient innovation for all users of robotics. NIST's Craig Schlenoff focused on the move towards agility and the activities around facilitating agility in robotics. Georgia Tech Research Institute's Stephen Balakirsky shared their organization's leverage of ROS to facilitate advanced capability and how they have enabled robot teams to work together on tasks. ARM Institute CTO Arnie Kravitz shared both the impact of ROS on development within the ARM Institute technology project portfolio and how that portfolio becomes a proving ground for capability within the ROS ecosystem.

Spirit AeroSystems wrapped up the morning by reviewing the outcomes from the ARM Institute-supported Collaborative Robotic Sanding project. This project featured a ROS2 software backbone, but also included ROS-based training, woven in with a gap assessment for Spirit and partner Wichita State's National Institute for Aviation Research. While the technical outcomes of the project were of note, and it was interesting to build a fully functional ROS2 system, for ROS to gain real traction in industry a more thorough educational infrastructure is required to support all facets of the industrial teams that create, deploy, and sustain these future systems on the factory floor.

The afternoon provided a number of advances from the tech company side of the ROS-Industrial Consortium, starting with the case for migrating to ROS2 by Open Robotics. There are now several utilities to assist with migration, and a handy cheat sheet if someone is interested in considering ROS2 migration, with more utilities on the horizon.

PickNik’s Andy Zelenak shared the recent collaboration with Universal Robots and FZI Research to realize a ROS2 driver, which is now available for all UR models. Intel’s Atul Hatalkar shared a vision for ROS++, which would support industrial autonomy with non-programmer utilities, active bridges to facilitate autonomous interoperability, and security and data reliability.

Microsoft shared a good portion of their recent work around the Mixed Reality Toolkit and how they are working to enable richer AR/VR applications in ROS2. Canonical’s Sid Faber updated the group on the latest efforts around security, particularly security for legacy ROS implementations via their ESM service and shared their work on the CIS ROS Melodic Benchmark. AWS’ Jack Reilly introduced AWS’ goals around their EDU program seeking to “democratize robotics, support the ROS Community, and create features and resources to support learners and educators.” This is a toolset focused on educational content, including accessible cloud-based content, development environment utilities, and even physical implementations to support educational objectives.

Wrapping up the day, Michael Jaentsch of Siemens and Robert Burridge of TRACLabs shared work that seeks to leverage interoperability to facilitate improved collaboration between disparate devices. Siemens, with funding from the ARM Institute, built on the prior work funded by NIST extending MTConnect to ROS to support many-to-many functionality and brought in RTI’s DDS Connext to create an OPC-UA to DDS bridge. TRACLabs demonstrated using their two tools, PRIDE and CRAFTSMAN to facilitate dynamic tasking between disparate types of hardware, in a NASA-based application.

Day 2

Day two brought together the ROS-Industrial Consortium membership, focused on the Americas but open to global members. Here the focus was on members sharing their experiences, learning from each other, and providing feedback on how the Consortium, as an organization, should leverage its shared resources to advance open source for industry. The panel focused on getting ramped up in ROS development and the decisions to make before going in that direction. There were insights around talent development and how to engage in open source, with a great challenge by PlusOne's Shaun Edwards to think beyond simply pushing fixes to leveraged repos.

Summary of the Training Flows Needed to Support ROS for Industry

From there, inputs to influence the roadmap were next. To date, member feedback has evolved the focus of the Consortium into four main themes:

  • Ease of Use
  • Advanced Capability
  • Interoperability
  • Human Interface and Reaction

Feedback from the 2020 workshop indicated that additional resources for members and the broader community were of great interest. This includes write-ups, simple explainers, and more complete and up-to-date wikis to help bridge the gap between traditional industry decision makers and application builders and the open source developer community. Feedback from the 2021 workshop centered on educational resources, field service “How Tos,” status dashboard templates, functional safety, ROS2 porting guidance, and security resources. There was significant interest in capability around machining/CAM-style path planning and execution to support processes such as additive manufacturing, as well as additional PLC tools for controlling external devices.

The member speaking portion of Day 2 was highlighted by David Poweleit of Steel Founders’ Society of America (SFSA) sharing the needs of their membership, and how they align with the objectives of ROS-Industrial, relative to driving high-mix intelligent processing, in a way that is easy for end-user/operator interaction.

Boeing, represented by Martin Szarski and William Ko, followed with their success story around open-sourcing their navigation implementation as they worked toward robust factory mobility. This was a compelling talk, from the technical development and implementation of their navigation solution to the journey of open-sourcing this resource for the broader community.

The day wrapped with a workshop on project brainstorming. Here the membership offered up collaborative topics that could become projects or working groups to address challenges relative to industrial leverage of open source. A couple of key topics:

  1. Hardware Reference Implementation Working Group – The goal of this working group would be to leverage existing standards, but factor in a way that is understandable for the industrial community. Initial starting point would be manipulators, ideally speaking ROS2 out of the box, and consistent across OEMs with a focus on target or desired behavior.
  2. Scan-N-Plan Implementation Generalized for High-Mix Processing – Demonstrated on an SFSA use case, this would be a high-mix surface finishing where the output would be a generalized implementation of Scan-N-Plan that could be adopted by solution providers for various high-mix end user customers.
  3. ROS Workbench – Provide model-driven and GUI-driven utilities to lower the barrier to entry for manufacturing engineers, or those with more traditional industrial automation experience to set up and do preliminary configuration of a ROS-based application.
  4. Calibration – Revisit the calibration toolset and create one toolset with resources to enable all the various forms of calibration, intrinsic, extrinsic, and kinematic, to enable high performance ROS-based applications.
  5. Planning for Additive or Machining – The interest in doing more with manipulators in one system is appealing, but currently there are no easily incorporated tool planning utilities to support additive or subtractive processes as seen in Additive or CNC type machining applications.

The next day, Demo Day!, was a great share of what a number of the different members and community participants are doing with regards to making open source happen in the real world, and the hardware to make interesting applications happen. We encourage you to check out the event page for the full Demo Day! list, which includes the link to the video. Special thanks to all the participants in Demo Day! for sharing their contributions and recent developments.

Now the ball is in the court of the community and the membership that expressed interest in working together to address gaps and move these various topics forward. We are excited about what the next year has to offer and look forward to sharing more outcomes, collaboration opportunities, and resources for the community and industry to continue making open source software a reality on production floors around the world.

NIST Grant Awarded to SwRI for Agile Robotic Assembly

A grant from the National Institute of Standards and Technology (NIST) has been awarded to Southwest Research Institute (SwRI) with the goal of accelerating development of agile, robotic assembly processes for manufacturing. This complements internal research at SwRI on developing robotic machine assembly capabilities (Figure 1).

Figure 1: A peg-in-hole assembly task performed with a collaborative robot at SwRI. The parts used were from the Siemens Robot Learning Challenge, which involves assembly of a set of gears and shafts.

The grant is inspired by the goals of NIST to promote agility within industrial robotics. Their recent efforts to promote agility, such as the Agile Robotics for Industrial Automation Competition (ARIAC), have extensively addressed challenges associated with robotic kitting. Previous ARIAC competitions have had teams competing in a simulation environment with judging metrics focused on efficiency, performance, and cost of sensors used. However, the new challenge for ARIAC 2021 will also involve assembly operations. In alignment with the competition, the goal of this new grant is to develop a software framework that allows plug-and-play development of assembly algorithms, which will help users reach the hardware testing and implementation phase faster. The framework will consist of a software visualization to verify assembly processes and metrics to evaluate capabilities of the overall assembly solution. The Robot Operating System (ROS) will be heavily utilized, with primary development in ROS 2 to future-proof the software framework.

For the eventual end users, we strive to enable manufacturers to dynamically reprogram robots with agility that meets the user’s needs. This may include handling a variety of part types, adjusting to changing part size and scale, and adapting to task failures in real-time. To standardize the evaluation of assembly algorithm performance, the NIST Assembly Task Board #1 (Figure 2) will be used to generate metrics that allow manufacturers to compare performance of different strategies. Therefore, end users can focus on improving their assembly strategy rather than building infrastructure to define robot capabilities, sensors, and evaluation methods.

Figure 2: NIST Task Board #1, which will be used for testing and development of a robotic assembly framework.

Collaboration with industry and research partners will be important to understand the needs of end users and the desired features that would enable agile, robotic assembly implementations. Consequently, we want the framework to be evaluated on specific tasks of interest to these partners. Soliciting industry interest and engaging in formal collaboration, such as a ROS-I Focused Technical Project, is the eventual desired outcome.

What happened at ROS-Industrial Conference 2020?

This year was a tough year for event organizers. Around the world, events needed to move online in reaction to the COVID-19 pandemic. The pandemic also led to the ROS-Industrial Conference being a virtual venue, but that did not affect the success of the event. With more than 250 attendees, the conference grew by 66% in attendance compared with previous years.

This year’s conference featured four main activities. There were topic-specific technical one-hour presentation sessions, in which 2-4 speakers presented their newest developments and experiences with ROS. A small virtual exhibitor area enabled attendees to get in contact with organisations active in the ROS-Industrial community. In networking sessions, attendees had the opportunity to meet and get to know each other. The ROS-Industrial Video Competition accompanied the conference, and the winner’s ceremony took place on the second day.

Conquering the industry

The conference kicked off with a session about ROS-Industrial and its current state. In this session, Christoph Hellmann Santos (Program Manager of the European branch of the ROS-Industrial Consortium) gave a motivating presentation about ROS and the mission of the ROS-Industrial Consortium. He explained that low-volume, high-variance production with robotics is a “final frontier of robotics” and that pioneering in robotics is hard and lonely. According to Christoph Hellmann Santos, ROS and ROS-Industrial have a unique community, which helps on the one hand with reusing robot software and on the other with being less lonely and getting support. With more than 80,000 developers who have published more than 200,000 ROS packages on GitHub, ROS is also the biggest open source robotics ecosystem that has ever existed. For a long time industry said ROS is not ready for industry – today thousands of robots controlled by ROS/ROS2 are running 24/7 in our factories (BMW, Audi, MIR, and others). Another opportunity, states Christoph Hellmann Santos, is that ROS/ROS2 is a prime platform for AI-based algorithm deployment in robotics.

As the second presenter in this session, Carlos Hernandez Corbato (project manager of the H2020 ROSIN project and assistant professor at TU Delft) presented the results of the H2020 ROSIN project. The project was established in 2017 as a support project for the ROS-Industrial initiative. It had three major missions: making ROS better, more business friendly, and more accessible. The ROSIN project itself was a great success: more than 70 technical and educational projects around the ROS-Industrial initiative were financed. In total, ROSIN generated €9M of investment into new ROS packages. This was also visible throughout the conference, as a number of the ROSIN FTP projects presented their results.

The ecosystem is vibrant

During the conference, many new and recently developed packages were presented. This started off with a session on visualization tools, in which Rafael Luque (FADA) presented the integration of laser projectors into ROS. Next, Darko Lukic (Cyberbotics) gave details about ROS2 and Webots integration. Levente Tamas (Technical University of Cluj-Napoca) and Francisco Marin (Rey Juan Carlos University) went on to explain how to enable augmented reality in ROS using different tools.

In the industrial tools session, Johannes Meyer (Intermodalics) explained how the realtime robotics framework OROCOS can be integrated into ROS. Rafael Arais (INESC TEC) explained the robin package, which provides a ROS-CODESYS bridge. Luca Muratore (IIT) showed the ROS End-Effector framework, which abstracts end effector control, and finally Alejandro Mosteo (Centro Universitario de la Defensa) presented RCLAda, an Ada implementation for ROS2.

In the session about planners, Kristofer Bengtsson (Chalmers University of Technology) presented a sequence planner for intelligent industrial automation using ROS. Alessandro Umbrico presented ROXANNE, a ROS package aiming to facilitate the integration of Artificial Intelligence planning and execution techniques with robotic platforms; it specifically supports the development of ROS-based deliberative architectures integrating timeline-based planning and execution capabilities. Finally, César López (Nobleo Technology) showed new implementations for coverage path planning and control. In the control and path planning session, Jordan Palacios and Victor López (PAL Robotics) explained the new ros2_control developments and showed a practical example. The next speaker was Henning Kayser (PickNik Robotics), who presented the newest developments around MoveIt for ROS2. Finally, Gilles Chabert (IRT Jules Verne) talked about trajectory validation using interval computation. This session showed that ROS2 is getting ready for manipulation.

Model-driven robotics and development solutions available

ROS is also becoming a prime platform for model-driven robot development. In the session about model-driven robotics, Ricardo Sanz (Polytechnic University of Madrid) explained how systems engineering knowledge can be used at runtime by their framework called mROS. Then Ansgar Rademacher (CEA List) presented the integration of ROS/ROS2 into Papyrus for Robotics, a model-driven development IDE. Finally, Shashank Sharma and YJ Lim (MathWorks) presented how MATLAB and Simulink can be leveraged for model-driven automated driving system development.

In the full stack solutions session, speakers presented solutions for developing robot software using ROS. Pablo Quilez (Drag & Bot GmbH) showed how their software makes developing industrial robot applications easy and robot-manufacturer independent. Herb Kuta (LGE) talked about OpenEmbedded, meta-ros and webOS, which make developing custom Linux distributions for embedded systems that package ROS easy and automatable. Mathew Hansen (AWS) talked about AWS RoboMaker, a cloud-based solution for robot development and lifecycle management.

Software quality and Security are improving

Industrial deployment of robot software requires high quality code. In the software quality session, Bainian Chen (ARTC) explained new features of industrial_ci, a continuous integration solution for ROS and ROS2. Zhoulai Fu and Francisco Martinez (ITU) presented their experiences with fuzz testing ROS components. Increased connectivity in automation leads to higher productivity but also to higher vulnerability to cyber-attacks, making security a major factor for robot systems. In the security session, Victor Mayoral (Alias Robotics) presented how robot end-points can be protected against cyber-threats. Federico Maggi (Trend Micro) explained how legacy programming languages in robotics endanger robot security. Finally, Ulrich Seldeslachts (LSEC) gave a broader perspective on hardening industrial robotics installations.

ROS 2-based real-time systems are in sight

Real-time is becoming a more and more pressing topic for spreading ROS in industry. ROS 2 now has real-time capable middleware and schedulers. Ralph Lange (Bosch Corporate Research) presented their implementation of a real-time and deterministic scheduler for ROS 2. Francesca Finocchiaro and Pablo Garrido (eProsima) presented how ROS 2 can be run on microcontrollers using µROS. Finally, Lennart Puck (FZI) presented how real-time systems can be created using ROS 2 as well as a benchmark of these systems. Lennart Puck stated that based on their benchmarks ROS 2 can meet real-time requirements. Katherine Scott (Open Robotics) talked about the transition from ROS 1 to ROS 2 and the general design decisions. The conclusion in general is that now is the time to switch to ROS 2.

Professional applications are expanding

Another part of the conference was three sessions around applications of ROS in professional scenarios or products. This was kicked off with a session on industrial applications on the first day of the conference, where ABB Corporate Research presented how ABB robots can be controlled with ROS and Tecnalia showed how scan & plan applications can be implemented on industrial robots using ROS. On the second day, another session on industrial applications featured presentations from Bosch, Sewts and Pilz. Timo Steinhagen (Bosch) presented the Locator, which was developed using ROS. Sewts presented their ROS-based robot application for handling textiles. Pilz talked about their ROS-based service robotics portfolio. Another session focused on applications in agriculture. Here, Heiko Engemann (FH Aachen) presented their robot, the ETAROB, which runs ROS. Felipe Neves dos Santos (INESC TEC) explained how they use ROS on robots for woody crops. Finally, Wilco Bonestroo (Saxion University of Applied Sciences) talked about using ROS to develop drones for agriculture.

ROS-Industrial Video Competition

More proof of ROS in application came from the ROS-Industrial Video Competition, which asked for videos in the categories of professional applications and cloud robotics. The cloud robotics category was sponsored by AWS. In total, 33 videos were submitted. In the cloud robotics category, INESC TEC won with the following submission.

The professional application category was won by the company QuadSat, which produces drones for antenna testing.

Links & Videos

All competition videos can be found here: https://rosindustrial.org/rosindustrial-video-competition-2020

The conference videos can be found here:

ROS-Industrial Asia Pacific Workshop 2020

The Annual ROS-Industrial Asia Pacific Workshop took place on 29th October 2020, this year in a one-day digital webinar format. The workshop was opened by our Guest-of-Honor, Prof. Quek Tong Boon, Chief Executive of the Singapore National Robotics Programme. After the opening, Erik Unemyr, Consortium Manager for ROS-Industrial Asia Pacific, shared updates on the topic of “Industry Ready ROS 2 – Easy to Adopt Modules with Quality”, covering the current technology focus the team has been developing in-house, including:

  1. easy_perception_deployment – a ROS2 package that aims to accelerate the training and deployment of Computer Vision models for industry use (now in Beta release; you can find it here)
  2. easy_manipulation_deployment – a ROS2 package with a user-friendly Graphical User Interface (GUI) to create a robotic workcell, supporting a variety of commonly used industrial end-effectors through a flexible grasp implementation approach. This package will be released soon and made available on the ROS-Industrial GitHub.

Next, we had the opportunity to invite Roger Barga, General Manager at AWS Robotics, to present on “The Role of the Cloud in Future of Robotics”. During his presentation, he addressed the importance and necessity of applications in cloud computing such as using it for development of robotic applications in simulation, testing and deployment. AWS also currently supports ROS, ROS2 & Gazebo within their services.

Matt Robinson, Programme Manager for our ROS-Industrial counterpart in Americas at the Southwest Research Institute (SwRI), presented on “Enabling Production Performance in ROS-Based Systems” where he brought up the value of ROS2 for various industrial use cases and also showcased some of the developments happening at SwRI.

Sharing more details about the activities at the Advanced Remanufacturing and Technology Centre, Bai Fengjun, Technical Lead from the Advanced Robotics Applications team at ARTC, presented development on the Next Generation Hyper-Personalization Line and how ROS has played a part in the development of such applications for the Fast Moving Consumer Goods sector.

Michael Sayre, CEO & Co-Founder of Cognicept Systems, one of our Consortium Members in the Asia Pacific Region, then presented on the importance of error handling and remote management for robotic fleets, and their latest development of the ROS2 Listener agent that was developed together with the ROS-Industrial Team at ARTC. You can find the repository here.

Shortly after, Albertus Hendrawan Adiawahono, Head of the Mobility Group at the A*STAR Institute for Infocomm Research (I2R), presented on their current efforts with the local healthcare ecosystem to develop modules that help robots be more resilient in the rapidly changing environment of a hospital ward. They have completed proofs of concept in which the robots adapt to lifts, curtains, and even a simulated blue-code emergency drill.

After the lunch break, we invited Jack Sheng Kee, Lab Director of the Delta Research Centre, a ROS-Industrial Consortium Member, to share on “Reconfigurable and Flexible Automation in Manufacturing” where he presented some of the existing solutions Delta has developed, and how they are all ROS supported.

We also had the team from Open Robotics, Marco Gutierrez and Grey, to present on roadmap updates with new features and future plans for Ignition Gazebo, ROS2 and also the Robotics Middleware Framework (RMF). The development of RMF has become a key effort in driving the integration and deployment of wide-scale smart robotics systems, which includes the communication between robots, building infrastructure and other edge devices.

Christoph Hellmann Santos, Consortium Manager for ROS-Industrial Europe at Fraunhofer IPA presented on the latest updates and success stories of both the ROSIN and ROS-Industrial Projects, such as the toolbox for automated delivery for the DHL Streetscooter and the real-time mapping project with Bosch Rexroth.

Prof Trygve Thomessen, Managing Director of PPM Robotics AS also presented ROSIN updates with the ROSWELD project, an application and success story of ROS being deployed in heavy industrial applications such as robotic welding. Last but not least, we had Andrei Kholodnyi, Principal Technologist at Wind River to present on “A Mixed-Critical ROS2 Implementation on VxWorks RTOS, WRLinux & Hypervisor” where he highlighted the use and importance of safety compliant and real-time solutions for ROS2 Applications.

A summarized table of all the speakers, including presentation slides and recording, is now available here!

To conclude this year’s ROS-Industrial Workshop Asia Pacific, Dr. Zhang Jing Bing, Technical Division Director for Smart Robotics and Automation (SRA) at ARTC gave his closing remarks.

The ROS-Industrial Consortium Asia Pacific @ ARTC continues with a multi-pronged approach to bridging the gaps between industry and the community in the adoption of ROS and robotics: working closely with our industry partners to develop modules that cater to industrial needs, and providing training opportunities for aspiring roboticists as well as companies that are embarking on leveraging ROS to scale their robotics adoption.

On behalf of the ROS-Industrial Team at ARTC, we hope that you enjoyed the webinar as much as we did, and we look forward to meeting each other in 2021 for future ROS-Industrial activities!
