Hands on with FRAMOS D435e camera featuring Intel® RealSense™ technology for industrial robotics

It has been nearly 10 years since the release of the XBOX Kinect camera, and things have certainly changed. Gone are the days of using reverse-engineered software drivers or soldering on USB cables. We have arrived in a time of 3D perception plenty. Multiple vendor-supported 3D camera options exist utilizing different depth camera technologies, to such an extent that picking one can often be difficult. The ever-popular 3D camera survey has grown to over 20 cameras, and it is still growing. Nevertheless, the ROS-I team is committed to testing as many of these sensors as possible and putting them through their paces.

One option that I have recently become fond of is the Intel RealSense. In my mind it is a sort of industry standard. Released and actively supported by ROS-I consortium member Intel, the RealSense is inexpensive, available off the shelf, and works reliably across a wide variety of operating conditions. Combine that with a stellar ROS (and ROS 2) driver, and you have a winner. The convenience of being able to just plug in the camera, bring up realsense-viewer, and then tune or debug the camera cannot be overstated. It is a camera for the robot masses. However, it does have a problem. The Intel RealSense is USB 3.

For industrial automation tasks, USB cameras have long been an issue. We often set up robotic wrist cameras at Southwest Research Institute. With a properly calibrated depth camera on a robot wrist we are able to reconstruct large parts and automate complex industrial tasks. USB complicates this setup. Flaky connections can plague a setup like this, leading developers to ritualistically disconnect and reconnect USB cables whenever something goes wrong. Further, cable runs quickly exceed the length that USB can handle, leading to the use of sometimes-suspect USB-to-Ethernet extenders. If only there were a camera option that had the software backing of the Intel RealSense with the industrial readiness of PoE. Meet the FRAMOS Industrial Depth Camera D435e. Based on Intel RealSense technology, FRAMOS has packaged the D435e 3D GigE camera in an IP66 enclosure and replaced the USB-C connector with an industrial-ready M12 GigE connection. While it certainly isn’t going to replace the Intel RealSense as the camera for the robot masses, it might be the camera for the industrial robot masses.

The ROS-I team recently got the chance to work with the camera hands-on, deploying it on an industrial scan-and-plan project. The setup for the FRAMOS RealSense is straightforward, if a little quirky. On Linux one simply installs the FRAMOS CameraSuite software and then their custom version of the librealsense SDK. Once the network settings for the camera are configured, it behaves just like the USB RealSense: ROS doesn’t know the difference, and realsense-viewer behaves in the same way. However, using it has not been without its difficulties. ROS support for this camera was released October 29, 2019, and we began integrating it on our system on November 4, 2019, so some growing pains were to be expected. The software suite only allowed a limited set of parameters to be set and required a custom version of libglfw3. The latter caused an issue where the camera mysteriously stopped working after an unrelated package was updated, and the only apparent solution was to reinstall the software suite. However, a few weeks later a new version of the software was released that fixed both issues.
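
Because the FRAMOS camera sits behind the same realsense2_camera ROS driver, application code is identical for the USB and GigE variants. The sketch below is a minimal depth-stream consumer; the topic name is the driver’s usual default and is an assumption here, so adjust it to your camera namespace.

```cpp
// Minimal ROS1 consumer of the RealSense depth stream. The topic name is the
// typical realsense2_camera default (an assumption here); the same code works
// whether the images come from a USB D435 or the FRAMOS D435e.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void depthCallback(const sensor_msgs::ImageConstPtr& msg)
{
  ROS_INFO("Depth frame %ux%u (%s)", msg->width, msg->height, msg->encoding.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "depth_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/depth/image_rect_raw", 1, depthCallback);
  ros::spin();
  return 0;
}
```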

Overall, using this camera has been troublesome, but to an ever-decreasing degree. To some extent, the convenience and maturity of the Intel RealSense make it easy to complain about any small thing that goes wrong on new hardware, but I personally have high hopes for the FRAMOS RealSense. When debugging, their software support was quick to respond, and the software seems to be in a state of continued improvement. We still run into occasional crashes, but similar issues plagued the Intel RealSense a mere year ago and have since been resolved. The required custom version of the librealsense SDK is its biggest oddity, but with continued collaboration between FRAMOS and Intel this may be resolved someday.

Ultimately, when I think about the ideal camera for a ROS-I system, I imagine a PoE camera with easy-to-use software, rock-solid vendor support, and an active ROS community. While there will always be cameras that excel at one application or another due to depth of field, resolution, or sensing technology, for medium to large scan-and-plan applications the FRAMOS RealSense is well on its way to achieving that goal. Others will undoubtedly join the market, but regardless, it is an exciting time for 3D perception in industrial applications!

New edition of the ROS MOOC from TUDelft for ROS beginners

We are pleased to announce a new edition of the ROS MOOC, Hello (Real) World with ROS. The course will open on 15 January 2020 at 13:00 CET on the edX online learning platform.

You can enrol now at the Course Webpage for a fun ROS learning journey!

This course is a part of the educational activities of the EU project ROSIN and is offered by the TU Delft Cognitive Robotics department with the support of the Online Learning School.

The target audience for the course is beginner-level ROS1 users. The course will be instructor-paced and six weeks in duration, with an expected study/work load of about 8-12 hours per week.

See you online from January 15th!

The Delft ROS MOOC team

Observations from CRAV.AI 2019 Conference on Collaborative Robotics and AI

I had the opportunity to attend CRAV.ai 2019 in San Jose CA and present SwRI's work on collaborative robots; this work described how we built a sophisticated collaborative robotics application where various tasks were taught to a robot arm by way of human demonstration. One highlight of this application is that it leveraged low-cost sensors and open-source software frameworks such as ROS, MoveIt, AruCo, Ceres, etc.

The presentation was very well received, and we hope it'll lead to opportunities to further that work and continue to do research in that space.

Furthermore, there were many interesting presentations that explored innovative ways to make robots collaborate with and empower human workers in the various industries where the inherent adaptability and dexterity of humans remains irreplaceable.

Human Augmentation of Robots for the Automation Age (SARCOS):

  • Described the following challenges in manufacturing:

    • Projected labor shortages in the US and many other industrialized nations in the next decade will have a negative impact on the economy in the trillions of dollars.

    • Occupational injuries incur an annual cost of $100 billion in the US.

  • Complete automation isn’t the solution to the labor shortage challenges ahead

    • Humans will continue to play a role in manufacturing due to the unstructured and unpredictable nature of the many tasks for which automation falls short.

    • SARCOS showed its GUARDIAN XO powered exoskeleton, which a human can operate in unstructured environments in order to carry out a diverse set of tasks.

    • The suit provides the human worker with added strength and endurance, reduces the risk of injuries, and enhances productivity.

How Robot Motion Planning is setting Robots Free (Collaborative Robotics):

  • Interesting approach to multi robot motion planning using precomputed swept volumes

How AI, Robotics, Vision, and Industry 4.0 Will Revolutionize manufacturing (Canon USA)

  • Great summary of past technological revolutions

    • Mechanization, steam power, weaving loom (Industry 1.0)

    • Mass production, assembly line, electrical energy (Industry 2.0)

    • Automation, computers, electronics (Industry 3.0)

  • Industry 4.0 forecast

    • Increased efficiency through robot/human collaboration

    • The concept of the Smart Factory (highly digitized and interconnected production)

    • Large-scale data analysis

  • Lights Out Manufacturing

    • Advanced sensing technologies that allow robots and machines to operate in the dark

    • If implemented correctly, it could maximize efficiency and profitability.

    • Not a common approach in factories, but it’s viable given current technologies.

  • The Role of AI

    • Predictive maintenance would eliminate the need for predetermined schedules

    • Through machine learning and data collection, systems could adapt to changes or function with fewer interruptions

    • Increase the remaining useful life of machinery

Volumetric Technologies for Future Sports Experiences (Intel Sports)

  • The role of advanced technologies in today’s sports

    • An array of cameras placed around the stadium allows creating a virtual camera view at any desired location

    • Broadcast Enhancements allow creating a narrative or facilitate advertisement

    • Automated virtual camera movement allows following players, following the ball, predicting the best camera position, etc.

    • Used by various professional tournaments and leagues

  • Volumetric video

    • Can create rich, compelling, and immersive media experiences.

    • Combined with VR headsets, it would provide a fundamentally new way to experience sports.

It was a pleasure to be part of such an interesting event, and we look forward to both contributing and working with others to advance robotics, in particular where collaboration is a key element.

What Went Down at ROSCon 2019

ROS-Industrial representatives from the three global regions attended and presented at ROSCon Macau 2019, held from Oct. 31 to Nov 1.

The booth was supported by all three consortia from Asia Pacific, the Americas, and Europe. One initial change was the introduction of workshops ahead of the formal agenda, including a very good “Day 0” workshop on the ecosystem (ROS is getting bigger and bigger; how do we get those people into a more active role?).

This year, the ROS-Industrial team presented a demonstration titled “Robotic Pick & Place with Augmented Reality,” which showcased the interoperability of ROS by allowing robots to perform tasks learned from operator input through an intuitive augmented reality interface – removing the need to program the robots.

Along with ROS-I, there was a noticeable increase in exhibitors (~40), with more companies attending and supporting ROSCon, which speaks to the growth and evolution of the community and of the event itself.

Recruitment and the seeking of ROS-skilled professionals (“we are hiring”) were obvious, even more so than in past years. Another hot topic, perhaps the running theme of the event, was “we need more and complete documentation of ROS2.”

As always with such content-packed events, no matter how long the coffee breaks are, they are never sufficient to meet and greet everybody.

Photo: Ng Wei Kien & Dejanira Araiza at the ROS-Industrial booth showcasing the demo.

The ROS-I teams were selected and gave three talks:

Levi Armstrong and Chris Lewis presented “Industrial Manufacturing Automation Leveraging ROS,” Dejanira Araiza-Illan presented “PackML2: State Machine Based System Programming, Monitoring and Control in ROS2,” and Michael Ripperger presented “Flexible Framework for Quantitative Reachability Analysis.”

Photo: Levi Armstrong, ROS-I Americas/SwRI, presenting “Industrial Manufacturing Automation Leveraging ROS.”

One of the major keynote presentations covered the Robotics Middleware Framework (RMF) and the roadmap for adopting robotic solutions and smart systems in Singapore’s public healthcare sector.

A brief summary of presentations that stood out to the teams present:

  • ROS Real Time Workshop
    • Both hardware and software requirements
      • Real-time kernel, messaging, etc.
    • Lots of effort required to setup/run real-time Linux kernel
    • Security and real-time conflict of interest?
    • Timing and determinism
      • TCP/IP can meet determinism
      • UDP can meet timing
      • Neither meet both
    • Large effort by various companies to benchmark timing, processing speed, memory usage, latency, etc. of ROS2 with various DDS implementations
    • Very important for mobile robotics and autonomous applications
    • Benefits of this effort will likely be available by the time most industrial manipulation applications we work on care about this functionality
  • ROS2 on VxWorks
    • ROS2 dependencies installed
    • Run ROS2 on an RTOS
  • ROS2 migration of Navigation Stack
    • Re-design of the architecture to leverage ROS2 features
      • Component and life-cycle nodes
    • Run-time definable behavior trees for decision-making in fault scenario
  • Reactive programming
  • ROS in Jupyter Notebook
    • Essentially an online IDE that can compile and run code
    • Alternative for industrial training (Python)
    • C++ support?
  • High Assurance ROS
  • Reactive Jogger
    • UT Austin Robotics Lab
    • Teleoperation robot jogger with singularity avoidance, signal filtering, reactive control
  • Cartesian Controllers
  • OMPL constrained planning
  • MoveIt Task Constructor
  • Pilz Industrial Motion Control
    • Deterministic motion planners for industrial move types (MoveL, MoveJ, MoveC)
    • Include blending parameters

Photo: Michael Ripperger, ROS-I Americas/SwRI, presents the process planning framework at the MoveIt Workshop.

Things to take away…

  • High level observations:
    • Majority of attendance/presentations related to mobile robots in logistics space and autonomous driving
    • Lots of people care about real-time capability
    • Lots of people diving deep into the RMW and DDS layers to fix bugs and improve performance
    • Lots of people care about scaling robot operations
      • Discussion around how ROS fits into fleet deployment framework
      • System of systems architecture approach
    • Not much about manipulation
      • One half of one day
      • Not a lot of manufacturing-centric content outside of ROS-I
  • Interesting Information
    • TRAC-IK has multiple objectives for driving the gradient
      • Speed: default (joint speed?)
      • Distance: minimizes joint distance from seed
        • Should probably be preferred to minimize configuration changes
      • Manipulability
    • BioIK
    • ROS2 Life-cycle nodes
      • State-machine for nodes
    • ROS2 component nodes
      • Similar to nodelets
      • Can share memory with other component nodes in the same container (see the sketch after this list)
    • Use MoveIt planning objects in favor of the MoveGroup
    • Microsoft is releasing HoloLens v2.0, which is “industrial grade”
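
To make the component-node observation above concrete, here is a minimal sketch of a composable ROS2 node; the class, namespace, and topic names are illustrative rather than taken from any talk. Registered this way, the class can be loaded into a component container next to other components and exchange messages via intra-process communication, much like a ROS1 nodelet.

```cpp
// Minimal sketch of a ROS2 component node (illustrative names). Loading
// several such components into one container process lets them share memory
// via intra-process communication, similar to ROS1 nodelets.
#include <chrono>
#include <rclcpp/rclcpp.hpp>
#include <rclcpp_components/register_node_macro.hpp>
#include <std_msgs/msg/string.hpp>

namespace demo
{
class Talker : public rclcpp::Node
{
public:
  explicit Talker(const rclcpp::NodeOptions & options)
  : Node("talker", options)
  {
    pub_ = create_publisher<std_msgs::msg::String>("chatter", 10);
    timer_ = create_wall_timer(std::chrono::seconds(1), [this]() {
      std_msgs::msg::String msg;
      msg.data = "hello from a component";
      pub_->publish(msg);
    });
  }

private:
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};
}  // namespace demo

// Makes the class discoverable by component containers at runtime.
RCLCPP_COMPONENTS_REGISTER_NODE(demo::Talker)
```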

Thoughts relative to ROS-I...

  • We should become more familiar with ros_control capability
    • Trajectory replacement
    • Cartesian and force control capability
    • Create some compelling demos
    • Real-time interaction with external devices
  • We need to be more active/supportive of core ROS repositories
  • Creation of ROS Calibration GitHub organization
    • We should move our calibration libraries here to make them more discoverable
    • We can still brand them thoroughly as ROS-I developed

Finally, it is clear that ROS2 development, including navigation, optimization of the DDS/transport layer, and discussion of real-time capabilities, is making great progress. However, there are challenges, particularly in the areas that ROS-I seeks to address: capabilities for manipulation and path planning, documentation, and significant progress on ease of use to enable manufacturing-centric organizations to really jump in. In the interim there is enough tech industry engagement to fill the void, provide additional tools, and offer solutions to meet industry needs. Now is the time to create compelling ROS2 demonstrations and reference applications that drive further end-user engagement. ROSCon always lights a fire of inspiration; now to set to the work of getting that fire to spread. Looking forward to 2020!

Content provided by Sheila Suppiah, ROS-I AP, Thilo Zimmerman, ROS-I EU, and Michael Ripperger, ROS-I Americas.

Alliance with The Singapore Industrial Automation Association

ROS-Industrial Consortium Asia Pacific signs alliance with Singapore Industrial Automation Association (SIAA)

On the third day of Industrial Transformation Asia Pacific (ITAP) 2019, a Memorandum of Understanding was signed between the Singapore Industrial Automation Association (SIAA), the ROS-Industrial Consortium Asia Pacific, and the National Robotics R&D Programme Office (NR2PO). The guest of honour witnessing this event was the Senior Minister of State for Trade and Industry, Singapore, Dr. Koh Poh Koon.

The signing ceremony taking place at ITAP 2019, Sandbox 2 on 24th October 2019.

This partnership will enable collaboration between the three organizations to accelerate the adoption of robotics using the Robot Operating System (ROS) in the automation industry and by system integrators in Singapore. The ROS-Industrial Consortium will continue to work with organizations and associations to promote open innovation and collaboration, as well as the use of democratized robotics.

ROS-Industrial Asia Pacific at ITAP 2019

Industrial Transformation Asia Pacific (ITAP) is the regional version of the iconic Hannover Messe, organized by Singex Exhibitions & Deutsche Messe. It is one of the leading trade events related to Industry 4.0.

ITAP was aimed at bringing in key stakeholders and companies to encourage collaboration and deepen the understanding of advanced manufacturing and adoption of Industry 4.0 solutions.

The ROS-Industrial Asia Pacific Consortium participated as an exhibitor this year, showcasing diverse demonstrations powered by ROS, alongside our consortium members ADLINK & Pepperl+Fuchs. The exhibition stretched over three days, from the 22nd to the 24th of October 2019.

ROS-Industrial Asia Pacific Team at ITAP 2019, Day 1, before its doors opened to visitors!

One of our teammates, Bey Hao Yun, had the opportunity to conduct a few hands-on sessions over the three-day period, aimed at demonstrating to the audience the simplicity of creating a robotics solution using off-the-shelf open-source software modules. His presentation clearly illustrated the entire programming flow of using ROS for such applications.

Bey Hao Yun, from ROS-Industrial Consortium Asia Pacific, during the sandbox workshop.

Our consortium manager, Erik Unemyr, also conducted a sandbox presentation on Accelerating Automation & Robotics Solutions with Open Source Software.

Erik presenting at Hall 2 Sandbox during ITAP 2019.

It was great to see users of ROS as well as people who are interested in learning and adopting ROS solutions come forward with questions for us throughout the entire exhibition.

A big thank you from the ROS-Industrial Consortium Asia Pacific team and we hope to see you in our future events!

Contributors:

Erik Unemyr

Sheila Devi Suppiah

ROS-I Training Day introduced ROS2 as advanced topic

SwRI hosted a session for ROS-Industrial training onsite in San Antonio on October 8-10. Of special interest was an introduction to ROS2 as a new advanced topic. This was the first inclusion of ROS2 material at a ROS-Industrial Americas training event and drew significant interest, with over a dozen developers attending.

ROS2 is a new iteration of ROS with many aspects undergoing complete redesigns, including core components such as the middleware layer and the build system. Along with these architectural redesigns, Open Robotics is taking the opportunity of breaking compatibility to make extensive improvements to the ROS experience in all aspects. As a consequence, many concepts, tools, and techniques we grew familiar with in ROS1 are no longer present and some upfront time learning about the changes is needed. The new training material developed at SwRI aims to ease this transition by providing some practice working in ROS2 for developers already familiar with ROS1 systems. There are currently three exercises developed which have been made available on the public ROS-Industrial training website. These exercises run through (1) the basics of working with ROS2 systems with a focus on the changes to the command-line tools, (2) the steps needed to port existing ROS1 packages with C++ code to functionally equivalent ROS2 packages, and (3) how to use the ROS1-ROS2 bridge to enable communication between two systems when porting is not yet feasible. Some additional details were also presented to the group, especially focused around SwRI’s ongoing experiences in porting a very large ROS1 codebase to ROS2.
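
For a flavor of what the porting exercise covers, here is a minimal sketch, not taken from the training material, of a classic listener node rewritten against rclcpp: the client library and message headers change, and logging goes through the node’s logger, but the overall structure stays familiar.

```cpp
// Sketch of a classic listener node ported to ROS2 (not the actual exercise
// code): rclcpp replaces roscpp, message headers move under <pkg>/msg/, and
// ROS_INFO becomes RCLCPP_INFO on the node's logger.
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>

class Listener : public rclcpp::Node
{
public:
  Listener() : Node("listener")
  {
    sub_ = create_subscription<std_msgs::msg::String>(
      "chatter", 10,
      [this](const std_msgs::msg::String::SharedPtr msg) {
        RCLCPP_INFO(get_logger(), "I heard: '%s'", msg->data.c_str());
      });
  }

private:
  rclcpp::Subscription<std_msgs::msg::String>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<Listener>());
  rclcpp::shutdown();
  return 0;
}
```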

Overall reception at the training event was quite positive, with a lot of interest to closely monitor the ongoing ROS2 development and see when the best time to start focusing on ROS2 will be. We expect that ROS2 training will continue to be a core topic, and plan to continue developing more material that will cover additional pieces of ROS2. Of course, eventually we expect to be training everyone in ROS2 from the start and have ROS1 relegated to maintenance mode. Be sure to check back frequently, as we’re right in the middle of this transition and updates could happen any time!

Tech Workshop on MoveIt, security & skill oriented programming with ROS

The Fall edition of ROS-Industrial EU Tech Workshop took place at Fraunhofer IPA on October 09th and 10th, 2019.

We were glad to host two European MoveIt maintainers, Henning Kayser of ROS-Industrial Consortium member PickNik Robotics and Michael Görner from the University of Hamburg. They gave us an insight into the latest developments of MoveIt (incorporating motion planning, manipulation, 3D perception, kinematics, control & navigation), current and planned developments for ROS2 (MoveIt2), and a hands-on session on going from ROS(1)-based 'bare metal to product'. First they presented an inside view of the manipulation framework. Providing complementary academic and industrial perspectives, they shared their views and experiences on MoveIt's overall structure, practical deployment of planning-based pipelines, complex manipulation planning using the MoveIt Task Constructor, and upcoming projects and ideas for a ROS2 migration. The workshop concluded with a practical session that guided the participants through setting up a functional Pick&Place pipeline from a custom bare robot description. Slides and code examples are available at https://github.com/henningkayser/ROS-Industrial_EU_Fall19_MoveIt .
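
The workshop’s actual pipeline lives in the linked repository; as a generic illustration of the kind of MoveIt call that sits underneath such a Pick&Place pipeline, here is a minimal MoveGroupInterface sketch. The planning group name "manipulator" and the target pose are assumptions for the example.

```cpp
// Minimal sketch of driving MoveIt from C++ (not the workshop code). The
// planning group "manipulator" and the target pose are assumptions.
#include <ros/ros.h>
#include <geometry_msgs/Pose.h>
#include <moveit/move_group_interface/move_group_interface.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "simple_pick_approach");
  ros::AsyncSpinner spinner(1);
  spinner.start();

  moveit::planning_interface::MoveGroupInterface group("manipulator");

  // Plan and execute a move to a Cartesian pose above the part.
  geometry_msgs::Pose target;
  target.orientation.w = 1.0;
  target.position.x = 0.4;
  target.position.y = 0.0;
  target.position.z = 0.5;
  group.setPoseTarget(target);

  moveit::planning_interface::MoveGroupInterface::Plan plan;
  if (group.plan(plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS)
    group.execute(plan);

  ros::shutdown();
  return 0;
}
```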

The first session on day 2 of the ROS-Industrial EU Fall'19 Workshop was about security in ROS where Sebastian Taurer from JOANNEUM RESEARCH presented his work on a penetration testing tool for ROS1, called 'ROSPenTo', and gave an introduction on how to use SROS2 to secure communications in ROS2. In the first part of the session ROSPenTo was introduced to provide basic information on how it works and what a user can do with it. During the hands-on section the participants were guided through a step-by-step manual showing how to analyse, penetrate and modify a running ROS1 system using ROSPenTo. In the second part of the session ROS2's security tools (a.k.a. SROS2) were explained and used to setup and configure a security infrastructure. The provided examples demonstrated the creation of all necessary security artefacts (e.g. keys, certificates, etc.) and also the procedure to securely distribute the artefacts to different machines. All the related information as well as the workshop tutorial can be found here: https://github.com/jr-robotics/ROS-Industrial_EU_Fall19_Workshop

The ScalABLE4.0 session at the ROS-Industrial EU Fall'19 Workshop focused on presenting the set of technologies that are enabling flexibility in production lines in two industrial pilots from the automotive sector: PSA Peugeot Citroën and Simoldes Plásticos. Within the project, a complete digital manufacturing software stack entitled the 'Open Scalable Production System' (OSPS) is being developed. The OSPS aims to efficiently and effectively visualize, virtualize, construct, control, maintain, and optimize production lines through tight integration of the enterprise information systems with transformable automation equipment, paired with the necessary open interfaces for optimized solutions at all hierarchy levels (slides).

During the workshop, attendees were introduced to the set of components that compose the OSPS and got a chance to test, interact with, and develop against them, namely: (i) the Advanced Plant Model, which is responsible for virtually integrating data from the industrial shop floor in a centralized digital twin; (ii) the Production Manager, a cloud-based software module that issues and supervises the execution of manufacturing tasks; (iii) SkiROS and the Task Manager, which are distinct ROS-based approaches to orchestrating the behaviour of robotic systems; (iv) the Skill-based Robot Programming methodology, which enables the reutilization and adaptation of ROS-based robotic applications to different purposes, platforms, and environments; (v) and, finally, the ROS-CODESYS bridge (ROBIN - https://github.com/ScalABLE40/robin), which enables horizontal integration between robots and automation equipment.

As part of the ScalABLE4.0 project, Bjarne Grossmann from AAU, cofounder of RiACT, presented their skill-based robot control software SkiROS v2 (slides). Their technology is based on extended behavior trees, which allow the definition of reactive behavior for highly flexible manufacturing environments. The framework is backed by a semantic database for inference and support of task planning to automatically generate complex tasks. In the hands-on session, Bjarne demonstrated the system with a SkiROS implementation of the classic ROS turtlesim demo, showing that SkiROS can easily be used to create complex behavior (and not only for turtles). The demo can be found on the git repository https://github.com/Bjarne-AAU/skiros-demo. An official open-source release of the software is coming soon. Stay tuned on www.riact.eu!

Next European expert workshops will be organized in Spring and Fall 2020. We will keep you posted!

Documentation updates improve ROS utilization and functionality

Lessons from UR driver updates reinforce importance of documentation

A key strength of the open-source community is the capacity to build on the knowledge of other developers who enable future advancements. Documentation plays a critical role in advancing understanding, which improves ROS utilization globally. This will be increasingly important as we move from ROS to ROS2 and document the various steps, including driver updates, necessary to execute projects.

Recent driver updates for a Universal Robots project helped demonstrate the importance of documentation to our team. In July 2019, Universal Robots updated their software for the e-Series and CB-Series to 5.4 and 3.10 respectively. We were using the 5.4 software on UR10e robots for a few different projects, and the ur_modern_driver [1] for ROS Kinetic and Melodic was no longer compatible after this update. I updated the driver by investigating the release notes for 5.4.x.x [2] and the client interface document [3].

UR10 E-series in the SwRI collaborative lab

To update the ROS driver for compatibility with software updates, I first identified what changes occurred and located appropriate software documentation for the hardware. The software documentation defined modules and variable types for the changes, which allowed for comparison with equivalent variables and modules in the current driver code. Without proper documentation from Universal Robots this would have been a much more difficult endeavor.

The ur_modern_driver specifically interacts with the client interface. Two variables were added to the 5.4 software client interface: a reserved byte in the Masterboard Data sub-package of the Robot State Message (for internal UR use), and a safety status value in the real-time interface. The client interface document gave the types and sizes of these variables, along with the variables used in previous software versions.

The client interface for 5.3 and earlier had an internal UR int in Robot Mode Data that I could compare to, and the safety status value could be compared to any of the other double variables in the previous driver’s real-time interface. I used the search feature in Qt Creator to find instances of these variables and added equivalent lines for the new ones. Then I used Qt Creator’s debugger to track where new if statements and functions needed to be added to allow the driver to detect the new software version being used and access these new variables.
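
The pattern boils down to reading an extra field only when the reported software version is new enough. The snippet below is a simplified, hypothetical illustration of that idea, not the actual ur_modern_driver code; the real packet layout and byte-order handling are omitted.

```cpp
// Hypothetical, simplified illustration of version-gated parsing -- not the
// actual ur_modern_driver code. A field appended in a newer software version
// is only read when that version is detected, so the same parser still
// handles older controllers. Byte-order handling is omitted for brevity.
#include <cstddef>
#include <cstdint>
#include <cstring>

struct RealtimeState
{
  double safety_status;  // value added in the 5.4 / 3.10 software
  // ... other fields omitted ...
};

// 'buf' points at the packet payload; 'major'/'minor' come from the version
// message the controller sends when the client connects.
void parseRealtimePacket(const uint8_t* buf, int major, int minor, RealtimeState& out)
{
  std::size_t offset = 0;
  // ... read the fields common to every version, advancing 'offset' ...

  const bool has_safety_status = (major > 5) || (major == 5 && minor >= 4);
  if (has_safety_status)
  {
    // The new double is simply appended, so older parsers can ignore it.
    std::memcpy(&out.safety_status, buf + offset, sizeof(double));
    offset += sizeof(double);
  }
}
```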

Working on this upgrade reinforced the necessity of good documentation. In general, the ur_modern_driver had more detailed documentation than many other ROS repos; however, it could still be improved. The README had no mention of the purpose of the use_lowbandwidth_trajectory_follower parameter in the launch files or the urXXe_bringup_joint_limited.launch file; both of these are useful when simulations are being overenthusiastic in their trajectory planning. I added documentation to the README to help others use these features to troubleshoot.

To update drivers, you will need to know what has been changed in the software, and you will ideally have access to a previous version of the driver. Because industrial hardware is intended to be reliable and accessible for multiple clients, there is often plenty of useful documentation if you can search with the correct terminology. Using the known changes and the documentation to compare with the previous driver code allowed me to update the driver fairly quickly so projects could move forward.

[1] https://github.com/ros-industrial/ur_modern_driver

[2] https://www.universal-robots.com/how-tos-and-faqs/faq/ur-faq/release-note-software-version-54xx/

[3] https://www.universal-robots.com/how-tos-and-faqs/how-to/ur-how-tos/remote-control-via-tcpip-16496/

A Look Back at RIA's Autonomous Mobile Robot Conference

This is a guest post by Southwest Research Institute Intelligent Machines Group Lead Cody Porter, who was on-site at the RIA AMR Conference in Louisville, KY.

The Robotic Industries Association (RIA) and the Association for Advancing Automation (A3) hosted the inaugural Autonomous Mobile Robot Conference on September 17th in Louisville, Kentucky. The event was a great success, with over 400 attendees ranging from end users and integrators to OEMs, researchers, and academics discussing current AMR technologies, tips for integration, and future growth.

The opening speaker, Melonee Wise of Fetch Robotics, delivered a fantastic talk about the current technologies used in most AMR applications: the basic operations of how AMRs perceive and sense their environment, how they navigate dynamic environments, and what safety considerations are relevant to AMRs.

Matthew Rendall of OTTO Motors elegantly explained the most prevalent value proposition for AMRs. The labor force is shrinking in the United States while customer expectations for shorter lead times increase. The efficiency and effectiveness of AMRs are major benefits to most material handling applications and will see a continued acceptance in industry at large. Several other speakers, such as Denise Ebenhoech of Kuka and Norm Williams of OMRON, showed some additional applications of how AMRs have been used in other applications such as machine tending, machine feeding, lab automation, and several manufacturing applications.

Other topics of interest included the safety standards that govern the use of AMRs. A 2013 NIST report highlighted gaps in the published safety standards for the combination of automated guided vehicles (AGVs) and manipulators. A new standard, RIA R15.08, is in development, and Michael Gerstenberger, chair of the R15.08 Drafting Subcommittee, gave a preview, including a decision tree of the standard’s scope.

The talks concluded with Aaron Prather, Senior Technical Advisor of FedEx, who gave an overview of the logistics industry and what lessons FedEx has learned in AMR implementation. The talk gave a great perspective of the end user wish list and current shortcomings of AMRs. The first was a technical challenge; AMRs must be able to handle outdoor environments. Sun-blinding sensors, ground conditions disrupting odometry, platforms not able to withstand weather, and the dead reckoning of wide-open spaces must be addressed to open a vast number of use cases. The second shortcoming is the lack of interoperability of multiple AMR applications. End users are looking to fit AMRs within legacy systems as well as allowing multiple AMR OEMs to provide solutions for the best use cases. All presentation materials are publicly released on the RIA website.

The accelerated adoption of AMRs into warehousing and material handling applications is fertile ground for advanced software solutions that continue to leverage ROS. Interoperability is one of the most obvious opportunities: map data could be seamlessly shared between all platforms if OEMs are willing to expose it, as industry appears to expect. In addition, the path planning and manipulation capabilities demonstrated in ROS-based systems could continue to expand the use cases far beyond simple A-to-B material handling applications. The AMRs of the future will need to seamlessly switch between manufacturing processes and reallocate manipulators to where they are needed on large parts or in large factories. Let’s get beyond where we are today, and even the successes that have been demonstrated, and support the innovation that is possible, while satisfying end-user demand, with richer collaboration-enabling frameworks such as ROS.

Overseas Internship at ROS-Industrial Consortium Asia Pacific

Hello, my name is Willem de Graaf and I’m a former graduate intern at ROS-Industrial Asia Pacific, located in Singapore. I am majoring in Mechanical Engineering at Delft University of Technology in the Netherlands. During one of my Master’s courses I got acquainted with the ROS framework, which made me interested in investigating it further. When I got the opportunity to go abroad for my internship, I applied to ROS-Industrial Asia Pacific and got an intern position!

I would like to elaborate on how amazing this time was by guiding you through a typical day at the ROS Industrial Asia-Pacific office.

My day starts with traveling to the office. Owning a car in Singapore is quite expensive, so the majority of people travel by public transport. After 4 months in Singapore, I still don’t know if I can blame this behaviour on the expensive cars or the high humidity (as a Dutchy I thought it would be a good idea to take a 30-minute walk to the office on my first day. They considered me crazy). It is approximately a 10-minute walk to the MRT station, where a shuttle bus takes me to the office. The whole journey takes around 25 minutes, which is quite fast for the average person working in the office. I arrive at the office between 8.15 and 8.30, where my day starts with a nice cup of coffee.

Before elaborating on the daily activities, it is helpful to give a little background information about the internship assignment. The assignment involved the design of an open-source library that evaluates a gripper configuration against an object that has to be picked up by that gripper. The goal was to build a foundation of capabilities that will enhance the pick and place pipeline for manufacturing and warehouse environments. With this assignment, I got a lot of freedom to experiment with the existing ROS-Industrial software stack and the available hardware in the office.

Collaborative robot setup at the ROS-Industrial Consortium Asia Pacific office

The team within the organization is young, diverse, and highly motivated to achieve the best in collaboration with companies. As said, I got my own project, where I was given the opportunity to explore and express my own ideas. The team works on a lot of projects at the same time, and because we had a daily stand-up where we had to elaborate on our progress, we kept focus and challenged each other on a daily basis. From here onwards, I ran my own project and planned my daily activities. Most of the time I was developing software, but when it was time for testing the fun part started. I was able to use UR robots, KUKA robots, multiple grippers, conveyor belts and a lot more fun stuff. You are free to explore the possibilities with different hardware and electronics.

Pick and place flexible gripper and conveyor belt setup - ROS-Industrial Asia Pacific Workshop 2019

A typical Asian thing is to get lunch together at a food court. Because the office was not situated near any food courts, we took a bus to lunch every day. For a European, this is the best part of the day. The variety of food is enormous and the amount is tremendous, especially taking the price into account. Compared to European prices, a standard lunch is quite cheap. Due to this I have to confess that my cooking skills did not improve during my time in Singapore :p.

In the last week of my internship, our office hosted its yearly conference, the ROS-Industrial Asia Pacific Workshop 2019, where the capabilities of the developed library were showcased in a demonstration setup.

Willem de Graaf (Intern) and Arunava Nag (ROS-Industrial team) at the ROS-Industrial Asia Pacific Workshop 2019

I would like to conclude by saying that working at ROS-Industrial Asia Pacific could not have turned out better for me than it did. I learned a lot about the diverse Singaporean culture and people, I learned a lot about the Asian way of getting things done, and above all, I learned a lot about ROS, which was my main goal.

If you’re seeking an internship, please do not hesitate to contact them, as they’re always open to enthusiastic and motivated students who will contribute to bringing open-source robotics to companies!

Willem de Graaf

YAK: 3D Reconstruction in ROS2

So why YAK?

It’s difficult for robots to perceive objects in the real world, especially when those objects are shiny, previously unseen, or (gasp!) both! We are excited to introduce Yak, an open-source GPU-accelerated ROS2 package which addresses some of these challenges using Truncated Signed Distance Fields. The Southwest Research Institute booth demo from the Automate 2019 conference serves as a case study for a ROS2 system that integrates Yak into its perception and motion planning pipeline.

A bit of technical background

A Truncated Signed Distance Field (TSDF) is a 3D voxel array representing objects within a volume of space in which each voxel is labeled with the distance to the nearest surface. The TSDF algorithm can be efficiently parallelized on a general-purpose graphics processor, which allows data from RGB-D cameras to be integrated into the volume in real time.

Numerous observations of an object from different perspectives average out noise and errors due to specular highlights and interreflections, producing a smooth continuous surface. This is a key advantage over equivalent point-cloud-centric strategies, which require additional processing to distinguish between engineered features and erroneous artifacts in the scan data. The volume can be converted to a triangular mesh using the Marching Cubes algorithm and then handed off to application-specific processes.
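
To make this concrete, the per-voxel update is essentially a weighted running average of truncated signed distances. The sketch below follows the standard KinectFusion-style formulation and is illustrative only, not Yak's actual implementation.

```cpp
// Minimal sketch of the per-voxel TSDF update at the heart of this kind of
// reconstruction (simplified KinectFusion-style formulation; not Yak's actual
// implementation). Each new depth frame nudges the stored distance toward the
// latest measurement, so noise from any single view averages out.
#include <algorithm>

struct Voxel
{
  float dist;    // truncated signed distance to the nearest surface
  float weight;  // how many (weighted) observations contributed so far
};

// 'measured_depth' is the depth reported by the camera along the ray through
// this voxel, 'voxel_depth' is the voxel's distance from the camera along
// that ray, and 'trunc' is the truncation band around the surface.
void integrate(Voxel& v, float measured_depth, float voxel_depth,
               float trunc, float max_weight)
{
  float sdf = measured_depth - voxel_depth;   // positive in front of the surface
  if (sdf < -trunc)
    return;                                   // far behind the surface: no update
  float tsdf = std::min(1.0f, sdf / trunc);   // clamp to the truncation band

  // Weighted running average of all observations seen so far.
  v.dist = (v.dist * v.weight + tsdf) / (v.weight + 1.0f);
  v.weight = std::min(v.weight + 1.0f, max_weight);
}
```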

Machined Al Test Part

YAK Reconstruction of Machined Al Part

Reconstructing in the real world

My group at Southwest Research Institute has been working with TSDFs since Spring 2017. This year we started work on our first commercial projects leveraging TSDFs in industrial applications. We developed our booth demo at the 2019 Automate conference to provide an open-source example of this type of system.

The set of libraries we use in these applications is called Yak (which stands for Yet Another KinectFusion, in reference to the substantial prior history of TSDF algorithms). Yak consists of two repositories: a ROS-agnostic set of core libraries implementing the TSDF algorithm, and a repository containing ROS packages wrapping the core libraries in a node with subscribers for image data and services to handle meshing and resetting the volume. Both ROS and ROS2 versions of this node are provided. A unique feature of Yak compared to previous TSDF libraries is that the pose of the sensor origin can be provided through the ROS tf system from an outside source such as robot forward kinematics or external tracking, which is advantageous for robotic applications since it leverages information that is generally already known to the system.
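
The snippet below illustrates that pattern generically (it is not Yak's actual API): each incoming depth image's camera pose is looked up from tf at the image timestamp, for example as published by robot forward kinematics, and handed to the fusion step together with the image. The frame and topic names are assumptions.

```cpp
// Generic illustration of tf-provided sensor poses (not Yak's actual API):
// the camera pose is not estimated from the images but looked up from tf at
// each frame's timestamp, e.g. as published by robot forward kinematics.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <geometry_msgs/TransformStamped.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "tsdf_frame_listener");
  ros::NodeHandle nh;

  tf2_ros::Buffer tf_buffer;
  tf2_ros::TransformListener listener(tf_buffer);

  ros::Subscriber sub = nh.subscribe<sensor_msgs::Image>(
      "/camera/depth/image_rect_raw", 1,
      [&tf_buffer](const sensor_msgs::ImageConstPtr& image) {
        try
        {
          // "world" and the image frame_id are assumptions for the example.
          geometry_msgs::TransformStamped camera_pose = tf_buffer.lookupTransform(
              "world", image->header.frame_id, image->header.stamp, ros::Duration(0.1));
          // fuseFrame(*image, camera_pose);  // hand both to the TSDF integration step
        }
        catch (const tf2::TransformException& ex)
        {
          ROS_WARN("Could not look up camera pose: %s", ex.what());
        }
      });

  ros::spin();
  return 0;
}
```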

The test system, which is in the pipeline for open-source release as a practical demonstration, uses a Realsense D435 RGB-D camera mounted on a Kuka iiwa7 to collect 3D images of a shiny metal tube bent into a previously-unseen configuration. The scans are integrated into a TSDF volume as the robot moves the camera around the tube.

The resulting mesh is processed by a ROS node using surface analysis functions from VTK to extract waypoints and tangential vectors along the length of the tube. These waypoints constitute the seed trajectory for a motion plan generated by our Descartes and Tesseract libraries which sweeps a ring along the tube while avoiding collision. Camera and turntable extrinsic calibration was performed using a nonlinear optimization function from the robot_cal_tools package.

A picture of the reconstructed mesh used to generate a robot trajectory is below. Additional information about the development of this demo (plus some neat video!) is available here. As ROS2 training is integrated into the ROS-I Consortium Americas curriculum, we look forward to sharing more lessons learned about applying Yak towards similar applications.

What next?

Yak is open-source as of July 2019. While development is ongoing and we anticipate that the library APIs will continue to evolve, we encourage interested parties to check it out at github.com/ros-industrial/yak and github.com/ros-industrial/yak_ros.

New Release - ROS Qt Creator 4.9.1 with ROS2 Support!

We are pleased to announce the release of the ROS Qt Creator Plug-in for Qt Creator 4.9 on Xenial and Bionic. The ROS Qt Creator Plug-in creates a centralized location for ROS tools to increase efficiency and simplify tasks.

Perf Flame Graph (Tutorial)

Highlights:

  • Qt Creator 4.9 introduces several new features and improvements to existing capabilities.
    • Generic Programming Language Support (Python Support!)
      • This was experimental in 4.8 but is now fully supported.
    • Perf Profiling
      • This is a powerful tool for profiling software running on Linux.

ROS settings - Add custom ROS install location

Value of Meetups to Foster Awareness and Collaboration

Recently here in the ROS-I Americas backyard we have begun organizing an open robotics meetup called SATX Robotix. The intent is to drive more face-to-face interaction around open-source software and/or robotics and broaden the types of people who may engage in supporting open-source development.

Xenex provides an overview of advancements in medical robotics

As open-source has gained momentum, particularly in industrial circles, there is an opportunity to apply lessons learned more broadly and further improve how we communicate and incentivize participation in an open-source community. This can help build the skills needed to develop technical capability, effectively disseminate information, gather feedback, and grow a diverse community that moves the needle on democratizing capability, enriching not just the target open-source projects but also the communities involved.

The SATX Robotix initiative is still in its early phases, but it has shown promise in bringing together developers, end users, students, interested hobbyists, and other elements of the business and marketing community, both to drive idea sharing and to enable richer collaboration and skill sharing than we would get just working on an issue board on GitHub.

Jorge Nicho of SwRI sets up and tests a MoveIt! demonstration ahead of the SATX Robotix Meetup in San Antonio

If you are considering leveraging open-source tools in your advanced robotics or manufacturing/industrial application, I would encourage you to see what is going on in your local tech community. If possible, see where you can engage in a meetup. If nothing else, you may find others who have dealt with similar challenges, or find those who can help you formulate your “so what” as to why you would leverage open-source software to solve your problem within your group/company.

Don’t overlook opportunities to work face-to-face with a broader set of individuals. The engagement and benefit of community is what sets the open-source development model apart, particularly from industrial/proprietary solution development approaches.

Highlights from ICRA relevant to ROS & ROS-Industrial

The International Conference on Robotics and Automation, commonly known as ICRA, is one of the premier global events for showcasing the latest and greatest results in robotics research. This year ICRA was held in Montreal, Canada, back in North America for the first time since Seattle in 2015.

ICRA is largely a forum for academic research labs to showcase their most recent results, which aim to be exploratory and forward-looking rather than off-the-shelf solutions ready for immediate adoption into industry. Nevertheless, there was still quite a bit of noteworthy technology and results on display. It was very impressive to see the huge variety of areas that researchers are exploring within robotics and the large amount of creativity on display in new approaches and algorithms.

The scale of ICRA was enormous this year, with over 1,200 papers presented over the course of the three-day conference. Along with the plenary sessions, keynote talks, and industry exhibitions, there was far too much to see and do. With such a high volume of content, a classic conference format of multiple tracks where each author presents to a sitting audience was simply not possible. Instead, the organizers opted for an interesting alternative of interactive presentations where the authors stood by a poster describing their work and conference attendees were free to spend as much time as they wanted discussing the work with the author. The new twist that I hadn’t seen before was that all presentations took place in the same large exhibition hall simultaneously, in sessions that each contained about 130-150 posters. The posters were spread throughout the hall on the outside of free-standing structures that had partitioned spaces for each poster, which were called the PODS. The PODS all being colocated in a single room created a very casual atmosphere with a lot of exploration and discussion with the paper authors.

Photo courtesy of Rishi Malhan, University of Southern California

To conclude, I wanted to highlight a few contributions that are interesting for the use of ROS or for connections to ROS-Industrial. Definitely keep an eye out for these papers to appear on IEEE Xplore soon and take a look!

Reinforcement Learning on Variable Impedance Controller for High-Precision Robotic Assembly

Jianlan Luo, Eugen Solowjow, Chengtao Wen, Juan Aparicio Ojea, Alice Agogino, Aviv Tamar, Pieter Abbeel

Here the authors address the problem of autonomous and intelligent robotic assembly, and had presented this work at the most recent ROS-Industrial annual meeting. They use a Rethink Robotics Sawyer robot (which is controlled through ROS) to perform precision assembly of a set of gears using reinforcement learning and neural networks. The core idea is that successful completion of assembly tasks requires knowledge about the types of contacts and constraints involved, such as aligning a peg with the axis of a hole before it can be inserted. Rather than having the system designer spend time and effort providing manual descriptions of these constraints, their method allows them to be learned directly from the robot’s experience. As can be seen in the video below, the robot is able to quickly learn how to deal with a number of different contacts when performing multiple different assembly tasks. This is a great demonstration of bringing machine learning into robot behaviors for industrial tasks and it will be fascinating to see how this area develops in the near future.

Assembly Video

CartesI/O: A ROS Based Real-Time Capable Cartesian Control Framework

Arturo Laurenzi, Enrico Mingo, Luca Muratore, Nikos Tsagarakis

In work that follows their participation in the DARPA Robotics Challenge, the authors introduce a framework for performing real-time Cartesian control of platforms with many degrees of freedom within a ROS system. The framework is designed to support having the controller solver run inside the real-time control loop so that the robot can react immediately to any change in the desired input. Each Cartesian controller is specific to a particular platform and particular task, but the authors provide interface layers to abstract away much of the common logic that would otherwise have to be repeatedly implemented. Because the framework supports the ROS URDF format, constraints or desired behaviors can be specified for individual links. Furthermore, each task has an auto-generated ROS API available that enables very high-level input about the task goals and provides information about the current task status. This allows a user to perform basic scripting of robot behaviors using something as simple as a short Python script, for example. This framework can be seen driving the 28 DoF COMAN+ humanoid and the 39 DoF CENTAURO quadruped robots in the video below.

CartesI/O Video

MoveIt! Task Constructor for Task-Level Motion Planning

Michael Görner, Robert Haschke, Helge Joachim Ritter, Jianwei Zhang

Here the authors provide a software tool and framework for extending MoveIt to more naturally handle planning of entire robot tasks. For nontrivial tasks, the overall robot motion is typically broken into multiple distinct pieces, each of which has its own set of planning challenges. For example, when performing a pick-and-place task, the robot has to separately create plans for approaching the object, grasping it, moving the object to the place area, and releasing it at the place point. The authors formalize a framework for each of these stages and how each one may depend on others. For example, the grasp selection will affect where the manipulator should move to at both the pick and place positions. The framework is arbitrarily scalable and allows system designers to create descriptions of entire tasks that MoveIt can solve with relative ease. The authors have open-sourced the software on GitHub (https://github.com/ros-planning/moveit_task_constructor) and are hopeful the framework will go on to become a central tool for task planning using MoveIt.

Planning a pouring task with displays for how the robot picks up the bottle (center), creates the pouring motion (left), and places it back on the table (right)

Field notes from Automate 2019, and why we’re bullish on ROS2

What makes a good industrial automation demonstration? When we started preparing for Automate 2019 back in January, a few key points came to mind. Our specialty in SwRI’s Manufacturing and Robotics Technology Department is advanced robotic perception and planning, so we decided that the robot should perform an authentic dynamic scan-and-plan process on a previously-unseen scene – as far away as we could get from a “canned” demo. We also wanted the demo to be an interactive experience to help drive discussion with visitors and entertain onlookers. These goals led us to the tube threading concept: a human would bend a piece of shiny metal tubing into a novel shape, and the robot would perceive it and plan a path to sweep a ring along it.

Michael Ripperger & Joseph Schornak on location at Automate 2019

Developing a demo system presents an opportunity to explore new ideas in a low-risk environment because the schedule and deliverables are primarily internally-motivated. Since my group had limited previous exposure to ROS2, we decided that our Automate demo should use ROS2 to the greatest possible extent. The original vision was that the system would be entirely composed of ROS2 nodes. However, due to the practical requirements of getting everything working before the ship date, we decided to use a joint ROS/ROS2 environment, with ROS motion planning and the GUI nodes communicating with the ROS2 perception nodes across the ROS-to-ROS2 bridge.

ROS2 Strengths and Challenges

In contrast to virtually every other robotics project I’ve worked on, the demo system’s perception pipeline worked consistently and reliably. Intel maintains a ROS2 driver for RealSense RGB-D cameras, which allowed us to use the D435 camera without any customization or extra development. Our YAK surface reconstruction library, based on the Truncated Signed Distance Field algorithm, helped us avoid the interreflection issues that would usually plague perception of shiny surfaces. After a couple of afternoons spent learning how to use new-to-me VTK libraries, the mesh-to-waypoint postprocessor could consistently convert tube scans into trajectory waypoints. More information about this software is available from the SwRI press release or the writeup in Manufacturing Automation.

Block Diagram of SwRI ROS-I Automate 2019 Demonstration

Motion planning turned out to be a particularly challenging problem. Compared to a traditional robot motion task like pick-and-place, which involves planning unconstrained paths through open space, the kinematic constraints of the tube threading problem are rather bizarre. While the ring tool is axially underconstrained and can be rotated freely to the most convenient orientation, it is critical that it remain aligned with the axis of the tube to avoid collision. It’s impossible to flip the ring once it’s over the tube, so if the chosen ring orientation causes the robot to encounter a joint limit halfway down the tube, tough luck! Additionally, the robot must avoid collision between the tube and robot hardware during motion. Our initial solution used Trajopt by itself, but it would sometimes introduce unallowable joint flips since it tried to optimize every path waypoint at once without a globally-optimal perspective on how best to transition between those waypoints. We added the Descartes sampling algorithm, which addressed these issues by populating Trajopt’s seed trajectory with an approximate globally-optimal path that satisfied these kinematic and collision constraints. Planning still failed occasionally: even with a kinematically-redundant Kuka iiwa7 arm, solving paths for certain tube configurations simply wasn’t feasible[^1].

TrajOpt Path Planning Implementation & Testing

[^1]: The extent of solvable tube configurations could be greatly increased by including the turntable as a controllable motion axis. Given the constraints of the iiwa7’s ROS driver, we decided that this would be, in technical software terms, a whole other can of worms.
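For readers curious what that sampling-based seeding looks like conceptually, here is a simplified sketch, not the actual Descartes or TrajOpt API: discretize the ring’s free rotation at each waypoint, generate IK candidates, and run a dynamic-programming pass over the resulting ladder graph to obtain a low-motion, flip-free seed trajectory. The `ik_solutions` helper is hypothetical.

```python
# Conceptual sketch (not the real Descartes/TrajOpt API) of seeding a trajectory
# optimizer with a globally-reasoned path over sampled ring orientations.
import math
from typing import List, Sequence


def ik_solutions(waypoint, angle: float) -> List[List[float]]:
    """Hypothetical helper: return collision-free joint solutions for the ring
    pose obtained by rotating `waypoint` about the tube axis by `angle`."""
    raise NotImplementedError


def joint_distance(a: Sequence[float], b: Sequence[float]) -> float:
    # Largest single-joint move between two configurations; penalizes flips.
    return max(abs(x - y) for x, y in zip(a, b))


def plan_seed(waypoints, samples_per_waypoint: int = 36):
    # Build the ladder graph "rungs": all candidate joint states per waypoint.
    rungs = []
    for wp in waypoints:
        rung = []
        for k in range(samples_per_waypoint):
            angle = 2.0 * math.pi * k / samples_per_waypoint
            rung.extend(ik_solutions(wp, angle))
        if not rung:
            return None  # no feasible ring orientation at this waypoint
        rungs.append(rung)

    # Dynamic programming: cheapest cumulative joint motion to reach each state.
    cost = [0.0] * len(rungs[0])
    parent = [[-1] * len(r) for r in rungs]
    for i in range(1, len(rungs)):
        new_cost = []
        for j, state in enumerate(rungs[i]):
            best, best_k = float('inf'), -1
            for k, prev in enumerate(rungs[i - 1]):
                c = cost[k] + joint_distance(prev, state)
                if c < best:
                    best, best_k = c, k
            new_cost.append(best)
            parent[i][j] = best_k
        cost = new_cost

    # Backtrack the cheapest path; this joint trajectory seeds the optimizer.
    j = min(range(len(cost)), key=cost.__getitem__)
    seed = []
    for i in range(len(rungs) - 1, -1, -1):
        seed.append(rungs[i][j])
        if i > 0:
            j = parent[i][j]
    return list(reversed(seed))
```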

We shipped the robot hardware about a week in advance of the exhibit setup deadline. Our reliance on ROS meant we could switch to simulation with minimal hassle, but there were some lingering issues with the controller-side software that had to wait until we were reunited with the robot the Saturday before the show[^2]. This contributed to moderate anxiety on Sunday evening as we worked to debug the system using real-world data. We had to cut some fun peripherals due to time constraints, such as the handheld ring wand that would let visitors race the robot. By Tuesday morning the robot was running consistently, provided we didn’t ask it to solve paths for too-complicated tubes. This freed up some time for me to walk the halls away from our booth and talk to other exhibitors and visitors.

[^2]: Our lunch upon arrival was Chicago-style deep dish pizza, which conveniently doubled as dinner that evening.

More Collaborative Robots

There were collaborative robots of all shapes and sizes on display from many manufacturers. I may have seen nearly as many collaborative robots as traditional ones! A handful were programmed to interact with visitors, offering lanyards and other branded largesse to passersby. Most of them were doing “normal robot things,” albeit intermingled with crowds of visitors without any cages or barriers, and generally at a much more sedate pace than the traditional robots. Some of the non-collaborative robots were demonstrating safety sensors that let them slow down and stop as visitors approached – I usually discovered these by triggering them accidentally.

I was surprised by the number of autonomous forklifts and pallet transporters. I’m told that there were more in 2019 than at previous shows, so I’m curious about what recent developments drove growth in this space.

I learned that ROS-Industrial has significant brand recognition. I got pulled into several conversations solely because I was wearing a ROS-I polo! Many of these discussions turned to ROS2, which produced some interesting insights. Your average roboticist-on-the-street is aware of ROS2 (no doubt having read about it on this very blog), but their understanding of its capabilities and current condition might be rather fuzzy. Many weren’t sure how to describe the key differences between ROS and ROS2, and a few weren’t even aware that ROS2 has been out in the wild for three versions! I’ll unscientifically hypothesize that a key challenge blocking wider ROS2 adoption is the lack of demonstrated success on high-visibility projects. Our demo drove some good conversation to alleviate these concerns: I could show a publicly-visible robotic system heavily reliant on ROS2 and point to the open-source native ROS2 device drivers that let it function.

Showcasing Perception and Planning Potential

In terms of demo reception, people who visited our booth were impressed that we were scanning and running trajectories on previously-unseen parts. I usually had to provide additional context to show how our perception and planning pipeline could be extended to other kinds of industrial applications. There’s a tricky balance at play here – an overly abstract demo requires some imagination on the part of the viewer to connect it to an industrial use case, but a highly application-specific demo isn’t easily generalized beyond the task at hand. Since our group specializes in application-generic robot perception and planning, I think that a demo tending towards the abstract better showcases our areas of proficiency. This is a drastically different focus from other exhibits at the show, which generally advertised a specific automation process or turnkey product. I feel like we successfully reached our target audience of people with difficult automation tasks not addressed by off-the-shelf solutions.

Development of the Industrial YAK reconstruction for the Automate Demo in ROS2

While it certainly would have been easier to adapt an already-polished system to serve as a show demo, developing a completely new one from scratch was way more fun. Improvements made to our perception and planning software were pushed back upstream and rolled into other ongoing projects. We’re now much more comfortable with ROS2, to the extent that we’ve decided that from here on out new robotics projects will be developed using ROS2. The show was a lot of fun, a great time was had by all, and I hope to see you at Automate 2021!

ROS-Industrial Consortium Europe is heading towards ROS2

With the growing excitement and curiosity surrounding ROS2, ROS-Industrial Consortium Europe (RIC-EU) had the pleasure of hosting the Spring 2019 edition of the RIC-EU Tech Workshop. It took place on May 6th and 7th at Fraunhofer IPA in Stuttgart, Germany. Some of the main drivers of DDS and ROS2 development personally presented their insights and gave hands-on sessions during the event. For this, participants were provided with USB sticks with Ubuntu Bionic, ROS Melodic, and ROS2 Crystal pre-installed (just as for all our ROS-Industrial trainings). The event was free for members of any ROS-Industrial Consortium worldwide and was fully booked, with 40 people attending from all over Europe.

On Day 1, the workshop started with RIC-EU manager Thilo Zimmermann, who welcomed the participants at Fraunhofer IPA and introduced the ROS-Industrial Consortium Europe and its EU project funding opportunity (next cut-off dates June 14 and September 13, 2019).

As ROS2 supports multiple DDS/RTPS implementations, RIC-EU was proud to host one of the most popular DDS vendors, eProsima, to explain the main concepts of DDS and present their stack at the workshop. Over five hours of presentations and hands-on workshops, Borja Outerelo Gamarra and Jaime Martin Losa covered an introduction to DDS, the standard and the motivation behind it, DDS architecture, and DDS QoS. Attendees practiced with a “hello world” example. eProsima's slides can be found here.
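As a minimal sketch of how the DDS QoS policies discussed there surface in the ROS2 Python API (illustrative only, not taken from the workshop material), a publisher can request reliable, transient-local delivery, which behaves much like a “latched” topic in ROS1:

```python
# Minimal sketch of expressing DDS QoS policies through rclpy.
import rclpy
from rclpy.qos import (QoSProfile, QoSHistoryPolicy, QoSReliabilityPolicy,
                       QoSDurabilityPolicy)
from std_msgs.msg import String

# Reliable, transient-local delivery with a history depth of 1: late-joining
# subscribers still receive the last sample.
latched_qos = QoSProfile(
    history=QoSHistoryPolicy.KEEP_LAST,
    depth=1,
    reliability=QoSReliabilityPolicy.RELIABLE,
    durability=QoSDurabilityPolicy.TRANSIENT_LOCAL,
)


def main():
    rclpy.init()
    node = rclpy.create_node('qos_demo')
    pub = node.create_publisher(String, 'status', latched_qos)
    pub.publish(String(data='hello world'))
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```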

20190506_142557[1].jpg

On Day 2, Ralph Lange from RIC-EU member BOSCH gave an in-depth presentation on the current status of ROS2. He included hands-on tasks using ROS2, showed new features, and provided information on the upcoming “Dashing Diademata” release due May 31, 2019. Ralph's presentation slides, "Current Status of ROS2 - Hands-on Feature Overview", can be found here.

20190507_090637[1].jpg

The second presentation, by Ingo Lütkebohle, also from BOSCH Corporate Research, introduced the micro-ROS activity. Ingo is one of the investigators of the EU-funded OFERA project, which ports ROS2 to “extremely resource constrained devices” (usually microcontrollers) using the new DDS-XRCE standard. He demonstrated this with a Cortex-M4 board mounted on a first-generation TurtleBot. Ingo's presentation slides can be found here.

20190507_113032[1].jpg

After a lunch break, Ludovic Delval of Fraunhofer IPA gave a hands-on workshop on how to migrate a ROS1 node to ROS2. Lastly, Harsh Deshpande, also from Fraunhofer IPA, previewed the port of the ur_modern_driver to ROS2 and presented a proposal for the action_bridge, which currently bridges between a ROS1 action client and a ROS2 action server.
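To give a taste of what such a migration involves (a generic example, not drawn from the workshop material), the familiar ROS1 talker maps onto rclpy roughly like this:

```python
# Generic example of migrating a simple ROS1 talker to ROS2: rospy's
# init/publish/Rate loop becomes an rclpy Node with a timer.
#
# ROS1 (rospy) original, for reference:
#   import rospy
#   from std_msgs.msg import String
#   rospy.init_node('talker')
#   pub = rospy.Publisher('chatter', String, queue_size=10)
#   rate = rospy.Rate(1)
#   while not rospy.is_shutdown():
#       pub.publish(String(data='hello'))
#       rate.sleep()

import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class Talker(Node):
    def __init__(self):
        super().__init__('talker')
        self.pub = self.create_publisher(String, 'chatter', 10)
        self.create_timer(1.0, self.tick)  # timers replace the rospy.Rate loop

    def tick(self):
        self.pub.publish(String(data='hello'))


def main():
    rclpy.init()
    rclpy.spin(Talker())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```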

At the end of the workshop, participants and ROS-Industrial Consortium members agreed that 2019 promises a lot of development in ROS2. In April, at the ROS-I Consortium Americas 2019 Annual Meeting, RIC members took part in an interesting panel session titled “Is ROS2 Ready for the Factory Floor?”. In June, Ludovic Delval of Fraunhofer IPA will present the latest updates at ROSCon France in Paris, and Harsh Deshpande will do the same at the ROS-Industrial AP Workshop 2019 in Singapore.

The next RIC-EU Tech Workshop is foreseen for Fall 2019 (tentative dates October 09-10). The 2019 edition of the ROS-Industrial Conference is planned for December 10-12, 2019 (save the date!).

What Took Place at the ROS-I Consortium Americas 2019 Annual Meeting

After the Automate 2019 Exhibition and Conference, the ROS-Industrial Consortium Americas held its 2019 Annual Meeting in Chicago, Illinois, on April 12. This is the primary face-to-face opportunity for the Americas membership, who have expressed interest in leveraging ROS and additional open-source solutions in industrial and manufacturing applications. This event has proven to be a great opportunity to engage with end users, OEMs, solution providers, and researchers on open-source, interoperable, agile software capabilities.

As has been the custom when held in Chicago following Automate, the meeting was confined to a single day. This led to a packed agenda, with record attendance for the co-located variant of the Americas Annual Meeting. As stated in this year’s program, “As evidenced by the activity and the sheer number of entrants into the order fulfillment/warehouse and logistics space, this area has proven that ROS-based solutions can survive and even thrive in manufacturing environments where uptime and reliability of performance are critical.” The day that followed demonstrated that there is both interesting, tangible activity and plenty of opportunity to keep innovating while leveraging open source to step-change how innovation in industrial automation takes place.

The day kicked off with strategic and technical updates from each ROS-Industrial region. The Americas Consortium reviewed its “roadmapping” effort with an emphasis on managing the transition from ROS to ROS2. Levi Armstrong shared technical developments, including a summary of Industrial YAK, a TSDF-based reconstruction package, and an approach to enable ROS-I to support both ROS and ROS2 applications moving forward.

Full House for 2019 ROS-I Americas Annual Meeting

The EU Consortium highlighted developments in the progress of open source and ROS in industry. This was followed by a ROSIN program update for the Americas audience. The Asia-Pacific Consortium discussed some of its team’s work around Dynamic Grasping and a Singapore-funded initiative, the “ROS-based National Healthcare Project,” which will be open source, expanding the application of ROS to an IoT use case.

A follow-up roadmapping workshop was held, seeking feedback on technical needs and gaps, programmatic gaps, what is working, and which application areas have the greatest needs. The intent is to gather feedback that keeps the roadmap solid, both for the transition to and/or incorporation of ROS2 and other open-source capabilities, and for ensuring that FTP topics are aligned with the demands of the membership and the industrial community.

This was followed by an overview from Dr. John Wen of Rensselaer Polytechnic Institute on Robotic Assembly of Large Structures using Vision and Force Guidance. This work was a product of the ARM Institute’s Quick Start Technology Projects. It shows the ability to realize millimeter-level assembly performance by leveraging tools such as ABB’s Externally Guided Motion interface along with visual servoing techniques.

Dr. Eugen Solowjow presented compelling work that sought to leverage Artificial Intelligence techniques to enable robot learning for path planning to perform assembly and placement tasks. This highlighted a gap in the ROS ecosystem relative to advanced AI frameworks and the inability of ROS to interoperate with these tools at this time.

We then welcomed keynote speaker Chris Morgan, Chief Innovation Officer of Bastian Solutions, a Toyota Advanced Logistics company. He talked about how ROS serves as a one-stop shop, if you will, that enables his team to innovate rapidly and come up with the next generation of warehouse automation technologies, including mobile robotics.

Ahead of the afternoon presentation portion, members presented and discussed Focused Technical Project (FTP) topics. This was followed by Fred Proctor, Group Leader of the Networked Control Systems Group at the National Institute of Standards and Technology (NIST), sharing with the membership developments in techniques for assessing robotic system performance relative to agility, and the need for a common language as robotics capabilities advance.

Vincent Tam of Microsoft’s Windows 10 IoT team presented updates on Microsoft’s Kinect and the tools that enable rich application development within the Windows and Azure ecosystems.

A panel session titled “Is ROS2 Ready for the Factory Floor?” featured Chris Lalancette of Open Robotics, Dave Coleman of PickNik and MoveIt!, Matthew Hansen of Intel, and Jerry Towler of Southwest Research Institute’s Unmanned Ground Systems Group. They discussed ROS2 experiences and some of the challenges related to its broader adoption. The discussion, moderated by SwRI and ROS-I Americas tech lead Levi Armstrong, covered university uptake of ROS2 versus ROS, why industry is pulling for ROS2, techniques for managing this transition period, and guidance for leveraging ROS2 whether starting from scratch or working from an existing ROS code base. The audience engaged in a lively conversation that added context to the publications and word of mouth that have framed the state of ROS2 at this point.

ROS2 Panel on Readiness for the Factory Floor

The afternoon session concluded with an introduction by Tormach CEO Daniel Rogge to their work on creating a ROS package for the MachineKit component HAL, the Hardware Abstraction Layer, and what this enables. This was followed by an OEM partnership highlight that served as an example of how Yaskawa enables advanced applications by supporting up-and-coming companies with compelling new ideas; in this case, Path Robotics spoke to the membership about their vision to change how robotic arc welding is deployed for small and medium manufacturers.

The day concluded with a presentation by Dr. Mitch Pryor on the work his team at the University of Texas at Austin Nuclear Robotics Group is doing to reduce operator burden and enable richer use of advanced robotics, which in certain cases leads to improved worker satisfaction and overall performance compared to legacy teleoperation applications.

It was a full day, after a full week, but we are thankful to the attendees and all the members who came in person or engaged via the online streaming of the event. For members, the presentations and the recordings of the talks and panel will be made available via the member portal. Moving forward, the ROS-Industrial Consortia globally will seek to bring back the ROS-Industrial Community meeting, a quarterly update that offers a more meaningful means of maintaining engagement throughout the year across the regional Consortia. The hope is to optimize programs such as ROSIN, provide two-way communication channels for these projects and funding sources beyond their core audience, and enable a checkpoint to ensure that ROS-Industrial as a project stays strategically synchronized and that each dollar put towards ROS-Industrial is used as effectively as possible.

Global ROS-I Team from Left to Right - Levi Armstrong (SwRI), Erik Unemyr (ROS-I AP), Chris Bang (SwRI), Thilo Zimmermann (Fraunhofer IPA), Paul Evans (SwRI), Mirko Bordignon (Fraunhofer IPA), and Matt Robinson (SwRI)

We look forward to continued action that stems from this event and all the events we have in the coming months. ROS-Industrial Asia-Pacific will hold its annual workshop June 18-20 in Singapore, and World ROS-I Day, our annual “house cleaning” on the code itself, is tentatively scheduled for the last week of June.

Thanks to all those who engaged with the ROS-Industrial global team throughout the week, including the Annual Meeting. Without your support, open source for industry would just be a tagline, but as evidenced by the progress to date, it is a reality.

SwRI Presents at America Makes TRX

Southwest Research Institute had the pleasure of hosting the America Makes TRX conference here in San Antonio, Texas, for two days in March to discuss the latest developments in additive manufacturing and its technologies. ROS-Industrial made a cameo in the lineup on Thursday the 21st, where I gave my presentation, Open Source Developments Impacting the Industrial Automation Space & Their Relevance to Additive Processing. The talk focused on the synergies between additive manufacturing and ROS.

I present on the synergies between ROS and additive manufacturing

Boasting over 100 attendees, TRX was the first America Makes event held at the Institute and – for many – their first introduction to ROS and the robotic capabilities available to the additive manufacturing community. Additive manufacturing research focuses primarily on the metallurgical problems associated with lamination and homogeneous particulate bonding, with significant effort on optimizing material properties and addressing subsequent process ills such as wavy depositions, stress localization, and prediction from discrete inspections. The introduction of ROS capabilities surrounding laser inspection and blending was of strong interest to several groups and attendees.

Follow-up tours were held of the Southwest Research Institute labs that are leveraged for ROS-Industrial application development. Here, follow-up conversations and tangible examples relative to the additive process through the complete value stream were discussed. These included more effective ways to do post-processing, alternate applications of on-the-fly material deposition and material removal, and the ability to resolve build errors that can occur during large-format printing operations.

Ben Greenberg gives a demo of SwRI’s Visual Programming IR&D.

Special thanks to Carl Popelar and Division 18 for their efforts in organizing the America Makes Technology Exchange, and to all supporting SwRI staff.