Obviously, this is a robot. And it has been since the 70’s. Big metal arms that lift, weld, or hold – over and over again. So if they look the same, where is all the innovation? Why are people calling them parts of a cyber-physical system when they used to be just called robots?
There is an incredibly long list of exciting progress, but I’ll mention just three: mobility, collaboration, and awareness.
Awareness is one of the most important. Everybody is used to their phone having a 3-axis gyroscope, an accelerometer, a light sensor, and a camera. But many also come with magnetometers, proximity sensors, thermometers, and more. There are even ones with Geiger counters. Now robots are getting all of these too (maybe not the Geiger counter). Robots can now detect whether the car frame they are about to weld is actually in place, whether they are grasping something too hard or not hard enough, whether something they are laser cutting is starting to burn, or whether their drill bit has broken off.
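To make the grasping example concrete, here is a minimal sketch of how a controller might act on force-sensor feedback. The function name and the thresholds are invented for illustration; they do not come from any real robot API.

```python
# Hypothetical grip-control sketch: keep the measured grasp force
# between a "too loose" and a "too hard" bound. Thresholds are
# illustrative values, not real robot specifications.

MIN_GRIP_N = 5.0    # below this, the part may slip out of the gripper
MAX_GRIP_N = 40.0   # above this, the part may be crushed

def adjust_grip(force_newtons: float) -> str:
    """Return the action the gripper should take for one force reading."""
    if force_newtons < MIN_GRIP_N:
        return "tighten"
    if force_newtons > MAX_GRIP_N:
        return "loosen"
    return "hold"
```

A real controller would run a loop like this many times per second, but the core decision is just a comparison against calibrated bounds.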
They are also getting wheels to move around. Just as airplanes autolanding in dense fog need an ILS at the airport, most robots today need tape or a painted trail on the ground so their software can tell them where they are allowed to go. With more sensors giving them awareness of their surroundings, their paths can be flexibly optimized. (Although this is still new – one customer told us a story of how their mobile warehouse delivery robot, designed to stop in front of people, was stuck for hours repeating the recorded phrase "please move out of the way so I may continue" to a support column in the middle of the warehouse.)
In spite of that last example, intelligence is also making tremendous strides. It is a big step forward for a robot to be able to signal "I can't do what you want me to" – for example because the drill bit broke, or, more importantly, because continuing would injure a person nearby. Robot intelligence used to mean a lot of detailed embedded programming, where low-level software is written and installed into the robot itself. Now that mobile robots need wireless communications anyway, the robot can become a set of remote sensors, with the intelligence running in a data center somewhere using more common programming paradigms. You can adjust to changes in the manufacturing process more quickly, implement the software less expensively, and connect the different machines more easily. There can even be a "digital twin" – a virtual copy of the robot – that simulates anything the robot should do before it actually does it.
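The digital-twin idea can be sketched in a few lines: validate a command against a virtual copy of the robot before the physical robot ever executes it. The class, method names, and the reach check below are all hypothetical stand-ins; a real twin would run full kinematics and collision simulations.

```python
# Illustrative digital-twin sketch: the virtual copy checks a move
# before it is dispatched to the physical robot. All names and the
# simple reach check are assumptions for illustration.

class RobotTwin:
    def __init__(self, reach_mm: float):
        self.reach_mm = reach_mm  # how far the virtual arm can extend

    def simulate_move(self, target_mm: float) -> bool:
        # A real twin would simulate kinematics and collisions;
        # here we only check that the target is within reach.
        return 0.0 <= target_mm <= self.reach_mm

def dispatch(twin: RobotTwin, target_mm: float) -> str:
    """Send the move to the robot only if the twin approves it."""
    if twin.simulate_move(target_mm):
        return f"move to {target_mm} mm"   # would go to the physical robot
    return "abort: target unreachable"
```

The design point is that the expensive or dangerous mistake happens in software, not on the factory floor.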
The combination of all of these – mobility, connectivity, and distributed intelligence – is opening up tremendous potential for collaboration. An example scenario: a mobile robot moves to one area of a warehouse, grabs the right block of metal, rolls up to a milling machine, and loads it into the machine, which then carves it into a product or part. To do this, the robot asks the warehouse management system where it needs to go, asks the operations system which milling machine it should load, and positions itself with millimeter accuracy in front of that machine so it can mount the piece without breaking the piece, itself, or the milling machine. Part of the intelligence is knowing how to signal to the milling machine that the piece is mounted, so that the milling machine (also a robot) can shut its door and start milling. And when it is done, the milling machine signals that a robot should come, unload, and reload.
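The steps above can be sketched as a simple message flow. The warehouse management and operations systems come from the scenario in the text; the class and function names, the stubbed answers, and the step list are invented for illustration.

```python
# Hedged sketch of the loading scenario's message flow. The two
# backend systems are stubs returning canned answers; a real
# deployment would query live services.

class WarehouseManagementSystem:
    def locate_part(self, part_id: str) -> str:
        return "aisle-7/shelf-3"      # stub: where the part sits

class OperationsSystem:
    def assign_machine(self, kind: str) -> str:
        return "mill-2"               # stub: which machine to load

def plan_loading_job(wms, ops, part_id: str) -> dict:
    """Assemble a job plan by querying the two backend systems."""
    pick = wms.locate_part(part_id)           # ask the WMS where to go
    machine = ops.assign_machine("milling")   # ask operations which machine
    return {
        "pick": pick,
        "deliver_to": machine,
        "steps": ["navigate", "grasp", "navigate",
                  "mount", "signal_mounted", "await_unload_request"],
    }
```

The point of the structure is that the mobile robot itself holds none of this knowledge; it executes a plan assembled from systems that do.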
So industrial robots will still look like their welding, lifting, and holding predecessors from the 70's, because they will continue to do all of that. But they can be on wheels, be aware of what is going on in the plant, communicate it to other machines, and collaborate to achieve the optimal next step for production.
Look out for my next post where I will discuss robotics in the Automotive industry.