How Humans Tell Robots What to Do


November 11, 2019 | Bryan Hellman

The modern production floor is changing rapidly as robotics, automation, and artificial intelligence converge to enhance productivity in the manufacturing sector. A key driver of this convergence is the combination of advances in robotics and communication technologies, which has expanded applications such as wireless interfaces in industrial robotics.

The first industrial robot was a robotic arm called Unimate #001, which relied on hydraulic actuators for control. Industrial robotics advanced further in the 1970s with the adoption of programmable logic controllers (PLCs). During this period, Human-Robot Interaction (HRI) was one-directional: operators pressed buttons, and the robots responded.

As the 1980s, 1990s, and 2000s progressed, the introduction of wireless technologies such as Wi-Fi, Bluetooth, and 3G and 4G (and soon, 5G) wide-area wireless networks transformed HRI into a two-way communications system. The ways humans interact with robots evolved in parallel.

A graphical user interface (GUI) enables the user to control the robot through pictures or images displayed on the screen of a device. The images are captured by a camera mounted on the robot and transmitted to the user. The advantages of a GUI include making human-robot interaction more intuitive and engaging. GUI devices present data from sensors, which is vital for decision-making, and they restrict user input to valid ranges or units, which improves the accuracy of execution. The main disadvantage of a GUI is that it can present complex or inconsistent graphical layouts that require the user to learn both complicated commands and the robot's hardware and software in order to operate it.

Yaskawa Motoman's Smart Pendant, seen here with a GP8 robot, aims to ease robot control functions. Source: Yaskawa Motoman
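To illustrate the input-restriction idea, here is a minimal sketch in which a GUI layer accepts a joint angle only if it falls inside preset limits before building a command. The joint names, limits, and command format are hypothetical, not taken from any vendor's API.

```python
# Minimal sketch of GUI-style input validation for a robot command.
# Joint names, limits, and the command string are illustrative assumptions.

JOINT_LIMITS_DEG = {          # valid ranges the GUI enforces
    "base": (-170.0, 170.0),
    "shoulder": (-90.0, 135.0),
    "elbow": (-120.0, 120.0),
}

def validated_move(joint: str, angle_deg: float) -> str:
    """Build a move command only if the requested angle is within limits."""
    if joint not in JOINT_LIMITS_DEG:
        raise ValueError(f"Unknown joint: {joint}")
    lo, hi = JOINT_LIMITS_DEG[joint]
    if not lo <= angle_deg <= hi:
        raise ValueError(f"{joint} angle {angle_deg} outside [{lo}, {hi}]")
    return f"MOVE {joint} {angle_deg:.1f}"

print(validated_move("elbow", 45.0))   # MOVE elbow 45.0
```

Because the interface rejects out-of-range values before they ever reach the robot, a mistyped angle produces an error message rather than an unintended motion.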

A good example of a GUI application is the Flexible Graphic User Interface (FlexGUI), developed by PPM AS of Norway and NACHI of Japan. This interface bridges the gap between humans and robots by improving how the robot learns, elevating it closer to the level of its human controller. Another successful GUI application similar to FlexGUI is the FlexPendant, developed by ABB under its Robot Application Builder (RAB). Both FlexPendant and FlexGUI let users customize their own graphical interfaces, and both can be deployed on a personal computer or a teach pendant.

An in-depth look at both GUI applications suggests that FlexGUI is more flexible, user-friendly, and advanced than FlexPendant. FlexGUI provides an easily accessible interface for learners, with the option to upgrade to advanced functionality. This is a major advantage over FlexPendant, particularly for trainees and recruits who need time to learn the basic operations of industrial robots before attempting advanced functions. FlexGUI's interface also gives users a custom-created screen for every industrial cell, as well as action buttons and monitoring tools that can easily be customized based on task and priority.

A command language interface (CLI) requires the user to control the robot with an existing programming language. The first advantage of a CLI is that commands are easy to execute once users learn the language. Second, unlike interfaces that require operators to understand and remember several steps, a CLI requires the user to understand only the programming language.

Disadvantages of the command interface include the fact that some CLI devices use complex command sets that require the user to learn both the complicated commands and detailed information about the robot's hardware and software. In addition, a mix-up in the command language can be disastrous for the system.
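As a rough sketch of how a command language might drive a robot, the snippet below sends plain-text commands to a controller over TCP and reads back its reply. The address, port, and command grammar (MOVE, GRIP) are illustrative assumptions, not a real vendor protocol.

```python
# Sketch of a command-language interface: plain-text commands sent to a
# robot controller over TCP. Endpoint and grammar are hypothetical.
import socket

ROBOT_ADDR = ("192.168.0.50", 30002)   # assumed controller endpoint

def send_command(cmd: str) -> str:
    """Send one text command and return the controller's reply."""
    with socket.create_connection(ROBOT_ADDR, timeout=2.0) as sock:
        sock.sendall((cmd + "\n").encode("ascii"))
        return sock.recv(1024).decode("ascii").strip()

# A small program expressed entirely in the command language:
for cmd in ["MOVE 0.40 0.10 0.25", "GRIP ON", "MOVE 0.40 0.10 0.50"]:
    print(cmd, "->", send_command(cmd))
```

The sketch also hints at the danger the article notes: a single transposed token, such as "GRIP ON" sent at the wrong moment, goes straight to the controller with no GUI-style validation in between.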

A controller-based interface, where a human uses something similar to a video game controller (and in some cases, an actual video game controller), offers an accurate, real-time view of the environment as the user maneuvers the robot through obstacles to accomplish its tasks.

Auris Health's Monarch Platform lets doctors direct the robot via a game-style controller. Image: Auris Health
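A minimal sketch of controller-based teleoperation is shown below: gamepad stick positions are read with pygame and scaled into velocity commands. The send_velocity() function, the axis mapping, and the speed cap are assumptions for illustration, not any specific robot's API.

```python
# Sketch of controller teleoperation: joystick axes mapped to velocities.
import pygame

MAX_SPEED_MPS = 0.5   # assumed safety cap on commanded speed

def send_velocity(vx: float, vy: float) -> None:
    """Stand-in for the wireless link to the robot."""
    print(f"cmd_vel: vx={vx:+.2f} m/s  vy={vy:+.2f} m/s")

pygame.init()                          # also initializes joystick support
stick = pygame.joystick.Joystick(0)    # first connected gamepad

while True:
    pygame.event.pump()                # refresh joystick state
    vx = -stick.get_axis(1) * MAX_SPEED_MPS   # left stick up/down -> forward
    vy = stick.get_axis(0) * MAX_SPEED_MPS    # left stick left/right -> strafe
    send_velocity(vx, vy)
    pygame.time.wait(50)               # roughly 20 Hz command rate
```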

One recent example is the successful trial of beyond-visual-line-of-sight (BVLOS) drones using 4G cellular connectivity to deliver medical supplies. The trial demonstrated an unmanned aerial vehicle (UAV) fitted with an onboard Internet of Things (IoT) router, which provided LTE cellular connectivity for video and control data between the UAV and its pilot. The trial offered insight into potential future applications of user interfaces in industrial robotics: wireless interfaces can be adapted so that robots learn from users, which could increase robot intelligence and reduce the need for human controllers. A major disadvantage is that devices using this type of interface require highly skilled workers. For example, flying the BVLOS drone during the trial demanded skills comparable to those of an aircraft pilot, which prevents untrained personnel from operating such robots.

A gesture-based interface enables users to operate industrial robots with hand gestures, where an arm motion commands a specific movement of the robot. This is the most straightforward of the interfaces. However, a gesture interface requires the robot and the human to be in the same place while the robot works, which limits its use when the robot is in a dangerous location. In addition, gesturing continuously becomes tiring for humans over time.
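The mapping at the heart of a gesture interface might look like the following sketch, where a recognizer (stubbed out here as a hypothetical placeholder for a camera-plus-model pipeline) emits gesture labels that are translated into preprogrammed motions.

```python
# Sketch of a gesture-based interface: recognized gesture labels are
# mapped to preprogrammed robot motions. Labels and actions are assumed.

GESTURE_ACTIONS = {
    "point_left":  "MOVE LEFT",
    "point_right": "MOVE RIGHT",
    "palm_up":     "LIFT",
    "fist":        "STOP",            # a closed fist halts the robot
}

def recognize_gesture(frame) -> str:
    """Placeholder: a real system would run a vision model on the frame."""
    return "fist"

def handle_frame(frame) -> str:
    label = recognize_gesture(frame)
    # Unknown gestures are ignored rather than guessed, for safety.
    return GESTURE_ACTIONS.get(label, "HOLD")

print(handle_frame(frame=None))   # STOP
```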

In a video example, MIT CSAIL researchers show how a robot can be supervised through brain and muscle signals.

Automatic speech recognition (ASR) has enabled convenient voice control of industrial robots by converting speech into text. Voice control uses a GUI with a microphone to issue commands and a display to view feedback. The speech signal is captured, filtered, converted into text, and matched against preprogrammed text commands by the processor.

The voice control procedure for industrial robots uses a defined command syntax that starts with a trigger word, such as "Robot One," which activates speech recognition. The robot replies by sending the text "Yes Master." The user then utters preprogrammed command words, for example, "Move to Origin," which commands the robot to return to the start of the assembly line.
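Here is a minimal sketch of that trigger-then-command flow. The transcribe() function is a stand-in for a real ASR engine, and the command set and action names are assumptions for illustration.

```python
# Sketch of voice control: wait for the trigger phrase, acknowledge,
# then match the next utterance against preprogrammed commands.

TRIGGER = "robot one"
COMMANDS = {
    "move to origin": "MOVE_TO_ORIGIN",   # return to start of the line
    "open gripper":   "GRIPPER_OPEN",
    "stop":           "EMERGENCY_STOP",
}

def transcribe(audio) -> str:
    """Placeholder: a real ASR engine would convert audio to text here."""
    return audio  # in this sketch the 'audio' chunks are already text

def voice_loop(audio_stream):
    awake = False
    for chunk in audio_stream:
        text = transcribe(chunk).lower().strip()
        if not awake:
            if text == TRIGGER:
                print("Yes Master")       # acknowledgement from the article
                awake = True
        else:
            action = COMMANDS.get(text)
            print(f"executing {action}" if action else f"unknown: {text!r}")
            awake = False                 # require the trigger word again

voice_loop(["Robot One", "Move to Origin"])
```

Requiring the trigger word before every command keeps stray shop-floor speech from being matched against the command list.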

Future applications of interfaces in industrial robotics will rely heavily on data-processing technologies, with the IoT and cyber-physical systems (CPS) serving as the nervous system of smart factories and manufacturing. Future technological advances will facilitate total interconnection, where interfaces consist of smart control systems, sensors, communication systems, embedded terminals, and CPS, ensuring interconnection between robots and all other equipment in the factory.

Wireless interfaces may also be applied in the future to attain total integration, using smart networks built on CPS to achieve full connectivity between humans and industrial robots, as well as among the robots and other equipment.
