What Robots Need to Succeed: Machine Learning to Teach Effectively

With machine learning, algorithms are automatically generated from large datasets, speeding development and reducing the difficulty of creating complex systems, including robotics systems. While data at scale is what powers accurate machine learning, the data used to train ML models must also be accurate and of high quality.

By Hyun Kim | July 31, 2020

The mid-twentieth-century sociologist David Riesman was perhaps the first to wonder with unease what people would do with all of their free time once the encroaching machine automation of the 1960s liberated humans from their menial chores and decision-making. His prosperous, if anxious, vision of the future only half came to pass, however, as the complexities of life expanded to continually fill the days of both man and machine. Work alleviated by industrious machines, such as robotics systems, in the ensuing decades only freed humans to create increasingly elaborate new tasks to be labored over. Rather than give us more free time, the machines gave us more time to work.

Machine Learning

Today, the primary man-made assistants helping humans with their work are decreasingly likely to take the form of an assembly line of robot limbs or the robotic butlers first dreamed up during the era of the Space Race. Three quarters of a century later, it is robotic minds, and not necessarily bodies, that are in demand within nearly every sector of business. But humans can only teach artificial intelligence so much, or at least at so great a scale. Enter Machine Learning, the field of study in which algorithms and physical machines are taught using enormous caches of data. Machine learning spans many different disciplines, with Deep Learning being a major subset.

Deep Learning Arrives

Deep Learning utilizes neural network layers to learn patterns from datasets. The field was first conceived 20-30 years ago, but did not achieve popularity due to the limitations of computational power at the time. Today Deep Learning is finally experiencing its star turn, driven by the explosive potential of Deep Neural Network algorithms and hardware advancements. Deep Learning requires enormous amounts of computational power, but can ultimately be very powerful if one has enough computational capacity and the required datasets.
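
Even a toy example makes the layered structure concrete. The sketch below stacks a few fully connected layers and trains them on random data; PyTorch is used purely for illustration, since the article does not name a framework, and every shape and hyperparameter here is an assumption.

```python
# A minimal sketch of a deep neural network: stacked layers that learn
# patterns from labeled examples. PyTorch is illustrative only; the
# article does not prescribe any framework.
import torch
import torch.nn as nn

# Three stacked layers: each nn.Linear learns a weight matrix, and the
# ReLU nonlinearities let the network represent non-linear patterns.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 3),   # e.g., 3 output classes
)

# Toy dataset: 256 random feature vectors with random class labels.
x = torch.randn(256, 16)
y = torch.randint(0, 3, (256,))

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# The "enormous computational power" the author mentions comes from
# repeating this loop over far larger datasets and far deeper networks.
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```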

So who teaches the machines? Who decides what AI needs to know? First, engineers and scientists decide how AI learns. Domain experts then advise on how robots need to function and operate within the scope of the task being addressed, whether that is assisting warehouse logistics teams, security consultants, or other specialists.

Planning and Learning

When it comes to AI receiving these inputs, it is important to make the distinction between Planning and Learning. Planning involves scenarios in which all the variables are already known, and the robot just has to work out the pace at which to move each joint to complete a task such as grabbing an object. Learning, on the other hand, involves a more unstructured, dynamic environment in which the robot has to anticipate countless different inputs and react accordingly.
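
To make the distinction tangible, here is a minimal sketch of the Planning case: with the start pose, goal pose, and time budget all known in advance, the joint pacing falls out of simple arithmetic. The joint angles, timing, and update rate below are invented for illustration.

```python
# A rough sketch of Planning: every variable is known in advance (start
# pose, goal pose, time budget), so the robot can simply compute the pace
# of each joint. NumPy only; values are illustrative.
import numpy as np

start_angles = np.array([0.0, 0.5, -0.3])   # current joint angles (radians)
goal_angles = np.array([1.2, -0.4, 0.8])    # pose that grabs the object
duration_s = 2.0                            # time budget for the motion
control_hz = 100                            # controller update rate

steps = int(duration_s * control_hz)
# Linear interpolation in joint space: at each tick every joint moves a
# fixed fraction of the way toward its goal.
trajectory = np.linspace(start_angles, goal_angles, steps)
joint_velocity = (goal_angles - start_angles) / duration_s
print(trajectory.shape, joint_velocity)

# Learning, by contrast, would replace this closed-form plan with a policy
# (for example, a neural network) whose behavior is estimated from data,
# because the environment is too unstructured to enumerate in advance.
```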

Learning can take place via Demonstrations (physically guiding the robot's movements through practice), Simulations (3D artificial environments), or even by being fed videos or data of a person or another robot performing the task it hopes to master for itself. The latter of these is a form of Training Data, a set of labeled or annotated datasets that an AI algorithm can use to recognize and learn from. Training Data is increasingly necessary for today's complex Machine Learning behaviors. For ML algorithms to pick up patterns in data, ML teams need to feed them large amounts of data.
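
In practice, "labeled or annotated data" is simply raw samples paired with human-supplied answers. The sketch below shows the shape of such a dataset and hands it to an off-the-shelf classifier; the file names, labels, and placeholder features are all assumptions, and scikit-learn stands in for whatever model a team actually uses.

```python
# A hedged sketch of what Training Data looks like: raw samples paired
# with human-supplied labels. File names and label names are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

labeled_examples = [
    {"image": "frame_0001.jpg", "label": "person"},
    {"image": "frame_0002.jpg", "label": "forklift"},
    {"image": "frame_0003.jpg", "label": "pallet"},
    # ...thousands more annotations; scale is what lets patterns emerge
]

# Once labels exist, any supervised algorithm can consume them. Here a
# scikit-learn classifier stands in for the ML model; the features would
# normally be extracted from the images themselves, not random numbers.
features = np.random.rand(len(labeled_examples), 8)  # placeholder features
labels = [ex["label"] for ex in labeled_examples]
model = LogisticRegression(max_iter=1000).fit(features, labels)
```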

Accuracy and Abundance

Accuracy and abundance of data are critical. A diet of inaccurate or corrupted data will result in the algorithm not being able to learn correctly, or drawing the wrong conclusions. If your dataset contains only Chihuahuas and you input a picture of a blueberry muffin, you will still get a Chihuahua back, because the model has never seen anything else. This is a failure of proper data distribution.
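
The Chihuahua-and-muffin failure is easy to see in code: a classifier can only ever choose among the classes it was trained on, so an out-of-distribution image still gets forced into one of them. The class names and logit values below are invented for illustration.

```python
# A sketch of the failure mode: a model trained only on dog breeds has no
# way to say "none of the above". Softmax splits its confidence across the
# classes it knows, so a muffin photo still comes back as some breed.
import numpy as np

classes = ["chihuahua", "pug", "beagle"]

def predict(logits):
    """Softmax over the known classes only -- probabilities sum to 1."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return classes[int(np.argmax(probs))], probs

# Hypothetical logits a trained model might produce for a blueberry-muffin
# photo: weak evidence everywhere, yet the argmax is still a dog breed.
muffin_logits = np.array([0.9, 0.4, 0.2])
label, probs = predict(muffin_logits)
print(label, probs)   # -> "chihuahua", even though no dog is present
```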

Insufficient training data will result in a stunted learning curve, and the model might never reach the full performance it was designed for. Enough data to encompass the majority of imagined scenarios and edge cases alike is critical for true learning to take place.
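
One way a team might check whether its dataset is sufficient is to plot a learning curve: train on progressively larger subsets and see whether validation accuracy is still climbing. The synthetic dataset and logistic-regression model below are placeholders, not anything prescribed by the article.

```python
# A sketch of a learning-curve check for "do we have enough data?".
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in dataset; a real team would use its own labeled data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

train_sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

# If validation accuracy is still rising at the largest training size, the
# model is data-starved; if it has flattened, more of the same data will
# not help and missing edge cases need to be collected instead.
for n, score in zip(train_sizes, val_scores.mean(axis=1)):
    print(f"{n:>5} samples -> validation accuracy {score:.3f}")
```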

Hard at Work

Machine Learning is currently being deployed across a wide array of industries and types of applications, including those involving robotics systems. For example, unmanned vehicles are currently assisting the construction industry, deployed across live worksites. Construction companies use data training platforms such as Superb AI to create and manage datasets that can teach ML models to avoid humans and animals, and to engage in assembling and building.

In the medical sector, research labs at renowned international universities deploy training data to help computer vision models recognize tumors within MRIs and CT scans. These models can eventually be used not only to accurately diagnose and prevent diseases, but also to train medical robots for surgery and other life-saving procedures. Even the best doctor in the world has a bad night's sleep sometimes, which can dull focus the next day. But a properly trained robotic tumor-hunting assistant can perform at peak efficiency every day.

Living Up to the Potential

So what's at stake here? There's a tremendous opportunity for training data, Machine Learning, and Artificial Intelligence to help robots live up to the potential that Riesman imagined all those decades ago. Technology companies employing complex Machine Learning initiatives have a responsibility to educate and create trust within the general public, so that these advancements can be permitted to truly help humanity level up. If the world can deploy AI that is well trained, well built, and well purposed, coupled with advanced robotics, then we may very well live to see some of that leisure time that Riesman was so nervous about. I think most people today would agree that we certainly could use it.

Hyun Kim, Co-founder and CEO, Superb AI

Hyunsoo (Hyun) Kim is the co-founder and CEO of Superb AI, and is on a mission to democratize data and artificial intelligence. With a background in Deep Learning and Robotics from his PhD studies at Duke University and a career as a Machine Learning Engineer, Kim saw the need for a more efficient way for companies to handle machine learning training data. Superb AI enables companies to create and manage the enormous amounts of data they need to train machine learning algorithms, lowering the hurdle for industries to adopt the technology. Kim was selected as the featured honoree for the Enterprise Technology category of Forbes 30 Under 30 Asia 2020, and Superb AI joined Y Combinator, a prominent Silicon Valley startup accelerator, last year.
