Myth Busting Artificial Intelligence | WIRED

We've all been seeing hype and excitement around artificial intelligence, big data, machine learning and deep learning. There's also a lot of confusion about what they really mean and what's actually possible today. These terms are often used arbitrarily, and sometimes interchangeably, which only perpetuates the confusion.

So, let's break down these terms and offer some perspective.

Artificial Intelligence is a branch of computer science dealing with algorithms inspired by various facets of natural intelligence. It covers tasks that normally require human intelligence, such as visual perception, speech recognition, problem solving and language translation. Artificial intelligence can be seen in many everyday products, from the intelligent personal assistants in your smartphone to the Xbox 360 Kinect camera, which lets you interact with games through body movement. There are also well-known examples of AI that are more experimental, from the self-aware Super Mario to the widely discussed driverless car. Other, less commonly discussed examples include sifting through millions of images to surface notable insights.

Big Data is an important part of AI and is defined as data sets so large that they cannot be analyzed, searched or interpreted using traditional data processing methods. As a result, they have to be analyzed computationally to reveal patterns, trends and associations. This computational analysis has, for instance, helped businesses improve customer experience and their bottom line by better understanding human behavior and interactions. Many retailers now rely heavily on Big Data to adjust pricing in near-real time for millions of items, based on demand and inventory (a toy sketch of that idea follows below). Processing Big Data to make predictions or decisions like this, however, often requires Machine Learning techniques.
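
To make the retail pricing example concrete, here is a minimal sketch in Python of how such an analysis might look. Everything here is hypothetical: the file format, the single-pass aggregation and the pricing rule are invented for illustration, not drawn from any real retailer's system.

    # A minimal sketch of "analyze computationally": streaming aggregation
    # over a transaction log too large to load into memory at once.
    # File format and pricing rule are hypothetical, for illustration only.
    from collections import Counter

    def demand_by_item(path):
        """One pass over a huge CSV of 'item_id,quantity' rows."""
        counts = Counter()
        with open(path) as f:
            for line in f:  # never holds the full file in memory
                item_id, qty = line.rstrip().split(",")
                counts[item_id] += int(qty)
        return counts

    def adjust_price(base_price, demand, stock):
        """Toy near-real-time rule: raise price as demand outstrips stock."""
        pressure = min(demand / max(stock, 1), 2)  # cap the adjustment
        return round(base_price * (1 + 0.1 * pressure), 2)

A single streaming pass like this scales to files far larger than memory; real systems distribute the same aggregation across many machines.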

Machine Learning is a form of artificial intelligence involving algorithms that can learn from data. Such algorithms operate by building a model from example inputs and using that model to make predictions or decisions, rather than following only explicitly programmed instructions. Many everyday decisions can be made with machine learning; Nest's learning thermostats are one example. Machine Learning is also widely used in spam detection, credit card fraud detection, and the product recommendation systems behind Netflix and Amazon.
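
As an illustration of learning from data rather than from hand-written rules, here is a minimal sketch in Python using scikit-learn: a toy spam classifier. The messages and labels are invented for the example, and a real system would train on far more data.

    # A minimal sketch of "learning from data": a toy spam classifier
    # that derives its own rules from labeled examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = [
        "win a free prize now",    # spam
        "claim your free reward",  # spam
        "meeting moved to 3pm",    # not spam
        "lunch tomorrow?",         # not spam
    ]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

    # Turn raw text into word-count features the model can learn from.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(messages)

    # The model builds its decision rule from the examples,
    # not from explicitly programmed instructions.
    model = MultinomialNB()
    model.fit(X, labels)

    print(model.predict(vectorizer.transform(["free prize inside"])))  # -> [1]

Nothing in the code says "free means spam"; the model infers that association from the training examples, which is the essence of the definition above.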

Deep Learning is a class of machine learning techniques that constructs numerous layers of abstraction to map inputs to classifications more accurately. The abstractions that Deep Learning methods form are often observed to be human-like, and the big breakthrough in this field has been the scale of abstraction that can now be achieved. In recent years this has produced breakthroughs in computer vision and speech recognition accuracy. Deep Learning is inspired by a simplified model of the way neural networks are thought to operate in the brain.
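
To give a feel for those layers of abstraction, here is a minimal sketch in Python of a two-layer neural network trained with plain NumPy on the toy XOR problem, which no single linear layer can solve. The network size, learning rate and iteration count are arbitrary choices for the example.

    # A minimal sketch of "layers of abstraction": a two-layer network
    # learning XOR by gradient descent. Hyperparameters are arbitrary.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8))  # layer 1: raw inputs -> hidden features
    W2 = rng.normal(size=(8, 1))  # layer 2: hidden features -> class

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1)    # intermediate abstraction of the inputs
        out = sigmoid(h @ W2)  # final prediction built on that abstraction
        # Backpropagate the error through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        W1 -= 0.5 * X.T @ d_h

    print(out.round(3))  # should approach [[0], [1], [1], [0]]

The hidden layer learns intermediate features of the inputs that make the final classification possible; modern deep networks stack many such layers, which is where the "scale of abstraction" breakthrough comes from.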

No doubt AI is in a hype cycle these days. Recent breakthroughs in Distributed AI and Deep Learning, paired with the ever-increasing need to derive value from the huge stashes of data being collected in every industry, have helped renew interest in AI. Unfortunately, along with the hype there has also been much concern about the risks of AI. In my opinion, much of this concern is misplaced and unhelpful: most of the concerns raised apply equally to technology in general, and the fact that this particular branch of technology is inspired by natural intelligence should not make it more or less of a concern.


As mortal humans, we do not understand the inner workings of many of the technologies we use, and in this age of information many decisions are already being made for us automatically by computers. If not understanding how the technologies around us work is concerning, then there is plenty to be concerned about before we start worrying about AI. The fact of the matter is that AI technologies already enable many of the products and services we know and love, so it is better to understand what these technologies are and how they work than to believe the Hollywood-style hype about futuristic scenarios.

When it comes to the potential of the recent AI breakthroughs, there is, in spite of the hype, much to be excited about. A vast and growing amount of data related to critical problems remains mostly unmined, unrefined and un-monetized, because we have lacked the ability to analyze that data and use it to make intelligent, bias-free decisions. Companies should be using refined data to make the right decisions and to help solve the world's most vexing challenges. The speed and computing scale required to make advances on such mission-critical problems has not existed until now.
