Revisiting the rise of A.I.: How far has artificial intelligence come since 2010? – Digital Trends

2010 doesn't seem all that long ago. Facebook was already a giant, time-consuming leviathan; smartphones and the iPad were a daily part of people's lives; The Walking Dead was a big hit on televisions across America; and the most talked-about popular musical artists were the likes of Taylor Swift and Justin Bieber. So pretty much like life as we enter 2020, then? Perhaps in some ways.

One area where things most definitely have moved on in leaps and bounds, however, is artificial intelligence. Over the past decade, A.I. has made some huge advances, both technically and in the public consciousness, that mark this out as one of the most important ten-year stretches in the field's history. What have been the biggest advances? Funny you should ask; I've just written a list on exactly that topic.

To most people, few things say "A.I. is here" quite like seeing an artificial intelligence defeat two champion Jeopardy! players on prime-time television. That's exactly what happened in 2011, when IBM's Watson computer trounced Brad Rutter and Ken Jennings, the two highest-earning American game show contestants of all time, at the popular quiz show.

It's easy to dismiss attention-grabbing public displays of machine intelligence as hype-driven spectacle rather than serious, objective demonstration. What IBM had developed was seriously impressive, though. Unlike a game such as chess, which features rigid rules and a limited board, Jeopardy! is far less predictable. Questions can be about anything and often involve complex wordplay, such as puns.

"I had been in A.I. classes and knew that the kind of technology that could beat a human at Jeopardy! was still decades away," Jennings told me when I was writing my book Thinking Machines. "Or at least I thought that it was." At the end of the game, Jennings scribbled a sentence on his answer board and held it up for the cameras. It read: "I for one welcome our new robot overlords."

October 2011 is most widely remembered by Apple fans as the month in which company co-founder and CEO Steve Jobs passed away at the age of 56. However, it was also the month in which Apple unveiled its A.I. assistant Siri alongside the iPhone 4S.

The concept of an A.I. you could communicate with via spoken words had been dreamed about for decades. Former Apple CEO John Sculley had, remarkably, predicted a Siri-style assistant back in the 1980s, getting the date of its arrival right almost down to the month. But Siri was still a remarkable achievement. True, its initial implementation had some glaring weaknesses, and Apple arguably has never managed to offer a flawless smart assistant. Nonetheless, it introduced a new type of technology that rivals quickly pounced on, spawning everything from Google Assistant to Microsoft's Cortana to Samsung's Bixby.

Of all the tech giants, Amazon has arguably done the most to advance the A.I. assistant in the years since. Its Alexa-powered Echo speakers have not only shown the potential of these A.I. assistants; they've demonstrated that they're compelling enough to exist as standalone pieces of hardware. Today, voice-based assistants are so commonplace they barely even register. Ten years ago, most people had never used one.

Deep learning neural networks are not wholly an invention of the 2010s. The basis for today's artificial neural networks traces back to a 1943 paper by researchers Warren McCulloch and Walter Pitts. A lot of the theoretical work underpinning neural nets, such as the breakthrough backpropagation algorithm, was pioneered in the 1980s. Some of the advances that led directly to modern deep learning were carried out in the first years of the 2000s, with work like Geoff Hinton's advances in unsupervised learning.
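To see what backpropagation actually does, here is a minimal sketch in plain Python. The network is a made-up toy (one sigmoid hidden unit, two weights), not any production system: the gradient of the loss with respect to each weight is computed by applying the chain rule backward from the output, then sanity-checked against a finite-difference estimate.

```python
import math

def forward(x, w1, w2):
    """Tiny network: one sigmoid hidden unit feeding one linear output."""
    h = 1.0 / (1.0 + math.exp(-(w1 * x)))   # hidden activation
    y = w2 * h                              # network output
    return h, y

def loss_and_grads(x, target, w1, w2):
    """Squared-error loss and its gradients, computed via backpropagation."""
    h, y = forward(x, w1, w2)
    loss = 0.5 * (y - target) ** 2
    # Chain rule, applied backward from the output:
    dl_dy = y - target                   # d(loss)/d(output)
    dl_dw2 = dl_dy * h                   # gradient for the output weight
    dl_dh = dl_dy * w2                   # error propagated to the hidden unit
    dl_dw1 = dl_dh * h * (1 - h) * x     # sigmoid derivative, then the input
    return loss, dl_dw1, dl_dw2

# Sanity check: the analytic gradient should match a numerical estimate.
x, target, w1, w2, eps = 1.5, 0.8, 0.4, -0.3, 1e-6
loss, g1, g2 = loss_and_grads(x, target, w1, w2)
num_g1 = (loss_and_grads(x, target, w1 + eps, w2)[0] - loss) / eps
print(abs(g1 - num_g1) < 1e-4)   # True: backprop agrees with brute force
```

The point of the 1980s breakthrough was exactly this: the chain rule lets you get every gradient in one backward pass, instead of perturbing each weight one at a time.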

But the 2010s are the decade the technology went mainstream. In 2010, researchers George Dahl and Abdel-rahman Mohamed demonstrated that deep learning speech recognition tools could beat what were then the state-of-the-art industry approaches. After that, the floodgates were opened. From image recognition (for example, Jeff Dean and Andrew Ng's famous paper on identifying cats) to machine translation, barely a week went by when the world wasn't reminded just how powerful deep learning could be.

It wasn't just a good PR campaign either, the way an unknown artist might finally stumble across fame and fortune after doing the same work in obscurity for decades. The 2010s are the decade in which the quantity of available data exploded, making it possible to leverage deep learning in a way that simply wouldn't have been possible at any previous point in history.

Of all the companies doing amazing A.I. work, DeepMind deserves its own entry on this list. Founded in September 2010, DeepMind was a company most people hadn't heard of until it was bought by Google for what seemed like a bonkers $500 million in January 2014. DeepMind has more than made up for it in the years since, though.

Much of DeepMind's most public-facing work has involved the development of game-playing A.I.s, capable of mastering computer games ranging from classic Atari titles like Breakout and Space Invaders (with the help of some handy reinforcement learning algorithms) to, more recently, attempts at StarCraft II and Quake III Arena.

Demonstrating the core tenet of machine learning, these game-playing A.I.s got better the more they played. In the process, they were able to form new strategies that, in some cases, even their human creators weren't familiar with. All of this work helped set the stage for DeepMind's biggest success of all.
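The "got better the more it played" loop can be illustrated with tabular Q-learning, one of the simplest reinforcement learning algorithms; DeepMind's Atari agents paired this same idea with deep networks. Everything below is a made-up toy for illustration: the "game" is a five-state corridor with a reward at the right end, and the agent learns, purely from trial and error, that walking right pays off.

```python
import random

random.seed(0)
N = 5                      # corridor states 0..4; reward waits at state 4
ACTIONS = [-1, +1]         # step left, step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit the best known move, sometimes explore.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)          # walls at both ends
        r = 1.0 if s2 == N - 1 else 0.0         # reward only at the goal
        # Q-learning update: nudge the estimate toward reward + discounted future.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy: the best action from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)}
print(policy)   # every state maps to +1: always head right
```

Early episodes wander aimlessly; as reward estimates propagate backward through the table, play improves, which is the same feedback loop, vastly scaled up, behind the Atari results.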

As this list has already shown, there is no shortage of examples of A.I. beating human players at a variety of games. But Go, a Chinese board game in which the aim is to surround more territory than your opponent, was different. Unlike games in which players could be beaten simply by crunching numbers faster than humans can, in Go the total number of allowable board positions is mind-boggling: far more than the total number of atoms in the universe. That makes brute-force attempts to calculate answers virtually impossible, even using a supercomputer.
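The scale is easy to verify with a couple of lines of Python big-integer arithmetic. This uses the crude upper bound of three states (empty, black, white) per intersection; the true count of legal positions is somewhat smaller, but it still dwarfs the commonly cited ~10^80 atoms in the observable universe.

```python
# Crude upper bound on Go board configurations: each of the 361
# intersections on a 19x19 board is empty, black, or white.
go_positions = 3 ** (19 * 19)
atoms_in_universe = 10 ** 80          # common rough estimate

print(len(str(go_positions)))         # 173 digits, versus 81 for the atom count
print(go_positions > atoms_in_universe ** 2)   # True: bigger than atoms squared
```

Chess engines could get far by searching deeply; numbers like these are why Go needed pattern recognition, not just raw search.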

Nonetheless, DeepMind managed it. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without a handicap on a full-sized 19×19 board. The next year, 60 million people tuned in live to see the world's greatest Go player, Lee Sedol, lose to AlphaGo. By the end of the series, AlphaGo had beaten Sedol four games to one.

In November 2019, Sedol announced his intention to retire as a professional Go player. He cited A.I. as the reason. "Even if I become the number one, there is an entity that cannot be defeated," he said. Imagine if LeBron James announced he was quitting basketball because a robot was better at shooting hoops than he was. That's the equivalent!

In the first years of the twenty-first century, the idea of an autonomous car seemed like it would never move beyond science fiction. In MIT and Harvard economists Frank Levy and Richard Murnane's 2004 book The New Division of Labor, driving a vehicle was described as a task too complex for machines to carry out. "Executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver's behavior," they wrote.

In 2010, Google officially unveiled its autonomous car program, now called Waymo. Over the decade that followed, dozens of other companies (including tech heavy hitters like Apple) started to develop their own self-driving vehicles. Collectively, these cars have driven millions of miles on public roads, apparently proving less accident-prone than humans in the process.

Foolproof full autonomy is still a work in progress, but this was nonetheless one of the most visible demonstrations of A.I. in action during the 2010s.

The dirty secret of much of today's A.I. is that its core algorithms, the technologies that make it tick, were actually developed several decades ago. What's changed is the processing power available to run those algorithms and the massive amounts of data they have to train on. Hearing about a wholly original approach to building A.I. tools is therefore surprisingly rare.

Generative adversarial networks certainly qualify. Often abbreviated to GANs, this class of machine learning system was invented by Ian Goodfellow and colleagues in 2014. No less an authority than A.I. expert Yann LeCun has described it as "the coolest idea in machine learning in the last twenty years."

At least conceptually, the theory behind GANs is pretty straightforward: take two cutting-edge artificial neural networks and pit them against one another. One network creates something, such as a generated image. The other network then attempts to work out which images are computer-generated and which are not. Over time, the adversarial process pushes the generator network to become good enough at creating images that it can reliably fool the discriminator network.
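The tug-of-war described above can be sketched in a deliberately tiny, hypothetical GAN in plain Python. Instead of images, the real "data" are numbers drawn near 4.0; the generator is a two-parameter linear map of noise, the discriminator a one-input logistic classifier, and the gradients are written out by hand. Every number here is an assumption chosen for illustration, not anything from a real system.

```python
import math
import random

random.seed(1)

def sigmoid(t):
    if t < -60.0:
        return 0.0    # clamp to avoid overflow; the gradient is ~0 here anyway
    return 1.0 / (1.0 + math.exp(-t))

def real_sample():
    # The distribution the generator must learn to imitate.
    return random.gauss(4.0, 0.5)

a, b = 1.0, 0.0     # generator: g(z) = a*z + b, fed with Gaussian noise z
w, c = 0.0, 0.0     # discriminator: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_sample()
    x_fake = a * random.gauss(0.0, 1.0) + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * ((d_real - 1.0) * x_real + d_fake * x_fake)
    c -= lr * ((d_real - 1.0) + d_fake)

    # Generator update (non-saturating loss): push D(fake) toward 1.
    z = random.gauss(0.0, 1.0)
    d_fake = sigmoid(w * (a * z + b) + c)
    a -= lr * (d_fake - 1.0) * w * z
    b -= lr * (d_fake - 1.0) * w

fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(round(fake_mean, 2))   # close to 4.0: the fakes now resemble the data
```

The generator starts out producing numbers near zero; the only way to keep fooling an improving discriminator is to drift toward the real distribution, which is the whole trick behind GAN-generated images.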

The power of generative adversarial networks was seen most widely when a collective of artists used them to create original paintings developed by A.I. One result sold for a shockingly large amount of money at a Christie's auction in 2018.
