Reality check: The state of AI, bots, and smart assistants

Artificial intelligence (in the guises of personal assistants, bots, self-driving cars, and machine learning) is hot again, dominating Silicon Valley conversations, tech media reports, and vendor trade shows.

AI is one of those technologies whose promise is resurrected periodically, but that only slowly advances into the real world. I remember the dog-and-pony AI shows at IBM, MIT, Carnegie Mellon, Thinking Machines, and the like in the mid-1980s, as well as technohippie proponents like Jaron Lanier, who often graced the covers of the era's gee-whiz magazines like Omni.

AI is an area where much of the science is well established, but the implementation is still quite immature. It's not that the emperor has no clothes; rather, the emperor is only now wearing underwear. There's a lot more dressing to be done.

Thus, take all these intelligent machine/software promises with a big grain of salt. We're decades away from a Star Trek-style conversational computer, much less the artificial intelligence of Steven Spielberg's A.I.

Still, there's a lot happening in general AI. Smart developers and companies will focus on the specific areas that have real current potential and leave the rest to sci-fi writers and the gee-whiz press.

For years, popular fiction has fused robots with artificial intelligence, from Gort of The Day the Earth Stood Still to the Cylons of Battlestar Galactica, from the pseudo-human robots of Isaac Asimov's novel I, Robot to Data of Star Trek: The Next Generation. However, robots are not silicon intelligences but machines that can perform mechanical tasks formerly handled by people, often more reliably, faster, and without demands for a living wage or benefits.

Robots are common in manufacturing and increasingly used in hospitals for delivery and drug fulfillment (since they won't steal drugs for personal use), but not so much in office buildings and homes.

There've been incredible advances lately in the field of bionics, largely driven by war veterans who've lost limbs in the several wars of the last two decades. We now see limbs that can respond to neural impulses and brain waves as if they were natural appendages, and it's clear they soon won't need all those wires and external computers to work.

Maybe one day we'll fuse AI with robots and end up slaves to the Cylons, or worse. But not for a very long while. In the meantime, some advances in AI will help robots work better, because their software can become more sophisticated.

Most of what is now positioned as the base of AI (product recommendations at Amazon, content recommendations at Facebook, voice recognition by Apple's Siri, driving suggestions from Google Maps, and so on) is simply pattern matching.

Thanks to the ongoing advances in data storage and computational capacity, boosted by cloud computing, more patterns can be stored, identified, and acted on than ever before. Much of what people do is based on pattern matching: to solve an issue, you first try to figure out what it resembles among things you already know, then try the solutions you already know. The faster the pattern matching to the likeliest actions or outcomes, the more intelligent the system seems.
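To make that concrete, here's a minimal sketch of similarity-based pattern matching, the mechanism behind most of these recommendation features. The feature bags, catalog, and scoring below are invented for illustration; real systems use far richer signals and models.

```python
# A minimal sketch of similarity-based pattern matching. All names and
# data are hypothetical, invented for this illustration.
from collections import Counter
from math import sqrt

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Compare two bags of features (e.g., terms a user browsed)."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = (sqrt(sum(v * v for v in a.values())) *
            sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend(history: Counter, catalog: dict, top_n: int = 3) -> list:
    """Rank catalog items by how closely they match past behavior."""
    scored = sorted(catalog.items(),
                    key=lambda item: cosine_similarity(history, item[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_n]]

history = Counter({"camera": 3, "tripod": 1})
catalog = {
    "dslr-bag":  Counter({"camera": 2, "bag": 1}),
    "lens-kit":  Counter({"camera": 2, "lens": 2}),
    "desk-lamp": Counter({"lamp": 2, "desk": 1}),
}
print(recommend(history, catalog))  # items clustered around "camera" win
```

That's all the "intelligence" there is: the system surfaces whatever most resembles what it has already seen.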

But we're still in early days. There are some cases, such as navigation, where systems have become very good, to the point where (some) people will now drive onto an airport tarmac, into a lake, or onto a snowed-in country road because their GPS told them to, despite every signal to the contrary they can see for themselves.

But mostly, these systems are dumb. That's why, when you go to Amazon and look at products, many websites you visit afterward feature those products in their ads. That's especially silly if you bought the product or decided not to, but all these systems know is that you looked at product X, so they'll keep showing you more of the same. That's anything but intelligent. And it's not only Amazon product ads; Apple's Genius music-matching feature and Google Now's recommendations are similarly clueless about context, so they lead you into a sea of sameness very quickly.

They can actually work against you, as Apple's autocorrection now does. It epitomizes a failure of crowdsourcing, where people's bad grammar, lack of clarity on how to form plurals or use apostrophes, inconsistent capitalization, and typos are imposed on everyone else. (I've found that turning it off can result in fewer errors, even for horrible typists like myself.)

Missing is the nuance of more context, such as knowing what you bought or rejected, so you don't get advertisements for more of the same but instead for another item you may be more interested in. Ditto with music: if your playlists are varied, so should the recommendations be. And ditto with, say, the dining recommendations Google Now makes: I like Indian food, but I don't want it every time I go out. What else do I like and have not had lately? And what about the patterns and preferences of the people I'm dining with?
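Here's a hypothetical sketch of what that missing context layer might look like: the same ranked recommendations, filtered by what the user already bought or rejected, with recent repeats pushed to the back. All names and data below are invented for illustration, not drawn from any real recommendation API.

```python
# A toy context layer on top of a ranked recommendation list.
# Field names and data are hypothetical.
def contextual_recommend(ranked, bought, rejected, recent, top_n=3):
    """Drop items the user already acted on; de-prioritize recent repeats."""
    filtered = [item for item in ranked
                if item not in bought and item not in rejected]
    # Prefer variety: anything consumed recently goes to the back of the line.
    fresh = [i for i in filtered if i not in recent]
    stale = [i for i in filtered if i in recent]
    return (fresh + stale)[:top_n]

ranked = ["indian", "thai", "indian-fusion", "ramen", "tacos"]
print(contextual_recommend(ranked,
                           bought={"indian-fusion"},
                           rejected=set(),
                           recent={"indian"}))
# ['thai', 'ramen', 'tacos'] -- variety, instead of yet more of the same
```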

Autocorrect is another example of where context is needed. First, someone should tell Apple the difference between "its" and "it's," as well as explain that there are legitimate, correct variations in English that people should be allowed to specify. For example, prefixes can be made part of a word (like "preconfigured") or hyphenated (like "pre-configured"), and users should be allowed to specify that preference. (Putting a space after them is always wrong, as in "pre configured," yet that's what Apple autocorrect imposes unless you hyphenate.)
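As a toy illustration (emphatically not Apple's implementation), an autocorrect pass could consult a per-user preference instead of imposing one form. The preference key and rule below are invented for the example.

```python
# A hypothetical preference-aware autocorrect rule for prefixed words.
import re

USER_PREFS = {"prefix_style": "hyphenated"}  # or "joined"

def correct_prefix(word: str) -> str:
    """Normalize 'pre'-prefixed words to the user's chosen style."""
    match = re.fullmatch(r"pre-?(\w+)", word)
    if not match:
        return word
    stem = match.group(1)
    if USER_PREFS["prefix_style"] == "hyphenated":
        return f"pre-{stem}"
    return f"pre{stem}"

print(correct_prefix("preconfigured"))   # -> pre-configured
print(correct_prefix("pre-configured"))  # -> pre-configured (unchanged)
```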

Don't expect bots (automated software assistants that do stuff for you based on all the data they've monitored) to be useful for anything but the simplest tasks until problem domains like autocorrection work. They are, in fact, the same kinds of problems.

Pattern matching, even with rich context, is not enough, because the patterns must be predefined. That's where pattern identification comes in, meaning that the software detects new patterns or changed patterns by monitoring your activities.
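A deliberately simple sketch of that idea: watch a stream of user actions and flag when the recent distribution drifts from the learned baseline. The window size and threshold below are arbitrary assumptions; production systems use far more careful statistics.

```python
# A toy pattern-identification monitor. Window size and threshold
# are arbitrary assumptions for this sketch.
from collections import Counter, deque

class DriftWatcher:
    def __init__(self, window: int = 50, threshold: float = 0.3):
        self.baseline = Counter()            # long-run action frequencies
        self.recent = deque(maxlen=window)   # sliding window of actions
        self.threshold = threshold

    def observe(self, action: str) -> bool:
        """Record an action; return True if recent behavior looks new."""
        self.baseline[action] += 1
        self.recent.append(action)
        if len(self.recent) < self.recent.maxlen:
            return False
        recent_counts = Counter(self.recent)
        total_base = sum(self.baseline.values())
        total_recent = len(self.recent)
        # Total variation distance between recent and long-run behavior.
        drift = 0.5 * sum(abs(recent_counts[a] / total_recent -
                              self.baseline[a] / total_base)
                          for a in set(self.baseline) | set(recent_counts))
        return drift > self.threshold

watcher = DriftWatcher()
for action in ["email"] * 100 + ["music"] * 60:
    if watcher.observe(action):
        print("new pattern detected around:", action)
        break
```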

That's not easy, because something has to define the parameters for the rules that undergird such systems. It's easy either to try to boil the ocean and end up with an undifferentiated mess, or to be too narrow and end up with something that isn't useful in the real world.

This identification effort is a big part of what machine learning is today, whether it's to get you to click more ads or buy more products, better diagnose failures in photocopiers and aircraft engines, reroute delivery trucks based on weather and traffic, or respond to dangers while driving (the collision-avoidance technology soon to be standard in U.S. cars).

Because machine learning is so hard, especially outside highly defined, engineered domains, you should expect slow progress, where systems get better but you don't notice it for a while.

Voice recognition is a great example: the first systems (for phone-based help systems) were horrible, but now we have Siri, Google Now, Alexa, and Cortana, which are pretty good for many people and many phrases. They're still error-prone (bad at complex phrasing and niche domains, and bad at many accents and pronunciation patterns) but usable in enough contexts to be helpful. Some people can actually use them as if they were a human transcriber.

But the messier the context, the harder it is for machines to learn, because their models are incomplete or are too warped by the world in which they function. Self-driving cars are a good example: A car may learn to drive based on patterns and signals from the road and other cars, but outside forces like weather, pedestrian and cyclist behaviors, double-parked cars, construction adjustments, and so on will confound much of that learning, and they'll be hard to pick up, given their idiosyncrasies and variability. Is it possible to overcome all that? Yes (the crash-avoidance technology coming into wider use is clearly a step toward the self-driving future), but not at the pace the blogosphere seems to think.

For many years, IT has been sold the concept of predictive analytics, which has had other guises such as operational business intelligence. It's a great concept, but it requires pattern matching, machine learning, and insight. Insight is what lets people take the mental leap into a new area.

For predictive analytics, that doesn't go so far as out-of-the-box thinking, but it does require identifying and accepting unusual patterns and outcomes. That's hard, because pattern-based intelligence (from what search result to display, to what route to take, to what moves to make in chess) is based on the assumption that the majority patterns and paths are the best ones. Otherwise, people wouldn't use them so much.

Most assistive systems use current conditions to steer you to a proven path. Predictive systems combine current and derivable future conditions using all sorts of probabilistic mathematics. But those are the easy predictions. The ones that really matter are the ones that are hard to see, usually for one of two reasons: the context is too complex for most people to get their heads around, or the calculated path is an outlier and thus gets rejected as such, whether by the algorithm or the user.
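The "easy" kind of prediction can be sketched with something as simple as a first-order Markov model fitted to observed transitions; note how, by construction, it favors the majority path and all but discards the outlier outcome. The data below is invented for the example.

```python
# A toy probabilistic predictor: count observed state transitions,
# then predict the most probable next state. Invented data.
from collections import Counter, defaultdict

def fit_transitions(history):
    """Count how often each state follows each other state."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, state):
    """Return the most probable next state and its probability."""
    options = counts[state]
    best, n = options.most_common(1)[0]
    return best, n / sum(options.values())

traffic = ["light", "light", "heavy", "light", "light", "light", "jam",
           "light", "light", "heavy", "light"]
model = fit_transitions(traffic)
print(predict(model, "light"))  # ('light', ~0.57): the majority path wins;
                                # the rare "jam" outcome all but disappears
```

The prediction that would actually matter, the rare "jam," is exactly the one this kind of model is built to shrug off.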

As you can see, there's a lot to be done, so take the gee-whiz future we see in the popular press and at technology conferences with a big grain of salt. The future will come, but slowly and unevenly.
