Ask Me Anything: Having a Forced Conversation with an Artificial Intelligence

In the early 1960s, Hanna-Barbera conceived an animated TV sitcom that imagined a typical American family in 2062: Dad commutes to work in a flying saucer; young Elroy putters to school by pushing buttons on his jet-pack; the women (ahem) shop futuristically. But the real stars of this Jetsonian utopia are the robots. Some clean houses, repair appliances, and help raise kids. Others dispense advice. If the creators’ predictions come to pass, then we’re just 49 years removed from a world in which human minds are virtually indistinguishable from anthropomorphic machines.
If only it were that easy, says Richard Wallace, a computer scientist who has worked in artificial intelligence since the 1990s, when most robots were just inexpensive computers with simple sensors. For years, the Roomba vacuum cleaner was the gold standard: a minimalist gadget that could clean a house by itself was about as much as humans could expect from their technology.
That was around the time Wallace got fixated on the idea of making a robot with a personality and language skills. He’d read a New York Times article about the Loebner Contest, an annual competition launched in 1990 by Hugh Loebner, an American inventor, prostitution activist, and pariah among scientists (in 1995, MIT professor Marvin Minsky famously offered a $100 “Minsky prize” to anyone who could persuade Loebner to terminate his contest and “spare us the horror of this obnoxious and unproductive publicity campaign”). Loebner has scoured the world for machines that could pass as humans, or that at least have enough comprehension of human language to answer such questions as, “How many plums can you fit in my shoe?” He’s a disciple of 20th-century mathematician Alan Turing, whose eponymous Turing Test required a judge to hold conversations with a computer and a human simultaneously, in order to compare the two. A machine could only pass if its responses were indistinguishable from a human’s. Loebner’s version of the test amounts to a lengthy interrogation conducted via instant messaging.
But it seems his notion of a truly conversant “chatbot” is still a pipe dream. To this day, Loebner has never handed out a gold or silver medal, because no contender has even come close. But Wallace thinks that he and a small menagerie of Bay Area programmers have a shot. Barring that, they see huge commercial potential in chatbot software, in everything from smartphone language tutorials to entertainment apps to voice-activated “personal assistants” that compete with Siri. For Wallace and his ilk, bots are both an artistic muse and a line of products, and Loebner’s contest is a vehicle to help develop them.
Wallace’s East Bay company, Pandorabots, runs an open-source web service that allows anyone to create his or her own chatbot using a simple scripting language called AIML (Artificial Intelligence Markup Language). Wallace used it to create his own chatbot, called Alice, in the ’90s, modeling it on a primitive pattern-recognition program that breaks English down into key words and canned phrases. He used Alice to clinch the Loebner bronze medal in 2000, 2001, and 2004, and now he’s offering the prototype to all fledgling programmers, encouraging them to give it their own spin.
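The pattern-recognition idea behind AIML is simpler than it sounds: each rule pairs an input pattern (possibly containing a wildcard) with a canned response template. Here is a toy sketch of that approach in Python; the rules are invented for illustration, and this is the general technique rather than Pandorabots’ actual engine:

```python
import re

# A toy AIML-style rule set: each rule pairs an input pattern with a
# canned response. "*" is a wildcard whose match can be reused in the
# template -- roughly how AIML's <pattern>/<template> pairs work.
RULES = [
    ("MY NAME IS *", "Nice to meet you, {0}!"),
    ("WHAT IS YOUR NAME", "My name is Alice."),
    ("I LIKE *", "What do you like about {0}?"),
    ("*", "That's interesting. Tell me more."),  # catch-all fallback
]

def respond(user_input):
    """Normalize the input, then return the first matching rule's response."""
    text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, template in RULES:
        # Convert the AIML-style pattern into a regular expression.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, text)
        if match:
            captures = [g.strip().lower() for g in match.groups()]
            return template.format(*captures)
    return "I don't understand."

print(respond("My name is Arlo"))  # -> Nice to meet you, arlo!
```

Because rules are checked in order and the last one matches anything, the bot always has something to say; the trick, as Wallace suggests, is stocking it with enough canned phrases to cover the small talk humans actually produce.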
Ideally, each Pandorabot should have its own personality and backstory (a sassy alien, a nubile teenage girl, Siri if you gave her a pack of cigarettes and the voice of Julie Kavner). The good ones should be adept at making small talk and answering yes-no questions, which account for the majority of what we say to each other, Wallace says. “Humans aren’t as original with language as we like to think we are,” he says. The better bots should know how to take a theme and expound upon it.
Theoretically, you could create a chatbot to monologue exclusively about its cousin’s Bar Mitzvah or its new balsa-wood boat. But you could also program it to know Shakespeare, or provide the entire exegesis of 20th-century UK pop music, or dazzle users with SAT vocabulary words. Perhaps it’s no surprise that English majors design the best chatbots, according to experts.
Pandorabots holds a “Diva Bots” pageant every March to cherry-pick its protégés, many of which go on to the Loebner finals; this year, three of the four Loebner finalists, including the winner, were on the Pandorabots team. The real contest happens every year in Ireland, and from Wallace’s description, it’s a kind of artificial intelligence version of Miss America, albeit with a lot of “aggressive questioning.” Four judges cross-examine each bot, and its human designer, on a split-screen computer, and try to distinguish which is which. Bots are scored on their ability to speak naturally and exhibit “human” intelligence. Only one ever fooled the judges, and that was because its human confederate tried to cheat by acting as robotic as possible.
This year’s (bronze) winner, a big-eyed ’tween bot named Mitsuku, seemed only as lifelike as her middle-aged handler, Steve Worswick. Nonetheless, we were intrigued. We decided to visit Mitsuku at her web page to try a little cross-examining of our own. Here’s what resulted:
Human: My name is Arlo