AI is making literary leaps – now we need the rules to catch up – The Guardian

Posted: November 9, 2019 at 8:42 am

Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that it now "generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation, all without task-specific training".

If true, this would be a big deal. But, said OpenAI, "due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

Given that OpenAI describes itself as a research institute dedicated to "discovering and enacting the path to safe artificial general intelligence", this cautious approach to releasing a potentially powerful and disruptive tool into the wild seemed appropriate. But it appears to have enraged many researchers in the AI field, for whom "release early and release often" is a kind of mantra. After all, without full disclosure of the program code, training dataset, neural network weights and so on, how could independent researchers decide whether the claims made by OpenAI about its system were valid? The replicability of experiments is a cornerstone of the scientific method, so the fact that some academic fields may be experiencing a "replication crisis" (a large number of studies proving difficult or impossible to reproduce) is worrying. We don't want the same thing to happen to AI.

On the other hand, the world is now suffering the consequences of tech companies like Facebook, Google, Twitter, LinkedIn, Uber and co designing algorithms to increase user engagement and releasing them on an unsuspecting world with apparently no thought for their unintended consequences. And we now know that some AI technologies (generative adversarial networks, for example) are being used to generate increasingly convincing deepfake videos.

If the row over GPT-2 has had one useful outcome, it is a growing realisation that the AI research community needs to come up with an agreed set of norms about what constitutes responsible publication (and therefore release). At the moment, as Prof Rebecca Crootof points out in an illuminating analysis on the Lawfare blog, there is no agreement about AI researchers' publication obligations. And of all the proliferating "ethical AI" guidelines, only a few entities explicitly acknowledge that there may be times when limited release is appropriate. As yet, the law has little to say about any of this, so we're currently at the same stage as we were when governments first started thinking about regulating medicinal drugs.

In the case of GPT-2, my hunch is that fears about its pathogenic propensities may be overdone, not because it doesn't work, but because humans have long experience of dealing with print fakery. Ever since Gutenberg, people have been printing falsehoods and purporting to be someone else. But over the centuries, we've developed ways of spotting fakes. Accordingly, machine-generated text poses less of a problem than video deepfakes.

GPT-2's capabilities are undoubtedly impressive, though. In a fascinating essay, "I, Language Robot", the neuroscientist and writer Patrick House reports on his experience of working alongside OpenAI's language model, which produces style-matched prose in response to any written prompt it's fed.

It was, he recounts, an instructive exercise. "The language robot and I never wrote the same story," he recalls. "It became, eventually, a writing tool, both as a curator to new ideas and a splint for bad ones. When, after a few months, I lost access to it, my brain felt like it had been lesioned. I missed it. I had found myself jumping into its head, asking what it might write as one would a human writing partner, which made me wonder: should I give it the same theory-of-mind, inductive respect as I give to other thinking things? Could I even help but do otherwise?"

The most fascinating part of House's report deals with his idea of asking the robot the same questions that George Plimpton asked Ernest Hemingway in his famous Paris Review interview in 1958. In reply to the question of whether it remembered the exact moment when it decided to be a writer, it replied: "I don't remember ever wanting not to be a writer. Writing had always been my way out. My escape. When I was a little kid, my sister was always coming home from school with homework assignments and I knew if I got hers and mine done early I could spend the next day's lunch hour looking at books. It was the first time that I actually thought that being a writer might happen."

And when asked what it did when it got stuck, it replied: "It's hard to say, but I would say that it would be better not to have a muse at all, to do all that writing you think is going to be good without any conscious control over what you do. It's better to be totally in charge of your own actions."

At this point, the reader gets the eerie uncanny-valley feeling: this is almost, but not quite, authentic. But the technology is getting there. Why, any day now it'll be writing newspaper columns.

The web ain't the world: a good report by the Reuters Institute at the University of Oxford challenges conventional wisdom by finding that most people still get their news from offline sources.

Culinary conditioning: TheConversation.com has an intriguing essay, "How steak became manly and salads became feminine", by Yale historian Paul Freedman.

It's a bot's world: Renee DiResta has written an insightful piece on the algorithmic public sphere, called "There are bots. Look around", at Ribbonfarm.com.
