OpenAI Just Released the AI It Said Was Too Dangerous to Share – Futurism

Here You Go

In February, artificial intelligence research startup OpenAI announced the creation of GPT-2, an algorithm capable of writing impressively coherent paragraphs of text.

But rather than release the AI in its entirety, the team shared only a smaller model out of fear that people would use the more robust tool maliciously to produce fake news articles or spam, for example.

But on Tuesday, OpenAI published a blog post announcing its decision to release the algorithm in full as it has seen no strong evidence of misuse so far.

According to OpenAI's post, the company did see some discussion regarding the potential use of GPT-2 for spam and phishing, but it never actually saw evidence of anyone misusing the released versions of the algorithm.

The problem might be that, while GPT-2 is one of the best text-generating AIs in existence, if not the best, it still can't produce content that's indistinguishable from text written by a human. And OpenAI warns it's those more capable algorithms we'll have to watch out for.

"We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent," the startup wrote.

READ MORE: OpenAI has published the text-generating AI it said was too dangerous to share [The Verge]

More on OpenAI: Now You Can Experiment With OpenAI's Dangerous Fake News AI
