Twenty minutes into the future with OpenAI’s Deep Fake Text AI

Wednesday, February 27, 2019, 01:45 PM, from Ars Technica
(Image credit: Max Headroom / Aurich)
In 1985, the TV film Max Headroom: 20 Minutes into the Future presented a science-fiction cyberpunk world in which an evil media company tried to create an artificial intelligence, based on a reporter's brain, to generate content to fill airtime. The results were not entirely what its creators intended. Replace 'reporter' with 'redditors,' 'evil media company' with 'well-meaning artificial intelligence researchers,' and 'airtime' with 'a very concerned blog post,' and you've got what Ars reported on last week: Generative Pre-trained Transformer-2 (GPT-2), a Franken-creation from researchers at the non-profit research organization OpenAI.
Unlike some earlier text-generation systems based on a statistical analysis of text (like those using Markov chains), GPT-2 is a text-generating bot built on a model with 1.5 billion parameters. (Editor's note: We recognize the headline here, but please don't call it an 'AI'; it's a machine-learning algorithm, not an android.) With or without guidance, GPT-2 can create blocks of text that look like they were written by humans. With written prompts for guidance and some fine-tuning, the tool could theoretically be used to post fake reviews on Amazon, fake news articles on social media, fake outrage to generate real outrage, or even fake fiction, forever ruining online content for everyone. All of this comes from a model trained by ingesting 40 gigabytes of text retrieved from sources linked by high-ranking Reddit posts. You can only imagine how much worse it would have been if the researchers had used 40 gigabytes of text from 4chan posts.
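For a sense of what those older statistical systems actually did, here is a minimal sketch of a Markov-chain text generator in Python (a hypothetical illustration, not code from OpenAI or this article): it simply records which word follows each two-word prefix in a corpus and replays those observed frequencies, with no learned parameters at all.

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Map each `order`-word prefix to the words observed after it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=30):
        """Walk the chain, picking a random recorded successor each step."""
        out = list(random.choice(list(chain.keys())))
        for _ in range(length):
            successors = chain.get(tuple(out[-order:]))
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    corpus = open("some_corpus.txt").read()  # hypothetical input file
    print(generate(build_chain(corpus)))

Output from a chain like this is locally plausible but wanders incoherently after a sentence or two, which is exactly the gap that GPT-2's billion-plus learned parameters close.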
After a little reflection, the research team developed concerns about the policy implications of its creation. Ultimately, OpenAI's researchers kept the full thing to themselves, releasing only a pared-down, 117-million-parameter version of the model (which we have dubbed 'GPT-2 Junior') as a safer demonstration of what the full GPT-2 model could do.
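Readers who want to poke at 'GPT-2 Junior' themselves can sample from the released 117-million-parameter checkpoint in a few lines of Python. A minimal sketch, assuming the third-party Hugging Face transformers library (which later packaged those released weights under the name 'gpt2'); this is an illustration, not OpenAI's own demo code:

    # pip install transformers torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # "gpt2" is the packaged name for the pared-down 117M-parameter release
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Encode a written prompt and sample a continuation from the model
    prompt = tokenizer.encode("In 1985, the TV film Max Headroom", return_tensors="pt")
    sample = model.generate(prompt, max_length=60, do_sample=True, top_k=40)
    print(tokenizer.decode(sample[0]))

Even this small model produces fluent, human-looking paragraphs from a short prompt, which gives some idea of why the full-size version worried its creators.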
Source: https://arstechnica.com/?p=1461249