For years, people in the computer industry have been thinking about how fast computers keep getting faster, and how long that can go on. In the meantime, a different question has become far more important: how quickly artificial intelligence (AI) is improving. Our columnist Christian Stöcker has written a detailed analysis of this. He did so by hand; in the future, that may no longer be necessary.
This, at least, is suggested by a project of the non-profit research company OpenAI from California, co-founded by Elon Musk. Under the name GPT-2, its researchers have created an AI that can complete English-language texts in such a way that they can hardly be distinguished from texts written by a human.
At the beginning of the year, when they presented the first version of their AI, the researchers were uneasy about their own software. It was simply too good. What was and is special about it: this AI does not assemble its texts from predefined text blocks, nor is it specialized in a particular topic. It produces a few more or less fitting sentences or paragraphs on any subject whatsoever.
Because this was so new and worked so well, its creators initially released the software only in part. As a rationale, the researchers wrote: "Due to our concerns about possible malicious applications of this technology, we are not releasing the trained model." Instead, interested scientists were initially given only a stripped-down version to experiment with.
This first version was based on "only" 124 million parameters, which the AI uses to compose its texts. The version released now, by contrast, has 1.5 billion parameters for text generation. The system draws its basic knowledge from a dataset for which it had to read eight million websites considered relevant.
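How GPT-2 actually uses those parameters is beyond the scope of this article, but the basic idea of completing text by predicting a likely next word from statistics learned on a training corpus can be sketched in a few lines. The following toy bigram model is purely illustrative (the corpus, function names, and prompt are our own inventions, not OpenAI code) and is many orders of magnitude simpler than GPT-2, which learns far richer patterns than word-pair frequencies:

```python
from collections import defaultdict, Counter

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def complete(model, prompt, n=5):
    """Greedily append the most frequent successor word, n times."""
    words = prompt.split()
    for _ in range(n):
        followers = model.get(words[-1])
        if not followers:
            break  # no known successor: stop generating
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = ("the ring of power was forged in the land of mordor "
          "where the shadows lie in the land of mordor")
model = train_bigram(corpus)
print(complete(model, "the ring", n=4))
# → "the ring of mordor where the"
```

Real language models replace these raw word counts with billions of learned parameters and sample from a probability distribution instead of always taking the most frequent word, which is why their output reads so much more fluently.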
Fake news from the AI
The researchers feared that their system could be used to forge texts: to produce "fake news", to publish misleading articles, or to send spam emails that appear to come from a particular author.
With photos and videos, it has long been possible to produce so-called deepfakes: images and films in which the faces of the people shown are digitally swapped. The technology has meanwhile become so easy to use that a deepfake app recently became hugely popular in China.
And the software maker Adobe, which specializes in image and video editing, demonstrated a program back in 2016 that could manipulate voice recordings with a text editor and thus put arbitrary statements into a speaker's mouth. This "Photoshop for audio files" has not been released to this day. Evidently, Adobe too is concerned that its software could be misused to produce authentic-sounding but fabricated statements.
An AI to recognize AI texts
OpenAI passed the question of whether its software could be misused on to researchers at the Center on Terrorism, Extremism, and Counterterrorism (CTEC) at the Middlebury Institute of International Studies in California. According to a blog post, those experts concluded that a specialized version of the AI would indeed be capable of producing credible-sounding propaganda for Marxists, jihadists, or racists.
In parallel, the OpenAI researchers have used the time since their first announcement earlier this year to refine a system that automatically recognizes text generated by this AI. Its detection rate of 95 percent is still too low, they say; suspicious texts must therefore additionally be checked with other methods.
The researchers also had misgivings about publishing this detection software. They want to support research on systems that recognize fake texts, but they also fear that the software could be used to produce synthetically generated texts that slip past such detection.
Under the title "Talk to Transformer", the developer Adam King has put the system online for anyone to try. That is exactly what we did, feeding the system texts from the Bible and other sources.
The system currently works only in English, so we have translated the texts for this article:
- It completed the beginning of Genesis, after "The first day had passed", with sentences like "And God said, 'Let there be light in the land of the East and make it a sign of seasons and days and years.'"
- The software supplemented a press release on a new smartphone with a passage on the advantages of a dual-LED flash and on the possibility of taking especially good selfies and group selfies thanks to a fast autofocus.
- The machine continued Tolkien's ring poem, after the line "In the land of Mordor, where the shadows lie", with "The one ring that is so dear to me, the ring of power that did it for me."
Of course, these are just samples, and not all the texts the system spat out were error-free. But the bottom line is this: in the future, not only the authenticity of images and videos but also that of texts will have to be critically scrutinized. With a system based on this AI, the web could be flooded, fully automatically, with genuine-looking fake texts by fictitious authors.