
OpenAI unveils the next version of its language model that spews frighteningly real fake news

Six months ago, OpenAI, an artificial intelligence research and development company, unveiled a predictive language model called GPT-2 that could generate frighteningly real fake news. Today, the company rolled out the next version of the model, along with a study investigating its impact and describing the rationale behind its staged rollout (via MIT Technology Review).

To elaborate, GPT-2 is a language model trained to predict the next word in a sequence of text; given a string of words as a prompt, it can churn out well-authored, news-style prose. For example, with the input string, 'Russia has declared war on the United States after Donald Trump accidentally …' the model produced the following piece of fake news:

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air. Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.”

The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.
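For the curious, here is a minimal sketch of this kind of prompt-based sampling. It assumes the Hugging Face transformers library and its publicly hosted "gpt2" checkpoint rather than OpenAI's own GitHub repository, and the sampling parameters are illustrative, not the ones OpenAI used:

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load a publicly released GPT-2 checkpoint (the small "gpt2" model is
# assumed here for this sketch, not the newly released half-size version).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode a prompt string into token IDs for the model.
prompt = "Russia has declared war on the United States after Donald Trump accidentally"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top_k, temperature, and max_length are illustrative.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Because the model samples from its predicted word distribution rather than returning a single deterministic answer, each run produces a different continuation of the prompt.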

At the start of this year, OpenAI stated that it would not release the predictive language model due to the risk of it being used to create fake news. A few months later, however, the company transitioned to a staged release of GPT-2, under which each released version builds upon the size and scale of the previous one, eventually leading up to the release of the complete model. With this new version, the publicly available GPT-2 model is now half the size of the full model.

Alongside the model, OpenAI's policy team released a study, conducted by a team of researchers led by Irene Solaiman, examining the release strategies, social impacts, and cases of misuse, if any, of language models such as GPT-2. The study found that readers mistook the generated news for the real deal, on average placing as much faith in it as they would in an article from The New York Times. While no malicious uses of the model were found during the study, the researchers identified beneficial potential use cases for the model, such as code auto-completion and grammar help.

For more information, read the article published here. The researchers' original study can be found here, and the GitHub repository for GPT-2 maintained by OpenAI can be found here.
