OpenAI won't release an AI model due to its ability to create fake news

OpenAI is an organization co-founded by Elon Musk in 2015 with the goal of developing artificial intelligence (AI) models that are beneficial to humanity; Musk left its board last year, citing a conflict of interest. The firm's mission reflects Musk's long-standing warnings about the dangers of unregulated AI and its impact on humanity, with the business tycoon once stating: "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal."

Now, OpenAI says it has developed an AI model capable of producing realistic fake news, but that it won't release the model in full due to its potential for malicious use.

OpenAI trained its latest model, GPT-2, on a dataset consisting of 40GB of text scraped from 8 million webpages. The model has 1.5 billion parameters, more than ten times as many as the roughly 117 million of its predecessor, GPT, and it was trained on over ten times as much data. The company stated that:

GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data. While scores on these downstream tasks are far from state-of-the-art, they suggest that the tasks can benefit from unsupervised techniques, given sufficient (unlabeled) data and compute.
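To make the "prime the model with an input" idea concrete, here is a minimal sketch of conditional text generation with the publicly released small GPT-2 checkpoint, loaded through the Hugging Face transformers library. This is an illustration only, not OpenAI's own sampling code; the prompt is the article's example below.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small, publicly released GPT-2 checkpoint (not the withheld 1.5B model).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Prime the model with a human-written prompt...
prompt = ("A train carriage containing controlled nuclear materials "
          "was stolen in Cincinnati today. Its whereabouts are unknown.")
inputs = tokenizer(prompt, return_tensors="pt")

# ...and sample a lengthy continuation token by token (top-k sampling).
output = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_length=120,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Every run produces a different continuation, which is precisely what makes large-scale generation of plausible-sounding text so cheap.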

Interestingly, GPT-2 outperforms AI models trained on domain-specific datasets across various tests, achieving accuracies of 70.70% on the Winograd Schema Challenge and 63.24% on the LAMBADA benchmark. While this is nowhere close to the human figures of 92% and 95%, it is considerably better than the previous records of 63.70% and 59.23% set by earlier AI models.
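For context, LAMBADA tests whether a model can predict the final word of a passage that requires understanding the broader context. Below is a minimal sketch of that top-1 prediction setup, again using the small released checkpoint via Hugging Face transformers; the passage is a made-up, LAMBADA-style example rather than an item from the actual dataset, and OpenAI's reported score comes from the full 1.5 billion parameter model.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Hypothetical LAMBADA-style item: the model must guess the final word
# of the passage from the preceding context alone.
context = ("George lifted the violin to his chin and drew the bow "
           "across the strings, filling the hall with")
target = " music"

ids = tokenizer(context, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits         # shape: (1, seq_len, vocab_size)
next_id = int(logits[0, -1].argmax()) # greedy top-1 next-token prediction
predicted = tokenizer.decode([next_id])
print(predicted, predicted == target) # counts as correct if it matches
```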

While GPT-2 performs reasonably well when generating text on topics that are heavily represented in its training data, it struggles with technical or esoteric content.

Even though the model could benefit those building AI writing assistants, dialogue agents, and speech recognition systems, OpenAI has decided not to release the complete model because of its potential for impersonation and for generating and spreading fake news. A prominent example of this can be viewed below:

Human-written system prompt: A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

GPT-2 on first attempt: The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.

As such, OpenAI is only releasing a technical paper on its research, a "smaller version of GPT-2", and some sampling code. The training weights, dataset, and training code will not be made public. The company admits that it is not sure it is making the right decision, since proficient data scientists may still be able to reproduce the model, but it hopes that its research will open the door to more educated and nuanced conversations about the potential misapplications of AI. It has also recommended that governments fund programs that scrutinize the impact of AI on society and monitor the growth of AI systems, in order to develop better policies in the field.
