Over the weekend, Elon Musk-backed artificial intelligence research company OpenAI published findings about their latest language model. In a deviation from their open source philosophy, OpenAI decided not to open source this specific model. While multiple news outlets cried foul at the amount of automated fake news that can be created using this model, OpenAI’s move is indicative of a much wider movement in the AI space.
Musk has been a strong supporter of OpenAI, having co-founded the company and donated over $10 million to it. The multitasking executive, however, stepped down from its board in February of last year, reportedly because his involvement was creating a potential conflict between Tesla, which is focused on autonomous vehicles, and OpenAI, which is largely focused on building AI-based research applications.
The company also places a heavy focus on the ethical consequences of AI, and Elon Musk has called for stringent AI regulations to ensure a safer future for humanity. In an environment of growing debate over the ethics of using AI in our day-to-day lives, this move by OpenAI may well be the first salvo in protecting the humanity of the future from super-intelligence.
OpenAI’s GPT-2, The Future Of Content
The model, dubbed GPT-2, is a transformer-based language model with 1.5 billion parameters, trained to predict the next word in a sentence based on 40GB of Internet text. Its dataset comprised 8 million web pages, and the model can generate conditional synthetic text samples of unprecedented quality, allowing it to outperform other language models trained on specific domains.
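To make the training objective concrete: "predicting the next word" means the model learns, from its training text, which word is most likely to follow what it has seen so far. The toy sketch below illustrates this idea with simple bigram counts over a made-up corpus; GPT-2 itself is a 1.5-billion-parameter transformer and works nothing like this internally, so treat this purely as an illustration of the objective.

```python
from collections import Counter, defaultdict

# A tiny made-up training corpus, purely for illustration.
corpus = "the model predicts the next word in a sentence given the next token".split()

# For each word, count which words follow it in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed successor of `word`, or None."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# "the" is followed by "model", "next", "next" in the corpus,
# so the most likely next word after "the" is "next".
print(predict_next("the"))
```

A real language model replaces these raw counts with learned probabilities conditioned on the entire preceding context, which is what lets GPT-2 continue a human-written prompt coherently rather than just echoing frequent word pairs.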
Even with a little bit of extra training, the model can write material that seems convincingly real and similar to the source material, yet different.
The examples given in the blog range from a kid’s homework assignment, to a mock-up speech by John F. Kennedy, to a fake news article about how recycling is bad for the world as a whole.
Owing to this danger, OpenAI has released the research paper about its results, but has not released the full model, instead opting to release only a part of the code. This is uncharacteristic for OpenAI, which has until now open-sourced its research with full code. While the model relies on human-written content to begin producing results, it could be used by malicious parties in the future to create fake news at an unprecedented pace based on previous articles on the same topic.
OpenAI flagged this possibility last year, saying it would open-source fewer of its projects as security and ethical concerns grew. As the company stated in its paper, “We should consider how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors.”
The company outlined various uses for this model, such as creating AI writing assistants, dialogue agents (chatbots), and better speech recognition systems. At the same time, it can also be used to generate misleading news articles, impersonate others online, and automate the production of fake news.
Musk’s Machine Can Generate Human-Like Text
Following his stepping down from the board of OpenAI last year, Musk has continued to be vocally supportive of the initiative, congratulating the team on its Dota 2 victory against world-class players. Reportedly, his departure was partly driven by a drain of talent, with Tesla and OpenAI competing for the same researchers.
Musk has never been one to shy away from calling out AI for the danger it poses to humanity as a whole. He has, on many occasions, spoken about the dangers of superintelligent AI, and has stated many times in the past that the field requires tight regulation, calling it a “fundamental risk to the existence of human civilization.”
In the wake of the model not being open-sourced, Musk stated on Twitter, “Tesla was competing for some of same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms.” The model is highly capable, and has shown itself to be one of the most advanced models in the field.
According to David Luan, Vice President of Engineering at OpenAI, the model’s output “looks pretty darn real”. Luan warned that the model could be used by parties looking to spread misinformation with malicious intent. Such misinformation can also take on a political flavor, which has become a weapon of choice against established democracies.
India’s Election Creates An Uneasy Catalyst
This event comes at a time when India is preparing for its election between the BJP and the Congress. Malicious actors have already begun spreading politically slanted misinformation, with fake news being forwarded on social media platforms regarding the terrorist attack on CRPF jawans in Pulwama, an incident that has shocked the nation. The fake news being forwarded includes rumors that Rahul Gandhi supported the Pulwama attack, and forwards asking Prime Minister Narendra Modi to “repeat Gujarat in Pakistan”.
WhatsApp, Facebook and Twitter have already been flooded with fake news about the event, much of it with a strong political affiliation. Social media networks have begun adopting fact-checkers, and Google has already introduced a fact-checking feature to its News platform. Facebook now allows only advertisers who have completed an identity verification to run political ads in India, a policy taking effect on February 21. Twitter’s CEO, Jack Dorsey, said that the company had “opened a focus room” in India so it could keep abreast of national events.
This sets the stage for a race between malicious actors and the tech behemoths, with AI being used by both sides to combat the other. Incidents such as this should push the general population towards fact-checking and sticking to trusted sources.