Google has banned users from creating deepfakes on its machine-learning research tool Colaboratory, commonly known as Colab. The new rule comes as deepfakes have surged in popularity in recent years, finding their way into pop culture and being deployed to spread disinformation.
Google Colab, a product from Google Research that began as an internal company project in 2017, is a free, browser-based tool that lets users write and run Python code, with everything stored in the cloud and hosted by the company. It is one of the most widely used tools among developers and researchers in the artificial intelligence community, and one Google had previously taken a loose approach to moderating, according to TechCrunch.
The Colab deepfake rule change, first reported by the robotics and AI blog Unite.AI, happened sometime last month, between May 14 and May 24, according to archived snapshots of the page on archive.org. None of the other prohibitions listed in Google Colab’s FAQ, such as mining cryptocurrency, cracking passwords, and hosting files, were altered. Google made no official announcement about the change.
Users who attempt to run deepfake-generation code on Colab now receive an error message warning that they “may be executing code that is disallowed, and this may restrict your ability to use Colab in the future,” according to the tech news outlet Bleeping Computer.
Google did not respond to Insider’s request for comment, but a company spokesperson told TechCrunch that it banned deepfakes after reviewing patterns of abuse and avenues for potential abuse on Colab.
“Deterring abuse is an ever-evolving game, and we cannot disclose specific methods as counterparties can take advantage of the knowledge to evade detection systems,” the spokesperson said in a statement to TechCrunch. “In general, we have automated systems that detect and prohibit many types of abuse.”
The use of deepfakes, AI-generated media that typically swap one person’s likeness for another’s, has accelerated rapidly over the past half-decade, and the technology has been put to a wide range of uses. Internet users have folded deepfakes into viral memes, and the musician Kendrick Lamar recently used them in the music video for his song “The Heart Part 5” to transform into Will Smith, Kanye West, and others.
But the technology has also been used to foment confusion and political unrest, as disinformation researchers and government agencies have warned for years. In March, after Russia’s invasion of Ukraine began, hackers posted a fabricated video of Ukrainian President Volodymyr Zelenskyy appearing to surrender. The clip circulated on social media, but viewers noticed that the digitally manipulated Zelenskyy’s accent was off, among other inauthentic qualities. Zelenskyy quickly dismissed the fake as a “childish provocation.”
Deepfakes have also been used to maliciously target women, digitally inserting them into pornography without their consent. The technology’s emergence has raised numerous legal and ethical issues, with digital rights advocates warning that deepfakes can easily be put to nefarious uses. Several states have enacted or are considering legislation governing their use.