Disrupting the Deepfake Pipeline in Europe
Today, it is easier than ever to create exploitative deepfakes depicting women in a sexual manner without their consent – and the recently negotiated EU directive combating violence against women could finally bring justice for victims by holding the AI model developers criminally accountable.
Deepfakes refer to AI-generated voices, images, or videos produced without consent, and the most popular type of deepfake, comprising at least 96% of instances, is pornographic. Women and girls make up 99% of victims. Many of these victims will remain unaware that they have been the subject of a deepfake for months after the fact, during which the content garners thousands, sometimes millions, of views.
Given the widespread popularity of deepfake-generating AI systems, the most effective approach to counter deepfakes is for governments to institute comprehensive bans at every stage of production and distribution. Mere criminalization of deepfake production and sharing is insufficient; accountability must extend to the developers, model providers, service providers, and compute providers involved in the process.
Nevertheless, it is not necessarily illegal to create a sexually explicit deepfake in Europe. The final text of the EU AI Act would only impose transparency obligations on providers and users of certain AI systems and general-purpose AI models under Article 52. Disclosure obligations of this kind do very little to mitigate the harms of pornographic deepfakes, given that in the majority of cases the content is consumed with full understanding that it is not truthful. For the same reason, the defamation laws of most EU Member States tend to be equally unhelpful for victims.
The forthcoming directive on combating violence against women could change that. On February 6, 2024, legislators reached a political agreement on rules aimed at combating gender-based violence and protecting its victims. The Directive specifically addresses deepfakes, describing them as the non-consensual production, manipulation, or alteration of material which makes it appear as though another person is engaged in sexual activities. The content must “appreciably” resemble an existing person and “falsely appear to others to be authentic or truthful” (Recital 19).
Publishing deepfakes would be considered a criminal offence under Article 7, as that would constitute using information and communication technologies to make sexually explicit content accessible to the public without the consent of those involved. This offence applies only if the conduct is likely to cause serious harm.
At the same time, aiding, abetting, or inciting the commission of an Article 7 offence would also be a criminal offence under Article 11. As such, providers of AI systems which generate sexual deepfakes may be captured by the directive, since they would be directly enabling the commission of an Article 7 offence. Given that many sites openly advertise their model’s deepfake capabilities and that the training data is usually replete with sexually explicit content, it is difficult to argue that developers and providers play an insignificant or auxiliary role in the commission of the crime.
The interpretation of Article 11 could be a crucial first step for dismantling the pipeline which fuels sexual exploitation through deepfakes. The broadest reading of Article 11 would imply that developers are subject to corporate criminal liability.
One important hurdle is that corporate criminal liability does not apply uniformly across Europe: some Member States recognize corporations as entities capable of committing crimes, while others do not. Nevertheless, the application of Article 11 in even some jurisdictions would be a tremendous step towards stopping the mass production of sexual deepfakes. After all, Article 14 establishes jurisdiction based on territory, nationality, and residence.
The directive also briefly addresses the role of hosting and intermediary platforms. Recital 40 empowers Member States to order hosting service providers to remove or disable access to material violating Article 7, encouraging cooperation and self-regulation through a code of conduct. While this may be an acceptable level of responsibility for intermediaries, self-regulation is entirely inappropriate for providers who constitute the active and deliberate source of downstream harm.
The final plenary vote is scheduled for April. Whether this directive can protect women and girls from being exploited through harmful deepfakes hinges on whether the companies commercializing this exploitation are also held criminally liable.
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.