From an app that allows users to swap images of women and men into pornographic videos to the use of computer-generated faces in online fraud, deepfake videos are an increasingly visible manifestation of the dark side of artificial intelligence. As their use grows, legislators and policymakers are struggling to come up with appropriate solutions and safeguards. On August 4, the U.S. Senate Committee on Homeland Security and Governmental Affairs voted to advance the Deepfake Task Force Act, which aims to establish a task force to investigate ways to mitigate the damage deepfakes cause. Meanwhile, the European Union is discussing a proposal to set comprehensive rules for artificial intelligence use.

We spoke with Dr. Mathilde Pavis, a senior lecturer in law at the University of Exeter, who has studied deepfakes and has drawn up a number of recommendations to limit their abuse. 

This conversation has been edited for length and clarity.

Coda Story: In the past few years, we have seen some pretty convincing deepfakes of high-profile people, like Barack Obama and Tom Cruise. Many are made for entertainment, but how worried should we be about their potential for harm?

Mathilde Pavis: Actually, it’s the other way around. In its first few years, the majority of applications of this technology were abusive. The word “deepfake” was popularized in the context of its use to swap the faces of non-consenting women into pornographic videos. That’s when we started looking for ways to detect abuses, so we could limit malicious use. It’s challenging, because not all applications of the technology are bad.

What implications can deepfakes have for democracy? 

Using technology to spread misinformation is not new, and it has become an increasingly sophisticated tool. But as the technology develops, so does the audience: we become more aware. A few studies have been published, including one that analyzed the public’s reaction to a deepfake video of Keanu Reeves stopping a robbery. It turns out that the public is quite savvy. They didn’t just stop at what they saw in the video; they engaged with it critically, and the majority thought it was fake. It’s easy to imagine that the more familiar people become with these issues of reliability, the more critical distance they will keep when engaging with deepfakes.

Could deepfakes be used to wreak havoc on democracy, or has this already happened?

Some versions of deepfakes, like manipulated footage, have already been used in this way. For example, Trump supporters slowed down a video of Nancy Pelosi to make her sound like she was drunk, and Pelosi was forced to put out a statement to correct it. The fact that public figures have to spend time addressing fake content shows that public discourse is already being distorted.

Deepfakes are often used to target women online. Are social media companies doing enough to stamp this out?

Platforms have a responsibility to control misuse and, to an extent, they do. But moderation is not perfect. This is due to the difficulty of the task, but also to the lack of a regulatory framework.

How do victims of deepfake misuse get help?

The real challenge around regulation of deepfakes is that a lot of the abuse is perpetrated anonymously. If someone suffers a psychological or even a physical injury from the technology, it’s difficult to find the person responsible and seek compensation from them. That’s something that we still need to address.

Is regulation necessary? 

As a law person, I would say yes, sensible regulation. Social media companies have a huge responsibility here, because they’re the vehicle for most of this content. The situation is tricky, because platforms operate across borders, while laws are generally bound to a specific country. But collaborations between platforms and states already exist to prevent and limit fraud, human trafficking, and terrorism, for instance.

Regulation is necessary to give social media platforms a reason to moderate. These companies know that content moderation comes at a cost in user traffic, so they need to be given external incentives.

Are lawmakers playing catch-up with regulation? 

Now is a perfect time to introduce regulation, because we are still at the stage where you need a fair bit of technical knowledge to be able to make a credible deepfake. Very soon, you’ll be able to synthesize pretty much anyone on your phone in a very convincing way, and containing abuse will be a lot harder.

Is international cooperation possible? 

A few countries, such as the U.S., Australia, and, to a certain extent, France, have laws against the spread of misinformation and certain kinds of fake content. But the effectiveness of these approaches is not yet clear, because the laws are still new.

We always focus on the bad, but what are some of the positive uses of deepfakes?

You can already see some of those coming into play. Deepfakes can make translation and dubbing easier. In museums, they are being used to make collections more interactive. There are limitless possibilities to advance creativity.

I’m also interested in seeing how fast artificial intelligence will evolve. Deepfakes may look like the future, but there is also a chance that soon, a new technology will come in and make deepfakes completely redundant.