Lawmakers on Capitol Hill turned their attention Thursday to a new type of manipulated content known as "deepfakes."
The House Intelligence Committee held a hearing focused on how the rapidly advancing technology could affect national security.
Deepfakes use artificial intelligence to manipulate all kinds of media. In video format, for example, face swapping technology can be combined with an audio file to create a clip showing someone saying something they never actually said. The dangers posed by these kinds of videos are amplified as the 2020 election inches closer.
"So-called deepfakes ... enable malicious actors to foment chaos, division or crisis, and they have the capacity to disrupt entire campaigns, including that for the presidency. ... What enables deepfakes and other modes of disinformation to become truly pernicious is the ubiquity of social media and the velocity at which false information can spread," Rep. Adam Schiff said.
The experts who testified Thursday said editing video or audio so that it appears authentic has become steadily easier. They encouraged lawmakers to work with social media companies to better police what's being shared online.
One of the biggest challenges deepfakes pose is deciding where to draw the line on removing manipulated content.
"Is this misrepresentation in a defamatory way? Right, that we would say it's a falsehood, it's harmful to reputation, that's an impersonation, then we should take it down," said Danielle Citron, professor at the University of Maryland Francis King Carey School of Law.
But as Rep. Jim Himes later countered, who posts a false video should matter, too. He noted that taking down false content could raise First Amendment concerns about freedom of expression.