AI-generated fake copies of real videos circulate on TikTok


Millions of TikTokkers have watched some version of a video in the past week falsely stating that “they’re installing incinerators at Alligator Alcatraz.” The claim refers to an internet conspiracy theory, which spread widely despite a lack of evidence, that furnaces were being set up at a state-run immigration detention facility in the Florida Everglades.

One of the videos circulating the rumor attracted nearly 20 million views. It spurred a conversation on TikTok, with creators weighing in with their own takes and, in a handful of instances, attempting to debunk the baseless theory.

Ali Palmer, a creator in Dallas who posts on TikTok as @aliunfiltered_, made a video about a father who jumped off a Disney cruise ship to save his child. The video was ripped off, her exact words repeated in a new video made with AI.

Copying on TikTok is rampant, she said, but usually the spam accounts that do it repost her entire video. What is new, she said, is AI-powered accounts having an AI-generated person recite her words.

“It’s upsetting. It feels like a violation of privacy,” said Palmer, 33. “I could never imagine copying someone and then making money on it. Just feels dirty.”

Palmer said she has reported all types of copying to TikTok, but nothing happens. “It’s incredibly frustrating.”

Hany Farid, a professor at the University of California, Berkeley, who studies digital forensics, said what is new here is that an average person’s words are being stolen.

“All the time, we’re seeing people’s identities being co-opted to do things like hawk crypto scams or push bogus cures for cancer, but usually it’s a famous person or influencer,” Farid said. 

“It’s the kind of thing that would be super-easy to do with today’s AI tools and something that would easily slip through the content moderation cracks,” he said.

While using AI to copy videos does not appear to violate TikTok’s policies, the platform does say it requires users “to label all AI-generated content that contains realistic images, audio, and video.”

Deepfakes have grown more sophisticated in recent years and are increasingly deployed for malicious purposes.

The technology has been used to impersonate politicians, including Secretary of State Marco Rubio, former President Joe Biden and Ukrainian President Volodymyr Zelenskyy. The rise of deepfake “nudify” tools prompted Congress this year to pass a federal law to combat the spread of nonconsensual intimate imagery, including fake nudes generated by AI.

Further blurring fact and fiction is this latest approach: capitalizing on a TikTok viral moment by having a fictitious creator recite the words of a real one.

In the Meghan video, the creator took on a British accent. In others, the voice assumed a completely different register.

Together, the two accounts have rung up millions of views by seizing on viral stories that skew more toward tabloid fodder than political drama. But researchers who track state-sponsored information warfare say testing out new strategies for juicing virality is something government-backed actors also regularly do.

“Stealing content on social media is as old as social media,” Linvill said. “What we’re seeing here is AI doing what we’ve seen AI be really good at over and over again, and that is systematizing things, making processes cheaper, faster and easier. And AI is really good at making it faster and easier to steal content.”

Alex Hendricks was scrolling through TikTok this week when he saw two back-to-back videos about the Florida incinerator conspiracy. It’s normal to see many creators weighing in on the same subject, but these videos struck him as unusual because the monologues were eerily identical, just in different-sounding voices.

So Hendricks made a TikTok video pointing this out. Yet it barely got any views.
