Celebrities like Tom Hanks, Gayle King, and YouTube sensation MrBeast are warning their social media followers that certain advertisements using their likenesses are, in fact, deepfakes.
These videos use artificial intelligence to render realistic likenesses of real people, making it appear that a particular person is endorsing a product or service, or spreading false information via a video statement. In this instance, Hanks and King spoke out about advertisements that realistically depicted them promoting services they'd never heard of, while a video of MrBeast made the rounds on Monday advertising an iPhone 15 Pro for "only $2."
Deepfake videos have gotten so good at tricking the human eye that they open the door to scams like these or, even worse, to political manipulation. Tech companies are working on ways to verify the legitimacy of a video or detect when deepfake technology has been used, but it remains a problem.
The targeted celebrities took to social media to correct any confusion.
“People keep sending me this video and asking about this product and I have NOTHING to do with this company,” posted King, co-host of CBS Mornings. “I posted this video promoting my radio show on August 31, and they’ve manipulated my voice and video to make it seem like I’m promoting it. I’ve never heard of this product or used it! Please don’t be fooled by these AI videos.”
Attached to the post is the video King originally shared on August 31 to promote her radio show, altered to show her promoting a "secret" weight loss product and encouraging followers to "follow the link."
Hanks also warned his followers not to be duped by an advertisement for a dental plan he seemingly promoted.
“Beware!! There is a video out there promoting some dental plan with an AI version of me,” he posted on Instagram. “I have nothing to do with it.”
“Are social media platforms ready to handle the rise of AI deepfakes?” asked MrBeast on X. “This is a serious problem.”
Meta was not immediately available for comment, but The New York Times reported that the company said it is putting “substantial resources towards tackling these kinds of ads and have improved our enforcement significantly.” Accounts found spreading fake videos and ads will be suspended or deleted.
The use of AI in entertainment is a key negotiation point in the recent Hollywood strikes, with the actors' guild worried that companies will duplicate performers' images without payment or consent, and such instances show the concern is valid. Google, OpenAI, and other companies plan to stamp a watermark on AI-generated content and track such images with additional metadata. But scammers can thwart these measures, so social platforms will need to increase moderation to catch fakes.