Deepfakes Will Become Harder to Spot, Tech Experts Warn



Be on the lookout for an uptick in deepfake videos this year. Experts warn the faux footage will become more convincing, and harder to identify, as the technology advances.

Tech experts anticipate a rise in deepfake videos in 2024 as the software used to create fake content becomes more widely available. Warnings emerged after public figures in Singapore, including Prime Minister Lee Hsien Loong, were depicted in fake videos in December. Lee posted on Facebook warning people not to respond to videos that use his likeness to promote investment scams and fake giveaways.

Deputy Prime Minister Lawrence Wong also appeared in deepfake investment scams and in misinformation campaigns claiming authorities were planning a “circuit breaker amid a spike in COVID-19 cases,” according to The Straits Times. Lee’s wife, former Temasek Chief Executive Ho Ching, was likewise shown in fake investment videos that ran as ads on YouTube.

YouTube has since taken the ad down and directs users to its Official Blog for more information about deepfake videos.

Many governments are rushing to pass legislation targeting the misinformation, identity theft, and fraud schemes these videos enable. Deepfakes can cause significant harm by spreading false information or scam offers that borrow the likeness of a trusted public figure to lend a scheme credibility.

In September, Congresswoman Yvette D. Clarke and Congressman Glenn Ivey introduced the DEEPFAKES Accountability Act of 2023. The bill would require creators to digitally watermark all deepfake content; failure to do so would be a criminal offense punishable by fines. The approach echoes the European Union’s AI Act and Singapore’s Protection from Online Falsehoods and Manipulation Act.

Deepfake videos can discredit individuals, incite violence, and interfere in elections.

“With few laws to manage the spread of the technology, we stand at the precipice of a new era of disinformation warfare, aided by the use of new AI tools,” said Clarke, a Democrat from New York. “It’s imperative that Congress not only establishes a clear standard for identifying deepfakes but also provides prosecutors, regulators, and especially victims with the necessary tools to combat fake or manipulated content.”

The bill was initially introduced to protect victims of deepfake pornography. Separately, the NSA and FBI work with the Cybersecurity and Infrastructure Security Agency (CISA) to detect, track, and counter deepfakes. The agencies jointly published the Contextualizing Deepfake Threats to Organizations cybersecurity information sheet to help organizations identify altered content.

While tools for manipulating video have been around for years, technological advances have made fake content easier and more accessible to create than ever, and it fuels social unrest when used to produce false news reports. One deepfake video claimed there was an explosion near the Pentagon last May. Another manipulated video showed Russian President Vladimir Putin falsely announcing martial law, and a separate altered video showed Ukrainian President Volodymyr Zelenskyy ordering his soldiers to surrender.

“This creates a new set of challenges to national security,” said Candice Rockell Gerstner, an NSA applied research mathematician specializing in multimedia forensics. “Organizations and their employees need to learn to recognize deepfake tradecraft and techniques and have a plan in place to respond and minimize impact if they come under attack.”

Companies can use several tools to detect deepfake videos, including real-time verification checks and passive detection techniques. These can include watermarks or provenance tags embedded into videos that flag manipulated content and mark it as AI-generated; a simple metadata check along those lines is sketched below.
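As an illustration only, the following Python sketch shows what a basic passive check might look like: dumping a video’s container metadata with ffprobe and scanning it for provenance or AI-generation tags. The tag names are assumptions made for the example, not an official standard, and the absence of such tags says nothing about whether a video is genuine.

```python
# Minimal sketch of a passive metadata check for provenance/AI-generation tags.
# Requires FFmpeg's ffprobe on the PATH. Tag names below are illustrative
# assumptions, not a standardized list.
import json
import subprocess
import sys

# Hypothetical metadata keys that might carry provenance information.
SUSPECT_KEYS = {"ai_generated", "synthetic", "c2pa_manifest", "content_credentials"}

def inspect_metadata(path: str) -> None:
    # ffprobe can dump container and stream metadata as JSON.
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    tags = info.get("format", {}).get("tags", {})

    flagged = {k: v for k, v in tags.items() if k.lower() in SUSPECT_KEYS}
    if flagged:
        print(f"Provenance-related tags found: {flagged}")
    else:
        print("No provenance tags found; that alone proves nothing either way.")

if __name__ == "__main__":
    inspect_metadata(sys.argv[1])
```

Real deepfake detection relies on far more than metadata, since tags are easy to strip, but checks like this illustrate the kind of automated verification companies can layer into their review pipelines.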

Users can spot deepfake videos by watching for fuzzy borders around a person, lips out of sync with the audio, or light sources coming from multiple directions. Also, be wary of suspicious claims or products that a “celebrity” is promoting, as they are likely part of a scam.
