Meta Will Label Content on Facebook, Instagram, and Threads That Was Created With AI


By

on

in

,

Facebook mobile app.

Ahead of an election year that’s just warming up, Meta said it will label content generated by artificial intelligence. The move could help curb misinformation, but it could also push AI content creators to adjust their tactics to fly under the radar.

Meta has been working with industry partners to develop common technical standards for signaling when content was created with AI, so it can be labeled as such across Facebook, Instagram, and Threads.

The company didn’t say when the labels will launch, but they will arrive ahead of several upcoming elections.

“We’re building this capability now, and in the coming months, we’ll start applying labels in all languages supported by each app,” said Meta in a statement. “We’re taking this approach through the next year, during which a number of important elections are taking place around the world.”

Meta plans to learn more about AI content creation and how it’s shared to further evolve its AI identification tools. 

It’s not the only tech giant working to label AI content. Last November, Google said it would launch labels across YouTube and its other platforms. That followed an executive order signed by President Biden in October requiring protections against malicious AI, including watermarks to identify AI-generated content. Several companies have already vowed to implement protections and develop standards for safe use.

Meta already adds labels to AI content created with its Meta AI tool, including visible markers and invisible watermarks embedded in the image’s metadata. But that system doesn’t cover AI-generated audio and video.

Instead, Meta relies on creators to disclose when they create a video with AI so the platform can label it. Creators can face penalties or account deactivation if they don’t comply. The new tools in development have a broader reach, however: Meta is building the capability to identify invisible metadata markers at scale so it can label AI-generated content from tools by Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
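
Meta hasn’t published its detection code, but the metadata half of this approach can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration, not Meta’s implementation: it simply scans an image file’s embedded metadata for the IPTC “trainedAlgorithmicMedia” digital source type that generative tools can write. Invisible watermarks, which survive metadata stripping, require model-specific decoders and aren’t shown.

```python
# Illustrative sketch only: check whether an image file carries the IPTC
# "trainedAlgorithmicMedia" digital-source-type marker in its embedded
# metadata (e.g., an XMP packet). Real detection pipelines also rely on
# invisible watermarks, which this crude byte scan cannot see.
import sys

# IPTC NewsCodes value used to flag media created by a generative model.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def has_ai_metadata_marker(path: str) -> bool:
    """Return True if the file's embedded metadata mentions the AI-generated
    digital source type. Files with stripped metadata won't be detected."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    # Usage: python check_ai_marker.py image1.jpg image2.png ...
    for image_path in sys.argv[1:]:
        label = "AI marker found" if has_ai_metadata_marker(image_path) else "no marker found"
        print(f"{image_path}: {label}")
```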

While new tools and tracking techniques are a win for curbing misinformation and alerting users when content is fabricated, malicious actors could develop tools of their own to circumvent the protections. The film and music industries saw a similar pattern when they cracked down on illegal downloads, only for new piracy outlets to pop up. As new labels emerge, so could a wave of stealthy tactics designed to trick AI-detection tools, creating an uphill battle for social media companies and their users.
