Cord Cutters News

Meta Will Label Content on Facebook, Instagram, and Threads That Was Created With AI


Ahead of an election year that’s just warming up, Meta said it will label content generated by artificial intelligence. This could help curb misinformation, but it could also push AI content creators to adjust their tactics to fly under the radar.

Meta has been working with industry partners to develop common technical standards for identifying AI-generated content so it can be labeled as such across Facebook, Instagram, and Threads.

The company didn’t say when the labels will launch, but said they will arrive ahead of several upcoming elections.

“We’re building this capability now, and in the coming months, we’ll start applying labels in all languages supported by each app,” said Meta in a statement. “We’re taking this approach through the next year, during which a number of important elections are taking place around the world.”

Meta plans to learn more about AI content creation and how it’s shared to further evolve its AI identification tools. 

It’s not the only tech giant working to label AI content. Last November, Google said it would launch labels across YouTube and its other platforms. This followed an executive order signed by President Biden in October that requires safeguards against malicious AI, including watermarks to identify AI-generated content. Several companies have already vowed to implement protections and develop standards for safe use.

Meta already adds labels to AI content created through its Meta AI tool, including visible markers on images and invisible watermarks embedded in the image’s metadata. But that system doesn’t yet cover AI-generated audio and video.

Instead, Meta relies on creators to disclose when they create a video with AI so the platform can label it. Creators can face penalties or account deactivation if they don’t comply. The new tools in development have a broader reach, however: Meta is building systems to identify the invisible metadata markers at scale so it can label AI-generated content from tools by Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
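To make the idea of "invisible metadata markers" concrete, here is a toy sketch in Python using only the standard library. It is not Meta's actual scheme, and the keyword `ai_generated` is invented for illustration; the real industry standards embed richer, cryptographically signed provenance records. The sketch simply builds a minimal PNG carrying a hidden text chunk and then scans the file's chunks to detect it, which is the basic mechanic a large-scale detector would automate.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Encode one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_marker(keyword: bytes, text: bytes) -> bytes:
    """Build a minimal 1x1 grayscale PNG carrying a tEXt metadata chunk."""
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    idat = zlib.compress(b"\x00\x00")  # filter byte + one black pixel
    return (sig
            + chunk(b"IHDR", ihdr)
            + chunk(b"tEXt", keyword + b"\x00" + text)  # invisible marker
            + chunk(b"IDAT", idat)
            + chunk(b"IEND", b""))

def find_text_chunks(png: bytes) -> dict:
    """Scan a PNG's chunks and return any tEXt keyword/value pairs."""
    found, pos = {}, 8  # skip the 8-byte PNG signature
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            found[key.decode()] = val.decode()
        pos += 12 + length  # advance past length + type + data + CRC
    return found

png = make_png_with_marker(b"ai_generated", b"true")
print(find_text_chunks(png))  # {'ai_generated': 'true'}
```

Note that this also illustrates the weakness the article raises next: a metadata marker like this survives only as long as nobody strips the chunk, so re-encoding or re-saving the image can erase it.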

While new tools and tracking techniques are a win for curbing misinformation and alerting users when content is fabricated, malicious actors could develop tools of their own to circumvent the protections. The film and music industries saw similar problems, with new piracy outlets popping up as they cracked down on illegal downloads. As new labels emerge, so could a wave of stealthy tactics to trick AI-detecting tools, creating an uphill battle for social media companies and their users.
