Meta Is Changing Artificial Intelligence Labels After Real Photos Were Marked As AI

Meta is changing the labels it applies to social media posts suspected of having been generated or edited with artificial intelligence tools. The Facebook, Instagram, Threads and WhatsApp parent company said its new label will display “AI info” alongside a post, where it used to say “Made with AI.”

Meta is making these changes in part because its detection systems were labeling photos with only minor modifications as having been “Made with AI,” prompting criticism from some artists and photographers.

In one high-profile example, former White House photographer Pete Souza told TechCrunch that cropping tools appeared to be adding information to his images, and that information then triggered Meta’s AI detectors.

Meta, for its part, said it’s striking a balance between fast-moving technology and its responsibility to help people understand what its systems show in their feeds. 

“While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information,” the company said in a statement Monday.

Read more: How Close Is That Photo to the Truth? What to Know in the Age of AI

Meta’s shifting approach underscores the speed at which AI technologies are spreading across the web, making it increasingly hard for everyday people to tell what’s real.

That’s particularly worrying as we head into the 2024 US presidential election in November, when people acting in bad faith are expected to ramp up their efforts to spread disinformation and confuse voters. Google researchers published a report last month underscoring this point; the Financial Times reported that AI-generated depictions of politicians and celebrities are by far the most common way bad actors misuse the technology.

Tech companies have tried to respond to the threat publicly. OpenAI said earlier this year that it had disrupted social media disinformation campaigns tied to Russia, China, Iran and Israel, each of which was powered by its AI tools. Apple, meanwhile, announced last month that it will add metadata to images to indicate whether they’ve been altered, edited or generated by AI.

Still, the technology appears to be moving much faster than companies’ ability to identify it. The term “slop” has caught on to describe the growing flood of posts created by AI.

Meanwhile, tech companies including Google have contributed to the problem with new technologies such as AI Overviews, the AI-generated summaries in Google search results, which were caught spreading racist conspiracy theories and dangerous health advice, including a suggestion to add glue to pizza to keep cheese from slipping off. Google, for its part, has since said it will slow the rollout of AI Overviews, though some publications found it still recommending glue on pizza weeks later.
