After several problematic instances of AI-generated images going viral online, Meta has announced plans to crack down in a major way.
The goal is to improve the technology used to identify AI-generated images and eventually label them on all of its social media platforms, including Facebook, Instagram, and Threads.
But will it actually help? We’ll explore the plan from Meta and whether or not the cat is already out of the bag when it comes to the threat of AI-generated images.
Meta Platforms to Label Images Generated by AI
In a blog post this week, Meta announced that it is “working with industry partners” to iron out the most effective ways of “identifying AI-generated content, including video and audio.”
On top of that, Meta claims that “in the coming months,” it will roll out a labeling system on Facebook, Instagram, and Threads for content generated by AI to help address the growing concerns around the technology.
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies. People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So, it’s important that we help people know when photorealistic content they’re seeing has been created using AI.” – Nick Clegg, president of global affairs at Meta
Meta has already used this technology on photorealistic images generated by Meta AI — labeling them as “Imagined by AI” — but this advancement would seek to label all AI-generated content that users post on the social media sites.
The Threat of AI-Generated Images
The idea of AI-generated images seems pretty harmless at first glance, particularly when you’re just using the technology to put yourself in front of a unicorn. But the reality is that it has a lot of potentially nefarious uses, from disinformation to outright harassment, that could lead to serious consequences.
Even worse, Meta’s attempt to combat the potential threat of AI-generated images may not be enough. Experts have noted that labeling systems like this one are far from reliable.
“They may be able to train their detector to be able to flag some images specifically generated by some specific models. But those detectors can be easily evaded by some lightweight processing on top of the images, and they also can have a high rate of false positives. So I don’t think that it’s possible for a broad range of applications.” – Soheil Feizi, director of the Reliable AI Lab at the University of Maryland, speaking to the BBC
This is certainly a step in the right direction, particularly if Meta can get a technical standard in place to actually impact the rest of the internet in regard to this problem. Still, the threat of AI-generated images looms large in the future of the web.
How to Spot AI-Generated Images
If you don’t want to wait a few months for Meta to roll out this labeling system to its many platforms, there are some ways that you can detect AI-generated images without a label.
For one, look closely at hands and any writing in the image. AI image generators still struggle with these details: hands often have the wrong number of fingers, and text frequently comes out garbled or nonsensical. A good, long look at the fine details will often give an image away.
If you don’t want to rely on the naked eye, there are also plenty of online tools that can help you detect AI-generated images, although, just like Meta’s system, they won’t be entirely reliable right away.
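For the technically inclined, one detection approach worth knowing about is checking an image file for provenance metadata, since some generators embed C2PA content-credential manifests or IPTC source-type tags. As a minimal sketch (the marker list and function name below are illustrative, not an official API), you can scan a file’s raw bytes for known markers:

```python
# Minimal sketch: scan an image file's raw bytes for common AI-provenance
# markers. The marker list is illustrative and far from exhaustive.
KNOWN_MARKERS = [
    b"c2pa",                     # C2PA content-credentials manifest
    b"trainedAlgorithmicMedia",  # IPTC digital source type for AI-generated media
    b"Midjourney",               # generator names sometimes left in metadata
    b"Adobe Firefly",
]

def find_ai_markers(path: str) -> list[str]:
    """Return any known AI-provenance markers found in the file's bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [marker.decode() for marker in KNOWN_MARKERS if marker in data]
```

Note that this only catches images whose metadata survives intact; re-encoding or screenshotting an image strips these markers, which is exactly the kind of lightweight evasion the experts quoted above warn about.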