Starting next year, advertisers on Facebook and Instagram will be required to state whether they’re using AI-generated content in any political, electoral, or social issue ads.
If they have, and the ad is accepted, Meta will run the campaign with a disclosure informing audiences that the content was created with AI tools.
This move is the latest in Meta’s broader crackdown on false news and disinformation. Earlier this year, it deleted thousands of fake accounts run by a Chinese network that aimed to spread disinformation.
What Are the New Advertising Rules?
According to Meta, ads subject to the disclosure include anything that’s been digitally created or altered to:
- Depict a real person saying or doing something they did not say or do.
- Depict a realistic-looking person who does not exist or a realistic-looking event that did not happen, or alter footage of a real event.
- Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.
Any social issue, political, or electoral ads containing the above will be flagged to users and logged within Meta’s ads database.
However, there are a few exceptions to the rule, with Meta stating that any alterations that were “inconsequential or immaterial to the claim, assertion, or issue raised” — such as photo retouching and image cropping — would not need to be disclosed.
In addition to the AI disclosure, Meta will also ban advertisers from using its own AI software to create ads relating to political or social issues, housing, employment, credit, health, pharmaceuticals, or financial services. Advertisers running campaigns on these subjects will still be free to use third-party AI tools, such as Midjourney or DALL-E, but will need to comply with the new disclosure requirements.
Safeguarding the Use of AI
Meta is currently grappling with the tidal wave of generative AI content that has recently flooded its platforms. Thanks to tools like ChatGPT and Google Bard, tech companies face the unprecedented challenge of working out how to manage and regulate manipulated and fake images, video, and audio.
“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries.” – Meta spokesperson
This latest disclosure requirement builds on Meta’s existing guidelines, under which anyone running political ads on its platforms must complete an authorization process and add a “paid for by” disclaimer to their campaigns. From there, the ads are stored in the company’s public Ad Library for up to seven years.
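Those disclosures are publicly searchable: Meta exposes the Ad Library through a Graph API endpoint. Below is a minimal Python sketch of querying it for political and issue ads; the API version, search term, and access token are placeholders, and exact field names should be checked against Meta’s current documentation.

```python
import requests

# Meta Ad Library API endpoint (Graph API); the version string is an assumption.
AD_LIBRARY_URL = "https://graph.facebook.com/v18.0/ads_archive"

params = {
    "access_token": "YOUR_ACCESS_TOKEN",   # placeholder: requires an approved Meta developer account
    "ad_type": "POLITICAL_AND_ISSUE_ADS",  # restrict results to political and social-issue ads
    "ad_reached_countries": '["US"]',      # the API expects a JSON-encoded list of country codes
    "search_terms": "election",            # hypothetical search term
    "fields": "id,page_name,bylines,ad_delivery_start_time",  # 'bylines' carries the "paid for by" disclaimer
    "limit": 25,
}

response = requests.get(AD_LIBRARY_URL, params=params)
response.raise_for_status()

# Print each ad's page name alongside its "paid for by" disclaimer.
for ad in response.json().get("data", []):
    print(ad.get("page_name"), "-", ad.get("bylines"))
```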
The Upcoming Election Year
This crackdown from Meta comes as both regulators and lawmakers begin to tackle the issue of AI-generated content in political ads, ahead of the 2024 US presidential election.
Earlier this year, Rep. Yvette Clarke and Sen. Amy Klobuchar introduced bills that would require ad campaigns to disclose when they use AI-generated content. The Federal Election Commission, the regulatory agency in charge of political advertising, is similarly expected to decide on a new rule limiting the use of AI in political ads.
Back in 2016, Facebook was criticized for its lax oversight as Russia used ads to sow discontent among American users. Then in 2020, the company made headlines when it was accused of allowing politicians to lie in ads on the platform, a policy it defended on the grounds of free speech and public discourse. Since then, Meta has been navigating the tricky subject of political advertising.
In an effort to squash as much misinformation and disinformation as possible, Mark Zuckerberg has spent billions developing functionality that closely monitors political ads on the platform. Despite these investments, Meta’s president of global affairs, Nick Clegg, has since called for regulatory guidance on these kinds of issues, rather than leaving tech companies to set the boundaries and rules themselves.