NEW YORK, Nov 6 (Reuters) – Facebook owner Meta ( META.O ) is barring political campaigns and advertisers in other regulated industries from using its new generative AI advertising products, a company spokesperson said on Monday, denying access to tools that lawmakers have warned could accelerate the spread of election misinformation.
Meta publicly disclosed the decision in updates posted to its help center on Monday night, after this story was published. Its advertising standards already prohibit ads containing content debunked by the company’s fact-checking partners, but they include no rules specific to AI.
“As we continue to test new Generative AI ad creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features,” the company said in a multi-page memo explaining how the tools work.
“We believe this approach will allow us to better understand the potential risks and create the right safeguards for the use of Generative AI in advertising related to potentially sensitive topics in regulated industries,” it said.
The policy update comes a month after Meta — the world’s second-largest platform for digital ads — announced it was beginning to expand advertiser access to AI-powered advertising tools that can instantly create backgrounds, image adjustments and variations of ad copy in response to simple text prompts.
The tools had initially been available to only a small group of advertisers since the spring. They are on track to roll out to all advertisers worldwide by next year, the company said at the time.
Meta and other tech companies have scrambled to launch generative AI ad products and virtual assistants in recent months in response to the frenzy over last year’s debut of OpenAI’s ChatGPT chatbot, which can provide human-like written responses to questions and other prompts.
The companies have so far released little information about the safeguards they plan to impose on these systems, making Meta’s decision on political ads one of the industry’s most significant AI policy choices to come to light to date.
The Meta AI logo is seen in this illustration taken September 28, 2023. REUTERS/Dado Ruvic/Illustration/File Photo
Alphabet’s Google ( GOOGL.O ), the biggest digital advertising company, announced the launch of similar image-customizing generative AI ad tools last week. It plans to keep politics out of its products by blocking a list of “political keywords” from being used as prompts, a Google spokesperson told Reuters.
Google also has a policy update planned for mid-November to require that election-related ads must include a disclosure if they contain “synthetic content that depicts real or realistic-looking authentic people or events.”
TikTok bans political ads, while Snapchat owner Snap ( SNAP.N ) blocks them from its AI chatbot. Snap also applies human review to all political ads, which includes checking for misleading use of artificial intelligence. X, formerly known as Twitter, has not yet rolled out any generative AI advertising tools.
Meta’s chief policy officer, Nick Clegg, said last month that the use of generative AI in political advertising was “clearly an area where we need to update our rules.”
He warned ahead of a recent UK AI safety summit that governments and tech companies should prepare for the technology to be used to interfere in the 2024 elections, calling for a particular focus on election-related content that “moves from one platform to another.”
At the time, Clegg told Reuters that Meta was blocking its user-facing virtual assistant Meta AI from creating photorealistic images of public figures. Meta committed this summer to developing a system to “watermark” AI-generated content.
Meta broadly bars misleading AI-generated video across all content, including organic unpaid posts, with an exception for parody or satire.
The company’s independent Oversight Board said last month it would review the wisdom of that approach, taking up a case involving a doctored video of U.S. President Joe Biden that Meta said it had left up because it was not AI-generated.
Reporting by Katie Paul in New York; Editing by Kenneth Li and Matthew Lewis