Russia is deploying generative artificial intelligence in online deception campaigns, but its efforts have been largely unsuccessful, according to a Meta security report released Thursday.
The parent company of Facebook and Instagram found that, so far, AI-powered tactics "only deliver incremental productivity and content production gains" to bad actors, and that Meta has been able to shut down the deceptive influence operations.
Meta's efforts to combat "coordinated inauthentic behavior" on its platforms come as fears grow that generative artificial intelligence will be used to trick or confuse people in elections in the United States and other countries.
Facebook has been accused for years of being used as a powerful platform for election disinformation.
Russian agents used Facebook and other US-based social media to stoke political tensions in the 2016 election won by Donald Trump.
Experts fear an unprecedented deluge of disinformation from bad actors on social networks, given the ease of use of generative AI tools such as ChatGPT or the Dall-E image generator, which can create content on demand in seconds.
AI has been used to create images and videos and to translate or generate text along with creating fake news or summaries, according to the report.
Russia remains the top source of “coordinated inauthentic behavior” using fake Facebook and Instagram accounts, Meta’s director of security policy, David Agranovich, told reporters.
Since Russia’s invasion of Ukraine in 2022, these efforts have focused on undermining Ukraine and its allies, according to the report.
As the U.S. election approaches, Meta expects Russian-backed online deception campaigns to attack pro-Ukraine political candidates.
Based on behavior
When Meta hunts for deception, it looks at how accounts behave, not at what they post.
Influence campaigns tend to span a range of online platforms, and Meta has noticed posts on X, formerly Twitter, being used to make fabricated content appear more credible.
Meta shares its findings with X and other internet companies, saying a coordinated defense is needed to stem disinformation.
“As far as Twitter (X), they’re still going through a transition,” Agranovich said when asked whether Meta sees X acting on tips about deceptive activity.
“A lot of the people we’ve dealt with in the past there have moved on.”
X has gutted its trust and safety teams and scaled back the content-moderation efforts once used to tame disinformation, turning it into what researchers call a haven for disinformation.
False or misleading US election claims posted on X by Musk have garnered nearly 1.2 billion views this year, a watchdog said last week, underscoring the billionaire’s potential influence in the highly polarized White House race.
Researchers have raised the alarm that X is a hotbed of political disinformation.
They have also pointed out that Musk, who bought the platform in 2022 and is a staunch supporter of Donald Trump, appears to be misleading voters by spreading falsehoods on his personal account.
“Elon Musk is abusing his privileged position as the owner of a…politically influential social media platform to sow misinformation that breeds discord and mistrust,” warned Imran Ahmed, CEO of the Center for Countering Digital Hate.
Musk recently faced a firestorm of criticism for sharing with his followers a deepfake AI video featuring Trump’s Democratic rival, Vice President Kamala Harris.
Source: AFP