Source: AFP
Social media giant Meta says its effort to prevent coordinated disinformation campaigns created with ever-improving generative artificial intelligence is working, despite widespread concerns.
Meta’s latest study of “coordinated inauthentic behavior” on its platforms comes as fears mount that generative artificial intelligence will be used to trick or confuse people in upcoming elections worldwide, particularly in the United States.
“What we’ve seen so far is that our industry’s existing defenses, including our focus on behavior rather than content to address hostile threats, are already in place and appear to be effective,” David Agranovich, Meta’s director of threat disruption policy, said in a press update on Wednesday.
“We don’t see generative AI being used in terribly sophisticated ways, but we know that these networks will continue to evolve their tactics as this technology changes.”
Facebook has been accused for years of being used as a powerful platform for election disinformation.
Russian agents used Facebook and other US-based social media to stoke political tensions in the 2016 election won by Donald Trump. The European Union is currently investigating Meta’s Facebook and Instagram for allegedly failing to tackle disinformation ahead of June’s European elections.
But experts now also fear an unprecedented deluge of disinformation from bad actors on Meta’s apps, given how easily generative AI tools such as ChatGPT or the Dall-E image generator can produce content on demand in seconds.
Meta said it has seen “threat actors” use AI to create fake photos, videos and text, but not realistic images of politicians, according to the report.
Generative AI has been used to create profile pictures for fake accounts across Meta’s family of apps, and a deceptive network based in China apparently used the technology to create posters for a fictitious pro-Sikh movement called Operation K, the report said.
Meanwhile, an Israel-based network posted AI-generated comments about Middle East politics on Facebook pages of media organizations and public figures, Meta reported.
Comparing the tactic to spam, Meta said some of those comments appeared on pages of US lawmakers and drew critical replies from real users, who called them propaganda.
Meta attributed the campaign to a political marketing firm based in Tel Aviv.
“This is an exciting space to watch,” said Mike Dvilianski, Meta’s head of threat research. “So far, we have not seen sophisticated use of generative AI tools by adversaries.”
The report also showed that a Russia-linked group called “Doppelganger” continued trying to use Meta’s apps to undermine support for Ukraine, but that those attempts were being blocked on the platform.
“Doppelganger has ramped up its attempts over the past 20 months while remaining crude and largely ineffective at building authentic social media audiences,” according to Meta.
Meta also removed small clusters of inauthentic Facebook and Instagram accounts originating from China that targeted the Sikh community in Australia, Canada, India, Pakistan and other countries, the report showed.
Posts on these fake accounts called for pro-Sikh protests.