Source: AFP
The UK general election is being closely watched after strong warnings that rapid advances in cyber technology, particularly artificial intelligence, and growing friction between major nations threaten the integrity of the landmark 2024 vote.
“These unscrupulous and lawless technological developments pose a huge threat to us all. They can be weaponized to discriminate, misinform and divide,” Amnesty International chief Agnes Callamard said in April.
The UK election on July 4, four months before the United States votes, will be seen as the “guinea pig” for election security, said Bruce Snell, cybersecurity strategist at US firm Qwiet AI, which uses AI to prevent cyber attacks.
While artificial intelligence has grabbed most of the headlines, more traditional cyber attacks remain a significant threat.
“It’s disinformation, it’s attacks on parties, it’s data leaks and attacks on specific people,” said Ram Elboim, head of cybersecurity firm Sygnia and a former senior officer in Israel’s Unit 8200 cyber-intelligence unit.
State actors are expected to be the main threat, with the UK already issuing warnings about China and Russia.
“The main things are maybe the promotion of specific candidates or agendas,” Elboim said.
“The second is to create some kind of internal instability or chaos, which will affect public sentiment.”
The United Kingdom has an advantage over the United States because of the short time between the announcement and the holding of the election, giving attackers little time to develop and execute plans, Elboim said.
It is also less vulnerable to attacks on the voting infrastructure as voting is not automated, he added.
Deepfakes
However, hacking remains a threat and the UK has already accused China of being behind an attack on the Electoral Commission.
“You don’t need to disrupt the main voting system,” Elboim explained. “For example, if you disrupt a party, their computers, or a third party that affects that party, that’s something that can have an impact.”
Individuals are more at risk of being targeted, he added, since any embarrassing information could be used to blackmail candidates.

More likely, though, attackers would simply leak stolen information to shape public opinion, or use a compromised account to impersonate the victim and spread misinformation.
Former Conservative Party leader Iain Duncan Smith, a fierce critic of Beijing, has already claimed that Chinese state actors have impersonated him online, sending fake emails to politicians around the world.
But the increased scope for using artificial intelligence to create and distribute disinformation is the real unknown in this year’s election, Snell said.
The spread of “deepfakes” — fake videos, images or audio — is a primary concern.
“The levels of potential for fakery are just enormous. It’s something we certainly didn’t have in the last election,” Snell said, calling the UK a “guinea pig” for the 2024 votes.
He highlighted software that can recreate someone’s voice from a 30-second sample and how it could be misused.
Labour’s health spokesman Wes Streeting said he was the victim of a deepfake hoax, in which he appeared to insult a colleague.
Bot farms
Snell advised authorities to focus on a “shortcut” solution to “get awareness out there, get people to understand that this is an issue.”
Other software can be used to create fake images and videos, despite filters in many AI applications designed to prevent real people from being depicted.
“Artificial intelligence, while very sophisticated, is also extremely easy to trick” into creating images of real people, Snell said.
Artificial intelligence is also being used to create “bots”, which automatically flood social media with comments to shape public opinion.
“Bots were very easy to spot. You would see things like the same message being repeated and parroted by multiple accounts,” Snell said.
“But with the sophistication of artificial intelligence now… it’s very easy to create a bot farm that can have 1,000 bots and each one has a different communication style,” he added.
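The older, easy-to-spot pattern Snell describes can be illustrated with a minimal sketch. This is not from the article: the function name, thresholds and sample posts are all hypothetical, and real bot detection relies on far richer signals than verbatim repetition.

```python
# Illustrative sketch of the naive heuristic Snell describes: flagging
# groups of accounts that parrot the same message verbatim.
# All names and data below are made up for the example.
from collections import defaultdict

def flag_parroting_accounts(posts, min_accounts=3):
    """Return groups of accounts that posted an identical message.

    posts: iterable of (account, message) pairs.
    min_accounts: how many distinct accounts must repeat a message
    before the group is flagged.
    """
    by_message = defaultdict(set)
    for account, message in posts:
        # Normalise lightly so trivial case/whitespace tweaks don't hide copies.
        key = " ".join(message.lower().split())
        by_message[key].add(account)
    return [accounts for accounts in by_message.values()
            if len(accounts) >= min_accounts]

posts = [
    ("@a1", "Vote for candidate X, the only honest choice!"),
    ("@b2", "vote for candidate X,  the only honest choice!"),
    ("@c3", "Vote for candidate X, the only honest choice!"),
    ("@d4", "I quite like the weather today."),
]
print(flag_parroting_accounts(posts))  # one flagged group of three accounts
```

A farm of AI-driven bots, each phrasing the same talking point differently, would sail straight past this check, which is exactly the shift Snell is warning about.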
While software that can detect AI-generated videos and images with a “high level of skill” already exists, such tools are not yet widely used enough to curb the problem.
Snell believes the AI industry and social media companies should therefore take responsibility for curbing misinformation “because we’re in a brave new world where lawmakers have no idea what’s going on.”