Source: AFP
When Australian politician Brian Hood noticed that ChatGPT was telling people he was a convicted felon, he took the old-fashioned route and threatened to take legal action against the AI chatbot’s creator, OpenAI.
His case raised a potentially huge problem with such AI programs: what happens when they get things wrong in a way that causes harm in the real world?
Chatbots are based on artificial intelligence models trained on massive amounts of data, and retraining them is extremely expensive and time-consuming, so scientists are looking for more targeted solutions.
Hood said he spoke to OpenAI, which “wasn’t particularly helpful”.
But his complaint, which made global headlines in April, was largely resolved when a new version of the software was released and no longer repeated the falsehood, though he never received an explanation.
“Ironically, the huge publicity my story received really corrected the public record,” Hood, mayor of the Victorian town of Hepburn, told AFP this week.
OpenAI did not respond to requests for comment.
Hood might have struggled to build a defamation case, since it is unclear how many people could have seen the results on ChatGPT, or even whether they would have seen the same results.
However, companies like Google and Microsoft are quickly rewiring their search engines with AI technology.
It seems likely they will be inundated with takedown requests from people like Hood, as well as over copyright infringement.
While they can delete individual records from a search engine index, things are not so simple with AI models.
To respond to such issues, a group of scientists is forging a new field called “machine unlearning” that tries to train algorithms to “forget” offending pieces of data.
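How would a model “forget”? The article names no specific technique, but one simple unlearning baseline reported in the research literature fine-tunes an already-trained model by continuing ordinary training on the data it should keep while reversing the loss on the data it should forget. A minimal, illustrative PyTorch sketch, with the model and data entirely fabricated for the example:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Toy stand-in for an already-trained network (hypothetical example).
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = optim.SGD(model.parameters(), lr=1e-3)

# Fabricated data: examples the model should keep serving well vs. forget.
retain_x, retain_y = torch.randn(64, 10), torch.randint(0, 2, (64,))
forget_x, forget_y = torch.randn(8, 10), torch.randint(0, 2, (8,))

for step in range(100):
    opt.zero_grad()
    # Ordinary gradient descent on the data to be retained...
    keep_loss = loss_fn(model(retain_x), retain_y)
    # ...and gradient *ascent* (negated loss) on the data to be forgotten.
    forget_loss = -loss_fn(model(forget_x), forget_y)
    (keep_loss + forget_loss).backward()
    opt.step()
```

The research challenge is doing this without the cost of full retraining, while verifying that the targeted data really is gone and that accuracy elsewhere has not collapsed.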
“Cool Tool”
An expert in the field, Meghdad Kurmanji from the University of Warwick in Britain, told AFP that the topic had started to gain real traction in the last three or four years.
Among those taking notice is Google DeepMind, the AI arm of the trillion-dollar California behemoth.
Google experts co-authored a paper with Kurmanji, published last month, that proposed an algorithm for removing selected data from the large language models that underpin ChatGPT and Google’s Bard chatbot.
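The article does not spell out the paper’s procedure. One family of approaches in this area is teacher-student: a copy of the original model (the “student”) is trained to keep matching the frozen original (the “teacher”) on data to be retained, while being pushed away from the teacher’s predictions on data marked for deletion. A hedged sketch of that general objective, with every name and number illustrative rather than drawn from the paper:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def distill_kl(student_logits, teacher_logits):
    """KL divergence between student and teacher predictive distributions."""
    return F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")

teacher = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
student = copy.deepcopy(teacher)   # unlearning starts from the original model
for p in teacher.parameters():
    p.requires_grad_(False)        # the teacher stays frozen throughout

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
retain_x = torch.randn(64, 10)     # fabricated stand-in batches
forget_x = torch.randn(8, 10)

for step in range(100):
    opt.zero_grad()
    # Stay close to the teacher on retained data...
    keep = distill_kl(student(retain_x), teacher(retain_x))
    # ...while maximising divergence from it on the forget set.
    lose = -distill_kl(student(forget_x), teacher(forget_x))
    (keep + lose).backward()
    opt.step()
```

In published methods the two objectives are usually balanced or alternated so the forgetting term does not wreck the rest of the model; this sketch only shows the core idea.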
Google also launched a contest in June for others to improve unlearning methods, which has so far attracted more than 1,000 participants.
Kurmanji said unlearning could be a “very good tool” for search engines to handle takedown requests under data privacy laws, for example.
He also said the algorithm scored well in tests for removing copyrighted material and correcting bias.
However, Silicon Valley elites are not universally enthused.
Yann LeCun, AI chief at Facebook owner Meta, which is also pouring billions into AI technology, told AFP that the idea of machine unlearning was far down his list of priorities.
“I’m not saying it’s useless, uninteresting, or wrong,” he said of the paper authored by Kurmanji and others. “But I think there are more important and urgent matters.”
LeCun said he had focused on making algorithms learn faster and retrieve facts more efficiently rather than teaching them to forget.
“No Panacea”
However, there seems to be broad acceptance in academia that AI companies will need to be able to remove information from their models to comply with laws such as the EU’s General Data Protection Regulation (GDPR).
“The ability to remove data from training sets is a critical aspect of moving forward,” said Lisa Given from RMIT University in Melbourne, Australia.
But she pointed out that so much was unknown about how the models worked, and even what datasets they were trained on, that a solution could be a long way off.
Michael Rovatsos of the University of Edinburgh said similar technical issues could arise, particularly if a company were bombarded with takedown requests.
He added that unlearning did nothing to resolve broader questions about the AI industry, such as how data is collected, who benefits from its use, or who takes responsibility for algorithms that cause harm.
“The technical solution is not the panacea,” he said.
With scientific research in its infancy and regulation almost non-existent, Brian Hood, who remains an AI enthusiast despite his experience with ChatGPT, suggested we are still in the age of old-fashioned solutions.
“With these chatbots generating garbage, users just have to check everything,” he said.