Artists under siege from artificial intelligence (AI) that studies their work and then replicates their styles have teamed up with university researchers to thwart such copycat activity.
American illustrator Paloma McClain went into defense mode after learning that several AI models had been “trained” on her art, with no credit or compensation given to her.
“It bothered me,” McClain told AFP.
“I believe that truly meaningful technological progress is made in an ethical way and elevates all people rather than working at the expense of others.”
The artist turned to free software called Glaze, created by researchers at the University of Chicago.
Glaze essentially outwits AI models during training, modifying pixels in ways imperceptible to human viewers but that make a digitized artwork look dramatically different to the AI.
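A minimal sketch of that bounded-perturbation idea, in Python. Everything here is illustrative: the real Glaze computes its perturbation by optimizing against an AI feature extractor, whereas this sketch nudges pixels at random, only to show the “small, bounded change” constraint.

```python
import random

def cloak_image(pixels, epsilon=3, seed=0):
    """Nudge each 0-255 pixel value by at most +/-epsilon.

    Illustrative stand-in for a style cloak: the real tool optimizes
    the perturbation against an AI model; here it is random noise.
    """
    rng = random.Random(seed)
    cloaked = []
    for p in pixels:
        delta = rng.randint(-epsilon, epsilon)
        cloaked.append(max(0, min(255, p + delta)))  # stay in 0..255
    return cloaked

original = [120, 121, 119, 200, 15]
cloaked = cloak_image(original)
# Each pixel moves by at most epsilon, far too little for a viewer to notice.
assert all(abs(a - b) <= 3 for a, b in zip(original, cloaked))
```

The point of the bound is the asymmetry: a change of a few intensity levels per pixel is invisible to a person, but a perturbation chosen adversarially at that same magnitude can shift what a model “sees” in the image.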
“We’re basically providing technical tools to help protect human creators from invasive and abusive AI models,” said computer science professor Ben Zhao of the Glaze team.
Created in just four months, Glaze builds on technology previously used to disrupt facial recognition systems.
“We were working at a very fast pace because we knew the problem was serious,” Zhao said of the rush to defend artists from software imitators.
“A lot of people were hurting.”
The AI giants do have agreements to use data for training in some cases, but most of the digital images, audio and text used to shape how such software thinks have been scraped from the internet without explicit consent.
Since its launch in March 2023, Glaze has been downloaded more than 1.6 million times, according to Zhao.
Zhao’s team is working on a Glaze enhancement called Nightshade that boosts defenses by confusing the AI, say by having it interpret a dog as a cat.
“I think Nightshade will have a noticeable effect if enough artists use it and put enough poisoned images into the wild,” McClain said, meaning readily available online.
“According to the Nightshade researchers, it wouldn’t take as many poisoned images as one might think.”
Zhao’s team has been approached by several companies that want to use Nightshade, according to the Chicago academic.
“The goal is for people to be able to protect their content, whether it’s individual artists or companies with a lot of intellectual property,” Zhao said.
Startup Spawning has developed Kudurru software that detects attempts to harvest a large number of images from an online space.
An artist can then block access or send images that don’t match what’s requested, tainting the pool of data used to teach the AI what’s what, according to Spawning co-founder Jordan Meyer.
More than a thousand websites have already been integrated into the Kudurru network.
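Detecting bulk harvesting of this kind is typically rate-based. The sketch below is a generic sliding-window detector, not Spawning’s actual algorithm; the class name, threshold and window size are all invented for illustration.

```python
import time
from collections import defaultdict, deque

class ScrapeDetector:
    """Flag clients that pull too many images in a short time window.

    Illustrative only: a plain sliding-window rate check, not the
    logic Kudurru actually uses. Threshold and window are made up.
    """

    def __init__(self, max_requests=100, window_s=60.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self._hits = defaultdict(deque)  # client id -> request timestamps

    def is_scraper(self, client_id, now=None):
        """Record one request; report whether the client looks like a scraper."""
        now = time.monotonic() if now is None else now
        q = self._hits[client_id]
        q.append(now)
        while q and now - q[0] > self.window_s:  # drop stale requests
            q.popleft()
        return len(q) > self.max_requests
```

Once a client is flagged, a site could block it outright or, as the article describes, answer with images that don’t match the request, poisoning the scraper’s dataset.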
Spawning also launched haveibeentrained.com, a website that has an online tool to determine if digitized works have been fed into an AI model and allows artists to opt out of such use in the future.
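An opt-out mechanism can be sketched as a registry keyed on a fingerprint of each work. This is purely illustrative: haveibeentrained.com matches works against known training datasets, and the hash-based scheme below is an invented stand-in for that matching.

```python
import hashlib

# Illustrative opt-out registry keyed on a content fingerprint.
# The hashing scheme is an invented stand-in, not Spawning's method.
_opted_out = set()

def register_opt_out(work_bytes):
    """Record that a digitized work should not be used for AI training."""
    _opted_out.add(hashlib.sha256(work_bytes).hexdigest())

def is_opted_out(work_bytes):
    """Check a work against the registry before adding it to a dataset."""
    return hashlib.sha256(work_bytes).hexdigest() in _opted_out

register_opt_out(b"artist-supplied image bytes")
assert is_opted_out(b"artist-supplied image bytes")
assert not is_opted_out(b"some other image")
```

A scheme like this only works if model trainers actually consult the registry, which is why opt-out tools are paired with the detection and poisoning defenses described above.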
As defenses mount for images, researchers at Washington University in Missouri have developed AntiFake software to thwart AI voice cloning.
AntiFake enhances digital recordings of people speaking by adding noises that are inaudible to humans but that make it “impossible to synthesize a human voice,” said Zhiyuan Yu, the PhD student behind the project.
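The “inaudible noise” idea can be illustrated as a bounded perturbation of audio samples, with a signal-to-noise check confirming the change is tiny. Random noise here stands in for AntiFake’s actual adversarial optimization; the noise level, tone and function names are all illustrative.

```python
import math
import random

def protect_recording(samples, noise_level=0.002, seed=0):
    """Add a tiny bounded perturbation to audio samples in [-1.0, 1.0].

    Illustrative only: AntiFake optimizes its perturbation against
    speech-synthesis models; this sketch uses random noise to show
    the "inaudible to humans" amplitude bound.
    """
    rng = random.Random(seed)
    protected = []
    for s in samples:
        d = rng.uniform(-noise_level, noise_level)
        protected.append(max(-1.0, min(1.0, s + d)))  # stay in valid range
    return protected

def snr_db(clean, noisy):
    """Signal-to-noise ratio of the added perturbation, in decibels."""
    signal = sum(s * s for s in clean)
    noise = sum((a - b) ** 2 for a, b in zip(clean, noisy))
    return 10 * math.log10(signal / noise)

# One second of a 440 Hz tone at a 16 kHz sample rate.
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
protected = protect_recording(tone)
assert snr_db(tone, protected) > 30  # added noise sits far below the signal
```

As with image cloaking, the asymmetry is the point: noise dozens of decibels below the voice is imperceptible to a listener, yet an adversarially chosen perturbation at that level can derail a synthesis model.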
The program aims to go beyond simply stopping unauthorized AI training and prevent the creation of “deepfakes” — fake soundtracks or videos of celebrities, politicians, relatives or others that show them doing or saying something they didn’t do.
A popular podcast recently reached out to the AntiFake team for help in stopping its productions from being hijacked, according to Zhiyuan Yu.
The freely available software has so far been used for recordings of people speaking, but could also be applied to songs, the researcher said.
“The best solution would be a world in which all data used for AI is subject to consent and payment,” argued Meyer.
“We hope to push developers in that direction.”
Source: AFP