Even if your child’s photos are protected behind strict privacy settings, AI models are still likely training on them, according to a new report.
The negative press surrounding AI is nothing if not persistent. It feels like every other day brings a new story about an AI misstep with real-life consequences for everyday people.
Now, it’s been reported that AI models are indeed soaking up all the information they can find online, including photos of children from around the web.
Human Rights Watch: Children’s Photos Are Being Used to Train AI
The report from Human Rights Watch (HRW) found 190 photos of Australian children in a widely used database of images scraped from the web, which is used to train AI models like popular image generators.
This is actually the second such report on AI training on photos of children. HRW released a similar report in June, which found 170 photos of Brazilian children being used in the same way.
“Children should not have to live in fear that their photos might be stolen and weaponized against them. The Australian government should urgently adopt laws to protect children’s data from AI-fueled misuse.” – Hye Jung Han, Human Rights Watch
Even worse, the data attached to the photos sometimes included identifying information, such as the child’s name and the location where the photo was taken. Researchers also found that photos were scraped from content with strict privacy settings, like unlisted YouTube videos.
AI Training Is Forever
The database in question, LAION-5B, is maintained by LAION, a non-profit, volunteer organization that admits the misuse of children’s photos to train AI models is a “larger and very concerning issue.” The organization says it does its best to shut down these issues quickly.
Unfortunately, that isn’t necessarily enough to keep these models in check.
“Current AI models cannot forget data they were trained on, even if the data was later removed from the training data set.” – Hye Jung Han, Human Rights Watch
That means that even if a photo has been entirely erased from the internet and scrubbed from every database you can find, any AI model already trained on it still retains what it learned from that image.
The Legality of AI
Generative AI models develop fast. In fact, their evolution and adoption into everyday technology has been so rapid that regulatory bodies haven’t been able to keep up.
Now that it’s been a few years, though, the law is starting to catch up. Australia is currently voting on reforms to the Children’s Online Privacy Code, which would include more stipulations to protect against AI misuse, but it isn’t the only example.
In fact, US record labels including Sony, Universal, and Warner are suing AI music generators for copyright infringement, arguing that the models were trained on music the labels own.
All that to say, the wild west days of AI can only last so long before regulators and courts start scrutinizing how these models actually operate. That is, of course, if the technology doesn’t soak up all the electricity first.