In a story that will fan the flames of concern about the safety of generative AI, TikTok’s parent company has admitted some of its training models have been sabotaged by one of its interns.
ByteDance went on the record with the news, after rumors of potential disruption hit Chinese news outlets.
ByteDance is currently in the firing line over the alleged impact its platform has on younger users, and it has also announced a move towards AI-only moderation – which, coupled with this latest news, will have naysayers questioning the safety of its content.
Malicious Interference
The details are scant, but the intern behind the alleged sabotage was fired in August. The company said in a statement on its news aggregator service, Toutiao, that the student was on its commercial technology team.
ByteDance said: “The intern involved maliciously interfered with the model training tasks of the commercial technology team’s research project.” It added that the intern’s university and industry associations had been informed of their conduct.
Quashing Rumors
The statement, reported on by The Guardian, adds that news reports have exaggerated the impact. Claims circulated that 8,000 GPUs were affected and that the sabotage caused tens of millions of dollars in losses.
Not so, says ByteDance, sharing that the attack “…did not affect the formal commercial projects and online business, nor did it involve other businesses such as ByteDance’s large models.”
ByteDance’s AI Ambitions
PCMag adds that ByteDance currently has “11 different open-source AI models including an audio generator, a fast text-to-video generator, and a higher-resolution video generator.”
The company launched an AI creative assistant feature earlier this year.
AI Under Scrutiny
The news of the sabotage comes as the AI industry faces growing scrutiny – and that is aside from the national security concerns ByteDance alone has attracted.
Last month, Gavin Newsom, the Governor of California, vetoed what would have been one of the first AI regulations in the US.
However, there is a groundswell of support for AI regulation from both those in government and those working in AI. The fact that this sabotage was carried out not only by an insider but by a student will strengthen the argument for frameworks and safeguards.