- Stability AI executive Ed Newton-Rex dramatically quit the buzzy AI startup this week.
- He opposed Stability's argument that it should be able to train AI using copyrighted works for free.
A fight is brewing over whether tech giants should pay for the copyrighted data needed to train their AI — and a dramatic resignation this week suggests that it is about to get personal.
Stability AI's head of audio, Ed Newton-Rex, quit the startup on Wednesday and launched an extraordinary broadside against his former employer, after Stability argued that it shouldn't have to pay creators to train its AI models on their work.
It's an argument shared by many of Stability's tech giant rivals, who have warned that billions could be wiped off the booming AI sector if they don't get their way.
Generative AI models like ChatGPT and Stable Diffusion are trained using enormous amounts of information scraped from the internet, including art, song lyrics, and books.
Artists and creators have argued these models have been trained using their work without permission and are now being used to replace them, with several suing AI companies like Stability and OpenAI.
The resignation of Newton-Rex, who built an AI music-maker trained on licensed music while at Stability, shows that those fears are shared by some of the people actually building AI models.
"Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works," Newton-Rex wrote in a post on X announcing his resignation.
"To be clear, I'm a supporter of generative AI. It will have many benefits — that's why I've worked on it for 13 years. But I can only support generative AI that doesn't exploit creators by training models — which may replace them — on their work without permission," he said.
The US Copyright Office is currently considering new rules for generative AI, and major tech firms have made it clear in their submissions to the regulator that they disagree with Newton-Rex's point of view, with the likes of Meta and Google arguing that making them pay for training data would impose a "crushing liability" on AI builders.
They have a lot to lose. VC giant and self-proclaimed "techno-optimist" Andreessen Horowitz has warned that billions of dollars could be wiped off the industry if AI giants were forced to pay for the data their models are trained on.
Giorgio Franceschelli, a computer scientist who has written extensively about AI and copyright, told Business Insider that it was reasonable to argue that training a generative AI model on copyrighted works falls within the boundaries of fair use, comparing it to a human "learning" from paintings found on Google and trying to reproduce them by hand.
However, he said that the arguments made by tech giants to justify scraping copyrighted material did not align with the principle behind fair use.
Even if the practice is technically legal, he added, that doesn't make it right.
"It might be legal, but it's still wrong to train models in this way," he said.
"These models are not, like us, trained on copyrighted material in order to enhance their opportunities; it is only in order to improve their capabilities, which are in turn exploited by the companies to make money out of them, and by users to make outputs that can threaten the market of the training-data authors."
The fact that criticisms like Franceschelli's are now being echoed by people inside the industry suggests that Newton-Rex's resignation could prove a defining moment for AI.
Perhaps worryingly for the companies racing to build AI, the ex-Stability employee called on others harboring similar doubts about the tools they are working on to speak out.
"I'm sure I'm not the only person inside these generative AI companies who doesn't think the claim of 'fair use' is fair to creators. I hope others will speak up, either internally or in public," he said.
Stability AI did not immediately respond to a request for comment from Business Insider.