
Since the onset of generative AI’s surge in popularity, one application that has proven both fun and easy to use is the AI text-to-media generator. Everyone from business professionals to the average person can create anything they envision, from an image to a video, with a quick text prompt and the touch of a button.
Also: 10 key reasons AI went mainstream overnight – and what happens next
However, behind what may seem like a harmless experience lie serious implications for artists.
The issue
Many of the most popular AI media generators on the market, including the one that started the AI text-to-image generator craze, OpenAI’s DALL-E 2, trained their models by scraping the entire internet — including all of the original work from artists — without asking for their explicit permission.
The implication is that artists’ bodies of work, including photographs, paintings, poems, books, and songs, can be easily replicated without authorization. Through this training, artists lose control over how their work is reproduced, ownership of their creative style, and any share of the revenue that AI companies make from reproducing their ideas.
Also: How to fix AI’s fatal flaw – and give creators their due (before it’s too late)
As a result, the relationship between AI companies, their models, and artists has been extractive: the models are built on artists’ life’s work, while the profits flow to the companies. Many artists have spoken out about the issue, demanding that creators be considered when these models are built and compensated fairly for their work.
Ed Newton-Rex, himself a composer who has worked in the AI music space since 2010, founded Fairly Trained in 2024, a nonprofit that certifies generative AI companies for their training-data practices. He left his previous role at Stability AI, where he led the team that built Stable Audio, over the company’s position on training on people’s art without licensing it from the artists.
In a fireside chat at South by Southwest (SXSW), Newton-Rex laid out the larger copyright issue: using artists’ work without permission is not only unfair, it also floods an already saturated marketplace with competitors built on the artists’ own ideas.
“In generative AI, you have companies that are [taking creators’ work] with billions and billions of dollars, often against creators’ wishes, and using [it] to create these hyper, hyper-scalable competitors to those creatives,” said Newton-Rex.
Also: Brace yourself: The era of ‘citizen developers’ creating apps is here, thanks to AI
In the U.S., AI companies can legally train their models on copyrighted materials under the doctrine of fair use, which holds that you are not violating copyright law if you use an existing work to inform the creation of something new.
So, with existing law not explicitly on creatives’ side, is there a way for creatives and AI systems to coexist in a mutually beneficial arrangement? The short answer is yes, and the solution may lie in licensing.
Steps companies can take
AI text-to-media generators offer clear accessibility benefits, enabling anyone to create regardless of skill or resources. Ideally, though, they should support creators and enrich the ecosystem — not replace it. The first step toward that goal is simple, according to Newton-Rex.
“Firstly, you can’t steal stuff,” said Newton-Rex.
Some companies have already started to take this approach. For example, in 2023, Getty Images launched Generative AI by Getty Images, which was trained only on Getty’s robust library of stock images and provides ongoing revenue to the contributors whose work it was trained on.
Also: Will synthetic data derail generative AI’s momentum or be the breakthrough we need?
Adobe took a similar approach with its Firefly generative model, which is also designed to be commercially safe. To train the model, Adobe used only Adobe Stock images, openly licensed content, and public domain content, and it compensates the creators whose work was used in the training set.
However, most companies don’t take this approach because of the inherent challenges: sourcing and assembling a clean, licensed dataset can be costly and time-consuming. That slowdown is a real deterrent for AI companies racing to release the next model and stay competitive in the AI race.
“It does slow you down, but I think clearly you ultimately end up in the same place, and you do it without breaking the law and without turning the entire creative industry and the world of creatives against you, which I think is actually another huge misstep for AI companies,” said Newton-Rex.
Also: AI is ruining Pinterest. Here’s why it’s such a big problem
Another factor worth considering is the use case for these models. Since creating content is now easier than ever, it can be tempting to flood media platforms with AI-generated music, images, and videos. Ultimately, Newton-Rex warned, this could dilute the revenues and royalties that people earn for their work.