It looks like the photography world is starting to get a little sick of AI-generated imagery. Or at least, much of it is sick of it being classed as “photography”. Last week, UK-based model and photographer website PurplePort banned AI-generated images from its website, and now it seems that Getty Images/iStock and the “free” stock image website Unsplash have followed suit.
Both platforms have made announcements in the past few hours stating in no uncertain terms that AI-generated images are banned. Both companies also stress that the ban applies only to AI-generated images, not to human-created 3D renders or to images enhanced or manually created from scratch with digital editing tools (like Photoshop and Illustrator).
Getty Images CEO Craig Peters told The Verge that “the ban was prompted by concerns about the legality of AI-generated content and a desire to protect the site’s customers”. Getty’s argument is essentially that copyright ownership of such images is still up in the air. Is it owned by the creator of the AI that generated the image? Or the person who typed in the prompt that told the AI what to create? And what about the source imagery on which the AI based the result?
Peters didn’t confirm to The Verge whether or not Getty had actually received any legal challenges over AI-generated imagery, although as you can see it’s a potential legal minefield. So, as is often the case with such confusion, it’s less hassle to just ban it. Getty doesn’t really see AI artwork as a threat to photography, viewing it simply as the latest example of technology expanding the possibilities of image creation. Right now, though, the company doesn’t believe it’s in its community’s best interests to allow it on the site.
Unsplash, which has faced its own share of legal issues in the past, has also banned AI-generated imagery, although this isn’t much of a surprise given that it was acquired by Getty last year. This also explains why the announcements from Getty/iStock and Unsplash are so similar in their wording – both specifically call out platforms such as DALL-E, Midjourney and Stable Diffusion.
With AI-generated imagery becoming more and more difficult to detect, it will be interesting to see how well the platforms are able to enforce the new rules. The technology is still in its very early days, yet it has already improved to the point where some results are quite convincing. It won’t be long before it’s impossible to tell what is real and what is not.
[via The Verge]