Copyright, Craft, and Consent: What AI Art Means For Artists & Brands

There’s been a lot of talk recently about AI art; what it means for artists, what it means for the creative industry, and what it means for the future of both. Ever-searching for truth and perspective, Nicki has taken pen to paper (yes, the irony) to muse on all things craft, copyright, and consent...

Wednesday 14 December 2022
By Nicki Field
Opinion, Expertise, Talent

Images AI generated by Steve Scott.

Okay so disclaimer up top, I’m a tech novice when it comes to AI. And most things. I’ve often said to people that I’m the kind of person who can blow up a lightbulb by looking at it, or have an impromptu laptop crisis at the most annoying times. I try my damnedest to understand IT and tech things and speak with as much conviction as I can muster, but frankly, I’m winging it here. I’m reading, I’m absorbing, having conversations about AI, going down social media rabbit holes and, most of all, trying to get a grip on it. As, I suspect, are most of us.

The arrival of AI art feels faintly like déjà vu: it takes me back to when we all sat down in the studio many months ago and said, ‘right, NFTs then, what’s that all about?’ We’re likely too late; the train has left and we’re all trying to catch up.

The commercial artist community is always vocal when new vehicles for tech come along and change is afoot. We are used to (I’m tenuously counting myself in here as an Agent and avid defender of these rights) fighting for ourselves, our income, our value and our place in the market.

New things come along and the debate gets fierce. Nearly always the common denominator is tech. Platforms like Fiverr, which allow rates to be undercut at the fringes of the market, algorithms and platforms claiming to make human conversations redundant, and now, the actual tech itself: Artificial Intelligence.

Artists have had to face the bottom of the market falling out, for literally ever. With another big recession likely amidst global inflation and a cost of living crisis, is it any wonder there’s a big fear of the machines? Some parts of the industry may well become redundant if we aren’t cautious about protecting ourselves and evolving - if any big tech revolution before this one is a guide, it can happen. But only if we don’t think far enough ahead. There’s always a big initial hoo-haa (very technical term), a boom, and then a reckoning of sorts. Remember NFTs? I know they haven’t gone, but wow, that was a big curve and then a drop off. They’ll come back, but the first big bubble burst with the collapse of crypto.

I suspect we’re in a first wave where it feels like the developers, the tech gods and the machines are taking over - but I predict it won’t be sustained yet. There will be a peak, then the unregulated problems, and then comes the real bit: how we integrate with it as humans, as Artists, and use it to assist, not replace, us. And that’s the bit I’m really interested in.


Stable Diffusion

The big conversation of the last few days is Lensa and its use of Stable Diffusion. As with anything on Twitter, the conversation gets reductive, but it’s an interesting case in point. This is one of the first mass commercial cases of AI replacing a direct service that Artists offer. Lensa is an app that allows users to upload a photo of themselves and receive a highly realistic ‘Artist’ version of their portrait in return.

Understandably, there have been very loud cries that AI, specifically Lensa and the Stable Diffusion tech it uses, is ‘stealing’ art, replacing Artists for commercial gain and circumventing ethical payments to artists for referencing their work.

It’s a very heated and legitimate argument. However, ‘stealing’ art isn’t strictly what’s happening; it’s learning from your Art. In fact, it’s learning from the tangible part of all your development and skill and experience as an Artist. But I do think there’s a saving grace - you. Machine art doesn’t have the soul, it doesn’t have your unique human creative brain. It can’t conceptually solve a brief. It can’t thoughtfully mull over how best to visually represent a complex brief that needs to include a hierarchy of points, or speak to the nuances of a brand message, or capture a campaign concept. That bit can’t be machine-born. That’s the point of difference that we all must lean into.

Amongst the wider landscape of AI generated Art, you have stunning tools such as DALL-E 2 and Midjourney that are progressing in quality at a rate of knots. But here, there are also issues. It’s generally accepted in legal terms that copyright cannot exist in pure AI generated imagery, as there’s no human ‘hand’ in creating that art. ‘Copyright’ is a right that’s reserved purely for human creative outputs. Therefore you cannot create a pure AI image and exploit it, sell it, or license it for commercial gain - as no rights can exist in it. Copyright is the currency that commercial artists trade in when it comes to licensing rights for their work, and therefore it is key.


"How can you be sure there are no legal issues in what you are outputting?"

Rights and Usage

To a client, using AI generated art might initially sound beneficial, as there’s no licensing to be accounted for and no Artist bill to foot. Whilst this is true, it’s more complex than that, depending on the use you’re considering. If you are self-publishing a novel with AI art on the cover then you might be okay. If you are publishing a novel via a global publisher with the intention of hitting a best seller list with AI art on the cover, then I’m uncomfy. Any imagery that is used commercially is vetted throughout the process (hello Balenciaga, but let’s not go there), not only for creative approval, but for clearance on any trademarks and any other potential third-party infringements, i.e. reference material. The key part of this process is knowing where the references came from. If they are leaned on enough, then you have to clear them - this is the very premise of Artists using reference. This is why stock libraries exist - and licensing agreements and contracts. We’re coming back to the currency of commercial imagery: outside of the skill taken to produce it, the rights that inherently exist in any human-made creation.

My current fascination with AI, because I am a rights and usage nerd, is: how do you know it’s clear to use? Part of my expertise is copyright and usage - but here, I’m confused. How do you really know there’s no infringement? There’s probably not, but you can’t be sure. This will absolutely be something the generators will be working on, and it speaks to the partnership between Shutterstock and OpenAI, the makers of DALL-E 2, where Shutterstock contributed their entire library to the machine learning development. Shutterstock have announced they are planning to introduce licensable AI images to their library, on the basis that they have records of the original sets of images and data used to train them. And, importantly, the original photographers will be compensated. This is an interesting development and one of the checks and balances that long-term I suspect (and hope) will become common practice. At the moment though, with AI imagery at your fingertips, how can you be sure there are no legal issues in what you are outputting? Especially when a lot of the terms of agreement of these generators incorporate ownership of the input material that’s fed to them by their users.

Take Lensa users uploading their selfies and profile pics, or generators crawling and learning from the billions of images available on the global web: yes, everything AI generated is original - but it’s derivative, because that’s the only way it can learn. What happens if an AI generates a human image that’s ‘original’ and ‘invented’ but by pure coincidence looks like a real person? Can you prove it’s not? Or what if it is a real person, because you uploaded your selfie and didn’t read the terms and conditions? Model releases and laws around protecting likenesses and endorsement within commercial use exist for a reason. These current models of consent and legitimate use will evolve and change, and it will be interesting to see how this unfolds.

Other recent examples of questionable ethical practice include Artists who have had their work fed into AI generators so deliberately and repeatedly that there are swathes of AI generated artwork out there passing off in their style. There’s no ownership in style - but this unethical practice majorly impacts Artists’ livelihoods.


"It can’t be denied that AI is already incredibly technically proficient. But can it tackle a complex brand message visually or combine the many different components to a brief in the way a highly skilled illustrator could? "

Brand messaging

Right now I can’t foresee a multi-million pound advertising budget risking AI generated imagery in place of a commercial Artist. No business affairs or legal dept. at an advertising agency will gamble on an IP issue that so far looks like a minefield in the wild west. In my experience of commercial lawyers, they like to deal in only one thing - certainty.

It can’t be denied that AI is already incredibly technically proficient. But can it tackle a complex brand message visually, or combine the many different components of a brief in the way a highly skilled illustrator could? Yes, there may be an editorial use on the way that poses a challenge for illustrators, but I can’t see AI replacing the highly skilled business-to-business need that brands and advertisers have. The human relationships, communication and trust, visibility on the process, respect for human craft - my hope is that this will become even more valued. The market may get tighter, it always does, but the Artists that can hone their craft and their client service will remain irreplaceable.

Let’s come back to the NFT analogy - because they’ll be back. After the gold rush and peak hysteria of enthusiasm, the human ways in which they’ll be used will become smarter, more integrated and more sustained. I think the most interesting future of AI generated art is how it can assist artists. For one, I’d like to understand how that affects IP issues and the rights born into generative art. That’ll be my next internet rabbit hole.

I think it’s worth considering too that being in fear of something never makes it go away. I’m curious about AI’s commercial benefits as this path evolves. How can the artist community learn to view it as a tool, not a competitor? Experimentation is happening, but only at the very forefront of the market.

There’s a school of thought out there that not embracing it might do more harm than good, and I’m inclined to agree.

Nicki Field

Joint MD & Head of Artist Management, Global
