AI art isn't exploitative — and that's the problem

April 5, 2023

With the advancement of artificial intelligence (AI), the computer science world has collided with the visual art world — and it's not pretty. AI-generated art has progressed to the point where it is barely distinguishable from human-generated art. This presents an issue: Many artists may soon be out of a job. Why invest in human work when a computer could do a similar job in a fraction of the time and for a fraction of the cost?

In response to this looming crisis, artists and non-artists alike have attempted to halt the current path of AI art development. Their efforts are commonly supported by the claim, echoed in Nicholas Tillinghast's Feb. 8 article in The Miscellany News, that AI image generators don't actually generate anything new; AI art is only the result of the art that is fed into them. So when an AI is trained on an image without the creator's consent, that's allegedly theft and exploitation. I believe this claim is misguided.

AI image generation is often based on artificial neural networks, which are modeled after biological neural networks like the human brain. According to IBM, artificial neural networks are made up of interconnected nodes, which are mathematical or computational functions. The first outputs of an AI will be pretty bad, but as the AI is fed more labeled and unlabeled images, it can identify its errors and adjust its network to try to fix them — this is called “training.” According to AssemblyAI, AI art generators are typically trained via diffusion models, which feed the AI an image of Gaussian noise (static generated according to a normal distribution) and train it to denoise that image. This means that ingrained in the image generation is a level of spontaneity, because there are nearly endless images of static, each of which can be denoised in a unique way.
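For the technically curious, the “noising” half of this process can be sketched in a few lines of Python. This is a toy illustration of the general idea, not code from any real generator; the function names and numbers here are my own assumptions.

```python
import math
import random

# Toy sketch of the forward diffusion process: a "picture" (here just a
# flat list of numbers) is repeatedly blended with Gaussian noise. A real
# generator is trained to run these steps in reverse, turning static back
# into an image. All names and values below are illustrative only.

random.seed(0)

def add_noise(pixels, alpha):
    """One forward step: mix the image with Gaussian noise.
    alpha=1 keeps the image unchanged; alpha=0 leaves pure static."""
    return [math.sqrt(alpha) * p + math.sqrt(1 - alpha) * random.gauss(0, 1)
            for p in pixels]

image = [random.random() for _ in range(64)]  # toy 8x8 "image", flattened
x = image
for alpha in (0.9, 0.7, 0.5, 0.3):           # repeated noising steps
    x = add_noise(x, alpha)

# After enough steps, x is statistically close to pure Gaussian static.
# Generation starts from such static and learns the reverse (denoising)
# direction — which is why different starting noise yields different images.
```

Because each run starts from a different field of static, the same prompt can yield a different picture every time, which is the “spontaneity” described above.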

When you ask DALL-E 2, a leading AI image generator, for “a dog riding a skateboard in the style of a Van Gogh painting,” it isn't blending together images of dogs and skateboards and Van Gogh paintings. The AI doesn't even remember the original images; it only “remembers” the modifications its neural network made after being introduced to those images. It can learn that “the style of a Van Gogh painting” denotes certain colors and textures, “skateboard” represents a roughly oval shape with some circles connected below, “riding” means that the dog will be above the skateboard and so on, albeit in its own language.

An AI being trained on the works of Van Gogh is closely analogous to a human viewing and comprehending Van Gogh. Similar mathematical patterns develop in our minds, just in organic matter. So is an AI generating Van Gogh-esque images on request any more exploitative than a human generating such images on commission?

Obviously, this similarity presents a problem: If this “innovation” can be justified, real human artists will be harmed. There are a few ways to respond. As detailed by TechCrunch, the art-community website DeviantArt has introduced a new option for artists to prevent their work from being used to train AI. I would argue that preventing AI from processing your work makes about as much sense as preventing fellow human artists from viewing it. All that changes is that credit for the “inspiration” doesn't go to a fellow artist — it goes to a programmer. And, of course, the programmer can produce new art much more efficiently.

I believe the appropriate and effective way to respond is to differentiate the values of human-created and AI-created art. But to do this, we need to consider a more fundamental question: Why should we value art in the first place? 

This is obviously a complex issue. Currently, according to The Art Story, the art world is largely dominated by a postmodern conception, in which the value of art lies in the viewer's own subjective experience of it rather than the creator's objective intent. This empowerment of the viewer is meant to be democratizing, and in some ways it is. But when placed in our current socioeconomic context, it promotes the commodification of artworks. The creators are alienated from their creations, and all that remains are investments with mere monetary value.

If we only recognize the end result as valuable, human art as we know it is over, since AI can replicate it through a benign process. But if we also emphasize the intent and context of the art, and how that manifests in works of art, we can establish a key distinction: A human artist has a life of their own from which they can be inspired. Human art is uniquely valuable because we, as viewers, can see our lives from a new perspective: the perspective of the artist. We can learn, from the source, experiences impossible to convey in words alone — or in prompts alone. The human artist can produce new revelations as a result of their life experience.

An AI image generator does not have a life — at least not a life that we, as human viewers, care about. It has only experienced past works, so it can't generate novelty of this kind (at least until AI gets much more advanced). I concede that the prompt of AI art can have intent, but only in the same way that the commission of human art has intent. In its most transactional, capitalistic form, the real artist of a commissioned work is the client; the human “artist” — like the computer — is only a tool.

If we want to save human art, attempting to halt the progress of AI is not the answer. That's a futile exercise; we cannot close Pandora's box. Instead, we have to change how we think about artworks themselves — not as empty commodities, but as the creations of artists embedded with life experiences and intent. Robots can't take that away.