By James Grimmelmann

If you go to the website This Person Does Not Exist and hit reload over and over, you will see an endless procession of humanity: confused kids with soup-bowl haircuts, middle-aged women with well-worn laugh lines, and fit younger guys smiling in the sunshine. But as the name of the site promises, none of these people are real. Every single face is fake. They were all produced by an algorithm that specializes in creating realistic human faces.
This Person Does Not Exist is powered by a particular kind of AI called a “generative AI.” Traditional AIs are powerful, but not particularly creative. The software in a self-driving car can avoid collisions, but not write a novel.
Generative AIs are different. They typically start with a simple input, called a “prompt,” and produce a rich media output. There are image AIs, music AIs, text AIs, programming AIs, and even video AIs. And they are remarkably good. If you feed the prompt “a Grey Heron” into OpenAI’s image-generating DALL-E, it outputs a perfectly convincing picture of a heron.
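To make the prompt-in, media-out pattern concrete, here is a minimal sketch of asking an image model for that heron through the OpenAI Python SDK. The exact method names and parameters vary by SDK version, and the API key setup is assumed.

```python
# A minimal sketch of prompting an image model via the OpenAI Python SDK.
# Parameters are illustrative; the SDK reads OPENAI_API_KEY from the environment.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    prompt="a Grey Heron",
    n=1,
    size="1024x1024",
)
print(response.data[0].url)  # URL of the generated image
```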


The most jaw-dropping generative AI at the moment is probably ChatGPT, also from OpenAI. It can tell jokes, compose short stories, and write rhyming instructions for assembling do-it-yourself furniture. But almost every area of human creativity is now being explored by generative AIs. GitHub Copilot can generate fluent, usable code in dozens of programming languages. Google’s MusicLM can write and perform music from prompts like “slow tempo, bass-and-drums-led reggae song.”
The most successful generative AI models all use variations on the same basic technique, called “deep learning.” This type of model consists of a large network of individual nodes connected to each other, a bit like the neurons in a human brain. The model is “trained” by exposing it to a large number of example inputs, called “training data.” (For DALL-E, for example, each training example consists of an image and a caption describing it.) Each example strengthens the connections between the parts of the network that activated in response to it, much as the connections between neurons strengthen each time you see a familiar face.
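For readers who want to see the mechanics, that “strengthening of connections” is ordinary gradient descent. The toy PyTorch loop below is a generic illustration of the idea, not the architecture of DALL-E or GPT; the network, data, and learning rate are all placeholders.

```python
# Toy illustration of deep-learning training: a tiny network's connection
# weights are nudged after each example so its outputs better match the data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Stand-in "training data": random input/target pairs in place of real examples.
examples = [(torch.randn(16), torch.randn(16)) for _ in range(1000)]

for x, target in examples:
    prediction = model(x)
    loss = loss_fn(prediction, target)  # how far off was the network?
    optimizer.zero_grad()
    loss.backward()   # trace which connections contributed to the error
    optimizer.step()  # strengthen or weaken those connections slightly
```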
With a clever enough training strategy, a model can learn in a way that allows the process to be reversed. ChatGPT learns the patterns of human writing; its model represents the common ways that human-written passages go. This means that if a user prompts ChatGPT by giving it the start of a passage, like “It was a dark and stormy night,” ChatGPT continues with something like “the rain pounded against the windows as lightning lit up the sky. The wind howled through the trees, causing branches to creak and snap. It was a night unlike any other, a night of mystery and uncertainty.”
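In practice, prompting a model like ChatGPT to continue a passage is a single API call. The sketch below assumes the OpenAI Python SDK; the model name is illustrative.

```python
# A sketch of asking a text model to continue a passage via the OpenAI SDK.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Continue this passage: It was a dark and stormy night",
    }],
)
print(completion.choices[0].message.content)
```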
Successful generative AI models are enormous. OpenAI’s GPT-3 model, on which ChatGPT is based, stores what it has learned using 175 billion different data points, called “parameters.” AI models also require immense amounts of training data. The image-generating AI Stable Diffusion was trained on 5.85 billion images scraped from the Internet. Unsurprisingly, training is slow and expensive. The process can take months and cost millions of dollars.
But once a model has been trained, actually running it is cheap and fast. It takes less than a minute to generate an image using the most popular AIs, and the text models can respond in seconds. And while cutting-edge generative AIs like GPT-3 are kept closely guarded by their creators, there are plenty of consumer-grade generative AIs that are widely available, like the buzzy Lensa Magic Avatar. Indeed, plenty of smaller models work perfectly well on a personal computer—or even a phone! Apple, for example, has designed software and hardware to make AI models run quickly on iPhones, supporting apps like the cute and friendly image maker Draw Things.
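As an illustration of how accessible this has become, a few lines of Python with Hugging Face’s diffusers library will run an openly released image model on an ordinary GPU-equipped machine. The model identifier, device choice, and prompt here are assumptions for the sketch.

```python
# A sketch of running an openly released image model on your own hardware,
# using Hugging Face's diffusers library. Model ID and device are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use "mps" on Apple Silicon, or "cpu" without a GPU

image = pipe("an oil painting of a hedgehog taking a selfie").images[0]
image.save("hedgehog.png")
```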
These models massively reduce the time, effort, and cost required to produce content. It becomes trivially easy to have an AI generate text that could pass for something written by a human. The same goes for images—and very soon for audio and video.
What is harder is to use AI to realize a specific creative vision. Anyone can type “cyberpunk lion” into DALL-E or Midjourney, but if you want the lion to be walking down a deserted urban streetscape in the rain at night, you need to tell that to the AI (and yes, it will do it). There is already a large gap between AI amateurs (like myself) and professional prompt engineers who know how to write prompts that coax and tweak the AI into generating exactly what they have in mind. That gap will only grow.
This means that the coming flood of AI-generated content is not just one phenomenon. Different kinds of artists and authors will use generative AI in different ways.
At the high end, many creative professionals will incorporate generative AIs into their creative process. Musicians and 3D artists already use powerful software tools like Logic Pro and Maya. Soon, generative AI will become part of the standard tooling creators are trained on and expected to know.
Indeed, existing tools will simply incorporate AI seamlessly: instead of moving a slider to adjust the color balance of an image manually in Adobe Photoshop, a digital artist will simply click a button to have AI make the image match the color palette of a reference photo—or have it extend a handful of trees in the background of a photo into a lush forest.

While many artists will forswear AI tools (just as many painters today still work on a physical canvas rather than a tablet screen), in commercial settings, generative AI will be standard practice. Major movie and television productions will rely heavily on AI for their visual-effects work. Meanwhile, individuals and small teams will use generative AI to give shoestring productions the look and sound of productions with a thousand times their budget.
In the middle of the market, the constant struggle to capture consumers’ attention will trap creators in an AI-powered arms race, where the goal is simply to crank out as much content as possible. It doesn’t have to be good, just good enough. For a sense of this future, look at the YouTube treadmill, where influencers and streamers upload new videos as fast as they can.

Everyone whose job already involves writing under time pressure will feel the temptation of AI tools—or, if they do not, their bosses will. Advertising copywriters will ask AI to write website copy; journalists will ask AI to fill in the formulaic parts of stories. Romance novelists and travel bloggers will prompt an AI model with a basic plot sketch or a destination, skim the results, and then hit “publish.” And spammers and scammers will do the same, but won’t even bother to slow down to look over the results themselves. Now imagine the same for every creative medium. Right now, humans make replacement-level art. Very soon, AIs will be able to.
This coming firehose of AI-generated content will be shockingly good on the surface, and shockingly bad underneath. The news website CNET has been secretly publishing AI-“written” articles for months, some of which contained serious factual errors. CNET made corrections, but other content farms won’t bother—or the errors will simply go undetected.

But not all generative AI will be directed at the marketplace. People have always made quirky personal art for themselves and their friends; AIs will help them be even quirkier. Parents of toddlers who love dinosaurs will ask ChatGPT for bedtime-story prompts. Couples will compose and perform Valentine’s Day songs for each other—with a little help from AI songwriting and auto-tuning assistants. And fans everywhere will use AI tools to make their fanfic into actual crossover episodes of, say, long-cancelled series starring their favorite long-gone characters.
The copyright law of generative AI is unsettled. Getty Images and a group of artists have sued the creators of Stable Diffusion, whose training data included photos scraped from sites such as Pinterest, Getty, Etsy, Adobe, and DeviantArt. Meanwhile, a group of programmers has sued OpenAI over GitHub Copilot, a generative AI that can write snippets of code and is already in wide use. But these lawsuits—and others that will follow—are likely to take years to resolve.
For the moment, AI companies have some powerful precedents to rely on. In 2014 and 2015, the Second Circuit held that it was fair use for Google to scan millions of books to create its Google Books search engine. Although Google made complete digital copies of books without the authors’ or publishers’ permission, the search engine broke each book up into “discontinuous, tiny fragments” that “communicate[d] little of the sense of the original.” The Fourth Circuit reached a similar result in 2009, allowing a plagiarism-detecting company to build a database of high-school term papers that teachers could check student work against.
There is a strong argument that training an AI model is legal, even when copyrighted works are part of the training data. Like building a search engine or a plagiarism checker, it is a “transformative” use. These algorithms are not “experiencing” the works they train on, reading history books to understand the origins of the civil rights movement or looking at photographs to admire their composition. Instead, they are “analyzing” their training data to draw conclusions about that data—much like the kinds of scientific and educational uses that copyright law has always favored.
But even if training an AI model is fair use, using that model to generate works may not be. (This is the crucial difference between modern generative AIs and these earlier cases—a program that can “write” a term paper is very different from a program that can “read” one.) People have used generative AIs to produce photographs in the style of Annie Leibovitz and verbatim excerpts from published textbooks.
Where do these similarities come from? In some cases, the model has simply memorized some of the training data: give it the right prompt and it regurgitates the original work, word for word or bit for bit. In other cases, the model has learned to recognize a distinctive artistic style; some of those styles are broad categories like “sports photography” but others are highly specific, like Alan Lee’s Lord of the Rings art. And in still other cases, it is the “prompt” that contains the kernel of infringement: give DALL-E an image to start with and it can “outpaint” to extend the scene beyond its original boundaries.
It is unlikely that courts will give these generations a free pass just because they were made with an AI rather than by hand. Of course, many of them will still be fair uses; a kid can put their own face on Iron Man with scissors and glue, with cut and paste in Photoshop, or with a generative AI. But there is likely to be a core of uses that are decidedly not fair, like using an AI to design a line of beer cans in the style of Ralph Steadman that competes with the ones he illustrated for Flying Dog Brewery. And depending on what the Supreme Court does in its pending Warhol v. Goldsmith case (about a Lynn Goldsmith photo of Prince that Andy Warhol used as the starting point for sixteen silkscreen prints), the category of infringing uses may be quite large.
This split—legal to train, but not necessarily to use—may create massive AI risks. For one thing, copyright owners will ask courts to put restrictions on how AI models can be released. It is possible, for example, that a court could order OpenAI to add filters preventing users from generating Dr. Seuss–style rhyming poems.
This is a bit like what happened to Napster. When Napster was ordered to remove the record companies’ songs, it tried, failed, and shut down. Here, no one, not even the technical experts at AI companies, fully understands exactly how their models work, or “where” in a model the training data is represented. Fully effective copyright filtering is currently impossible.
Even if it were possible to filter an AI’s outputs, several powerful models (most prominently Stable Diffusion) have already been released publicly as open source. Anyone can download these models and run them on their own device! Similarly, anyone can tinker with the models to make them more powerful, or disable their safety filters. Just like with file-sharing, the horse is out of the barn. No matter how the models were trained, and no matter what the courts do, copyright owners will not be able to shut down generative AI entirely.
AI companies would prefer that the responsibility for avoiding infringement rest with users. But the black-boxiness of generative AIs means that using one is always a little bit at your own risk. Maybe the oil painting of a hedgehog taking a selfie that I generated using DALL-E is unique, or maybe it’s a near-exact rip-off of someone else’s oil painting of a hedgehog taking a selfie.
Many organizations already have policies on how they use open-source software to avoid legal risks. They will need to develop similar policies on the use of generative AIs, telling employees whether, when, and how they can incorporate AI-produced content into their workflows. And then they will have the extraordinarily difficult challenge of enforcing those policies for employees who have generative AI tools literally at their fingertips and are accustomed to using them constantly in their everyday lives.
Generative AI may not transform human creativity. So far, at least, people are mostly using AI tools to make familiar kinds of art, and just more of it. But AI is likely to permeate the human social world, just as the Internet and social media have done. They are always with us, never far away, and all but impossible to avoid.
*This essay was adapted from James Grimmelmann’s article published in The Ankler on February 7, 2023.
