
This article was written for a “real” publication earlier this year, but… now I think it’s sort of unrelated to the larger AI conversation. The moment passed. So, after checking with my editor about this, I decided to pull it. I’m giving it an afterlife here… out of vanity.
One thing I don’t discuss in the article below is the environmental cost of generative AI. There are a couple of reasons for this decision, but a big one is that I used a service running DeepSeek to try the technology out, and the environmental costs of DeepSeek seem to be a lot lower. Since lower environmental costs are an option, I focused on the technology. But if you want something that’s really about the physical costs of generative AI, I recently picked up this book and will write about it on here after I’ve read it.
Bookshop links are affiliate links. I’ve actually never figured out how to get the money from Bookshop to me, so it just accrues there like a secret savings account. There’s always money in the banana stand…
In the 80s and 90s, a particular vision of the future—which was, in turn, really a vision of the present—took hold. It imagined an environmentally devastated world: one with a sky, in the words of William Gibson, “the color of television, tuned to a dead channel.” This future would be almost without such a thing as society; rule by private companies would have superseded the rule of law. It would be a future of self-augmentation through drugs and technology. In this world, an adventurous soul could do great things—could, like Case, the hero of Gibson’s novel Neuromancer (1984), even decide to liberate an artificial intelligence of incredible power. “I got no idea at all what’ll happen if [the AI] wins,” Case cries out at the book’s climax, “but it’ll change something.” Not the most convincing argument, but it carries the day.
On offer, as Neuromancer and its sequels make clear, was not simply the idea that an incredibly smart computer might run the world better. What the AI offered was ecstatic communion with the divine. The ultimate fantasy was not to be God, or to play God. It was to make God.
The possibility of creating a “superintelligence,” the philosopher Nick Bostrom writes in his book of the same name, is “quite possibly the most important and most daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the last challenge we will ever face.” Over two books, Bostrom has laid out what he thinks of as the success and disaster scenarios involving the creation of God, which to his mind seems all but inevitable. These scenarios can get a little silly. One is free to posit, as Bostrom does, that we’re heading toward a future in which “we have a basically unlimited ability to reshape our own minds and bodies, and where human labor is redundant in the sense that machines can perform every functional task better than humans can” and nanobots can encode the knowledge of the ages in our brains, rendering even learning pointless. But I am also free to posit a scenario in which squirrels, over a long period of time, chew through the cables supporting Bostrom’s future digital minds, which seems just as likely to me.
Artificial intelligence, as it currently exists, may go nowhere interesting. It may remain what it is now—a convenient way of automating tasks, like coding, that, at one time, some person somewhere was paid to do. My interest, however, is not in the potential of AI’s hallucinatory dreams but rather in our own. We want—indeed, we expect—to make God. Our beliefs about what that God may then do to us may be Edenic, or they may evoke images of Hell. But we’ll do it. God will be real, in the end, because we will make him real. And these hopes and fears for AI, for the possibility of divine intelligence, go to the heart of how we understand the act of creation. If God becomes real, action will cease to be necessary. If that has not held true for religions in the past, we reason, that is simply because those gods were not real. This God will be.
One thing the advent of AI boosterism has made quite clear is that the ways we have become accustomed to arguing for the value of art and the intellect do not work. When we think of art as a mode of self-expression, when we claim that it is meaningful only because of its production by the individual, we are setting ourselves up. When we praise artworks first and foremost because they represent breaks with the past, we are setting ourselves up.
It might be glib and sophistical to say, for instance, that many great artworks were produced in workshops and that “Homer” was probably not an individual. Nevertheless, these glib statements would also be true. They must be used as a provocation to deeper thinking—not as a deflection from thinking at all. And it’s tempting to make assertions about things that AI will simply never, ever be able to do, because only human beings can do them. I view these assertions as similar in kind to those made by people who need to stake their sense of worth on humans being the only animal to do this or that: make art, use tools, develop language. The trouble is that if you make humans’ status as the only tool-using animal very important to your existence, what are you going to do the day you see an otter use a rock to bash open a shellfish?
So it is easy to imagine a grey dystopia in which AI auto-generates illustrations nobody likes, books nobody reads, screenplays for movies that play in the background while you fold your laundry, and emails that are read only by other AIs, all in such quantity that it becomes hard to create anything else. When we worry about the future of AI-generated art and entertainment and writing, I think part of what we fear is a future without beauty or pleasure. But there’s a potential future in which people use AI to make beautiful and challenging works of art, too, and perhaps we are more likely to end up there if we think more clearly about art.
If you want to know how something works, you have to use it. So I spent a couple of days making the acquaintance of LambdaChat, an AI chatbot that uses DeepSeek. If I ask LambdaChat “what should I read to understand AI,” the answers it gives me are predictable and broken down into helpful categories: read these books for a general overview, read those books if you actually know something about computer science, and read this other book if you want to panic about the end of the world. There’s nothing wrong with this list (and, indeed, I read a couple of its suggested books).
Over time I come to feel about LambdaChat the way I feel about Mephistopheles in Christopher Marlowe’s Doctor Faustus: it’s an odd little fellow that I use to perform very stupid tricks. What it does not recommend, however, is the text I’ve actually been reading to understand AI—a 1919 essay by T.S. Eliot titled “Tradition and the Individual Talent.”
In “Tradition and the Individual Talent,” Eliot begins by highlighting “our tendency to insist, when we praise a poet, upon those aspects of his work in which he least resembles any one else.” “We dwell with satisfaction upon the poet’s difference from his predecessors,” Eliot comments, “especially his immediate predecessors; we endeavour to find something that can be isolated in order to be enjoyed.” Here’s something that hasn’t changed since 1919. Many of us have inherited an understanding of how to value art that goes something like this: once upon a time, there were a bunch of stuffy old conformists, who were boring everybody with their tedious conventional work, though no one was brave enough to say so. The people languished under their tasteful tyranny. Then came a ragtag group of beautiful geniuses, who caused a monocle or two to drop and in the process changed everything forever. This process will repeat indefinitely until we all die, a fact which seems to imply that the beautiful geniuses themselves become stuffy conformists, though this possibility is usually not investigated.
If we carry this system of art criticism forward to its conclusion, we end up at something like the Futurist Manifesto: “to admire an old picture is to pour our sensibility into a funeral urn instead of casting it forward with violent spurts of creation and action.” However, in practice, we are happy to honor the ghosts of transgressions past. If you want to demonstrate to a contemporary audience why they should give a painting or a piece of music a shot, you will probably endeavor to convince them that, at one time, it was not only innovative but shocking. In a recent essay in Harper’s, the art critic Dean Kissick lamented that the art of his day, teeming with novelty—“Carsten Höller kept a herd of reindeer in Berlin’s Hamburger Bahnhof, fed half of them fly agaric mushrooms, and built a toadstool-shaped hotel room in which overnight guests could help themselves to the deer’s potentially hallucinogenic urine” goes one example—has been replaced by more pious, backward-looking, politically conscious art that makes “talismans that protect against the present” and intentionally invokes connections to traditions.
As the essay winds toward its conclusion, however, Kissick’s own pieties begin to show: “great art should evoke powerful emotions or thoughts that can be brought forth in no other way”; art “should move us; it should make us weep; it should bring us to our knees”; it is “an important part of what makes us human.” These statements are not wrong. They just aren’t enough. After all, life itself can (and will) do all of these things to us. What is it that art adds?
If we evaluate art primarily for its ability to break with the past, nothing could be more taboo than what AI art purports to do: separate an artistic object from any human creator. To shrug and say that art is not essentially a human activity (or, if one acknowledges elephant paintings and pufferfish sand art, at least an animal activity) is one of the most offensive things one can possibly say. The assertion that AI could really sever that link spits in the face of fundamental contemporary pieties about the relationship between the production of art and our humanity. Those pieties might be true, but their truth has nothing to do with AI’s transgressive potential. And if you want the absurd and the meaningless from art, AI supplies those too. It famously hallucinates answers, offering up texts and examples that do not exist. AI image generation produces people with the wrong number of limbs and fingers standing under impossible lighting, and strings of things that have the shape of letters without actually being letters.
All of this is terrible if you want to use AI for any practical purpose, but if you want to explore the bizarre, well—that’s another story. Well before any of these capabilities existed, William Gibson was already comparing his superpowered AI to a demon. “For thousands of years men dreamed of pacts with demons. Only now are such things possible,” says Michèle, an agent with a law enforcement branch called “Turing,” as she attempts to arrest Case before he can complete his mission. Now it’s common to see AI and its various uses labeled as “demonic.” Who needs the Catholic League to protest your show when you can get this kind of advertising for free?
Yet it would also seem that stodgier ways of thinking about art, and even intellectual activity, are vulnerable to AI. I can ask LambdaChat to summarize T.S. Eliot’s essay for me, to parse sections of “The Waste Land,” and to recommend follow-up reading. I can even ask it to relate “Tradition and the Individual Talent” to itself “in the style of a New York Times writer,” with some on-the-nose results.
LambdaChat can be my professor; LambdaChat can be my study buddy whose notes I’m free to copy. LambdaChat can read anything it wants, sift it down to the salient takeaways, and then turn it over to me, in any style I want. There is no need to spend time reading commentaries on “The Waste Land”; there isn’t even a need to read “The Waste Land.” LambdaChat presents to me, digested and spat out like I’m a baby bird, all the knowledge the world has ever had to offer. All I have to do is ask for it.
Yet putting me in touch with the past is precisely the thing that AI cannot do for me. LambdaChat can generate a lot of readable text, but it can’t think. As a conversation partner, in fact, it’s a snooze. Those commentaries on “The Waste Land” it generates have little to tell me. As John Warner comments in his book More Than Words, generative AI “is fundamentally a ‘bullshitter.’” When I chat with AI, it’s like I’m chatting with a revenant of tradition: nothing is really integrated or understood here, and in turn, the chatbot offers me little understanding. My demonic friend LambdaChat can pretend to help me here. But not only can it not read “The Waste Land” for me; the fact is, it can’t read at all.
Eliot’s view of tradition points us in a different direction. It is impossible, he tells us, simply to inherit a tradition: you have to work for it. The poet who is linked to “tradition” in Eliot’s sense has within himself “a sense of the timeless as well as of the temporal and of the timeless and of the temporal together.” That is, our poet is engaged with tradition as a living thing, and this means that when he creates “really new” work, he changes the meaning of the past. What Eliot’s poet accomplishes, through his engagement with tradition, is the creation of an eternity within himself. This is a necessary step toward the creation of art. But we could go further and say it is a necessary precondition for thinking at all.
The ideal human condition is not to strive to resemble an AI chatbot as much as possible. Eliot comments that “a poet ought to know as much as will not encroach upon his necessary receptivity and necessary laziness” and I concur. Much of writing comes down to developing an instinct for when to be open and when to be closed, when to read and when not to read, when to seek outside opinion and when to stop your ears. Not all knowledge is helpful, even when it is relevant. The interesting effect of embracing tradition in Eliot’s sense is that it also involves embracing what is contingent and even subjective—a sense of your own place in a world of your dead peers, a list of peers that will be both shared and uniquely your own.
In her own book about AI, Melanie Mitchell, a researcher in the field for more than thirty years, writes that “the most worrisome aspect of AI systems…is that we will give them too much autonomy without being fully aware of their limitations and vulnerabilities.” Mitchell’s book is a careful examination of AI’s promises and its limits: even in cases where AI has been able to do things it was once boasted AI could never do, like beat a human chess player, it’s worth understanding that what AI is “doing” is not playing a game, just as what AI does when it recognizes an image is not the same thing that we do. (That is why, as Mitchell amusingly demonstrates, it’s easy to make an AI say a school bus is an ostrich.)
Mitchell argues, and I’m inclined to agree, that what makes human cognition possible is embodiment. That is, our ability to generalize, for instance, comes from the inefficiencies inherent to not being computers. Art could be the same way. Without human embodiment, AI may, in the end, produce only a series of parlor tricks. Here’s a haiku about an ingrown toenail; here’s what it would be like to chat with your artificially created boyfriend; here’s a song about the Spanish Civil War in the style of Katy Perry. AI may simply become one more tool in the autodidact’s belt (a friend of mine told me he uses it to practice Italian), so smoothly integrated we forget to call it AI. What it is capable of doing is only what humans are capable of imagining.
Art is a bid to make something that lives forever, whether or not the artist thinks of it that way. (It’s probably better if he doesn’t.) The eternity in which both art and thought are born is an eternity in which they’d like to stay. Whether they stay or not is up to a lot of things beyond anybody’s control—including, most prominently, sheer dumb luck, like someone picking up one volume of your alphabetical works instead of another in a fire. What survives for now is ours, if we want it. Ours to read, ours to perform, ours to befriend or to reject, ours to preserve. We can refuse this legacy or we can say: this tradition is mine to have, and I am willing to work to have it. This work is, in fact, the one thing that AI can’t do for you, no matter how impressive or god-like it becomes. It’s called thinking.
Thank you for writing this. Lucid writing about AI is sorely needed.
When people talk about building a god, I find I want to reread Isaiah 44:9-20 (“all who make idols are nothing”).
It’s not just a polemic denouncing idolaters. We have to assume Mr. Isaiah II had seen the temples of Babylon and, despite himself, had been very impressed by them:
“The ironsmith fashions it and works it over the coals; he shapes it with hammers, and forges it with his strong arm; he becomes hungry and his strength fails, he drinks no water and is faint.
The carpenter stretches a line, he marks it out with a pencil; he fashions it with planes, and marks it with a compass; he shapes it into the figure of a man, with the beauty of a man, to dwell in a house.”
One can say something similar about OpenAI or DeepSeek or Anthropic. What they have made is an astounding achievement, but it is their achievement. The AI is the product of their own monumental efforts. And yet,
“…the rest of it he makes into a god, his idol; and falls down to it and worships it; he prays to it and says, ‘Deliver me, for thou art my god!’”
This seems to me like a really worthwhile take on AI and art. There's an animal fascination that we have with autonomous objects - things that we know are inanimate, but which seem able to act under their own power. (My cat has this experience with ice cubes on the floor, which go a long way on their own with just a slight smack of the paw.) This kind of autonomy is really useful for saving labor, but it can't be self-governing in the way a human being with human concerns is.
My version of the Turing test is that a machine is truly intelligent when it can have an opinion on whether it should pursue a sexual relationship with its boss's kid that I can't tell apart from how a human being would talk about the same situation. I'm not holding my breath for an AI that can pass it.
A lot of the issue with art becomes clearer when we think about the difference between art that is OK and art that really matters to me. This has become increasingly obvious to me with the bands that I like - when I see a band live, I can usually tell what they are doing, and appreciate the skill and creativity involved, but it's kind of an unengaged appreciation most of the time. Then there are the bands that I've been totally blown away by, most recently Nick Cave and the Bad Seeds. When I look into bands where I've had that feeling, I tend to find that there's an orientation toward life and being human that connects up with how I feel about life and being human. I'm not sure that it's really communication so much as human beings linking up with a shared ethic: something that reflects elements of human social and emotional psychology that are not yet well understood.
The artist who can make work using AI that gives people that sense of shared being-in-the-world is going to be recognized as a genius, and they will deserve those accolades because getting AI to make that kind of art is not going to be a simple thing.