I had the pleasure of appearing on Alexander Sorondo’s podcast to talk about my new novel, Glass Century. Alexander is an outstanding reader and writer, and we had a wonderful talk on literature, craft, and what it means to be a writer today. He posited that Glass Century might be one of our great Millennial novels, which humbled me quite a bit. I encourage you to pre-order Glass Century now. It is, in the words of Junot Díaz, “a spectacularly moving novel.”
When I think about the artificial intelligence revolution, I keep returning to a simple dichotomy: need vs. want.
There are plenty of people who want A.I. These are the enthusiastic practitioners and those who work in the industry. I am much more sympathetic to the former than to the latter. I thought one of the more reasonable defenses of A.I. came from Nate Silver, who argued, with the American left in mind, that it’s time to engage with A.I. You need to get sophisticated, or the evangelists will run roughshod over you. Personally, I am skeptical of A.I. technologies but am willing, at least, to hear their celebrants make the case. Perhaps my great failing is that I’ve never tried ChatGPT, Claude, or DeepSeek. I’ve never consciously used any A.I. program, though given that I exist on the internet like all of you, I’ve no doubt come across artificial intelligence many times, interacted with chatbots, and had to dodge the sinister Gemini twinkle in my Gmail inbox. My assumption is that artificial intelligence will eventually become the prerogative of global militaries to more efficiently surveil and slaughter human beings. And I imagine A.I. will have uses in the medical field, where it can help doctors diagnose illnesses, analyze data, and make medicine more precise. In medicine, I handwave away nothing, and I certainly welcome advancement. The human lifespan increased dramatically in the twentieth century, and I would love nothing more than to see it shoot dramatically upward as this century wears on. We’ve stalled out, clearly, and it would be good to have more healthy eighty-, ninety-, and hundred-year-olds across the world.
But the rest of it amounts to noise, and its evangelists doth protest too much. They remind me of the sweaty, doughy door-to-door knife salesmen of my college youth, desperately hawking their wares to a somewhat befuddled public. For at least two years, remarkable evolutions and revolutions have been promised—no less than a new plane of human development. Maybe it’s coming this year, or next year, or 2032. Maybe it came and I’m too much of a luddite to notice. There has been, quite literally, almost no discernible impact on the American economy since ChatGPT debuted in 2022. Consider that Alphabet, Amazon, Apple, and Microsoft have budgeted a staggering $400 billion for capital expenses related to A.I. hardware, research, and development. Despite the various declarations about the sheer number of companies deploying artificial intelligence or users flocking to ChatGPT and Claude, adoption has been a relatively slow, niche pursuit. Most people do not pay the $20 a month to access the best version of ChatGPT. Developing the technology is so absurdly expensive that widespread paid usage would not bring any of these companies anywhere close to profitability. This is why OpenAI is mulling $2,000-a-month subscriptions for new large language models. The hope for all of these companies is that profitability eventually arrives, somehow, after they light enough cash on fire—it worked for Amazon, after all—or it won’t matter much anyway because adoption will be so widespread and the technology so inevitable that the laws of the market no longer apply. Perhaps the federal government will subsidize A.I. for good. Of course, all of this technology consumes vast, unfathomable sums of energy, and we live on a rapidly warming and unstable planet. No one much cares, but nature will find a way to punish us.
As a writer, I am quite interested in the artistic implications. Recently, I helped launch a new book review and arts publication, The Metropolitan Review, and our manifesto took a few swipes at A.I. A writer and critic, Henry Oliver, didn’t like this very much, and he is engaging in a debate with our co-founder, Sam Kahn, that will soon be published. Oliver’s case for A.I. is that it will be a “collaborator” and that resisting it will be like trying to repel the television, the radio, or even the printing press. The typewriter and word processor changed how we wrote and wouldn’t an individual look foolish complaining about no longer writing with quill pens? “Literary culture can’t just dismiss AI,” Oliver writes, adding later on that “the problem for modern literature is not that it is beleaguered by conglomerates and tech behemoths. The problem is that it too often refuses to have anything to do with the new sensibility.” I won’t pile on too much here because that’s Kahn’s job, and he will debate these points better than I can.
What I will offer are a few thoughts and pressing questions that Oliver and most A.I. defenders never quite answer. Oliver refers to a “new” sensibility. What is it? What literary revolution has A.I. unleashed? What’s the A.I. aesthetic, the point of view, the linguistic innovation? What wondrous piece of literature or criticism have these programs produced? Well, they’re coming, we are told. Like small, patient children, we must wait. In the next few years, A.I. will write the Great American Novel or Great British Novel. Certainly, I am sure, Claude or ChatGPT is capable of writing a mediocre novel or screenplay right now. The derivative essays and poetry are already here. But neither Oliver nor any A.I. booster, really, ever tells us why we need machines to write novels, plays, and poetry for us. Why do we need an A.I. program to create pictures, compose music, and accomplish graphic design tasks? Were humans failing, until the 2020s, to paint portraits, write fiction, conceive pathbreaking albums, or design hallucinatory graphics? Were we all, as a species, no longer comprehending the concepts of poetry or literary criticism? I am not threatened by A.I., quite frankly, because none of it is very good, but even if it were, would it matter? Artificial intelligence relies on deception. Few sentient humans want to purchase a book written by ChatBot 3440-121 or visit a museum stocked with paintings produced by ClaudeBot C-3P0. When these bots upchuck great literary masterworks, who will promote them? The literary establishment has atrophied enough that actual brilliance is rarely given its due. Will the publishing conglomerates and prize committees now sort through the A.I. novels to find the “good ones” while Barnes & Noble and Waterstones dutifully stock them in the front window? I don’t fear this dystopian future because it’s not coming.
Other technologies didn’t need the hard sell—and they didn’t take the agency of the artist away. A typewriter wasn’t writing the novel. That was up to the human, and his balky fingers. Microsoft Word didn’t make Zadie Smith into a literary star. Television and radio certainly did, in their own ways, marginalize the novel for good, but they are only mediums. Cameras don’t create; people do. Artificial intelligence can create. It does this through the theft of already existing work. It “trains” on our language, and I am waiting for the day when Meta or Microsoft just buys a large publishing company to own its backlist and start beefing up its LLMs with many thousands of books. What great art might be possible once the machines run wild! Until then, the door-to-door knife salesmen will be in our faces, telling us how much we need their product. They don’t care that past technological revolutions filled actual human needs. We had to defecate in outdoor gashes until the invention of indoor plumbing liberated us to live cleaner, longer lives. We had no ability to make magic light from our hands, so we invented electricity to do what we could never do on our own. We don’t have wings to fly—airplanes fly us, instead. Penicillin chased away our diseases. The internet made instantaneous communication possible. The Google search made the instant retrieval of information possible. The smartphone, to our detriment, put supercomputers in our pockets. A.I., in theory, is supposed to outstrip all of these advances. In actuality, to tick off the technological leaps of the twentieth and early twenty-first centuries is to make A.I. seem rather meager. In this context, it definitely is. It is, fully, a want technology, one its evangelists must manifest into success. Many companies have many billions riding on what comes next. They can’t survive a Metaverse-like flop.
For those who merely want artificial intelligence to succeed—who view it as the great leap forward that only a troglodyte would ignore—it will be a rather embarrassing few years if none of this amounts to much. Or, if it does, but in all the ways they find terrifying, like teaching militaries ever more inventive ways to kill soldiers and civilians alike. This is our brave new world. I don’t know what to make of the people in it.