I'm always leery of arguments that equate AI with other disruptive technologies. AI is clearly different. The ultimate aim is to create works using it in order to replace creators.
Ross, this is really quite good. Tech is hard to think about -- thinking it through is a lot of what I'm doing in the company of computer scientists -- anyway, for now, bravo.
> Perhaps my great failing is that I’ve never attempted [to use] ChatGPT, Claude, or DeepSeek.
Well, yes. Leftists should know better than anyone just how little literati consensus can have to do with the facts on the ground. Vibes are no substitute for material analysis, and it's hard - not impossible, but hard - to understand the material implications of a tool you've never used.
Just go take an hour or two and play around with DeepSeek R1. It will cost you nothing and is close enough to the frontier to give you a decent sense of what's already commercially available, here and now.
> What literary revolution has A.I. unleashed? What’s the A.I. aesthetic, the point of view, the lingual innovation?
None; AI writing tops out at half-decent right now. To some extent this is a deliberate choice, as RLHF training is performed based on ratings by random Upwork contractors, who - at the risk of stereotyping - are probably not big fans of James Joyce.
No one is trying all that hard to automate literary fiction, because frankly there's just not any money in it. The money is in menial clerical work (most software jobs included) and that's just what AIs are getting good at.
> Other technologies didn’t need the hard sell—and they didn’t take the agency of the artist away.
Yes, they absolutely did. That was the original Luddite complaint: skilled weavers being reduced to mere stocking-frame operators churning out low-quality lace. And that first "hard sell" took the form of public executions.
Yes. There’s plenty to criticize in this space. We’re talking about a couple thousand nerds in SF, people with social preferences that are, to put it mildly, deeply anti-human, whose explicit goal is to replace life as we know it. They look right into the camera and say this without a hint of irony. Maybe they’re in a death cult or pumping their stock or out of their minds on stimulants. But the fact that this endeavor receives such little scrutiny at the political level is outrageous.
And yet: outright dismissal of the technology itself is, frankly, anti-intellectual. It suggests a basic lack of curiosity. At the bare minimum, it should be self-evidently _interesting_ that machines can now produce human-quality text for all but the highest literary forms using a statistical technique that was invented in 2017. The pace of change is disorienting: five years ago it could hardly complete a sentence; two years ago it could only write novelty Reddit-tier sonnets; now, we've more or less automated genre fiction. It's not unreasonable to extrapolate from this trend and feel like something momentous is happening.
Personally I hope that these things top out soon. Too much human meaning is tied up in our singular ability to make art. But that view feels increasingly like cope, as they say.
Oh, AI definitely has popular use cases, it's just that those use cases are all varying shades of evil, lol. Cheating on your college essays, voice cloning for phone scams, deepfake porn, that sort of thing. (And, as you note, drone warfare and the surveillance state - the domestic uses for this technology are almost entirely irrelevant in the face of those two!)
I would strongly suggest you read the Substack post "The A.I. Bloodhounds Are on My Trail" by Jeff Maurer, who used to be a writer for John Oliver, if you want to get a practical sense of how A.I. could displace professional writers in the immediate future. It's the output of a DeepSeek prompt to write a segment in his specific style, interspersed with his own actual commentary. The post is especially valuable because it's by a non-tech person who, less than a year ago, was very skeptical of any A.I.'s ability to do his job; that is, someone with a perspective more like yours.
I personally found the piece to be, frankly, surreal and disturbing. I do not like this. I doubt that the consequences for mass media and politics will be positive. It reminded me of Obama's 2022 Stanford speech, in which he called for regulation of online speech. I found that scary at the time because it was a call to abrogate the First Amendment from a temperamentally conservative and intelligent politician who was looking down the pike and didn't like what he saw. We're walking over a chasm on a balance beam that seems to be getting increasingly narrow.
Most firms can’t even ship a basic CRUD app—the digital equivalent of a sandwich—without burning six months and $500K on "Agile consultants" who deliver a login page that breaks if you type a semicolon. And CRUD’s a *solved* problem. Give a team that showers regularly and knows what GitHub is, and they’ll crank out a functional app between Zoom calls. Not because they’re brilliant, but because the bar is underground.
With AI, people will blow $3M training a model to predict sales trends, only to learn their “trends” are just seasonal panic about Q4. And when it all implodes, the board will call it a “valuable learning experience” instead of admitting they lit a dumpster fire and called it innovation.
Bottom line: If you can’t build a to-do app without outsourcing to Bangalore, maybe don’t try to reinvent Skynet.
A certain segment of readers will turn to AI when their favorite series comes to an end. “Write me six more Jack Reacher novels.” “Give me a book about what Harry Potter is doing now.” It won’t matter that the results are derivative, because that will be the whole point.
AI is already here and it's totally unregulatable. The genie is out of the bottle. Your argument comes too late. Like any technological innovation, those who don't use it will fall behind. Yes, there will be bad actors too. When in history haven't there been?
Ross, I'm with you on the inadvisability of adopting AI widely, especially in a time of spectacular corruption and instability. Yet I'm not sure you've portrayed the arguments the way AI proponents might. Sure, a lot of creators (including me) have been saying that human beings will look askance at, or simply ignore, any sort of art that hasn't been created by another human being. It's a reasonable prediction, and so far it's been borne out by public sentiment.
But we humans have problems with reality. We love to flirt with all kinds of fantasies about the universe and how it works. Plato understood that we perceived the world through limited senses, our grip on reality as tenuous as dancing shadows on the wall of a cave. The thing is, we're fine with shadows. We listen to stories, read novels, watch movies, believe in the supernatural, and fall for the most implausible conspiracy theories.
Then along comes the latest in a long line of dancing shadows.
Why would we suddenly draw the line at AI? What—it's the last straw? Let's pretend that AI gets smarter, which it will. Its trainers imbue it with enough knowledge, complexity, and creativity that it finally surpasses the abilities of mere mortals. Maybe not by a lot. Just enough to seem deeper and more profound than any human has ever seemed. Would we recoil, or would we begin to worship the exciting new god we found in the machine? Is it too hard to imagine people being enthralled, this time not by shadows, but by a shimmering hall of mirrors?
If we don't acknowledge that a scenario like this is possible, how can we be sure we're seeing the real problem and not simply the shadows of our wishes?
When the knife salesman comes, he focuses on the sharpness, quality, and quantity of the knives, carefully avoiding the real question: why ten knives when one would do? Most of us don't need so many knives.
With AI, the focus is on the abilities, advancements, and reduced cost. But the missing question is: does intelligence matter? As you said, there is no discernible change in GDP. So what if the problem is not how good the AI is (as a user, it definitely works now) but that intelligence doesn't matter that much?
Since we now know that intelligence doesn't matter for making the world better, the justification for the current meritocratic, degree-based order falls apart. Why is the government prioritizing hiring and promoting people with college degrees if intelligence is not so important? Why are Musk and Bezos so rich, since their assertion that their intelligence created wealth is now in question? People talking about the implications of AI haven't yet realized that the failure of intelligence has profound political consequences.
> Few sentient humans want to purchase a book written by ChatBot 3440-121 or visit a museum stocked with paintings produced by ClaudeBot C-3P0.
If they write great books or paint great paintings, I will be happy to enjoy them.