A Bullish View of AI That’s Not So Bullish
It's likely to be great for the economy, but not for most of the people in it.

Most of us see the rapid increase in the capabilities of generative AI as displacing knowledge workers, rendering most of them economically superfluous and, thus, destitute. An Economist feature asks, “What if AI made the world’s economic growth explode?”
The setup:
UNTIL 1700 the world economy did not really grow—it just stagnated. Over the previous 17 centuries global output had expanded by 0.1% a year on average, a rate at which it takes nearly a millennium for production to double. Then spinning jennies started whirring and steam engines began to puff. Global growth quintupled to 0.5% a year between 1700 and 1820. By the end of the 19th century it had reached 1.9%. In the 20th century it averaged 2.8%, a rate at which production doubles every 25 years. Growth has not just become the norm; it has accelerated.
If the evangelists of Silicon Valley are to be believed, this bang is about to get bigger. They maintain that artificial general intelligence (AGI), capable of outperforming most people at most desk jobs, will soon lift annual GDP growth to 20-30% a year, or more. That may sound preposterous, but for most of human history, they point out, so was the idea that the economy would grow at all.
The likelihood that AI may soon make lots of workers redundant is well known. What is much less discussed is the hope that AI can set the world on a path of explosive growth. That would have profound consequences. Markets not just for labour, but also for goods, services and financial assets would be upended. Economists have been trying to think through how AGI could reshape the world. The picture that is emerging is perhaps counterintuitive and certainly mind-boggling.
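As an aside, the doubling times in that opening paragraph check out. Here is a quick sanity check using the standard compound-growth arithmetic; the calculation is my own back-of-envelope, not anything from the piece.

```python
import math

def doubling_time(rate):
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

print(f"0.1% a year -> doubles in ~{doubling_time(0.001):.0f} years")  # ~693: 'nearly a millennium'
print(f"2.8% a year -> doubles in ~{doubling_time(0.028):.0f} years")  # ~25 years
```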
My initial instinct is that, while this would be great for “the economy,” it doesn’t really help tens of millions of people who are displaced from relatively pleasant, well-paying jobs. But, of course, new technologies have been displacing occupations for centuries.
Economies originally grew largely through the accumulation of people. Bigger harvests allowed more mouths to be fed; more farmers allowed for bigger harvests. But this form of growth did not raise living standards. Worse, famine was a constant menace. Thomas Malthus, an 18th-century economist, reasoned that population growth would inevitably outstrip agricultural yields, causing poverty. In fact, the reverse occurred: more people did not just eat more, but had more ideas, as well. Those ideas led both to higher output and, eventually, to lower fertility, which set output per person climbing. AGI, the theory runs, would allow for runaway innovation without any increase in population, supercharging growth in GDP per person.
Most economists agree that AI has the potential to raise productivity and thus boost GDP growth. The burning question is, how much? Some predict only a marginal change. Daron Acemoglu of the Massachusetts Institute of Technology, for instance, estimates that AI will lift global GDP by no more than 1-2% in total over a decade. But this conclusion hinges on an assumption that only about 5% of tasks can be performed more cheaply by AI than by workers. That assumption, in turn, rests in part on research conducted in 2023, when AI was less capable.
More radical projections of AI’s economic impact assume that much more of the world’s economic output will eventually be automated as the technology improves and AGI is attained. Automating production then requires only sufficient energy and infrastructure—things that more investment can produce. Usually, investment-led growth is thought to hit diminishing returns. If you add machines but not workers, capital lies idle. But if machines get sufficiently good at replacing people, the only constraint on the accumulation of capital is capital itself. And adding AI power is much faster than waiting for the population to expand, argues Anson Ho of Epoch AI, a think-tank.
This will make a lot of people very rich. But, again, it’s not obvious how the rest of us benefit.
This, too, seems less than exciting from a knowledge worker perspective:
Truly explosive growth requires AI to substitute for labour in the hardest task of all: making technology better. Will it be AI that delivers breakthroughs in biotechnology, green energy—and AI itself? AGI agents will, it is hoped, be able to execute complex, long-running tasks while interacting with computer interfaces. They will not just answer questions, but run projects. The AI Futures Project, a research group, forecasts that by the end of 2027, almost fully automated AI labs will be conducting scientific research. Sam Altman, the boss of OpenAI, has predicted that AI systems will probably start producing “novel insights” next year.
Economists who study “endogenous” growth theory, which attempts to model the progress of technology, have long posited that if ideas beget more ideas with sufficient velocity, growth should increase without limit. Capital does not just accumulate; it becomes more useful. Progress is multiplicative. Humans have never crossed this threshold. In fact, some economists have suggested that ideas have become harder, not easier, to find over time. Human researchers must, for instance, master ever more material to reach the frontier of knowledge.
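For the curious, the mechanism being gestured at here is usually written as an idea-production function. This is a standard endogenous-growth textbook sketch, not something from the article itself:

$$\dot{A} = \delta \, L_A^{\lambda} A^{\phi},$$

where $A$ is the stock of ideas and $L_A$ is the research input. With $\phi < 1$, ideas get harder to find and growth fizzles unless the research input keeps expanding; if the feedback is strong enough ($\phi$ at or above one, with research input that can itself be accumulated like capital, which is the AGI scenario), growth accelerates without bound. That is the “explosive” case.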
Again, this sounds like it would be great for society, but bad for those who make a living doing science.
What would all this mean for workers? Humanity’s first growth surge was not especially generous to them. An English construction worker in 1800 earned the same real wages as one in 1230, according to Greg Clark of the University of Southern Denmark. The growing number of mouths to feed in effect nullified all the increase in output. Some historians argue that over the following 50 years or so, workers’ living standards outright declined.
What’s 50 years, right? As Keynes said, in the long run we’re all dead.
This time the worry is that workers become redundant. The price of running an AGI would place an upper bound on wages, since nobody would employ a worker if an AI could do the job for less. The bound would fall over time as technology improved. Assuming AI becomes sufficiently cheap and capable, people’s only source of remuneration will be as rentiers—owners of capital. Mr Nordhaus and others have shown how, when labour and capital become sufficiently substitutable and capital accumulates, all income eventually accrues to the owners of capital. Hence the belief in Silicon Valley: you had better be rich when the explosion occurs.
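The Nordhaus mechanism is easy to see with a toy calculation. Below is a minimal sketch assuming a CES production function with an elasticity of substitution above one; the parameter values are mine, purely for illustration, not from the article.

```python
# Toy CES example: with elasticity of substitution sigma > 1, labour's
# share of income shrinks toward zero as capital accumulates.
# All parameter values are illustrative assumptions.

def labor_share(K, L=1.0, alpha=0.5, sigma=2.0):
    """Labour share under CES output Y = [a*K^r + (1-a)*L^r]^(1/r), r = (sigma-1)/sigma."""
    r = (sigma - 1) / sigma
    return (1 - alpha) * L**r / (alpha * K**r + (1 - alpha) * L**r)

for K in [1, 10, 100, 1_000, 10_000]:
    print(f"capital = {K:>6} -> labour share = {labor_share(K):.1%}")
```

In this toy run, labour’s share falls from 50% toward 1% as capital piles up; with an elasticity below one, the share would rise instead.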
So, again, this isn’t making me feel better.
A booming but workerless economy may be humanity’s ultimate destination. But, argues Tyler Cowen of George Mason University, an economist who is largely bullish about AI, change will be slower than the underlying technology permits. “There’s a lot of factors of production…the stronger the AI is, the more the weaknesses of the other factors bind you,” he says. “It could be energy; it could be human stupidity; it could be regulation; it could be data constraints; it could just be institutional sluggishness.” Another possibility is that even a superintelligence would run out of ideas. “AI may resolve a problem with the fishermen, but it wouldn’t change what is in the pond,” wrote Philippe Aghion of LSE and others in a working paper in 2017.
So, maybe the boom will hold off long enough for me to retire. Great! But not so great for my kids.
Indeed:
Averages conceal variation. Explosive wages for superstars would not console those with more mundane desk-jobs, who would have to fall back on the parts of the economy that had not been animated. Suppose, despite AGI, that technological progress in robotics were halting. There would then be plenty of physical work requiring humans, from plumbing to coaching sports. These bits of the economy, like today’s labour-intensive industries, would probably be affected by “Baumol’s cost disease” (a wonderful affliction for workers) in which wages would grow despite a lack of gains in productivity.
In the classic case, named after an economist called William Baumol, wages grow to stop workers switching to industries in which productivity is surging. That would not apply with AGI, but other factors might produce Baumol-like effects. AI-owners and elite workers might spend a good deal of their new fortunes on labour-intensive services, for example. Think of today’s wealthy, who shell out on lots of things that are hard to automate, from meals in restaurants to nannies. It is an optimistic vision: even those who are not superstars still benefit.
This is great if your ambition is to prepare meals and provide childcare for the rich. Otherwise, not so much.
The non-rich would enjoy only selective abundance, however. Their purchasing power over anything that AI could produce or improve would soar. Manufactured goods made in AI-run factories could be close to free; riveting digital entertainment might cost almost nothing; food prices, if AI worked out how to increase agricultural yields, could collapse. But the price of anything still labour-intensive—child care, say, or eating out—would need to rise in line with wages. Anyone who switched from today’s knowledge work to a labour-intensive alternative might find that they could afford less of those bottle-necked goods and services than they can today.
You think?
There’s more to the piece, but it doesn’t get more hopeful.

What I see is that LLMs can bullshit increasingly convincingly.
I don’t see them making more sense.
It wouldn’t surprise me if generative AI ends up as a dead-end technology. A bit like blockchain, which is certainly an interesting idea, but only useful for running pyramid schemes and buying illicit drugs online.
The assumption regarding tech innovation is that it will create more, better jobs than those it destroys. Generally, that has been true, and it may be with regard to AI, but maybe not.
Some in AI development and Silicon Valley more broadly have expressed concerns about the job losses, and some are advocating various levels of population support, e.g., a guaranteed income. While that could happen in some cultures, it won’t here.
Occasionally I have lunch at a local sandwich shop that happens to be close to the town’s high school, so frequently the place is filled with kids having lunch. I do wonder about their future.
Thanks, James, for making my morning bright. Now that I have young grandchildren I think about this a lot. What is their future? What can I do now to help them succeed in the future? Is there wisdom in this country or world that can navigate us into this future which I now see as dystopian?
For example, I happen to believe that if the future goes as some are predicting, some kind of Universal Basic Income will be required. Yet our culture does not yet accept that as a concept. What do we do now to acculturate us to that future?
I also think that in addition to reading and listening to economists we need to read and listen to artists, particularly writers, who imagine the future. One book that I read that has stuck in my head is “The Resisters” by Gish Jen. A quick summary from Goodreads that really doesn’t do the book justice:
Will AI lead to a Butlerian Jihad a la Dune? Or an abundant Star Trek future? A two-episode DS9 story, “Past Tense,” depicted a dystopian San Francisco whose uprising eventually led to that sunny Star Trek future.
Star Trek is the exception. Most future stories are dystopian.
AI and automation call into question the idea of ownership.
For example, if a chatbot composes a work of literature, who owns the intellectual work, and why?
If an algorithm can manage a factory, and accurately predict consumer demand better than a human, what value does an entrepreneur bring to the operation? Could government simply provide funding and overall guidance to a factory and give the resulting output away at cost?
That is, can AI solve the Calculation Problem?
From the article,
Cowen himself quoted that yesterday at Marginal Revolution. Commenters were quick to point out the contradiction in “booming but workerless economy”; many were so bold as to suggest UBI. This is the basic reason electing a businessman prez is bad. They see workers and customers as two different things, and the goal is to keep one’s prices high and wages low. But your workers are my customers. Cowen, like so many conservatives, is both an AI booster and a pro-natalist. He doesn’t seem to sense any contradiction. Where will the workers go? As with Gazans, the people driving this don’t see that as their problem.
I did see a reference years ago to a paper that claimed it takes 20 years for any new technology to, in the modern phrase, “scale”. Electric induction motors took about 20 years to become commonly used. Even with a war to accelerate development, it was more or less 20 years from the Wright Flyer to commercial usage. AI doesn’t look to be much different. Cowen seems to see regulation as an unmitigated evil. Electric motors weren’t accepted until UL started approving equipment and the number of fires went down.
I read Marginal Revolution most mornings. I also read Brad DeLong’s Substack. Cowen collects lots of bright shiny new things. DeLong does one every now and again. But DeLong picks something actually important and explores it. DeLong is the guy who described LLMs as Mynah birds with huge memories. I dread the day the internet is full of AI output and the LLMs are “learning” by scraping up each other’s bullshit. The good news is that AI isn’t really intelligent. The bad news is most jobs don’t require all that much intelligence.
As to what happened to growth, I highly recommend DeLong’s Slouching Towards Utopia. It’s long (he says his editor made him cut it from 800 pages to 600) but very readable. He talks of the “long twentieth century” of high growth, from about 1870, when we started to break Malthus, to 2010, when the ’08 financial crisis ended it.
When I glanced at the above quote I saw, “But, argues Tyler Cowen of George Mason University, an economist who is largely bullshit about AI”. An entirely reasonable statement.
@Sleeping Dog:
The poor will have something the rich want, eventually need, and would pay for: blood, tissues, and organs.
@drj:
Several of them, especially in their paid versions, are considerably better than that.
I’ve mostly used the free version of ChatGPT. But even that can already:
All in a matter of seconds.
There’s a platform that will take those readings and generate a podcast of whatever length one specifies, in which two artificial hosts will have a discussion of those readings, using the issues for discussion specified on the lesson card.
There’s a platform (probably more) that will create an increasingly-realistic avatar of, in one instance I’ve seen, a German field marshal explaining his strategy in a given campaign and then breaking down the ensuing events.
All of this, at least now, requires decent prompts that curate the readings. If you give it open prompts that rely on the open Internet then, yes, you’ll get bullshit. Because most of the Web is bullshit. But I suspect the programming will get better quickly given the amount of competition in the space.
@Scott:
There’s no drama in a utopia. You can’t really write a novel with the premise that everything is fine for everyone.
If I were writing a human vs. AI story, I’d have domestic terrorists, er, freedom fighters, taking out power stations. Fly drones into power lines, use drones to drop explosives on power stations, start a campaign of assassination against the billionaire class, infiltrate AI-related businesses. Right now, today, with no new technology, one man with a drone could drop incendiaries or environmental poisons on, let’s say, Jeff Bezos’s yacht. Or fly drones into Elon’s rockets.
The piece is all nonsense, because cost does not set price. We learn this in Econ 101: demand sets price. Cost is a factor, but making something more cheaply does not necessarily result in it selling for less.
For instance, let’s take software. Development costs are high. Quite high. But the marginal cost of one more copy is almost nothing. Does it therefore sell for nothing? No, it sells for quite a bit. Margins at software companies are known to be very high.
Pharmaceuticals have a profile that’s not quite so dramatic, but with a similar structure. How much cheaper do they get when the patent ends and competition kicks in?
But if the AI is locked down (due to startup costs) and only the few can use it, it’s like having a patent that never expires.
I mean, yeah, business types are salivating over it, since AI is a built-in moat.
More productivity to what end? There’s no point producing if there’s no one consuming, and if no one has a job then no one is consuming. Unless you have UBI. But that’s certainly not the plan that the Trump billionaire class has in mind.
Something interesting I’ve been watching, though this is purely anecdotal and I haven’t been conducting a study: as AI crap has flooded YouTube, I’ve noticed more human creators becoming more visible in thumbnails and more present in the material. Purina can become ever more productive, but the dog still needs to eat the dog food.
AI seems only useful for busy work and trivial tasks. The Economist’s view of what people do is like what Elon Musk thought government does, until he encountered it. I think AI could replace a bunch of recent MBA grads working at McKinsey or somewhere, or a development team at a tech slop factory putting out SaaS updates. Or maybe pundits: a bunch of Yglesias-types can start churning out think pieces on why not dealing with climate change is the best solution for climate change. But for something actually worthwhile, I doubt it.
And it’s quite possible that nothing happens with AI except the usual bubble being inflated and then bursting. It doesn’t take a genius to see that the industrial revolution’s offerings to people have dwindled. My grandfather grew up in a farm town with no electricity. When he died, he had eaten frozen food, taken antibiotics, watched color television, and sat in a window seat on a cross-country flight. I was born in 1976. What has happened to me? TV was once a TV; now it’s on my screen. Used to buy socks at a K-Mart; now we order them from Amazon. Went to the cool record store to be condescended to because I liked Pearl Jam. Now there aren’t even albums, or ways to pick up what’s cool or not. The one big advance (personal computers and the Internet, built by government R&D) has stalled into mindless nothingness.
@Michael Reynolds:
It’s a fair question. Historically, technology has always been intended to reduce the need for human labor.
In the short run, AI is helping white collar types do their work faster. My wife has managed to speed up the redundant, boring parts of her job with various LLMs, freeing up time to do work that requires creative brainpower. Eventually, though, this will lead to fewer and fewer workers.
And, yes, I’m still unclear how it is that people are going to earn the money to buy the stuff that non-humans are making and selling more cheaply.
If we assume the case that AI takes over 20%-30% of jobs, and does it rapidly, what are the odds that we quickly create new kinds of work for those people? It took a couple hundred years to shift out of agriculture. It took 50-60 years to shift from manufacturing (never as high a percentage of employment as many think, at about 22%) to knowledge work. Henry Ford supposedly priced his cars so that his workers could afford the cars he built. What happens if a bunch are jobless?
My personal take is that the ultra-fast take-up is optimistic. I think it will take a while to figure out how to use AI effectively. So I expect a slower takeoff, but once there is a lot of real-world experience, it will then grow rapidly.
Steve
Yesterday I was thinking about self-replicating probes. I asked Copilot to assume we can launch one that would travel at 10% of light speed to Alpha Centauri, where it can make and launch two identical probes to other stars, then both do the same, and so on. How long would it take to cover the galaxy with probes?
The answer was under 2,000 years.
I don’t think this is right, even assuming negligible time required for replication and acceleration. For one thing, the galaxy is roughly 100,000 light-years across. At 10% of c, it would take 10,000 years just to cover 1,000 light-years.
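A quick back-of-envelope check bears that out, assuming a disc roughly 100,000 light-years across and ignoring replication and acceleration time entirely:

```python
# Minimum time for probes at 0.1c to span the galaxy, ignoring replication time.
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way's disc (assumed)
PROBE_SPEED_C = 0.10           # probe speed as a fraction of light speed
ALPHA_CENTAURI_LY = 4.37       # distance of the very first hop

print(f"First hop alone: ~{ALPHA_CENTAURI_LY / PROBE_SPEED_C:.0f} years")
print(f"Spanning the galaxy: ~{GALAXY_DIAMETER_LY / PROBE_SPEED_C:,.0f} years")
# -> ~44 years for the first hop, ~1,000,000 years to span the disc;
#    "under 2,000 years" isn't even close.
```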
This is the kind of problem a science fiction writer could use instant AI help with.
I’m less impressed by its summarizing abilities past short chunks of text.
@James Joyner:
One of the things we’ve seen with the past century of automation is that as the cost of production goes down, the demand for product goes up.
I don’t mean just the quantity of product, but the quality.
For example, the past century of automation could have simply been put to use creating cheaper and cheaper Model Ts.
But that didn’t happen because there is no upper limit to human desire.
If you have a car that goes 45 mph, you will want one that goes 100 mph; one that has air conditioning, power steering, onboard navigation, heated seats, etc.
AI is a complete misnomer. It doesn’t really have any innate intelligence, but relies very heavily on the little man behind the curtain who is doing the prompts: intelligent prompts give better output than stupid prompts.
Am I the only person skeptical of these sorts of definitive claims? More like, doubtful.
Given the evident incompetence of current “AI” models at generating consistent internal rules based on their “knowledge,” they seem to have definite limits in their use-cases.
See, for example, their lack of any “truth sense” or internal evaluation heuristic for external data, their tendency toward “hallucination” and inconsistency, and their failures at complex games.
That’s not to say they won’t have massive impact on data acquisition and processing.
And on routine “mental” tasks where there are massive prior human information bases: basic programming tasks, for example. Or perhaps legal case research.
But for more fundamental effects, I suspect a new type of model may be required, one capable of actually generating and applying generally consistent rules.
An actual aptitude for “reasoning”, not just being a “stochastic parrot”.
How close we are to such models is unclear.
The current ones seem to be about at the level of a talkative amphibian with a photographic memory and a huge library of data.
@James Joyner:
You’re convinced of that, are you?
@Kathy:
Until there is cloned, in vitro production of such at scale.
If there’s a need, that hardly seems impossible.
@Michael Reynolds:
The closest to a “Utopian future” fiction I can think of off-hand are Iain M. Banks’s “Culture” novels.
But they mostly avoided the “boring Utopia trap” by considering what happens when a massively powerful Utopia bumps up against various dystopias, and has to engage in sometimes questionable moral choices and compromises to deal with the consequences.
@Modulo Myself:
Personally, I still watch TV (BBC mostly) and listen to the “radio” (ditto).
And visit a local record store every few weeks.
I rather like to deny the future its due. 😉
@Modulo Myself:
“AI seems only useful for busy work and trivial tasks.”
The problem is that the people doing the higher-level tasks started out doing the trivial tasks, and showed they were sufficiently good at them to be promoted. If AI replaces the entry-level positions in a field, then in a generation or so, when the current high-level people retire, there won’t be people who learned by doing the low-level tasks to replace them.
@Kurtz:
Actually, I suspect that’s a reasonable general summary.
There were previous major shifts: above all the neolithic agricultural “revolution” and the consequent emergence of towns and cities.
Possibly also the development of “iron age” techniques to their conclusion, but that was slow.
There seems little indication that before about 1700 most people, at any given point on the social scale, had an appreciably higher standard of living than their equivalents in say 1400.
That’s not to say there had not been significant changes: printing, firearms, oceanic navigation etc.
But in terms of the economic basis of society, they were rather marginal and incremental.
@Jay L. Gischer:
This is true in the context of a stable economic system, but not outside that context.
The commercial and industrial revolutions led to massive relative price declines in such things as sugar, steel, textiles etc.
At the same time as demand evidenced by consumption increased enormously.
This wave of generative AI is a huge bubble, filled with grifters, sunk-cost-fallacy victims, wish-casting right-wing wealthy people who hate nothing more than workers, genuinely stupid people, and maybe like eight intelligent people doing interesting work figuring out what this type of AI is actually good at and how to limit its weaknesses.
Until the bubble bursts, we will have no idea what the landscape really is beyond the hype and the hope from the employer class who hate having to deal with employees.
This doesn’t mean that it won’t be used extensively — after all, employers have been outsourcing programming projects for decades, even though that greatly increases the chances of failure, so that on average it costs more. (It’s the time zones and the language barriers hurting communication, even when the language is Indian English. Move the product managers overseas, and it works better. The hardest part of software isn’t writing the code; it’s gathering and communicating requirements, understanding what is impossible, and coordinating teams.)
They would rather pay more, if it results in paying workers less. And this will make Generative AI very common, even if it is not successful by any metrics that make sense.
And the people pushing it as a product to replace workers will do well, at least short term. It disrupts the status quo, and even if it just fucks everything up, there’s money in providing the services that do the fucking up.
As Littlefinger says in Game of Thrones, “chaos is a ladder.”
As for everything else…. 2% of a smaller pie is better than 0% of a larger, better pie, if you have no interest in the size or quality of the pie.
We need more people with more pitchforks putting more heads on spits.
@JohnSF:
You can have a dystopian aspect in your utopia, as in The Ones Who Walk Away From Omelas
@JohnSF:
Cloning organs is absolutely the way to go, if we can get there. It will likely eliminate the problems with organ rejection, by making every organ a much better match*.
And, as a side effect, we will be cloning meat. Which means… celebrity steaks! The problem with Trump steaks is that they weren’t really Trump.
*: I expect that even after the genetics are identical, the differences in epigenetics (what gets expressed when during the growth of the organ) or some other factor may still be a problem. The more I learn about biology, the more of a random collection of barely functioning processes it seems to be.
For the vast majority of us, AI = LLMs. And the usefulness of LLMs depends in large part on the user.
Some people are negative on AI (LLMs) because of concerns about misinformation spread, hallucinations, job killing, humanity killing, concentration of power, etc.
Some people seem to be negative on AI (LLMs) because they really really really dislike “tech bros” and SV.
I’ve also witnessed a third, not mutually exclusive, category of people who just seem to be negative on AI (LLMs). Rooting against AI (LLMs) for, um, reasons.
I’m not claiming that most people are in this third category. And I’m certainly not claiming that OTB commenters are in this category.
I’m merely claiming that this third category seems to exist. And it’s not a tiny set of outliers.
Now, I am aware that my claim is based on my own impressions. Hence, the “seems” language.
Maybe I’m being unfair, especially to people who do indeed have a specific reason for their “negative on AI” position, but are just less able to articulate it in a way that satisfies me.
Still though, I maintain that the third category exists and is sizeable.
Speaking of AI, I ran across another hack to banish it from Google search. Here’s the link
It’s a more permanent solution than adding -noai to every search
Hi all,
Still in Europe, mostly offline.
Obviously predicting how an emergent technology will play out in the long term is highly uncertain. I’m personally skeptical of the claims that AI is going to make large portions of humanity unemployable.
The other point I’d make is that AI is much, much more than public, general-purpose LLMs. That’s just one use case and, IMO, the real impact will be in much more specialized areas that are designed for specific functions and created and trained for that purpose.
@Mimai: There are many fine reasons to hate LLMs, and I’m not going to judge someone harshly for the reasons they choose.
But your third group, those who just hate LLMs to hate LLMs, are the purest and best people. Or they’re just a little inarticulate.
Also, don’t forget the people who hate LLMs because reliance upon them will let human skills atrophy and prevent much of a generation from learning.
It’s a fear people had with calculators, and now a lot of people cannot do arithmetic at any significant level. But the next steps in math don’t rely on your ability to accurately do arithmetic — it’s algebra, geometry, calculus and things where you’re effectively building out a pattern for the arithmetic that can then be plugged into a calculator.
But, if an LLM can take a complex text and reduce it to something far shorter and at a 4th grade reading level, there’s no reason to learn to read at a 6th grade reading level. Or how to construct an argument. Or understand anything.
So many good reasons to hate LLMs.
ETA: Anyone who does not really, really hate “tech bros” and VCs simply hasn’t spent time with them. Even Tech Bros and VC folk hate them — both others and themselves, and rightly so.
@Gustopher:
Hahaha, a very Gus-like response. Let me see if I can respond in kind.
Damn, for someone living in the progressive techno-utopia of Seattle (Fremont, if I’m not mistaken!), marinating in microdosed espresso shots and artisanal datasets, your take here is surprisingly… reactionary.
Serious 19th century schoolmarm energy. “Girls, those novels will overheat your imaginations and unlace your virtue!”
I say this with affection: you sound like a Burkean with a compost bin, pining for a golden age.
Or if you prefer, a kind of anti-AI cottagecore, all gut feeling and cleverly righteous disdain.
And finally, your point about tech bros is fair. But if even they hate themselves, then maybe LLMs are our best shot at automating them into obsolescence.
@JohnSF: I suppose I could have said that better.
AI has huge development costs. Which means it will likely be controlled by large accumulations of capital – with just a few people benefiting.
Do you disagree?
@Mimai: Let’s say that LLMs can replace junior software engineers. This is semi-plausible, as junior engineers are only semi-self-directed. Where will new senior engineers come from?
The thought processes and experience required for that role are learned from years on the junior role, with a bit of mentorship, etc.
Meanwhile, right now the AI companies are pushing to get their product into classrooms, claiming that kids need to learn the technology of the future. And even without it being a part of the curriculum, kids are often using it as a shortcut (to cheat, one might say).
I don’t know about you, but for me a lot of the most important things I learned in school weren’t the facts; they were how to find knowledge, how to validate things, how to think about large problems (well, small-to-medium in grade school), and how to break them down into tractable chunks. The actual final product wasn’t all that important, even if it was the only visible chunk that could be graded.
Generative AI will let people skip to the final product. Never learning how to create that final product. And then, if they are ever presented with something novel, they don’t have the techniques to work through it.
I hope the LLM bubble pops before this becomes too much of a problem. I don’t think that’s particularly reactionary.
On the plus side, teachers are unlikely to have an unruly Gustopher in high school biology who sees the problems with the very basic definition of species (can produce fertile offspring), can’t let go, and makes it everyone’s problem.
This is a great and substantive response. I agree with a fair amount of it. The concern about people outsourcing foundational cognitive work, especially in education, is real and I don’t want to hand-wave it away.
That said, I still want to re-emphasize one of my core critiques. A lot of the anti-AI / anti-LLM rhetoric seems driven more by vibes than by clearly articulated principles.
It’s a kind of ambient negativity: suspicious, disdainful, allergic to the tech (and that’s leaving aside the “bros”), but without precision. That makes it hard to tell whether someone is critiquing what the tools do, how they’re used, or simply the fact that they exist.
Worse, some of those critiques, as stated, seem logically in tension.
“LLMs are garbage — they hallucinate, they’re derivative, they don’t understand anything.”
“LLMs are going to destroy education, destroy jobs, destroy minds.”
I’m sure a clever person can squeeze out some coherence in this position, perhaps with the assistance of an LLM, but to me this looks like a classic elephant-rider situation.
I also think the idea that LLMs will let students “skip the process” deserves more nuance. They can indeed do that, just like Wikipedia, calculators, SparkNotes, Grammarly, and search engines.
They can also support the process by breaking down big questions, refining half-baked thoughts, modeling argument structures, helping students see what a good answer looks like.
A tool isn’t inherently a shortcut. It depends how it’s used and what norms surround it.
Re your point about junior engineers: yes, if we treat them as fungible and fully replaceable, we’ll wreck the pipeline.
But that’s a deployment decision, not something intrinsic to LLMs. We could also redesign junior roles to integrate LLMs in ways that expand their capabilities while still preserving mentorship and learning.
The tech may be new, but the dilemma is not.
I’m not trying to evangelize here. In fact, I’m mostly trying to stress-test my own thinking and assumptions (I am a selfish beast after all).
Relatedly, I’m trying to push back against a particular style of AI critique that comes across as vague, emotionally allergic, or logically self-undermining.
Finally, two fundamental points that I keep circling back to: AI is so much more than LLMs and search assists. And the usefulness of LLMs depends in large part on the user, their skills, and their expectations.
@Jay L. Gischer:
No, I agree entirely.
If the current “weak AI” LLMs are replaced by a type capable of consistent rules generation and “reasoning” based on such rules (which would, imo, be required for the Economist’s scenario to come to pass), then that would be a massive argument for a revival of “distributism”.
That is, for effective (though high) limits on total personal wealth, very high inheritance taxes, vigorous anti-monopoly (and oligopoly) laws, and the related redistribution of capital asset ownership in the general population.
@Gustopher:
Gotta disagree with you there, Gustopher. I tutored algebra and calculus for years, and the most predictable failure mode for students was that they couldn’t figure out algebra because they couldn’t do arithmetic. You can’t factor quadratics if you don’t know your multiplication tables. I had students who knew perfectly well that 15 minutes was a quarter of an hour, but couldn’t tell me what 60 divided by 15 was. It completely crippled their ability to advance.