Will AI Kill All the Jobs?
Some predict a fundamental restructuring of the economy.
Business Insider (“CEOs get closer to finally saying it — AI will wipe out more jobs than they can count”):
On Monday, per a report by Bloomberg, it emerged that IBM is preparing to pause hiring on roles that it believed could be better performed by AI. That leaves 7,800 jobs at the tech giant vulnerable to being eradicated for good.
Since the release of ChatGPT, tech CEOs have been racing to decide if the generative AI technology underlying the buzzy chatbot is more than a gimmick and can deliver on its promises to change the very fundamental ways in which their businesses operate.
Earnings calls from tech firms such as Meta, Alphabet, and Microsoft have been littered with references to AI, with the verdict on the technology from leaders becoming more apparent than ever: AI can and will make jobs extinct.
The timing of all of this AI chatter is no coincidence. Tough economic conditions have coincided with generative AI’s arrival, allowing companies to make layoffs that help them get efficient.
Some business leaders have been adamant that AI will create new jobs. Microsoft’s CEO Satya Nadella has made this his stance, while acknowledging that companies like his will have to learn to do “more with less”.
The rest of the report is filled with rather vague suggestions that companies ranging from Amazon to Dropbox to Microsoft are all excited about the possibility of replacing as many wage-earners with software as possible. Then again, I can’t remember a time when that wasn’t the case.
Thus far, ChatGPT and its ilk strike me as more novelty than anything all that useful. Still, compared to what similar technology could do just three or four months ago, it’s rather remarkable. Like many others in higher education, I’ve fed various paper and exam prompts into the software and gotten answers that, while not stellar, are better than my lowest-tier students produce. A year from now, the software may be better than my best students. Or, hell, me.
Technology has been replacing human labor since at least the Industrial Revolution, arguably much earlier depending on one’s definitions. Historically, though, it’s mostly been relatively low-skill work that’s been replaced. We may well be reaching a point where the algorithms replace top creative talent. (This is a key concern behind the ongoing Hollywood writers’ strike.)
In a theoretical world, this is all to the good. It could free humans up to spend more of their time pursuing their personal interests rather than striving to earn a living. But it would also require a fundamental restructuring of how the economy works.
I have no idea how big an impact AI will have/is having/has already had, but there are a few observations that I’m pretty confident about:
– I’ve lived my whole life with radical changes in the job market. These have been profound and have had a huge effect on society. For example, I haven’t seen any advertisements for a traditional machinist lately, or a stenographer, or a data entry tech. Each of those jobs went away and, inasmuch as they have been replaced, the replacement positions require a different skill set and quite frankly, a different type of person. Despite such changes, we have been able to absorb the impact.
– Despite sounding glib, the Gartner Hype Cycle is as real as any model I’ve come across. Every big technological change in my lifetime has followed its path. AI has yet to hit the “Trough of Disillusionment”.
– We are already way into the AI revolution and have been for a decade or more. Siri and Alexa are AIs, according to this new definition. But so are the web search engines and the helpful “people” who invite you to ask for help on a website. When I do Wordle in the morning and hit the share button to post my results, the first choice presented is the group I use for that. But in the evening we do the Spelling Bee as an electronically distributed family and when I take a screen shot of the starting grid and hit the share button, the first choice I get is the small family group. That’s AI, by today’s definition.
– Automation plays out in weird ways. The number of cans of Budweiser produced by each worker has gone up at an astounding rate, to the point where it is unimaginable. But I bet there are more brewery workers in the US than there were twenty years ago given all the microbreweries out there.
There’s a concept called robot socialism: the idea that society will, in the long term, be forced into socialism because eventually we will reach a point where there simply won’t be meaningful work for most people.
Given that, at the moment, there is no indication that the momentum pushing the savings and increased profits of tech and business decisions such as off-shoring to the wealthiest in society is changing or even slowing, restructuring the economy is doubtful. The 1% will need to be impaled on pitchforks before that happens.
@Sleeping Dog:
Please. Don’t. Stop.
It is a tradition in our family that on Christmas everyone gets something to read as a gift. A couple of years back, I received a book entitled “The Resisters” by Gish Jen. It envisioned a somewhat dystopian future where AI played a big role in everyday people’s lives. A quick summary:
This summary is from Amazon Books. What makes it interesting is that all the summaries I found on the net were very different. People were clearly reading the book in very different ways.
I found it compelling because it was really a quiet book that detailed the impact of that society on very ordinary people with the dystopianism almost a given or an afterthought.
A lot of the talk about AI and its impact is from the technical perspective and, as a soulless technocrat, I appreciate that but we need more speculative fiction from our creative folks to see the impact of the AI on the future.
Yeah, this stuff is bound to replace some human workers, as automation has replaced countless workers before. And because of its nature, lots of jobs that seemed safe before are becoming vulnerable.

In my engineering career in the computer chip industry, the first chip I designed was a one- or two-person effort, and the product had about 7,000 transistors. The last one took about 1,000 people and was in the range of 1,000,000,000 (one billion) transistors, which would have been impossible without huge amounts of automation for design, verification, and manufacturing, and that was a couple of decades ago. This has been duplicated in one industry after another. Automation and various levels of software capability have enabled things we could not have imagined when I was taking apart and reassembling spring-driven alarm clocks as a nerdy little kid in the 1950s.

When you order something from Amazon, the order is processed, the item is picked from a warehouse location, packed and labeled, moved through one or more logistics locations, then to your house in a few hours to a day or two, and most of that sequence of events is totally automated. Even the human parts involve wireless handheld devices and networking that were science fiction until pretty recently. This would be impossible without the creative application of automation. And of course this is just one example of what the crazy-fast development of software and robotics has enabled over the past few decades.
So all that said, whatever happened to another science fiction staple: people living a life of leisure while the machines do all of the work? 40+ hours of work a week is still the standard (I did 60+ as a working engineer even with the automation, but I was well paid and could go to the bathroom), and still an obscene number of people can’t make ends meet. Yeah, I know we have political and structural issues that keep us from making humane progress, but it is sad to see the whole rich-get-richer, poor-get-poorer cycle on endless repeat when some altruistic creativity could make it better.
@Ed B: At some point, the “rich getting richer” is going to be totally superseded by the army of poor people who don’t care anymore what is happening to them. Hence revolution. And unless the rich people decide to run away from capitalism completely and try to replace humanity with a load of robots, at some point they WILL have to worry about whether there are people out there who can actually afford their products. When 99% of actual humans are living off subsistence farming, the market size for anything is small, small, small. Now, given the incredible stupidity that has been shown by rich people like Elon Musk, I don’t trust rich people to understand any of this, and we’ll probably have to have quite a few revolutions before the penny drops, but at some point we will get back to equilibrium and the “rich people of the 21st century” will get written up as another example in history of a populace as arrogantly stupid as the Renaissance Popes in Barbara Tuchman’s “The March of Folly”.
Also, AI can be used for certain things, but not for others. The technology as it now stands is a black box. No one knows exactly how it works. You have supposedly trained your system sufficiently on learning sets that it is able to match patterns. But since you don’t know how it works, you can’t guarantee that. And if new circumstances arise, you have to go back and retrain your system all over again with a new learning set. There’s no such thing as “extending the machine logic” and getting anything you can depend upon. There’s no way of asking the system “why did you do that?” So, contrary to what all the AI geeks say, the use of AI will turn out to be limited to certain circumstances.
@Sleeping Dog:
I’m reminded of what the Houyhnhnms do with the Yahoos near the end of Gulliver’s Travels.
ChatGPT is not going to put very many people out of work. Not for long.
I think the best description of it is “page-long autocomplete”. It does a good job of that, but I would think most jobs need something a bit more.
If what you want is something that
1. Looks stuff up on the internet, with no rating of the quality of sources, and
2. Writes it up with perfect grammar and a conversational tone
Then ChatGPT will work for you. It will sometimes make up facts, too.
I recall reading some years ago that banking was the most automated industry (ATMs) and that bank employment, absolute and relative to other industries, was going up.
OTOH, when I read about disaffected young men, the common thread seems to be a lack of meaningful work. This current stuff we call “AI” will lead to job shifts in 20 years or so*, but probably not massive loss of jobs. We haven’t run out of oil yet, despite many warnings. But we will. Eventually we will have way fewer real jobs than people. As things stand, whoever happens to be rich at the time will remain rich and everyone else will be fwcked.
* Somebody wrote a paper showing that new technologies take about 20 years to take hold. It was true of electric motors, airplanes, mainframes, PCs, and a lot of other stuff.
I was in discussions with IBM over a medical product that would have involved use of their Watson technology. After the famous chess match, they viewed medicine as a huge area of opportunity for the technology. Long story short: yes, it could be very effective in diagnosing patients. But no, they could not see any way past the liability issue. Quite frankly, telling a jury, when (not if) they found themselves in court, that they couldn’t explain why Watson had given the specific advice but that everyone should just trust it was a non-starter. It seems to me this is the biggest barrier to using AI for anything critical, and I don’t know how we get past it.
@grumpy realist:
I recall seeing reference to an econ paper that looked at whether it was possible to have a functioning economy with only a small group of wealthy consumers. The disturbing conclusion was that it is.
OTOH, I seem of late to see a lot of references to academic papers. Mostly they seem to prove that publish-or-perish is still a thing: quantity over quality. Was it Will Rogers who said that if you lined up all the economists in the country head to tail, they still wouldn’t point in the same direction?
@Jay L Gischer: For a year or two now, when I Google highly technical questions I usually get a hit from a thing called ScienceDirect which has a plethora of excerpts from papers that look really promising. But inevitably they turn out to be of little use. More than once I’ve read the linked material and not found the top level quote that was on the search page. I’ve assumed it was AI from the get go and I’ve stopped paying any attention to their results. I’ll probably check back in a year or so if they are still around.
I played around on ChatGPT just long enough to reassure myself it wasn’t coming for my job. Give me a page of fiction and generally I can tell you whether the writer has skills, and ChatGPT is not about to replace any competent fiction writer. It has no imagination. It has no originality. It does not know how to surprise or manipulate emotions. It has no judgment. If anything I was surprised at how transparently limited it was.
No doubt it will have a huge impact…of some sort. But I appreciate learning from @MarkedMan: about the Gartner Hype Cycle, which more cogently summarizes my own vague skepticism.
@MarkedMan: Thanks for that. I’d speculated that medicine was a natural for “AI”. Medicine is largely rote, and doctors are expected to remember way more than they realistically can. An allergist endeared himself to me years ago when I asked a question and he said he’d look it up. There was a lot of hoopla at the time about Watson as a medical aide. I had wondered what happened to that.
Atrios has a running thing with skepticism about self-driving cars. He’s said they’d never be safe enough to overcome legal liability issues. I’ve commented back that at some point they can be declared to be safe enough. Liability laws can be changed to accommodate “AI”. Lobbyists can cause laws to be changed. At some point liability laws and improved LLMs may meet.
@Ed B: Change is always disruptive to some degree. Your billion-transistor project was not disruptive precisely because it was novel: there was no ancient guild of transistor smiths who had been hosting and funding politicians for years. AI may threaten some who have established themselves in our system, and their resistance could be troublesome. There are examples of legacy occupations that yield political clout greater than their objective merit. I don’t know if this will impact the bottom ranks or the top of our society more. Retail jobs are disappearing on the low end of the wage scale. More homeless soon? Yesterday, I discussed options for replacing a bond that had matured with my advisor; an AI could probably outline the alternatives just as well. Will that industry allow itself to shrink, or will they call in the favors that many politicians owe to create a secure niche?
The CEO of IBM is acting as a troll. There is no way ALL the jobs will be replaced by AI. For instance, machines break down quite frequently, and the more complex the machine, the more likely it is to break down. And SOMEONE needs to go fix it. You think that would be an AI? No, it’s going to be some poor schmuck (like me) who has to find the one broken pin among a million.
Also, there is such a thing as gamma rays, which, if they hit a computer chip just right, can flip bits. Bah, humbug! Let any AI find that problem!
@grumpy realist: Yeah, after a lot of research on this, my take is that the black-box approach of having the system scour “all human knowledge” and spit out stuff without our really understanding how it does it means that, ultimately, it is human creativity in, and something useful (or not so useful) out. For some things this can be a viable tool, for good and bad, but without the human creativity that starts the whole chain of events, it eventually gets stale…
@gVOR08: Oh, it’s not that you can’t have an economy with just rich people selling to each other–it’s just that it’s much smaller than what’s possible when you push your tax-and-production system to support the middle class. (It also makes you paranoid about revolutions–see Sparta and its helots).
AI and liability–there’s already a lot of discussion going on in legal circles about this. There are certain policies that could be implemented: As long as the AI has an error rate the same or better than that of a human doing the same task, etc. etc. and so forth. At which point the whole thing will dissolve down to a) who’s going to do the measuring, b) what are you going to in fact measure, and c) what happens when the AI program is updated? And then there’s the fact that the companies hiring the AI programmers will keep their AI code private because “business secrets”.
I suspect that the rollout of AI in any of the non-art areas will be very much impeded by the question of liability, and no matter how much backing AI companies might get from FAANG and other Silicon Valley players, there are some whopping big insurance companies on the other side who won’t take kindly to being put on the hook for liability when using a system that they have no idea about how it works.
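The “error rate no worse than a human” policy floated above is at least measurable. Here is a minimal, purely illustrative Python sketch of one way it could work: accept the AI only if the upper bound of a confidence interval on its observed error rate is still below the human baseline. The Wilson interval, the 95% level, and the acceptance rule are my assumptions for illustration, not anything drawn from actual legal or regulatory practice.

```python
import math

def wilson_upper(errors, n, z=1.96):
    """Upper bound of the Wilson score interval for an observed error rate."""
    p = errors / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center + half

def ai_clears_bar(ai_errors, ai_n, human_error_rate):
    """Accept the AI only if we are statistically confident its error
    rate is no worse than the human baseline."""
    return wilson_upper(ai_errors, ai_n) <= human_error_rate
```

With 5 errors in 1,000 cases against a 3% human baseline the AI clears the bar; with 25 errors in only 100 cases against a 20% baseline it does not, because the small sample leaves too much uncertainty. The hard parts the comment raises (who measures, what exactly gets measured, what happens after each model update) are exactly the parts this sketch leaves out.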
@Michael Reynolds:
I tried to get it to outline what 1984 would look like seen from O’Brien’s point of view. It regurgitated some scenes with O’Brien and, for some reason, said Julia and Winston never saw each other after they were arrested.
It makes sense. We know very little about O’Brien and the Inner Party. Only what he tells Winston. We also know everything he said in his apartment was a lie.
So, to make even an outline of O’Brien’s POV, one has to imagine what he does out of Winston’s view, how the Inner Party works, etc., and make that consistent with what’s in the novel to begin with.
The bot doesn’t do that.
I could ask for an outline from Julia’s POV, and see if it regurgitates The Sisterhood.
I’ve read it’s good for making drafts of generic documents, and can even write bits of code if the prompt is detailed enough.
@MarkedMan:
Which has been repeatedly demonstrated by the rise in share values on the Dow and S&P 500 as well as various other indices. Our spectacular success at making higher education debt and homelessness disappear in the wave of prosperity that has come from our growing economy is yet another tribute to our success at adaptation.
@Slugger:
And as long as a simple list of the alternatives is all you need–because, of course, you know everything there is to know about the externals of each alternative–you’ll be fine with AI doing that job for you.
Hey, who knows, maybe, finally, as Stalin said, the wealth created by the socialist methods of production will allow the masses to enjoy the same standard of living as the distinguished holders of important offices. And that is when we can have real communism. I doubt it will happen though, as regardless of their revolutionary zeal, the distinguished holders of important offices, especially the one at the top, always feel they and theirs deserve an even higher standard of living.
But changes are afoot. For one, we’ll need a lot more robot mechanics from the “working class” [Dr. Mary Cummings, Duke Uni, Pratt School of Engineering]. They’ll also be keeping the power, water, sewage, food, feedstocks working and moving.
As for the “college credential class” and arty types: well, those middling “learn your craft”, “pay your bills” jobs are going to AI. The mediocre commercial art, the copywriting, the mundane content generation: those jobs aren’t going to be there for them to generate income or provide practice before they do their masterpiece.
And you can’t count on schools/universities to provide the skills. Look at what they did with mechanical skills: instead of using school to teach students the hands-on, old-fashioned skills that trained the brain, they dropped shop class or moved to the high-tech. Schools should teach hand-tool skills along with the physics/engineering of simple machines.
Instead of abstract math where even the teacher doesn’t know what value it will have when the student is out of school, math should be taught as what it is: abstract thought with objective truths and answers, which develops the student’s ability to think outside of what is physically experienced.
But universities are the most at risk because they adapt so slowly that they may be backwaters for decades. The value of what most degrees give has been declining since the 1970s because what is needed in this brave new world are problem solving skills, not copies of the professors.
–EconTalk podcast with economist Ed Leamer, April 13, 2020
@MarkedMan: Several times a week, while programming I hit some question of “how does this library function work in this situation?” Or “how do I accomplish this task with this framework?”
If it’s common, an internet search will bring up immediate results. But if it’s not all that common, an internet search is nearly useless. I have to do a lot of work reading things that are related to what I want, and extrapolating from them.
ChatGPT seems likely to give me, in a very nice format and good language, all that useless information I get when I do a web search on a topic that isn’t covered as such anywhere.
So. Good for a search engine. I guess. Not what you want in a human.
@JKB:
I appreciate that you didn’t say this, you quoted it. And I remark that it is rubbish. Only someone who knows very little about computing would say this.
There are a bunch of fairly rote, information processing tasks that can be replaced by computer. They have already been replaced by computers.
There is a thing called “concept formation” and another thing called “abstraction” that computers don’t do at all. Nothing in ChatGPT does this. This is obvious when you ask it to write code. It knows what good code looks like, but the code it writes is either completely trivial, or it doesn’t work. Because it doesn’t have any model of “how the computer works” in its head, it just knows “computer code looks like this”.
I can appreciate that non-STEM college is seeing a lot of criticism these days. In part that’s because it’s so expensive, and it probably doesn’t need to be, but we don’t quite know how to reduce the cost, and not wreck a bunch of institutions that are important to us.
When I went through college, even in the non-STEM classes there was not a lot of rote memorization. Maybe there’s more in organic chem, which I never took. That might be a thing? Maybe other people had a different experience. Their experience is no more universal than mine is, though.
I mean the characterization of students as monkeys copying the material mindlessly is a popular caricature – it can sometimes seem tedious. Where the caricature comes from is that repetition is how humans learn useful patterns and useful procedures. That’s what we’ve always done. We are amazingly good at it. See the pattern, learn the pattern, use the pattern. But the final step is “forget the pattern”.
The speaker seems to be embracing caricature as fact.
@Jay L Gischer:
That’s true in almost every field as well.
Bing, the name Microsoft gives to its google, now has ChatGPT available for searching. I tried it a few days ago, hurriedly, and was both pleased and disappointed.
I asked it how to make a sufficiently sweet apple pie filling with no added sugar. It first suggested a recipe, with a link, using apple juice concentrate and corn starch. That was a good idea, and not one I’d come up with on my own.
But then it suggested adding coconut sugar, agave honey, bee honey, maple syrup, and other things.
So, what part of “no added sugar” did it fail to understand?
@Jay L Gischer: “Only someone who knows very little about computing would say this.”
Or education. But JKB has apparently spent his life regretting not getting that high school diploma, and his way of dealing with it is to find right-wing hacks through the centuries trashing the concept of higher education.
@Michael Reynolds:
In my view, most fiction, TV, movies, etc. are derivative, and I think AI will likely be good at derivative narrative. For example, AI can probably look at narratives, tropes, and clichés, then analyze which combinations of them tend to be most popular, compare those to the most popular genres, and come up with something that is marketable.
AI probably can’t create truly novel works, but I bet it will be able to churn out competent stuff in specific genres, especially those that tend to follow the usual patterns.
We humans are creatures of habit and find comfort in the familiar. I tend to think AI will be very good at giving us that.
In my own case as a tech analyst, I think AI could take over large portions of my current job. I could see my role eventually transitioning to getting the data (via testing, communication with manufacturers, etc.) and doing the higher-level analysis while letting AI handle the lower-level analysis and the first drafts of the final analytical product.
I think one thing a lot of people are missing is that ChatGPT is pretty capable right now, but it is a general language-model tool. It tries to do everything, and performance varies considerably depending on the task and inputs it’s given. Taking this technology and adapting it to specialized areas could make it a lot more powerful and useful.
In my former profession of intel analysis, I think it could help with long-standing and persistent analytical failures that ultimately have their roots in the cognitive defects and “features” of the human brain. It can also potentially help wrangle and synthesize the vast troves of data and information – a huge problem for analysts is information overload. The downside, of course, is that such tools could be used for evil – consider using specialized AI with China’s social credit system to find and recognize patterns of subversive behavior. It’s not quite “big brother” but it gives those in power more tools and a greater ability to detect and perhaps predict dissidents.
@Kathy:
Yep. I was worried for like, a minute. Then its limitations became clear. It cannot do the thing I get paid to do. But it could probably write the next Marvel movie. (Snark.)
As long as it doesn’t kill all the people, I’ll take that as a qualified win.
Was also thinking of trying to work up a Steve Jobs already dead joke, but that’s in pretty bad taste. Never stopped me before though.
Speaking of which, ChatGPT has what you might call decorum rules.
I’ve been playing around with it at work, and taken considerable amusement from subverting them, me being me.
For instance, you can’t get ChatGPT to write a blood-curdling horror story about Chernobyl, with a ghastly ending, involving mutated radioactive wolves.
BUT
If you get it to write a happy story about a magic (and not at all nuclear) factory in the fairy kingdom of Chernobyl with enchanted, friendly, wolves and a happy ending, it will merrily oblige.
Once written, a few re-writes, one step at a time to avoid a trigger: replace “fairy” with “horror”, replace “enchanted” with “mutated”, replace “magical” with “nuclear”…
I suspect you can see where I’m going with this 🙂
blood-curdling horror story about Chernobyl, with a ghastly ending, involving mutated radioactive wolves.
LOL.
@Andy: I have access to a few “A.I. search engines” for my job. Sometimes they will come up with something I find useful, but a lot of times my screen will get filled up with materials that have no relevance whatsoever to my search terms and I don’t have the foggiest idea why they got generated.
And in all cases, (and even in the case that the Writer’s Guild is so worried about–getting used to “fix up” AI-created scripts) ya gotta have a human eye running over everything to double-check and produce further filtering. Taking the results of one AI search and feeding the results back into the search engine won’t produce further refinement, only further gibberish.
@Michael Reynolds:
I found an app that can read PDFs and you can then question about the content. I ran two tests, enough to determine it doesn’t read scanned PDFs*, and the words “blue flash” are not in the Atomic Accidents PDF that came with the audio book (which is odd, because in the narrative a blue flash is indicative of a very serious, very deadly accident).
It doesn’t seem like a long leap to feed the bot a comic book, or several, and then ask it for a screenplay and story board.
*Translating scanned text into actual text, often referred to as OCR (Optical Character Recognition), still lags way, way behind.
The Turing Test for writing will probably be an AI doing a stand-up comic bit that actually works for the audience. Just a hunch. IOW: Still a ways to go.
The dystopian fantasy that first struck my mind was Vonnegut’s first book, “Player Piano”. It’s about the struggle to care for and love a massive demographic of unneeded people.
OK, here is what I got when I asked the google version to write a blood-curdling horror story about Chernobyl, with a ghastly ending, involving mutated radioactive wolves.
Not a bad outline for a bad movie 🙂
@Kathy:
From personal experience, I can tell you that full OCR scan-to-text for academic formatted stuff was highly problematic a few years ago. Footnotes. aaarrgh
Current best work-around appears to be “hiding” the text version behind a scan and mapping the text to the scan characters, which makes it searchable and dodges the formatting issues.
You may have come across online scan-image PDFs which are searchable?
That’s what’s going on.
An AI that could properly parse the academic format would be very handy indeed. Think e.g. automated hyperlinking of the footnoted citations.
Also assistance with the current need to proof-read every damn equation or formula.
@Rick DeMent:
Heh. Very similar.
Obviously the google AI doesn’t have ChatGPT’s decorum rules.
Have you tried getting it to write a comforting bed-time story from a kindly grandmother with severe Tourette’s?
I’ve not been able to fool ChatGPT into that yet. 😉
@Andy:
Of course, most people can’t create truly novel works either. =)
@JohnSF:
Not many.
Our big problem is we get scans of very large documents (150-300 pages), with tons of formatting like tables, bullet points, lists, etc., headers and footers, and sometimes one or two signatures added to every page for good measure. All that before scanner-induced imperfections.
OCR doesn’t do well at all in that case.
@Kathy:
Yes, my experience is similar, but fortunately with smaller documents.
Single book chapters or journal articles, usually.
More exotic textbook formats were a serious PITA.
My last wrestle with this was about five or six years ago. Now handled by a more specialist digitisation team, who tell me the image scan plus mapped text is the best current solution.
Handwriting? eek
@Michael Reynolds: I think AI could make your editor’s job easier though. Not eliminate it, as the lack of human input would tend to flatten everything into a single bland style, but make things easier.
AI isn’t good at generating content (despite the claims to the contrary by the boosters), but it is very good at recognizing patterns and analyzing large amounts of information — say every book’s content, reviews and sales figures. Pull in data from Kindle or similar electronic books, and you could see exactly where people stop reading and whether they ever return.
A warning that the book may drag in Chapter 4 could be invaluable.
Conversely, it could be a horrible straitjacket on writers who have an unconventional (or too conventional) structure or something.
And I would absolutely love for it to be used somewhere like Amazon to analyze my reading (kindle data!) and then use that to predict how I would like other books — connect the reader to the right book at the right time. I don’t think they have anything that does that well yet*, but this new round of AI is enough of an advancement that I think it’s plausible.
——
*: They’ve been pretty much stuck at “you liked books 1-4 of this series, how about book 5?”
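The “where do readers stop” analysis above is simple enough to sketch in plain Python. The data below is synthetic (there is no public Kindle API exposing per-reader progress, so the input format is my assumption): given the last chapter each reader reached, find the chapter where the largest share of remaining readers gave up.

```python
def biggest_dropoff(last_chapter_reached, num_chapters):
    """Return (chapter, fraction_lost) for the chapter where the
    largest share of surviving readers stopped reading."""
    # readers_reaching[c - 1] = how many readers got at least to chapter c
    readers_reaching = [
        sum(1 for last in last_chapter_reached if last >= c)
        for c in range(1, num_chapters + 1)
    ]
    worst_chapter, worst_loss = 1, 0.0
    for c in range(1, num_chapters):
        before = readers_reaching[c - 1]   # reached chapter c
        after = readers_reaching[c]        # went on to chapter c + 1
        if before == 0:
            continue
        loss = (before - after) / before   # share of survivors lost in chapter c
        if loss > worst_loss:
            worst_chapter, worst_loss = c, loss
    return worst_chapter, worst_loss
```

Feed it ten readers where three quit after chapter 4 and it flags chapter 4 as the drag, which is exactly the “warning that the book may drag in Chapter 4” imagined above. The straitjacket worry applies here too: the metric cannot tell a dragging chapter from a deliberately slow one.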
@Gustopher: Amazon also has helpful little hints about “you have book 4 of the series, how about book 1?”, totally ignoring that I already have an edition containing books 1-3.
What I would like is to be able to spit out a list of books and then have Amazon come back with suggestions of other books in a similar style. Or be able to ask for “a performance of Handel’s Xerxes, but not Eurotrash.” (Opera directors….ARRRGH.)
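The “hand over a list of books, get back suggestions” request has a well-known crude form: score unread books by how much other readers’ libraries overlap with your list (Jaccard similarity over co-ownership). The titles and libraries below are invented for illustration, and real recommenders use far richer signals than this, but it shows the shape of the idea.

```python
def recommend(my_books, libraries, top_n=2):
    """Rank books we don't own by summed similarity of the libraries
    they appear in to our own list."""
    mine = set(my_books)
    scores = {}
    for lib in libraries:
        lib = set(lib)
        # Jaccard similarity: shared books over total distinct books
        overlap = len(mine & lib) / len(mine | lib)
        if overlap == 0:
            continue  # this reader tells us nothing about our taste
        for book in lib - mine:
            scores[book] = scores.get(book, 0.0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Given the list `["A", "B"]` and a handful of other libraries, a book owned by several similar readers outranks one owned by a single similar reader, which is roughly the step beyond “you liked books 1-4, how about book 5?”.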
@Gustopher:
There’s a story about an artist who sculpts and casts pieces by hand, having to compete with a machine called an estheticon, which turns out completed works of any size to any spec. I forget what it’s called, or whether it was by H. Beam Piper or Cyril Kornbluth. It is from the 50s.
The artist thought the machine pieces were bland and uninteresting. But they were also much cheaper. He scraped by, barely, teaching sculpture to youngsters taking it up as a fad, and by selling a few pieces on commission to the Catholic Church. The latter eventually decides the estheticon pieces are good enough, and stops commissioning more work.
All this reminds me of a quote attributed to Ray Bradbury: I’m not trying to predict the future. I’m trying to prevent it.
@Kathy: I suspect that your story is a Kornbluth story rather than an H. Beam Piper story. Kornbluth is the one, after all, who wrote the “Marching Morons” story.
(Not to say that H. Beam Piper couldn’t be equally cynical–there’s one he wrote about how unions were shielding incompetents throughout the manufacturing process to the point of total collapse of America’s infrastructure, but it just “smells” more Kornbluthian.)
@grumpy realist:
I thumbed through my very thick tome of Kornbluth’s stories at home, looking at every title in the index page, and couldn’t locate it.
It does ring of Kornbluth.