The AI Is Taking Our Jobs!
Machines have been eliminating blue-collar jobs for decades. Now they're coming for white-collar jobs.

The Atlantic’s Derek Thompson points to “A New Sign That AI Is Competing With College Grads.”
Something strange, and potentially alarming, is happening to the job market for young, educated workers.
According to the New York Federal Reserve, labor conditions for recent college graduates have “deteriorated noticeably” in the past few months, and the unemployment rate now stands at an unusually high 5.8 percent. Even newly minted M.B.A.s from elite programs are struggling to find work. Meanwhile, law-school applications are surging—an ominous echo of when young people used graduate school to bunker down during the great financial crisis.
He acknowledges that the causes are likely multiple, including the restructuring of the economy in the wake of the Great Recession and a gradual decline in the wage benefits of college. But he focuses on the titular explanation.
The third theory is that the relatively weak labor market for college grads could be an early sign that artificial intelligence is starting to transform the economy.
“When you think from first principles about what generative AI can do, and what jobs it can replace, it’s the kind of things that young college grads have done” in white-collar firms, [David Deming, an economist at Harvard] told me. “They read and synthesize information and data. They produce reports and presentations.”
[…]
As law firms leaned on AI for more paralegal work, and consulting firms realized that five 22-year-olds with ChatGPT could do the work of 20 recent grads, and tech firms turned over their software programming to a handful of superstars working with AI co-pilots, the entry level of America’s white-collar economy would contract.
[…]
And even if employers aren’t directly substituting AI for human workers, high spending on AI infrastructure may be crowding out spending on new hires.
This is all admittedly quite thin: a trend analysis of something that’s barely a trend. It is, however, quite plausible.
When ChatGPT exploded onto the scene two years ago, it sparked a lot of angst in higher education circles, with administrators and professors alike seeing it as a vector for cheating and scrambling to figure out how to make that harder to do. Since then, most of us have come to embrace it as yet another tool available to students and professors alike and have redesigned both teaching and assessment around the fact that it exists.
I haven’t spent as much time as I should experimenting with it, as early attempts weren’t that promising. When I was department head, I found it somewhat helpful for coming up with assignment prompts that were better than what I would have produced on my own. And my wife has been using it a lot at work to automate mundane writing tasks.
As a colleague who’s an enthusiastic adopter keeps noting, today is as bad as the technology will ever be. He’s already using it to help write op-eds, PowerPoint presentations, and even generate podcasts.
I suspect that I’d go back to being a more prolific blogger if I fed interesting pieces into AI and let it do the tedious tasks of linking, summarizing, finding appropriate graphics, and the like, reserving only the analysis for myself. (Andrew Sullivan (in)famously did that with interns many years ago now.) Getting to that point, though, requires a significant investment of time up front getting proficient with the tools.
But, getting back to Thompson’s point, this also means that an increasing number of tasks that used to require highly intelligent, well-trained people can essentially be automated. Aside from the economic displacement and social upheaval that will cause—how will these people make a living or find meaning in their daily existence?—there’s also the matter of the tools being smarter than the operator.
We’re already at the point where the best AI programs are better than the average undergraduate at writing a 2000-word, thesis-driven essay based on an assigned body of literature. A year or two from now, they’ll likely be better than the average PhD student. Certainly, they’ll be massively faster and more productive.
But who’ll be in charge of checking the work? As technology has taken over tasks that we used to do with human brainpower, including simple information storage, the incentive to actually know things has diminished greatly. (There was a time, for instance, when I had dozens of phone numbers and addresses memorized.) We can, after all, just Google it.
Having bridged those worlds, I have the advantage of a reasonable ability to separate wheat from chaff in the search results. But, as we offload more and more of our cognitive tasks to machines that are much better at storing and processing information, who is going to have that skill set?
I’m a freelance writer, and I’ve started to see some of this. For a few of my (now-former) clients, I’d do background research, outline, and write copy for things like landing pages. I’d send it along and the client would verify that it matched their messaging and intent. That’s all easily done by AI now. I’m still writing for clients who do not want their information added to the AI knowledge base (most of the content fed into AI programs becomes part of their “knowledge base,” which is an issue for clients with IP concerns, trade secrets, and so on).
I’ve said to others who do similar work (and who have also started to see this sort of thing) that I’m glad I’m closer to the end of my career than the beginning.
This definitely is an issue for creative work as well. My daughter graduated last year with a degree in animation from a top-tier art school, but no one is looking to hire entry-level animators, since AI can do it about 80% as well, and far cheaper.
It’s good to be old.
The thing AI can’t do is be original. But even before AI there was already a dearth of originality. Movies and TV have been all sequels and reboots and adaptations of IP, i.e., stuff someone else created at some point in the past.
I’ve been worried for a while about the fact that the world is increasingly viewed through a screen. For 99.999% of human history all of reality was, well, reality, with a sprinkling of imagination that conjured up gods and demons and fanciful explanations of the unknown. Then, in a microsecond of historical time, probably half of reality began to come via a screen. Games, YouTube, TikTok, TV, movies. Writers are writing less and less about real life experience, and more and more about a reality they learned of via a screen. We’d already begun the job of annihilating imagination and originality. AI will just advance that ball even further.
I don’t think this is good for the species. AI looks backward, not forward. Children of the screen also look backward and not forward, since the act of creation they’re enjoying now took place elsewhere and at an earlier time. Less and less IRL, more and more screen. Less human imagination, more AI pap.
But I suspect there are limits to the adoption of AI that I don’t yet see – revolutions give rise to counter-revolutions. What I do see now is that YouTube, which has been flooded with AI crap, seems to be leaning in the direction of on-screen humans with real faces and real voices. I watch a lot of travel and visa and expat videos, and have no patience for the AI product. It can be slick, but one feels the absence of a human with a human opinion. Look at the difference in comments between human videos and AI videos. I’d far rather watch some poorly-produced but human video than a perhaps slicker AI version. All AI can ever say is what’s already been said.
This is an old, old story. Owners of capital have always sought ways to capture returns as profit rather than passing them along to employees and shareholders. AI is just the latest big idea for doing it. At the community college where I worked in the 90s, I watched an early version of this phenomenon. Dozens of students were majoring in a new field called desktop publishing. Most of the last cohort of them had their futures foreclosed on by the early Microsoft program that created website data in WYSIWYG formats. An early “tech job” converted to a clerical task (not even a “skill”). Oh well.
ETA: “…since AI can do it about 80% as well, and far cheaper.” Hat tip to Moosebreath for hitting on the race to mediocrity factor that I was trying to capture but couldn’t figure out how to explain.
ETA #2: “What I do see now is that YouTube, which has been flooded with AI crap, seems to be leaning in the direction of on-screen humans with real faces and real voices.” Or, at least, you hope that’s what you’re seeing. Probably true. Deep fake tech is still in its infancy and still spendy.
Apparently the Pillow Guy’s lawyers used AI to write some briefs and got dressed down by the judge.
https://newrepublic.com/post/194427/mypillow-ceo-mike-lindell-ai-generated-legal-filing
A tool is only as good as the people using it, whether it’s an impact wrench or a high-falutin’ computer program. That’ll never change.
A proper education teaches us to think rigorously and creatively, by conducting research, using problem-solving skills, and challenging ourselves to search out new ideas and new ways of thinking.
AI will be the end of that.
I’m glad I’m retired and at the end of the line. I fear for what we will become.
A timely post given that a judge has just slapped Mike “My Pillow” Lindell’s lawyers for submitting an AI generated motion. The motion cited non-existent cases and misapplied language taken from actual caselaw. It took the judge’s pointed questioning to get the lawyers to stop dissembling about a draft having been inadvertently submitted instead of the finished product, and to admit there was AI “assistance” in creating the draft. The judge has now ordered them to submit meta-data on the motion that was submitted AND the alleged finished product. The attorney governing bodies have not looked kindly on lawyers who submit AI filings – and properly so.
Unless you happen to be in Spain, Portugal, or parts of France this week.
The real shift is going to be in higher ed, at a time when its value is already being questioned and academia is very slow to change, as the old guard has to die out.
An observation from 2020, before AI’s recent advancements (from the Econtalk podcast with economist Ed Leamer, April 13, 2020):
The future is in being able to solve problems and fix things. At least, that is what you should go to school to learn; avoid the majors that emphasize learning to regurgitate what the professor pontificated.
The usual response to fears of new technology taking jobs is that the technology results in the creation of new jobs with greater value. This has been true, but there are victims who never see the benefit. And it is also true that, as the financial disclosures note, past performance isn’t predictive of future performance.
With AI, it is hard to see where the new jobs will come from, and when you combine AI with robotics, another tier of jobs is at risk. There are Silicon Valley futurists who say the answer is guaranteed income, socializing the wealth created and allowing us to pursue our interests. That’s never happened to any extent in the US in our past, and it is extremely doubtful that it will happen now, particularly when our oligarchs are trying to bring the country, if not the world, to neo-feudalism.
Glad I’m old, I might be in the way, but the calendar keeps turning.
@JKB:
The people questioning the value of education are the same people who have benefited from keeping you ignorant. Yes, you.
Automation has been displacing workers for well over a century now. The novelty in generative AI is that it will displace white-collar knowledge workers: paralegals, researchers, assistants, and so on. One area of concern might be programmers.
I’ve read quite a bit about LLMs in addition to playing with them now and then. One thing that’s rather constant across different publications and pundits is that they are useful for producing computer code. I’ve no way to gauge this. If it’s true, even if the code requires checking and testing, it may reduce the number of people needed to write software.
Yes, it may also increase the amount of software produced: apps might get updated or tweaked more often, it being easier and less time-consuming now, or software might get integrated into products that didn’t have any.
But the trend, especially in a climate of share-value supremacy, will be toward fewer people writing software.
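To make that concrete, here is a minimal sketch of the workflow being described, using OpenAI’s Python SDK. The prompt, the model name, and the slugify task are all illustrative; the point is that a human still reviews and tests whatever comes back.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Write a Python function slugify(title: str) -> str that lowercases "
    "the title, strips punctuation, and joins words with hyphens. "
    "Include a docstring and two example calls."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The generated code still needs a human: read it, paste it into a
# module, and test it, e.g. assert slugify("Hello, World!") == "hello-world"
print(response.choices[0].message.content)
```

Whether that reduces headcount or just raises output per programmer is exactly the open question.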
And furthermore the users of AI technology do not need to be anchored in any country. A device in any location can probably play chess better than any human. China seems to be making big strides in AI technology. An American company will soon be able to off-shore most of its white-collar operations to an outfit using Chinese technology cheaply.
Blue-collar operations will still need to be local. Megacorporation of America will get its accounting needs met by an AI device in China, but they’ll need a local employee to unclog the sinks. The prestige job of the future will be hairstylist.
I’ve seen both ChatGPT and Google’s AI produce some bizarre misinformation. For example, the latter claims the following is an exchange from Empire Strikes Back:
Yoda: This bucket of bolts is never gonna get us past that blockade.
Princess Leia: This baby’s got a few surprises left in her, sweetheart.
That said, as long as I keep in mind to take anything it says with a grain of salt, I have found it very useful in pointing me in the correct direction.
“When I graduated from college, it was a sure thing that you’d get a great job. And, in college, you’d basically learned artificial intelligence, meaning, you carried out the instructions that the faculty member gave you. You memorized the lectures, and you were tested on your memory in the exams.”
Two things:
One, it’s never been a “sure thing” that one would get a “great job” after graduating from college.
Two, if all this person did was regurgitate lectures at exams, he went to a crappy college.
@just nutha: I have seen AI animations on YouTube. They are easy to spot. I avoid the channels that use them.
I also see ads that are curious to me. They are the “just folks” vibed ads, like somebody just picked up a phone and started talking. But the strange thing is that a lot of them use an AI voice that is lipsynced to the actor talking. The actor is speaking English. Why is the AI voice dubbed in? Does this mean lower pay for the actor? I don’t know, but I suspect it might.
@Kathy:
I am a programmer.
I haven’t played a lot with the AI programming tools out there, but based on other ventures, I’m thinking it will find a place in the toolkit, and one of a programmer’s skills will be how to effectively use an AI to get more work done.
And yet, I see no progress at all on AIs being able to tackle the problems that make programming hard.
In part that’s because we programmers have been putting stuff in libraries for reuse since at least 1970 if not earlier. Figure out how to do it once, then never have to think about it again.
The hard part is figuring out what’s wrong. To do that, one must have strong critical-thinking and deductive skills. I see no evidence of those in text-based “next word prediction” algorithms. In fact, it’s easy to show the opposite.
The other hard part is grasping concepts, and assembling those concepts so as to formulate a solution to the problem at hand.
Now, every programming job has some tedious boilerplate, which it would be nice to automate. I don’t think that tedious boilerplate consumes all that much time, though. It just seems like it because it’s so tedious.
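As a small aside, a lot of that boilerplate has long been automated without any AI. Python’s dataclasses are a minimal example of the kind of tedium meant here: the decorator generates the constructor, repr, and equality methods programmers once typed by hand.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    """__init__, __repr__, and __eq__ are generated automatically."""
    name: str
    title: str
    salary: float = 0.0

a = Employee("Ada", "Engineer", 120_000)
b = Employee("Ada", "Engineer", 120_000)
print(a)       # Employee(name='Ada', title='Engineer', salary=120000)
assert a == b  # field-by-field equality, also generated for free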
No, the hard part comes because there is always stuff you haven’t thought of. I see no sign that AIs will be better at this. Unlike self-driving cars, which can really operate on the basis of what is observed (sensed) in the immediate vicinity.
Programming is something that seems easy, and non-creative. The stuff one does in an intro to programming class is in fact, easy and non-creative. Programs that are 100 lines are like that. Programs that are 1 million lines are not at all like that. They do not get done by doing what you did in the 100 line program ten thousand times. Not at all.
@Michael Reynolds:
As an expert in Contact Centers and Digital Transformation I can tell you this: AI doesn’t need to be original. It just has to be mundane, boring, and follow corporate policy strictly and without deviation.
AI and business-driven processes have been running in the back office of most corporations, displacing tasks that humans needed to do in the past. And that will continue, because corporate profits. As long as they can keep finding customers who are willing to pay, they will continue squeezing cost out of the system. And people are a cost.
As we stand on the brink of AI displacing massive numbers of people in the workforce, we find that Americans do not AT ALL understand the change that will happen.
Sure, many former potential Democrats got on the Trump bandwagon because Trump offered an easy target for their troubles: Immigrants! (“They took er jerbs!!!”).
But we know that the cause of their plight wasn’t people coming over the border but companies realizing that funding anti-union efforts and replacing workers with cheaper labor (people willing to work for lower and lower hourly payment) drove what used to be the blue collar into a never-ending downward spiral.
And Trumpism is not a fix for that… it is an acceleration.
So, the REAL question is: What do we do with 100,000,000 or 150,000,000 people when their labor is no longer required?
We are SO far away from a solution for that … people are not even willing to ASK THE QUESTION out loud. And if they start asking, then: USA! USA! USA! BOOTSTRAPS! NO GUBERMENT!!! NO GUBERMENT!!!
I don’t think that the new autocratic America will have any tolerance for those who try to throw their shoes into the machinery of AI. First, because most Americans do not even know how to stage a protest, much less destroy processes that run through decentralized data centers globally.
In short: It’s fucked.
Conservative sites I read tend to be:
– Eager to raise birth rates so we have more population and enough people to fill our jobs.
– Outraged that immigrants are coming to increase our population and take our jobs.
And also:
– Opposed to any safety net to protect people without jobs.
– Enthusiastic about AI, which is coming to take our jobs.
Doesn’t seem entirely consistent.
The good news is AI isn’t really very bright or very creative. The bad news is most jobs don’t require being very bright or very creative. I’m thinking the kids who are skipping college and learning hands-on trades are right.
@Kylopod:
I asked it whether the following joke is funny:
Sign at the Vorlon Tourist Office: We Have Never Been Here.
It said it was funny, and went on to explain, for about a paragraph, why. I forget what it said, but it never once made the connection that 1) the Vorlons are known to destroy ships that try to approach their home world, and 2) their catchphrase is “I/We Have Always Been Here,” or a variation thereof.
Back from a busy travel schedule.
Like any tool, AI has benefits, limitations, and tradeoffs. I use it mainly as a research starting point or a cross-check. I’ve found it pretty useful for explaining technical protocols that I don’t fully understand and don’t have the time to read the full specification.
I’ve read pretty good reviews of Deep Research, which at $200/month is pricey. Matt Yglesias, for example, has written favorably about it.
So, useful, but with tradeoffs and the need to understand its limitations.
It’s important to remember that it’s still in its infancy. Where it is now is the worst it will be in terms of capabilities.
@Jay L Gischer: You wrote what I was going to write. Generating code isn’t the hard part of programming; it’s everything that comes after and before. Similarly, yes, LLMs can probably generate a decent undergrad paper, but so what? The point of the undergrad paper isn’t the paper. Nobody needs another interpretation of the Great Gatsby through the lens of third wave feminism, or whatever topic is chosen. The point of undergraduate papers is for the person to learn how to read, write, and think. LLMs can’t, by their nature, write a PhD dissertation, as that, by definition, requires originality, and that’s something LLMs can’t do.
And yes, AI boosters like to use that phrase: “today’s the worst it’s ever going to be, so you better get on board.” But the thing is, today also seems to be about as good as it’s going to be. It’s been two years since GPT-4 came out, and nothing has really gotten better. They’ve gotten better at giving demos, but they’re still unwilling to promise anything, right now, that would be useful. The things they claim it can do now, like explain its reasoning, it doesn’t actually do.
Sure, AI can transcribe your medical notes, but if it gets something wrong, they disclaim all liability. It can translate one language to another, but you still need to check out the results yourself. It can translate one programming language to another, but you still need two domain experts, one in each language, to validate things, and you need to plan for a robust test, too. Can it summarize research for you? Well, maybe, except when they’ve gone back and done reviews, the summaries often leave out important information. And on and on.
It’s like self-driving cars; sure, they’re almost there, but they almost all still require a human driver ready to take the wheel, and/or very favorable conditions. Meaning that they’re asking humans to do something we’re really bad at, which is pay attention to something we’re not engaged with, and they take away the opportunities for people to learn under good conditions, so they’re ready to deal with bad conditions.
I’m not saying all AI is bad; machine learning can be useful, but this fantasy that LLMs are going to take over white collar work is just that. However, it probably will do a bunch of damage before people accept that.
Speaking from experience with our students, the problem with AI trained on web-scraping is the old “garbage in, garbage out”.
As the interwebs is stuffed full of junk and conspiraloonery, AI is inclined to replicate it.
For instance, one proposed essay relied on AI-served “evidence” that “proved” the US was supplying oil to Germany in 1940.
Ignoring a tiny problem known as the Royal Navy.
The “evidence” traced back to multiple cites of a couple of books by nutcases.
One of whom was also obsessed with Errol Flynn being a Nazi secret agent, and the other with Wall Street funding the Bolsheviks.
Oh dear.
The point was, I KNEW it was nonsense.
But to someone who did not, it might look plausible, until it encountered an unsympathetic essay marker.
Also, I recall much amusement at subverting ChatGPT’s reluctance to create a story about the radiation-mutated werewolves of Chernobyl. lol.
Persistence and trickery wins out over clumsily programmed “ethical” parameters.
“Artificial” is true enough; “intelligence” is very arguable indeed.
OTOH, ChatGPT is good at sorting out VBA syntax, so long as you know what you want to achieve, and roughly how, and can give it some pseudo-code to chew on.
A useful servant; but perilous to trust.
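For illustration, the pattern described above looks something like this, rendered in Python rather than VBA and with a made-up spreadsheet task: hand the model pseudo-code, get back syntactically correct code that you still have to verify yourself.

```python
# Pseudo-code handed to the model:
#   for each row in the sheet:
#       if the status column says "overdue":
#           copy the row to the report
#
# A plausible shape of what comes back (openpyxl; the file path and
# column position are hypothetical):
from openpyxl import load_workbook

def collect_overdue(path: str, status_col: int = 3) -> list[tuple]:
    """Return data rows whose status column reads 'overdue'."""
    sheet = load_workbook(path).active
    report = []
    for row in sheet.iter_rows(min_row=2, values_only=True):  # skip header
        if str(row[status_col - 1]).strip().lower() == "overdue":
            report.append(row)
    return report
```

The syntax will be right; whether it does what you actually wanted is still on you.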
My experience with managing people who use AI is that if you don’t know what you’re doing, the best you get is Zeno’s Paradox, i.e. AI’s assistance will halve the distance between where you are and where you should be without ever getting to the end.
And I’m not an expert on automation, robotics, or AI. That said, it seems to me that the physical automation which replaced humans on a mass scale took a long time to develop and emerged from an actual focus on how human bodies work. I had to read Cybernetics in college and I understood maybe 25% of it (at best), but my take is that there was a genuine interest in feedback mechanisms in humans and how machines might replicate them. The people building out ChatGPT don’t seem to have any interest in how humans learn. LLMs do not shed light on the acquisition of language, and the grand problems of consciousness are not being answered.
With AI, it’s like Jurassic Park, and with the same crapshoot business logic, e.g. entrust your entire IT infrastructure to the one guy you shafted in a business dispute.
It’s gonna be really funny when the rich assholes put us all out of work and then there’s no one to pay for the shit they want to sell us.
Even gross ass Henry Ford knew that he had to pay his workers or they couldn’t buy his cars. We live in the dumbest timeline.
@Beth:
Can we not make guillotines?
@Beth:
Of course he was also a Nazi sympathizer.
@JKB: I’m glad you’ve found a new source to attribute without citation. Not a particularly new idea, I heard a version of it from one of our truck drivers when we were automating our order checking, sales, and billing back in 1970 mumble, but it shows you’re trying to grow. And if it’s at least “new to you,” that’s still progress.
@Daryl:
Even Nazi sympathizers can be right once in a while.
In fact, the fascists often merrily co-opted fairly sensible concepts from all parts of the political spectrum, being the ideological magpies they were.
A lot of “fascist” policies were in fact common currency across the political spectrum in the 1890-1940 period.
With the rather amusing exception of the real “old-school” reactionary Catholic monarchists.
@Michael Reynolds:
Reminds me of 19th/early 20th century British politics:
The Whig/Liberal and some Conservative aristocracy were smart enough to concede the obviously well-founded demands of the middle and working classes, to avoid social war.
But periodically the idiot diehard Tories refused to see the point.
Particularly in the 1900-14 period, when the UK came very close to political crack-up.
@Jay L Gischer: Poe’s law strikes again. Forgive me for creating the impression that I was making a serious comment.
@JohnSF:
But they’re still Nazi sympathizers.
@Liberal Capitalist:
Yeah. This point. The problem will show up as the people who do “hard stuff” retire and there aren’t people to replace them, because there are 70% fewer entry-level people to promote through the system; AI did their jobs “well enough.”
Maybe it’s time to start using ChatGPT to write historical MPREG (male pregnancy) and Omegaverse (don’t look it up) stories and publishing them to Tumblr and the like, where they will be scraped up and used as training fodder.
There are a few distinct bodies of work that use more em-dashes than average — internet comments, erotic fanfiction, and AI output — so I think we’re getting close.
@Daryl:
Before the Nazis were seen for what they really were, there were quite a few Nazi sympathizers.
Same applies to the Leninist/Bolshevik/Soviet/Communists.
In both cases, what should have been perceived in advance was not.
In both cases, most previous sympathizers distanced themselves, with varying degrees of shame and/or denial.
It has to be recognised how much Nazi ideology derived from, but turned up to 11, the prior commonplaces of late 19th/early 20th century European/American popular thought re “survival of the fittest,” “struggle for existence,” racial/ethnic hierarchies, nationalism, and both admiration of and distaste for the social changes of 19th century “western” social/economic systems.
Same goes, in different ways, for some variants of both socialism and even liberalism in that period.
One might hope that we’d learnt the lessons of painful experience.
Turns out, not so much.
@Modulo Myself:
First day of grad school, my professor offered the following thought: “We’ve studied language acquisition for about 50 or 60 years (as of 1992); in that time, we’ve learned something: we don’t really know how language acquisition happens.” I’ve offered that quip to many teachers and received many objections from PhD linguistics candidates. Even so, we’re still largely at the point where most everything works sometimes, but not in all settings, with all cohorts of students, or with all teachers producing equal success. And AI’s not likely to be able to synthesize what we can’t explain.
@just nutha:
We need to build a computer the size of a planet and get it to work on the answer.
Or we can just accept it’s “42” and we don’t know the question.
@Kevin:
Some things have gotten a lot better, notably image generation.
But I think AI will be most useful not as a catch-all like the free public ChatGPT/Copilot/Google, etc., but for specialized uses. And some of that is already happening.
Weather modeling is one example – even the prototype AI models are doing much better than the standard models used today.
In my previous career of intelligence analysis, AI is going to be (if it isn’t already) used for many tasks. I’ll give you two examples:
First is imagery analysis. There is (and has been since at least the early 90s) so much imagery from satellites, aircraft, etc., without enough trained eyeballs to look at all of it. And when something does get looked at, it’s not a comprehensive look, but a look for something narrow and specific. One unclassified example I can share is basic order-of-battle information. For example, counting ships in ports, aircraft at airfields, tanks in depots, etc., to determine a baseline activity level. That is something AI can (or will, if it hasn’t already) largely take over, allowing the trained analyst to focus on more value-added analysis. AI can also look over a set of images from the same location over time to track changes, which again, can cue a human analyst to take a deeper look.
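To give a flavor of the counting task, here is a toy sketch with an off-the-shelf open-source detector. A real system would use a model fine-tuned on overhead imagery; the COCO-pretrained weights, the “boat” class, and the image filename here are purely illustrative.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # small, general-purpose pretrained model
results = model("harbor_scene.jpg")  # hypothetical image file

# Count detections belonging to the "boat" class.
boat_class = next(k for k, v in model.names.items() if v == "boat")
count = sum(1 for c in results[0].boxes.cls if int(c) == boat_class)
print(f"detected {count} boat-like objects")

# A human analyst then verifies the count and focuses on what changed
# since the last pass over the same location.
```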
The second is network analysis. One of the intelligence challenges during the GWOT was attempting to track the massive amount of info collected on terrorist networks. Palantir developed a link analysis tool that was crucial for managing all the data but required a lot of manual tweaking and management. That is something AI can do even better and especially faster.
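A toy version of that kind of link analysis can be sketched with the open-source networkx library (emphatically not Palantir’s tooling; the contact records below are invented):

```python
import networkx as nx

# Build a contact graph from intercept-style records, then rank nodes
# by betweenness centrality to surface the brokers between clusters.
contacts = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]
G = nx.Graph()
G.add_edges_from(contacts)

for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(node, round(score, 3))

# "C" and "D" score highest here: they bridge the two clusters,
# which is exactly the kind of node an analyst wants flagged.
```

The manual tweaking and management is what modern models promise to speed up; the graph math itself has been routine for years.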
So, I think many of the best use cases for AI will be niche—using specialized models trained on specific tasks using specific datasets. The currently free tools that hoover up everything on the internet, from 4Chan to the NYT, are going to be extremely limited because of the unconstrained data set and the need to tweak models for gaming and political correctness.
One thing I have on my task list is to get an AI to crawl OTB and hoover up all of my comments (there are already tools to do this). I’ve been commenting here for two decades and probably have thousands of comments. Just on a personal level, I want to be able to see how my positions and arguments have evolved over time. I also want to easily find previous comments I’ve made. I know from my training that humans tend to alter remembered events to conform to current reality, and I’ve caught myself believing I had a consistent position, but when I go back and look at what I actually wrote, it was different. There are other uses as well.
The unfortunate reality (and this isn’t a knock on OTB) is that the website search function isn’t very good for trying to find historic comments I know I’ve made, and Google and other engines are usually even worse. So this is a niche thing that I plan to use AI for someday (when I get time – who knows when that will be).
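The scraping half of that task is mundane enough that a sketch fits in a dozen lines. The CSS selectors below are hypothetical and would need adjusting to the site’s actual markup:

```python
import requests
from bs4 import BeautifulSoup

def comments_by(author: str, post_url: str) -> list[str]:
    """Collect one commenter's comments from a single post page."""
    html = requests.get(post_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    found = []
    for c in soup.select("li.comment"):              # hypothetical selector
        name = c.select_one(".comment-author")       # hypothetical selector
        body = c.select_one(".comment-content")      # hypothetical selector
        if name and body and name.get_text(strip=True) == author:
            found.append(body.get_text(" ", strip=True))
    return found
```

Feeding the resulting archive to an LLM for the “how did my positions evolve” question then becomes a summarization task over one’s own text.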
Another example is pictures. I have an archive of personal pictures going back to when I was a kid, plus thousands of pictures from my parents and grandparents. I’ve slowly been going through them, cataloging the important ones with image management and tagging software. I stopped doing that slog a year ago because it became clear that AI could do most of the grunt work for me. That’s also on my task list, but it’s a low priority.
Similarly, the main company I contract with is looking at using AI to wrangle and organize 20+ years of corporate assets and documents that are currently siloed and essentially unsearchable. Need to find the technical paper that was the basis of a long-obsolete product? Good luck with most any search tool.
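A minimal sketch of how that kind of search might work, using open-source sentence embeddings rather than keywords. The model name is a common public one; the documents and query are invented:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Whitepaper: signal processing pipeline for the Mk-1 sensor (1998)",
    "Q3 marketing plan for the new product line",
    "Design review notes, obsolete Mk-1 firmware",
]
doc_emb = model.encode(docs, convert_to_tensor=True)
query_emb = model.encode("original technical basis of the Mk-1 product",
                         convert_to_tensor=True)

# Rank documents by semantic similarity rather than keyword overlap.
for hit in util.semantic_search(query_emb, doc_emb, top_k=2)[0]:
    print(round(hit["score"], 3), docs[hit["corpus_id"]])
```

The appeal is that a query phrased in today’s vocabulary can still surface a document written in 1998’s.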
@Beth:
While it is possible, people have been predicting technology will put everyone out of work for well over a century, and we keep inventing new jobs and things for people to do, and we are getting wealthier in the process. We actually need to import people to meet the demand for work in various areas. Maybe this time will be different, but I doubt it.
@Sleeping Dog:
There are Silicon Valley futurists who say the answer is guaranteed income, socializing the wealth created and allowing us to pursue our interests.
It seems to me that AI is a quantum leap in the “deskilling” process that has characterised countless tasks since the beginning of the Industrial Revolution. Harry Braverman’s thesis in Labor and Monopoly Capital is not as well-known today as it was 40 years ago, but it remains an important insight into the way a lot of technological change removes the need for judgement and knowledge on the part of employees.
To take but one example, retail assistants once upon a time had to be familiar with all the stock in a store so they could accurately ring up the prices on a cash register. Today they don’t have to know what anything is; it’s all bar-coded. Their task is the purely manual one of passing the bar code across a scanner. Nor do they need any maths skills today; whereas once upon a time they had to know how to calculate change for a customer, that is all done for them today.
Similar examples could be drawn from all sorts of occupations. Mechanics work on cars because a computer has told them a part needs replacement. Doctors prescribe medication because an app spits out recommendations based on a blood test. Self-driving software takes most of the human judgement out of operating a vehicle. It looks to me as if AI will accelerate this tendency. Whether or not it causes the massive job losses many predict, it’s likely that most of the workers who remain employed will be little more than mindless bodies carrying out tasks as ordered by their AI masters, with zero need for judgement or creativity.
Just had a big session on AI at my firm retreat, partly by our IT head and partly by the law society rep dealing with it. The short form is: it’s here, it’s getting bigger, and we need to understand and use it correctly.
My Pillow shenanigans aside, my guess is that right now the research from AI equals that of an articling student or first-year associate. So those kids will have to develop their skills at double-checking.