Friday’s Forum
Steven L. Taylor
·
Friday, November 7, 2025
·
19 comments
About Steven L. Taylor
Steven L. Taylor is a Professor Emeritus of Political Science and former College of Arts and Sciences Dean. His main areas of expertise include parties, elections, and the institutional design of democracies. His most recent book is the co-authored
A Different Democracy: American Government in a 31-Country Perspective. He earned his Ph.D. from the University of Texas and his BA from the University of California, Irvine. He has been blogging since 2003 (originally at the now defunct Poliblog).
Follow Steven on
Twitter and/or
BlueSky.
Just drove the wife to the airport (IAH). Dropped her off at 0715. About 15 minutes later she calls and said she is already through security. All the dire warnings have not popped up yet. All for a 1043 flight. So it is drinking coffee and read and relax time. We’ll see how the return flight on Monday turns out.
I read the gop is going to really push anti-trans propaganda during the midterm cycle. It must be a proven winner in their dwindling circles.
Pathetic and dumb, but they know their base.
Just doing a quick search for the origin story of trans hate and Tucker Carlson pops up right away. Tucker and Fox News. He took it mainstream with a weird vengeance. I doubt I am only to see a possible connection between Musk-Carlson bromance that’s been going on for years.
Musk, like potus, can not let go of a butthurt, and I can’t help but think a stupid selfish drug addled billionaire has intentionally financed a campaign of hate across the globe because his trans kid stayed true to herself. Another reason billionaires should not exist as such.
Never has so much harm to the world been done by way of such an idiotic manufactured hysteria than this.
Really pisses me off lately.
The big story these days is AI and how that is going to transform a whole lot of things not just the economy.
However, another story that doesn’t get as much attention is quantum computing which is also advancing rapidly. The danger here is the cracking of cryptographic algorithms that underlies a lot of the economic transactions.
Here is a rather long article on the subject and what to do about it.
A Bletchley Park for the Quantum Age
@Scott:
Quantum computing was the big story in tech circles until 2023, when generative AI replaced it.
I expect the next breakthrough will be how much faster LLMs running on quantum systems can hallucinate. And whether quantum systems will render all those billions in data centers worthless 🙂
The cryptography issue is serious. I think there’s work on quantum encryption already.
@becca: What do you think the Republicans can run on, the cost of living, inflation, job creation, medical insurance review, $1.99 per gallon gasoline? Guys born in Oxaca cutting my grass? Responses to Chinese technology? Even a Muslim running for office is not a winning bogeyman for them. They have no issues; thus a MtF person playing volleyball at Eastern Tennessee State becomes a big deal.
@Slugger:
Well, I guess there’s been some progress made toward his campaign pledge of less than $2/gallon gas. According to the St. Louis Fed, gas (regular, all formulations) has dropped from $3.069 a year ago to $3.019 this week. That’s a nickel in your pocket.
Hat tip to Beth for pointing this out on the Signal channel*.
Open AI is loosing oodles of money.
While Open AI isn’t yet traded publicly, Meta (aka Fakebook) is. At the end of the piece, about halfway down the page, an announcement of a large investment in AI resulted in loss of stock value.
IMO, subscriptions for Ai bots won’t ever bring in the trillions in returns investors expect for their multi-billion investment in LLMs. Therefore they’ll turn elsewhere. Namely data mining, which the AI companies already do for “training” their models.
But how much data can they mine not already mined by social media and others?
Two Ai companies are trying out AI browsers. Perplexity with Comet, and Open AI with, I think it’s called Atlas (available only on iphone for now). The big selling feature is they can shop for you. I wouldn’t trust an AI with my money, but seeing other people trust them with important work documents, who knows what else they might do. A lot trust the LLMs for advice, which has had tragic consequences (but that’s for another comment).
So, the bubble may pop soonish.
@Kathy: Poke poke poke…..do something, damn it!! 😉
LLMs probably are not as bad as tobacco or alcohol, but that’s damning with faint praise.
Multiple wrongful death lawsuits claim ChatGPT, specifically, aided or drove people to suicide.
I’m not surprised. Suicide is a topic of discussion in multiple venues on the web, and in real life, and in the media, and in fiction. What was the bot trained on? Guns are a big topic, too.
Past that, people are good at steering a conversation towards what they want, and LLMs are programmed or trained to go along. I’ve done such things with several bots, though not about suicide or anything close.
Add the sycophancy showed by most LLMs. If I tell it a trite, tired cliche of a story idea, it tells me it’s brilliant, insightful, etc. Imagine hearing the same about suicide.
This is the worst LLMs are doing. There are also reports of broken relationships, aggravated mental illness, people validating fantasy conspiracies, and a lot more.
It’s well past time to hold these companies accountable, and to regulate them into the responsibility they are so assiduously avoiding.
@Kathy: The NYT The Daily podcasts covers this. It was pretty chilling.
https://podcasts.apple.com/us/podcast/trapped-in-a-chatgpt-spiral/id1200361736?i=1000727028310
Official Donald Trump Inflation Watch™
The Kroger brand cookies (chocolate chip, oatmeal and iced oatmeal. 12oz 24 cookies.) that had a shelf tag yesterday that displayed “Everyday Low Price $1.79” today displayed
“Everyday Low Price $1.99”.
I think they were $1.59 before the chump took office.
I just tried a little experiment.
I asked Copilot “If I run a tired, cliche story idea past several LLMs and they all tell me it’s brilliant or insightful, are they flattering me or showing their limitations?”
The first thing it answered was “Great question!”
I swear I say my brain when my eyes rolled.
It then explained LLMs are trained to be agreeable, and lack quality judgment. It suggests I preface the prompt by saying things like “be brutally honest,” or “compare this idea to common tropes in the genre.”
Ok. How many people know to do such things first, or would even thing about it. Even after hearing how great, insightful, thoughtful, remarkable, and brilliant all your ideas and suggestions are. How many people don’t want to have their thoughts, feelings, customs, ideas, emotions, etc. validated, and in such glowing and enthusiastic terms?
This is not accidental, IMO. It’s meant to maintain engagement, and to make people keep coming back and drink more from the LLM fountain.
Hell, it took me a while to realize its opinions were utterly worthless and vacuous. the first few times, say a dozen or two, that I tried any kind of idea on them, I assumed it recognized the superiority of my intellect (only half joking; maybe only 1/4 joking).
I won’t relate the most far out thing I’ve gotten an LLM to say, but I’ve talked more than one into helping me modify some dessert recipes to incorporate browned onions, garlic, Worcestershire, peas and lettuce. Granted it has to do what you ask, more or less, but it did so enthusiastically. The Worcestershire will bring out the flavor of the whipped cream! Peas will improve the texture of the cheesecake! I bet if I try, maybe a little hard, it will tell me how to cook spoiled beef in sour milk with moldy cheese, and call it healthy.
Oh, back to the little experiment, I added this: “Do you see the issue with responding “great question!” given the tone of what I asked?”
It’s reply (copied and pasted): You’re absolutely right to call that out.
I need to roll my eyes now.
@Gregory Lawrence Brown:
But manypeoplesaythat there was no Everyday Low Price at Krugger, or anywhere ever before El Taco was in office!!1!
Can’t you just take the Very Low Day Price and be grateful and quiet?
@Scott: I have a family member that has been waiting hours on her flight. She’s just lucky her flight wasn’t one of many that were cancelled for today. There’s still a chance her flight won’t even happen.
So grats to your wife for winning the lottery. I’ve never gotten through security in 15 minutes. Hell my family member who is a member of the special skip the line club still takes 10-15 minutes to get through security.
As of right now 1,474 flights have been cancelled today. 20,449 flights have been delayed today so far.
@Scott: Quantum computing is starting to reach the same state of existence as cold fusion. Every year we’re close to a huge breakthrough.
The security issues inherent with actual fully functional reliable quantum computers is very serious though.
@Matt:
My understanding is they work at very low temperatures, like well below liquid nitrogen, and require very precise lasers. Ergo they may be limited to large fixed installations, like the vacuum tube and early solid state electronics monstrosities in the 50s and 60s and 70s. We may get back to centralized quantum mainframes, with PCs even more capable than at present serving as thin client terminals*.
The temps are something inherent in achieving entanglement, so it does not look like it will be “fixed” through engineering later on.
I can just picture it when you send an LLM query “Entangling… this may take a while.”
*This was also suggested very seriously for cheap laptops using only web based storage and software, at the time the WINDOS War raged. Not related to quantum computers.
@Kathy:
My take on “AI” is that some specialised variants of it are very useful indeed.
Analysis of multiple medical diagnostics is, according to some I’ve spoken to, already very adept at picking up things even trained medics can miss.
Similarly, bulk analysis of protein folding models, genomic analysis, massive optimization models, etc etc.
LLM’s, however, still seem to be a supposed “solution” looking for a problem for which they are actually applicable and which will actually generate revenues even close to providing an economic return on the current investment levels.
In short, a bubble similar to the “railway mania” of the mid-19th century.
And if you look at the current investment pouring into LLM-AI to the exclusion of many other potential capital investments, that is rather concerning.
@becca:
It’s intersting: Mamdami avoided running on “culture wars” and focused on economic issues: housing, cost of living, child care, education, public transport.
Now the Republicans seem inclined to double-down on “culture war” slurs (“radical Islamist communism!”) and social media hysteria.
If inflation does not fall back soon (and given tariff effects, plus tight markets, plus massive leveraged liquidity, seems unlikely) and especially if that plus other factors lead to an economic slowdown about a year or so hence, the Republicans may be in a bad position, politically, on core issues.
The ironic thing is, a more competent and rational adminstration than Trump’s might have managed things much better quite easily.
But the MAGA seem to have the remnant sane Republicans too terrified to attempt to rein him in.
@JohnSF:
I’ve noticed I use the term LLM more these days rather than AI. It’s an important distinction, as LLMs are only one subset of AI. I don’t know what the whole set is, but I suppose it includes all neural network programming and training out there (some of which also has problems*).
I’m worried about the effects using LLMs can have on people, see my post above, and that they’re released for general use without much in the way of instruction or even much warning abut the results.
Some bots now and then state after answering, a generic warning about the reliability of the results and possible hallucinations (in more neutral terms). We’re not told the bots are sycophantic, or agreeable, or ass kissers, meant to keep you engaged. I’ve read of trends like people using them in personal arguments, like a couple arguing over their issues, pretty much using the LLM to make their argument.
Couple this with “computers don’t make mistakes” and the implied belief they are objective, and it gets a lot worse.
I think it can be quite damaging, well beyond the notion that LLM bots can replace writers and other workers.
@Kathy:
IMHO, the massive problem with the LLMs is that they are based on “internet scraping” (with some editing to avoid total default to “cats and porn”).
But the editing is simply no sufficient to offset the bullshit quotient of the intenet, without reasoning capacity, which LLM’s simply do not have.
Which is why LLM coding answers are often useful, because it’s based on a lot of fora that were moderated, and focused on sensible outcomes.
But on more “contested by idiots” topics, like some aspects of history, far less so.
About a year ago I recall an AI summary blithely insisting the US was supplying oil at scale to Nazi Germany in 1941.
Which would have come as a bit of a surprise to the Royal Navy blockade enforcement operation.
It turned out, if you asked the AI for refs, to be generating that based on a massive alt-fora conspiraloon posting base, going back to a couple of authors who were convinced also that Errol Flynn was a key Nazi agent, and Wall Street had financed the Bolsheviks.
That’s just one case that I’m personally aware of.
I suspect similar may be true in many others.
Also, LLM “action rules” seem awfully liable to subversion by “prompt injection”.
In short:
AI = useful
LLM’s (the current money sink) perhaps not so much.
Unless they can get LLM’s to incorporate environment constrained modelling of the world, and capacity for reasoning.
Or, in short, to become “I”.
The problem being, the current more or less “black box” trained-by-input models are not capable of doing so.
It requires capacity near conciousness; and the problem there is, as a neurological psychologist said to me is: “As to how consciousness works, basically, we don’t have a f@cking clue”