About Steven L. Taylor
Steven L. Taylor is a retired Professor of Political Science and former College of Arts and Sciences Dean. His main areas of expertise include parties, elections, and the institutional design of democracies. His most recent book is the co-authored A Different Democracy: American Government in a 31-Country Perspective. He earned his Ph.D. from the University of Texas and his B.A. from the University of California, Irvine. He has been blogging since 2003 (originally at the now-defunct Poliblog).
Re “Dear Sydney”, I’ve watched some of the Olympics, but as always, I tend to tune out during commercials. I found myself after one of those ads going, “Huh, that sounded like an ad for having AI write your fan letter. Nah, couldn’t be, wonder what it was really about.” Disappointing to find it really was. But, I can understand Google suggesting using AI to write fan letters. They’d get in trouble if they were honest and said “term papers”.
Shelly Palmer nailed the issue and its problems. And if Google AI can write students’ term papers, the assignment was too perfunctory to begin with.
I’ll side with Skinner (???)* on this one: “any teacher who can be replaced by a machine ought to be.” [emphasis added]
*I’m confident that Skinner didn’t originate the idea or the quote, but I don’t remember who did and B.F. gets credit for it frequently.
@gVOR10: considering that term papers are ALSO meant to show one knows how to do research, etc… I suspect that using AI for them will be about as successful as the attorney who tried to outsource his legal argument to the same and ended up with an extremely pissed-off judge, a trashed law firm, a dead case, and a $5k fine from the state bar association. Not to mention considerable liability for what is to all intents and purposes an open-and-shut malpractice case from his client.
@Grumpy realist: I keep on hearing that these new generative AI tools are “impressive” but I have yet to see it. Two days ago the sunroof of my car got stuck open with a storm approaching, and I googled for that problem on my specific make and model. I got a hit back and it seemed like the real deal at first, although it had a headline that read something like “How to fix a stuck sunroof on 2007-2016 Mini Coopers,” which struck me as odd because there was a big model change in the midst of that. But maybe the sunroof mechanism was retained? Short answer: upon reading and re-reading, it turned out to be generative AI crap seemingly agglomerated from a half dozen different sources, each talking about different models and even different problems. It was written in a way that made it seem very authoritative, but in my mind that makes it worse, not better.
In that TPM piece, Josh Marshall says that the Harris campaign is reaching well beyond the normal limits of a campaign.
I recall that in the aftermath of the last midterms, there was some blowback about the “Defund the Police” slogan, to which AOC replied by saying that her caucus-mates needed to learn how to do politics in the internet age, and use social media well. Which she does. And apparently, so does the Harris campaign.
I don’t agree with all of AOC’s policy stances, but she knows how to campaign, and how to politic.
We saw Trump and the Brexiteers be very effective at using the darker side of the internet and Facebook to advantage. Now, it seems that the Democrats are catching up.
@MarkedMan:
Today’s AI *IS* very impressive but it is really important to remember what it is designed to do – and what it isn’t.
We do not have Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI) today and won’t for a very long time (if we ever do).
What we have are tools that are very good. They are considered “narrow” AI and are task-oriented. Some of these tools we’ve been using for years, mostly in the areas of machine learning (ML) and natural language processing (NLP), and they are exceptional at identifying patterns and outliers (ML) or standardizing/translating human speech (NLP). I’ve used ML for years to look for possible errors and potential fraud in healthcare billing or data access, and we’ve all used NLP in talking to Siri or Alexa.
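To make that concrete, here is a toy sketch of the kind of outlier detection I mean. The data and numbers are invented for illustration, not from any real billing system, but the shape of the task is right: feed it examples, let it flag the ones that don’t fit the pattern, and have a human review them.

```python
# Toy outlier detection on made-up billing data (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Routine claim amounts, plus a few implausible ones slipped in.
normal = rng.normal(loc=120.0, scale=30.0, size=(500, 1))
suspicious = np.array([[2500.0], [3960.0], [7800.0]])
claims = np.vstack([normal, suspicious])

# The forest isolates points that don't match the overall pattern.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(claims)  # -1 marks likely outliers

flagged = claims[labels == -1].ravel()
print(f"flagged {flagged.size} claims for human review: {np.round(flagged, 2)}")
```

The model doesn’t know what fraud is; it only knows what doesn’t look like everything else. That is exactly the narrow, task-oriented character I mean.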
What is newer is Generative AI like ChatGPT or CoPilot or Gemini. These are tools designed to create content that looks like content a human would have made. Basically, they are designed to pass the Turing Test. What they can’t do is “know” any facts – just patterns. So they can’t tell, and don’t care, whether what they said is accurate, only whether it matches the patterns of the material they have consumed before.
The funniest description I’ve heard is that they are always-available drunk interns who sometimes lie. Which is probably too anthropomorphic, but it conveys how they can and should be approached. You use them to create drafts of something you actually already know – not to create something to use in an area you don’t know. And, if the facts themselves matter (like citing a specific part of a case for a specific point), you have to validate and check each factual assertion because, again, it can’t know truth or falsity, just whether what it created looks plausible.
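If you want to see the “patterns, not facts” point in miniature, a toy next-word generator makes it obvious. A real LLM is incomparably more sophisticated, but the limitation is the same in kind: it emits whatever plausibly follows, with no notion of whether the result is true. The corpus below is invented for illustration.

```python
# A toy bigram generator: it learns which word tends to follow which,
# and nothing else -- it has no concept of truth, only of pattern.
import random
from collections import defaultdict

corpus = ("the patient is due for a checkup . the patient is doing well . "
          "the doctor is due for a vacation .").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])  # any word that ever followed this one
    out.append(word)

# Fluent-looking output that may assert things the corpus never said,
# e.g. "the doctor is doing well" -- plausible pattern, unverified fact.
print(" ".join(out))
```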
I don’t disagree at all with Shelly Palmer’s statement. I have something to add: learning to express one’s feelings in words can be difficult. It takes work. It is something people should cultivate in themselves, and something parents should encourage in their children.
When one is successful at parsing one’s feelings and verbalizing them, and making a connection with another human being, that is a very powerful and satisfying thing. But it does take work. Just as an athlete might lift weights to the point of “burn” and suffering for their advancement, this is a kind of suffering that leads to development, growth, and success in life.
Asking an AI to say what your feelings are will lead one in the opposite direction.
@SKI:
Sounds like a great use. ML has been around for, what, 30 years, and has been used to develop amazing things. Another example: detecting cardiac arrhythmias by machine learning. Once it is better than the human-generated algorithm, you can lock it down and use it forever. One key is testability: you can test it against the existing algorithms and demonstrate improvements. The other key is turning off the “learning” part once you’ve got something good enough. (You can continue the process in the lab, but the cardiac monitors you put in the field better not be changing!)
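Here is a sketch of that “validate, then lock it down” workflow. Everything is synthetic – made-up features standing in for real waveform data, a trivial stand-in for the legacy algorithm – but the sequence is the point: benchmark against the existing rule on held-out data, then freeze the model before deployment.

```python
# Train, benchmark against the legacy algorithm, then freeze (synthetic data).
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))                 # stand-in for waveform features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in for arrhythmia labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def legacy_rule(features):
    # Stand-in for the existing hand-built detection algorithm.
    return (features[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X_tr, y_tr)
print("legacy accuracy:", (legacy_rule(X_te) == y_te).mean())
print("model accuracy: ", model.score(X_te, y_te))

# Once it beats the legacy rule on held-out data, lock it down: serialize
# the fitted model and never call fit() on the deployed copy.
with open("monitor_model.pkl", "wb") as f:
    pickle.dump(model, f)
```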
And we have made great strides in natural language processing, but the stakes are different. You mention Alexa. You ask it to do something, and if it gets it right, great, and if it gets it wrong you know right away. When we use it for real-time translation, though, the stakes can ramp up pretty quickly. I suspect that the reason it seems to be having success is that we are still treating it as a novelty, or using it for simple tasks like asking the cost of a hotel room. But if people try to use it in a serious situation, one where you don’t know you’ve gotten bad information until it is too late and the consequences are no joke, I’d be willing to bet it would be a disaster.
So I know (basically) what generative AI is, and how it is similar to and different from ML, and I still say I haven’t seen it be more useful than it is potentially harmful. Whenever I know enough about a subject to judge its output, it is uniformly crappy.
We may get there someday on this pathway, but I also wouldn’t be surprised if it turned out to be a dead end.
@MarkedMan:
It is really good at creating a first draft of public-facing communications. An example: “draft an email to a patient letting them know that they are due for service X”. You can/should/must review/edit it from there, but it is much, much quicker for a doctor to do that than to start from scratch.
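For what it’s worth, the plumbing for that is about this simple. The sketch below assumes OpenAI’s Python SDK with an API key in the environment; the model name is just a placeholder, and the service in the prompt is invented.

```python
# Draft only: a clinician still reviews and edits before anything is sent.
# Assumes OpenAI's Python SDK (pip install openai) and OPENAI_API_KEY set;
# the model name is an example, not a recommendation.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You draft short, plain-language patient emails. "
                    "Do not invent dates, dosages, or medical advice."},
        {"role": "user",
         "content": "Draft an email to a patient letting them know they are "
                    "due for an annual flu shot and should call to schedule."},
    ],
)
print(response.choices[0].message.content)  # a starting point, not final copy
```

The review/edit step is not optional; the draft is the time-saver, not the answer.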
@SKI: I was gonna say something much like this. There’s a lot of written communication in our world that is what is sometimes called “boilerplate”. It isn’t meant to be personal, or expressive. It’s meant to cover a set of important or valuable facts in a cogent and organized way. This is a good field for LLMs.
Consider the real estate appraisal business (I’m talking commercial property). This is all based on reports where the numbers are supplied by the appraiser, but it all has to go in a package that is written out. An LLM could be very helpful here, too, as long as it understands it doesn’t get to make up its own numbers.
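One obvious way to enforce the “no making up numbers” rule: inject the figures into the text from your own records, and let the model touch only the connective prose around them. A trivial, invented illustration:

```python
# The numbers come from the appraiser's data, never from the model.
# All field names and figures here are invented for illustration.
appraisal = {"address": "123 Main St", "noi": 412_000, "cap_rate": 6.25}

summary = (
    "The subject property at {address} generated a net operating income of "
    "${noi:,} over the trailing twelve months, implying a capitalization "
    "rate of {cap_rate:.2f}%."
).format(**appraisal)

print(summary)  # an LLM can polish the prose, but the figures are fixed
```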
Also, checking your grammar along with your spelling is a pretty good use as well.
@Grumpy realist: I keep hoping that you are right about this, but the number of teachers I’ve read running around with their hair on fire over how ChatGPT is going to end academic writing as an evaluation tool makes me wonder.
I left produce warehousing just as the industry was closing up the warehouses where I lived, and now I retired before AI “destroyed” my chosen second field. Two for two on strategic retirements. Pretty good!
Well, those of us who have “talked” to Siri or Alexa have. Even Luddite is ahead of me on the curve in that he uses voice recognition to do stuff on his phone and in his car whereas I just sold my car and don’t care to train my phone to recognize my voice and speech tempo.
I do occasionally TRY to answer the robo phone tree voice when I’m on the phone with Amtrak or Kaiser, but I usually end up hanging up and going online to do the task. I guess I have SOME experience, just not as happy as yours.
@SKI: Okay, so that sounds like it is a little useful. Big jump to “revolutionary” though. (I know you’re not claiming that but the media and Wall Street are nearing cold fusion levels of hype on this.)
@MarkedMan:
It’s really good at writing mission statements.
But that might be more a reflection of the nature of mission statements than of the AI. Businesses, non-profits, artists, and sometimes even doctors have to write something that explains who they are without limiting their potential audience. There’s a whole lot of pointless text that doesn’t ultimately mean anything or matter, so long as it is there filling up space.
Anything that contains the phrase “empowering communities” or “empowering the ______ community” is ripe for generating without the slightest concern as to accuracy.
Who is to say that you aren’t “empowering communities of color through novel ferret leasing opportunities, disrupting and monetizing pet ownership models?” Your eyes had glazed over long before the ferrets were mentioned.
@Gustopher:
Oh shirt & sweet Jebus! Were I still in sales I would so love to have this in my product catalog. I could sell this item by the container load.
Congrats on winning the interwebs. Here, have a cookie.
@SKI: but this is what plain boilerplate is for. There’s no need for the pseudo-personalization of something like ChatGPT.
The advantage of boilerplate is that it has already been checked and debugged, whereas ChatGPT output can be infested with all sorts of hallucinations, which then require another round of checking and correcting. I can’t even trust that the damn thing is quoting authorities correctly!
We’re starting to try to come up with some sort of policy regarding LLM-generated material, but for the most part we’re already dealing with similar material from all the machine-translated stuff. Hence the plethora of 112(b) rejections such material inevitably generates….
(I have to admit I’m extremely cranky about all of this because “AI” seems to have been the latest buzzword that everyone is now adding to patent applications in my area. Very often on the underpants gnomes level of understanding.)