More on Watson

Alva Noë writing at NPR:

The Watson System no more understands what’s going on around it, or what it is itself doing, than the ant understands the public health risks of decomposition. It may be a useful tool for us to deploy (for winning games on Jeopardy, or diagnosing illnesses, or whatever — mazal tov!), but it isn’t smart.

Which is a better way of saying what I meant yesterday in the comments section of James Joyner’s post on Watson, when I wrote: “I think a lot of this conversation conflates ‘knowledge’ with ‘intelligence,’ and I would argue that the two are not the same.”
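
To make that distinction concrete, here is a minimal sketch, purely illustrative and in no way a description of how IBM’s actual system works, of a program that gets quiz answers right through nothing more than keyword lookup over a hand-written fact table (the facts and keywords below are invented for the example):

# A toy illustration only, not how Watson works: a "contestant" that
# answers clues purely by keyword lookup against a small, hand-written
# fact table. It can produce correct answers with zero understanding.

FACTS = {
    # (made-up keywords) -> canned response
    ("novel", "whale", "captain"): "What is Moby-Dick?",
    ("element", "symbol", "gold"): "What is Au?",
}

def answer(clue: str) -> str:
    """Return the canned response whose keywords best match the clue."""
    text = clue.lower()
    best_response, best_hits = "No idea.", 0
    for keywords, response in FACTS.items():
        hits = sum(1 for k in keywords if k in text)
        if hits > best_hits:
            best_response, best_hits = response, hits
    return best_response

print(answer("This 19th-century novel follows a captain hunting a white whale."))
# Prints "What is Moby-Dick?" A right answer, and no comprehension at all.

A program like this “knows” the answer in the sense of having it stored and retrievable, but nothing in it understands the question, which is the distinction I was trying to draw.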

FILED UNDER: Science & Technology

Comments

  1. Matt B says:

    This is very much John Searle’s “Chinese Room” argument, which is really helpful from an objective/analytic viewpoint. It does, though, leave out social notions of “intelligence”: the power of being able to “speak” (the Eliza Effect).

    This returns us to a week ago and revolution v. coup. Like it or not, a lot of people do see “Watson” as smart (in much the same way that, depending on your ideological bias, GW Bush was either very smart or really dumb).

    I do get concerned, at the policy and funding levels, when people fail to recognize “A” and all too often assume “B” (see digital intelligence gathering vs. human networks/feet on the ground).

  2. john personna says:

    We are miles from meaningful AI. Watson is meaningful in that other sense, though, as a potential tool, an augmentation of human intelligence.