Elementary, my dear Watson: Who Is Smarter than a Human?

by Socrates on March 18, 2010


In the 1940s, Alan Turing famously predicted that one day computers would defeat humans at chess.

In 1997, IBM’s Deep Blue defeated the reigning world chess champion Garry Kasparov.

Currently, IBM is building a natural language processing computer named Watson, designed to compete on the game show Jeopardy and, ultimately, to defeat any human opponent.

(You can test yourself against Watson by playing the NY Times Trivia Challenge Game here.)

As you can see in the videos, Watson is still very much a work in progress. However, is there anyone who honestly doubts the inevitable? Do you need to be a Sherlock Holmes to see what’s coming? I think it’s elementary.

“Big deal,” someone will say.

I remember reading once that the famous linguist Noam Chomsky remarked that Deep Blue defeating Kasparov at chess was about as interesting as a bulldozer winning the Olympics in weight-lifting. Well, I wonder if, as a linguist, Chomsky finds Watson a little more interesting than Deep Blue.

I admit — I am no world-famous linguist. But it seems to me that, in a way, Jeopardy is very different from chess. In fact, I will argue that Jeopardy is much, much harder (at least for computers) than chess.

For the record: I love chess. I think it takes a uniquely rare genius to become a world chess champion like Kasparov. But language is so much more complex and has, it seems to me, a near-infinite number of combinations, idioms, and subtle, ironic, and humorous meanings.

Chess, on the other hand, has a very large but still finite number of moves. Therefore, if a computer beats the best of us at Jeopardy, I would dare to say: it is, indeed, a big deal.
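To put rough numbers on that intuition (this is my own back-of-envelope illustration, not something from IBM or the show): even before a machine assigns a single meaning, merely grouping the words of one clue into a binary parse tree is combinatorially explosive. The count of such groupings follows the Catalan numbers, as this minimal Python sketch shows:

```python
from math import comb

def catalan(n: int) -> int:
    """C(n) = C(2n, n) / (n + 1): the number of distinct
    binary bracketings of a sequence of n + 1 words."""
    return comb(2 * n, n) // (n + 1)

# A chess position offers roughly 35 legal moves -- large, but enumerable.
# A sentence's possible structures explode far faster:
print(catalan(4))   # a 5-word phrase: 14 possible groupings
print(catalan(19))  # a 20-word clue: 1,767,263,190 possible groupings
```

So a single twenty-word clue already admits nearly two billion structural readings, and that is before vocabulary, idiom, or irony enter the picture — a hint of why Jeopardy is a harder target than chess.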

And then, again: Is Jeopardy really that different from chess?

Maybe as much as chess is different from wool weaving. But the fact remains that a few hundred years after weaving machines became better than human weavers, the Mechanical Turk turned from a hoax into reality.

So, is anything really that different from chess? And from weaving? And calculating? And machining? And lifting? And welding? And…

Will there be anything that we can claim and hold as exclusively human, and therefore untouchable by machine intelligence?

I am not sure there is.

But even if there is (let’s call it love, or emotional intelligence, for example), once we are the smart, but really not that smart, formerly smartest species on the block, the question still remains unchanged:

What happens when Kasparov’s “uniquely rare genius” is mass-produced in every personal computer? (As it already is.) Or, since today we are putting chips in everything, what happens when eventually any smart machine is able to outdo any human at any one thing?

What then? Where do we go from there? Where do we find work? How do we make a living? How do we even survive as a species?

Will technology replace biology?

Video Updates:

IBM’s Watson supercomputer destroys all humans in Jeopardy


How Watson wins at Jeopardy, with Dave Gondek


What is Watson? Why Jeopardy?

  • Taggart

    Well, of course biology will eventually replace technology! Or rather, we will find that we will use biology as technology and the distinction between the two will fall away. Will chips be 'intelligent'? Eventually we will surely 'build' (as in 'create') something that will pass our own tests of intelligence but can we still call it 'artificial'?

    So what if technology becomes intelligent (in the sense of 'creative')? Will it really be a problem? Doesn't mankind only create problems for itself for which it already has solutions (albeit that those may not be clear at the time)?

  • http://www.SingularitySymposium.com/ Socrates

    Greetings friend,
    And thank you very much for your contribution. I am happy you took this opportunity to voice your opinion.
    Speaking of opinions, I have to admit that mine is rather different from yours, so let me respond point by point.
    1. To me it is not completely evident that technology will indeed replace biology. There are many types of catastrophic scenarios (such as nuclear war, an asteroid hit, a gigantic solar storm, environmental collapse, etc.) which can potentially prevent it from happening. What is even more important, however, is that even if we agree that overall there is a good chance that technology may indeed replace biology, the issue remains as to what happens to the human race. After all, we are, at least so far, biologically based, and moving away from that will create all kinds of political, legal, ethical, religious, economic, and even military issues, division, and backlash. Thus it is unclear that our race will indeed survive any such move to technology, which, in turn, is the main reason I pose the question. I mean, do I care that technology has replaced biology if the planet is overrun by a matrix AI and humanity is all dead, or (even worse) enslaved…
    2. You do raise an excellent point that whatever may arise from AI may not be truly “artificial,” and therefore the whole “AI” name may be a total misnomer. We could probably talk about intelligence in general, regardless of whether it is biological or silicon based. Thus, in turn, we may want to consider abandoning the good old concept of “human rights” and moving on to “intelligence rights,” with the appropriate legislation against speciesism or bio-chauvinism.
    3. As far as the sequence of problem and solution goes, I think that you have things upside down. Let us take one of the first radical technologies that changed our species as an example: it seems to me that fire was invented (or discovered) first, and only then, much, much later, did we manage to come up with the fire department. Thus the default mode is that we come up with a certain type of radically new technology, we adopt it until it permeates our society, and only then, as we get accustomed to it and after a long period of usage, do we develop the capability (if at all) to sometimes resolve the problems that arise from its usage.
    The vast majority of problems follow the above pattern. I mean, first you have to recognize that there is a problem to begin with, then you have to analyze it and search for a spectrum of ways to solve it, then you have to make a choice about what in your judgement will work best to solve the problem, and only then do you implement it. This is particularly true when you have other (usually market) forces that create strong incentives for a speedy delivery, and you have a high payout for doing so. This was the case in both the sub-prime mortgage disaster and the BP oil spill (just as it may be with genetic engineering, cloning, AI, and nanotechnology). The people who created those “technologies” rushed to innovate and ship them to market in order to make quick money and didn’t even think of the “what if” scenario, for it didn’t pay to do so. Usually we introduce a technology, and only later do we search for ways to mitigate or eliminate its problems.
    An even better example is nuclear weapons. Nuclear weapons were invented over 60 years ago after a great effort from some of the brightest minds of humanity. However, 60 years later we still have no solution to the problem of nuclear weapons other than to hope there are no accidents and that they don’t fall into the wrong hands. But if we actually apply the technology of the weapons, we can exterminate the species several times over, and there is no way to undo it, i.e. to solve the problem after the fact.
    So, the parallel that I am drawing with all my work at Singularity Weblog and Singularity Symposium is that the birth of self-improving AI may equal or surpass the dangers (and promises) of nuclear power. Thus, if we don’t think very hard BEFORE we create it, and if we don’t get it right on the first shot, we may go the way of the dinosaurs…. At any rate, once AI comes to be, our ability to model and predict the future falls apart. Hence the name Technological Singularity…

  • http://singularityblog.singularitysymposium.com/singularity-news-on-reuters/ Singularity News on Reuters

    [...] Who is smarter than human? [...]

  • http://singularityblog.singularitysymposium.com/top-3-robot-music-videos/ Top 3 Robot Music Videos

    [...] eventually, complex ones such as chess. Recently, attempts have demonstrated that even teaching, jeopardy or playing and composing music are not [...]

  • http://pulse.yahoo.com/_LO477L4I3X22IV36GI36EALSPA Gavin Schmitt

    I’ve actually been talking to Chomsky about this… in his own words, he’s “not impressed by a bigger steamroller.”

  • http://singularityblog.singularitysymposium.com/ Socrates

    Very interesting! Generally I love Chomsky but this time we would have to disagree… But did he elaborate?!

  • http://pulse.yahoo.com/_LO477L4I3X22IV36GI36EALSPA Gavin Schmitt

    I have asked him to (we communicate by e-mail)… so we’ll see what he says. And I’m with you, I think this is a much bigger deal than just another Deep Blue.

  • http://singularityblog.singularitysymposium.com/ Socrates

    Great! Let me know what he says if he decides to elaborate.

  • http://pulse.yahoo.com/_LO477L4I3X22IV36GI36EALSPA Gavin Schmitt

    I’m posting the conversation here as it happens. If you have any questions, I’d be happy to pass them on.

    http://www.framingbusiness.net/archives/1287

  • http://singularityblog.singularitysymposium.com/ Socrates

    Here are a few questions from the post:

    So, is anything really that different from chess? And from Jeopardy? And from weaving? And calculating? And machining? And lifting? And welding? And…

    Will there be anything that we can claim and hold as exclusively human and therefore untouchable for the machine intelligence?

    What happens when Kasparov’s “uniquely rare genius” is mass produced in every personal computer? (As it already is.) Or, since today we are putting chips in everything, what happens when eventually any smart machine is able to outdo any human at any one thing?

    What then? Where do we go from there? Where do we find work? How do we make a living? How do we even survive as a species?

    Will technology replace biology?

