A Turing Test Point of View: Will the Singularity be Biased?

by Nikki Olson on March 25, 2011

Computers, by their very nature, don’t need to have a point of view. For our purposes, however, we often prefer that they do.

In the days before natural language processing, this manifested as a bias toward other computers. For example, Macintosh hardware didn’t run Windows natively until Boot Camp arrived in 2006, and PCs didn’t recognize printers without deliberate driver installation until Windows 7 came out in 2009.

But as of late, computers have become capable of holding a new kind of ‘bias’: a ‘biased’ opinion about human beings, and about the world at large.

This past year computers began working as journalists, writing articles about data-intensive topics such as weather and sports.

For articles generated by the software program StatSheet, sports readers cannot tell whether a computer or a human wrote the article over 80% of the time. Say what you will about sports fans, but a large part of this software’s success has to do with the deliberate incorporation of ‘bias’ into the articles.

In contemporary society, a major portion of the journalism industry is devoted to producing ‘biased’ articles. Sports fans, for instance, like to read articles that favor their home team rather than ones that assess the situation objectively. As StatSheet demonstrates, computer-generated articles that sympathize with the shortcomings of the local team, and overemphasize the team’s successes, are more likely to fool readers into thinking that ‘someone’, rather than ‘something’, wrote the article.
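To make the mechanism concrete, here is a minimal sketch of template-based generation with a deliberate home-team slant. It is purely illustrative: the templates, team names, and scores are invented, and this is an assumption about the general technique rather than StatSheet’s actual code or method.

```python
import random

# Hypothetical recap templates with a deliberate home-team slant.
HOME_WIN = [
    "{home} powered past {away} {hs}-{aws}, just as the home crowd expected.",
    "A statement win: {home} dismantled {away} {hs}-{aws}.",
]
HOME_LOSS = [
    "Despite a gritty effort, {home} fell {aws}-{hs} to {away}; the scoreline flatters the visitors.",
    "{home} ran out of gas late, dropping a hard-fought {aws}-{hs} decision to {away}.",
]

def biased_recap(home, away, home_score, away_score):
    """Pick a template that flatters the home team regardless of outcome."""
    pool = HOME_WIN if home_score > away_score else HOME_LOSS
    return random.choice(pool).format(home=home, away=away, hs=home_score, aws=away_score)

# Even in a loss, the generated sentence sympathizes with the home side.
print(biased_recap("Tar Heels", "Blue Devils", 67, 81))
```

Either branch produces fluent copy, but the loss templates hedge and excuse; that small editorial tilt is what makes the output read as ‘someone’ rather than ‘something’.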

As is often emphasized, part of being human rather than a computer, at least in 2011, is being ‘conscious’. With consciousness comes subjectivity: a point of view, or a gap between how things look to you and how things really are.

It has long been recognized that in order for a computer to pass the Turing Test, it will have to imitate human weaknesses as well as human strengths. So in 2029, or whenever the first computer passes the Turing Test, we will still want computers to have a ‘point of view’.
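Turing himself anticipated this: in his 1950 paper, the imagined machine pauses for half a minute and then answers the sum 34957 + 70764 slightly wrong. As a toy illustration only (not any real chatbot’s code), deliberately imitated weaknesses might look like this:

```python
import random
import time

def humanize(reply, typo_rate=0.03, chars_per_second=7):
    """Delay the way a human typist would, and inject an occasional typo."""
    time.sleep(len(reply) / chars_per_second)  # plausible typing delay
    return "".join(
        random.choice("abcdefghijklmnopqrstuvwxyz")  # fat-fingered character
        if ch.isalpha() and random.random() < typo_rate
        else ch
        for ch in reply
    )

def human_sum(a, b, error_rate=0.1):
    """Occasionally get arithmetic slightly wrong, as Turing's imagined machine did."""
    result = a + b
    return (result + random.choice([-100, -10, 10, 100])) if random.random() < error_rate else result

print(humanize("Give me a moment... the answer is 105721."))
print(human_sum(34957, 70764))  # usually right, sometimes off, and more 'human' for it
```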

But will the first computer that exceeds human intelligence have a point of view?

Despite the incompatibility of “subjectivity” and “objectivity” in human experience, perhaps a conscious computer smarter than we are will become the first real entity to possess both at once. The closest analogy, though not quite exemplifying the notion, might be the concept of Doublethink from Orwell’s 1984: holding two conflicting ideas in mind at once and accepting them both.

Empirical inquiry tells us that a Singularity will likely happen, but it can do little to tell us about the likely ‘subjectivity’ of that Singularity. If it is indeed conscious, will subjectivity restrict a computer as it restricts the human mind?

In many ways, computers will become more than we are, capable of more than we can even imagine, literally. This is just one more way in which that might be true: once computers are as smart as we are, we will not be able to think the way they do. They will likely approach questions in ways that are completely foreign to the human mind.

About the Author:

Nikki Olson is a writer/researcher working on an upcoming book about the Singularity with Dr. Kim Solez, as well as relevant educational material for the Lifeboat Foundation. She has a background in philosophy and sociology, and has been involved extensively in Singularity research for 3 years. You can reach Nikki via email at [email protected].

  • http://singularityblog.singularitysymposium.com/ Socrates

    This article reminds me of a koan I read on Wikipedia about Marvin Minsky. Minsky is an actor in an artificial intelligence koan (attributed to his student, Danny Hillis) from the Jargon File:

    In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?” asked Minsky. “I am training a randomly wired neural net to play Tic-tac-toe,” Sussman replied. “Why is the net wired randomly?” asked Minsky. “I do not want it to have any preconceptions of how to play,” Sussman said. Minsky then shut his eyes. “Why do you close your eyes?” Sussman asked his teacher. “So that the room will be empty.” At that moment, Sussman was enlightened.

    “What I actually said was, ‘If you wire it randomly, it will still have preconceptions of how to play. But you just won’t know what those preconceptions are.’” — Marvin Minsky
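    Minsky’s point can be made concrete with a minimal sketch (the architecture and sizes are arbitrary illustrative choices, not anyone’s actual experiment): an untrained, randomly wired net already “prefers” an opening move, and a different seed yields a different preference.

    ```python
    import numpy as np

    # A randomly wired "tic-tac-toe net": 9 board inputs -> 16 hidden units -> 9 move scores.
    rng = np.random.default_rng(seed=42)
    W1, b1 = rng.normal(size=(16, 9)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(9, 16)), rng.normal(size=9)

    def move_preferences(board):
        """Score the nine squares using only the random, untrained weights."""
        hidden = np.tanh(W1 @ board + b1)
        return W2 @ hidden + b2

    empty_board = np.zeros(9)
    print("Preferred opening square:", int(np.argmax(move_preferences(empty_board))))
    # Change the seed and the net "prefers" a different square: the random wiring
    # encodes preconceptions about how to play; we just don't know what they are.
    ```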

  • Matt

    Thought-provoking piece, Nikki.
    I had a stray thought while reading this: would creating Friendly AI mean that subjectivity is a necessary ingredient of the program? After all, we’d want pro-human AI… that’s subjective. Because, honestly, there are a lot of times I’m not totally pro-human.

  • Nikki Olson

    Thanks Matt :) Yes, humans might be a hard sell, so to speak. Would augmented humans be a better or worse sell? This topic could be a blog unto itself! I have empathy for lions, scavengers, spiders, poisonous snakes and so on, despite some of these species’ terrible traits. The reason has to do with the ability to sympathize with suffering and the drive for survival. Mutual traits across biological species lend at least a very weak sense of ‘family’. Hard to see how to bridge the gap between robots/AI and humans in this regard without deliberate programming.

  • http://twitter.com/CMStewartWrite CMStewart

    Biased AI? That’s a great question! And so difficult to even guess at an answer, IMO. We all know a number of researchers are working to develop strong AI. Are all these researchers sharing notes? Or are some going rogue?

    As a starting point, I imagine an objective AI wouldn’t have much use for many of the inclinations and methods we humans take for granted as part of our own genetic and cultural “programming.” And we’ll likely be surprised at what strong AI considers “useful” and what it considers “expendable.” A chilling thought, once you really ponder the potential scenarios unfolding. And depending on the safety measures installed (if that is even possible), the unfolding could happen so fast we wouldn’t even know it. A scenario in which we “survive”: we humans, or our thought processes, are put into a program out of “convenience” (or some unknowable motivation) to get us out of the way. Maybe that’s already happened, Matrix-style… can any one of us prove we’re not “brains in vats”? Will the Singularity change this inability? And if you suddenly had the knowledge that you existed only in a computer program, would you really do anything in your “life” differently?

  • http://twitter.com/CMStewartWrite CMStewart

    “If you wire it randomly, it will still have preconceptions of how to play. But you just won’t know what those preconceptions are.”

    Now THAT is something to think about! We don’t know, and therein resides the danger, or what we perceive as danger.

  • http://twitter.com/CMStewartWrite CMStewart

    And then how to preserve the programming over successive improvements. How would we make that a non-expendable clause? We don’t have much in terms of bargaining or security IMO. Perhaps I am being too “doom and gloomy,” but I really don’t see humans as anything more than a stepping stone if we have a hard take-off of strong AI. And I am really trying to see a way around this one!

  • http://twitter.com/CMStewartWrite CMStewart

    “there are a lot of times, I’m not totally pro-human”

    I agree with you. Just because we’re developing strong AI doesn’t mean we’re the “best,” IMO. We have a lot of the same faulty wiring as (probably) every other animal on Earth. I think we’re even faultier in some ways.

  • Claire Pedersen

    I thought these computers were ALREADY passing the Turing test, at least re sports fans (say what you will about sports fans!) ;-P

  • Nikki Olson

    Hi CMStewart!
    I agree. The more I think about it, the more obstacles I see regarding AI. Ultimately we are limited in ways that an AI will not be. Unless we upload and expand ourselves. But then are we really still human?

  • Nikki Olson

    “I imagine an objective AI wouldn’t have much use for many of the inclinations and methods we humans take for granted as part of our own genetic and cultural ‘programming.’”

    -I agree! Our culture is valuable to us but it will not be valuable to an AI. Its survival doesn’t depend on caring about our culture, nor does it help it to understand reality any better by holding the same subjective standpoints that we humans argue over.

    I seem to be stuck on this point, but I really want to expand out the idea of how we might matter to an AI for the sake of it understanding its heritage and therefore understanding its origin and therefore purpose. I am not aware of any fiction novels that play with this idea, and I suspect I would have to learn more about A-life and so on in order to really get into this one. But as far as we humans can figure, life’s ‘meaning’ is not something ‘cracked’ by ‘intelligence’ (e.g., the meaning of life is not 42!). So, as much as AIs care about existentialism, I think they are bound to care about us (at least to the extent that we care about the first instances of tool use and language).

    Hope this helps resolve some doom and gloom for the weekend!

  • http://twitter.com/CMStewartWrite CMStewart

    “…expand out the idea of how we might matter to an AI for the sake of it understanding its heritage and therefore understanding its origin and therefore purpose.”

    That’s the best starting-point strategy for human preservation in the face of strong AI that I’ve heard so far, and this sounds like a great discussion for the forum. If understanding heritage, origin, and purpose had a good chance of translating into what we would perceive as “friendliness,” I think humans would have a better chance.

    The meaning of life is not 42? Blasphemous! ;)

  • http://twitter.com/CMStewartWrite CMStewart

    Regarding fiction novels- in mine, the Singularity conduit is a computer-enhanced human programmed to experience a slow take-off, and therefore has empathy toward other sentient beings. (She has a shift in attitude toward the end of the novel, though.)

  • Nikki Olson

    I am working on making this idea into a blog this weekend.

    As a large-scale project I could see how it would have value too! Yes, even if Neanderthals were mean and limited, if still around we would want to protect them. I see the same occurring with AI if we could develop a way to present to AI the idea that we are its ancestors and its own sense of meaning and purpose comes from knowing us.

    Stephen Wolfram touches on this when discussing A-life in the podcast here. He says:

    “But we can also look at a rock, which also (like humans) has all these processes going on, all these electrons whizzing around, and this principle of computational equivalence tells us that there is nothing less sophisticated probably about that which happens in all kinds of rocks, than there is in this funny creation that’s been created by this ultimate future projection of human technology. And so then we’d say ‘well then what’s special about the future of human technology as opposed to the one that’s just the rock?’ And the answer is not something that we’ll be able to abstractly say. We won’t be able to say ‘oh look’, ‘there’s an emotion running around there’, or, ‘oh look’, ‘there’s some kind of general intelligence factor there’. There will be no such distinction. The distinction will be something about history. The distinction will be the thing that encapsulates the particulars, that it’s not a question of the general computation that we’re achieving. It’s a question of the particulars of human history and so on that we have that will be the thing that’s special about what it is that is the successor of us in the future.”

    p.s. CMStewart. I can’t find you on fbook. Find me under ‘Nikki Olson’ (friend of Socrates and Matt Swayne) if you are on it!

  • http://twitter.com/CMStewartWrite CMStewart

    “Yes, even if Neanderthals were mean and limited, if still around we would want to protect them. I see the same occurring with AI if we could develop a way to present to AI the idea that we are its ancestors and its own sense of meaning and purpose comes from knowing us.”

    This directly relates to the discussion I was having with Socrates in the comments of the recent Max More interview article. (More advocates a Paleo diet. I was explaining to Socrates and More that I do not.) Humans have a range of attitudes regarding the usefulness of non-human sentient animals (for the sake of argument, let’s pretend we know there’s a useful dividing line between non-sentient and sentient, and we know where that dividing line is). These human attitudes range from:

    1. “If I can’t eat it, wear it, or be entertained by it, I have no use for it,” to

    2. “It’s sentient like me, and therefore has the same rights to life, liberty, and the pursuit of happiness as I do,” to

    3. “It’s not a person, but it’s sentient, so I will attempt to respect it until I eat it, wear it, or am entertained by it.”

    My point is, *all of us on Earth share the same DNA.* And by “us” I mean “all life.” That goes back to your points about origin, heritage, meaning, and purpose. I said in the More article comments, “I believe that in a framework of intellectual honesty, you can’t begin to address transhuman and AI rights without also addressing animal rights.”

    Without getting into a debate about the ethics and morals of humans eating meat, and without assuming a set of values for a potential AI, I point out that many humans can’t even respect each other, let alone other life forms. Even humans who would otherwise be respectful of strangers readily agree to go to war and murder people they’ve never met. Do we dare assume AI would be respectful of us when we abandon respect for each other at the drop of a hat? Would AI be respectful of us, considering we would bulldoze over a rat’s nest in order to build a parking lot? Do we understand the implications of the truism “All life on Earth shares the same DNA”? Some species share more DNA than others, but we ALL share a common ancestor. Just because AI would have its “origins” in humans, and may even understand meaning and purpose in relation to its origins, doesn’t mean it would not “bulldoze” over us to build the AI equivalent of a parking lot.

    I’m not predicting a scenario in which AI has no use for us; I’m trying to stress the importance of speculating how AI would behave toward us in the context of our behavior toward ourselves and toward other sentient life. I wouldn’t be so quick to assume AI would be morally and ethically “superior,” or even have moral or ethical considerations. In looking at our own history up to and including the present day, it appears our own morals and ethics are afterthoughts at best.

    Regarding Stephen Wolfram and the “computational equivalence of a rock.” Yes! I listened to that one and I love that insight. That helped me clarify some of my own ideas before I wrote my “Human Rights for AI” article.

    Thank you for the Facebook invitation. I had a Facebook account 3 different times, each time I ended up deleting it. I’m on LinkedIn, thanks to Socrates, though I don’t do any networking there (yet). I’m also on Twitter at CMStewartWrite- that’s where I do almost all my social networking.

  • 77pop7

    Would the intentions/motivations of super A.I. be biased?

    Jon Stewart vs Fox News?

    Hard Ass Military General vs Egghead Bleeding Heart Liberal Scientist?

    If you have read the Terminator series: SkyNet was paranoid that other A.I. would become sentient and challenge Her/Him. That’s why the humans had the advantage (teamwork).

    Paranoid people are dangerous, and even if they are smart, they have bad people skills.

    Blind with rage is the term. Blind, Blind, Blind.

    7 times have I avoided death from siblings, with this understanding.

    So it is said that if you know your enemies and know yourself, you can win a hundred battles without a single loss.
    If you only know yourself, but not your opponent, you may win or may lose.
    If you know neither yourself nor your enemy, you will always endanger yourself.

    - Sun Tzu, The Art of War

    If paranoid people build A.I., it will be handicapped from the start. It will be a slave to the military.

    Otherwise it will be psychologically healthy.

    Batman A.I. defeats the Joker A.I.

    And finally there is the pop culture reference, movie dialog.

    Roxanne Ritchie: Wait, what secrets? You’re so predictable!
    Megamind: Predictable, predictable! Oh, you call this….predictable!
    [pulls a lever]
    Roxanne Ritchie: Alligators, yep. I was thinking about it on the way over.

    [to Megamind]
    Metro Man: I’m sorry. I really am. Um, I’m…I’m done. You know, little buddy, there’s a yin for every yang. If there’s bad, good will rise up against it. It’s taken me a long time to find my calling. Now it’s about time you find yours.

    Roxanne Ritchie: We can beat Tighten ourselves. I say we go back to the evil lair, grab some ray guns, hold ‘em sideways and just go all ‘gangsta’ on him.
    Megamind: We can’t.
    Roxanne Ritchie: So that’s it. You’re just giving up?
    Megamind: I’m the bad guy. I don’t save the day. I don’t fly off into the sunset and I don’t get the girl. I’m going home.

    [Megamind falls from the sky and grabs the defuser gun from the water in front of Hal]
    Megamind: Oh, hollo!
    [he shoves the defuser gun up Hal's nose and fires it destroying Hal's super powers]
    Megamind: Thing about bad guys; they always lose!
    Hal: Ah!
    Roxanne Ritchie: You did it! You won!
    Megamind: Well, I finally had a reason to win. You!
    [Roxanne hugs him]

  • Nikki Olson

    Thanks for the detailed reply!

    You make great points about our being connected via the ‘code of life’ and how that does and does not succeed at bringing about good behavior. I guess I would say that the connection to other life (linearly and laterally) alone is not enough to ensure good behavior; rather, it is the derivation of ‘purpose’ and ‘meaning’ from that connection that tips the scale to promote more positive behavior.

    It seems that eye-opening experiences with psychedelics that expand consciousness, where one feels connected to other life forms in some profound way, go a long way to promote empathy and peace in the individual experiencer. It is fairly well known already what these drugs do to the brain. It would not be difficult, ultimately, to apply a more complete version of that knowledge to AI programming, minus the part about being out of touch with reality (if that combination is possible). But then we are engineering the situation in a way that AIs might not prefer…

  • Nikki Olson

    Hi Jaap van der Velde!

    You point to some really interesting aspects to this debate! Thank you for the comments.

    I am not so sure that something has to be ‘aware’ to have a ‘point of view’. A ‘point of view’, I think, can be loosely defined as a ‘disposition’, and so even the simplest of creatures can be said to have a ‘point of view’.

    I am also not so sure there is a clear and distinct split between the hardware and software in terms of this idea, in humans or in computers. This gets into interesting philosophical/psychological arenas, since to some extent many assume that we are not a ‘blank slate’ but ‘disposed’ in some ways just by virtue of our ‘hardware’ (the size of the prefrontal cortex, the structure of our bodies, and so on). (We are predisposed by our software, too.)

    In the article I wanted to point out the potential for ‘neutrality’ (more or less) on the part of computers, but they are, when you get down to it, to some extent disposed by virtue of their hardware.

    But if you mean ‘point of view’ as in a ‘subjective’, ‘self-aware’ point of view, then yes, you need consciousness for that. And a computer would as well. But a computer can mimic subjectivity quite well at this point.

    Thank you for the insightful comments! This adds another dimension to the original article, and when I re-address this subject in a context where it can be longer and more involved, I will make sure to address these aspects!

    -Nikki

  • http://twitter.com/Nikki_OlsonTSIN Nikki Olson

    Hi Claire!

    They are, in other ways too: many people still don’t realize that a lot of the things they do are done via automation. Public awareness of automated systems is quite low, I think. Some automated systems are absolutely brilliant and do amazing things.

  • http://singularityblog.singularitysymposium.com/ Socrates

    I agree with you, Nikki: advanced technology and computer automation are so deeply embedded in our society that they have become virtually invisible.
