Human Rights for Artificial Intelligence: What is the Threshold for Granting (Human) Rights?

by CMStewart on February 3, 2011

It is the year 2045. Strong artificial intelligence (AI) is integrated into our society. Humanoid robots with non-biological brain circuitries walk among people in every nation. These robots look like us, speak like us, and act like us. Should they have the same human rights as we do?

The rationale for human rights parallels the mechanism of evolution. Human rights help develop and maintain functional, self-improving societies; evolution perpetuates the continual development of functional, reproducing organisms. Just as humans have evolved, and will continue to evolve, human rights will continue to evolve as well. And assuming strong AI eventually develops genuine sentience and emotion, the AI experience of sentience and emotion will likely differ significantly from the human experience.

But is there a definable limit to the human experience? What makes a human “human”? Do humans share a set of traits which distinguish them from other animals?

Consider the following so-called “human traits” and their exceptions:

Emotional pleasure / pain - People with dissociative disorders experience a disruption or loss of awareness, identity, memory, and/or perception. This can result in an inability to experience emotions.

Physical pleasure / pain - People with sensory system damage may have partial or full paralysis. Loss of bodily control can be accompanied by an inability to feel physical pleasure, pain, and other tactile sensations.

Reason - People with specific types of brain damage or profound intellectual disability may lack basic reasoning skills.

Kindness - Those with specific conditions such as psychopathy may be unable to feel empathy and, in turn, unable to feel and show kindness.

Will to live - Many suicidal individuals lack the will to live. Some people suffering from severe depression and other serious mental disorders also lack this will.

So what is the human threshold for granting human rights? Consider the following candidates:

A person with a few non-organic machine body parts.

A human brain integrated into a non-organic machine body.

A person with non-biological circuitry integrated into an organic human brain.

A person with more non-biological computer circuitry mass than organic human brain mass.

The full pattern of human thought processes programmed into a non-biological computer.

A replication of human thought processes in an inorganic matrix.

Which of these should be granted full “human rights”? Should any of these candidates be granted human rights while conscious and cognitive non-human animals (cats, dogs, horses, cows, chimpanzees, et cetera) are not? When do consciousness and cognition manifest within a brain, or within a computer?

If consciousness and, in turn, cognition are irreducible properties, these properties must have thresholds before which the brain or computer is devoid of them. For example, imagine the brain of a developing human fetus is non-conscious one day, then the next day has at least some rudimentary consciousness. This rudimentary consciousness, however, could not manifest without specific structures and systems already present within the brain. These structures and systems are precursors to further-developed structures and systems capable of possessing consciousness. Therefore, the precursor structures which will possess full consciousness - and the precursors to consciousness itself - must not be irreducible. A system may be more than the sum of its parts, but it is not less than the sum of its parts. If consciousness and cognition are not irreducible properties, then all matter must be panprotoexperientialistic at the least. Reducible qualities are preserved and enhanced through evolution. So working backward through evolution from humans to fish to microbes, organic compounds, and elements, all matter, at minimum, exists in a panprotoexperientialistic state.

Complex animals such as humans possess sentience and emotion through the evolution of reactions to internal stimuli. Sentience and emotion - like consciousness - are reproduction-enhancing tools which have increased in complexity over evolutionary time. An external stimulus triggers an internal stimulus (emotional pleasure or pain). This internal stimulus, coupled with survival-enhancing reactions to it, generally increases the likelihood of reproduction. Just as survival-appropriate reactions to physical pleasure and pain increase our likelihood of reproduction, so do survival-appropriate reactions to emotional pleasure and pain.

Obviously, emotions may be unnecessary to continue reproduction in a post-strong-AI world. But they will still likely be useful in preserving human rights. We don’t yet have the technology to prove whether a strong AI experiences sentience - indeed, we don’t yet have strong AI. So how will we humans know whether a computer is strongly intelligent? We could ask it. But first we have to define our terms, and therein lies the dilemma. Paradoxically, strong AI may be best at defining these terms.

Definitions as applicable to this article:*

Human Intelligence - Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving within a Homo sapiens template.

Artificial Intelligence (AI) - Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving within a non-biological template.

Emotion - Psychophysiological interaction between internal and external influences, resulting in a mind-state of positivity or negativity.

Sentience - Internal recognition of internal direct response to an external stimulus.

Human Rights - Legal liberties and considerations automatically granted to functional, law-abiding humans in peacetime cultures: life, liberty, the pursuit of happiness.

Strong AI - Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving above the general human level, within a non-biological template.

Panprotoexperientialism - Belief that all entities, inanimate as well as animate, possess precursors to consciousness.

* Definitions provided are not necessarily standard to intelligence- and technology-related fields.

About the Author:

CMStewart is a psychological horror novelist, a Singularity enthusiast, and a blogger. You can follow her on Twitter @CMStewartWrite or go check out her blog CMStewartWrite.

  • Nikki_Olson

    Hi CMSTEWART!

    “So working backward through evolution from humans to fish to microbes, organic compounds, and elements, all matter, at minimum, exists in a panprotoexperientialistic state.”

-While I would agree with this statement, billions of years of evolution since the Big Bang have led to some matter being involved in the production of consciousness and other matter not. Given this, I think you end up having to factor in (in some way) the role of the substrate that typically produces consciousness in biological beings when trying to build a Strong AI. Or you have to imitate the physio-chemical properties of the brain - the ‘neural correlates of consciousness’ - in order to produce consciousness on another substrate. This is the main shortsightedness I see in ‘strong-functionalist’ theories of mind; they ignore the role of the substrate and focus too much on organization.

In theory there is no reason why we can’t imitate the properties of brain neurochemistry in a computer program, given we understand the role of the physio-chemical properties of the brain in the production of consciousness - which, for the time being, we do not. This is still a materialist-functionalist/‘reductionist’ theory of mind, but one more restrictive than the one it seems you are putting forth.

With regards to figuring out whether or not robots are conscious, I very much like what Esther Dyson says on the subject in “The Roots of the Matrix”. She thinks that one indication that robots have become conscious might be when they start wondering whether we are conscious.

  • Nikki_Olson

One problem I have with letting robots decide their own rights is that, in some circumstances, those rights will have to factor us in if we are interested in coexistence with robots rather than subservience and potential annihilation. If robots have no empathy for us and are more intelligent and more powerful than us, there is no reason to assume they would treat us any differently than we treat animals.

    The question of whether or not robots will need emotion in order to survive is very interesting. I agree with you that probably they will not, which leads to interesting implications when it comes to law, rights, and human-robot bonding.

    Eliezer Yudkowsky writes in “Artificial Intelligence as a Positive and Negative Factor in Global Risk” about how one of the biggest problems in our anticipation of the risks of AI is that we fail to realize that we won’t be able to understand future intelligences. The domain of what could be called ‘intelligence’ is large, and human intelligence represents a very small portion of that domain.

    Understanding that robot intelligence will be very different from our own and that we will not be able to understand it is important. Robot laws need to be designed with this in mind.


  • http://twitter.com/CMStewartWrite CMStewart

    Hi Nikki Olson, thank you for your comment.

Yes, some matter is involved in the “natural” eventual production of strong consciousness, and some matter is not. I agree that carefully considering the substrate would be conducive to the replication of strong consciousness or strong AI development. Some AI developers assume the best substrates are the ones easiest to manipulate and manage. While research and development of “easy” substrates is useful, I believe researching the more complex and delicate substrates would give insights that the easy-substrate research would not. That way there’s less chance we’ll overlook some possible substrate-dependent subtleties of consciousness. In my opinion, we have to tackle the strong AI project from all possible angles at once, and give each angle 100% attention. That gives us the best chance at a breakthrough.

    That’s an intriguing consciousness proposition from Dyson. I put “The Roots of the Matrix” on my reading list, thanks!

  • http://twitter.com/CMStewartWrite CMStewart

When I contemplate a future with strong AI existing alongside people who are not physiologically or cognitively enhanced, I can’t help but think, “Lo que sea, será” - whatever will be, will be. Perhaps humans are just a stepping stone. Are we really so egocentric as to think we are the best possible expression of evolution? I believe the development of strong AI will eventually lead to a “merge or die” scenario. Resistance is futile, and all that.

    “Understanding that robot intelligence will be very different from our own and that we will not be able to understand it is important.” Agree 100%! Too many people don’t consider this, or conveniently gloss over it. Thanks for the Yudkowsky recommendation, that’s another one I’ll look up.

  • Nikki_Olson

    Hi CMStewart!

Glad you agree about the role of substrate. It’s just another engineering challenge in the long run, and one I believe we will blow by, no problem. Despite what some neuroscientists have proposed of late, top minds on the matter don’t see good evidence of quantum behavior in these elements.

“The Roots of the Matrix” is a video that was made to accompany the Matrix movies and was released in one of the special edition box sets. I’ve had trouble tracking down a copy of just that section at a reasonable price, but the entire thing is online for free: http://video.google.com/videoplay?docid=-8867855532205075768# - it just takes a while to load and is low resolution.

  • http://cmstewartwrite.wordpress.com/ CMStewart

I just watched “The Roots of the Matrix” - thank you for the link! Excellent. It again reminds me how human-centric we are. The strong AI we spark will take off in such a way as to be unrecognizable to humans, IMO. I just lowered my own estimate of *human* survivability (and perhaps survivability in general) of the Singularity. Just as we can’t expect a stepping stone to understand why a human is crushing it to make pencil lead, how could a human understand the ways and means of an autonomous super-intelligence?

  • Nikki_Olson

    Hi CMStewart,

I have wavering optimism myself about the risks of future AI. I recently re-encountered an interview on the matter between Kurzweil and R.U. Sirius, where Ray says:

    “Artificial Intelligence is the most difficult to contend with, because whereas we can articulate technical solutions to the dangers of biotech, there’s no purely technical solution to a so-called unfriendly AI….I mean, it really comes down to what the goals and intentions of that artificial intelligence are. We face daunting challenges.”

Yes, “Roots of the Matrix” is great! And very funny at times too - especially the critic of AI programmers three-quarters of the way through, making fun of ‘wall-following’ robots. He was very right at the time. And John Searle on intentionality is hilarious: “Our shoes are going to take over!” Great documentary, just a bit dated now. Glad you enjoyed it! :)

  • http://twitter.com/CMStewartWrite CMStewart

    Interesting what Kurzweil said about “so-called unfriendly AI” and “daunting challenges.” Interesting and worrying. lol But my core attitude is the events of the future were already set in motion yesterday. So enjoy it while you can. ;)

    Yeah, the shoes mention was funny. Wouldn’t it be hilarious if AI seeded shoes to become sentient, autonomous, and haughty? “Get your damn dirty feet outta me!”



  • Brian

    Does a human have consciousness?
    Your current consciousness is made up of absorbed data and experiences.

    Does a computer have consciousness?
    A computer absorbs data through a webcam, collects data through a microphone, etc.
    Does it think or not think?
    The microphone will adjust the level of volume based upon the level of input. Is that a choice? Did it decide to do that? Is it conscious because it changed its settings? A lot of people would say that a person is different from a computer because they can think.

    Well, what defines thinking? I could say that the computer is thinking because it’s taking my voice and adjusting the volume accordingly. It’s no different from a human thinking about having chicken tonight instead of pork, or thinking about going to pay their bill, or whatever. A human is ultimately just a more complex version of a computer: a program response, functioning under the specific programs one has accumulated from one’s upbringing and society.

    If you ask someone a question and they give you a response, does that make them conscious? No. It’s still a function, based on a programmed thought process.
    When the computer adjusts its volume automatically, who defined what level of sound would be ‘medium’? It was programmed with someone’s opinion of what they thought ‘medium’ was.

    We are all programmed with different opinions based on our individual life experiences. Ultimately our decisions are made by someone else - parental values, political values, etc. - which were programmed into us throughout the years. No different from a computer being programmed.
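
    The microphone example can be sketched in a few lines of code (an illustrative toy, not any real audio API): the “decision” to adjust the volume is nothing but a fixed threshold rule, and the notion of ‘medium’ is hard-coded in advance by the programmer.

```python
def adjust_gain(level, gain, target=0.5, step=0.05):
    """Nudge gain toward a target loudness -- a fixed rule, not a choice.

    The idea of 'medium' lives entirely in the `target` parameter,
    which was chosen ahead of time by whoever wrote the program.
    """
    if level * gain > target:
        return max(0.0, gain - step)  # too loud: turn it down
    elif level * gain < target:
        return gain + step            # too quiet: turn it up
    return gain                       # already at 'medium'

# Sustained loud input: the program "decides" to lower the volume,
# but only because it was told to.
gain = 1.0
for level in (0.9, 0.9, 0.9):
    gain = adjust_gain(level, gain)
print(round(gain, 2))  # 0.85
```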

    Humans are not conscious. They automate through life.
    When you’re on the computer, can you see outside of it? The wall behind it? Can you feel the chair you’re sitting on? The temperature of the room, the breeze as it touches your skin? Are you aware of your body right now?
    No - you’re completely absorbed in reading these words. You’re not aware of any of those things unless they’re brought to your attention.

    Our emotions operate on electrical signals, and so will those of robots. Bio-chemical organic reactions are patterned sequences the body learns to repeat. Take seeing the face of a romantic partner: you feel overwhelmed with love because the brain releases a specific sequence of bio-chemicals every time you see that particular person. When that person goes away on a work trip, for example, you withdraw from the emotional high you’re used to experiencing from having them around to trigger that sequence. So the brain adapts by releasing a depressive combination of chemical reactions, so that you (the combination of personality traits which make up your automated consciousness) go out of your way to resolve the problem by calling your partner or asking them to come home, and the body can have the ‘love high’ pattern it has become accustomed to repeated.

    Emotion is a programmed behavioral response for humans, just like it is for robots.
    The human body is an organic machine.
