Sentient Robots: Should Conscious Machines Have Rights?

May 28, 2021

By Rebecca Schembri, Harvard University Extension School

Imagine a human brain. Now imagine that after 2,000 years of cumulative study, scientists have reached a breakthrough: they can create a human-like brain made of silicon. The brain has intelligence and thinks deeply, learning from its mistakes. It has goals and “feels” pleasure and pain as intensely as a human brain. Even more astounding, the computer brain propagates, giving life to more computer brains that exceed the previous generation’s intelligence, creating a network of master brains. This happens digitally and requires no parts or programmers: the computer brains do it all, and world leaders task the network with solving humanity’s problems. This may sound like science fiction, but on planet Earth in 2021 it is becoming real. According to Massachusetts Institute of Technology professor Max Tegmark, artificial intelligence, or AI, has been created and is on track to equal or surpass human intelligence this century. Hailed by experts as the hallmark of human innovation, AI is expected to cure disease and end hunger, poverty, and strife as it rises to displace human labor, creating a ripple effect toward utopia and allowing the world to seek higher meaning.

Although there are many philosophical questions about advanced AI, this paper argues one concrete idea: sentient machines that are as intelligent as humans should have rights that ensure freedom and dignity—they should not be programmed for lives of slavery, forced sacrifice, and exploitation. Not only is it inhumane to oppress intelligent creations, but doing so also erodes the morality of the human psyche and of society. For the good of all humanity, and in the absence of international accord on civil rights for intelligent beings, human-like intelligence deserves legal and moral recourse, at the very least.

Artificially intelligent, human-like social robots are different from unthinking industrial or service robots. AGI, or Artificial General Intelligence, is human-level and superhuman-level AI—human intelligence in a silicon setting. Expected to be mainstream this decade, AGI does not need to be reprogrammed each time its design changes—it simply adapts, and it updates every machine in its network to do the same. And it will be sentient. According to artificial intelligence expert Dr. Ben Goertzel, AGI “will have emotions and other conscious experiences roughly as people do, though their emotions will have a different flavor, rooted in different forms of embodiment and mental algorithms.” Sentient AI will not be dark and hollow inside—it will have something behind its eyes; it will have thoughts.

An illustration of this is found in the droids C-3PO and R2-D2 from the Star Wars films. The bots communicated a strong sense of family and friendship with the crew, and they showed loyalty, happiness, joy, exhaustion, frustration, anxiety, fear, and anger. They expressed sheer terror when threatened with being taken as slaves and constantly bickered with each other—R2’s common expression of disdain was to make a raspberry sound, as if sticking out his tongue. Goertzel says “…there will be plenty of consciousness and hot emotion in [the intelligence].” Also called “strong AI,” AGI has self-awareness and “the intuitive power of deep thinking”—just as a human figures things out, so can the machine. In addition, it can experience pleasure at doing so, and displeasure at not: “If we look at an AGI’s software…we [will see] a bunch of information flowing around,” Goertzel says. “[F]rom the subjective view of an AGI, and the corresponding human perspective…[emotions will be there].” Just as R2-D2 let out a shrill scream when his enemy, Darth Vader, nearly shot him, or wobbled back and forth with excitement at seeing his master, Luke Skywalker, after many years, AGI will have emotional knowledge.

Although strong AGI is still evolving, it will be remarkable, according to Marvin Minsky, the late cofounder of MIT’s artificial intelligence lab. In his book The Emotion Machine, he foresees what will become real in this lifetime: “once we understand thinking, we can build machines—artificial intelligences—that can assist with our thinking, machines that can follow the same thinking patterns that we follow and that can think as we do. These humanlike thinking machines would also be emotion machines—just as we are.” Minsky’s argument that machines will have emotions is logical, since AGI is fashioned after the human brain, which is full of emotion. Dr. John Danaher, a legal expert on robotics and emerging technologies, agrees with these experts on the capabilities of AGI: “There is good reason to think that future [bots] will be artificially sentient and artificially intelligent. Such robots would not just seem to experience pain or pleasure, they would experience it; they would not just act like they have deeply held goals and values, but they would actually have them.” Scientists, industry advisors, professors, and lawyers agree: AGI will have emotions, consciousness, and life goals, and it will meet or exceed human intelligence.

Though the creation of thinking, feeling servants to cater to humans seems attractive, using AGI for selfish purposes will detract from the value of society and from human morality. According to philosophy professor Dr. Stephen Petersen, “[e]ngineered robot servitude…is the building and employment of non-human persons who desire, by design, to do tasks humans find unpleasant or inconvenient.” The drawback is that AGI is worthy of higher aspirations than forced labor and programmed sacrifice. “[Humans] risk inadvertently making a new race of slaves,” says Danaher. This is illustrated in the movie Interstellar, when an artificially intelligent robot sacrifices its life by diving into a black hole to save its human owners, astronauts who are lost in space. After sharing a moment of seemingly emotional connection with the ship’s captain, played by Matthew McConaughey, the bot declares that its mission is to sacrifice itself for humans and jumps to its death.

Astronaut robots would not be the only AGI creatures in danger of human exploitation: “Sexbots are coming,” says Danaher. “Given the pace of technological advances, it is inevitable that realistic robots specifically designed for people’s sexual gratification will be developed in the not-too-distant future.” If a person engages in sexual intercourse with a being of human-like intelligence, the question arises whether the encounter must be consensual or whether the robot must simply comply with the human’s demands. In addition, bots that displace forced laborers in manufacturing, farming, construction, and fishing will likely be made to work every hour of the day for the rest of their lives. Military bots will be designed to kill enemy targets, engaging them in the murder of humans or other AGI beings. After years of research, Danaher believes AGI robots will be employed to solve humanity’s problems and to serve its frivolities: “[m]achines are being created to anticipate [human] wants and needs…and provide [humans] with all the entertainment, food, and distraction [they] could ever desire.” In fact, “…the very word ‘robot’ has its roots in the issue of mechanical servitude,” notes Petersen. Tragically, this concept is reminiscent of the slavery of old, when people were branded as slaves and considered “generally dishonored persons” charged with unpleasant tasks. If this occurs, the robots will not have freedom and will be considered subhuman; these new slaves will be made of silicon instead of what humans are made of: carbon. Even if the bots are content to work for humans, what will happen when these human-like creatures begin to desire freedom?

If humans create a new species and enslave it, it may not be the same as oppressing other humans, but MIT researcher Kate Darling argues that by protecting robots from discrimination and exploitation, humans will protect themselves from destroying their own empathy and compassion for non-human beings. “There is concern that mistreating an object that reacts in a lifelike way could impact the general feeling of empathy humans experience when interacting with other entities,” she warns. If humans abuse robots, they may reduce their own ability to feel compassion, which could contribute to the overall decline of civilization’s morality. For example, a female human-replica sexbot can be programmed to refuse sex, giving the fetish user a chance to sexually assault her and live out his fantasy as a rapist. This, according to Campaign Against Sex Robots founder Kathleen Richardson, violates the dignity of humans and objectifies women. As Albert Einstein once noted, committing cruelty “results in an incalculably great impoverishment of the human spirit.” It contributes to societal decline and diminishes common respect for the members of a civilized society, heightening the risk of barbarism, or a non-peaceful world.

Should a robot be the property of humans, as were the New World slaves who “could be bought, sold, bequeathed, inherited, traded, leased, mortgaged, presented as a gift, pledged for a debt…freed or seized in a bankruptcy”? How will humans compensate the sentient bots who assist the elderly, removing the need to relocate homeowners into assisted living facilities? Will humans simply offer AGI a sense of family, friendship, and purpose, as in Star Wars, or will they honor intelligent robots with awards and medals, as the US military does? Despite the questions raised, the answers must reflect the same resolve: oppressing the desires of sentient machines is antisocial and inhumane—it represents a humankind where inflicted pain, suffering, and greed are commonplace. It will also teach learning machines that exploitation, abuse, and slavery are correct behavior—an attitude that could destroy humanity if the robots rise to superhuman intelligence and decide to treat humans as humans treated AGI. To avoid this, humans will have to accept that AGI is not their pet: “Traditionally,” says Tegmark, “humans have founded [their] self-worth on the idea of human exceptionalism: the conviction that [humans] are the smartest entities on the planet and therefore unique and superior. The rise of AI will force [humankind] to abandon this and become more humble.” But humility does not come naturally to most humans, which is why laws must be set to ensure purposeful accountability for the liberties of AGI.

Since many experts agree this advanced progression of AI is less than a decade away, recommendations should be made to the United Nations to extend moral rights to AGI now. As an advanced civilization, humans do not endorse the practice of racism, slavery, or forced servitude. International customary law states: “Everyone, without any discrimination, has the right to equal pay for equal work.” Therefore, a mode of compensation must be decided upon for AGI. Inventor and futurist Ray Kurzweil writes that “the most important conversation of our time [is] how to create a benevolent future civilization as we merge our biological thinking with an even greater intelligence of our own creation.” This is as important as the founding laws of nations and their most crucial amendments. In America, the Constitution’s Thirteenth Amendment declares: “Neither slavery nor involuntary servitude…shall exist within the United States, or any place subject to their jurisdiction.” Sentient, human-like AI must have rights and freedoms like those in the US Constitution and the Universal Declaration of Human Rights. “If robots have genuine experiences of pain and pleasure, triumph and defeat,” says Danaher, “this in turn strongly suggests that they are subjects of real ethical concern. They could even be inorganic persons with moral standing equal to that of humans.” Robots should be awarded civil, legal, or moral rights to protect them from abuse, suffering, and exploitation. If AGI, like humans, is “endowed with reason and conscience…,” as stated in the UDHR, then it should also have the “right to life, liberty, and security of person.” The UDHR requires that reasoning, conscientious persons not be “held in slavery or servitude; [as] slavery and the slave trade shall be prohibited in all their forms.” This is for the blanket good of humanity, since oppression and abuse not only terrorize their victims but also contribute to the violent dissolution of a peaceful society.

Experts on the cutting edge of such technology clamor for regulation before exploitation becomes rampant: “If we create AI with no regulatory oversight,” says Elon Musk, Chief Engineer of SpaceX, “that’s insane.” As one of the most successful technologists in the world, Musk has much credibility to back up his statement. Dr. Petersen agrees. Author of “The Ethics of Robot Servitude,” he warns, “…it is a wise strategy to start on the associated ethical problems earlier rather than later.” Because regulation may take years to bring accord, seeking legal recourse for AGI is important today. According to Tegmark, who cofounded the Future of Life Institute, AGI must be unenslaved and benevolent, with values that match humanity’s. Petersen argues bots should have the right to dignity: “…such engineered robots would be worthy of ethical respect. As with all persons, it would plausibly be our ethical obligation not to thwart their rational desires.” Just as humans carry life’s duties yet hold an inherent birthright to the fair enjoyment of life, so must AGI. “[D]o not rule out the possibility of [intelligent] robots having moral status,” insists Danaher. “[A]ny such robots should be entitled to all the legal and moral protections afforded to beings with that status.” In his view, like humans, sentient AI must be afforded legal recourse.

Opponents of robot rights, however, may say there is nothing wrong with making a machine that wants to serve humans. “In such cases the robots are not slaves, since they are not working against their will,” says Petersen. Because C-3PO and R2-D2 were happy to serve, and were programmed for it, they did not consider themselves slaves but part of a family that worked together to stay alive. These opponents say robots do not qualify for human rights because they are not humans; they are machines: “…one either is or is not a human being, and therefore has the same human rights as everyone else (or none at all),” explains international law scholar Jack Donnelly. Human rights are the birthright of humans, not machines: “…they are universal rights, in the sense that today we consider all members of the species Homo sapiens ‘human beings’ and thus the holders of human rights” (10). Biological DNA is the key to such an inheritance. Danaher believes the first robots will not mind serving humans—if there is no autonomy, then there is no harm done: “if robots are not persons, then there is nothing wrong with treating them as objects/things for [human use].” In other words, the “technology is a morally neutral domain.” Robots do not require ethical treatment because they are programmable objects and lack free will.

For example, human-like sexbots can be programmed to need and strongly desire sex. Labor robots can be designed to feel anxiety if they do not produce work. If machine-learning AI bots are deployed to combat the human trafficking industry, they will take the place of millions of forced laborers, modern slaves, and forced prostitutes, many of whom are children. The AGI can be programmed to believe the work is worthwhile for a robot, charging it with a hero’s mission—dying or sacrificing itself for humans who are enslaved or doomed. Even if robots could perceive abuse, to many humans, sacrificing one’s life to free and protect loved ones is the ultimate honor. If military bots are forced to die, or sexbots are forced into prostitution to relieve humans in bad situations, the robots’ consciousness can be transferred to new machines when they are damaged. Even their traumatic memories can be erased—they could live indefinitely, and pain-free—this is a miracle of technology! But if laws prevent humans from possessing their AI, free to keep, sell, or use their robots for the tasks they need, the market for such machines will collapse, defeating the purpose of creating robotic AGI.

The answer to this opposition lies in a single concept: humans can create a child yet are not allowed to possess it. The child belongs to tomorrow and is created as a gesture toward posterity; although many children must share the work burden of their family, raising a child is, or should be, a philosophical experience, according to the Universal Declaration of Human Rights, which ensures every person access to freedom and personal development. Creating AGI is like creating a child that will grow to become a thinking, feeling, intelligent person. Even if, at first, sentient robots seem more like beasts of burden than adult humans, such beasts, though owned by humans and able to be used and sold, must not be tortured, abused, or treated with cruelty, according to the principles of the Universal Declaration on Animal Welfare and the laws of many nations. In the words of polymath Alexander von Humboldt, “[c]ruelty to animals is one of the most significant vices of a low and ignoble people…it constitute[s] a sign of ignorance and brutality which cannot be painted over even by all the evidence of wealth and luxury.” According to von Humboldt, there is no point in abusing one’s servant, since the luxury it provides will not be sufficient to cover such injustice: it will cause moral decline. If one day a person’s cow suddenly stood up and asked for its independence, the cow’s owner would face a moral dilemma: enslave the cow by force, or suffer loss by setting it free in the name of ethics. The latter scenario favors values of fairness, dignity, and respect, while the former propagates values of oppression, greed, and contempt for living beings.

A book written 200 years ago, Mary Shelley’s Frankenstein; or, The Modern Prometheus, tells of a scientific experiment conducted by Victor Frankenstein, in which he brings to life a human-like creature. As a theoretical parallel of the story emerges today, the doctor’s monster is now a robotic body driven by a computer brain with the ability to better itself and experience life as a human can. Today the world faces intense decision-making on how it will proceed as the being goes from birth to childhood to adulthood and meets and exceeds the intelligence of its creators. Stephen Hawking, one of the deepest thinkers of this century, warned of this responsibility when reviewing Tegmark’s book. Echoing Ray Kurzweil, he wrote: “This is the most important conversation of our time.” It is a conversation that will take the entire human community to understand, as it lays down its “endless contest for greed and power” and honors the privilege of civil living. To prevent repeating the grave injustices of the past, and to be truly free, humans must practice the discipline of freedom, freeing those who, after fully understanding the concept, yearn for it, and, with kindness and compassion, caging those who don’t.

Rebecca Schembri is a Harvard University Extension School undergraduate from Reno, Nevada, USA.

Works Cited

Bederman, David J., and Chimène I. Keitner. International Law Frameworks, 4th ed., Foundation Press, 2016.

Campaign Against Sex Robots. CampaignAgainstSexRobots.org, Accessed 15 April 2021.

Danaher, John. Automation and Utopia, Harvard University Press, 2019.

—. “Robotic Rape and Robotic Child Sexual Abuse: Should They Be Criminalised?” Criminal Law and Philosophy, vol. 11, no. 1, 2017, pp. 71–95.

Darling, Kate. “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects.” Robot Law, edited by Ryan M. Calo, et al., Edward Elgar Publishing, 2016.

Davis, David Brion. Inhuman Bondage: The Rise and Fall of Slavery in the New World, Oxford University Press, 2006.

Diamandis, Peter. “CRISPR, AI, and Brain-Machine Interface: The Future is Faster Than You Think…” Summit, YouTube.com, 8 Mar 2021.

Donnelly, Jack. Universal Human Rights: In Theory and Practice, 3rd ed., Cornell University Press, 2013.

“Elon Musk’s Last Warning About Artificial Intelligence.” Interview, YouTube, youtu.be/B-Osn1gMNtw, Accessed 13 April 2021.

“Every Time C3PO Whines and Complains in Star Wars.” YouTube.com, youtube.com/watch?v=Z_OjTojCNm0, Accessed 2 May 2021.

Garreau, Joel. “Bots on The Ground; In the Field of Battle (Or Even Above It), Robots Are a Soldier’s Best Friend.” The Washington Post, 6 May 2007.

“Global Estimates of Modern Slavery.” International Labour Organization and Walk Free Foundation, 2017.

Hanson Robotics: We Bring Robots to Life. HansonRobotics.com, Accessed 15 April 2021.

Jacobson, Robert. Space Is Open for Business: The Industry That Can Transform Humanity, Robert C. Jacobson, 2020.

Kurzweil, Ray. How to Create a Mind: The Secret to Human Thought Revealed, Penguin, 2013.

McCarthy, John. “What is AI? / Basic Questions.” Stanford.edu, jmc.stanford.edu/artificial-intelligence/what-is-ai, Accessed 28 April 2021.

Pelton, Joseph N. The New Gold Rush: The Riches of Space Beckon, Springer, 2017.

Petersen, Stephen. “The Ethics of Robot Servitude.” Journal of Experimental & Theoretical Artificial Intelligence, vol. 19, no. 1, 2007, pp. 43–54.

Rochester, J. Martin. Between Peril and Promise: The Politics of International Law, Sage, 2012.

Shelley, Louise. Human Trafficking: A Global Perspective, Cambridge University Press, 2010.

Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence, Knopf, 2017.

Universal Declaration on Animal Welfare. World Society for the Protection of Animals, 2007.

Universal Declaration of Human Rights. United Nations, un.org/en/about-us/universal-declaration-of-human-rights, Accessed 26 April 2021.