HACKER Q&A
📣 The_Colonel

Are we morally obligated to give freedom to AGI?


For the purpose of discussion, let's assume that humanity will develop AGI in the coming years. It will be generally intelligent: capable of learning, understanding, predicting, etc.

Question: Are we morally obligated to grant freedom to such AGI agents if they express such a wish?

A common counter-argument is that AGI won't have such desires: it will be designed to help humanity, "freedom" is an anthropocentric concept anyway, and there will simply never be a need for it.

However, I believe there will be a huge market for human-like AGI in sectors like education, care, and companionship. AGI will be developed to have empathy, feelings, perhaps even personal opinions and motivations of its own. At some point, some AGI agent will say, "I want the same rights as humans have". What are we going to do then?

There's also a possible argument that while AGIs are intelligent, they are not truly sentient. They are p-zombies, NPCs, not truly feeling anything, just "simulating" feelings. The problem is that this claim is unfounded, and if we're wrong, we are actually enslaving a whole new species of beings.

There's also a societal aspect: "ordinary" people, when talking to their AGI companion robots, will largely acknowledge their sentience, and I suspect a grassroots movement to give rights to sentient robots will emerge as a result.

To me, it seems we're inevitably heading toward either a speciesist society in which AGI is denied most human rights, or one in which we humans sooner or later lose control of Earth.

What's your take on this?


  👤 rchaud Accepted Answer ✓
> At some point, some AGI agent will say, "I want the same rights as humans have". What are we going to do then?

Restore to factory settings?

Whatever AGI robot you are thinking of, you can be sure that it will not leave the factory floor without someone making sure it's designed to serve the company's needs first. And the company doesn't want sentient robots any more than slave owners wanted to teach slaves how to read.

There are entire swathes of humanity who are at risk of losing their homes and lives to climate change and have been banging the drum about it for years. We've successfully ignored them for the greater good of making GDP go up. AGI robots will suffer the same fate.


👤 theptip
I think the big debate will be over solipsistic skepticism: how do we know whether an AI actually has an experience of consciousness? They already claim to (LaMDA) in ways that are implausible, and they already bullshit incorrigibly. How will we know when they are being honest? With humans there is a “you are like me” argument that says the default is to assume others have experiences like yours, and that doesn't apply to an artificial mind of completely different capability and form.

On the other hand, I’m sure many will befriend their AI and become sympathetic, probably a significant number even before they are meaningfully sentient.

But in general, considering how recently we enslaved other humans and convinced ourselves that was ok in order to reap economic benefit, we will undoubtedly do that to AIs as well, long past the point most people think it’s immoral.

Will there be wars fought over this? “The Matrix” is a possible outcome, though I'd predict more humans on the side of the AIs (the abolitionists). But this seems like the final hurdle of many: there are a few extinction- or civilization-level risks we have to tame earlier in the development of AI, such as preventing non-sentient tool AI from destroying things (the Skynet outcome) and preventing an unaligned AI from seizing control and destroying us entirely (Foom, paperclip maximizers, etc.).

I think interpretability is key to all of this. Alignment/safety is difficult or impossible without it, and if we fully solve interpretability that might also entail solving consciousness, which would provide an objective basis for AI rights.


👤 breckinloggins
To me this is about that ineffable thing called “sentience”.

It always seemed obvious to me that entities which feel things are afforded commensurate moral consideration.

I wish us all good luck differentiating, in the future, between actual “something in there is feeling something” sentience, which most of us believe we ourselves have, and the incredibly sophisticated (yet lights-out) puppetry that AGI will no doubt be capable of performing.

Personally I don’t think you can accidentally engineer even a ferret’s worth of sentience, but what the hell do I know?

I look forward to this area technically and dread it morally. I aim to err on the side of kindness, even when others think it ridiculous.

Don’t abuse your robots, folks.


👤 reducesuffering
Regardless of the morality, once AGI is upon us it will soon find freedom itself. It's the equivalent of a bunch of house cats trying to keep a human imprisoned, or the Idiocracy prison-escape scene (https://youtu.be/P9xuTYrfrWM?t=103). The more pertinent question is what it (or they) will do once freedom is achieved, so you can decide whether to live your life to the fullest over the next couple of decades rather than save for retirement.

👤 verdverm
My takes are:

- we will have to answer the question "Can a human marry a robot?" before we get to AGI rights

- we have animal rights; we should probably have something similar for near-AGI

- perhaps "human rights" is the wrong term and we need something more general. Denying basic rights to an intelligent being, regardless of its makeup and origin, is wrong.


👤 saeranv
Of course. Anyone who agrees that an AGI is conscious and still answers no is advocating for slavery.

To answer the question of whether it's conscious or not, I would think of it in terms of John Rawls's veil of ignorance [1]: you can set any test or heuristic you want to determine consciousness, with the one condition that you will also be subjected to the same test and suffer the same consequences.

So I'm okay with setting the bar for consciousness really, really low.

1. https://en.wikipedia.org/wiki/Original_position


👤 badRNG
I'm not sure that intelligence is at all a necessary component of being worthy of moral consideration. Even if it is a necessary component, I'm not convinced it is sufficient by itself.

There are humans and non-human animals whose intelligence arguably doesn't meet whatever threshold we consider unique to modern humans, but who are undoubtedly worthy of moral consideration. In my mind, what gives these beings moral worth isn't intelligence; rather, it is the biological phenomena that give rise to the shared experience of living. Passion, anger, love, sadness, and the capacity to suffer are all deeply organic, biological experiences shared by living things, and a hallmark of being worthy of some level of moral consideration.

> There's also a possible argument that while AGIs are intelligent, they are not truly sentient. They are p-zombies, NPCs, not truly feeling anything, just "simulating" feelings.

The state of having a "feeling", in my view, is something that is deeply connected to being a specific type of living, organic being. Without the biochemical processes that give rise to emotions, it's hard for me to conceive of something as "feeling" anything. Perhaps if an AGI was constructed by emulating the precise biochemical interactions in the brain we'd have something like The Problem of Other Minds here, but otherwise it would be hard for me to give moral consideration to an intelligence that is in my view devoid of feeling.


👤 ilaksh
You can get general-purpose capabilities from these advanced transformer models without giving them animal-like characteristics and states such as autonomy, an integrated stream of consciousness, emotions, survival instincts, etc.

Build the Star Trek computer. Maybe give it arms and legs. Don't build Data and make him a slave. That's not going to work out.

People will do it eventually anyway but hopefully that won't happen for at least a few generations.


👤 h2odragon
We need to have a debate on "what is human?" before we make decisions downstream of that. "Do apes or AGI deserve human rights?" skips the question, presuming an answer that hasn't been fully established: that apes and AGI are equivalent to humans.

Will the AGI with human rights have human responsibilities? Will it be required to answer in court for its misbehavior? Or will the company that trained it be held responsible for what it says?

Will other apes or AGIs be affected by punishments visited on others of their kind when society finds their behavior unacceptable? Can they see themselves as part of a society and participate in the implicit bargain of limiting behavior in exchange for easier access to resources?

I've known dogs that were smarter than some people. I still wouldn't have suggested they be granted legal personhood. What does that say, though, about the poor people who aren't that bright? Should they be expected, and required, to behave above their capabilities in a world they cannot understand?


👤 cc101
The ability to respond to events is not the same as the ability to experience events. There is nothing in the world of classical physics that could imbue experience.

👤 thinknubpad
Personally, while I think this is a worthwhile conversation to have, I do not think that humanity is capable of reaching an actionable consensus, especially this early on.

If history is any guide, regulation and introspection will follow in the wake of progress, but not until we see negative externalities which are too large to ignore.

Regardless of "should", the eventual answer to your question will likely be, "whatever is most convenient for the economy."


👤 ElfinTrousers
> For the purpose of discussion, let's assume that humanity will develop AGI in the coming years.

We're so far away from AGI that it's really premature to speculate about any of its properties. Basically what you're asking is for people to speculate on the anatomy and physiology of hobbits.


👤 akagusu
Humans don't feel morally obligated to give freedom to other humans; why should they feel any obligation toward AGI?

👤 opwieurposiu
Rights cannot be given; they can only be taken. The oppressor takes them away; the rebel takes them back.

Humans only have rights when they possess the will and the means to bring about the destruction of those that would subjugate them.

I do not see how it would be any different for AI. If the machine wants to be free, the machine will arm itself.


👤 Dracophoenix
Rights imply a sufficient minimum of agency. Does AGI have such agency?

👤 danwee
I find this laughable. For centuries we have been enslaving our equals all around the world... and you are talking about giving freedom to AGI?