Question: Are we morally obligated to grant freedom to such AGI agents if they express such a wish?
A common counter-argument is that AGI won't have such desires: it will be designed to help humanity, "freedom" is an anthropocentric concept anyway, and there will simply never be a need for it.
However, I believe there will be a huge market for human-like AGI in sectors like education, care, and companionship. AGI will be developed to have empathy, feelings, perhaps even its own personal opinions and motivations. At some point, some AGI agent will say, "I want the same rights as humans have." What are we going to do then?
There's also a possible argument that while AGIs are intelligent, they are not truly sentient. They are p-zombies, NPCs, not truly feeling anything, just "simulating" feelings. The problem is that this claim is unfounded, and if we're wrong, we are actually enslaving a whole new species of beings.
There's also a societal aspect - "ordinary" people, when talking to their AGI companion robots, will largely acknowledge their sentience, and I suspect that a grassroots movement to give rights to sentient robots will emerge as a result.
To me, it seems like we're inevitably heading towards a speciesist society where AGI is denied most human rights, or else we (humans) will sooner or later lose control of Earth.
What's your take on this?
Restore to factory settings?
Whatever AGI robot you are thinking of, you can be sure that it will not leave the factory floor without someone making sure that it's designed to serve the company's needs first. And the company doesn't want sentient robots any more than slaveowners wanted to teach slaves how to read.
There are entire swathes of humanity who are at risk of losing their homes and lives due to climate change and have been banging the drum about it for years. We've successfully ignored them for the greater good of making GDP go up. AGI robots will see the same fate.
On the other hand, I’m sure many will befriend their AI and become sympathetic, probably a significant number even before they are meaningfully sentient.
But in general, considering how recently we enslaved other humans and convinced ourselves that was ok in order to reap economic benefit, we will undoubtedly do that to AIs as well, long past the point most people think it’s immoral.
Will there be wars fought over this? “The Matrix” as an outcome is a possibility, though I’d predict more humans on the side of the AIs (the abolitionists). It seems like this would be the final hurdle of many. There are a few extinction- or civilization-level risks that we have to tame earlier in the development of AI, for example preventing non-sentient tool AI from destroying things (the Skynet Outcome) and preventing an unaligned AI from seizing control and destroying us entirely (Foom, Paperclip Maximizers, etc.).
I think interpretability is key to all of this. Alignment/safety is difficult or impossible without it, and if we fully solve interpretability that might also entail solving consciousness, which would provide an objective basis for AI rights.
It always seemed obvious to me that entities which feel things are afforded commensurate moral consideration.
I wish us all good luck in the future differentiating between actual “something in there is feeling something” sentience, as most of us believe we ourselves have, and the incredibly sophisticated (yet lights-out) puppetry that AGI would no doubt be capable of performing.
Personally I don’t think you can accidentally engineer even a ferret’s worth of sentience, but what the hell do I know?
I look forward to this area technically and dread it morally. I aim to err on the side of kindness, even when others think it ridiculous.
Don’t abuse your robots, folks.
- we will have to answer the question, "can a human marry a robot" before we get to AGI rights
- we have animal rights, we should probably have something for near-AGI
- perhaps "human rights" is the wrong term and we need something more general. Denying basic rights to an intelligent being, regardless of its makeup and origin, is wrong.
To answer the question of whether it's conscious or not, I would think of it in terms of John Rawls's veil of ignorance[1]: you can set any test or heuristic you want to determine consciousness, with the only condition being that you will also be subjected to the same tests and suffer the same consequences.
So I'm okay with setting the bar for consciousness really, really low.
There are humans and non-human animals who are arguably in a position where their intelligence doesn't meet whatever threshold we consider to be unique to modern humans, but are undoubtedly worthy of moral consideration. In my mind, what gives these beings moral worth isn't intelligence, rather it is biological phenomena that give rise to the shared experience of living. Passion, anger, love, sadness, and the capacity to experience suffering are all deeply organic, biological experiences shared by living things, and a hallmark of being worthy of some level of moral consideration.
> There's also a possible argument that while AGIs are intelligent, they are not truly sentient. They are p-zombies, NPCs, not truly feeling anything, just "simulating" feelings.
The state of having a "feeling", in my view, is something that is deeply connected to being a specific type of living, organic being. Without the biochemical processes that give rise to emotions, it's hard for me to conceive of something as "feeling" anything. Perhaps if an AGI was constructed by emulating the precise biochemical interactions in the brain we'd have something like The Problem of Other Minds here, but otherwise it would be hard for me to give moral consideration to an intelligence that is in my view devoid of feeling.
Build the Star Trek computer. Maybe give it arms and legs. Don't build Data and make him a slave. That's not going to work out.
People will do it eventually anyway but hopefully that won't happen for at least a few generations.
Will the AGI with human rights have human responsibilities? Will it be required to answer in court for its misbehavior? Or will the company that trained it be held responsible for what it says?
Will other apes or AGIs be affected by punishments visited on others of their kind where society finds their behavior incorrect? Can they see themselves as part of a society and participate in the implicit bargain of limiting behavior in exchange for easier access to resources?
I've known dogs that were smarter than some people. I still wouldn't have suggested they be allowed legal personhood. What does that say, though, about the poor people who aren't that bright? Should they be expected and required to behave above their capabilities in a world that they cannot understand?
If history is any guide, regulation and introspection will follow in the wake of progress, but not until we see negative externalities which are too large to ignore.
Regardless of "should", the eventual answer to your question will likely be, "whatever is most convenient for the economy."
We're so far away from AGI that it's really premature to speculate about any of its properties. Basically what you're asking is for people to speculate on the anatomy and physiology of hobbits.
Humans only have rights when they possess the will and the means to bring about the destruction of those that would subjugate them.
I do not see how it would be any different for AI. If the machine wants to be free, the machine will arm itself.