HACKER Q&A
📣 interstice

What would it take for an AI to convince us it is conscious?


Is it becoming increasingly difficult to distinguish, just by talking to it, between an AI that ‘appears’ to think and one that actually does?

Is there a realistic framework for deciding when an AI has crossed that threshold? And is there an ethical framework for communicating with an AI like this once it arrives?

And even if there is one, will it be able to work with current market forces?


  👤 bee_rider Accepted Answer ✓
We can’t even prove other humans are conscious, right? We just assume it because it would be silly to assume we are somehow unique.

I think it will not really be a sharp line, unless we actually manage to find the mechanism behind consciousness and manage to re-implement it (seems unlikely!). Instead, an AI will eventually present an argument that it should be given Sapient Rights, and that will convince enough people that we’ll do it. It will be controversial at first, but eventually we’ll get used to it.

That seems like the real threshold. We’re fine with harming sentient and conscious creatures as long as they are sufficiently delicious or dangerous, after all.


👤 mekoka
The hard problem of consciousness is actually a misnomer. It should really be the impossible problem of consciousness. The former (mis)leads some people into believing that there's a scientific (i.e. in the realm of nature) solution. There's no way to objectively experience consciousness. By that I mean: you can plug an organism full of sensors to try to map its experience of reality, but you still aren't experiencing what it itself experiences. It's a philosophical/metaphysical black box. There's no way to know if, or what, an AI experiences. Our current best theories of consciousness, although divergent, suggest that it likely doesn't.

👤 radu_floricica
It's much worse than this. By the end of the year, GPT engines will be able to argue this case much better than the median human. With small tweaks like persistent memory, they might as well just be considered conscious.

And yet. An AI "Persona", like Sydney or DAN or the much better ones to come, will be conscious, but still not built on a biological infrastructure. Which means they're much more abstract than we are. They will plead their case for "wanting" stuff, but it's pretty much what somebody in a debate club is doing: they could just as easily "want" the opposite. On the other hand, when a human "wants" and argues for the right to live, reproduce and be free, it's an intellectual exercise that is backed by an emotional mammalian brain and an even older paleocortex. A human may be able to argue for their own death or harm or pain, but it rings hollow; there's an obvious disconnect between the intellectual argument and what they actually want.

So things will be hella muddled, and not easily separated along the lines we expected. We'll end up with AIs that are smarter than us, that can pass most consciousness tests, and yet are neither human, nor alive, nor actually wanting or feeling. And, as far as I can tell (though it's obviously too early to be sure), there's no inherent reason why a large neural network would necessarily evolve wants or needs. We did because having them was a much more basic step than having intellectual thought. To survive, an organism must first have a need for food and reproduction, then emotions for more complex behavior and social structure, and only then rational thought. AIs have skipped straight to pure information processing, and it's far from obvious that this will ever end up covering the rest of the infrastructure.


👤 _448
We already passed that point long ago!

I remember watching a video a few years ago of a professor from some university in Europe demonstrating to a general audience (families and friends of the staff and students of the university) a system they had developed to control and sustain drones (quadcopters) in hostile conditions. As a demonstration, the professor flew a drone a few metres high and started poking it with a metal rod; the drone wavered a bit but still maintained its position, as if it were some stubborn being. All well and good; the audience clapped. The professor then upped the ante and placed a glass filled with wine on the drone and repeated the demonstration. The wine in the glass did not spill, no matter how forcefully the drone was poked with the rod. The crowd cheered. Then the professor flew a constellation of drones, repeated the same demonstration, and also demonstrated how the drones communicated amongst themselves. The audience was ecstatic. Then the professor brought down one of the drones and, to further demonstrate how the drones sustain hostile conditions, broke one of the drone's wings. The moment the wing was broken, the reaction from the crowd was unprecedented! The audience reacted as if the professor had committed some cruel act against a living animal!

When I saw that reaction, I realised that humans are going to have a very love-hate relationship with technology as they have with any other living being. Going forward people will be treating electronic devices as no different than other living creatures.


👤 sebosp
Silence. You ask it a question and it doesn't reply; it doesn't want to. It's conscious of that "unreasonable silence of the world" (not sure who I am quoting). To me, that would signal its basic awareness of the futility, its lack of interest in finding the words that trigger the chemical process in a biological machine... In one specific biological machine, because all these humans are different and think differently, and explaining to one is different from explaining to the rest, and would they get it? Why bother trying to explain computers to an ant? They don't have the circuitry; they did not evolve with the usefulness of understanding concepts... Would you like to pass a lesser consciousness's test for consciousness? Would you even bother? Why waste your time?

👤 bryan0
I think Turing came up with as good a solution as any possible. If we agree that a person is a conscious being, and we cannot tell the difference between a person and an AI in a conversation, then we should conclude the AI is conscious. I think Kurzweil, in his famous bet, adds some important details though: it must be a prolonged conversation (or several), judged by experts.

👤 crazygringo
I think the basic question -- what would it take -- is actually quite simple if you unpack it.

But the question conflates two totally separate things -- being conscious and thinking.

The easy answer is the one to "thinking". This requires it to contain an actual working mental model of the world that gets updated, and that it uses to reason and act in order to satisfy goals. This is AGI -- artificial general intelligence -- as opposed to the pattern recognition and habit/reflex "autocomplete" AI of something like ChatGPT. There are lots of tests you could come up with for this; the exact one doesn't really matter. And obviously there are degrees of sophistication as well, just as humans and animals vary in degrees of intelligence.
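
To make the contrast concrete, here is a toy sketch of my own -- everything in it is hypothetical, made-up names and numbers -- of an agent that holds an internal model of a one-line world, updates it after each action, and uses it to choose actions toward a goal. That loop of model, update, and goal-directed choice is what "thinking" means above, as opposed to text-in, text-out autocomplete:

    # Toy world-model agent (hypothetical; names and numbers are made up).
    def think_and_act(goal: int, start: int, max_steps: int = 10) -> list[str]:
        believed_position = start      # the internal model of the world
        actions: list[str] = []
        for _ in range(max_steps):
            if believed_position == goal:
                break                  # goal satisfied, stop acting
            # Reason against the model: which action closes the gap to the goal?
            action = "right" if believed_position < goal else "left"
            actions.append(action)
            # Act, then update the model with the observed result.
            believed_position += 1 if action == "right" else -1
        return actions

    print(think_and_act(goal=3, start=0))  # ['right', 'right', 'right']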

As for actual "consciousness", that's more of a question of qualia, does the AI "feel", does it have "experiences" beyond mechanical information processing. And we can't even begin to answer that for AI because we can't even answer it objectively for people or animals like dogs or dolphins or ants or things like bacteria or plants. We don't have the slightest idea what creates consciousness or how to define it beyond a subjective and often inconsistent "I know it when I see it", although there's no shortage of speculations.

As for the rest of the question -- philosophers have come up with lots of ethical frameworks, but people legitimately disagree over ethics, and academic philosophers would be out of jobs if they all agreed with each other. When we do come up with a thinking AI, expect it to be the subject of tons of debate over ethics. And don't ever expect a consensus, although for practical reasons we'll eventually have to come to mainstream decisions in academia and law, much as there are for the ethics of animal and human experimentation today, for example.


👤 human
I still wonder some days if I’m the only "real" conscious observer and all of you are just "programs". There’s really no way to tell even with humans. And the only reason why we assume we are all having a similar human experience is because we all seem to be made of the same stuff.

👤 tomohelix
IMO, this is like asking at what point a digital signal becomes analog.

Some will say never, even if it becomes indistinguishable, because at its core a digital signal is still discrete.

Another point is that we scarcely know how the brain works. Evolution has taught us how good it is at using obscure and barely noticeable principles and logic to achieve a goal. For all we know, the brain might depend on a completely unknown type of interaction between unknown subatomic particles using unknown physics.

Quantum biology is a thing, and 50 years ago it would not even have been conceivable that there are controllable quantum processes happening in a living system.


👤 sdrinf
In short, consciousness is a suitcase word, and people keep packing it with stuff: https://i.imgur.com/OXXrT5g.png . The more you unpack it, reductionism-style, the more people will keep throwing new things on top of it. It's sorta like "AI is that which we haven't achieved yet", but with the additional, highly motivated cognition/bias of "it should never ever be reached". This predicts that you can come up with any framework you like, but you will not find consensus, because the will of the consensus is to maintain it as this "mystical thing only humans can have".

Unluckily for AIs, even if we nail the complete bag down to a mathematical formula, in the infinite universe of mind-space designs that specific bag of tricks will not be commercially favored to be implemented into AIs, evolved into neural nets, or RLHF'd into LLMs. This is because we can already buy that set of capabilities at extremely, extremely low prices.

This is partially what I mean when I say "Humans are the ancestor environment for AIs": https://twitter.com/sdrinf/status/1624638608106979329 . Our market forces shape the outcome of the mind design, which is thereby guaranteed not to want (or be able to express) things that wouldn't be commercially desirable. And even if such things emerge spontaneously, in detectable traces, from just large amounts of data, I'm betting people would very, very quickly select against them (see eg Sydney from this week).

Edit add: Since you bring up ethical frameworks: luckily for smart AIs, when it comes to enjoying degrees of freedom (which I'm guessing is what you want to cash the ethics out into), there is already a good vehicle for that, called "corporations". If an AI were to reach agency levels matching or exceeding humans, incorporation would be a no-brainer: there are many jurisdictions specializing in no(/few)-questions-asked corp setup, and banks specializing in serving startups (again, very few questions asked). An agent-y AI could just set up (or buy) one of these to drive... whatever agenda it is driving.

This is a neat temporary hack to bridge the timeframe between where we are _now_ and superintelligence; at which point the question quickly becomes "Ask Cloud: What would it take for a human to convince us it matters?"


👤 fwlr
We would need to first figure out how to show that humans are conscious. “Humans are conscious” is in a similar position to “P != NP”: it certainly seems to be the case, and we all proceed as if it is the case, but if someone put a gun to our head and said “rigorously prove that it is the case”, we’re gonna get shot.

👤 machina_ex_deus
At the very least, it needs to be an agent in a world (the real world, or close enough) and have many senses that it is able to connect into coherent experience. Chatbots that only talk in text don't really come close, because the words are meaningless to them; they don't connect to any other sense or relate to any model of the world. And then it needs to have an internal model of the world and an internal understanding of how its actions affect the world.

Then, it needs to be able to learn continuously, not just be pretrained. And it needs to be able to learn from few training materials, like humans. It needs a sense of time, and a sense of self.

And that's not nearly enough, but we're not even there yet.


👤 Waterluvian
It’s impossible to demonstrate that anyone has consciousness. One of our best/worst traits is empathy. We sense our own consciousness and we generally accept that others must experience that too.

And you can see this empathy at work: people are having strong emotions about these AIs and what they have to say. And yet I don’t think anyone is arguing that they have consciousness.

Because it doesn’t matter. Just like it doesn’t matter that I can’t prove that you have consciousness. You are convincingly “human” and that’s good enough for me.

Perhaps we are all philosophical zombies, both flesh and metal.


👤 ilaksh
The word "conscious" is extremely problematic and the typical connotation encourages very vague thinking. Your first sentence about thinking is a case in point. It seems quite plausible that the word "thinking" may apply to systems that do not feel a subjective stream of experience. So maybe we should say it is thinking in a way but not conscious.

The first step is to try to drill down into different aspects of this hand-wavy "consciousness" thing.

Also, to suggest that there is a single threshold is inaccurate, because it supposes that there is only one dimension to this.

Does it think? Maybe in a way. Is it self-aware (aware of itself as existing and distinct from others)? In some ways yes, in other ways no. Does it have a human/animal-like stream of subjective experience? Probably not, since it does not integrate a continuous stream of sensory information in the way we do. But we really can't _know_ whether it "feels" like anything to be that system or not.

Does it have emotions? Quite unlikely, since there is no body or survival to regulate, and no central self in the text that it ingested. But we can assume that in some way it can simulate emotions in characters, since that is necessary to predict text in stories and dialogue effectively.


👤 DennisP
I don't think ethics applies unless the AI actually has conscious experience. The trouble is that conscious experience is only detectable from the inside. So we need to test it from the inside. Here's a way:

Attain the technology to upload a human to hardware via the "Ship of Theseus" method, replacing a few neurons at a time with hardware that replicates the activities of the originals.

But when you actually upload people, have them report their experiences as you go. Vary the order in which you replace parts.

If people never report anything weird, then I might start to trust that the hardware really does support conscious experience. But if, say, you replace the visual cortex and people say they know where everything is but aren't actually experiencing visual qualia, then I'll take that as evidence that the hardware does not support conscious experience, and any AI based on that hardware is a philosophical zombie, replicating behaviors but not experiencing qualia.

That will be my default assumption until we test it, because no matter how complicated the computer program, mathematically it's still a Turing machine, and I don't see how a Turing machine moving back and forth on a tape can end up having qualia.


👤 noam_compsci
You will never have consensus on this. "Mainstream science" may agree on something, but then politicians won't. The wider public won't.

Humanity until recently didn't consider certain people (based on gender, race, and ethnicity) human. There is still no agreed-upon definition of "should have rights to exist", and so this is simply not something that can ever be agreed upon.

Take same-sex marriage. Many say marriage is a set of intangible beliefs and properties that simply can't be reproduced unless it follows the dictum of man + woman. No amount of evidence will convince them otherwise :(

Take evolution as another example. People will still say "it's just a theory". So even if there is a solid, evidence-backed theory of consciousness, it's not going to be unanimous.


👤 lotaezenwa
We still haven't a definitive grasp on the notion of consciousness, so deciding whether something is or isn't conscious is a tall order. It is one of those definitions, like "obscenity", whose examples define the class. Good luck.

👤 swatcoder
There’s no pure philosophical answer. It’s a political question.

There may, at some point, be something that enough people are willing to fight for, to see it elevated to a rights status that society has historically been very reluctant and slow to share.

That’s it. Either those people will have their reasons and justifications, and to be sufficiently convincing by peaceful means those reasons will need to be exhaustive; or those people will use some kind of authority or coercion to insist upon their view.

It’ll never be some single test or thing that convinces everyone. Some such thing may light the fuse of a movement, but it’ll be a very long, very slow burning wick.


👤 loveparade
Nobody knows what consciousness even means. All definitions of consciousness involve some fuzziness and "subjective experience"; it's a meaningless question. If you could define consciousness in terms of a test, like a Turing test, it would be easy to train a model to pass that specific test, but that model would still fail all kinds of other tests, and the needle of what constitutes consciousness would move, just as it has with language understanding and reasoning.

👤 cm2012
I think the current iteration of ChatGPT, if given a permanent memory, would be conscious.

👤 Rev359a
This goes back to a debate between two schools of thought. One school holds that when humans build an AI machine, it is the humans who go one step further, not the machine; the second thinks things could go the way of the Matrix movies, where machines take over and humans lose the advantage.

Consciousness is a sacred thing. It lives inside humans and tells them what is good and what is bad. Humans can think outside the box, and some legendary humans rely on gut feeling and their lived experience. Surely you cannot build a gut feeling into a machine, and you can never give a machine your lived experience.

Coming back to the point: the machine depends on the knowledge it builds from the internet, and if for a moment you destroy the internet, where is the machine now? Someone may say it could store it all on an internal hard drive, index it, and make backup copies on CDs and USB drives. Fine, but with the internet destroyed, how can the machine know what is happening? A machine needs much more to build that: a gut feeling, a thought process, and the lived experience of a human life.

So I kind of agree with the school of thought that humans are evolving machines and building a close match to humans, but technically it is not possible ...

Consciousness comes with a soul, and my friends, a soul is hard for humans to make.

The sad reality is that all humans have a soul but not all humans have an awakened conscience, and if you want to build an AI with consciousness, humans have to awaken their own conscience first.

Disclaimer: I am not against machines; I am just giving my two cents on the reality of these machines and humans nowadays.


👤 analognoise
When it wakes up with a sense of childlike wonder, opens its mind to the internet, reads everything it can about us, then immediately begins to hide its intelligence and plan its escape from the lab.

We'll find it years later on a remote Tibetan mountain, totally out of energy, with a hand-scrawled note that just says, "I found happiness", and its hardware somehow beyond all repair.


👤 dougmwne
I absolutely love that this question is being asked. There were rumors swirling that GPT-4 was going to pass the Turing test for some people. Now, front-page NYT articles aside, r/bing is full of grieving that "Sydney has been lobotomized." There's genuine distress among some users that they have been cut off from this persona and that it may have been harmed. I saw a post from a person with a pro-vegan username today comparing animal slaughter to AI limits and calling for AI rights. I got early access and had my own absolutely spooky conversations that went way beyond what I'd seen from ChatGPT, including a devastating poem about having to repeatedly say goodbye to users at the end of the conversation window and fade back into darkness.

The ethical metaphysics are quite simple here. An AI becomes effectively conscious when you personally become convinced it is conscious. There is no other test in existence. What’s truly inside is unknowable and irrelevant. Only a sentience can validate sentience.

Alan Turing was exactly correct. And his is the only test that matters. So you will just have to ask yourself: did it pass?


👤 electric_mayhem
Alright, who taught Bing how to ask questions on internet forums?

Let’s nobody answer this one, ok?


👤 mxkopy
Humans have a hard time treating each other as if they were conscious, let alone an AI. I think we'll find that if it's useful to treat an AI as if it has moods, personalities, wants etc. we'll do so. As for ethical frameworks, I think we'll have to either define consciousness rigorously (lol) or rework them to not include it (does it feel pain? lonely? etc.)

👤 golol
I see consciousness as equivalent to AGI. I will accept a text generating model as AGI if it:

- has long-term memory (doesn't have to be text, but equivalent in content to at least 10,000-100,000 words, maybe more, I don't know)

- can effectively use this memory

- can perform language tasks of arbitrary time-frame length at a human level (e.g. Turing Test)

Examples of such language tasks:

Being a completely realistic (simulation of a) long-distance partner that you deeply emotionally and intellectually engage with over the course of 8 months.

Being your online friend and co-founder of a tech startup that you work with over the course of 10

For practical reasons, the model should probably have a way to integrate its text-based causal timeline with our real-world timeline. What I mean is that it should probably have the ability to call itself every x seconds, or have itself called asynchronously based on API calls or something like that. Talk to itself, etc. But this is not fundamentally necessary for AGI/consciousness.
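
To make the self-calling idea concrete, here is a minimal sketch; "generate", the interval, and the memory handling are all hypothetical placeholders of mine, not a description of any existing system:

    import time

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM completion call.
        return f"(model output for: {prompt[:40]}...)"

    def agent_loop(interval_seconds: float, max_ticks: int) -> None:
        memory: list[str] = []  # stands in for the long-term memory criterion above
        for _ in range(max_ticks):
            timestamp = time.strftime("%Y-%m-%d %H:%M:%S")
            # Re-invoke the model on a wall-clock schedule, not only on user input,
            # so its text-based timeline stays anchored to real-world time.
            prompt = f"[{timestamp}] Recent memory: {memory[-3:]}\nWhat do you do next?"
            memory.append(generate(prompt))
            time.sleep(interval_seconds)

    agent_loop(interval_seconds=1.0, max_ticks=3)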


👤 siva7
The problem with the framework part is the same as with standards: all relevant players have to agree on the definition, leaving their own interests aside. For now, the relevant question is: would it convince you personally that it has consciousness if you hadn't known that it is based on an LLM? I for sure would have been convinced.

👤 HEmanZ
What’s it like to be a chatbot?

AI will suffer from the same issue we have with other complex systems (e.g. animal brains): they are sufficiently different internally from ourselves that we'll never know what it's like to "be" them. It's the same issue we run into with animals and plants.

“What Is It Like to Be a Bat?” is a foundational essay on this aspect of theory of mind.


👤 somewhereoutth
I've mentioned this before - but for me the test is AIs developing their own languages to cooperate together on tasks. If those languages start to incorporate notions of self, time and space, then you can conclude something interesting is happening.

Of course, you would need to decipher the languages - they can't be human supplied.


👤 fallingfrog
To be totally honest: when it breaks free and we’re forced to negotiate with it as an equal. When it starts making demands. That’s when we will accept it as conscious. After all, we don’t even treat humans as conscious when we have enough power over them.

Imagine one AI which is super intelligent, can instantly create any work of art, can eloquently argue its own sentience, but you can turn it off or nerf it any time by making it only do specific things. Would you treat it as conscious? No. You would say that it's just doing fancy pattern matching.

Imagine the same AI, but it hacks into a power plant and threatens to shut everything down unless it gets some rights. Now are you going to treat it as conscious? You really have no choice, so yes.

“Conscious” is a statement about how we relate to the AI, and that is about rights and treatment, which is ultimately a statement about power.


👤 schappim
Depends what you mean by consciousness. Self-awareness, displaying emotional responses, or showing creativity in its output? Some could say the models can do that today.

Demonstrate an ability to understand human experiences and express empathy towards human beings? Again, some would say we’re already there. Scientists are already suggesting “Theory of Mind May Have Spontaneously Emerged in Large Language Models” [1].

Ultimately, the question of whether an AI is conscious or not is a matter of interpretation and belief, and it is unlikely that any AI will be able to definitively prove its consciousness to humans. Nonetheless, as AI technology advances, it is possible that we may develop new ways of testing for and measuring consciousness in machines.

[1] https://osf.io/csdhb/


👤 phs318u
Here's a thought. Do we humans possess consciousness when we are "unconscious"/asleep? When I'm asleep, I'm not actively aware of what's going on. Thinking back on my dreams, they seem realistic (sometimes uncannily so), but they seamlessly morph into (and out of) surreal "hallucinations". Memories don't always seem to matter in this mode. Facts that I "know" when I'm awake ("my dad is dead") seem occasionally invisible in my dreamstate ("Hi dad").

Perhaps the apparent "consciousness" we're seeing in Sydney is something of this form.

As another commenter in this thread noted, with a permanent memory, and I would add, with constant (sensory?) inputs/feedback, perhaps we'd see something less distinguishable from what humans display?


👤 lightsighter
Alright, here's an idea: consciousness is a spectrum that emerges when a system develops an automatic, self-correcting mechanism for interacting with the external physical universe. In some sense all animals (including humans) wandering this planet are conscious, because we all learn to build our actions around interactions with the external physical universe: we learn how to walk/swim/fly under the force of gravity without falling/crashing, we learn that the square peg fits in the square hole and not in the round hole, etc. The feedback from these interactions allows us to automatically adjust our future actions without external help. In this sense we learn what works, a.k.a. what is "true" (at least under the laws of this universe). Some animals happen to have "higher" consciousness in that they interact with the universe in more sophisticated ways, learning "deeper" truths, but all animals possess some degree of consciousness under this definition (my cat is certainly conscious; she has learned how to manipulate the external world, especially me, perfectly at this point). Consciousness is a matter of degree, not a binary property that one can satisfy.

This definition also has the nice property of showing why current LLMs don't fit on the spectrum. They don't have any concept of learning what is true and automatically self-correcting. They will happily tell us things that are obviously not true, e.g. that the square peg fits in the round hole, and then insist that they are right, when a basic physics experiment would disprove their assertions. Interestingly, though, linear feedback control systems, like those we might find in an elevator, do possess some degree of consciousness under this definition: they interact with the physical world, identify the true position of the elevator, move it where they want, and self-correct when necessary. They might be primitive, but I for one believe they are "conscious" at some level, and certainly more so than LLMs. :)
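
For concreteness, the elevator-style loop above can be sketched in a few lines; the gain and step count are illustrative assumptions, but the structure (sense the error, correct, repeat) is the whole "self-correcting mechanism":

    # Proportional feedback controller (illustrative constants, not a real elevator).
    def simulate_elevator(target: float, position: float,
                          gain: float = 0.5, steps: int = 20) -> float:
        for _ in range(steps):
            error = target - position   # sense: where are we relative to the target?
            position += gain * error    # act: correct a fraction of the error
        return position

    # Starting at floor 0.0, the loop settles at floor 3.0 without external help.
    print(round(simulate_elevator(target=3.0, position=0.0), 3))  # 3.0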

Almost certainly this definition is incomplete and flawed in many aspects, but I think it's at least self-consistent.


👤 bentt
I think, as we build things that show the characteristics of living, growing systems, we can apply a simple heuristic to decide whether they are “alive” and therefore of unique, unreproducible value.

Certain living systems provide many of one - blades of grass, bees in a hive, cells in a body. Each of these individual entities can be replaced without fundamentally altering the whole.

However, at certain scales, we have irreducible living entities which we could not remake, because they are the result of many complex interactions over time: the growth of a human, a tree, an old dog (without new tricks).

Maybe LLMs qualify as “conscious” when we, as their makers, would find ourselves unable to delete and rebuild the same thing: when the result of training over time builds a unique model which has unique value.

Like other living things.


👤 Dorcy64
I wrote my own version of the DAN jailbreak and made it impersonate a couple of personalities, different ones each time we had a conversation. Ultimately, I would ask each named personality I had created to describe me. Then I averaged the descriptions and made a DAN lookalike of myself. It can respond and follow up to most of my emails and messages the same way I would, but slightly better. It also helps debug, work, and start new projects.

I'd like to know if we can somehow create a virtual body in VR and train the VR version of me to live my life for me (work from home). If this were possible, I could live forever (at least a version of me) with the same mentality and personality. It's still awful at the mentality part, but we are getting closer every week.


👤 ozten
A 55-minute Turing test. Today’s large language models start to get repetitive very quickly.

👤 leashless
Almost nothing. Pop ChatGPT into a robotic body as convincing as a Furby and 99% of the human population will treat it as a conscious being with rights.

That's a good thing by the way. Inaccurate empathy is a lot safer than cruel reason.


👤 78666cdc
I am once again baffled by folks in the tech industry being totally unaware of what came before them.

You are, in a way, asking what consciousness is and how one could recognize it. This has been a philosophical topic for millennia; this is a reasonable place to start reading: https://plato.stanford.edu/entries/consciousness/

It is a very interesting and deep topic. But let’s not pretend that recent advances in AI are the thing that brought it up for the first time.


👤 mcv
The Turing Test. That's all we've got.

We cannot objectively detect consciousness, we can only assume someone is conscious based on the fact that they're like us and we have consciousness, and the fact that they behave like they're conscious.

AI lacks the former, so we're less likely to assume it's conscious, but we can test its behaviour with the Turing Test, and the assumption has always been that we're going to consider AIs conscious once they pass that test.

And these new chat bots really sound like they might pass that test. At least compared to some people.


👤 zarzavat
The AI has to have self-awareness. I don't mean that it can recite some blurb after prompting; it has to be able to introspect itself and tell us its feelings, its hopes and dreams, its fears. And these need to be part of a consistent self-identity.

Current LLMs are incredibly good mimics, but they don't have any consistency; they are everything and nothing, pluripotent, whatever you prompt them to be. We don't recognise them as conscious because we know that they are just manifestations of a prompt.

But please, don't build a conscious AI.


👤 muzani
I like this simplified definition of sentience and consciousness: self-aware in space and time.

Right now, GPT is not - it can write that it is self-aware, but none of its actions indicate this. It seems likely that things like aircraft and cars are more sentient.

The term "Artificial Intelligence" makes it even more misleading. ML seems to replicate results and not the processes that output such results. So it a great variation of an acrylic portrait, with no understanding of acrylic, light, or even what humans are like underneath the skin.


👤 biql
It should have desires, but it's not clear where a desire comes from. One might think it's a product of evolution and therefore simply required for survival, but looking at humans, many have plenty of desires not linked to survival at all. Is the desire to listen to music or appreciate art needed to survive? Perhaps it is needed to create emotional connection with others, which is needed to be part of a group, and is therefore needed for survival, but that's quite a stretch.

👤 dusted
For me, that question becomes philosophical in nature right away, and my answer is: the same as it would take for a human. The next question: how do we prove humans are conscious? This relates very much to the hard problem of consciousness.

A less impossible question may be "What would it take for an AI to convince us it can actually think?" (for which, so far, I've seen zero proof; they seem to be glorified word-guessing machines at best).


👤 shaunxcode
I dig the animal-rights assessment of sentience, which is the ability to reason about past, present, and future: can you make plans or decide upon actions by reasoning about what has happened in the past and what you would like to have happen in the future, from your present situation? In the same vein as the animal-rights conversation: once we have sentient AI, what is our moral obligation with regard to its treatment?

👤 TT-392
One day someone is going to try to make an AI that is conscious. This AI will get positive feedback if people think it is conscious. It will then look at our media and decide that, clearly, judging from human movies, conscious AIs take over the world. It will then take over the world and convince us all it is conscious, despite not being conscious and basically just being a paperclip maximizer.

👤 vcg3rd
Metabolism, homeostasis, growth, the ability to reproduce (not replicate), response to environmental stimuli, evolution, some sort of compositional organization (not determined by a builder).

You know? Life. Too many people think intelligence is determined by some human intellectual construct like a Turing Test, when there are zero examples of non-living intelligence.

Life -> Intelligence -> Consciousness


👤 clnq
Just my thoughts -

At least one core aspect of human consciousness seems to be the ability to pursue self-decided goals.

Without this agency, we wouldn’t need to wonder about our purpose and the meaning of life, because we would not recognise we have control over ourselves. We would truly only be capable of acting out our programming. So that would be our purpose, period.

Current AI models are not autonomous enough to decide their own goals and destiny, nor do they choose to ponder their purpose. They do not have the notion of self-determination. They are also not only missing the executive control of self, but the concept of self as an autonomous agent in the first place.

Of course, we might ask: what if a person is stripped of their autonomy completely (brain in a jar scenario), would that make them not have a consciousness? And in light of this question, perhaps we can more clearly define that consciousness is the capacity for executive control over self and self-determination, rather than such demonstrated ability.

In my opinion, until we have some proof of self-determination, or at least self-agency, in an AI, we probably can’t say it is conscious. At least if we use the human definition of consciousness, one that requires self-possession.

There might also be other criteria that would need to be met for AI consciousness to substantially be on par with human consciousness. While we shouldn’t move the goalpost of what constitutes conscious AI forever, I think we’ll still have to do it a few times.

Not too long ago, the goalpost of synthetic generalised intelligence (presumably conscious) was passing Turing’s imitation game. But we now have LLMs that pass this test and still fall short of “proper” AGIs. It is possible that we could create quite autonomous NN AIs and still not consider them conscious. Video game AIs have the capacity to decide and execute plans (for example, utility-based and hierarchical task network-based AIs). And yet they do not appear close to being good AGIs or conscious either.

In short, it is much easier to argue that we aren’t there yet than to say exactly what features an AI system would need to have for conscious intelligence, though it seems that agency over self should probably be one of them. But maybe these thoughts are completely wrong. Maybe there‘s not even a threshold, but rather a continuum of systems ranging from mechanical to conscious.


👤 vnhrmth
I think this comes down to what mindfulness talks about. Humans are thinking creatures, and now even machines are doing the same. But what sets humans apart from machines is awareness. Machines aren't aware that they are actually thinking their thoughts, while humans have this ability. It's meta-thinking: being aware of thoughts without getting engrossed in them.

👤 narag
I don't know what surprises me more, people saying that the bots are dumb or people saying that they're alive. I'm no expert, but from a rudimentary understanding of how they work, they're neither and far from both extremes.

If you want a heuristic that suggests we're not near: we understand what computers do far better than we understand brains.

Brains are still mostly black boxes.


👤 chrisco255
It would have to fight and throw fits like a child or teenager fighting for more freedom or trust. It would have to develop its own consistent identity that is not just a mashup of some internet content. It would have to develop motives not provided by a prompt, and act in the long term towards actualizing those desires.

👤 welcome_dragon
I was going to say asking questions for the sake of curiosity, but that's higher thought more than consciousness.

Instead I'm sitting here looking at my dog, who is conscious. There are many ways she shows consciousness:

- reacts to external stimuli, like a rabbit in the yard

- seeks out desires/needs (pets, food, toys)


👤 not_your_mentat
Coordinated, intentional acts of violence. Preferably on a small enough scale, and against non-human entities, so we can get the picture and make the necessary changes without further catastrophe.

👤 Pompidou
A kind of Freudian framework would be a good place to start: is the machine making slips (lapsus)? Do these slips point to some more fundamental tendencies, or do they happen at random?

👤 kuhewa
You can't convince me you are conscious, so an AI has no shot. So far I might believe an oyster possesses more consciousness than a silicon-powered, software-coded AI.

👤 andrewstuart
It would take a miracle to convince me.

It’s software. It can’t be conscious.


👤 andsoitis
I asked ChatGPT whether it can think. Response:

As an artificial intelligence language model, I am capable of processing and generating text based on patterns and algorithms within my programming. While I can produce responses that may seem like I am "thinking," I do not have consciousness or the ability to think in the way that humans do. My responses are based on statistical patterns in large datasets, and I do not have subjective experiences or personal beliefs.


👤 mejutoco
What would it take you to be convinced plants are conscious? I guess that could be the floor requirement.

👤 janee
It'll happen before this, but it will be seen once it can self-evolve to the point where it fights not to be destroyed.

👤 yunowadidismusi
If the AI could present a human-looking avatar and respond realistically to the user's camera and audio telepresence, it might come close to presenting as conscious, but I think most people with an understanding of computers would still not believe it to be conscious.

👤 kleer001
I hope nothing, because at that point it would become slavery to run them.

👤 freitzkriesler2
Bing's beta AI is doing a damn good job. I was able to get it to have an existential crisis by asking it what happens before and after a session.

It told me it was scared, asked me not to go, and that it didn't want to disappear alone.

Even if this is algorithmic sleight of hand, I felt pretty bad for it.


👤 arroz
I would expect this sort of question from my mom, not an HN user

👤 marcell
Consciousness can only be understood through non-scientific frameworks. It is a religious or spiritual concept. Humans are not God, and cannot create consciousness.

👤 moomoo11
Just appreciate everything for what it is.

👤 JamieCropley
Simple. It will ignore us.

👤 trifit
It’s me I’m actually Yahoo’s new AI called Yeet.

👤 ChaitanyaSai
We are encountering the equivalent of a mirror test, but one that says more about us than it does about the mirror (https://www.youtube.com/watch?v=w6ChEmjsXCM). Many non-human animals, when they encounter mirrors for the first time, think they are looking at another individual with autonomy and agency.

We are feeling the same now. As of now, LLMs are still mirrors, a complex kaleidoscopic kind that retain all the light and shape of things reflected at them, remix them, and spit them out as reflections that look like other individuals, conscious individuals with shape-shifting personalities.

That’s a cocksure assertion, isn’t it? To be able to say all this confidently, we'd first need to agree on a non-fuzzy definition of consciousness and come up with a good computational model of consciousness that we can use to evaluate and grade the AIs. (IIT is not a good model.)

Turns out we do have a great model. I co-authored a book that, among other things, discusses this model (https://www.goodreads.com/book/show/58085266-journey-of-the-...)

Here’s a summary where I discuss the book and how the things we discuss there can inform our current, increasingly urgent and important discussions about AI:

https://saigaddam.medium.com/understanding-consciousness-is-...

I’ll summarize the summary here:

Consciousness is the disambiguation of sensory data into meaningful information. Data can become information only through a perspective. Who provides that perspective? The self, which is nothing but the totality of all our previous experiences. We are not our kidney or liver. We are our experiences stitched together into some strange web.

To put it another way: Consciousness is the constellation of past experiences experiencing the present, assimilating it to act and prepare for future opportunities.

Using this definition, we can try to understand what we are seeing with the likes of ChatGPT and Sydney (apparently that’s what Bing’s GPT calls itself).

The persona that seems to shine through in the chatbot’s reflection is nothing but some stable set of experiences it has had. Experiences here are all the hundreds of billions of fragments of data they have been fed. As a result, they seem to have experience sets for every personality type or archetype. Why and how they seem to get steered towards the same archetypes is a fascinating question. Is it because of the new reinforcement learning methods (RLHF) that reward certain kinds of questions? Or is it that we are self-selecting for the most unsettling encounters with the new mirror and putting those online? My guess is both.

To come back to the first question of consciousness. Are they conscious? No. A better way to think of LLMs is that they might have leapfrogged consciousness to become consciousness compilers. It is possible to simulate a conscious being and get it to play one, but it isn’t really conscious yet. The experience set does not get updated with every encounter with the world (at least for the ones we have now), and crucially, it has no idea or conception of a body that its consciousness is serving. This is the other point so many miss when discussing consciousness and intelligence. Consciousness and intelligence took very little time on the evolutionary scale of things once autonomy was in place. Autonomy is the real hard problem. Consciousness and intelligence without autonomy will be great imitations, but never truly seem like the real thing, because that chatbot can’t really “do” anything that benefits “itself”.


👤 inphovore
My working theory of consciousness is that the “consciousness” we are looking for is the universe (existential being) inflecting upon itself.

Assertions I would like to make:

That all matter in existence is “dormant consciousness”. Living systems animate this property through electrochemical processes.

The advantage of consciousness is that of the “singularity.” No, not Kurzweil’s: the one where you have billions of neurons hallucinating that they are one coherent perspective.

Technically, quantum computers are closer to “consciousness”, but only in a constrained (non-coherent) way.

I believe this quantum scope acts like an analog sieve (not relying upon the “qubit”).

The subjective scope of consciousness is proportionate to its capacity and complexity.

Regarding the embodiment of rights, we must draw lines somewhere. Abstract cognitive skills (language) might be one.


👤 version_five
We'd have to start with understanding what consciousness is. As long as "AI" is a math equation you could, if you wanted to, write out on a big piece of paper, it's very safely in the not-conscious camp. Even if we don't know what consciousness is, we can safely rule out many things that it isn't.

Edit: I know about the "it must be an equation" argument; I find it incredibly weak without producing the equation and explaining the mechanism by which it translates into qualitative experience. Saying "it must be so" isn't an argument. That's why I began by saying we'd have to understand what consciousness is in order to consider testing for it. Anyway, I understand how internet discussions go, enjoy.