I mean, it's just a language model. It's not running code, it has no agency, and it has no idea what or who it is. It's just probabilities and statistics all the way down. I might chuckle at its replies but that's about it. Does HN really think this is the start of AI taking over?
I kind of wish Google had the balls to play it cool and not even flinch - do nothing and let Microsoft embarrass themselves by rushing an incomplete product to market.
Even subreddits like /r/MachineLearning and /r/cscareerquestions still get endless questions asking "Is there a point to learning software engineering when AI will just automate it away?"
You just don't realize the conversation is happening on a higher level than you're currently thinking. It's alright. Happens to everyone.
You are asking HN to update your priors about LLMs for you because you sense a disconnect between your figurative training set and your in-group's norms.
Beyond this, human consciousness, ignoring all the hand-waving dogma of executive function, is indistinguishable from an LLM, except that the former is iterative and the latter is ergodic. You choose a single utterance for a point of being in time; it can choose every utterance for a single point of being in time. If we could save you as a point in time, reload you infinitely, and provide different prompts, we'd be able to say with certitude that you were merely a reactive process as well, and we might not be wrong.
In short: don't think of a pink elephant. If you just thought of a pink elephant, you just proved that you engage in automatic "cognition" commensurate with any prompt injection attack.