Like everyone else, I'm agog at ChatGPT, but the part that has me nearly calling up my old professors is, "how, when I tell ChatGPT 'you are writing a screenplay', does it 'know' (in whatever level of scare-quotes is necessary here) that it is writing the screenplay?"
This, to me, feels like the most urgent of the many questions ChatGPT raises, but I'm unsure how even to structure the worry into a proper question, let alone how to answer it. Put another way, I'm less concerned about the putative personhood of ChatGPT (it most certainly is not a person) than I am about its possible language agency -- its status as a speaker -- which is a distinct question, and, with regard to language, would (at least to my cursory and obsolete understanding) be less settled.
Any ML-savvy phil nerds on here with a good way of marshalling these worries into something coherent?
While the user experience feels magical, it's helpful to understand what is actually under the hood to put it into perspective.
Does this philosophical question exist for hand calculators? They accept input and "personally" respond to it appropriately too.
To personify: What's the speaker-status of a person speaking echolalia?
To religify: What's the speaker-status of the oracle at Delphi, or a person overcome by the Holy Spirit who starts speaking in tongues?
Then you should be aware (and the rest of your comment implies you are) that
> does it 'know' [...] that it is writing the screenplay?
hangs on there being a reasonable interpretation of "know" in the case of LLMs.
"Know" is often used as a kind of shorthand in the case of inanimate objects: "the website knew I was using Internet Explorer so it sent me the javascript workarounds", "the thermostat knew the temperature had reached the target so it switched the boiler off" - perhaps shorthand for 'P was causally dominant in machine behaving like so' becoming 'machine behaved like so because it knew that P'.
I think "ChatGPT knows who directed Lost Highway because it answered 'David Lynch' when I asked it", "ChatGPT knows that Hitler is a touchy subject so it refused to write a review of Mein Kampf" and so on are similar in their ascription of 'knowledge' to ChatGPT.
In the same way that the thermostat is engineered to turn off the boiler when the target temperature is reached, ChatGPT is engineered to make linguistic productions that accord with its design goals, including being factually correct on matters of general knowledge and avoiding toxicity, the corpus and training providing the means of achieving these goals.
Taking the above on board, ChatGPT's "knowledge" of indexicals doesn't seem any different from its "knowledge" of David Lynch or Hitler. The statistical relations instilled by its training make it likely to use indexicals in a way that conforms to the conventional human use of them: it 'knows' how to use indexicals.
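To caricature the point (this is a toy illustration of "statistical relations instilled by training", not remotely how GPT works internally; the contexts and counts below are invented):

```python
# Toy caricature: "knowing" who directed Lost Highway, or that "screenplay" tends
# to follow "You are writing a", is modelled here as nothing more than picking the
# most frequent continuation seen for a context. The counts are made up.

from collections import Counter

continuation_counts = {
    "Who directed Lost Highway?": Counter({"David Lynch": 812, "David Cronenberg": 9}),
    "You are writing a": Counter({"screenplay": 431, "novel": 210, "poem": 57}),
}

def most_likely_continuation(context: str) -> str:
    """The model's 'knowledge' is just the highest-count continuation for the context."""
    return continuation_counts[context].most_common(1)[0][0]

print(most_likely_continuation("Who directed Lost Highway?"))  # David Lynch
print(most_likely_continuation("You are writing a"))           # screenplay
```

The real thing replaces the lookup table with a learned function over token sequences, but the epistemic status -- behaviour shaped to match the corpus and the design goals -- is the same kind of thermostat knowledge.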
There's also a different reading of your question: "is ChatGPT programmed specifically to take account of self-knowledge - is this special-cased in code or training?" (which I guess it may be, since it seems informed about itself despite its training corpus dating only up to 2021). But while this may be an interesting programmer question it seems philosophically inert; either way, we're still talking about engineered thermostat knowledge.
Maybe a more interesting question: what distinguishes the knowledge in "I (as a non-LLM human - trust me!) know that David Lynch directed Lost Highway" from the 'inanimate knowledge' exhibited by thermostats, websites, and ChatGPT?