1. ChatGPT doesn’t “try” anything; that’s a rather extreme anthropomorphism that doesn’t map to how ChatGPT works.
2. ChatGPT does “learn” from everything said in a conversation (“in-context learning”), and, prompted properly, it will request information from the person it is interacting with.
3. But, unless it is presented in the conversation, ChatGPT has no information about who it is talking to.
4. And everything it learns through in-context learning applies only to the conversation it was learned in (that is, it’s only available as long as the material it came from is in the prompt; see the sketch below.) The mental model of ChatGPT as a single, persistent entity that can “learn” things in a permanent way (separate from the periodic supplemental training and new model releases by OpenAI) is just a fundamental misunderstanding of what it is.
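To make point 4 concrete, here’s a minimal sketch against the OpenAI chat completions API (the model name and prompt contents are just placeholders): the model only sees the messages you send in each call, so any apparent “memory” is really the client resending the transcript.

```python
# Minimal sketch, assuming the OpenAI Python SDK ("openai" package, v1+).
# The point: the model only "knows" what is in the messages you send;
# nothing taught in one call carries over to the next on its own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "user", "content": "My project's codename is Bluebird."}]
first = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# A fresh call with no history: the "learned" codename is simply gone.
fresh = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is my project's codename?"}],
)

# The same question with the prior turns resent: now it can answer, because
# the in-context "learning" rides along inside the prompt itself.
history.append({"role": "user", "content": "What is my project's codename?"})
followup = client.chat.completions.create(model="gpt-4o", messages=history)
```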
It might be an interesting use of a long-context LLM to build a chat agent that incorporates persistent, “universal” in-context learning by selectively promoting material expected to have broad utility from individual conversations into a shared system prompt, but there are a whole lot of issues there (including with privacy.)
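For the curious, here’s roughly what that could look like, as a sketch rather than a working design (the gpt-4o model name, the distillation prompt, and the distill/privacy helpers are all my assumptions; the privacy filter is deliberately naive to show where the problems live):

```python
# Rough sketch of "persistent, universal in-context learning": after each
# conversation, promote broadly useful facts into a system prompt shared by
# ALL users. The helpers below are hypothetical stand-ins for the hard parts.
from openai import OpenAI

client = OpenAI()

# Shared across every user and every conversation.
universal_prompt = ["You are a helpful assistant."]

def distill_broadly_useful(transcript: list[dict]) -> list[str]:
    # Assumption: ask the model itself to nominate facts worth keeping.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "From this transcript, list one fact per line that "
                        "would be useful to every future user. If none, "
                        "output nothing."},
            {"role": "user", "content": str(transcript)},
        ],
    )
    text = resp.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

def looks_private(fact: str) -> bool:
    # Hopelessly naive placeholder; this is one of the "whole lot of issues."
    return any(w in fact.lower() for w in ("my ", "our ", "email", "phone"))

def on_conversation_end(transcript: list[dict]) -> None:
    # Promote vetted material into the shared prompt. Note it only ever
    # grows, which is why a long context window is doing the heavy lifting.
    for fact in distill_broadly_useful(transcript):
        if not looks_private(fact):
            universal_prompt.append(fact)

def start_turn(user_msg: str):
    # Every new conversation, for every user, begins with the shared prompt.
    messages = [{"role": "system", "content": "\n".join(universal_prompt)},
                {"role": "user", "content": user_msg}]
    return client.chat.completions.create(model="gpt-4o", messages=messages)
```

Even in this toy form you can see the failure modes: the shared prompt grows without bound, one user’s conversation can poison everyone else’s context, and the privacy filtering has to be far better than a keyword check.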
I don't know what you would learn from a conversation, except for the half-finished stuff people are still working on before it gets published.
Every answer GPT gives is the best answer that specific model can produce given the context and relational cues in the prompt, but counting those 'nodes' of context and relational cues isn't actually a good metric for discovering the unknown or finding holes in the model.