There is a real problem with LLMs averaging out their responses, but this is specifically what temperature controls on outputs are for: raising the temperature flattens the token distribution, so less probable, more varied outputs get sampled.
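As a minimal sketch of what a temperature control does at sampling time (the function name is hypothetical, and this is plain-Python softmax sampling, not any particular library's implementation):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample a token index from logits after temperature scaling.

    Low temperature sharpens the distribution toward the most likely
    token (more 'averaged-out' output); high temperature flattens it,
    admitting less probable, more varied tokens.
    """
    # Scale logits by temperature, then softmax (subtract max for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw an index according to the resulting distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

At temperature near zero this collapses to always picking the single most likely token; at high temperature it approaches a uniform draw over the vocabulary.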
It's also worth noting that our own neural networks already average out our responses from our accumulated experiences. That's not to say a human is equivalent to an AI, but we do share that trait. Humans do not produce perfectly entropic, novel responses every time they utter something.
Getting AIs to the point where they can replicate human responses convincingly enough to match a specific individual's mannerisms, idiolect, and indeed their context-dependent entropy is going to be significantly harder than many think. There is no reason to believe we won't overcome these problems, though; they are entirely observable, which makes them tractable. We've already overcome issues with real-time conversational AI that would have looked impossible 20 years ago.