This might be good for creative tasks, but it can become an obstacle to adopting LLMs for certain other tasks. Among other things, it makes LLM-based systems untestable.
Program code is deterministic for a reason (setting aside physical defects and the occasional cosmic particle hitting a microchip and flipping a bit). If your code is supposed to control a nuclear power plant, you'd better have a testable system with proven correctness.
So why aren't users of LLMs given some sort of fader to control temperature, with an option for 100% determinism? Who decided that we shouldn't have control over this parameter, and why?
Some providers _do_ let you set the temperature, including to "zero", but most won't take the performance hit to offer true determinism: batched inference and non-deterministic floating-point reduction order can still produce run-to-run variation even at temperature zero.
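As a minimal sketch of what that looks like in practice (using the OpenAI Python SDK; the model name and seed value are placeholders, and the `seed` parameter is only best-effort reproducibility, not a guarantee):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# temperature=0 makes decoding greedy (always pick the highest-probability token);
# seed requests best-effort reproducibility, but the provider does not promise
# bit-identical outputs across runs (hardware, batching, model updates).
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this incident report."}],
    temperature=0,
    seed=42,              # assumption: the provider supports the optional seed parameter
)
print(resp.choices[0].message.content)
```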
The reason for this is that randomness is crucial to getting emergent output out of the system: unexpected, unpredictable, but often useful results. This is how an LLM can answer a question it has never been asked before.
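To illustrate what the temperature knob actually controls (a toy sketch, not any particular model's sampler): the logits are divided by the temperature before the softmax, so higher temperatures flatten the distribution and lower ones sharpen it, and the limit of zero collapses to always picking the most likely token.

```python
import numpy as np

def sample_token(logits, temperature, rng=np.random.default_rng()):
    """Toy temperature sampling over a small vocabulary of token indices."""
    if temperature == 0:
        # Zero temperature degenerates to greedy decoding: fully deterministic.
        return int(np.argmax(logits))
    # Scale logits by 1/temperature, then softmax into a probability distribution.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.5, -1.0]
print(sample_token(logits, temperature=0))    # always token 0
print(sample_token(logits, temperature=1.5))  # varies from run to run
```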
We have had deterministic databases forever, so there would be no AI advance if LLMs were just databases. The AI models of the 1960s tried that very approach, called rules-based AI, and it doesn't work: we can never come up with, or write down, all the rules. The failure of those methods led to the "AI winter", and there was no further real progress in AI until the invention of transformers at Google.