Then put many humans together and have them try to wargame against a single computer model.
Then try the same with writing text against an LLM trained on everything they wrote before.
The point is that, after a while, the humans will likely be extremely predictable to whatever model machine learning comes up with as it selects among competing models. Even RNNs and genetic algorithms would probably work here.
Doesn’t this mean a single computer can reliably beat any group of humans at war, trading strategies, etc.? Given enough battles, it will eventually have predicted every new plan they come up with.
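To make the "humans become predictable" claim concrete, here is a minimal sketch of a predictor for rock-paper-scissors. It is not any particular model from the discussion, just an illustrative first-order Markov counter: it tallies which play tends to follow which, predicts the opponent's next move, and plays the counter.

```python
from collections import Counter, defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def markov_predictor(history):
    """Predict the opponent's next play from pair frequencies, then counter it."""
    if len(history) < 2:
        return "rock"  # arbitrary opening move
    # Count how often each play follows each other play.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    last = history[-1]
    if transitions[last]:
        predicted = transitions[last].most_common(1)[0][0]
    else:
        predicted = "rock"  # never seen this state; guess
    return BEATS[predicted]

# A human who alternates rock/paper is exploited within a few rounds:
history = ["rock", "paper", "rock", "paper", "rock"]
print(markov_predictor(history))  # predicts "paper", so plays "scissors"
```

Anything more sophisticated (an RNN, an LLM over play transcripts) is doing a richer version of the same thing: compressing the human's past plays into a prediction of the next one.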
A round is run by using each function to calculate a play based on all previous plays, then comparing the results.
Now imagine you've got the "perfect" model as the computer player, that wins every time against humans.
A particularly crafty human player shows up, named Cantor. His function is defined as "the choice that beats what the computer's model returns". How often does he win?
This construction only works for simple games like RPS where the two players are interchangeable and a move has a well-defined "opposite".
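The diagonal trick can be sketched in a few lines. The `computer` strategy below is just a hypothetical stand-in for the "perfect" model (any deterministic function of the history works); Cantor simply calls it and plays the counter-move.

```python
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def computer(opponent_history):
    """Stand-in for the 'perfect' model: counter the opponent's most common play."""
    if not opponent_history:
        return "rock"
    most_common = Counter(opponent_history).most_common(1)[0][0]
    return BEATS[most_common]

def cantor(model, own_history):
    """Ask the model what it would play against us, then play what beats that."""
    return BEATS[model(own_history)]

human_plays = []
wins = 0
for _ in range(10):
    machine_move = computer(human_plays)       # the model commits to a move
    human_move = cantor(computer, human_plays)  # Cantor counters it exactly
    if BEATS[machine_move] == human_move:
        wins += 1
    human_plays.append(human_move)
print(wins)  # 10 out of 10: Cantor beats any deterministic model every round
```

This is why "the choice that beats what the model returns" wins every time: against a deterministic opponent with a known function, querying that function is itself a winning strategy.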
Doesn't that inherently create new criteria for fitness to be judged by, conditions that humans could then exploit through creative deviations at the margin?
Let me know when you've got an LLM that can reliably outperform me in the long run.