Does the issue with phrasing demonstrate arbitrary gatekeeping?
The one thing that is certain is change, and I think AI systems will get better at not requiring elaborate prompt engineering, even if they don't get better at giving only the right answers.
I remember a particular build of DALL-E mini having an original character in it named "Darts Vader". In one case he had a black mask and a black T-shirt with a picture of a dartboard on it; in other cases he was playing darts, but Vader always got mixed up with darts in some amusing way. A new build came out and we never saw him again.
Whatever you learn about the quirks of model A will be of little value when model B comes out with different quirks.
I don't think it represents a form of gatekeeping, because the way we work around the AI's boring answers is mostly a form of human bias. Where earlier AIs became racist after some time exposed to humans, this one became less surprising patch after patch. Speaking from my near-zero level of expertise, I suspect that in order to make ChatGPT less power-hungry, or maybe to push the paid version, OpenAI forces ChatGPT to be generic. Since the only way we can interact with the model is through a prompt, all we can do is try new prompts and share the ones that exploit other dimensions of the model, the ones that make ChatGPT "smarter" or better adapted to what we want it to do. The one thing OpenAI can't do is edit the raw model: every time they add generic output or other constraints, they just create a shortcut in the model, and that shortcut can be bypassed because they can't fully control the model by hand.
(I mention generic answers because most of the time I use ChatGPT to recommend software that fits my use case, but each time I have to make it rewrite the answer because it doesn't remember my OS, and I don't want to hunt down the differences in package names myself. Every time, it answers as if I were on a Debian-based distro; that's what I mean by a "shortcut": it doesn't process everything before answering, it goes straight to a canned result.)
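One workaround for this, when using the API rather than the web UI, is to pin the OS in a system message once instead of restating it in every prompt. Below is a minimal sketch using OpenAI's Python client; the model name and the Fedora example are my own assumptions, not anything from the discussion above.

```python
# Minimal sketch: pin the OS once in a system message so the model
# stops defaulting to Debian-style package names.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set
# in the environment, and "gpt-4o-mini" is just an example model choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The system message applies to the whole conversation,
        # so the OS never has to be repeated in later prompts.
        {"role": "system",
         "content": "I am on Fedora 40. Always give dnf package names, never apt."},
        {"role": "user",
         "content": "Recommend a lightweight screenshot tool for my setup."},
    ],
)
print(response.choices[0].message.content)
```

In the ChatGPT web interface, the "custom instructions" setting serves roughly the same purpose: state your OS there once and the generic Debian-flavored answers mostly go away.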