What fuels this theory is that soon after GPT-4's release, countless projects began using self-reflection to boost its capabilities. To an engineer at OpenAI, this would look like low-hanging fruit; I doubt they hadn't already run something like AutoGPT before releasing the model. Maybe they have already achieved AGI, and the AGI is advising them to keep the public in the dark.