Of course, my goals are incompatible with OpenAI.
People want to create something better than what already exists. With ML and AI, computers can already go beyond following instructions or recommending the best option from a list, but they don't know why an option is better or what its consequences are. The next step is a system that learns the consequences of its behavior and actions.
As humans, we tend to make things better and profit from it. AGI is no different.
https://en.wikipedia.org/wiki/Effective_accelerationism
Not sure I agree with that philosophy. But it is a valid viewpoint.
I'm more in the camp of "AGI will bring some good, some bad, and some ugly. One step at a time, please." As if anyone in a leading AI company would care...
It's about power. Whoever wields the most powerful version of this technology will hold more influence than any government.