I suspect that, as long as AI can be perceived as a potential threat to the human ego, the goalpost will perpetually be moved to just beyond the state of the art in AI research.
Turn your question around. Hypothetical: at some point in the future, we'll be able to say "AI emerged." At that point, will everyone who contributed to internet architecture be partially responsible for that? Or did all human coding efforts contribute?
If this has already happened, would we know?
Large projects are incredibly difficult. That's one reason why things like the space program and its achievements are such a big deal: it wasn't just the cost and knowledge, but the coordination of hundreds of thousands of people to get the job done.
AI as it is now is merely brute force: huge datasets, huge computers. Of course it yields results; it would be a shame if it didn't, given how many resources it consumes.
We don't even have a good definition for "intelligence", artificial or not.