Plus, LLMs are so inherently stupid that I don't think we have to worry about "AGI" for another 10-20 years. All anyone wants is their glorified Markov chain anyway.
Big if.
> mathematics
Specifically on the topic of mathematics, we know from Gödel's first incompleteness theorem that for any consistent formal system rich enough to express arithmetic, there are statements which are true but cannot be proven within that system.
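For the curious, the standard "true but unprovable" formulation looks something like this (my paraphrase, with F standing for any consistent, effectively axiomatized system that can express basic arithmetic):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Goedel's first incompleteness theorem, "true but unprovable" form.
% F: any consistent, effectively axiomatized system expressing arithmetic.
% There is a sentence \varphi that holds in the standard natural numbers
% \mathbb{N} but is not derivable in F.
\[
  \exists \varphi \; \bigl( \mathbb{N} \models \varphi \;\wedge\; F \nvdash \varphi \bigr)
\]
\end{document}
```

So even a reasoner that never makes a deductive mistake is incomplete relative to arithmetic truth.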
> would an AI that is able to reason at 100% accuracy be capable of understanding our world in all its detail and derive ideas and outcomes from it
Assuming the big if holds, as though we were writing a science fiction novel: I guess maybe, but why would we expect it to be fast?
For example, I personally think reasoning is downstream of at least generation and discrimination: you generate candidate ideas, then filter out the bad ones, and nothing guarantees that loop terminates quickly.
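To make that concrete, here's a minimal sketch of what I mean by generation plus discrimination. `propose` and `verify` are hypothetical stand-ins (an LLM, a search step, a proof checker, whatever), not real APIs; the point is just that even with a perfect verifier, runtime depends entirely on how often the generator stumbles onto something that checks out.

```python
import random

def propose(problem, rng):
    """Hypothetical generator: samples a candidate solution."""
    return rng.choice(problem["candidates"])

def verify(problem, candidate):
    """Hypothetical discriminator: checks a candidate exactly
    (stand-in for a proof checker, unit test, or other ground truth)."""
    return candidate == problem["answer"]

def reason(problem, max_tries=1000, seed=0):
    """Generate-then-discriminate loop: reasoning as search.
    Perfect discrimination does not make the search fast; the
    number of iterations is set by the generator's hit rate."""
    rng = random.Random(seed)
    for tries in range(1, max_tries + 1):
        candidate = propose(problem, rng)
        if verify(problem, candidate):
            return candidate, tries
    return None, max_tries

# Toy problem: one correct answer hidden among 100 candidates.
toy = {"candidates": list(range(100)), "answer": 42}
answer, tries = reason(toy)
print(f"found {answer} after {tries} tries")  # perfect checking, slow finding
```

That gap between checking at 100% accuracy and finding things quickly is exactly why I wouldn't expect speed to come for free.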