LLMs are fantastic at generating believable nonsense, a creative stream of expression that superficially resembles reality. This is not a criticism; what LLMs can do is a superpower. But the left brain of AGI is still MIA. Are model builders aiming to fill that void, or do we need a different mechanism? If so, what might that be?
They self-learn patterns (i.e., self-supervised, with no labeled data) from a large volume of input using the multi-head attention mechanism, and then generate new patterns based on the prompt.
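To make that concrete, here is a minimal NumPy sketch of the multi-head attention computation: several "heads" each score how much every position should attend to every other position, and the weighted results are concatenated back together. The function names, shapes, and random weights are all illustrative stand-ins for learned parameters, not any particular model's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, Wq, Wk, Wv, Wo, num_heads):
    """Toy multi-head self-attention over a sequence x of shape (seq_len, d_model)."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    # Project input to queries, keys, and values, then split into heads.
    q = (x @ Wq).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ Wk).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ Wv).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    # Scaled dot-product attention per head: each position attends to every other.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    weights = softmax(scores, axis=-1)
    heads = weights @ v                                    # (heads, seq, d_head)

    # Concatenate the heads and project back to the model dimension.
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

# Toy usage: random weights stand in for what training would learn.
rng = np.random.default_rng(0)
d_model, num_heads, seq_len = 64, 4, 10
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(4))
out = multi_head_attention(x, Wq, Wk, Wv, Wo, num_heads)
print(out.shape)  # (10, 64)
```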
LLMs are more than left- vs. right-brained. They have multiple experts; they have multiple modalities.
Some big models can have hundreds of these "sides of the brain" in the form of experts.
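For illustration, here is a toy sketch of how top-k mixture-of-experts routing can work: a small gating network scores each token against every expert, and only the k best-scoring experts actually process that token. All names and dimensions here are made up for the example, and real MoE layers add details (like load balancing) that this skips.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(x, gate_W, expert_W, top_k=2):
    """Toy top-k mixture of experts: route each token to its k highest-scoring experts."""
    logits = x @ gate_W                  # (tokens, experts) router scores
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(logits[t])[-top_k:]      # indices of the k chosen experts
        weights = softmax(logits[t][top])         # renormalize over the chosen experts
        for w, e in zip(weights, top):
            out[t] += w * (x[t] @ expert_W[e])    # weighted sum of expert outputs
    return out

# Toy usage: 16 experts, each a simple linear map, with only 2 active per token.
rng = np.random.default_rng(0)
tokens, d_model, num_experts = 8, 32, 16
x = rng.normal(size=(tokens, d_model))
gate_W = rng.normal(size=(d_model, num_experts)) * 0.1
expert_W = rng.normal(size=(num_experts, d_model, d_model)) * 0.1
print(moe_layer(x, gate_W, expert_W).shape)  # (8, 32)
```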