I'd love to collaborate with others on this topic, so that's why I'm looking around. I also completely understand this is likely a dead end due to limitations in the technology, but I still find it fascinating and would like to find out one way or the other. And I'm especially curious about how integrating LLMs into a collective of components scales compared with deeper training.
This is our third or fourth false start, but I see no path to AGI using LLMs.
It's honestly sad to see so many people being tricked by statistics once again, but there is absolutely nothing intelligent about GPTs right now. It's a statistical model, not a thinking machine.
In the following video, David explains the code and prompts.
I built a thinking machine. Happy birthday, ACE! https://www.youtube.com/watch?v=HNtKVrQMNZs