E.g., https://open-neuromorphic.org/neuromorphic-computing/hardware/spinnaker-2-university-of-dresden/ looks really promising. Is there a catch?
> Mike Davies at Intel [...] says the real bottleneck actually lies in the layers of software needed to take real-world problems, convert them into a format that can run on a neuromorphic computer and carry out processing
> James Knight at the University of Sussex, UK [...] points out that current models like ChatGPT are trained using graphics cards operating in parallel, meaning that many chips can be put to work on training the same model. But because neuromorphic computers work with a single input and cannot be trained in parallel, it is likely to take decades to even initially train something like ChatGPT on such hardware, let alone devise ways to make it continually learn once in operation, he says
Both quotes are from a very recent New Scientist article: https://www.newscientist.com/article/2426523-intel-reveals-w...
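To make Davies's "convert into a format" point concrete: spiking hardware doesn't consume dense tensors directly, so inputs typically have to be re-encoded as spike trains first. A minimal sketch of one common encoding (Poisson rate coding), purely illustrative and not taken from the article; the function name and shapes are made up:

```python
import numpy as np

def poisson_encode(values, n_steps=100, rng=None):
    """Rate-code an array of [0, 1] intensities into binary spike trains."""
    rng = np.random.default_rng() if rng is None else rng
    # at each time step, each input spikes with probability equal to its intensity
    return (rng.random((n_steps,) + values.shape) < values).astype(np.uint8)

pixels = np.array([0.1, 0.5, 0.9])   # toy "sensor" intensities
spikes = poisson_encode(pixels)       # shape (100, 3): time steps x inputs
print(spikes.mean(axis=0))            # spike rates roughly recover the intensities
```

Every such encoding/decoding layer is extra software that a plain dense-matrix model simply doesn't need, which is part of the bottleneck he's describing.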
I'd say the big catch is the huge head start conventional neural nets have in hardware and software support.
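For contrast with Knight's point about parallel training: on GPUs, scaling out is almost boilerplate, because data-parallel SGD just shards the batch across devices and averages the gradients. A rough sketch under assumed toy shapes (nothing here is from the article; it's only meant to show how routine this is in the conventional stack):

```python
from functools import partial
import jax
import jax.numpy as jnp

def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)              # toy linear-regression loss

@partial(jax.pmap, axis_name="dev")                # replicate the training step across devices
def parallel_step(w, x, y):
    grads = jax.grad(loss)(w, x, y)                # gradients on this device's shard of the batch
    grads = jax.lax.pmean(grads, axis_name="dev")  # average gradients across all devices
    return w - 0.1 * grads                         # every replica applies the same update

n_dev = jax.local_device_count()                   # e.g. 8 GPUs; 1 on a plain CPU
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (n_dev, 128, 8))        # batch split into one shard per device
y = jax.random.normal(key, (n_dev, 128))
w = jnp.zeros((n_dev, 8))                          # same initial weights, replicated per device

w = parallel_step(w, x, y)                         # one synchronous data-parallel SGD step
```

There's no comparably mature equivalent for spike-based training, which is why the quoted "decades" estimate doesn't sound crazy to me.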