HACKER Q&A
📣 mattsan

Has anyone experimented with dynamic NN layer connections?


I recently saw a video of two neurons sensing each other and trying to connect. It inspired me to experiment with nodes "radiating" their weights and biases, with other neurons trying to connect up based on some of those parameters. Has anyone ever done this or tried to? I'm not asking about feasibility or any specific algorithm - I just think this might help us see more emergent behaviour.


  👤 cookiengineer Accepted Answer ✓
Check out NEAT [1], HyperNEAT [2] and ES/HyperNEAT [3]

SethBling's MarI/O video was an excellent introduction to how NEAT works back then. [4]

As NEAT is an evolutionary training algorithm, network topologies and weights are represented as genomes (genotypes), and each agent's performance is scored by a fitness function. HyperNEAT extends this by generating the connectivity pattern indirectly with a CPPN (compositional pattern-producing network).

The strength of an algorithm like this is not raw sensory processing (as with an LSTM, RNN, FFNN etc) but the combinatorial aspect of "which sensor to trust" as a meta-learning concept.
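To make the genome idea concrete, here's a minimal NEAT-style sketch in plain Python (hypothetical and heavily simplified - class and method names are my own, not from any NEAT library): connections are genes carrying an innovation number, and the classic add-node mutation splits an existing connection, which is how topology itself evolves.

```python
# Minimal NEAT-style genome sketch (simplified illustration, not a real
# NEAT implementation): each connection gene records an in-node, an
# out-node, a weight, an enabled flag, and an innovation number that
# NEAT uses to align genomes during crossover.

class ConnectionGene:
    def __init__(self, in_node, out_node, weight, innovation, enabled=True):
        self.in_node = in_node
        self.out_node = out_node
        self.weight = weight
        self.innovation = innovation
        self.enabled = enabled

class Genome:
    def __init__(self):
        self.connections = []

    def add_connection(self, in_node, out_node, weight, innovation):
        self.connections.append(
            ConnectionGene(in_node, out_node, weight, innovation))

    def mutate_add_node(self, conn, new_node, innov_a, innov_b):
        # NEAT's add-node mutation: disable the old connection and
        # splice a new node in, keeping behaviour roughly unchanged
        # (weight 1.0 into the new node, old weight out of it).
        conn.enabled = False
        self.add_connection(conn.in_node, new_node, 1.0, innov_a)
        self.add_connection(new_node, conn.out_node, conn.weight, innov_b)

# Start with one connection 0 -> 1, then split it with node 2:
g = Genome()
g.add_connection(0, 1, 0.7, innovation=1)
g.mutate_add_node(g.connections[0], new_node=2, innov_a=2, innov_b=3)
```

Over generations, mutations like this grow topologies from minimal networks, which is the "dynamic connection" behaviour the question is after - connectivity is part of the searched genome, not a fixed architecture.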

[1] https://www.cs.ucf.edu/~kstanley/neat.html

[2] http://eplex.cs.ucf.edu/hyperNEATpage/

[3] http://eplex.cs.ucf.edu/ESHyperNEAT/

[4] https://youtube.com/watch?v=qv6UVOQ0F44


👤 JoeyBananas
In a fully connected layer, the weights are trained parameters, so neurons already learn, in effect, which neurons they connect to: a weight driven toward zero is practically no connection at all.