
# Project PetriDish


Quick and dirty PoC for the idea behind Project Neuromorph.
Combines PonderNet and SparseLinear.
PonderNet stolen from https://github.com/jankrepl/mildlyoverfitted/blob/master/github_adventures/pondernet
SparseLinear stolen from https://pypi.org/project/sparselinear/

## Architecture

A neural network composed of a set of neurons connected by a set of synapses. Neurons that are close to each other have a higher chance of sharing a synaptic connection (here we use a 1D distance metric; Project Neuromorph will use an approximate Euclidean distance).
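As a rough sketch of how such distance-based connectivity could be sampled (the function name, the exponential decay, and the `scale` parameter are assumptions, not part of this repo):

```python
import numpy as np

def sample_synapses(n_neurons, scale=5.0, seed=0):
    """Sample a sparse connectivity mask where the probability of a
    synapse between neurons i and j decays with their 1D distance.
    Hypothetical sketch; the decay law is an assumption."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_neurons)
    dist = np.abs(idx[:, None] - idx[None, :])        # 1D distance |i - j|
    prob = np.exp(-dist / scale)                      # closer -> more likely
    np.fill_diagonal(prob, 0.0)                       # no self-connections
    return rng.random((n_neurons, n_neurons)) < prob  # boolean mask

mask = sample_synapses(32)
```

A mask like this could then gate a `SparseLinear` layer's weight matrix.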
We train this net as usual, but we also allow the structure of the synaptic connections to change during training. (The number of neurons remains constant; in Neuromorph this will also be variable, and Neuromorph will use more advanced algorithms to decide where to spawn new synapses.)
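One common way to realize such rewiring is magnitude-based prune-and-regrow (as in sparse-training schemes like SET); this is a hedged sketch, not necessarily what this repo does, and all names here are made up:

```python
import numpy as np

def rewire(weights, mask, frac=0.1, seed=0):
    """Prune the weakest fraction of active synapses and regrow the same
    number at uniformly random inactive sites. Sketch only; Neuromorph
    would place new synapses less naively."""
    rng = np.random.default_rng(seed)
    active = np.flatnonzero(mask)
    n = max(1, int(frac * active.size))
    # prune the n active synapses with the smallest |weight|
    weakest = active[np.argsort(np.abs(weights.flat[active]))[:n]]
    mask.flat[weakest] = False
    weights.flat[weakest] = 0.0
    # regrow n synapses at random currently-inactive sites
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=n, replace=False)
    mask.flat[grown] = True
    weights.flat[grown] = rng.normal(0.0, 0.01, size=n)  # small re-init
    return weights, mask
```

The total number of synapses stays constant across a rewiring step.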
In every firing cycle only a fraction of all neurons are allowed to fire (those with the highest output); all others are inhibited. (In Project Neuromorph this will be less strict: neurons with low firing rates will have higher dropout chances, and we discourage firing through an additional loss.)
Based on the PonderNet architecture, we allow our network to 'think' about a given problem for as long as it wants (well, OK; there is a maximum number of firing cycles to make training possible).
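At inference time the PonderNet-style loop boils down to: run a firing cycle, read off a halting probability, stop stochastically, and cap the number of cycles. A sketch, assuming a `step_fn` that returns `(new_state, halt_prob)` (that interface is an assumption):

```python
import numpy as np

def ponder(step_fn, x, max_steps=10, seed=0):
    """Run firing cycles until the network halts stochastically,
    but never exceed max_steps. Returns (final_state, steps_taken)."""
    rng = np.random.default_rng(seed)
    state = x
    for n in range(1, max_steps + 1):
        state, halt_prob = step_fn(state)
        if n == max_steps or rng.random() < halt_prob:
            return state, n
```

During training, PonderNet instead weights the per-step losses by the halting distribution rather than sampling a single stop point.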
The inputs and outputs of the network have a fixed length. (Project Neuromorph will also allow variable-length outputs, like an RNN.)