# Project PetriDish
Quick and dirty PoC for the idea behind Project Neuromorph.

Combines PonderNet and SparseLinear.

PonderNet stolen from https://github.com/jankrepl/mildlyoverfitted/blob/master/github_adventures/pondernet

SparseLinear stolen from https://pypi.org/project/sparselinear/
## Architecture
We use a neural network composed of a set of neurons that are connected by a set of synapses. Neurons that are close to each other (we use a 1D distance metric; Project Neuromorph will have an approximate Euclidean distance) have a higher chance of having a synaptic connection.

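As a rough sketch (not the project's actual code) of this wiring rule, one could sample the synapse mask with a connection probability that decays with 1D distance; the exponential decay, the helper name, and the `scale` value below are assumptions of the sketch:

```python
import torch

def sample_connectivity(num_neurons: int, scale: float = 10.0) -> torch.Tensor:
    """Boolean (num_neurons x num_neurons) synapse mask: nearby neurons connect more often."""
    idx = torch.arange(num_neurons)
    dist = (idx.unsqueeze(0) - idx.unsqueeze(1)).abs().float()  # 1D distance between neuron positions
    p_connect = torch.exp(-dist / scale)                        # closer -> higher connection probability
    mask = torch.rand(num_neurons, num_neurons) < p_connect
    mask.fill_diagonal_(False)                                   # no self-synapses
    return mask

mask = sample_connectivity(64)
print(mask.float().mean())  # overall connection density
```
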
We train this net like normal, but we also allow the structure of the synaptic connections to change during training. (The number of neurons remains constant; this is also variable in Neuromorph, and Neuromorph will use more advanced algorithms to decide where to spawn new synapses.)

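A minimal sketch of one way such structural rewiring could work, assuming magnitude-based pruning plus random regrowth; the README only requires that the connection structure may change, so the helper below (name, `frac`, and the prune/grow rule) is hypothetical:

```python
import torch

def rewire(weight: torch.Tensor, mask: torch.Tensor, frac: float = 0.05) -> torch.Tensor:
    """Drop the weakest `frac` of active synapses and spawn the same number of new random ones.
    `weight` and `mask` share the same (out x in) shape; the neuron count never changes."""
    active = mask.nonzero(as_tuple=False)                 # (row, col) pairs of existing synapses
    n_swap = max(1, int(frac * active.shape[0]))

    # Prune: remove the connections with the smallest absolute weight.
    weakest = weight[mask].abs().topk(n_swap, largest=False).indices
    drop = active[weakest]
    mask[drop[:, 0], drop[:, 1]] = False

    # Grow: enable the same number of currently unused connections, chosen at random
    # (self-connections are not filtered out here, for brevity).
    inactive = (~mask).nonzero(as_tuple=False)
    grow = inactive[torch.randperm(inactive.shape[0])[:n_swap]]
    mask[grow[:, 0], grow[:, 1]] = True
    return mask
```
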
In every firing cycle only a fraction of all neurons are allowed to fire (those with the highest output); all others are inhibited. (In Project Neuromorph this will be less strict: neurons with low firing rates will have higher dropout chances, and we discourage firing through an additional loss.)

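A small sketch of this kind of winner-take-all inhibition, assuming a plain top-k on the raw outputs; the 10% fraction and the helper name are placeholders:

```python
import torch

def winner_take_all(outputs: torch.Tensor, frac: float = 0.1) -> torch.Tensor:
    """Keep only the `frac` highest-output neurons per sample; inhibit the rest to zero."""
    k = max(1, int(frac * outputs.shape[-1]))
    top = outputs.topk(k, dim=-1)
    inhibited = torch.zeros_like(outputs)
    return inhibited.scatter(-1, top.indices, top.values)

x = torch.randn(4, 128)                          # raw outputs of 128 neurons for 4 samples
print(winner_take_all(x).count_nonzero(dim=-1))  # 12 active neurons per sample (k = 12)
```
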
Based on the PonderNet architecture, we allow our network to 'think' about a given problem for as long as it wants (well, OK; there is a maximum number of firing cycles to make training possible).

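Below is a minimal, single-example sketch of that pondering loop, with a per-cycle halting probability and a hard cap on cycles. `step_fn`, the threshold, and the step limit are stand-ins; the actual PonderNet training objective (per-step losses weighted by the halting distribution plus a KL regulariser) lives in the referenced implementation, not in this sketch:

```python
import torch

def ponder_forward(step_fn, state, max_steps: int = 20, threshold: float = 0.99):
    """`step_fn(state)` is assumed to return (new_state, output, lambda_n),
    where lambda_n is the probability of halting at this firing cycle."""
    un_halted = torch.tensor(1.0)                 # probability of not having halted yet
    output = None
    for n in range(max_steps):                    # hard cap keeps training possible
        state, output, lam = step_fn(state)
        un_halted = un_halted * (1.0 - lam)
        if (1.0 - un_halted).item() > threshold:  # confident enough: stop 'thinking'
            break
    return output, n + 1
```
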
The inputs and outputs of the network have a fixed length. (Project Neuromorph will also allow variable-length outputs, like in an RNN.)