Fixed typos in the README
parent
40dce340b4
commit
6e6a390a3a
PonderNet stolen from https://github.com/jankrepl/mildlyoverfitted/blob/master/g

SparseLinear stolen from https://pypi.org/project/sparselinear/
## Architecture
A neural network comprised of a set of neurons that are connected by a set of synapses. Neurons that are close to each other (we use a 1D distance metric; Project Neuromorph will have an approximate Euclidean distance) have a higher chance of having a synaptic connection.
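The distance-dependent wiring could be sketched like this (a minimal numpy sketch, not the project's actual code; the function name, the exponential decay, and the `scale` parameter are all assumptions):

```python
import numpy as np

def sample_synapse_mask(n_neurons, scale=5.0, seed=0):
    """Sample a boolean connectivity mask where the chance of a synapse
    between neurons i and j decays with their 1D distance |i - j|."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n_neurons)
    dist = np.abs(idx[:, None] - idx[None, :])  # 1D distance metric
    prob = np.exp(-dist / scale)                # closer neurons -> higher chance
    return rng.random((n_neurons, n_neurons)) < prob

mask = sample_synapse_mask(64)
```

Any decay curve would do; the point is only that connection probability falls off with distance.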
We train this net as normal, but we also allow the structure of the synaptic connections to change during training. (The number of neurons remains constant; in Neuromorph this is also variable, and Neuromorph will use more advanced algorithms to decide where to spawn new synapses.)
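One simple form such structural plasticity could take: prune the weakest active synapses and spawn the same number at random inactive positions. This is a hypothetical rewiring rule for illustration (the README only says the structure changes; Neuromorph would place new synapses more deliberately):

```python
import numpy as np

def rewire(weights, mask, prune_frac=0.1, seed=0):
    """Prune the weakest fraction of active synapses (by |weight|) and
    spawn the same number at random currently-inactive positions."""
    rng = np.random.default_rng(seed)
    mask = mask.copy()
    active = np.flatnonzero(mask)
    n_prune = max(1, int(prune_frac * active.size))
    # flat indices of the weakest active synapses
    weakest = active[np.argsort(np.abs(weights.ravel()[active]))[:n_prune]]
    mask.ravel()[weakest] = False
    # spawn replacements so the total synapse count stays constant
    inactive = np.flatnonzero(~mask)
    mask.ravel()[rng.choice(inactive, size=n_prune, replace=False)] = True
    return mask
```

Pruning and spawning in equal numbers keeps the synapse budget fixed between firing-cycles.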
In every firing-cycle only a fraction of all neurons are allowed to fire (those with the highest output); all others are inhibited. (In Project Neuromorph this will be less strict: neurons with low firing-rates will have a higher dropout chance, and we discourage firing through an additional loss.)
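This inhibition step is a k-winners-take-all: keep the k largest outputs, zero the rest. A minimal dense sketch (the function name and dense layout are assumptions, not the project's implementation):

```python
import numpy as np

def inhibit(outputs, k):
    """Keep only the k highest outputs this firing-cycle; all other
    neurons are inhibited (set to zero)."""
    out = np.zeros_like(outputs)
    winners = np.argsort(outputs)[-k:]  # indices of the k largest outputs
    out[winners] = outputs[winners]
    return out

fired = inhibit(np.array([0.1, 0.9, 0.3, 0.7]), k=2)
# only the two strongest neurons (0.9 and 0.7) remain active
```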
Based on the PonderNet architecture, we allow our network to 'think' about a given problem for as long as it wants (well, OK; there is a maximum number of firing-cycles to make training possible).
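The PonderNet-style loop boils down to: run firing-cycles, sample a halt decision from a learned halting probability each cycle, and cap the cycle count. A schematic sketch with stand-in callables (`step` and `halt_prob` are hypothetical placeholders for the real network and halting head):

```python
import numpy as np

def ponder(step, halt_prob, max_steps=10, seed=0):
    """Run firing-cycles until the network decides to halt, with a hard
    cap max_steps so training stays feasible."""
    rng = np.random.default_rng(seed)
    state, n = 0.0, 0
    for n in range(1, max_steps + 1):
        state = step(state)
        # sample a Bernoulli halt decision from the halting probability
        if rng.random() < halt_prob(state) or n == max_steps:
            break
    return state, n

state, n = ponder(step=lambda s: s + 1.0,
                  halt_prob=lambda s: min(1.0, s / 5.0))
```

In PonderNet proper, training weights the per-step losses by the halting distribution rather than sampling a single stopping point, but the bounded loop above is the runtime shape.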