From 6e6a390a3aa7a4018967f9ad181ddf855dbe7578 Mon Sep 17 00:00:00 2001
From: Dominik Roth
Date: Thu, 14 Oct 2021 22:17:08 +0200
Subject: [PATCH] Fixed typos in the README

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 4f14281..0a47295 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@ PonderNet stolen from https://github.com/jankrepl/mildlyoverfitted/blob/master/g
 SparseLinear stolen from https://pypi.org/project/sparselinear/
 
 ## Architecture
-We a neural network comprised of a set of neurons that are connected using a set of synapses. Neurons that are close to each other (es use a 1D-Distance metric (Project Neuromorph will have approximate euclidean distance)) have a higher chance to have a synaptic connection.
-We train this net like normal; but we also allow the structure of the synapctic connections to change during training. (Number of neurons remains constant; this is also variable in Neuromorph; Neuromorph will also use more advanced algorithms to decide where to spawn new synapses)
-In every firing-cycle only a fraction of all neurons are allowed to fire (highest output) all others are inhibited. (In Project Neuromorph this will be less strict; low firing-rates will have higher dropout-chances and we discurage firing thought an additional loss)
-Based on the PonderNet-Architecture we allow our network to 'think' as long as it wants about a given problem (well, ok; there is a maximum amount of firing-cycles to make training possbile)
+A neural network comprised of a set of neurons that are connected using a set of synapses. Neurons that are close to each other (we use a 1D-Distance metric (Project Neuromorph will have approximate euclidean distance)) have a higher chance to have a synaptic connection.
+We train this net like normal; but we also allow the structure of the synaptic connections to change during training. (Number of neurons remains constant; this is also variable in Neuromorph; Neuromorph will also use more advanced algorithms to decide where to spawn new synapses)
+In every firing-cycle only a fraction of all neurons are allowed to fire (highest output); all others are inhibited. (In Project Neuromorph this will be less strict; low firing-rates will have higher dropout-chances and we discourage firing through an additional loss)
+Based on the PonderNet-Architecture we allow our network to 'think' as long as it wants about a given problem (well, ok; there is a maximum number of firing-cycles to make training possible)
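
The README hunk above describes two concrete mechanisms: connection probability that decays with 1D distance between neurons, and a firing-cycle in which only the top-output neurons fire while the rest are inhibited. The following is a minimal NumPy sketch of those two ideas, not code from this repository; the function names, the exponential decay, and the `scale` parameter are all assumptions made for illustration (the README itself does not specify the probability function).

```python
import numpy as np

def sample_synapse_mask(n_neurons, scale=5.0, rng=None):
    """Sample a boolean connectivity mask where neurons that are close
    on a 1D line are more likely to be connected (the 1D-Distance metric
    from the README; exponential decay and `scale` are assumptions)."""
    rng = np.random.default_rng() if rng is None else rng
    idx = np.arange(n_neurons)
    dist = np.abs(idx[None, :] - idx[:, None])   # pairwise 1D distances
    p_connect = np.exp(-dist / scale)            # closer -> higher probability
    return rng.random((n_neurons, n_neurons)) < p_connect

def top_k_fire(activations, k):
    """One firing-cycle: only the k neurons with the highest output fire;
    all other outputs are inhibited (set to zero)."""
    order = np.argsort(activations)[..., ::-1]   # indices, highest output first
    mask = np.zeros_like(activations)
    np.put_along_axis(mask, order[..., :k], 1.0, axis=-1)
    return activations * mask
```

In a real training loop the mask would gate a weight matrix (as SparseLinear does for the actual layers) and would be resampled occasionally to let the synaptic structure change during training, as the README describes.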