diff --git a/README.md b/README.md
index dd50ca5..eebbc1d 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,3 @@
 # Project Collapse
-I basically I'm just playing around with the idea of solving neural networks for their minimal loss in a closed form (without iteration).
-I do realize that this is impossible (because it violates physics), but I want to get a better and more intuitive understanding to why it is impossible from a practical information perspective.
+I'm basically just playing around with the idea of solving neural networks for their minimal loss in a closed form (without iteration).
+I do realize that this is impossible (because it violates physics), but I want to get a better and more intuitive understanding of why it is impossible from a practical information perspective.