Commit Graph

86 Commits

SHA1 Message Date
ef7fceacea Better 'Gradientenabstieg' (gradient descent) 2021-11-23 20:51:24 +01:00
5f7366e556 Newly trained net 2021-11-07 11:41:01 +01:00
7f51f139f2 Actually, we will also train on whitepapers (because otherwise I would have to fix an ugly bug) 2021-10-17 15:52:52 +02:00
ce99e5301b Smashed bug with argument-parsing 2021-10-17 15:51:26 +02:00
8778cfdae6 Also don't train on whitepapers 2021-10-17 15:50:33 +02:00
3588587c92 Earlier removal of whitepapers (don't extrapolate from them) 2021-10-17 15:48:44 +02:00
65e8948202 Filter out the whitepapers I have in my library 2021-10-17 15:47:37 +02:00
153aa434d5 Implemented listScores command 2021-10-13 15:10:12 +02:00
23cc62ac01 Newly trained net 2021-10-12 20:18:24 +02:00
6689c3bf6b Removed ~ from score and shorter training 2021-10-12 20:10:48 +02:00
f27d89e43d Added a dark-mode 2021-10-05 18:25:27 +02:00
5fc161fcc0 Newly trained net 2021-10-05 18:08:55 +02:00
7fd1f4fa3f Better shit-detection and mitigation when training 2021-10-05 18:08:32 +02:00
f5763fe7da Newly trained net 2021-10-05 17:10:12 +02:00
73364935f0 Bug Fix: A function name was overloaded 2021-10-04 12:27:12 +02:00
4c4821c7a3 Newly trained net 2021-09-30 00:09:11 +02:00
ca7b2fd87c Faster termination when training 2021-09-30 00:08:53 +02:00
1bd0597291 Newly trained net 2021-09-27 00:22:45 +02:00
8c4e35bf41 Bug fix in training 2021-09-27 00:22:35 +02:00
b771204b1b Newly trained net 2021-09-26 23:18:32 +02:00
8964fa2c6a No early stopping if still bad 2021-09-26 23:18:21 +02:00
1303f302d3 Don't train if we are already perfect 2021-09-26 23:17:03 +02:00
9f9d2390b3 Less verbose training 2021-09-26 23:15:50 +02:00
ade61980b4 Newly trained net 2021-09-26 23:14:25 +02:00
98aaec1e22 Train using gradient 2021-09-26 23:13:43 +02:00
06a0461e93 Better pruning of tags and recommenders 2021-09-26 16:51:17 +02:00
9d03b37e16 New trained net 2021-09-26 15:53:41 +02:00
f3240147d5 Training now terminates earlier when stagnating (can be disabled via flag) 2021-09-26 15:52:54 +02:00
787404c134 Newly trained net 2021-09-26 14:31:24 +02:00
9126bbcc14 Stronger linear separation 2021-09-26 14:31:00 +02:00
147a78302f New trained net 2021-09-26 12:46:58 +02:00
f1c887275c removeUselessSeries and stronger linear separation 2021-09-26 12:46:30 +02:00
2bb2e15b73 Tweaks to recommendation-display 2021-09-25 20:15:14 +02:00
31cc8ce31c New trained net 2021-09-25 00:54:59 +02:00
92d1b33ee3 Made recommendation-graph way better; tweaks to bounds-loss 2021-09-25 00:54:09 +02:00
32bac42c83 Newly trained net 2021-09-24 23:40:55 +02:00
6ed1d41e2c Added new tag-based version of recommendation-visualizer and added a boundary-loss 2021-09-24 23:39:55 +02:00
5a8d76bc3d Even better trained net 2021-09-24 19:25:54 +02:00
12fd3dffc9 New trained net 2021-09-24 19:21:31 +02:00
212a30298a Stronger regression-loss; more parameter-freedom 2021-09-24 19:16:29 +02:00
22ca039502 New trained net 2021-09-24 19:14:21 +02:00
5a4e48d86c Added a regression-loss (push weights towards 1) 2021-09-24 19:12:09 +02:00
9d6f37af45 Mostly refactoring 2021-09-24 18:25:37 +02:00
e36542dd32 New trained net 2021-09-24 17:54:14 +02:00
d5a6aadbb5 We now display se instead of std in the web-view; small changes to the penalties while training 2021-09-24 17:50:00 +02:00
795c0b5d18 New trained net 2021-09-24 17:35:59 +02:00
f48fb12f0a No more stability-metric for the nn (but se instead) 2021-09-24 17:35:32 +02:00
ec8d253f3a Newly trained net 2021-09-24 17:23:42 +02:00
1521f20340 Small tweaks 2021-09-24 17:23:34 +02:00
292951c50c Newly trained neuralWeights 2021-09-24 17:14:25 +02:00