Commit Graph

60 Commits

SHA1 Message Date
199fab7875 New shell-command, nltk for keyword extraction from description 2021-12-11 13:58:01 +01:00
36baf1aaec Bug: Typo in train 2021-12-11 11:54:25 +01:00
81fa6ca4d6 Faster training (earlier stopping) 2021-12-11 11:52:49 +01:00
16cc68dfed chmod +x (again) 2021-12-05 19:57:05 +01:00
da9569fd4c Added 'competence' command (displays recommenders and justifications of their scores) 2021-12-05 19:56:26 +01:00
aa95a9b16b chmod +x 2021-12-05 19:54:03 +01:00
39930d1233 kp 2021-12-05 19:53:30 +01:00
f2fad859dc Added a progress-command (and newly trained weights) 2021-11-24 22:35:39 +01:00
ef7fceacea Better 'Gradientenabstieg' (gradient descent) 2021-11-23 20:51:24 +01:00
7f51f139f2 Actually, we will also train on whitepapers (because otherwise I would have to fix an ugly bug) 2021-10-17 15:52:52 +02:00
ce99e5301b Smashed bug with argument-parsing 2021-10-17 15:51:26 +02:00
8778cfdae6 Also don't train on whitepapers 2021-10-17 15:50:33 +02:00
3588587c92 Earlier removal of whitepapers (don't extrapolate from them) 2021-10-17 15:48:44 +02:00
65e8948202 Filter out the whitepapers I have in my library 2021-10-17 15:47:37 +02:00
153aa434d5 Implemented listScores command 2021-10-13 15:10:12 +02:00
6689c3bf6b Removed ~ from score and shorter training 2021-10-12 20:10:48 +02:00
f27d89e43d Added a dark-mode 2021-10-05 18:25:27 +02:00
7fd1f4fa3f Better shit-detection and mitigation when training 2021-10-05 18:08:32 +02:00
73364935f0 Bug Fix: A function name was overloaded 2021-10-04 12:27:12 +02:00
ca7b2fd87c Faster termination when training 2021-09-30 00:08:53 +02:00
8c4e35bf41 Bug fix in training 2021-09-27 00:22:35 +02:00
8964fa2c6a No early stopping if still bad 2021-09-26 23:18:21 +02:00
1303f302d3 Don't train if we are already perfect 2021-09-26 23:17:03 +02:00
9f9d2390b3 Less verbose training 2021-09-26 23:15:50 +02:00
98aaec1e22 Train using gradient 2021-09-26 23:13:43 +02:00
06a0461e93 Better pruning of tags and recommenders 2021-09-26 16:51:17 +02:00
f3240147d5 Training now terminates earlier when stagnating (can be disabled via flag) 2021-09-26 15:52:54 +02:00
9126bbcc14 Stronger linear separation 2021-09-26 14:31:00 +02:00
f1c887275c removeUselessSeries and stronger linear separation 2021-09-26 12:46:30 +02:00
2bb2e15b73 Tweaks to recommendation-display 2021-09-25 20:15:14 +02:00
92d1b33ee3 Made recommendation-graph way better; tweaks to bounds-loss 2021-09-25 00:54:09 +02:00
6ed1d41e2c Added new tag-based version of recommendation-visualizer and added a boundary-loss 2021-09-24 23:39:55 +02:00
212a30298a Stronger regression-loss; more parameter-freedom 2021-09-24 19:16:29 +02:00
5a4e48d86c Added a regression-loss (push weights towards 1) 2021-09-24 19:12:09 +02:00
9d6f37af45 Mostly refactoring 2021-09-24 18:25:37 +02:00
d5a6aadbb5 We now display se instead of std in the web-view; smol changes to the penalties while training 2021-09-24 17:50:00 +02:00
f48fb12f0a No more stability-metric for the nn (but se instead) 2021-09-24 17:35:32 +02:00
1521f20340 Smol tweaks 2021-09-24 17:23:34 +02:00
9318811d8a Small tweaks to the scoring-algo and fewer calls to calibre when training 2021-09-24 17:13:36 +02:00
fb3a5592df Tweaked training 2021-09-24 16:32:43 +02:00
cb0ad906eb Implemented Neural Net + training 2021-09-24 16:13:55 +02:00
0231d97a42 Switched to a simpler bayesian model for score generation 2021-09-24 14:49:59 +02:00
4fa3a57cc7 Remove 'useless' read books from recommendation-graph (far away from all unread books) 2021-09-08 22:54:49 +02:00
c31a3d78e2 Allow keeping useless recommenders (args) 2021-09-06 16:21:01 +02:00
60067ed263 Remove unused recommenders when recommending and allow keeping topLists via args 2021-09-06 16:17:54 +02:00
255502cdd2 Added support for bigger Top-Lists 2021-09-03 21:21:07 +02:00
9c4722aa8f Delta 2021-07-04 20:44:10 +02:00
d537f878b7 Better Stuff 2021-07-04 20:38:20 +02:00
0f1255f614 Buffed TopLists a bit 2021-07-04 20:25:26 +02:00
be0afb59c7 Fine-tuned weight of TopLists and Tags 2021-07-04 20:08:25 +02:00
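Aside: the most recent commit above (199fab7875) mentions using nltk to extract keywords from a book's description. The repository's actual implementation is not shown on this page; the following is only a minimal sketch of how such an extraction could look with nltk. The function name extract_keywords and the frequency-based ranking are assumptions made for this illustration, not the commit's code.

```python
# Illustrative sketch only -- not the repository's implementation.
# Assumes nltk is installed; extract_keywords and the frequency-based
# ranking are made up for this example.
from collections import Counter

import nltk
from nltk.corpus import stopwords

# Fetch tokenizer and stopword data once. Depending on the nltk version,
# either "punkt" or "punkt_tab" is the tokenizer resource that applies;
# requesting a resource the installed version does not know is harmless.
for resource in ("punkt", "punkt_tab", "stopwords"):
    nltk.download(resource, quiet=True)


def extract_keywords(description: str, top_n: int = 10) -> list[str]:
    """Return the top_n most frequent non-stopword tokens of a description."""
    stop = set(stopwords.words("english"))
    tokens = nltk.word_tokenize(description.lower())
    words = [t for t in tokens if t.isalpha() and t not in stop]
    return [word for word, _ in Counter(words).most_common(top_n)]


if __name__ == "__main__":
    print(extract_keywords(
        "A sweeping history of computing machines and the people who built them."
    ))
```

A TF-IDF weighting or part-of-speech filtering could be layered on top, but plain frequency counting over stopword-filtered tokens is already enough to turn a free-text description into a handful of candidate tags.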