Great talk, although I didn’t fully follow the details of how RNNs actually work!
The fact that the team was still making discoveries at the end drives home just how empirical, opaque, and 🤞 the whole ML/DL process is right now. That’s largely why I haven’t been very interested in ML so far.
Holding the soprano line and having the RNN generate the other voices is a really neat trick.
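I imagine the setup looks roughly like this untrained sketch — one-hot MIDI pitches, a vanilla RNN cell, and three voices to fill in are all my assumptions, not details from the talk:

```python
import numpy as np

# Hypothetical sketch of the conditioning trick: the soprano line is held
# fixed, and at each timestep the RNN reads the current soprano note plus
# its hidden state and emits notes for the other voices (alto, tenor,
# bass). Random, untrained weights -- this shows structure only.

rng = np.random.default_rng(0)
HIDDEN, PITCHES, VOICES = 32, 128, 3  # MIDI-style pitch range, 3 voices to generate

W_in = rng.normal(0, 0.1, (HIDDEN, PITCHES))            # soprano note -> hidden
W_h = rng.normal(0, 0.1, (HIDDEN, HIDDEN))              # recurrence
W_out = rng.normal(0, 0.1, (VOICES * PITCHES, HIDDEN))  # hidden -> per-voice logits

def harmonize(soprano):
    """Given a list of soprano MIDI pitches, return the other voices."""
    h = np.zeros(HIDDEN)
    voices = []
    for pitch in soprano:
        x = np.zeros(PITCHES)
        x[pitch] = 1.0                        # one-hot encode the fixed soprano note
        h = np.tanh(W_in @ x + W_h @ h)       # vanilla RNN update
        logits = (W_out @ h).reshape(VOICES, PITCHES)
        voices.append(logits.argmax(axis=1))  # greedy pick, one pitch per voice
    return np.array(voices)                   # shape: (timesteps, VOICES)

melody = [72, 74, 76, 77]  # a made-up soprano fragment (C5 D5 E5 F5)
print(harmonize(melody).shape)  # (4, 3)
```

A real system would train the weights on chorales and sample from the output distribution instead of taking the argmax, but the conditioning structure — fixed melody in, harmony out at every step — is the part that struck me.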
This definitely makes me want to work through some “here’s how to build a classifier/generator/etc.” tutorials to see what this is like in practice.