Analysis of machine learning algorithms
Geometry of energy landscapes and the optimizability of deep neural networks
We analyze the energy landscape of a spin-glass model of deep neural networks using random matrix theory and algebraic geometry. We show analytically that the multilayered structure makes the network easier to optimize: fixing the number of parameters and increasing network depth decreases the number of stationary points in the loss function, clusters minima more tightly in parameter space, and makes the tradeoff between the depth and width of minima less severe.
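Spin-glass models of deep networks in this literature (following Choromanska et al.) identify network depth with the interaction order p of a spherical p-spin glass. A minimal sketch of that Hamiltonian, as an assumption about the model class rather than the exact model used in the paper:

    H_{N,p}(\sigma) = \frac{1}{N^{(p-1)/2}} \sum_{i_1,\dots,i_p=1}^{N} J_{i_1 \dots i_p}\, \sigma_{i_1} \cdots \sigma_{i_p},
    \qquad \|\sigma\|^2 = N, \quad J_{i_1 \dots i_p} \sim \mathcal{N}(0,1) \text{ i.i.d.}

Kac-Rice-type counting formulas for the critical points of such Hamiltonians are the standard route to stationary-point counts of the kind quoted above.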
Archetypal landscapes for deep neural networks
Deep neural networks have achieved impressive predictive performance on many challenging tasks, yet it remains unclear why they work. We analyze the structure of the loss-function landscapes of deep neural networks and show why such landscapes are relatively easy to optimize. More generally, our results demonstrate how the methodology developed for exploring molecular energy landscapes can be exploited to extend our understanding of machine learning.
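Basin-hopping, the workhorse global-optimization method from molecular energy-landscape studies, is one example of the methodology the abstract alludes to. A minimal, hypothetical sketch on a toy two-parameter surface (the paper works with real network loss functions, not this stand-in):

    import numpy as np
    from scipy.optimize import basinhopping

    # Toy two-parameter surface standing in for a network loss
    # (illustrative only; not the loss studied in the paper).
    def loss(w):
        x, y = w
        return np.sin(3 * x) * np.cos(3 * y) + 0.1 * (x**2 + y**2)

    # Basin-hopping alternates random perturbations with local
    # minimization, a strategy developed for molecular landscapes.
    result = basinhopping(loss, x0=np.zeros(2), niter=200, stepsize=0.5, seed=0)
    print("minimum found at", result.x, "with loss", result.fun)

Disconnectivity graphs built from the minima and transition states located this way are the field's standard visualization of landscape structure.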
Validating the Validation: Reanalyzing a large-scale comparison of Deep Learning and Machine Learning models for bioactivity prediction
We reanalyze the data generated by a recently published large-scale comparison of machine learning models for bioactivity prediction to highlight subtleties in model comparison. Our study reveals that "older" methods are non-inferior to "newer" deep learning methods.
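A typical route to such a non-inferiority conclusion is a paired, per-target comparison of the two model families. A minimal sketch with synthetic scores (all numbers invented here; the reanalysis works with the published benchmark results, and its exact statistical procedure may differ):

    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)
    # Synthetic per-target AUCs for a deep model and a classical model
    # (fabricated for illustration only).
    auc_deep = rng.uniform(0.70, 0.90, size=50)
    auc_classical = auc_deep + rng.normal(0.0, 0.02, size=50)

    # Paired test on per-target differences: does one model family
    # systematically outperform the other across targets?
    stat, p = wilcoxon(auc_deep, auc_classical)
    print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")

Pairing by target matters because per-target difficulty varies far more than the gap between model families, so unpaired comparisons can mask or manufacture differences.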