ICML Workshop on Uncertainty and Robustness in Deep Learning, 2019
We show that a common approach to Bayesian neural network inference often fails to model increased uncertainty in between separated clusters of observed data.
Download here
International Conference on Learning Representations (ICLR), 2020
Neural Processes are a powerful framework for learning from small datasets in the presence of uncertainty. This work adds convolutional structure to the Neural Process family. With my co-authors Yann Dubois and Jonathan Gordon, I wrote a blog post explaining what Neural Processes are; it also includes code to run many different Neural Process models.
Download here
Neural Information Processing Systems (NeurIPS), 2020
Bayesian neural networks aim to solve the problem of overconfident predictions using probabilistic modelling. However, we show that some common approximations used in Bayesian neural networks lead to undesirable behaviour. Along with Sebastian Farquhar and Yingzhen Li, I gave a short talk explaining this paper. You can also find the slides for a longer version of that talk (given with David Burt at the RIKEN Center).
Download here
Neural Information Processing Systems (NeurIPS), 2020
We expand on our previous work on Convolutional Conditional Neural Processes by building a model that can also capture dependencies between different outputs.
Download here
Advances in Approximate Bayesian Inference (AABI), 2021
We present a new member of the Neural Process family that meta-learns a map from observed datasets to posterior Gaussian processes.
Download here
Neural Information Processing Systems (NeurIPS), 2021
We investigate, both theoretically and empirically, how tight PAC-Bayes generalisation bounds can be made with small datasets.
Download here
Neural Information Processing Systems (NeurIPS), 2021
We propose a collapsed Evidence Lower Bound (ELBO) for Bayesian Neural Networks that alleviates underfitting.
Download here
Neural Information Processing Systems (NeurIPS) (spotlight presentation), 2023
Solving the sampling problem is a longstanding challenge in molecular dynamics, with crucial applications in understanding protein function and drug design. We train a generalisable normalising flow that accelerates sampling for small peptide systems.
Download here
International Conference on Learning Representations (ICLR), 2023
We show that applying neural processes autoregressively at test time leads to dramatic improvements in prediction quality, without the need to modify the training procedure or architecture.
Download here
NeurIPS Machine Learning for Structural Biology workshop (MLSB), 2023
Equivariant diffusion models such as RFdiffusion and Chroma have recently revolutionised the field of de novo protein design. We investigate flow matching as an alternative to diffusion for accelerating the sampling of novel proteins.
Download here