Posts by Collection

Publications

On the Expressiveness of Approximate Inference in Bayesian Neural Networks

Neural Information Processing Systems (NeurIPS), 2020

Bayesian neural networks use probabilistic modelling to address the problem of overconfident predictions. However, we show that some common approximations used in Bayesian neural networks lead to undesirable behaviour. Along with Sebastian Farquhar and Yingzhen Li, I gave a short talk explaining this paper. You can also find the slides for a longer version of that talk (given with David Burt at the RIKEN Center).

Download here

The Gaussian Neural Process

Advances in Approximate Bayesian Inference (AABI), 2021

We present a new member of the Neural Process family that meta-learns a map from observed datasets to posterior Gaussian processes.
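
The output of that map is a predictive mean and covariance over the target inputs. For reference, the object being predicted has the same form as the standard Gaussian process regression posterior; below is a minimal numpy sketch of that exact posterior (the squared-exponential kernel and noise level are illustrative assumptions, and this is the exact GP reference, not the Gaussian Neural Process itself).

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(xc, yc, xt, noise=0.1):
    """Exact GP posterior mean and covariance at targets xt given context (xc, yc)."""
    Kcc = rbf_kernel(xc, xc) + noise ** 2 * np.eye(len(xc))
    Ktc = rbf_kernel(xt, xc)
    Ktt = rbf_kernel(xt, xt)
    L = np.linalg.cholesky(Kcc)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yc))
    mean = Ktc @ alpha
    V = np.linalg.solve(L, Ktc.T)
    cov = Ktt - V.T @ V
    return mean, cov

# Toy usage: context drawn from a sine curve, targets on a wider grid.
xc = np.linspace(-2, 2, 5)
yc = np.sin(xc)
xt = np.linspace(-3, 3, 10)
mean, cov = gp_posterior(xc, yc, xt)
print(mean.shape, cov.shape)  # (10,), (10, 10)
```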

Download here

Autoregressive Conditional Neural Processes

International Conference on Learning Representations (ICLR), 2023

We show that applying neural processes autoregressively at test time leads to dramatic improvements in prediction quality, without the need to modify the training procedure or architecture.
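
To give a feel for the mechanism, here is a rough sketch of autoregressive application at test time: each target output is sampled in turn and then appended to the context set before the next target is predicted. The `predict` function below is a toy stand-in for a trained conditional neural process, not the model or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(xc, yc, xt):
    """Toy stand-in for a trained conditional neural process: returns a
    predictive mean and standard deviation at targets xt given context (xc, yc).
    Here it is a crude kernel-weighted average, purely for illustration."""
    d = np.abs(xt[:, None] - xc[None, :])
    w = np.exp(-d ** 2) + 1e-9
    w /= w.sum(axis=1, keepdims=True)
    mean = w @ yc
    std = 0.1 + 1.0 / (1.0 + w.max(axis=1))  # crude uncertainty proxy
    return mean, std

def autoregressive_sample(xc, yc, xt):
    """Sample target outputs one at a time, feeding each sample back into the
    context before predicting the next target."""
    xc, yc = xc.copy(), yc.copy()
    samples = []
    for x in xt:
        mean, std = predict(xc, yc, np.array([x]))
        y = rng.normal(mean[0], std[0])
        samples.append(y)
        xc = np.append(xc, x)  # the sampled point becomes context
        yc = np.append(yc, y)
    return np.array(samples)

# Toy usage: context from a sine curve, targets on a wider grid.
xc, yc = np.linspace(-2, 2, 8), np.sin(np.linspace(-2, 2, 8))
xt = np.linspace(-3, 3, 20)
print(autoregressive_sample(xc, yc, xt))
```

The point of the procedure is that dependencies between target points are captured through the sampled values fed back into the context, without retraining or changing the architecture.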

Download here

Fast Protein Backbone Generation with SE(3) Flow Matching

NeurIPS Machine Learning for Structural Biology Workshop (MLSB), 2023

Recently, equivariant diffusion models such as RFdiffusion and Chroma have revolutionised the field of de novo protein design. We investigate the use of flow matching as an alternative to diffusion to accelerate the sampling of novel proteins.
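
As a rough illustration of why flow matching can make sampling cheap, the sketch below integrates a learned vector field with a handful of Euler steps starting from a Gaussian base sample. The analytic `vector_field` is a placeholder chosen so the toy example runs; it is a plain Euclidean sketch, not the SE(3)-equivariant model from the paper.

```python
import numpy as np

def vector_field(x, t):
    """Placeholder for a trained flow-matching vector field v_theta(x, t).
    This toy analytic field transports any point towards a fixed target
    along straight-line paths as t goes from 0 to 1."""
    target = np.array([1.0, -2.0, 0.5])
    return (target - x) / (1.0 - t + 1e-3)

def sample_flow(n_steps=20, dim=3, seed=0):
    """Draw x_0 from a standard Gaussian and integrate dx/dt = v(x, t)
    from t = 0 to t = 1 with a simple Euler scheme."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * vector_field(x, t)
    return x

print(sample_flow())  # ends close to the toy target after only 20 steps
```

The appeal over diffusion sampling is that the learned field defines a deterministic ODE whose trajectories can be integrated accurately in relatively few steps.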

Download here
