
Michael Figurnov
(Михаил Фигурнов)

I am a Staff Research Scientist at DeepMind. Before joining DeepMind, I was a PhD student at the Bayesian Methods Research Group under the supervision of Dmitry Vetrov.

My research interests include deep learning, Bayesian methods, and protein folding. I worked on the AlphaFold system, which has been recognized as a solution to the protein folding problem.

Email  /  Bio  /  Google Scholar  /  Twitter  /  GitHub

News

May 2021: I was promoted to Staff Research Scientist.

Nov 2020: We announced a new version of AlphaFold at CASP14.

Publications
High Accuracy Protein Structure Prediction Using Deep Learning
John Jumper*, Richard Evans*, Alexander Pritzel*, Tim Green*, Michael Figurnov*, Kathryn Tunyasuvunakool*, Olaf Ronneberger*, Russ Bates*, Augustin Žídek*, Alex Bridgland*, Clemens Meyer*, Simon A A Kohl*, Anna Potapenko*, Andrew J Ballard*, Andrew Cowie*, Bernardino Romera-Paredes*, Stanislav Nikolov*, Rishub Jain*, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Martin Steinegger, Michalina Pacholska, David Silver, Oriol Vinyals, Andrew W Senior, Koray Kavukcuoglu, Pushmeet Kohli, Demis Hassabis
Fourteenth Critical Assessment of Techniques for Protein Structure Prediction (Abstract Book), 30 November – 4 December 2020
blog post / abstract

Monte Carlo Gradient Estimation in Machine Learning
Shakir Mohamed*, Mihaela Rosca*, Michael Figurnov*, Andriy Mnih*
JMLR, 2020
arxiv / code (TensorFlow) / code (JAX)

Tensor Train Decomposition on TensorFlow (T3F)
Alexander Novikov, Pavel Izmailov, Valentin Khrulkov, Michael Figurnov, Ivan V Oseledets
JMLR Open Source Software, 2020
arxiv / code (TensorFlow) / Python package

Measure-Valued Derivatives for Approximate Bayesian Inference
Mihaela Rosca*, Michael Figurnov*, Shakir Mohamed, Andriy Mnih
Bayesian Deep Learning (NeurIPS Workshop) oral, 2019
paper / talk (11 minutes) / code (TensorFlow) / code (JAX)

Variational Autoencoder with Arbitrary Conditioning
Oleg Ivanov, Michael Figurnov, Dmitry Vetrov
ICLR, 2019
arxiv / poster / code

Implicit reparameterization gradients
Michael Figurnov, Shakir Mohamed, Andriy Mnih
NeurIPS spotlight, 2018
arxiv / poster / spotlight video (3 minutes) / spotlight slides / code (integrated into TensorFlow and TensorFlow Probability, e.g. the Gamma, Beta, Dirichlet, and von Mises distributions, and mixtures of distributions with reparameterize=True)
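
A minimal sketch of how these implicitly reparameterized samplers can be used (assuming TensorFlow 2.x and a current TensorFlow Probability; the variable names are illustrative):

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

concentration = tf.Variable(2.0)
with tf.GradientTape() as tape:
  # Sampling from Gamma is differentiable w.r.t. concentration
  # thanks to implicit reparameterization gradients.
  samples = tfd.Gamma(concentration=concentration, rate=1.0).sample(1000)
  loss = tf.reduce_mean(samples)
grad = tape.gradient(loss, concentration)  # non-None, low-variance gradient

# For mixtures, reparameterization is opt-in:
mixture = tfd.MixtureSameFamily(
    mixture_distribution=tfd.Categorical(logits=tf.zeros(3)),
    components_distribution=tfd.Normal(loc=tf.zeros(3), scale=tf.ones(3)),
    reparameterize=True)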

Probabilistic Adaptive Computation Time
Michael Figurnov, Artem Sobolev, Dmitry Vetrov
Bulletin of the Polish Academy of Sciences; Deep Learning: Theory and Practice, 2018
paper / arxiv (slightly older version)

Spatially Adaptive Computation Time for Residual Networks
Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, Ruslan Salakhutdinov
CVPR, 2017
arxiv / poster / code (TensorFlow)

PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
Michael Figurnov, Aijan Ibraimova, Dmitry Vetrov, Pushmeet Kohli
NeurIPS, 2016
arxiv / poster / code (Caffe) / code (MatConvNet)

Robust Variational Inference
Michael Figurnov, Kirill Struminsky, Dmitry Vetrov
Advances in Approximate Bayesian Inference (NeurIPS Workshop), 2016
arxiv

Talks
Extending the Reparameterization Trick
DeepBayes Summer School, 2018
slides / video

Attention Models for Deep Learning
DeepBayes Summer School, 2017
slides / video (in Russian)


Thanks to Jon Barron for the template!