Dr Henry Moss
Lecturer in Mathematics and AI
Research Overview
- Scalable/interpretable Bayesian ML models: helping scientists better understand the world around us.
- Artificial intelligence for scientists: discovering equations, searching for molecules, and designing genes.
- Active learning: accelerating the design of emission-reducing technologies.
Research Interests
Fundamental AI Research: Building upon foundations in probabilistic modeling and statistical learning, I develop algorithms and theoretical guarantees for emulating, calibrating, and designing complex, costly systems.
AI Tools: I have deployed algorithms in the real world, e.g. designing electric motors with Mazda and heat exchangers for Reaction Engines, and I am currently building AI models of point clouds with Boeing. My optimisation algorithms are embedded in the core operations of Amazon Alexa, Meta, and Mazda.
Software Development: High-quality, domain-specific codebases are essential to furthering AI research. I have developed popular open-source Python ML libraries, including Trieste, GPflow, and GPJax, and, in collaboration with chemists, created an award-winning Gaussian process library now widely adopted across US biotech startups.
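To give a flavour of these libraries, the snippet below is a minimal Gaussian process regression sketch in the style of GPflow 2. The toy data, kernel choice, and optimiser settings are illustrative assumptions rather than an excerpt from any of the codebases named above.

```python
import numpy as np
import gpflow

# Toy 1-D regression data (illustrative only).
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 20).reshape(-1, 1)
Y = np.sin(6.0 * X) + 0.1 * rng.standard_normal(X.shape)

# Exact GP regression with a squared-exponential kernel.
model = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())

# Fit kernel hyper-parameters by maximising the marginal likelihood.
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

# Posterior mean and variance at unseen inputs.
X_new = np.linspace(0.0, 1.0, 100).reshape(-1, 1)
mean, var = model.predict_f(X_new)
```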
Current Research
Recent advances in high-throughput experimentation and computation now empower scientists in traditionally high-cost fields, such as drug discovery, materials science, and engineering, to tackle ambitious challenges that outstrip established experimental design methodology. Fortunately, generative AI, capable of producing novel images, molecules, and engineered structures, has the potential to fundamentally redefine how experiments are conceived, conducted, and iterated. My team is developing the algorithmic breakthroughs needed to harness the full potential of generative AI within experimental design, providing tools that accelerate scientific and industrial innovation.
Profile
My research interests lie at the intersection of Statistics and Machine Learning, focusing mainly on Bayesian optimisation. I leverage information-theoretic arguments to provide efficient and reliable hyper-parameter tuning for machine learning systems. My favourite application area is natural language processing (NLP), where we seek to learn from written and spoken text. Current state-of-the-art NLP systems pose particularly interesting tuning problems, as they can take days (if not weeks!) to train and have many configurable hyper-parameters. I am supervised by David Leslie (Department of Mathematics and Statistics) and Paul Rayson (School of Computing and Communications).
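As a sketch of the basic idea behind Bayesian optimisation for hyper-parameter tuning, the loop below fits a Gaussian process surrogate to a handful of expensive evaluations and chooses the next configuration by expected improvement. It uses scikit-learn rather than the libraries named elsewhere on this page, and the toy validation_loss objective, kernel, and evaluation budget are hypothetical stand-ins for a real tuning problem.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical 1-D stand-in for a validation-loss surface over a
# single hyper-parameter (e.g. a log learning rate).
def validation_loss(x):
    return np.sin(3.0 * x) + 0.2 * x**2

bounds = (-2.0, 2.0)
rng = np.random.default_rng(0)

# A few initial evaluations of the expensive objective.
X = rng.uniform(*bounds, size=(3, 1))
y = validation_loss(X).ravel()

for _ in range(10):
    # Fit a GP surrogate to the observations collected so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    # Expected improvement (for minimisation) over a dense grid of candidates.
    candidates = np.linspace(*bounds, 500).reshape(-1, 1)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Evaluate the objective at the most promising candidate and update the data.
    x_next = candidates[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_next])
    y = np.append(y, validation_loss(x_next).ravel())

print("best hyper-parameter:", X[np.argmin(y), 0], "loss:", y.min())
```

In this sketch the acquisition rule is expected improvement; information-theoretic alternatives keep the same surrogate-model loop and simply swap in a different acquisition criterion.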
I completed my undergraduate degree in Mathematics at the University of Cambridge (2016). After enjoying the statistical courses, I took a research placement in computational genetics at the Wellcome Sanger Institute. After graduating, I participated in the STOR-i internship, which led to completing an MRes and, since 2017, working towards a PhD.
STOR-i Centre for Doctoral Training
- MARS: Mathematics for AI in Real-world Systems
- Statistical Artificial Intelligence