Biography
Stefan is an Associate Professor in the Department of Engineering Science, Man Group Research Fellow in Financial Machine Learning and former Deputy Director of the Oxford-Man Institute of Quantitative Finance, a Research Associate at the Oxford Internet Institute, and a Mentor in the FinTech stream at the Creative Destruction Lab at Saïd Business School, all at the University of Oxford. He also works on commercial projects with Man Group, the funding partner of the Oxford-Man Institute, first as a Scientific Advisor and later as Principal Quant.
Stefan’s research focuses on applied machine learning in finance, economics and the natural sciences, including deep learning, reinforcement learning, network and NLP approaches. He is also interested in exploring early use cases of quantum computing. At the Oxford Internet Institute, Stefan teaches the intensive module on Machine Learning and the elective on Applied Machine Learning as part of the MSc in Social Data Science.
Through the University of Oxford, Stefan frequently engages in collaborative projects with industry partners including NVIDIA, Graphcore, Nokia and Lockheed Martin. He has extensive consulting experience in leading machine learning projects in application domains such as finance and healthcare with clients ranging from listed companies to SMEs and start-ups. Stefan holds a DPhil in Mathematical Physics from Imperial College.
Most Recent Publications
Slow momentum with fast reversion: a trading strategy using deep learning and changepoint detection
Linnet: limit order books within switches
Enhancing cross-sectional currency strategies by context-aware learning to rank with self-attention
Learning rates as a function of batch size: a random matrix theory approach to neural network training
Research Interests
- Applied Machine Learning
- Deep Learning for Time-Series
- Computational Finance
- Mathematical Physics
Awards and Honours
- Turing Fellowship
- Marie Curie Fellowship
- JSPS Fellowship
- Honorary Research Associate, Imperial College
- Schoeneborn Prize
- Springorum Medal
- Bernstein Fabozzi/Jacobs Levy Award 2023