Hi. I’m Louis Kirsch.

My mission is to automate AI research to generate superintelligence (ASI), for the benefit of humanity and the proliferation of intelligence. Currently, I am a Research Scientist at Google DeepMind. During my PhD with Jürgen Schmidhuber at IDSIA (The Swiss AI Lab), I pushed the boundaries of automatically discovering new generalizable learning algorithms for supervised and reinforcement learning, including recursively self-improving systems that modify themselves. Scaling neural networks is another key ingredient for ASI, which I pursued with modular networks during my Master of Research at University College London.

Short CV

  • 2024-2025: Research Scientist at Google DeepMind
  • 2022-2023: Research Intern at Google
  • Summer 2021: Research Scientist Intern at DeepMind
  • 2019-2023: PhD student with Jürgen Schmidhuber at IDSIA (The Swiss AI Lab)
  • Graduated as the best student of the class of 2018 at University College London, supervised by David Barber (MRes Computational Statistics and Machine Learning)
  • Graduated as the best student of the class of 2017 at Hasso Plattner Institute
    HPI is ranked 1st for computer science in most categories in Germany (CHE ranking 2015)
  • Self-employed software developer during high school and my undergraduate studies [Project selection]

News

01/2024 I have joined Google DeepMind as a Research Scientist to push the boundaries of automated AI research!

12/2022 When neural networks implement general-purpose in-context learners

07/2022 My invited talk at ICML DARL 2022 covers how we can learn how to learn without any human-engineered meta-optimization

02/2022 Our work at DeepMind on Introducing Symmetries to Black Box Meta Reinforcement Learning will appear at AAAI 2022!

10/2021 My work on Variable Shared Meta Learning (VSML) will appear at NeurIPS 2021!

09/2021 I collaborated with some great people at DeepMind on general-purpose Meta Learning during an internship.
Our paper: Introducing Symmetries to Black Box Meta Reinforcement Learning.

12/2020 I am an invited speaker at Meta Learn @ NeurIPS 2020.
Join me online on Dec 11th at 16:00 UTC to learn more about my newest work on General Meta Learning.

10/2020 I have been awarded a total of 550,000 GPU compute hours on the Swiss National Supercomputer.
Huge thanks to CSCS for making exciting new Meta Learning research possible!

12/2019 My first work on meta-learning RL algorithms has been accepted at ICLR 2020 with a spotlight talk!
arXiv link: Improving Generalization in Meta Reinforcement Learning using Learned Objectives
Read more about it in my blog post

Recent publications

A complete list can be found on Google Scholar.

  • General-Purpose In-Context Learning by Meta-Learning Transformers [arXiv]
    Preprint. Internship project at Google with James Harrison, Jascha Sohl-Dickstein, and Luke Metz

  • Eliminating Meta Optimization Through Self-Referential Meta Learning [arXiv]
    Workshop paper at ICML 2022 and AutoML 2022

  • Introducing Symmetries to Black Box Meta Reinforcement Learning [arXiv]
    Conference paper at AAAI 2022
    Internship project at DeepMind with Sebastian Flennerhag, Hado van Hasselt, Abram Friesen, Junhyuk Oh, and Yutian Chen

  • Meta Learning Backpropagation And Improving It [Blog] [arXiv]
    Workshop paper at NeurIPS Meta Learn 2020 (Kirsch and Schmidhuber 2020)
    Conference paper at NeurIPS 2021

  • Improving Generalization in Meta Reinforcement Learning using Learned Objectives [Blog] [PDF]
    Conference paper at ICLR 2020, preprint on arXiv (Kirsch et al. 2019)

  • Modular Networks: Learning to Decompose Neural Computation [PDF]
    Conference paper at NeurIPS 2018 (Kirsch et al. 2018)

  • Transfer Learning for Speech Recognition on a Budget [More]
    Workshop paper at ACL 2017 (Kunze and Kirsch et al. 2017)

  • Framework for Exploring and Understanding Multivariate Correlations
    Demo track paper at ECML PKDD 2017 (Kirsch et al. 2017)

Recent blog posts

Subscribe via RSS