REFORM

Weekly reading group on ML foundations and scalable reasoning systems.

Spring 2026 theme

Understanding and Improving LLMs via a Theoretical Lens

This quarter we will cover recent work on the internal structure of LLMs, compression and quantization, optimization and training methods, RL-theoretic viewpoints, and systems or algorithmic ideas for improving base-model performance.

Broader theme of REFORM (Rethinking Foundations of Real-world ML)

The last few years have seen rapid developments in the deployment and adoption of ML systems. Yet we lack a cohesive understanding of how these systems work, or of the principles and laws (if any!) that govern their behavior. To this end, this reading group explores the intersection of cutting-edge experiments and the theories that explain them, aiming to answer:

  • How might we devise theoretical models that not only explain unexpected phenomena, but also predict new phenomena that we can verify experimentally?
  • What are the right questions to ask and phenomena to explain, and at what level of abstraction should we aim to explain them?
  • How can tools from statistics, CS theory, and operations research inform a better understanding of machine learning algorithms and systems?

We meet every Thursday from 5:00 to 6:00 PM in CoDa E401 (exception: on April 23rd, we meet in CoDa W101).

Contact points
  • anaymehrotra1 [at] gmail [dot] com
  • saberi [at] stanford [dot] edu
  • gvelegkas [at] google [dot] com

Format
  • One deep-dive session each week
  • 20–30 minute discussant presentation
  • Open discussion and open problems

Focus areas
  • Language model scaling and training dynamics
  • Post-training and self-improvement
  • Evaluation, reliability, and reasoning behavior