Many of the most important optimization problems in practice are massive in scale, mathematically complex, and involve numerous unknown parameters. Machine learning offers a powerful way to address these challenges by uncovering hidden structure and improving decision quality, but integrating predictions into algorithms raises fundamental questions: which architectures align with combinatorial structure, and how can we ensure robustness to prediction errors? This talk presents two case studies. First, we show how graph neural networks can approximate the optimal dynamic program for online matching, yielding algorithms that generalize across graph sizes and achieve strong empirical performance. Second, we introduce calibration as a principled interface between machine learning and decision-making, demonstrating through the rent-or-buy and job scheduling problems that calibrated predictions yield both theoretical guarantees and practical improvements. This is joint work with Alexandre Hayderi, Amin Saberi, Anders Wikum, and Judy Hanwen Shen.
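For context, the rent-or-buy (ski rental) problem mentioned above is the classic setting in learning-augmented algorithms: renting costs 1 per day, buying costs b once, and the number of days is unknown in advance. The sketch below shows the standard prediction-based baseline (buy immediately if the prediction exceeds the break-even point, otherwise rent until break-even); it is illustrative background only, not the calibration-based algorithm presented in the talk.

```python
def cost(buy_price: int, actual_days: int, buy_day: int) -> int:
    """Total cost of renting (1 per day) until buy_day, then buying.

    If the season ends before buy_day, we only ever pay rent.
    """
    if actual_days < buy_day:
        return actual_days                 # rented every day, never bought
    return (buy_day - 1) + buy_price       # rent up to buy_day, then buy


def prediction_based_buy_day(buy_price: int, predicted_days: int) -> int:
    """Naive strategy that trusts the prediction.

    Buy on day 1 if the prediction exceeds the break-even point (buy_price);
    otherwise plan to buy only if the season reaches break-even.
    """
    return 1 if predicted_days >= buy_price else buy_price


# Example: buying costs 10; the prediction says 15 days, the truth is 12.
b, pred, truth = 10, 15, 12
alg = cost(b, truth, prediction_based_buy_day(b, pred))  # buys day 1 -> 10
opt = min(truth, b)                                      # offline optimum -> 10
```

When the prediction is accurate, this strategy matches the offline optimum; when it is badly wrong, its cost can degrade, which is exactly the robustness-to-error tension the abstract raises.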
Ellen Vitercik
Ellen Vitercik is an Assistant Professor at Stanford University with a joint appointment in the Department of Management Science and Engineering and the Department of Computer Science. Her research interests include machine learning, algorithm design, discrete and combinatorial optimization, and the interface between economics and computation. She is particularly interested in how machine learning can be applied to discrete optimization and algorithmic reasoning. Before joining Stanford, she was a Miller Fellow at UC Berkeley, hosted by Michael Jordan and Jennifer Chayes.