Meet the 2023 Graduate Research Fellowship Award Recipients

Fellows

Xuefeng Du

Xuefeng Du is a student at the University of Wisconsin - Madison. He is advised by Sharon Yixuan Li and his research focuses on open-world machine learning. Xuefeng is developing algorithms to enable open-world learning that can function safely and adaptively in the presence of evolving and unpredictable data streams. Xuefeng is a huge fan of sports, including basketball, swimming, and hiking.

Feiyang Lin

Feiyang Lin studies Mathematics at UC Berkeley. Her work focuses on algebraic geometry, and she is interested in questions about algebraic curves, their maps to projective spaces, and the geometry of associated moduli spaces. In her free time, she enjoys ultimate frisbee and currently plays for the Pie Queens.

Rachit Nigam

Rachit Nigam is a PhD student at Cornell University, advised by Adrian Sampson. His research focuses on language abstractions for accelerator design, in particular on new programming languages and compilers that enable the design of efficient hardware accelerators. Rachit uses advanced type systems to express hardware constraints in source-level languages, and modular, pass-based compilers to transform computational descriptions into high-performance hardware accelerators. His compiler infrastructure, Calyx (https://calyxir.org), has been adopted by the LLVM ecosystem and is being used to generate accelerators and support novel research in hardware accelerators. When not focusing on his research, Rachit enjoys hiking and nerding out on music synthesizers.

Ted Pyne

Ted Pyne is a graduate student at MIT and is advised by Ronitt Rubinfeld and Ryan Williams. Ted’s research focuses on pseudorandomness and derandomization; he is interested in the question of whether every space-efficient randomized algorithm can be transformed into a space-efficient deterministic one. His research uses many tools, including those from spectral graph theory. Ted is also interested in applying the techniques developed in this line of work to other problems in algorithms and complexity. He enjoys walking around cities, driving around the US, and kickboxing.

Olivine Silier

Olivine Silier is a student at UC Berkeley studying harmonic analysis and discrete geometry, advised by Ruixiang Zhang. Olivine has developed cell-partitioning techniques that yield the first proto-inverse theorem for Szemerédi–Trotter, and she plans to leverage these new analysis-flavored tools in an ambitious research program, including work on the Kakeya problem and the unit distance problem. Olivine grew up in Saint-Germain-en-Laye, France. If language can be thought of as the bridge between logic and literature, perhaps Olivine's multilingual education could explain her dual love for books and for mathematics. One day, she hopes to be a fiction author, but for now she spends most of her time in the Wonderland that is discrete geometry and harmonic analysis. She is an avid (if incompetent) climber, an enthusiastic cook, and was named after a mineral!

Dmitrii Zakharov

Dmitrii Zakharov is a PhD student at MIT. He is advised by Lisa Sauermann, and his research centers on discrete geometry and extremal combinatorics. Specifically, he thinks about extremal properties of combinatorial and geometric structures, including graphs, hypergraphs, and configurations of points and lines in the plane. These problems often require a mixture of methods from analysis, algebra, combinatorics, and probability.

Finalists

Anna Abasheva

Anna Abasheva is a PhD student at Columbia University. She is advised by Giulia Saccà, and her research focuses on algebraic geometry. Anna studies algebraic varieties, geometric objects that arise as solutions to systems of polynomial equations. She is interested in the interplay between their geometric, algebraic, and combinatorial properties, especially for a specific class of varieties called hyperkähler varieties. Anna loves learning languages and recently took up German as her fourth language. She also has a film blog.

Jatin Arora

Jatin Arora is working towards his PhD at Carnegie Mellon University, advised by Umut Acar. Jatin works on making functional programming languages fast and efficient. His research currently focuses on scalable, parallel techniques for garbage collection, a fundamental problem for parallel functional programs because their already high rates of allocation grow even further with parallelism.

Christina Baek

Christina Baek is a PhD student at Carnegie Mellon studying machine learning. She is advised by Zico Kolter and Aditi Raghunathan, and her research focuses on out-of-distribution generalization. She works on better understanding distribution shifts that occur in the wild and on designing ways to assess and improve a model's performance under such shifts with limited labeled data. She is also interested in how models can be optimized to continuously adapt to new information. Christina enjoys music and walks.

Adam Block

Adam Block is a PhD student at MIT. He is advised by Alexander Rakhlin, and his research focuses on machine learning. He works on designing provably effective algorithms in a variety of structured learning settings. Recently, he has been especially interested in online learning and sequential decision making, where the learner receives data sequentially and actively updates predictions and decisions as new information arrives, with applications to robotics, personalization, and much more.

Kaidi Cao

Kaidi Cao is a PhD student at Stanford University, where he is advised by Jure Leskovec. Kaidi's research focuses on data- and computation-efficient machine learning. He aims to develop next-generation resource-efficient machine learning methods, including: 1) data-efficient deep learning that transfers knowledge from existing data or annotations, and 2) computationally efficient algorithms that enable learning from large-scale datasets.

Andrew Ilyas

Andrew Ilyas is studying EECS at MIT where he is advised by Aleksander Madry and Constantinos Daskalakis. Andrew’s research is in machine learning, with a focus on where errors, biases, or other features of the data-collection process (adversely) affect model predictions. In these settings, his research sets out to understand — both in theory and in practice — (a) why and how such effects arise; (b) how we might identify and mitigate them algorithmically; and (c) what benefits the resulting models confer.

Ce Jin

Ce Jin is a PhD student at MIT where he studies theoretical computer science. He is advised by Ryan Williams and Virginia Vassilevska Williams and his research focuses on fine-grained complexity theory. More specifically, he studies algorithms and conditional lower bounds for fundamental computational problems in graph theory, pattern matching, and combinatorial optimization.

Misha Khodak

Misha Khodak is a PhD student at Carnegie Mellon University studying theoretical machine learning. He is advised by Nina Balcan and Ameet Talwalkar, and his research focuses on the foundations and applications of machine learning, especially meta-learning and algorithm design. Misha develops machine learning methods for diverse tasks, including model compression, neural architecture search, the theory of unsupervised learning, and natural language processing. His work includes some of the first provable guarantees for gradient-based meta-learning and end-to-end guarantees for learning-augmented algorithms. Misha likes skiing, history books, and loose-leaf tea.

Marina Knittel

Marina Knittel is a student of computer science at the University of Maryland. She is advised by MohammadTaghi Hajiaghayi and John Dickerson and her research focuses on fair and parallel graph algorithms. Marina believes that with the advent of big data, we must ensure that vital decision-making systems keep pace with the growing scale of data and actively mitigate their own shortcomings. She studies algorithms on massive networks that achieve these two goals. For scalability, she studies classical graph problems in the Massively Parallel Computation (MPC) model. For bias mitigation, she studies fairness constraints in the context of partitioning problems such as clustering and allocation. In her spare time, she enjoys creative writing, Dungeons and Dragons, singing, and swing dancing.

Zhexiao Lin

Zhexiao Lin is a student at UC Berkeley advised by Peng Ding, Peter Bickel, and Fang Han. His research focuses on causal inference, graph-based methods, and econometrics. He explores two broad tracks: (i) how to measure treatment effects and assess the effects of policy interventions using econometric methods, and (ii) how to discover the intrinsic structure of large datasets in economics.

Surya Mathialagan

Surya Mathialagan is a PhD student in the CS theory group at MIT. She is advised by Vinod Vaikuntanathan and Virginia Vassilevska Williams, and her research focuses on fine-grained complexity and cryptography. She works on understanding the true runtimes of graph algorithms and is also interested in data-structure problems in cryptography and privacy. In her free time, Surya enjoys drawing, playing the guitar, and dancing.

Yifan Qiao

Yifan Qiao is a PhD student at UCLA advised by Harry Xu and Miryung Kim. His research focuses on building efficient systems for cloud computing and machine learning. Combining insights from ML algorithms, operating systems, and network stacks, Yifan designs and builds disaggregated cloud systems. In his spare time, he enjoys reading and playing Go.

Jacob Zavatone-Veth

Jacob Zavatone-Veth is a PhD student at Harvard University, advised by Cengiz Pehlevan. His research focuses on how networks of neurons can perform useful computations. He applies the tools of statistical physics to elucidate how artificial networks function, and builds models of biological networks to advance our understanding of the brain's algorithms. By understanding both biological intelligence and machine learning, he aims to create more naturally intelligent artificial systems. Jacob is half Swiss, but has never skied.

Yunkun Zhou

Yunkun Zhou is a student at Stanford University advised by Jacob Fox. His research explores extremal and additive combinatorics, specifically discrepancy theory. He is also broadly interested in combinatorics and its applications to theoretical computer science.

Honorable Mentions

Jialu Bao

Cornell, Computer Science

Salva Rühling Cachay

UC San Diego, Computer Science and Engineering

Lingjiao Chen

Stanford, Computer Science

Sanath Devalapurkar

Harvard, Mathematics

Dingding Dong

Harvard, Mathematics

Brice Huang

MIT, Electrical Engineering and Computer Science

Roman Krutowski

UCLA, Mathematics

Anunay Kulshrestha

Princeton, Computer Science (Center for Information Technology Policy)

Sadhika Malladi

Princeton, Computer Science

Arya McCarthy

Johns Hopkins, Computer Science

Elizabeth Pratt

Berkeley, Mathematics

Divya Shanmugam

MIT, Electrical Engineering and Computer Science

Jamison Sloan

MIT, Electrical Engineering and Computer Science

Andrew Wagenmaker

University of Washington, Computer Science & Engineering

Ziyang Xu

Princeton, Computer Science

Charles Yuan

MIT, Electrical Engineering and Computer Science

Wenhao Zhan

Princeton, Electrical and Computer Engineering

Jieyu Zhang

University of Washington, Computer Science

Aleksandr Zimin

MIT, Mathematics