I am a postdoctoral research fellow at AI^2, Princeton AI Lab, working on the principled understanding of large language models and their applications in engineering, including design and control. I work closely with Prof. Mengdi Wang at Princeton.

Previously, I earned my Ph.D. in Computer Science from the University of California, Los Angeles (UCLA), where I was advised by Prof. Quanquan Gu. Before that, I earned my Bachelor of Science in EECS from Peking University, summa cum laude, where I was fortunate to be advised by Prof. Liwei Wang.

My research interests cover various aspects of machine learning. Currently, I am particularly interested in applying insights from reinforcement learning and control theory to LLM training and inference, in the context of alignment and reasoning. You can find my curriculum vitae here.

🔥 News

  • 2024.06-09: This summer, I interned at Meta GenAI, where I worked on LLM alignment and reward modeling.
  • 2024.05:  🎉🎉 2 papers accepted to ICML 2024.
  • 2024.01:  🎉🎉 2 papers accepted to ICLR 2024.
  • 2023.08:   It is my great honor to have been awarded the UCLA Dissertation Year Fellowship!
  • 2023.07:  🎉🎉 1 paper accepted to ICML 2023, Hawaii.
  • 2022.09:  🎉🎉 2 papers accepted to NeurIPS 2022, New Orleans.

📝 Publications & Preprints

📖 Teaching

💬 Academic Service

  • Reviewer for NeurIPS, ICML, ICLR, AISTATS, AAAI, IJCAI, and other conferences/journals in machine learning and data mining.
  • Senior PC member of AAAI'23.