I am an AI researcher at Meta Superintelligence (TBD) Lab, working on large-scale reinforcement learning for frontier language models. Previously, I worked on scaling reinforcement learning at xAI.

Before that, I was a postdoctoral research fellow at AI^2, the Princeton AI Lab, working on the principled understanding of large language models and their applications to engineering, including design and control. I worked closely with Prof. Mengdi Wang at Princeton.

I earned my Ph.D. in Computer Science from the University of California, Los Angeles (UCLA), where I was advised by Prof. Quanquan Gu. Before that, I earned my Bachelor of Science in EECS from Peking University, summa cum laude, where I was very fortunate to be advised by Prof. Liwei Wang.

Curriculum Vitae.

🔥 News

  • 2025.01:  🎉🎉 3 papers accepted to ICLR 2025.
  • 2024.06-09: I interned at Meta Gen AI, where I worked on LLM alignment and reward modeling.
  • 2024.05:  🎉🎉 2 papers accepted to ICML 2024.
  • 2024.01:  🎉🎉 2 papers accepted to ICLR 2024.
  • 2023.08:  I am honored to have been awarded the UCLA Dissertation Year Fellowship!
  • 2023.07:  🎉🎉 1 paper accepted to ICML 2023, Hawaii.
  • 2022.09:  🎉🎉 2 papers accepted to NeurIPS 2022, New Orleans.

📝 Publications & Preprints

📖 Teaching

💬 Academic Service

  • Reviewer for NeurIPS, ICML, ICLR, AISTATS, AAAI, IJCAI, and other conferences/journals in machine learning and data mining.
  • Senior PC member of AAAI'23.