Research

The long-term research goal is to understand, evaluate, and enhance modern AI models, such as pre-trained and large foundation models. We create new theory, algorithms, applications, and open-source libraries to achieve this goal.
- Machine learning with foundation models: I’m generally interested in designing algorithms and applications that make machine learning systems more robust, trustworthy, and responsible. Related topics include machine learning with foundation models, robust machine learning, OOD / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications. How can we efficiently use, adapt, and align large foundation models? How can we enhance their robustness and reliability? How can we make them more interpretable? Recently, I created a research topic named catastrophic inheritance (vision paper, DMLR’24).
- Philosophy of language models: understanding how LMs work and where their limitations lie, including evaluation, enhancement, and agent applications.
- Related papers: DyVal (ICLR’24 spotlight), DyVal2 (ICML’24), PromptBench (JMLR’24), PromptRobust (CCS LAMPS)
- Intersection of generative AI and social sciences: How can we measure the impact of generative AI on different fields? How can we use powerful AI models to assist interdisciplinary research? How can we use existing knowledge from the social sciences to better understand AI models?
Media Coverage
- NeurIPS 2024 with Jindong Wang and Steven Euijong Whang, Microsoft Research Podcast. December 2024. [Webpage]
- The Answer To Why Emotionally Worded Prompts Can Goose Generative AI Into Better Answers And How To Spur A Decidedly Positive Rise Out Of AI, by Forbes. November 2023. [Webpage]
- CulturePark for low-resource large language models, by MIT Technology Review. June 2024. [Webpage]
- Epic and Generative AI, by Epic. December 2024. [Webpage]
- Unveiling the Power of Semi-Supervised Learning: The Unified Semi-Supervised Learning Benchmark, by PyTorch. May 2024. [Webpage]
- EmotionPrompt in RAG, by LlamaIndex. August 2023. [Webpage]
- Exploring the effects of emotional stimuli to large language models, by TechXplore. September 2023. [Webpage]
- CompeteAI: An Artificial Intelligence AI Framework that Understands the Competition Dynamics of Large Language Model-based Agents, by Daily.dev, July 2024. [Webpage]
Funding and Grants
- Principal Investigator. Microsoft Accelerate Foundation Model Research program. 02/01/2025 – 06/30/2025.
- Co-Principal Investigator. “Mitigating Ethical AI Threats: Dynamic Benchmarks for Securing Multimodal Social Intelligence in Large Language Models (LLMs)”. Awarded by The Commonwealth Cyber Initiative (CCI). 03/01/2025 – 02/28/2026.