Research

The long-term research goal is to understand, evaluate, and enhance modern AI models, such as pre-trained and large foundation models. We create new theory, algorithms, applications, and open-source libraries to achieve this goal.

  • Machine learning: I’m generally interested in designing algorithms and applications that make machine learning systems more robust, trustworthy, and responsible. Related topics include machine learning with foundation models, robust machine learning, OOD / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications.
  • Large language models: We mainly focus on understanding the potential and limitations of large foundation models. Related topics: LLM evaluation and enhancement.
  • AI for social sciences: How can we measure the impact of generative AI on different domains? How can powerful AI models assist interdisciplinary fields? How can existing social science knowledge help us better understand AI behaviors?

Our research consists of the following topics with selected publications: [View by year] [Google Scholar]

Foundation model understanding
Machine learning and large foundation models

Open source:

AI for social sciences:

Out-of-distribution (Domain) generalization and general machine learning
Semi-supervised learning for low-resource learning
Safe transfer learning for security
Imbalanced learning for long-tailed tasks
Miscellaneous
  1. An easy-to-use speech recognition toolkit based on ESPnet: EasyESPNet
  2. Leading the transfer learning tutorial (迁移学习简明手册, Concise Handbook of Transfer Learning) on GitHub: Tutorial
  3. I also lead other popular research projects: Machine learning, Activity recognition
  4. I founded a software studio, Pivot Studio, and built many applications during 2010-2014: Our applications