Our long-term research goal is to build robust models for modern AI, such as pre-trained models and large models. To achieve this goal, we develop new theory, algorithms, applications, and open-source libraries. Currently, we are especially interested in the robustness of large language models (LLMs).

Our research consists of the following topics with selected publications: [View by year]

Out-of-distribution (Domain) generalization and adaptation for distribution shift
Semi-supervised learning for low-resource learning
Safe transfer learning for security
Imbalanced learning for long-tailed tasks
  1. An easy-to-use speech recognition toolkit based on ESPnet: EasyESPNet
  2. Leading the transfer learning tutorial (迁移学习简明手册, "A Concise Handbook of Transfer Learning") on GitHub: Tutorial
  3. I also lead other popular research projects: Machine learning, Activity recognition
  4. I founded a software studio, Pivot Studio, and built many applications from 2010 to 2014: Our applications