Research

Our long-term research goal is to understand, evaluate, and enhance modern AI models, such as pre-trained and large foundation models. To this end, we develop new theory, algorithms, applications, and open-source libraries. We are currently focused on the evaluation and robustness enhancement of large language models (LLMs).

Our research covers the following topics, with selected publications: [View by year] [Google Scholar]

New: large models

Evaluation: (website: https://llm-eval.github.io/)

Enhancement: (website: https://llm-enhance.github.io/)

Open source:

Out-of-distribution (Domain) generalization and adaptation for distribution shift
Semi-supervised learning for low-resource learning
Safe transfer learning for security
Imbalanced learning for long-tailed tasks
Miscellaneous
  1. An easy-to-use speech recognition toolkit based on ESPnet: EasyESPNet
  2. Leading the transfer learning tutorial (èżç§»ć­Šäč çź€æ˜Žæ‰‹ć†Œ, "Concise Handbook of Transfer Learning") on GitHub: Tutorial
  3. I’m also leading other popular research projects: Machine learning, Activity recognition
  4. I founded a software studio, Pivot Studio, and built many applications from 2010 to 2014: Our applications