Senior Researcher, Microsoft Research Asia
Building 2, No. 5 Danling Street, Haidian District, Beijing, China
jindongwang [at] outlook.com, jindong.wang [at] microsoft.com
Google Scholar | DBLP | Github || Twitter/X | Zhihu | Wechat | Bilibili || CV | CV (Chinese)
Dr. Jindong Wang is currently a Senior Researcher at Microsoft Research Asia. He obtained his Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences in 2019. In 2018, he was a visiting student in Qiang Yang’s group at Hong Kong University of Science and Technology. His research interests include robust machine learning, transfer learning, semi-supervised learning, and federated learning; his recent focus is large language models. He has published over 50 papers with 7,000+ citations at leading conferences and journals such as ICLR, NeurIPS, TKDE, and TASLP, and has 6 highly cited papers according to Google Scholar metrics. He received best paper awards at ICCSE’18 and the IJCAI’19 federated learning workshop, as well as the prestigious excellent Ph.D. thesis award (only 1 awarded at ICT each year). In 2022 and 2023, he was selected as one of the AI 2000 Most Influential Scholars by AMiner for the period 2013-2023. He serves as a senior program committee member of IJCAI and AAAI, and as a reviewer for top conferences and journals such as ICML, NeurIPS, ICLR, CVPR, TPAMI, and AIJ. He has open-sourced several projects to help build a better community, including transferlearning, TorchSSL, USB, personalizedFL, and robustlearn, which have received over 12K stars on Github. He published the textbook Introduction to Transfer Learning to help beginners quickly learn transfer learning, and gave tutorials at IJCAI’22, WSDM’23, and KDD’23.
Research interests: robust machine learning, out-of-distribution / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications such as activity recognition and computer vision. These days, I’m particularly interested in the evaluation and enhancement of Large Language Models (LLMs). See this page for more details. Interested in an internship or collaboration? Contact me.
Announcement: I’m experimenting with a new form of research collaboration. Click here if you are interested!
Announcement: Call for papers for the ACM TIST special issue on Evaluations of Large Language Models! [more]
- Nov 26, 2023: Our paper “UP-Net: An Uncertainty-Driven Prototypical Network for Few-Shot Fault Diagnosis” is accepted by IEEE TNNLS!
- Nov 21, 2023: Our paper “FIXED: Frustratingly Easy Domain Generalization with Mixup” is accepted by the Conference on Parsimony and Learning (CPAL) 2023! [arxiv]
- Nov 1, 2023: Our paper “Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition” is accepted by UbiComp 2024! [arxiv]
- Oct 27, 2023: Our new work “CompeteAI: Understanding the Competition Behaviors in Large Language Model-based Agents” is released on arXiv. [paper]
- Oct 13, 2023: I’m selected as one of the World’s Top 2% Scientists by Stanford University! [news]
- Oct 8, 2023: Our paper “Out-of-Distribution Generalization in Text Classification: Past, Present, and Future” is accepted by EMNLP 2023! [paper]
- 6 of my papers are highly cited and ranked in the top 20 globally over the past 5 years according to Google Scholar metrics. See here.
- I wrote a popular book, Introduction to Transfer Learning, to make it easy to learn, understand, and use transfer learning.
- I lead the most popular transfer learning and semi-supervised learning projects on Github: the Transfer learning repo, Semi-supervised learning repo, and Personalized federated learning repo.
- I was selected for the 2022 AI 2000 Most Influential Scholars list by AMiner in recognition of my contributions to the field of multimedia between 2012-2021 (ranked 49/2000).