Jindong Wang

Senior Researcher, Microsoft Research Asia
Building 2, No. 5 Danling Street, Haidian District, Beijing, China
jindongwang [at] outlook.com, jindong.wang [at] microsoft.com
Google Scholar | DBLP | GitHub || Twitter/X | Zhihu | WeChat | Bilibili || CV | CV (Chinese)

Dr. Jindong Wang is a Senior Researcher at Microsoft Research Asia. He obtained his Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences in 2019, where he received the Excellent Ph.D. Thesis Award. His research interests include machine learning, large language models, and AI for social science. He has published over 50 papers with 10,000+ citations at leading conferences and journals such as ICML, ICLR, NeurIPS, TPAMI, and IJCV. His research has been covered by Forbes and other international media. In 2023, he was selected by Stanford University as one of the World's Top 2% Scientists and by AMiner as one of the AI 2000 Most Influential Scholars. Several of his papers are Google Scholar highly cited papers, Hugging Face featured papers, and Paper Digest most influential papers. He received best paper awards at ICCSE'18 and an IJCAI'19 workshop. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor of ACM Transactions on Intelligent Systems and Technology (TIST), an area chair for NeurIPS, KDD, ACM MM, and ACML, and a senior program committee member for IJCAI and AAAI. He leads several impactful open-source projects, including transferlearning, PromptBench, TorchSSL, USB, personalizedFL, and robustlearn, which together have received over 16K stars on GitHub. He is the author of the book Introduction to Transfer Learning and has given tutorials at IJCAI'22, WSDM'23, KDD'23, and AAAI'24.

Research interests (see this page for more details):

  • Machine learning: I am broadly interested in designing algorithms and applications that make machine learning systems more robust, trustworthy, and responsible. Related topics include robust machine learning, OOD/domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications.
  • Large language models: We focus on understanding the potential and limitations of large foundation models. Related topics: LLM evaluation and enhancement.
  • AI for social sciences: How to measure the impact of generative AI on different domains? How to assist interdisciplinary domains using powerful AI models? How to use existing social science knowledge to help us better understand AI behaviors?
  • Interested in an internship or collaboration? Contact me. I am experimenting with a new form of research collaboration; click here if you are interested!


News
Jul 2, 2024 Our paper “SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization” is accepted by ECCV 2024! [paper]
May 16, 2024 We have 4 collaborative papers accepted by ACL 2024 (2 main, 2 findings). Congrats to authors!
May 14, 2024 Invited to be an area chair for NeurIPS 2024 main track.
May 7, 2024 The PromptBench framework has been accepted by the JMLR open-source track! NegativePrompt (a variation of EmotionPrompt) is accepted by the IJCAI 2024 main track!
May 2, 2024 We have 8 papers accepted by ICML 2024 (4 from my team and 4 collaborative work). Congrats!
Jan 20, 2024 I was invited to be an Area Chair for ACM Multimedia 2024.


Highlights
  1. Six of my papers are highly cited, ranking in the global top 20 over the past five years in Google Scholar metrics (see here). I also have six papers featured by Hugging Face.
  2. I wrote a popular book, Introduction to Transfer Learning, to make transfer learning easy to learn, understand, and use.
  3. I lead the most popular transfer learning, semi-supervised learning, and LLM evaluation projects on GitHub: the transfer learning repo, the semi-supervised learning repo, PromptBench for LLM evaluation, and the personalized federated learning repo.
  4. In 2023, I was selected as one of the World's Top 2% Scientists by Stanford University and one of the 2022 AI 2000 Most Influential Scholars by AMiner.

Selected publications

  1. CompeteAI: Understanding the Competition Behaviors in Large Language Model-based Agents
    Qinlin Zhao, Jindong Wang#, Yixuan Zhang, Yiqiao Jin, Kaijie Zhu, Hao Chen, and Xing Xie
    International Conference on Machine Learning (ICML) 2024 | [ arXiv Code ]
  2. The Good, the Bad, and Why: Unveiling Emotions in Generative AI
    Cheng Li, Jindong Wang#, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie
    International Conference on Machine Learning (ICML) 2024 | [ arXiv Code ]
  3. DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization
    Wang Lu, Jindong Wang#, Xinwei Sun, Yiqiang Chen, Xiangyang Ji, Qiang Yang, and Xing Xie
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2024 | [ arXiv Code Zhihu ]
  4. DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks
    Kaijie Zhu, Jiaao Chen, Jindong Wang#, Neil Zhenqiang Gong, Diyi Yang, and Xing Xie
    International Conference on Learning Representations (ICLR) 2024 | [ arXiv Code ]
    (Spotlight, top 5%)
  5. Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
    Hao Chen, Jindong Wang#, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, and Bhiksha Raj
    International Conference on Learning Representations (ICLR) 2024 | [ arXiv Code Zhihu ]
    (Spotlight, top 5%)
  6. Domain-Specific Risk Minimization for Out-of-Distribution Generalization
    Yi-Fan Zhang, Jindong Wang#, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, Dacheng Tao, and Xing Xie
    The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2023 | [ arXiv Code Video Zhihu ]
  7. Out-of-distribution Representation Learning for Time Series Classification
    Wang Lu, Jindong Wang#, Xinwei Sun, Yiqiang Chen, and Xing Xie
    International Conference on Learning Representations (ICLR) 2023 | [ arXiv Code Website Zhihu ]
  8. FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
    Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang#, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, Bernt Schiele, and Xing Xie
    International Conference on Learning Representations (ICLR) 2023 | [ arXiv Code Website Zhihu ]
  9. FlexMatch: Boosting Semi-supervised Learning with Curriculum Pseudo Labeling
    Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang#, Manabu Okumura, and Takahiro Shinozaki
    Advances in Neural Information Processing Systems (NeurIPS) 2021 | [ arXiv PDF Code Slides Video Zhihu ]
    (500+ citations)
  10. AdaRNN: Adaptive Learning and Forecasting of Time Series
    Yuntao Du, Jindong Wang#, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, and Chongjun Wang
    The 30th ACM International Conference on Information & Knowledge Management (CIKM) 2021 | [ arXiv PDF Code ]
    (Paper Digest most influential CIKM paper)
  11. Visual Domain Adaptation with Manifold Embedded Distribution Alignment
    Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, and Philip S. Yu
    The 26th ACM International Conference on Multimedia (ACM MM) 2018 | [ PDF Supp Code Poster ]
    (500+ citations; 2nd most cited paper in MM’18)
  12. Balanced Distribution Adaptation for Transfer Learning
    Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, and Zhiqi Shen
    IEEE International Conference on Data Mining (ICDM) 2017 | [ HTML PDF Code ]
    (500+ citations; most cited paper in ICDM’17)