Jindong Wang

Senior Researcher, Microsoft Research Asia
Building 2, No. 5 Danling Street, Haidian District, Beijing, China
jindongwang [at] outlook.com, jindong.wang [at] microsoft.com
Google Scholar | DBLP | GitHub || Twitter/X | Zhihu | WeChat | Bilibili || CV | CV (Chinese)

Dr. Jindong Wang is currently a Senior Researcher at Microsoft Research Asia. He obtained his Ph.D. from the Institute of Computing Technology, Chinese Academy of Sciences in 2019. In 2018, he visited Prof. Qiang Yang’s group at Hong Kong University of Science and Technology. His research interests include robust machine learning, transfer learning, semi-supervised learning, and federated learning; his recent interest is large language models. He has published over 50 papers with 8,000+ citations at leading conferences and journals such as ICLR, NeurIPS, TPAMI, TKDE, and IJCV. He has 6 highly cited papers according to Google Scholar metrics and 6 Hugging Face featured papers. He received the best paper award at ICCSE’18 and at the IJCAI’19 federated learning workshop, as well as the prestigious excellent Ph.D. thesis award (only one awarded at ICT each year). In 2023, he was selected by Stanford University as one of the World’s Top 2% Scientists and by AMiner as one of the AI Most Influential Scholars. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), a senior program committee member of IJCAI and AAAI, and a reviewer for top conferences and journals such as ICML, NeurIPS, ICLR, CVPR, TPAMI, and AIJ. He leads several impactful open-source projects, including transferlearning, PromptBench, torchSSL, USB, personalizedFL, and robustlearn, which have received over 16K stars on GitHub. He published the textbook Introduction to Transfer Learning to help beginners quickly learn transfer learning. He has given tutorials at IJCAI’22, WSDM’23, KDD’23, and AAAI’24.

Research interests: robust machine learning, OOD / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications.

Recent interest: evaluation and enhancement of Large Language Models (LLMs). See this page for more details. Interested in an internship or collaboration? Contact me. I’m experimenting with a new form of research collaboration; click here if you are interested!

Announcement: Call for papers for the ACM TIST special issue on Evaluations of Large Language Models! [more]

News

Jan 20, 2024 I was invited to be an Area Chair for ACM Multimedia 2024.
Jan 16, 2024 We have 2 papers accepted as spotlight and 2 as poster at ICLR 2024!
Jan 14, 2024 Our paper “DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization” is accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)! [paper]
Dec 28, 2023 Our survey paper “A survey on evaluation of large language models” is accepted by ACM TIST! [paper] [website]
Dec 22, 2023 I was invited to be an Associate Editor for IEEE Transactions on Neural Networks and Learning Systems (TNNLS)!
Dec 18, 2023 Our Diversify algorithm (ICLR’23) now extends to virtual reality and the paper “Generating Virtual Reality Interaction Data from Out-of-Distribution Desktop Data: An Exploration Using Stroke Gestures” is accepted by IEEE VR 2024!

Highlights

  1. 6 of my papers are highly cited and ranked in the top 20 globally over the recent 5 years according to Google Scholar metrics. See here. I also have 6 papers featured by Hugging Face.
  2. I wrote a popular book Introduction to Transfer Learning to make it easy to learn, understand, and use transfer learning.
  3. I lead the most popular transfer learning, semi-supervised learning, and LLM evaluation projects on GitHub: Transfer learning repo, Semi-supervised learning repo, PromptBench for LLM evaluation, Personalized federated learning repo.
  4. In 2023, I was selected as one of the World's Top 2% Scientists by Stanford and one of the 2022 AI 2000 Most Influential Scholars by AMiner.

Selected publications

  1. DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization
    Wang Lu, Jindong Wang#, Xinwei Sun, Yiqiang Chen, Xiangyang Ji, Qiang Yang, and Xing Xie
    IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 2024 | [ arXiv Code Zhihu ]
  2. DyVal: Dynamic Evaluation of Large Language Models for Reasoning Tasks
    Kaijie Zhu, Jiaao Chen, Jindong Wang#, Neil Zhenqiang Gong, Diyi Yang, and Xing Xie
    International Conference on Learning Representations (ICLR) 2024 | [ arXiv Code ]
    (Spotlight, top 5%)
  3. Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
    Hao Chen, Jindong Wang#, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, and Bhiksha Raj
    International Conference on Learning Representations (ICLR) 2024 | [ arXiv Code Zhihu ]
    (Spotlight, top 5%)
  4. Domain-Specific Risk Minimization for Out-of-Distribution Generalization
    Yi-Fan Zhang, Jindong Wang#, Jian Liang, Zhang Zhang, Baosheng Yu, Liang Wang, Dacheng Tao, and Xing Xie
    The 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2023 | [ arXiv Code Video Zhihu ]
  5. Out-of-distribution Representation Learning for Time Series Classification
    Wang Lu, Jindong Wang#, Xinwei Sun, Yiqiang Chen, and Xing Xie
    International Conference on Learning Representations (ICLR) 2023 | [ arXiv Code Website Zhihu ]
  6. FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
    Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang#, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, Bernt Schiele, and Xing Xie
    International Conference on Learning Representations (ICLR) 2023 | [ arXiv Code Website Zhihu ]
  7. FlexMatch: Boosting Semi-supervised Learning with Curriculum Pseudo Labeling
    Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang#, Manabu Okumura, and Takahiro Shinozaki
    Advances in Neural Information Processing Systems (NeurIPS) 2021 | [ arXiv PDF Code Slides Video Zhihu ]
    (500+ citations)
  8. AdaRNN: Adaptive Learning and Forecasting of Time Series
    Yuntao Du, Jindong Wang#, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, and Chongjun Wang
    The 30th ACM International Conference on Information & Knowledge Management (CIKM) 2021 | [ arXiv PDF Code ]
    (Paper Digest most influential CIKM paper)
  9. Visual Domain Adaptation with Manifold Embedded Distribution Alignment
    Jindong Wang, Wenjie Feng, Yiqiang Chen, Han Yu, Meiyu Huang, and Philip S. Yu
    The 26th ACM International Conference on Multimedia (ACM MM) 2018 | [ PDF Supp Code Poster ]
    (500+ citations; 2nd most cited paper in MM’18)
  10. Balanced Distribution Adaptation for Transfer Learning
    Jindong Wang, Yiqiang Chen, Shuji Hao, Wenjie Feng, and Zhiqi Shen
    IEEE International Conference on Data Mining (ICDM) 2017 | [ HTML PDF Code ]
    (500+ citations; most cited paper in ICDM’17)