Jindong Wang
Assistant Professor, William & Mary
jwang80 [at] wm.edu, jindongwang [at] outlook.com
Integrated Science Center 2273, Williamsburg, VA
Google Scholar | DBLP | GitHub | Twitter/X | LinkedIn | Zhihu | Bilibili | CV | CV (Chinese)
Dr. Jindong Wang has been a Tenure-Track Assistant Professor at William & Mary since 2025. Previously, he was a Senior Researcher at Microsoft Research Asia for 5.5 years. His research interests include machine learning, large language and foundation models, and AI for social science. Since 2022, he has been selected by Stanford University as one of the World's Top 2% Scientists and by AMiner as one of the Most Influential AI Scholars. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), an area chair for ICML, NeurIPS, ICLR, KDD, ACM MM, and ACML, and a senior program committee member (SPC) for IJCAI and AAAI. He has published over 60 papers with 15,000+ citations at leading conferences and journals such as ICML, ICLR, NeurIPS, TPAMI, and IJCV. His research has been covered by Forbes, MIT Technology Review, and other international media. He has received best and outstanding paper awards at several international conferences and workshops. He published the book Introduction to Transfer Learning and gave tutorials at IJCAI'22, WSDM'23, KDD'23, AAAI'24, and AAAI'25. He leads several impactful open-source projects, including transferlearning, PromptBench, TorchSSL, and USB, which together have received over 16K stars on GitHub. He obtained his Ph.D. from the University of Chinese Academy of Sciences in 2019 with the excellent Ph.D. thesis award, and a bachelor's degree from North China University of Technology in 2014.
PhD application: Fall 2025 positions have been filled. You can apply for future PhD openings, or for an internship or collaboration. [PhD and interns] [Chinese version]
Research interest: (See this page for more details)
- Machine learning: To make ML systems more robust, trustworthy, and responsible. Related topics include: robust machine learning, OOD / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications. These days, I’m particularly interested in ML with foundation models.
- Philosophy of language models: We mainly focus on understanding the potential and limitations of large foundation models. Related topics: LLM evaluation and enhancement, and agent behavior.
- AI for social sciences: How to measure the impact of generative AI on different domains? How to assist interdisciplinary domains using powerful AI models? How to use existing social science knowledge to help us better understand AI behaviors?
News
Jan 10, 2025 | Invited to be an Area Chair for KDD 2025.
Dec 16, 2024 | Check out my first Microsoft Research Podcast, on our paper at NeurIPS'24: [Podcast]
Dec 15, 2024 | Our work ZooPFL received the Outstanding Paper Award at the NeurIPS 2024 Workshop on Federated Foundation Models!
Dec 07, 2024 | Invited to be an area chair (AC) for ICML 2025 and a senior program committee member (SPC) for IJCAI 2025.
Sep 28, 2024 | We have one collaborative paper accepted to the NeurIPS 2024 Datasets and Benchmarks Track as a spotlight! [paper]