Jindong Wang

Assistant Professor, William & Mary
jwang80 [at] wm.edu, jindongwang [at] outlook.com
Integrated Science Center 2273, Williamsburg, VA
Google Scholar | DBLP | GitHub | Twitter/X | LinkedIn | Zhihu | Bilibili | CV | CV (Chinese)
PhD and intern application: The Fall 2025 PhD position has been filled. You can apply for a future PhD position, or apply for an internship or collaboration. [PhD and interns] [Chinese version]
Research interests (see this page for more details):
- General ML and AI: To make AI systems more robust, trustworthy, and responsible. Key areas: robust machine learning, OOD / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications. These days, I’m particularly interested in ML with foundation models.
- Philosophy of language models: We mainly focus on understanding the potential and limitations of large foundation models. Related topics: LLM evaluation and enhancement, and agent behavior.
- AI for social sciences: How to measure the impact of generative AI on different domains? How to assist interdisciplinary domains using powerful AI models? How to use existing social science knowledge to help us better understand AI behaviors?
I am open to industry and university visits, consulting, and further collaborations!
Dr. Jindong Wang is a Tenure-Track Assistant Professor in Data Science at William & Mary. He is also a faculty member of the AI Safety Community at the Future of Life Institute. He was a Senior Researcher at Microsoft Research Asia from 2019 to 2024. His research interests include machine learning, large language and foundation models, and AI for social science. Since 2022, he has been selected by Stanford University as one of the World’s Top 2% Highly Cited Scientists and by AMiner as one of the Most Influential AI Scholars. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), an area chair for ICML, NeurIPS, ICLR, KDD, ACL, ACMMM, and ACML, and an SPC member of IJCAI and AAAI. He has published over 60 papers with 17,000+ citations (H-index 47) at leading conferences and journals such as ICML, ICLR, NeurIPS, TPAMI, and IJCV. His research has been covered by Forbes, MIT Technology Review, and other international media. He has received best paper awards at several international conferences and workshops. He published the book Introduction to Transfer Learning and gave tutorials at IJCAI’22, WSDM’23, KDD’23, AAAI’24, AAAI’25, and CVPR’25. He leads several impactful open-source projects, including transferlearning, PromptBench, torchSSL, and USB, which have received over 20K stars in total. He obtained his Ph.D. from the University of Chinese Academy of Sciences in 2019 with an excellent Ph.D. dissertation award, and a bachelor’s degree from North China University of Technology in 2014.
News
May 01, 2025 | Four papers are accepted by ICML 2025. Congrats!
Apr 20, 2025 | Invited to be a faculty member of the AI Safety Community at the Future of Life Institute!
Apr 18, 2025 | Our ICCV 2025 workshop proposal “Trustworthy Study Transfers from Classical to Foundation Models” is accepted! See you in Hawaii!
Mar 20, 2025 | We are organizing the International Workshop on Federated Learning with Generative AI, in conjunction with IJCAI 2025 (FedGenAI-IJCAI’25)!
Mar 18, 2025 | The journal version of noisy model learning was accepted by TPAMI! Congrats! [Paper]
Mar 17, 2025 | I was awarded the William & Mary Faculty Research Award!
Mar 04, 2025 | Our paper “Scito2M: A 2 Million, 30-Year Cross-disciplinary Dataset for Temporal Scientometric Analysis” received the Best Paper Award at the AAAI 2025 Good Data workshop! Congrats! [Paper]
Feb 27, 2025 | We have 1 tutorial and 1 paper accepted at CVPR 2025. Congrats!