Assistant Professor (Incoming), William & Mary
Senior Researcher, Microsoft Research Asia
Building 2, No. 5 Danling Street, Haidian District, Beijing, China
jindongwang [at] outlook.com, jindong.wang [at] microsoft.com | Google scholar | DBLP | Github || Twitter/X | Zhihu | Wechat | Bilibili || CV | CV (Chinese)
Dr. Jindong Wang currently works at Microsoft Research Asia as a Senior Researcher and will join William & Mary as a tenure-track assistant professor in 2025. He obtained his Ph.D. from the University of Chinese Academy of Sciences in 2019, receiving the excellent Ph.D. thesis award, and a bachelor's degree from North China University of Technology in 2014. His research interests include machine learning, large language models, and AI for social science. He has published over 50 papers with 12,000+ citations at leading conferences and journals such as ICML, ICLR, NeurIPS, TPAMI, and IJCV. His research has been covered by Forbes and other international media. In 2023 and 2024, he was selected by Stanford University as one of the World's Top 2% Scientists and by AMiner as one of the AI Most Influential Scholars. He has several Google Scholar highly cited papers, Hugging Face featured papers, and Paper Digest most influential papers. He received best paper awards at ICCSE'18 and an IJCAI'19 workshop. He serves as an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), an area chair for NeurIPS, ICLR, KDD, ACMMM, and ACML, and a senior program committee member for IJCAI and AAAI. He leads several impactful open-source projects, including transferlearning, PromptBench, torchSSL, USB, personalizedFL, and robustlearn, which have received over 16K stars on GitHub. He published the book Introduction to Transfer Learning and gave tutorials at IJCAI'22, WSDM'23, KDD'23, and AAAI'24.
Research interests (see this page for more details):
Machine learning: I'm broadly interested in designing algorithms and applications that make machine learning systems more robust, trustworthy, and responsible. Related topics include robust machine learning, OOD / domain generalization, transfer learning, semi-supervised learning, federated learning, and related applications.
Large language models: We mainly focus on understanding the potential and limitations of large foundation models. Related topics: LLM evaluation and enhancement.
AI for social sciences: How to measure the impact of generative AI on different domains? How to assist interdisciplinary domains using powerful AI models? How to use existing social science knowledge to help us better understand AI behaviors?
Interested in an internship or collaboration? Contact me. I'm experimenting with a new form of research collaboration; click here if you are interested!
News
Sep 28, 2024
We have one collaborative paper accepted by the NeurIPS 2024 Datasets and Benchmarks track as a spotlight! [paper]
Sep 26, 2024
We have 5 papers accepted by NeurIPS 2024, including a spotlight! Congrats to the students!
Sep 20, 2024
Our new work, AgentReview: Exploring Peer Review Dynamics with LLM Agents, is accepted by the EMNLP main track! [paper]
Aug 30, 2024
Our vision paper "On Catastrophic Inheritance of Large Foundation Models" is accepted by DMLR! [arxiv]
Aug 20, 2024
Invited to be an Area Chair for ICLR 2025 and a senior program committee (SPC) member of AAAI 2025.
Jul 2, 2024
Our paper "SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization" is accepted by ECCV 2024! [paper]
Highlights
Six of my papers are highly cited, ranking in the top 20 globally over the past five years in Google Scholar metrics. See here. I also have six papers featured by Hugging Face.