We study the security, privacy, and reliability of AI, spanning large language models, autonomous agents, federated learning, and data markets. Our goal is to develop rigorous foundations, practical methods, and deployable systems for trustworthy AI.
We tackle trustworthy AI at three levels: foundational theory, algorithm design, and real-world system deployment.
Selected recent publications; see the Publications page or Google Scholar for the full list.
An international research team advancing trustworthy AI through rigorous research, supported by competitive funding and collaborations across academia and industry.