We study the security, privacy, and reliability of AI, spanning large language models, autonomous agents, federated learning, and modern data systems. Our goal is to build rigorous foundations, practical algorithms, and deployable systems for trustworthy AI in real-world applications.
We tackle trustworthy AI at three levels: foundational theory, algorithm design, and real-world system deployment.
Selected recent publications; see the Publications page or Google Scholar for the full list.
An international research team advancing trustworthy AI through rigorous research, competitive funding, and collaborations across academia and industry.