Trustworthy Data Science and AI Lab

Rigorous Science for
Trustworthy AI

We study the security, privacy, and reliability of AI, spanning large language models, autonomous agents, federated learning, and data markets. Our goal is to develop rigorous foundations, practical methods, and deployable systems for trustworthy AI.

LLM Security · Differential Privacy · Federated Learning · Trustworthy Agents · RAG Reliability · Data Markets

What We Work On

We tackle trustworthy AI at three levels — foundational theory, algorithm design, and real-world system deployment.

Privacy Tech Foundations
Building the theoretical and algorithmic foundations of privacy-enhancing technologies — differential privacy, MPC, TEE, federated learning, and machine unlearning — from rigorous guarantees to deployable systems.
AI Security & Safety
Studying how LLMs and autonomous agents can be attacked and defended, covering jailbreaks, prompt injection, backdoor attacks, and adversarial risks across the full AI stack.
Trustworthy Data Systems
Designing data pipelines and AI systems that are auditable and reliable — through RAG reliability, membership inference, transparency analysis, data markets, and multimodal safety evaluation.
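To give a flavor of the "rigorous guarantees" behind the privacy work above, here is a minimal sketch (illustrative only, not lab code) of the Laplace mechanism, the textbook building block of differential privacy: adding noise scaled to a query's sensitivity divided by the privacy budget epsilon.

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy via Laplace noise."""
    scale = sensitivity / epsilon
    # The difference of two independent Exponential(1/scale) draws
    # is distributed as Laplace(0, scale).
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# Example: privately release a count (sensitivity 1) with epsilon = 0.5.
noisy_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means larger noise and stronger privacy; the function name and parameters here are generic illustrations, not an API from any specific system.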

Recent Publications

Selected recent publications; see the Publications page or Google Scholar for the full list.

Disrupting Hierarchical Reasoning: Adversarial Protection for Geographic Privacy in Multimodal Reasoning Models
Jiaming Zhang, Che Wang, Yang Cao, Longtao Huang, Wei Yang Bryan Lim
ICLR 2026
Are Your LLM-based Text-to-SQL Models Secure? Exploring SQL Injection via Backdoor Attacks
Meiyu Lin, Haichuan Zhang, Jiale Lao, Renyuan Li, Yuanchun Zhou, Carl Yang, Yang Cao, Mingjie Tang
ACM SIGMOD 2026
Doppio: Communication-Efficient and Secure Multi-Party Shuffle Differential Privacy
Wentao Dong, Yang Cao, Cong Wang, Wei-Bin Lee
VLDB 2026
Privacy on the Fly: A Predictive Adversarial Transformation Network for Mobile Sensor Data
Tianle Song, Chenhao Lin, Yang Cao, Zhengyu Zhao, et al.
AAAI 2026 Oral
AegisGuard: RL-Guided Adapter Tuning for TEE-Based Efficient & Secure On-Device Inference
Che Wang, Ziqi Zhang, Yinggui Wang, Tiantong Wang, Yang Cao, et al.
NeurIPS 2025
Differentially Private Visual Learning with Public Subspace Augmented by Synthetic Data
Haichao Sha, Yuncheng Wu, Ruixuan Liu, Yang Cao, Hong Chen
ACM MM 2025 Outstanding Paper

View all publications →


Research, Collaboration, and Impact

An international research team advancing trustworthy AI through rigorous research, competitive funding, and collaborations across academia and industry.

100+
Peer-reviewed Publications
10+
Competitive Research Grants
5+
Countries Represented in the Lab
Global
Academic and Industry Partnerships
Top-tier Research Output
40+ papers in top-tier venues, including ICML, NeurIPS, ACM CCS, and VLDB.
Strong CS Research Standing in Japan
Highly visible on CSRankings, supported by consistent top-tier research output.
International & Collaborative
Active collaborations with researchers and institutions across Asia, Europe, and North America.
Meet the Team → Join Us →