Yao Tong

(she/her)

Professional Summary

I am Yao Tong (童遥), a Ph.D. student in Computer Science at the National University of Singapore, supervised by Prof. Reza Shokri. I received my B.E. in Computer Science and Engineering from The Chinese University of Hong Kong, Shenzhen, where I was advised by Prof. Baoyuan Wu. I also worked with Prof. Mathias Lécuyer as a summer scholar at the University of British Columbia.

My research focuses on understanding the capabilities and behaviors of large language models (LLMs) and on mitigating catastrophic risks in AI. Recently, my work has centered on:

  • Evaluating and understanding LLM behaviors: hallucination, memorization, and extrapolative generalization
  • Copyright protection: developing verification methods for private data, model-generated works, and model architectures

Recent Papers
(2025). Decomposing Extrapolative Problem Solving: Spatial Transfer and Length Scaling with Map Worlds.
(2025). SeedPrints: Fingerprints Can Even Tell Which Seed Your Large Language Model Was Trained From. In Lock-LLM NeurIPS Workshop 2025.
(2025). When Transformers Can (or Can’t) Generalize Compositionally? A Data-Distribution Perspective. In NeurIPS WCTD Workshop 2025.
(2025). Cut the Deadwood Out: Training-Free Backdoor Purification via Guided Module Substitution. In Findings of the Association for Computational Linguistics: EMNLP 2025.
(2024). How Much of My Dataset Did You Use? Quantitative Data Usage Inference in Machine Learning. In ICLR 2025 [Oral Presentation (Top ∼1.5% of submissions)].