Publications in security of AI systems

  1. RSafe: Incentivizing proactive reasoning to build robust and adaptive LLM safeguards.

    Jingnan Zheng, Xiangtian Ji, Yijun Lu, Chenhang Cui, Weixiang Zhao, Gelei Deng, Zhenkai Liang, An Zhang, and Tat-Seng Chua.

    In the 39th Annual Conference on Neural Information Processing Systems (NeurIPS), 2025.

  2. Improving LLM-based Log Parsing by Learning from Errors in Reasoning Traces.

    Jialai Wang, Juncheng Lu, Jie Yang, Junjie Wang, Zeyu Gao, Chao Zhang, Zhenkai Liang, and Ee-Chien Chang.

    In the 40th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2025.

  3. Your Scale Factors are My Weapon: Targeted Bit-Flip Attacks on Vision Transformers via Scale Factor Manipulation.

    Jialai Wang, Yuxiao Wu, Weiye Xu, Yating Huang, Chao Zhang, Zongpeng Li, Mingwei Xu, and Zhenkai Liang.

    In the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025.

  4. Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment.

    Ziqi Yang, Jiyi Zhang, Ee-Chien Chang, and Zhenkai Liang.

    In the ACM SIGSAC Conference on Computer and Communications Security (CCS), 2019.