I am a Ph.D. student at the Hong Kong University of Science and Technology (HKUST), supervised by Prof. Jun Zhang. I received my B.Eng. degree from Beihang University. My current research interests include multimodal reinforcement learning (e.g., for VLMs and diffusion models) and efficient training and inference for vision and language generative models.
News
- 2026.01: 🎉🎉 Our QVGen is accepted to ICLR.
- 2025.11: 🎉🎉 Our SlimInfer and LLMC+ are accepted to AAAI.
- 2025.06: 🎉🎉 Our Temporal Feature Matters is accepted to TPAMI.
- 2025.05: 🎉🎉 Our HarmoniCa is accepted to ICML.
- 2024.10: 🎉🎉 Our LLMC is accepted to the EMNLP Industry Track.
- 2024.07: 🎉🎉 Our PTSBench is accepted to ACM MM.
- 2024.02: 🎉🎉 Our TFMQ-DM is accepted to CVPR as a Highlight Poster (Top $2.8\%$).
Selected Papers
Includes preprints; * indicates equal contribution, ✉ indicates corresponding author
Projects
- LightCompress (toolkit): an off-the-shelf compression suite for AIGC models (LLMs, VLMs, diffusion models, etc.) that packages state-of-the-art quantization, sparsification, and deployment best practices to shrink models while preserving accuracy. 600+ GitHub stars.
Services
- Conference Reviews: NeurIPS, ICLR, ICML, COLM, AAAI, CVPR, ECCV.
Education
- 2025.02 - Now, Ph.D. in Electronic and Computer Engineering, The Hong Kong University of Science and Technology.
- 2020.09 - 2024.06, B.Eng. in Computer Science and Engineering, Shenyuan Honors College, Beihang University.
Internships
- 2025.09 - 2025.11, ByteDance Seed.
- 2024.12 - 2025.02, Microsoft Research Asia.
- 2023.05 - Now, SenseTime Research.