Hi, I'm Haokun Lin (ζž—ζ΅©ε€) 🍻

I’m a Ph.D. candidate at the New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), under the supervision of Prof. Zhenan Sun. I’m also a joint Ph.D. candidate at the Department of Computer Science, City University of Hong Kong, working with Prof. Ying Wei. Before joining CASIA, I received my B.Eng. in Software Engineering from Huazhong University of Science and Technology in 2021.

My research interests include Multi-modal Learning, Large Language/Vision Models, and Efficient Deep Learning.

πŸ‘‹πŸ‘‹πŸ‘‹ If you’re interested in my work, please feel free to reach out for discussions or collaborations!

Contact me via:
πŸ“§ Mail: haokun.lin[AT]cripac.ia.ac.cn or haokunlin2-c[AT]my.cityu.edu.hk

🌈 What's new:

  • [02/2025] πŸŽ‰ TMM: "Scale Up Composed Image Retrieval Learning via Modification Text Generatio."
  • [01/2025] πŸŽ‰ ICLR'25: "Image-level Memorization Detection via Inversion-based Inference Perturbation." [PDF]
  • [11/2024] πŸš€ Award: Delighted to have received the First Prize in the 2024 Graduate Academic Forum at UCAS!
  • [11/2024] πŸ“œ Preprint: "DOGE: Towards Versatile Visual Document Grounding and Referring." [PDF]
  • [11/2024] πŸš€ Award: Honored to be selected as a Top Reviewer at NeurIPS 2024!
  • [09/2024] πŸŽ‰ NeurIPS'24 Oral: "DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs." Big Congs! πŸ”₯πŸ”₯πŸ”₯ [Code/PDF]
  • [07/2024] πŸŽ‰ ECCV'24: "MATHVERSE: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?" [Code/PDF]
  • [05/2024] πŸŽ‰ ACL'24 Findings: "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact." [Code/PDF]
  • [02/2024] πŸŽ‰ CVPR'24: "MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric." [PDF]
  • [01/2024] πŸŽ‰ ICLR'24: "Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models." [Code/PDF]
  • [03/2022] πŸŽ“ Starting Joint Ph.D.@CityU: I will join Prof. Ying Wei's group at CityU in 2022 Fall!
  • [09/2021] πŸŽ“ Starting Ph.D.@CASIA: I will join Prof. Zhenan Sun's group at NLPR, CASIA in 2021 Fall!
  • [06/2021] πŸŽ“ Graduation@HUST: Recieved my Bachelor's Degree from Huazhong University of Science and Technology with Honorary degree.

πŸŽ“ Selected Publications (Google Scholar)

(*: co-first author; ^: corresponding author)

DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs.
Haokun Lin*, Haobo Xu*, Yichen Wu*, Jingzhi Cui, Yingtao Zhang, Linzhan Mou, Linqi Song, Zhenan Sun^, Ying Wei^,
in the 38th Conference on Neural Information Processing Systems (NeurIPS 2024, Oral).
[PDF] [arXiv] [Project] [Github] [QbitAI/量子位] [bibtex]
MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric.
Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei^, Zhenan Sun^,
in the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024).
[PDF] [arXiv] [bibtex]
Image-level Memorization Detection via Inversion-based Inference Perturbation.
Yue Jiang*, Haokun Lin*, Yang Bai, Bo Peng, Zhili Liu, Yueming Lyu, Yong Yang, Xing Zheng, Jing Dong,
in the 13th International Conference on Learning Representations (ICLR 2025).
[PDF]
MATHVERSE: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
Renrui Zhang*, Dongzhi Jiang*, Yichi Zhang*, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, Hongsheng Li,
in the 18th European Conference on Computer Vision (ECCV 2024).
[PDF] [arXiv] [Project] [Github] [Dataset] [Synced/ζœΊε™¨δΉ‹εΏƒ] [bibtex]
Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models.
Yingtao Zhang, Haoli Bai, Haokun Lin, Jialin Zhao, Lu Hou, Carlo Vittorio Cannistraci,
in the 12th International Conference on Learning Representations (ICLR 2024).
[PDF] [OpenReview] [Github] [bibtex]
IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact.
Ruikang Liu, Haoli Bai, Haokun Lin, Yuening Li, Han Gao, Zhengzhuo Xu, Lu Hou, Jun Yao, Chun Yuan,
in Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024 Findings).
[PDF] [arXiv] [Github] [bibtex]

πŸ† Honors and Awards

πŸŽ– Services

πŸ’¬ Talks

