Hi, I'm Haokun Lin (林浩坤) 💻
I'm a Ph.D. candidate at the New Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), under the supervision of Prof. Zhenan Sun. I'm also a joint Ph.D. candidate in the Department of Computer Science at City University of Hong Kong (CityU), working with Prof. Ying Wei. Before joining CASIA, I received my B.Eng. in Software Engineering from Huazhong University of Science and Technology (HUST) in 2021.
My research interests include Multi-modal Learning, Large Language/Vision Models, and Efficient Deep Learning.
If you're interested in my work, please feel free to reach out for discussions or collaborations!
Contact me via:
📧 Mail: haokun.lin[AT]cripac.ia.ac.cn or haokunlin2-c[AT]my.cityu.edu.hk
🆕 What's new:
- [02/2025] 🎉 TMM: "Scale Up Composed Image Retrieval Learning via Modification Text Generation."
- [01/2025] 🎉 ICLR'25: "Image-level Memorization Detection via Inversion-based Inference Perturbation." [PDF]
- [11/2024] 🏆 Award: Delighted to have received the First Prize in the 2024 Graduate Academic Forum at UCAS!
- [11/2024] 🎉 Preprint: "DOGE: Towards Versatile Visual Document Grounding and Referring." [PDF]
- [11/2024] 🏆 Award: Honored to be selected as a Top Reviewer at NeurIPS 2024!
- [09/2024] 🎉 NeurIPS'24 Oral: "DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs." Big congrats! 🔥🔥🔥 [Code/PDF]
- [07/2024] 🎉 ECCV'24: "MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?" [Code/PDF]
- [05/2024] 🎉 ACL'24 Findings: "IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact." [Code/PDF]
- [02/2024] 🎉 CVPR'24: "MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric." [PDF]
- [01/2024] 🎉 ICLR'24: "Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models." [Code/PDF]
- [03/2022] 🎓 Starting Joint Ph.D.@CityU: I will join Prof. Ying Wei's group at CityU in Fall 2022!
- [09/2021] 🎓 Starting Ph.D.@CASIA: I will join Prof. Zhenan Sun's group at NLPR, CASIA in Fall 2021!
- [06/2021] 🎓 Graduation@HUST: Received my Bachelor's degree from Huazhong University of Science and Technology with an Honorary Degree.
📝 Selected Publications (Google Scholar)
(*: co-first author; ^: corresponding author)
DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. Haokun Lin*, Haobo Xu*, Yichen Wu*, Jingzhi Cui, Yingtao Zhang, Linzhan Mou, Linqi Song, Zhenan Sun^, Ying Wei^, in the 38th Conference on Neural Information Processing Systems (NeurIPS 2024 Oral). [PDF] [arXiv] [Project] [Github] [QbitAI/量子位] [bibtex]
MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric. Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei^, Zhenan Sun^, in the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024). [PDF] [arXiv] [bibtex]
Image-level Memorization Detection via Inversion-based Inference Perturbation. Yue Jiang*, Haokun Lin*, Yang Bai, Bo Peng, Zhili Liu, Yueming Lyu, Yong Yang, Xing Zheng, Jing Dong, in the 13th International Conference on Learning Representations (ICLR 2025). [PDF]
MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? Renrui Zhang*, Dongzhi Jiang*, Yichi Zhang*, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, Hongsheng Li, in the 18th European Conference on Computer Vision (ECCV 2024). [PDF] [arXiv] [Project] [Github] [Dataset] [Synced/机器之心] [bibtex]
Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models. Yingtao Zhang, Haoli Bai, Haokun Lin, Jialin Zhao, Lu Hou, Carlo Vittorio Cannistraci, in the 12th International Conference on Learning Representations (ICLR 2024). [PDF] [OpenReview] [Github] [bibtex]
IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact. Ruikang Liu, Haoli Bai, Haokun Lin, Yuening Li, Han Gao, Zhengzhuo Xu, Lu Hou, Jun Yao, Chun Yuan, in Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024 Findings). [PDF] [arXiv] [Github] [bibtex]
🎖 Honors and Awards
- 2024.12 First Prize, 2024 Graduate Academic Forum, University of Chinese Academy of Sciences.
- 2024.11 NeurIPS 2024 Top Reviewer Award.
- 2021.06 Honorary Degree of HUST (Top 2%), the highest honor for undergraduates.
- 2020.10 National Scholarship for Undergraduate Students, P.R. China, HUST.
- 2018-2020 First Prize, HUST Excellent Undergraduate Scholarship.
📚 Services
- Invited Reviewer:
- EMNLP'2023, NeurIPS'2024, ICLR'2025, CVPR'2025, AISTATS'2025, ICDE'2025, ICML'2025.
- IEEE Transactions on Neural Networks and Learning Systems (TNNLS).
- Transactions on Machine Learning Research (TMLR).
- Teaching Assistant:
- CityU, CS5481 Data Engineering, 2025 Fall.
- CityU, CS1315 Computer Programming, 2024 Fall.
- CityU, CS1302 Introduction to Computer Programming, 2024 Spring.
💬 Talks
- Invited talk at QingKe AI about DuQuant.
- Invited talk at AI Time.