Publications

See full list at Google Scholar. (*: co-first author; ^: corresponding author)

DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs.
Haokun Lin*, Haobo Xu*, Yichen Wu*, Jingzhi Cui, Yingtao Zhang, Linzhan Mou, Linqi Song, Zhenan Sun^, Ying Wei^,
in the 38th Conference on Neural Information Processing Systems (NeurIPS 2024 Oral).
[PDF] [arXiv] [Project] [GitHub] [QbitAI/量子位] [bibtex]

MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric.
Haokun Lin, Haoli Bai, Zhili Liu, Lu Hou, Muyi Sun, Linqi Song, Ying Wei^, Zhenan Sun^,
in the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024).
[PDF] [arXiv] [bibtex]

Image-level Memorization Detection via Inversion-based Inference Perturbation.
Yue Jiang*, Haokun Lin*, Yang Bai, Bo Peng, Zhili Liu, Yueming Lyu, Yong Yang, Xing Zheng, Jing Dong,
in the 13th International Conference on Learning Representations (ICLR 2025).
[PDF]

MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
Renrui Zhang*, Dongzhi Jiang*, Yichi Zhang*, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, Hongsheng Li,
in the 18th European Conference on Computer Vision (ECCV 2024).
[PDF] [arXiv] [Project] [GitHub] [Dataset] [Synced/机器之心] [bibtex]

Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models.
Yingtao Zhang, Haoli Bai, Haokun Lin, Jialin Zhao, Lu Hou, Carlo Vittorio Cannistraci,
in the 12th International Conference on Learning Representations (ICLR 2024).
[PDF] [OpenReview] [GitHub] [bibtex]

IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact.
Ruikang Liu, Haoli Bai, Haokun Lin, Yuening Li, Han Gao, Zhengzhuo Xu, Lu Hou, Jun Yao, Chun Yuan,
in Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024 Findings).
[PDF] [arXiv] [GitHub] [bibtex]

DOGE: Towards Versatile Visual Document Grounding and Referring.
Yinan Zhou*, Yuxin Chen*, Haokun Lin, Shuyu Yang, Li Zhu, Zhongang Qi, Chen Ma, Ying Shan.
Preprint.
[PDF] [arXiv] [bibtex]