How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites
Author affiliations: State Key Laboratory for Novel Software Technology, Nanjing University; Shanghai AI Laboratory; School of Computer Science, Fudan University; SenseTime Research; Department of Information Engineering, The Chinese University of Hong Kong; Department of Electronic Engineering, Tsinghua University
Publication: Science China (Information Sciences)
Year/Volume/Issue: 2024, Vol. 67, No. 12
Pages: 5-22
Subject classification: 12 [Management]; 1201 [Management Science and Engineering]; 081104 [Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Software Engineering]; 0811 [Control Science and Engineering]; 0812 [Computer Science and Technology]
Funding: supported by the National Key R&D Program of China (Grant Nos. 2022ZD0160102, 2022ZD0161300); the National Natural Science Foundation of China (Grant Nos. 62372223, U24A20330, 62376134); the China Mobile Zijin Innovation Institute (Grant No. NR2310J7M); and the Youth Ph.D. Student Research Project under the National Natural Science Foundation (Grant No. 623B2050)
Keywords: multimodal model; open-source; vision encoder; dynamic resolution; bilingual dataset
Abstract: In this paper, we introduce InternVL 1.5, an open-source multimodal large language model (MLLM), to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding. We introduce three simple improvements. (1) Strong vision encoder: we explored a continuous learning strategy for the large-scale vision foundation model InternViT-6B, boosting its visual understanding capabilities and allowing it to be transferred and reused across different LLMs. (2) Dynamic high-resolution: we divide images into 1 to 40 tiles of 448×448 pixels according to the aspect ratio and resolution of the input images, supporting inputs up to 4K resolution. (3) High-quality bilingual dataset: we carefully collected a high-quality bilingual dataset covering common scenes and document images, annotated with English and Chinese question-answer pairs, significantly enhancing performance in optical character recognition (OCR) and Chinese-related tasks. We evaluate InternVL 1.5 through a series of benchmarks and comparative studies. Compared to both open-source and proprietary commercial models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 multimodal benchmarks. Code and models are available at https://***/OpenGVLab/InternVL.
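As a rough illustration of the dynamic high-resolution scheme in point (2), the Python sketch below shows one way to pick a tile grid that matches the input image's aspect ratio under the 40-tile budget. The function name `choose_tile_grid` and the tie-breaking rule are illustrative assumptions, not the authors' released implementation.

```python
from math import inf

def choose_tile_grid(width: int, height: int, tile: int = 448, max_tiles: int = 40):
    """Pick a (cols, rows) grid with cols * rows <= max_tiles whose aspect
    ratio best matches the input image. The image would then be resized to
    (cols * tile) x (rows * tile) and cut into tile x tile patches.

    Sketch only: the actual InternVL 1.5 preprocessing may differ in details.
    """
    image_ratio = width / height
    best, best_diff = (1, 1), inf
    for cols in range(1, max_tiles + 1):
        for rows in range(1, max_tiles // cols + 1):
            diff = abs(cols / rows - image_ratio)
            # Closest aspect ratio wins; on an exact tie, prefer the finer
            # grid (more tiles), since it preserves more visual detail.
            if diff < best_diff or (diff == best_diff and cols * rows > best[0] * best[1]):
                best, best_diff = (cols, rows), diff
    return best

# Example: a 4K 3840x2160 frame (16:9) maps to a wide grid of 448x448 tiles.
print(choose_tile_grid(3840, 2160))
```

Enumerating all grids up to the tile budget keeps the search trivially cheap (at most a few hundred candidates) while letting very wide or tall images receive grids that track their shape instead of being squashed to a fixed square resolution.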