Abstract: In recent years, the development of Multimodal Large Language Models (MLLMs) has driven significant progress in Artificial Intelligence. However, adapting static, pre-trained MLLMs to dynamic data distributions and diverse tasks in an accurate and efficient manner remains a major challenge. When a pre-trained MLLM is fine-tuned for a specific task, its performance on previously acquired knowledge often degrades noticeably, a phenomenon known as "catastrophic forgetting." While this issue has been studied extensively within the Continual Learning (CL) community, it poses new challenges in the context of MLLMs. As the first review of continual learning for multimodal large language models, this paper provides a comprehensive overview and detailed analysis of 440 research papers on MLLM continual learning. Beyond introducing the fundamental concepts, the review is organized into four main sections. First, it surveys the latest research on MLLMs, including model innovation strategies, benchmarks, and applications across diverse fields. Second, it presents a detailed categorization and overview of recent continual learning research, divided into three key areas: unimodal continual learning without large language models (Non-LLM Unimodal CL), multimodal continual learning without large language models (Non-LLM Multimodal CL), and continual learning in large language models (CL in LLM). The in-depth and extensive research in both the MLLM and CL domains has laid a solid foundation for studying MLLM continual learning. In the fourth section, we analyze the current state of research on MLLM continual learning, examining common benchmark evaluations and innovations in model architectures and methods, and systematically summarize and review existing theoretical and empirical studies. This review aims to connect the basic settings, theoretical foundations, methodological innovations, and practical applications of continual learning in multimodal large language models, shedding light on the research progress and challenges in the field. Finally, the paper offers a forward-looking discussion of open challenges and future development trends for continual learning in MLLMs, aiming to inspire researchers in the field and promote the advancement of related technologies.