Gradient-based algorithms for multi-objective bi-level optimization
Author affiliations: National Center for Applied Mathematics in Chongqing, Chongqing 401331, China; School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China; Department of Mathematics, Southern University of Science and Technology, Shenzhen 518055, China; National Center for Applied Mathematics Shenzhen, Shenzhen 518000, China; Department of Mathematics and Statistics, University of Victoria, Victoria, BC V8W 2Y2, Canada
Publication: Science China Mathematics
Year/Volume/Issue: 2024, Vol. 67, No. 6
Pages: 1419-1438
Subject classification: 12 [Management]; 1201 [Management: Management Science and Engineering (degrees awardable in management or engineering)]; 07 [Science]; 081104 [Engineering: Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 070105 [Science: Operations Research and Cybernetics]; 0835 [Engineering: Software Engineering]; 0701 [Science: Mathematics]; 0811 [Engineering: Control Science and Engineering]; 0812 [Engineering: Computer Science and Technology (degrees awardable in engineering or science)]
Funding: supported by the Major Program of the National Natural Science Foundation of China (Grant Nos. 11991020 and 11991024), the National Natural Science Foundation of China (Grant Nos. 12371305 and 12222106), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2022B1515020082), and the Shenzhen Science and Technology Program (Grant No. RCYX20200714114700072)
Keywords: multi-objective bi-level optimization; convergence analysis; Pareto stationary; learning to optimize
Abstract: Multi-objective bi-level optimization (MOBLO) addresses nested multi-objective optimization problems common in a range of applications. However, its multi-objective and hierarchical bi-level nature makes it notably challenging. Gradient-based MOBLO algorithms have recently grown in popularity, as they effectively solve crucial machine learning problems like meta-learning, neural architecture search, and reinforcement learning. However, these algorithms depend on solving a sequence of approximation subproblems with high accuracy, resulting in adverse time and memory complexity that lowers their numerical efficiency. To address this issue, we propose a gradient-based algorithm for MOBLO, called gMOBA, which has fewer hyperparameters to tune, making it both simple and efficient. Moreover, we demonstrate its theoretical validity by establishing the desirable Pareto stationarity. Numerical experiments confirm the practical efficiency of the proposed method and verify the theoretical results. To accelerate the convergence of gMOBA, we introduce a beneficial L2O (learning to optimize) neural network (called L2O-gMOBA) implemented as the initialization phase of the gMOBA algorithm. The results of numerical experiments are presented to illustrate the performance of L2O-gMOBA.
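For readers unfamiliar with the problem class, a generic MOBLO formulation can be sketched as follows (the notation here is chosen only for illustration and need not match the paper's own):

\[
\min_{x \in X} \; F\bigl(x, y^{*}(x)\bigr) = \bigl(F_{1}(x, y^{*}(x)), \ldots, F_{m}(x, y^{*}(x))\bigr)
\qquad \text{s.t.} \quad y^{*}(x) \in \operatorname*{arg\,min}_{y \in Y} \; f(x, y),
\]

where the upper level minimizes the m objectives F_1, ..., F_m jointly in x, while the lower level is a parametric optimization problem in y whose solution y*(x) is fed back into the upper level. In this setting, a point is called Pareto stationary when no direction decreases all upper-level objectives simultaneously to first order, which is the notion of stationarity the abstract refers to.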