On the principles of Parsimony and Self-consistency for the emergence of intelligence
Author affiliations: Electrical Engineering and Computer Science Department, University of California, Berkeley, CA 94720, USA; Department of Molecular & Cell Biology and Howard Hughes Medical Institute, University of California, Berkeley, CA 94720, USA; International Digital Economy Academy, Shenzhen 518045, China
Publication: Frontiers of Information Technology & Electronic Engineering
Year/Volume/Issue: 2022, Vol. 23, No. 9
Pages: 1298-1323
Subject classifications: 0710 [Science - Biology]; 12 [Management]; 1201 [Management - Management Science and Engineering]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology]
Funding/acknowledgments: Rob Fergus of New York University; Microsoft Research, MSR (2022)
Keywords: Intelligence; Parsimony; Self-consistency; Rate reduction; Deep networks; Closed-loop transcription
Abstract: Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that sheds light on understanding deep networks within a bigger picture of intelligence in nature. We introduce two fundamental principles, Parsimony and Self-consistency, which address two fundamental questions regarding intelligence: what to learn and how to learn, respectively. We believe the two principles serve as the cornerstone for the emergence of intelligence, artificial or natural. While they have rich classical roots, we argue that they can be stated anew in entirely measurable and computable ways. More specifically, the two principles lead to an effective and efficient computational framework, compressive closed-loop transcription, which unifies and explains the evolution of modern deep networks and most practices of artificial intelligence. While we use mainly visual data modeling as an example, we believe the two principles will unify understanding of broad families of autonomous intelligent systems and provide a framework for understanding the brain.
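The keyword "rate reduction" points to the measurable notion of parsimony behind this framework: a set of features Z ∈ R^{d×n} is assigned a coding rate R(Z, ε) = ½ logdet(I + d/(nε²) ZZᵀ), and learning seeks features that maximize the difference between the rate of the whole set and the sum of per-class rates. The following is a minimal NumPy sketch of that quantity, not the paper's implementation; the function names, the default ε, and the two-class test data are illustrative assumptions.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding-rate estimate R(Z, eps) for features Z of shape (d, n)."""
    d, n = Z.shape
    # logdet of I + d/(n eps^2) Z Z^T, computed stably via slogdet.
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) R(Z_j): rate of the whole feature
    set minus the rate-weighted sum over class subsets Z_j."""
    d, n = Z.shape
    rate_per_class = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]          # features belonging to class c
        nc = Zc.shape[1]
        _, logdet = np.linalg.slogdet(
            np.eye(d) + (d / (nc * eps**2)) * Zc @ Zc.T)
        rate_per_class += 0.5 * (nc / n) * logdet
    return coding_rate(Z, eps) - rate_per_class
```

Maximizing this gap expands the volume spanned by all features while compressing each class, which is one concrete reading of "parsimony" as a computable objective.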