Rethinking the image feature biases exhibited by deep convolutional neural network models in image recognition
Author affiliation: College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
Publication: CAAI Transactions on Intelligence Technology
Year/Volume/Issue: 2022, Vol. 7, No. 4
Pages: 721-731
Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awarded in Engineering or Science)]
Funding: National Natural Science Foundation of China, Grant/Award Number: 61936001; Natural Science Foundation of Chongqing, Grant/Award Number: cstc2019jcyj-msxmX0380; China Postdoctoral Science Foundation, Grant/Award Number: 2021M700562
Keywords: CNNs; features; understandable models
Abstract: In recent years, convolutional neural networks (CNNs) have been applied successfully in many fields. However, these deep neural models are still considered a "black box" for most users. One of the fundamental issues underlying this problem is understanding which features are most influential in image recognition tasks and how CNNs process these features. It is widely believed that CNN models combine low-level features to form complex shapes until the object can be readily classified; however, several recent studies have argued that texture features are more important than other features. In this paper, we assume that the importance of certain features varies depending on specific tasks, that is, specific tasks exhibit feature biases. We designed two classification tasks based on human intuition to train deep neural models to identify the anticipated features, and conducted experiments comprising many tasks to test these biases in the ResNet and DenseNet models. From the results, we conclude that (1) the combined effect of certain features is typically far more influential than any single feature; (2) in different tasks, neural models can exhibit different biases, that is, we can design a specific task to make a neural model biased towards a specific anticipated feature.
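The core idea of the abstract's second conclusion, that the design of the task itself can bias a model toward an anticipated feature, can be illustrated with a toy sketch that is not taken from the paper: each sample carries two hypothetical scalar features ("shape" and "texture"), the labels are constructed to depend only on texture, and a minimal logistic-regression classifier trained on this task ends up weighting the texture feature far more heavily than the shape feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical toy features: each sample has one "shape" score and one
# "texture" score, drawn independently.
shape = rng.normal(size=n)
texture = rng.normal(size=n)
X = np.column_stack([shape, texture])

# Task designed to be texture-biased: the label depends only on texture.
y = (texture > 0).astype(float)

# Minimal logistic regression trained by full-batch gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y)) / n           # gradient w.r.t. weights
    b -= lr * np.mean(p - y)                # gradient w.r.t. bias

# The learned weight on texture dominates the weight on shape.
print(abs(w[1]) > 5 * abs(w[0]))
```

The same construction mirrors the paper's setup in spirit only: replacing the scalar features with images and the linear model with ResNet or DenseNet is what the actual experiments do, but the mechanism, a task whose labels are informative about exactly one feature, is the same.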