Unlearning Descartes: Sentient AI is a Political Problem
Author affiliation: School of Data Science, UNC Charlotte, Charlotte, NC 28223, USA
Publication: Journal of Social Computing
Year/Volume/Issue: 2023, Vol. 4, No. 3
Pages: 193-204
Subject classification: 12 [Management] 1201 [Management Science and Engineering (Management or Engineering degree)] 081104 [Engineering - Pattern Recognition and Intelligent Systems] 08 [Engineering] 0835 [Engineering - Software Engineering] 0811 [Engineering - Control Science and Engineering] 0812 [Engineering - Computer Science and Technology (Engineering or Science degree)]
Keywords: artificial intelligence; Large Language Model; consciousness; sentience; personhood; Descartes; Hobbes
Abstract: The emergence of Large Language Models (LLMs) has renewed debate about whether Artificial Intelligence (AI) can be conscious or sentient. This paper identifies two approaches to the topic and argues: (1) A "Cartesian" approach treats consciousness, sentience, and personhood as very similar terms, and treats language use as evidence that an entity is sentient. This approach, which has been dominant in AI research, is primarily interested in what consciousness is, and whether an entity possesses it. (2) An alternative "Hobbesian" approach treats consciousness as a sociopolitical issue and is concerned with the implications of labeling something sentient or conscious. This both enables a political disambiguation of language, consciousness, and personhood, and allows regulation to proceed in the face of intractable problems in deciding if something "really is" sentient. (3) AI systems should not be treated as conscious, for at least two reasons: (a) treating the system as an origin point tends to mask competing interests in creating it, at the expense of the most vulnerable people involved; and (b) it will tend to hinder efforts at holding someone accountable for the behavior of the systems. A major objective of this paper is accordingly to encourage a shift in focus. In place of the Cartesian question (is AI sentient?), I propose that we confront the more Hobbesian one: Does it make sense to regulate developments in which AI systems behave as if they were sentient?