Product Description
Artificial Intelligence (AI) is widely used in society today. The (mis)use of biased data sets in machine learning applications is well known, resulting in discrimination against and exclusion of citizens. Another example is the use of non-transparent algorithms that cannot explain themselves to users, with the result that the AI is not trusted and therefore goes unused even when it might be beneficial.
Responsible Use of AI in Military Systems lays out what is required to develop and use AI in military systems in a responsible manner. Current developments in the emerging field of Responsible AI as applied to military systems in general (not merely weapons systems) are discussed. The book takes a broad and transdisciplinary scope by including contributions from the fields of philosophy, law, human factors, AI, systems engineering, and policy development.
The book is divided into five sections. Section I covers various practical models and approaches to implementing military AI responsibly; Section II focuses on the liability and accountability of individuals and states; Section III deals with human control in human-AI military teams; Section IV addresses policy aspects such as multilateral security negotiations; and Section V focuses on 'autonomy' and 'meaningful human control' in weapons systems.
Key Features:
- Takes a broad transdisciplinary approach to responsible AI
- Examines military systems in the broad sense of the term, not merely weapons systems
- Focuses on the practical development and use of responsible AI
- Presents a coherent set of chapters, as all authors spent two days discussing each other's work
This book provides the reader with a broad overview of all relevant aspects involved in the responsible development, deployment and use of AI in military systems. It stresses both the advantages of AI and the potential downsides of including AI in military systems.
About the Author
Jan Maarten Schraagen is Principal Scientist at TNO, The Netherlands. His research interests include human-autonomy teaming and responsible AI. He is the main editor of Cognitive Task Analysis (2000) and Naturalistic Decision Making and Macrocognition (2008), and co-editor of the Oxford Handbook of Expertise (2020). He is editor-in-chief of the Journal of Cognitive Engineering and Decision Making. Dr. Schraagen holds a PhD in Cognitive Psychology from the University of Amsterdam, The Netherlands.