Adversarial Machine Learning (Synthesis Lectures on Artificial Intelligence and Machine Learning)
Tentative Chinese title: 對抗性機器學習(人工智慧與機器學習綜合講座)
Yevgeniy Vorobeychik, Murat Kantarcioglu
- Publisher: Morgan & Claypool
- Publication date: 2018-08-08
- List price: $3,150
- VIP price: 5% off, $2,993
- Language: English
- Pages: 169
- Binding: Hardcover
- ISBN: 1681733978
- ISBN-13: 9781681733975
Related categories: Artificial Intelligence, Machine Learning
Related translation: 對抗機器學習:機器學習系統中的攻擊和防禦 (Simplified Chinese edition)
Description
The increasing abundance of large high-quality datasets, combined with significant technical advances over the last several decades, has made machine learning into a major tool employed across a broad array of tasks including vision, language, finance, and security. However, success has been accompanied by important new challenges: many applications of machine learning are adversarial in nature. Some are adversarial because they are safety-critical, such as autonomous driving. An adversary in these applications can be a malicious party aimed at causing congestion or accidents, or may even model unusual situations that expose vulnerabilities in the prediction engine. Other applications are adversarial because the task they perform and/or the data they use are. For example, an important class of problems in security involves detection, such as malware, spam, and intrusion detection. The use of machine learning for detecting malicious entities creates an incentive among adversaries to evade detection by changing their behavior or the content of the malicious objects they develop.
The field of adversarial machine learning has emerged to study vulnerabilities of machine learning approaches in adversarial settings and to develop techniques to make learning robust to adversarial manipulation. This book provides a technical overview of this field. After reviewing machine learning concepts and approaches, as well as common use cases of these in adversarial settings, we present a general categorization of attacks on machine learning. We then address two major categories of attacks and associated defenses: decision-time attacks, in which an adversary changes the nature of instances seen by a learned model at the time of prediction in order to cause errors, and poisoning or training-time attacks, in which the actual training dataset is maliciously modified. In our final chapter devoted to technical content, we discuss recent techniques for attacks on deep learning, as well as approaches for improving the robustness of deep neural networks. We conclude with a discussion of several important issues in the area of adversarial learning that in our view warrant further research.
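As a concrete flavor of the decision-time (evasion) attacks described above, the sketch below perturbs an input using the fast gradient sign method. It is a minimal illustrative example, not code from the book; the classifier `model`, the perturbation budget `epsilon`, and the helper name `fgsm_attack` are assumptions made for illustration.

```python
# Illustrative sketch (not from the book): a decision-time (evasion) attack
# on a differentiable classifier using the fast gradient sign method (FGSM).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x within an L-infinity ball of radius epsilon so the
    model is more likely to misclassify it as something other than label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss the attacker wants to increase
    loss.backward()
    # Step in the direction that maximally increases the loss, then clip to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```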
Given the increasing interest in the area of adversarial machine learning, we hope this book provides readers with the tools necessary to successfully engage in research and practice of machine learning in adversarial settings.