A robust and engaging account of the single greatest threat faced by AI and ML systems
In Not With A Bug, But With A Sticker: Attacks on Machine Learning Systems and What To Do About Them, a team of distinguished adversarial machine learning researchers delivers a riveting account of the most significant risk to currently deployed artificial intelligence systems: cybersecurity threats. The authors take you on a sweeping tour – from inside secretive government organizations to academic workshops at ski chalets to Google's cafeteria – recounting how major AI systems remain vulnerable to the exploits of bad actors of all stripes.
Based on hundreds of interviews with academic researchers, policymakers, business leaders, and national security experts, the authors distill the complex science of attacking AI systems with color and flourish, offering a front-row seat to those who championed this change. Grounded in real-world examples of previous attacks, the book shows how adversaries can upend the reliability of otherwise robust AI systems with straightforward exploits.
The steeplechase to solve this problem has already begun: nations and organizations recognize that securing AI systems confers a formidable advantage. The prize is not only keeping their own AI systems safe but also the ability to disrupt a competitor's.
An essential and eye-opening resource for machine learning and software engineers, policymakers and business leaders involved with artificial intelligence, and academics studying topics including cybersecurity and computer science, Not With A Bug, But With A Sticker is a warning, albeit an entertaining and engaging one, that we should all heed.
How we secure our AI systems will define the next decade. The stakes have never been higher, and public attention and debate on the issue have never been scarcer.
The authors are donating the proceeds from this book to two charities: Black in AI and Bountiful Children's Foundation.
Author(s): Ram Shankar Siva Kumar; Hyrum Anderson
Publisher: Wiley
Year: 2023
Language: English
Pages: 224
Table of Contents
1 Cover
2 Title Page
3 Foreword
4 Introduction
5 Chapter 1: Do You Want to Be Part of the Future?
1 Business at the Speed of AI
2 Follow Me, Follow Me
3 In AI, We Overtrust
4 Area 52 Ramblings
5 I'll Do It
6 Adversarial Attacks Are Happening
7 ML Systems Don't Jiggle-Jiggle; They Fold
8 Never Tell Me the Odds
9 AI's Achilles' Heel
6 Chapter 2: Salt, Tape, and Split-Second Phantoms
1 Challenge Accepted
2 When Expectation Meets Reality
3 Color Me Blind
4 Translation Fails
5 Attacking AI Systems via Fails
6 Autonomous Trap 001
7 Common Corruption
7 Chapter 3: Subtle, Specific, and Ever-Present
1 Intriguing Properties of Neural Networks
2 They Are Everywhere
3 Research Disciplines Collide
4 Blame Canada
5 The Intelligent Wiggle-Jiggle
6 Bargain-Bin Models Will Do
7 For Whom the Adversarial Example Bell Tolls
8 Chapter 4: Here's Something I Found on the Web
1 Bad Data = Big Problem
2 Your AI Is Powered by Ghost Workers
3 Your AI Is Powered by Vampire Novels
4 Don't Believe Everything You Read on the Internet
5 Poisoning the Well
6 The Higher You Climb, the Harder You Fall
9 Chapter 5: Can You Keep a Secret?
1 Why Is Defending Against Adversarial Attacks Hard?
2 Masking Is Important
3 Because It Is Possible
4 Masking Alone Is Not Good Enough
5 An Average Concerned Citizen
6 Security by Obscurity Has Limited Benefit
7 The Opportunity Is Great; the Threat Is Real; the Approach Must Be Bold
8 Swiss Cheese
10 Chapter 6: Sailing for Adventure on the Deep Blue Sea
1 Why Is Securing AI Systems So Hard? An Economics Perspective
2 It's a Sign
3 The Most Important AI Law You've Never Heard Of
4 Lies, Damned Lies, and Explanations
5 No Free Lunch
6 What You Measure Is What You Get
7 Who Reaps the Benefits?
8 Cargo Cult Science
11 Chapter 7: The Big One
1 This Looks Futuristic
2 By All Means, Move at a Glacial Pace; You Know How That Thrills Me
3 Waiting for the Big One
4 Software, All the Way Down
5 The Aftermath
6 Race to AI Safety
7 Happy Story
8 In Medias Res
12 Appendix A: Big-Picture Questions
13 Acknowledgments
14 Index
15 Copyright
16 Dedication
17 Supplemental Images
18 End User License Agreement
List of Tables
1 Chapter 1
1 Table 1.1 How Attackers Interface with AI
2 Table 1.2 Odds of Breaking ML Systems
2 Chapter 2
1 Table 2.1 Error Rates by Network Size
List of Illustrations
1 Chapter 1
1 Figure 1.1 Trolls from Reddit and 4chan poisoned the training data used in ...
2 Chapter 2
1 Figure 2.1 What's in these pictures? Courtesy of Dan Hendrycks
2 Figure 2.2 What do you see in these pictures? Courtesy of Anh Nguyen
3 Figure 2.3 An AI system misidentified this model of a schooner as a can ope...
4 Figure 2.4 Tumblr's AI-based porn detection system triggered on banal conte...
5 Figure 2.5 Disappointed fans from Reddit briefly “Google bombed” Google's s...
6 Figure 2.6 By filling a wagon with 99 Android phones running Google Maps an...
7 Figure 2.7 Simple electrical tape on speed signs confused Tesla's sensors t...
8 Figure 2.8 A self-driving car is immobilized because it confuses the salt c...
9 Figure 2.9 Changing the hue and saturation of the image also changes how AI...
10 Figure 2.10 The boxes show how the AI algorithm recognizes the objects. Whe...
11 Figure 2.11 Simply cropping and rotating the image causes the AI system to ...
12 Figure 2.12 All it takes to attack a state-of-the-art AI healthcare algorit...
3 Chapter 3
1 Figure 3.1 Panda or gibbon? Courtesy of Ian Goodfellow
2 Figure 3.2 An adversarial T-shirt in a controlled setting does not recogniz...
3 Figure 3.3 This looks like a number 5 to humans but is a number 1 to a mach...
4 Figure 3.4 The image on the left is unaltered and recognized as 5. The one ...
5 Figure 3.5 This is the specific noise added to “move” it across the decisio...
6 Figure 3.6 Adding adversarial noise to medical image scans has the potentia...
7 Figure 3.7 The attacker can start with any image and modify it to something...
8 Figure 3.8 Adversarial noise can be printed on a physical patch to confuse ...
4 Chapter 4
1 Figure 4.1 This picture of a person wearing jeans was incorrectly tagged as...
2 Figure 4.2 Poisoning a single training example leads to multiple failures d...
5 Chapter 5
1 Figure 5.1 The left and center images are visually similar and hence have a ...
6 Chapter 6
1 Figure 6.1 When adversarial noise is added to an image of a panda, it is mi...
2 Figure 6.2 Researchers found that a Black female face is more susceptible t...