Cyber Deception: Techniques, Strategies, and Human Aspects

This book covers a wide variety of cyber deception research, including game theory, artificial intelligence, cognitive science, and deception-related technology.

Author(s): Tiffany Bao, Milind Tambe, Cliff Wang
Publisher: Springer
Year: 2023

Language: English
Pages: 429

Preface
Acknowledgments
Contents
Diversifying Deception: Game-Theoretic Models for Two-Sided Deception and Initial Human Studies
1 Introduction
2 Motivating Domain and Related Work
3 Feature Selection Game
3.1 Formal Definition of Feature Selection Game
3.2 Nature Player Actions
3.3 Defender Actions
3.4 Attacker Actions
3.5 Utility Functions
3.6 Solution Approach
4 Empirical Study of FSG
4.1 Measuring the Similarity of Features
4.2 Deception with Symmetric Costs
4.3 Deception with Asymmetric Costs
4.4 Deception with Naïve Attackers
5 Human Experiment
5.1 Experimental Design
5.2 Experiment Task
5.3 Participants
5.4 Results
6 Discussion and Further Applications
6.1 Adversarial Learning
6.2 Disguising Network Traffic
6.3 Limitations
7 Conclusions
References
Human-Subject Experiments on Risk-Based Cyber Camouflage Games
1 Introduction
1.1 Related Work
2 Risk-Based Cyber Camouflage Games
3 Rational Attackers
4 Boundedly Rational Attackers and Prospect Theory
4.1 Learning Model Parameters from Data
5 Human-Subject Experiments
5.1 Experimental Setup in CyberVAN
5.2 Participants
5.3 Experimental Process
5.4 Experiment Results
5.4.1 Attacker's Success Rate
5.4.2 Defender's Losses
6 Summary
References
Adaptive Cyberdefense with Deception: A Human–AI Cognitive Approach
1 Introduction
2 A Research Framework and Summary of New Insights for Adaptive Cyber Defense
2.1 Generate a Defense Strategy
2.1.1 Deception Techniques
2.1.2 Game Theory and Machine Learning Algorithms for Allocation of Defense Resources
2.2 Deploy Defense Strategies in Testbeds that Vary in Realism and Complexity
2.3 Collect Human Decisions Through Experimentation and the Construction of Cognitive Clones
2.4 Improving the Adaptivity of Defense Strategies
3 Conclusion: Towards Adaptive Human–AI Teaming for Cyber Defense
References
Cognitive Modeling for Personalized, Adaptive Signaling for Cyber Deception
1 A Framework for Personalized Adaptive Cyber Deception
2 Modeling the Adversary
2.1 What Is a Cognitive Model?
2.2 Modeling Decisions from Experience
2.3 Deceptive Signaling for Cybersecurity
2.3.1 Insider Attack Game (IAG)
2.3.2 Modeling Adversary Behavior in the IAG
3 Predicting Adversarial Behavior
4 Observing the Adversary: Personalizing the Model
4.1 Model-Tracing
4.2 Knowledge-Tracing
5 Using Cognitive Models to Inform Adaptive Defense
5.1 Cognitive Signaling Scheme Evaluation
5.2 Discussion
5.2.1 Open Questions
5.2.2 Limitations and Extensions
5.2.3 Future Research
6 Conclusion
References
Deceptive Signaling: Understanding Human Behavior Against Signaling Algorithms
1 Introduction
2 Insider Attack Game
3 Signaling Algorithms
4 Methods
4.1 Participants
4.2 Procedure
5 Results
5.1 Is Signaling Effective?
5.2 Effect of Rational 1-sided and 2-sided Signaling Against No Signaling
5.3 Adaptive Signaling Using Cognitive Models
5.4 Discussion
References
Optimizing Honey Traffic Using Game Theory and Adversarial Learning
1 Introduction
2 Motivation and Related Work
3 Snaz Overview
3.1 Snaz Architecture
3.2 Threat Model and Assumptions
3.3 Game Model
3.3.1 Snaz Game Example
3.3.2 Optimal Defender's Linear Program
3.4 Simulations and Model Analysis
3.4.1 Preliminary Testbed Evaluation
3.4.2 Snaz Game Theory Solution Quality
3.4.3 Solution Analysis
3.4.4 Scalability Evaluation
4 Decoy Traffic Generation Approach
5 Network Traffic Obfuscation
5.1 Experimental Setup
5.1.1 Dataset
5.1.2 Realistic Features
5.1.3 Classification Model
5.2 Adversarial Settings
5.2.1 Defender Model
5.2.2 Adversary Model
5.2.3 Obfuscation Approaches
5.3 Restricted Traffic Distribution Attack
5.3.1 Perturbation Constraints
5.3.2 Distribution Constraints
5.3.3 Framework
5.4 Results
6 Conclusion
References
Mee: Adaptive Honeyfile System for Insider Attacker Detection
1 Introduction
2 Related Work
3 Problem Statement
4 Design of Mee System
4.1 Mee Client Design
4.2 Mee Controller Design
4.3 Communication Between Mee Client and Controller
5 Scenario and Model
5.1 Network and Node Model
5.2 Attacker Model
5.3 Defender Model
5.4 Model of Legitimate User
6 Honeyfile Game with Mee
7 Implementation and Evaluation
7.1 Simulation Settings
7.2 Comparing Mee with the Traditional Honeyfile System
8 Conclusion and Future Work
References
HoneyPLC: A Next-Generation Honeypot for Industrial Control Systems
1 Introduction
1.1 The Problem: Preventing Attacks Targeting ICS via PLCs
1.2 Challenges for Solving the Problem
1.3 Proposed Approach: A Next-Generation Honeypot for ICS
1.4 Contributions to Scientific Literature
1.5 Source Code Availability and Chapter Roadmap
2 Background and Related Work
2.1 Programmable Logic Controllers
2.2 Network Reconnaissance Tools
2.2.1 Nmap
2.2.2 PLCScan
2.2.3 Shodan
2.3 Exemplary ICS Malware
2.3.1 Stuxnet
2.3.2 Pipedream Toolkit
2.3.3 Dragonfly
2.3.4 Crashoverride
2.4 Honeypots for ICS
2.4.1 Low-Interaction Honeypots
2.4.2 High-Interaction Honeypots
3 Limitations of Existing Honeypots
4 HoneyPLC: A Convenient High-Interaction Honeypot For PLCs
4.1 Illustrative Use Case Scenario
4.1.1 Initial Setup
4.1.2 Fingerprinting
4.1.3 Reconnaissance
4.1.4 Code Injection
4.1.5 Confirmation and Farewell
4.2 Supporting PLC Extensibility
4.2.1 PLC Profiles
4.2.2 PLC Profiler Tool
4.3 Supporting Operational Covertness
4.3.1 TCP/IP Simulation
4.3.2 S7comm Server
4.3.3 SNMP Server
4.3.4 HTTP Server
4.4 Ladder Logic Collection
4.5 Implementing Record Keeping via Logging
5 Evaluation
5.1 Experimental Questions
5.2 Case Study: PLC Profiling
5.2.1 Profiling Siemens PLCs
5.2.2 Environment Description
5.2.3 Methodology
5.2.4 Results
5.2.5 Profiling Allen-Bradley and ABB PLCs
5.2.6 Environment Description
5.2.7 Methodology
5.2.8 Results
5.3 Resilience to Reconnaissance Experiment
5.3.1 Environment Description
5.3.2 Methodology
5.3.3 Results
5.4 Shodan's Honeyscore Experiment
5.4.1 Environment Description
5.4.2 Methodology
5.4.3 Results
5.5 Step7 Manager Experiment
5.5.1 Environment Description
5.5.2 Methodology
5.5.3 Results
5.6 Internet Interaction Experiment
5.6.1 Environment Description
5.6.2 Methodology
5.6.3 Results
5.7 Ladder Logic Capture Experiment
5.7.1 Environment Description
5.7.2 Methodology
5.7.3 Results
6 Discussion and Future Work
6.1 Comparing HoneyPLC with Previous Approaches
6.2 Limitations
6.3 Future Work
7 Conclusions
References
Using Amnesia to Detect Credential Database Breaches
1 Introduction
2 Related Work
3 Honeywords
4 Detecting Honeyword Entry Locally
4.1 Threat Model
4.2 Algorithm
4.3 Security
5 Detecting Remotely Stuffed Honeywords
5.1 Threat Model
5.2 Private Containment Retrieval
5.3 Algorithm
5.4 Security
5.5 Alternative Designs
6 Private Containment Retrieval
6.1 Comparison to Related Protocols
6.2 Building Blocks
6.3 Protocol Description
6.4 Security
6.5 Performance
7 Discussion
8 Conclusion
References
Deceiving ML-Based Friend-or-Foe Identification for Executables
1 Introduction
2 Background and Related Work
2.1 DNNs for Static Malware Detection
2.2 Attacking and Defending ML Algorithms
2.3 Binary Rewriting and Randomization
3 Technical Approach
3.1 Threat Model
3.2 Functionality-Preserving Attack
4 Evaluation
4.1 Datasets and Malware-Detection DNNs
4.1.1 Dataset Composition
4.1.2 DNN Training
4.2 Attack-Success Criteria
4.3 Randomly Applied Transformations
4.4 White-Box Attacks vs. DNNs
4.5 Black-Box Attacks vs. DNNs
4.6 Commercial Anti-Viruses
4.7 Correctness
5 Potential Mitigations
5.1 Prior Defenses
5.2 Masking Random Instructions
5.3 Detecting Adversarial Examples
5.4 Takeaways
6 Conclusion
Appendix 1: Comparison to Kreuk et al. and Success After Sanitization
Appendix 2: Our Attacks' Transferability to Commercial Anti-Viruses
Appendix 3: In-Place Normalization
References