The Palgrave Handbook of Malicious Use of AI and Psychological Security

This handbook focuses on new threats to psychological security posed by the malicious use of AI, and on how AI can be used to counteract such threats. Studies of the malicious use of AI through deepfakes, agenda setting, sentiment analysis, affective computing, and so forth provide a clear picture of the various forms and methods of malicious influence on the human psyche, and through it on political, economic, and cultural processes and on the activities of state and non-state institutions. Separate chapters examine the malicious use of AI in geopolitical confrontation, political campaigns, strategic deception, damage to corporate reputation, and the activities of extremist and terrorist organizations. This unique volume brings together a multidisciplinary range of established scholars and emerging researchers from 11 countries. The handbook is an invaluable resource for students, researchers, and professionals interested in this new and developing field of social practice and knowledge.

Editor: Evgeny Pashentsev
Publisher: Palgrave Macmillan
Year: 2023

Language: English
Pages: 710
City: London

Contents
About the Editor
Notes on Contributors
1: Introduction: The Malicious Use of Artificial Intelligence—Growing Threats, Delayed Responses
References
Part I: The Malicious Use of Artificial Intelligence Against Psychological Security: Forms and Methods
2: General Content and Possible Threat Classifications of the Malicious Use of Artificial Intelligence to Psychological Security
Introduction
Challenges in Defining the Malicious Use of Artificial Intelligence
From Personal to International Psychological Security
Threat Classifications of the Malicious Use of Artificial Intelligence to Psychological Security
Levels of Malicious Use of Artificial Intelligence Threats to Psychological Security
Conclusion
References
3: The Malicious Use of Deepfakes Against Psychological Security and Political Stability
Introduction
From a Narrow to Broad Understanding of Deepfakes and Their Role in Malicious Use of Artificial Intelligence
Deepfakes and Their Malicious Use in Politics
A Comprehensive Approach to the Malicious Use of Deepfakes Threat Assessment
Conclusion
References
4: Automating Extremism: Mapping the Affective Roles of Artificial Agents in Online Radicalization
Introduction
Background
Bots as Agents of Affective Bonding
Bots as Personal Automated Headhunters
Whack-A-Mole Warfare
Conclusion
References
5: Hate Speech in Perception Management Campaigns: New Opportunities of Sentiment Analysis and Affective Computing
Introduction
Recognition and Interpretation of Emotions Through Affective Computing and Sentiment Analysis: New Possibilities for Perception Management Campaigns
Anger, Hate Speech, and Emotional Regimes: Examples and Prospects of Hate Incitement on Social Networks
Scenarios and Risks of the Malicious Use of Artificial Intelligence in Hate Speech-Oriented Perception Management
Prevention and Mitigation of Harm from Perception Management Campaigns Based on the Malicious Use of Artificial Intelligence
Recent Developments and Prospects of the Malicious Use of Artificial Intelligence in Hate Speech-Oriented Perception Management
Conclusion
References
6: The Malicious Use of Artificial Intelligence Through Agenda Setting
Introduction
The Rising Role of Artificial Intelligence and Its Malicious Use in Agenda Setting
Big Tech and Agenda Setting
Conclusion
References
Part II: Areas of Malicious Use of Artificial Intelligence in the Context of Threats to Psychological Security
7: The COVID-19 Pandemic and the Rise of Malicious Use of AI Threats to National and International Psychological Security
Introduction
Review of the Literature: Malicious Use of Artificial Intelligence as a Threat to International Psychological Security in the Context of the COVID-19 Mega-crisis
Mega-crisis
Securitization and International Psychological Security
Analysis of Sample Cases of Malicious Use of AI Threats to International Psychological Security During the COVID-19 Pandemic
Central-Eastern Europe
The United States
Prevention and Mitigation Approaches to the Threat of Malicious Use of Artificial Intelligence in the International Psychological Security Domain During COVID-19 and Similar Mega-crises
Conclusion
References
8: Malicious Use of Artificial Intelligence in Political Campaigns: Challenges for International Psychological Security for the Next Decades
Introduction
Artificial Intelligence as a Game Changer in Politics, Politicians, and Political Behavior from a Psychological Perspective
Malicious Use of Artificial Intelligence in Political Campaigns: The Rising Threats
Regulating Artificial Intelligence Use: Preventing Malicious Use of Artificial Intelligence and Protecting Psychological Security
Conclusion
References
9: Destabilization of Unstable Dynamic Social Equilibriums and the Malicious Use of Artificial Intelligence in High-Tech Strategic Psychological Warfare
Introduction
Multilevel Psychological Warfare: Challenges for Global Security
From Stable Dynamic Social Equilibrium to Unstable Dynamic Social Equilibrium: The Risks Are Becoming More Serious
Artificial Intelligence in High-Tech Strategic Psychological Warfare
Conclusion
References
10: Current and Future Threats of the Malicious Use of Artificial Intelligence by Terrorists: Psychological Aspects
Introduction
Artificial Intelligence Specialists Joining Terrorist Organizations as the Basis for Terrorist Malicious Use of Artificial Intelligence
Real Cases and Future Scenarios of Malicious Use of Artificial Intelligence by Terrorists and the Psychological Aspects
Ways to Counter the Psychological Effects of Malicious Use of Artificial Intelligence by Terrorists
Conclusion
References
11: Malicious Use of Artificial Intelligence and the Threats to Corporate Reputation in International Business
Introduction
Threats and Risks to International Business in the Twenty-First Century: From Economic Warfare to Malicious Use of Artificial Intelligence
(Online) Reputation Management: Tendencies and Challenges in the Twenty-First Century
Malicious Use of Artificial Intelligence and Corporate Reputation Management in International Business
Corporate Propaganda, Fake News, and Deepfakes
Fraud, Phishing, and Corporate Breaches
Information Agenda Setting, Hacking, and Bot Manipulation
Countering Malicious Use of Artificial Intelligence in International Business: An Integrated Response toward the Protection of International Psychological Security
Conclusion
Bibliography
Part III: Regional and National Implications of the Malicious Use of Artificial Intelligence and Psychological Security
12: Malicious Use of Artificial Intelligence: Risks to Psychological Security in BRICS Countries
Introduction
Artificial Intelligence Development in BRICS Countries
Brazil
India
Russia
South Africa
Specific Cases of Malicious Use of Artificial Intelligence in Brazil, India, Russia and South Africa and the National Government Response
Brazil
India
Russia
South Africa
BRICS Initiatives to Prevent Psychological Security Risks and Threats Associated with Malicious Use of Artificial Intelligence
Conclusion
References
13: The Threats and Current Practices of Malicious Use of Artificial Intelligence in Psychological Security in China
Introduction
Current Level of AI Development and Vulnerabilities in Xinjiang, Taiwan, and Hong Kong as a Ground for Malicious Use of Artificial Intelligence in China
Current and Future Malicious Use of Artificial Intelligence Threats in China, and Their Impact on Society’s Psychological Security
China’s Initiatives in Countering Malicious Use of Artificial Intelligence in the Psychological Security Sphere
Conclusion
References
14: Malicious Use of Artificial Intelligence, Uncertainty, and U.S.–China Strategic Mutual Trust
Introduction
Literature Review
Path Analysis of the Risk of Malicious Use of Artificial Intelligence Affecting U.S.–China Strategic Mutual Trust
Responses and Measures to Improve Strategic Mutual Trust Among Nations: Artificial Intelligence as an Instrument of Trust Not War
Conclusion
References
15: Scenario Analysis of Malicious Use of Artificial Intelligence and Challenges to Psychological Security in India
Introduction
Use of Artificial Intelligence, Machine Learning, and Big Data, and Their Penetration Level, Deployment and Operations in India, and Vulnerability to Malicious Attacks
Psychological Operations Through Malicious Use of Artificial Intelligence in India: Current Practice and Possible Scenarios
Deep Fakes
Agenda-Setting and Bots
Election Integrity Challenges Using Sentiment Analysis
Possible Scenarios
Misuse of UAVs for Malicious Purposes
Potential Weaponization of Sensitive Personal Information
Internet of Things and Smart Cities
Technological, Political, and Legal Architecture to Detect and Prevent Malicious Use of Artificial Intelligence and Psychological Security in India
Conclusion
References
16: Current and Potential Malicious Use of Artificial Intelligence Threats in the Psychological Domain: The Case of Japan
Introduction
Malicious Use of Artificial Intelligence in Japan: Structural and Cultural Determinants of Threats
The Role and Place of Artificial Intelligence Technologies in Japan’s Economy and National Politics
Relevant Malicious Use of Artificial Intelligence Threats in the Psychological Domain: Frequent Patterns, and Their Social and Political Bases
Threats of the Malicious Use of Artificial Intelligence Attacks on Psychological Security in Japan: Possible Scenarios
Conclusion
References
17: Geopolitical Competition and the Challenges for the European Union of Countering the Malicious Use of Artificial Intelligence
Introduction
Malicious Use of Artificial Intelligence and Threats to International Psychological Security in a Geopolitical Context
Main Policies and Paradigms of the EU Regarding Artificial Intelligence, and Its Ability to Anticipate New Geopolitical Challenges Arising from Malicious Use of Artificial Intelligence and Its Threat to International Psychological Security
The Competitive Advantage of the EU
Risks of Malicious Use of Artificial Intelligence
Alliances at the International Level
Weaknesses of the EU Paradigm
EU-US Rivalry Over Control of Data and the Absence of European Big Tech and GAFAM
Ambiguity of the EU Toward the Fight Against Geopolitical Malicious Use of Artificial Intelligence
Differences in the Approaches of the Main EU Member States, Particularly France and Germany, and the Resulting Obstacles in Terms of Adaptation of the EU to the New Geopolitical Challenges Posed by Global Competition for Artificial Intelligence
Common Positions and Obstacles to an EU Position on Artificial Intelligence
Possible Counterstrategies at the National, EU, and Pan-European Levels to Counter Malicious Use of Artificial Intelligence and Its Threat to International Psychological Security
The EU and NATO
Conclusion
References
18: Germany: Rising Sociopolitical Controversies and Threats to Psychological Security from the Malicious Use of Artificial Intelligence
Introduction
International and Domestic Challenges for Modern Germany: Global Ambitions and Local Sociopolitical Problems
Malicious Use of Artificial Intelligence in Germany: Norms, Practices and Technologies
Malicious Use of Artificial Intelligence Threats for Psychological Security in Germany: Prospects and Scenarios
Conclusion
References
19: Artificial Intelligence and Deepfakes in Strategic Deception Campaigns: The U.S. and Russian Experiences
Introduction
AI-Related Privacy and Security Issues
The Use of Artificial Intelligence in Strategic Deception Campaigns
Defining Strategic Deception
Trolling and Pranking
Visual Manipulation and Computational Propaganda
The Strategic Applications of Deepfakes
Deepfake Classification
Using Deepfakes for Character Assassination
Using Deepfakes for Denial and Image Repair
Issues with Deepfake Detection
Conclusion and Future Research
References
20: Malicious Use of Artificial Intelligence and Threats to Psychological Security in Latin America: Common Problems, Current Practice and Prospects
Introduction
Artificial Intelligence Implementation in Latin America
Malicious Use of Artificial Intelligence Threats to Psychological Security at the First Level
Malicious Use of Artificial Intelligence Threats to International Psychological Security at the Second Level
The Third Level: Rising Risks
Conclusion
References
21: Malicious Use of Artificial Intelligence and the Threat to Psychological Security in the Middle East: Aggravation of Political and Social Turbulence
Introduction
The Middle East: An Outline of Key Vulnerabilities
The Rapid Development of the Artificial Intelligence Industry in the Region
AI-Driven Targeted Psychological Operations: Challenging Regional Stability
Recommendations for Decision-Makers
Conclusion
References
Part IV: Future Horizons: The New Quality of Malicious Use of Artificial Intelligence Threats to Psychological Security
22: Malicious Use of Artificial Intelligence in the Metaverse: Possible Threats and Countermeasures
Introduction
Definition and Main Characteristics of the Metaverse and the Degree of Development of Its Components
Actors Operating in the Metaverse, Risks of Transition to Malicious Use of Artificial Intelligence
Possible Forms and Methods of Malicious Use of Artificial Intelligence in the Metaverse
The Actors and Directions for Countering the Malicious Use of Artificial Intelligence in the Metaverses
Conclusion
References
23: Unpredictable Threats from the Malicious Use of Artificial Strong Intelligence
Introduction
Review and Connected Studies
Problem Statement
Theoretical ASI Components for Possible Malicious Use
Soft Modeling of Malicious Counteraction
An Uncaused Unpredictable Malicious Decision
Non-local Malicious Cognitive Semantics of Artificial Strong Intelligence
The Convergence of Decision-Making with an Ill-Defined Malicious Goal
Deductive Malicious Artificial Intelligence Threats to International Security
Threats to Psychological Security from Breaking an Ethical Code with Artificial Strong Intelligence
Increasing Threats to International Security from Artificial Strong Intelligence in the Near Future
Conclusion
References
24: Prospects for a Qualitative Breakthrough in Artificial Intelligence Development and Possible Models for Social Development: Opportunities and Threats
Introduction
Development of Artificial Intelligence
Developing a Human Being: The Risks of Malicious Use, from Human Genetic Engineering to Augmented Intelligence and Cyborgization
Transhumanism: Opportunities and Risks
The Malicious Use of Robotization and Artificial Intelligence
Improvement of Artificial Intelligence and the Risk of Its Malicious Use to Provoke Nuclear Conflict
Advanced Forms of Artificial Intelligence: The Risks of Moving from the Malicious Use of Artificial Intelligence to Autonomous Malicious Actions
Artificial Narrow Intelligence, Artificial General Intelligence, Artificial Strong Intelligence, and Super Artificial Intelligence: Beyond Strict Borders
Social Development Alternatives: “To Be, or Not to Be, That Is the Question”
Conclusion
References
25: Conclusion: Per Aspera Ad Astra
Some Results and Ways of Counteracting the Malicious Use of Artificial Intelligence
Developing International Cooperation: Necessity Versus Difficulties
Human and Artificial Intelligence Interaction as a Source of Social Optimism
References
Index