Multimodal Affective Computing: Affective Information Representation, Modelling, and Analysis


Affective computing is an emerging field at the intersection of artificial intelligence and behavioral science. It concerns the study and development of systems that recognize, interpret, process, and simulate human emotions. The field has recently advanced from exploratory studies to real-world applications.

Multimodal Affective Computing offers readers a concise overview of the state of the art and emerging themes in affective computing, including a comprehensive review of existing approaches in applied affective computing systems and social signal processing. It covers affective facial expression and recognition, affective body expression and recognition, affective speech processing, affective text and dialogue processing, recognizing affect using physiological measures, computational models of emotion and their theoretical foundations, and affective sound and music processing.

This book identifies future directions for the field and summarizes a set of guidelines for developing next-generation affective computing systems that are effective, safe, and human-centered. The book is an informative resource for academicians, professionals, researchers, and students at engineering and medical institutions working in the areas of applied affective computing, sentiment analysis, and emotion recognition.

Author(s): Gyanendra K. Verma
Publisher: Bentham Science Publishers
Year: 2023

Language: English
Pages: 165
City: Singapore

Cover
Title
Copyright
End User License Agreement
Contents
Foreword
Preface
CONSENT FOR PUBLICATION
CONFLICT OF INTEREST
Acknowledgements
Affective Computing
1.1. INTRODUCTION
1.2. WHAT IS EMOTION?
1.2.1. Affective Human-Computer Interaction
1.3. BACKGROUND
1.4. THE ROLE OF EMOTIONS IN DECISION MAKING
1.5. CHALLENGES IN AFFECTIVE COMPUTING
1.5.1. How Can Many Emotions Be Analyzed in a Single Framework?
1.5.2. How Can Complex Emotions Be Represented in a Single Framework or Model?
1.5.3. Is the Chosen Theoretical Viewpoint Relevant to Other Areas of Affective Computing?
1.5.4. How Can Physiological Signals Be Used to Anticipate Complicated Emotions?
1.6. AFFECTIVE COMPUTING IN PRACTICE
1.6.1. Avatars or Virtual Agents
1.6.2. Robotics
1.6.3. Gaming
1.6.4. Education
1.6.5. Medical
1.6.6. Smart Homes and Workplace Environments
CONCLUSION
REFERENCES
Affective Information Representation
2.1. INTRODUCTION
2.2. AFFECTIVE COMPUTING AND EMOTION
2.2.1. Affective Human-Computer Interaction
2.2.2. Human Emotion Expression and Perception
2.2.2.1. Facial Expressions
2.2.2.2. Audio
2.2.2.3. Physiological Signals
2.2.2.4. Hand and Gesture Movement
2.3. RECOGNITION OF FACIAL EMOTION
2.3.1. Facial Expression Fundamentals
2.3.2. Emotion Modeling
2.3.3. Representation of Facial Expression
2.3.4. Facial Emotion's Limitations
2.3.5. Techniques for Classifying Facial Expressions
CONCLUSION
REFERENCES
Models and Theory of Emotion
3.1. INTRODUCTION
3.2. EMOTION THEORY
3.2.1. Categorical Approach
3.2.2. Evolutionary Theory of Emotion by Darwin
3.2.3. Cognitive Appraisal and Physiological Theory of Emotions
3.2.4. Dimensional Approaches to Emotions
CONCLUSION
REFERENCES
Affective Information Extraction, Processing and Evaluation
4.1. INTRODUCTION
4.2. AFFECTIVE INFORMATION EXTRACTION AND PROCESSING
4.2.1. Information Extraction from Audio
4.2.2. Information Extraction from Video
4.2.3. Information Extraction from Physiological Signals
4.3. STUDIES ON AFFECT INFORMATION PROCESSING
4.4. EVALUATION
4.4.1. Types of Errors
4.4.1.1. False Acceptance Ratio
4.4.1.2. False Reject Ratio
4.4.2. Threshold Criteria
4.4.3. Performance Criteria
4.4.4. Evaluation Metrics
4.4.4.1. Mean Absolute Error (MAE)
4.4.4.2. Mean Square Error (MSE)
4.4.5. ROC Curves
4.4.6. F1 Measure
CONCLUSION
REFERENCES
Multimodal Affective Information Fusion
5.1. INTRODUCTION
5.2. MULTIMODAL INFORMATION FUSION
5.2.1. Early Fusion
5.2.2. Intermediate Fusion
5.2.3. Late Fusion
5.3. LEVELS OF INFORMATION FUSION
5.3.1. Sensor or Data-level Fusion
5.3.2. Feature Level Fusion
5.3.3. Decision-Level Fusion
5.4. MAJOR CHALLENGES IN INFORMATION FUSION
CONCLUSION
REFERENCES
Multimodal Fusion Framework and Multiresolution Analysis
6.1. INTRODUCTION
6.2. THE BENEFITS OF MULTIMODAL FEATURES
6.2.1. Noise in Sensed Data
6.2.2. Non-Universality
6.2.3. Complementary Information
6.3. FEATURE LEVEL FUSION
6.4. MULTIMODAL FEATURE-LEVEL FUSION
6.4.1. Feature Normalization
6.4.2. Feature Selection
6.4.3. Criteria for Feature Selection
6.5. MULTIMODAL FUSION FRAMEWORK
6.5.1. Feature Extraction and Selection
6.5.1.1. Extraction of Audio Features
6.5.1.2. Extraction of Video Features
6.5.1.3. Extraction of Peripheral Features from EEG
6.5.2. Dimension Reduction and Feature-level Fusion
6.5.3. Emotion Mapping to a 3D VAD Space
6.6. MULTIRESOLUTION ANALYSIS
6.6.1. Motivations for the Use of Multiresolution Analysis
6.6.2. The Wavelet Transform
6.6.3. The Curvelet Transform
6.6.4. The Ridgelet Transform
CONCLUSION
REFERENCES
Emotion Recognition from Facial Expression in a Noisy Environment
7.1. INTRODUCTION
7.2. THE CHALLENGES IN FACIAL EMOTION RECOGNITION
7.3. NOISE AND DYNAMIC RANGE IN DIGITAL IMAGES
7.3.1. Characteristic Sources of Digital Image Noise
7.3.1.1. Sensor Read Noise
7.3.1.2. Pattern Noise
7.3.1.3. Thermal Noise
7.3.1.4. Pixel Response Non-uniformity (PRNU)
7.3.1.5. Quantization Error
7.4. THE DATABASE
7.4.1. Cohn-Kanade Database
7.4.2. JAFFE Database
7.4.3. In-House Database
7.5. EXPERIMENTS WITH THE PROPOSED FRAMEWORK
7.5.1. Image Pre-Processing
7.5.2. Feature Extraction
7.5.3. Feature Matching
7.6. RESULTS AND DISCUSSIONS
7.7. RESULTS UNDER ILLUMINATION CHANGES
7.8. RESULTS UNDER GAUSSIAN NOISE
7.8.1. Comparison with Other Strategies
CONCLUSION
REFERENCES
Spontaneous Emotion Recognition from Audio-Visual Signals
8.1. INTRODUCTION
8.2. RECOGNITION OF SPONTANEOUS AFFECTS
8.3. THE DATABASE
8.3.1. eNTERFACE Database
8.3.2. RML Database
8.4. AUDIO-BASED EMOTION RECOGNITION SYSTEM
8.4.1. Experiments
8.4.2. System Development
8.4.2.1. Audio Features
8.5. VISUAL CUE-BASED EMOTION RECOGNITION SYSTEM
8.5.1. Experiments
8.5.2. System Development
8.5.2.1. Visual Features
8.6. EXPERIMENTS BASED ON THE PROPOSED AUDIO-VISUAL CUES FUSION FRAMEWORK
8.6.1. Results
8.6.2. Comparison to Other Research
CONCLUSION
REFERENCES
Multimodal Fusion Framework: Emotion Recognition from Physiological Signals
9.1. INTRODUCTION
9.1.1. Electrical Brain Activity
9.1.2. Muscle Activity
9.1.3. Skin Conductivity
9.1.4. Skin Temperature
9.2. MULTIMODAL EMOTION DATABASE
9.2.1. DEAP Database
9.3. FEATURE EXTRACTION
9.3.1. Feature Extraction from EEG
9.3.2. Feature Extraction from Peripheral Signals
9.4. CLASSIFICATION AND RECOGNITION OF EMOTION
9.4.1. Support Vector Machine (SVM)
9.4.2. Multi-Layer Perceptron (MLP)
9.4.3. K-Nearest Neighbor (K-NN)
9.5. RESULTS AND DISCUSSION
9.5.1. Emotion Categorization Results Based on the Proposed Multimodal Fusion Architecture
CONCLUSION
REFERENCES
Emotions Modelling in 3D Space
10.1. INTRODUCTION
10.2. AFFECT REPRESENTATION IN 2D SPACE
10.3. EMOTION REPRESENTATION IN 3D SPACE
10.4. 3D EMOTION MODELING IN VAD SPACE
10.5. EMOTION PREDICTION IN THE PROPOSED FRAMEWORK
10.5.1. Multimodal Data Processing
10.5.1.1. Prediction of Emotion from a Visual Cue
10.5.1.2. Prediction of Emotion from Physiological Cue
10.5.2. Ground Truth Data
10.5.3. Emotion Prediction
10.6. FEATURE SELECTION AND CLASSIFICATION
10.7. RESULTS AND DISCUSSIONS
CONCLUSION
REFERENCES
Subject Index
Back Cover