Designing Human-Centric AI Experiences: Applied UX Design for Artificial Intelligence


User experience (UX) design practices have seen a fundamental shift as more and more software products incorporate machine learning (ML) components and artificial intelligence (AI) algorithms at their core. This book probes UX design's role in making these technologies inclusive and in enabling users to collaborate with AI.
AI/ML-based systems have changed traditional UX design. Instead of programming a method to perform a specific action, the creators of these systems provide data and train them to produce outcomes based on inputs (the sketch after the list below illustrates this shift from rules to examples). These systems are dynamic, yet while the AI changes over time, its user experience in many cases does not adapt to this dynamic nature.
Applied UX Design for Artificial Intelligence explores this problem. It addresses the challenges and opportunities in UX design for AI/ML systems, looks at best practices for designers, managers, and product creators, and shows how people from non-technical backgrounds can collaborate effectively with AI and machine learning teams.
You Will Learn
• Best practices in UX design when building human-centric AI products or features
• How to spot opportunities for applying AI in your organization
• The advantages and limitations of AI when building software products
• How to collaborate and communicate effectively with AI/ML tech teams
• UX design for different modalities (voice, speech, text, etc.)
• How to design ethical AI systems
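
To make the rules-versus-examples distinction in the description concrete, here is a minimal, hypothetical sketch (not taken from the book): the same spam-flagging task built first as a hand-coded rule and then as a model learned from labeled examples, assuming scikit-learn as the ML dependency.

    # Hypothetical illustration of the rules-vs-examples shift (not from the book).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import Pipeline

    def is_spam_rules(message: str) -> bool:
        """Rules-based: the designer hand-codes the behavior as explicit keywords."""
        banned = {"winner", "free", "prize"}
        return any(word in banned for word in message.lower().split())

    # Examples-based: the designer supplies labeled data; the behavior is learned from it.
    train_texts = [
        "You are a winner, claim your free prize now",
        "Free entry, reply to win cash",
        "Lunch at noon tomorrow?",
        "Here are the meeting notes from today",
    ]
    train_labels = ["spam", "spam", "not spam", "not spam"]

    model = Pipeline([("vectorize", CountVectorizer()), ("classify", MultinomialNB())])
    model.fit(train_texts, train_labels)

    message = "Claim your prize before Friday"
    print("rules-based   :", is_spam_rules(message))       # driven by the hand-written keyword list
    print("examples-based:", model.predict([message])[0])  # driven by whatever data the model saw

In the rules-based version, the keyword list fully determines the behavior; in the examples-based version, the behavior shifts whenever the training data changes, which is why the book argues the surrounding user experience must be designed to adapt as well.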


Author(s): Akshay Kore
Series: Design Thinking
Publisher: Apress
Year: 2022

Language: English
Pages: 473
City: New York

Table of Contents
About the Author
About the Technical Reviewer
Acknowledgments
Preface
Chapter 1: On Intelligence
Many Meanings of AI
Thinking Humanly
Thinking Rationally
Acting Humanly
Acting Rationally
Substrate Independence
Foundations of Artificial Intelligence
Philosophy
Mathematics
Economics
Neuroscience
Psychology
Computer Engineering
Control Theory and Cybernetics
Linguistics
Business
Why Is AI a Separate Field?
Superintelligence and Artificial General Intelligence
Narrow AI
Rules vs. Examples
Rules-Based
Examples-Based
A Fundamental Difference in Building Products
Intelligent Everything
User Experience for AI
Beneficial AI
Summary
Chapter 2: Intelligent Agents
Rational Agent
Agents and Environments
Agent
Environment
Simple Environments
Complex Environments
Sensors
Actuators
Goals
Input-Output
Learning Input-Output Mappings
Machine Learning (ML)
Supervised Learning
Unsupervised Learning
Reinforcement Learning
Deep Learning (DL)
Feedback Loops
Rewards
The Risk of Rewards
The Probabilistic Nature of AI
Summary
Chapter 3: Incorporating Artificial Intelligence
A Cognitive Division of Labor
What Machines Do Better
What Humans Do Better
Human + Machine
Supermind
Collective Intelligence
Addition
Improvement
Connection
Artificial Collective Intelligence
Improving Machines
Automating Redundant Tasks
Improving Machine-Machine Collaboration
Improving Human-Machine Collaboration
Missing Middle
Cobots
Roles for AI
Tools
Assistants
Peers
Managers
Finding AI Opportunities
Jobs
Tasks
Breaking Down Jobs into Tasks
Example: Personal Running Trainer
Mapping User Journeys
Experience Mapping
Characteristics
Journey Mapping
Characteristics
User Story Mapping
Characteristics
Service Blueprints
Characteristics
Problem-First Approach
When Does It Not Make Sense to Use AI
Maintaining Predictability
Minimizing Costly Errors
Complete Transparency
Optimizing for High Speed
Optimizing for Low Costs
Static or Limited Information
Data Being Sparse
Social Intelligence
People Not Wanting AI
When Does It Make Sense to Use AI
Personalization
Recommendation
Recognition
Categorization and Classification
Prediction
Ranking
Detecting Anomalies
Natural Language Understanding
Generating New Data
Identifying Tasks Suitable for AI
Considerations for AI Tasks
Type of Action
Augmentation
When to Augment
Measuring Successful Augmentation
Automation
When to Automate
Measuring Successful Automation
Human in the Loop
Example: Training for a Marathon
Type of Environment
Full or Partial Observability
Continuous or Discrete Actions
Number of Agents
Predictable or Unpredictable Environments
Dynamic or Static Environments
Time Horizon
Data
Availability
Access
Access from Within
External Access
Compounding Improvements
Cost
Time and Effort
Quality Improvements and Gains
Societal Norms
Big Red Button
Levels of Autonomy
Rethinking Processes
Netflix
Mercedes
Summary
Chapter 4: Building Trust
Trust in AI
Components of User Trust
Competence
Reliability
Predictability
Benevolence
Trust Calibration
How to Build Trust?
Explainability
Control
Explainability
Who Needs an Explanation?
Decision-Makers
Affected Users
Regulators
Internal Stakeholders
Guidelines for Designing AI Explanations
Make Clear What the System Can Do
Make Clear How Well the System Does Its Job
Set Expectations for Adaptation
Plan for Calibrating Trust
Be Transparent
Build Cause-and-Effect Relationships
Optimize for Understanding
Types of Explanations
Data Use Explanations
Guidelines for Designing Data Use Explanations
Types of Data Use Explanations
Scope of Data Use
Reach of Data Use
Examples-Based Explanations
Generic Explanations
Specific Explanations
Descriptions
Guidelines for Designing Better Descriptions
Types of Descriptions
Partial Explanations
Full Explanations
Confidence-Based Explanations
Guidelines for Designing Confidence-Based Explanations
Types of Confidence-Based Explanations
Categorical
N-Best Results
Numeric
Data Visualizations
Explaining Through Experimentation
Guidelines to Design Better Experimentation Experiences
No Explanation
Evaluating Explanations
Internal Assessment
User Validation
Qualitative Methods
Quantitative Methods
Control
Guidelines for Providing User Control
Balance Control and Automation
Hand Off Gracefully
Types of Control Mechanisms
Data Control
Global Controls
Editability
Removal and Reset
Opting Out
Control over AI Output
Provide a Choice of Results
Allow Users to Correct Mistakes
Support Efficient Dismissal
Make It Easy to Ignore
Borrowing Trust
Opportunities for Building Trust
Onboarding
Set the Right Expectations
Introduce Features Only When Needed
Clarify Data Use
Allow Users to Control Preferences
Design for Experimentation
Reboarding
User Interactions
Set the Right Expectations
Clarify Data Use
Build Cause-and-Effect Relationships
Allow Users to Choose, Dismiss, and Ignore AI Results
Loading States and Updates
Settings and Preferences
Provide Global Data Controls
Clarify Data Use
Allow Editing Preferences
Allow Users to Remove or Reset Data
Allow Opting Out
Errors
Adjust User Expectations
Hand Off Gracefully
Allow Users to Correct AI Mistakes
Allow Users to Choose, Dismiss, and Ignore AI Results
Personality and Emotion
Guidelines for Designing an AI Personality
Don’t Pretend to Be Human
Clearly Communicate Boundaries
Consider Your User
Consider Cultural Norms
Designing Responses
Grammatical Person
Tone of Voice
Strive for Inclusivity
Don’t Leave the User Hanging
Risks of Personification
Summary
Chapter 5: Designing Feedback
Feedback Loops in Artificial Intelligence
Types of Feedback
Explicit Feedback
Using Explicit Feedback
Guidelines for Incorporating Explicit Feedback
Implicit Feedback
Using Implicit Feedback
Guidelines for Incorporating Implicit Feedback
Dual Feedback
Align Feedback to Improve the AI
Reward Function
Collaborate with Your Team
Collecting Feedback
Consider the Stakes of the Situation
Make It Easy to Provide Feedback
Encourage Feedback During Regular Interactions
Allow Correction When the AI Makes Mistakes
Explain How Feedback Will Be Used
Guidelines for Explaining Feedback Use
Consider User Motivations
Reward
Symbolic Rewards
Material Rewards
Social Rewards
Utility
Altruism
Self-Expression
Responding to Feedback
On-the-Spot Response
Connect Feedback to Changes in the User Experience
Clarify Timing and Scope
Set Expectations for Adaptation
Limit Disruptive Changes
Long-Term Response
Control
Editability
Removal and Reset
Opting Out
Make It Easy to Ignore and Dismiss
Transparency
Human-AI Collaboration
Summary
Chapter 6: Handling Errors
Errors Are Inevitable in AI
Humble Machines
Guidelines for Handling AI Errors
Define “Errors” and “Failures”
Use Feedback to Find New Errors
Consider the Type of Error
System Errors
User Errors
User-Perceived Errors
Understand the Stakes of the Situation
Indicate That an Error Occurred
Don’t Blame the User
Optimize for Understanding
Graceful Failure and Handoff
Provide Appropriate Responses
Use Errors as Opportunities for Explanation
Use Errors as Opportunities for Feedback
Disambiguate When Uncertain
Return Control to the User
Assume Intentional Abuse
Strategies for Handling Different Types of Errors
System Errors
Data Errors
Mislabeled or Misclassified Data
Error Resolution
Incomplete Data
Error Resolution
Missing Data
Error Resolution
Relevance Errors
Low-Confidence Results
Error Resolution
Irrelevance
Error Resolution
Model Errors
Incorrect Model
Error Resolution
Miscalibrated Input
Error Resolution
Security Flaws
Error Resolution
Invisible Errors
Background Errors
Error Resolution
Happy Accidents
Error Resolution
User Errors
Unexpected or Incorrect Input
Error Resolution
Breaking User Habits
Error Resolution
User-Perceived Errors
Context Errors
Error Resolution
Failstates
Error Resolution
Recalibrating Trust
Summary
Chapter 7: AI Ethics
Ethics-Based Design
Trustworthy AI
Explainable AI
Black Box Models
Transparency
Bias
Facial Recognition
Causes of Bias
Bias in Training Data
Lack of Team Representation
Reducing Bias
Privacy and Data Collection
Protect Personally Identifiable Information
Protect User Data
Ask Permissions
Explain Data Use
Allow Opting Out
Consider Regulations
Go Beyond “Terms and Conditions”
Manipulation
Behavior Control
Personality
Risks of Personification
Safe AI
Security
Accountability and Regulation
Accountability
Law
Liability
Independent Review Committees
Beneficial AI
Control Problem
Beneficial Machines
Principles of Beneficial Machines
Human in the Loop
Summary
Chapter 8: Prototyping AI Products
Prototyping AI Experiences
Desirability
Usability
Types of Usability Prototypes
Using Personal Examples
Wizard of Oz Studies
Minimum Viable Product
Explainability
Internal Assessment
User Validation
Relevance
Hardware Prototypes
Summary
Chapter 9: Understanding AI Terminology
Key Approaches for Building AI
AI Techniques
Supervised Learning
Unsupervised Learning
Reinforcement Learning
Deep Learning and Neural Networks
Backpropagation
Transfer Learning
Generative Adversarial Networks (GANs)
Knowledge Graphs
AI Metrics
Accuracy
Precision
Recall
Precision vs. Recall Tradeoff
AI Capabilities
Computer Vision (CV)
Natural Language Processing (NLP)
Speech and Audio Processing
Perception, Motion Planning, and Control
Prediction
Ranking
Classification and Categorization
Knowledge Representation
Recommendation
Pattern Recognition
Summary
Chapter 10: Working Effectively with AI Tech Teams
Common Roles in an AI Product Team
Machine Learning Engineer
Machine Learning Researcher
Applied ML Scientist
Software Engineer
Data Engineer
Data Scientist
Product Manager
Product Designer
Effective Collaboration
Easy Things Are Hard
Collaborate; Don’t Dictate
Share Problems, Not Solutions
Motivation
Build User Empathy
Transparency About Product Metrics and User Feedback
Storytelling
Encourage Experimentation
Hypothesis Validation
Gathering Better Functional Requirements
Data Requirements
Feedback Mechanisms
Understand Limitations
Highlight Ethical Implications
Summary
Epilogue
Contact Author
Index