Intelligent Autonomous Drones with Cognitive Deep Learning: Build AI-Enabled Land Drones with the Raspberry Pi 4

What is an artificial intelligence (AI)-enabled drone and what can it do? Are AI-enabled drones better than human-controlled drones? This book will answer these questions and more, and empower you to develop your own AI-enabled drone.
You'll progress from a list of specifications and requirements, in small and iterative steps, to the development of Unified Modeling Language (UML) diagrams based in part on the standards established for the Robotic Operating System (ROS). The ROS architecture, which has been used to develop land-based drones, will serve as a reference model for the software architecture of unmanned systems.
Using this approach, you'll be able to develop a fully autonomous drone that incorporates object-oriented design and cognitive deep learning systems that adapt to multiple simulation environments. These simulation environments will also allow you to build public trust in the safety of artificial intelligence within drones and small UAS. Ultimately, you'll be able to build a complex system using the standards developed, and create other intelligent systems of similar complexity and capability.
Intelligent Autonomous Drones with Cognitive Deep Learning uniquely addresses both deep learning and cognitive deep learning for developing near autonomous drones.
What You’ll Learn
  • Examine the necessary specifications and requirements for near-real-time, near-fully-autonomous AI-enabled drones
  • Look at software and hardware requirements
  • Understand Unified Modeling Language (UML) and real-time UML for design
  • Study deep learning neural networks for pattern recognition
  • Review geospatial information for the development of detailed mission planning within hostile environments

Who This Book Is For
Primarily engineers, computer science graduate students, and skilled hobbyists. Readers should be willing to learn about and extend the topic of intelligent autonomous drones, and to explore exciting engineering projects that are limited only by their imagination. As far as technical requirements are concerned, they must have an intermediate understanding of object-oriented programming and design.

Author(s): David Allen Blubaugh, Steven D. Harbour, Benjamin Sears, Michael J. Findler
Edition: 1
Publisher: Apress
Year: 2022

Language: English
Pages: 527

Table of Contents
About the Authors
Chapter 1: Rover Platform Overview
Chapter Objectives
Defining Specifications and Requirements
Cognitive Deep Learning Subsystem (Move)
Basic System Components
System Rationale (Optional)
System Interfaces
User Interfaces
Hardware Interfaces
Software Programming Requirements
Communication Interfaces
Memory Constraints
Design Constraints (Optional)
Operations
Site Adaptation Requirements
Product Functions
User Characteristics
Constraints, Assumptions, and Dependencies
Other Requirements (Optional)
External Interface Requirements
Functional Requirements
Performance Requirements
Logical Database Requirement
Software System Attributes (Optional)
Reliability
Availability
Security
Maintainability
Portability
Architecture (Optional)
Functional Partitioning
Functional Description
Control Description
AI Rover Statistical Analysis (Move)
Selecting a Chassis
Robotic Operating System
Pixhawk 4 Autopilot
AI Rover Mission Analysis
ArduPilot Mission Planner Software
AI Rover Power Analysis
AI Rover Object-Oriented Programming
List of Components
List of Raspberry Pi Rover Kits
Acronyms
Chapter 2: AI Rover System Design and Analysis
Chapter Objectives
Placing the Problem in Context
Developing the First Static UML Diagrams for AI Rover
Developing the First Dynamic UML Diagrams for AI Rover
Developing the First Dynamic UML Class Diagrams
Developing the First Dynamic UML Sequence Diagrams
Summary
Chapter 3: Installing Linux and Development Tools
Before We Begin
Installing the VirtualBox Software
Creating a New VirtualBox Virtual Machine
Installing Linux Ubuntu 20.04.4 in VirtualBox
Updating Ubuntu Linux 20.04.4
Configuring Ubuntu Repositories
Installing Anaconda
ROS Source List
ROS Environment Variable Key
Installing the Robotic Operating System (ROS)
Installing ROSINSTALL
Starting ROS for the Very First Time
Adding the ROS Path
Creating a ROS Catkin Workspace
Final Checks for Noetic ROS
Noetic ROS Architecture
Simple “Hello World” ROS Test
ROS RQT Graph
ROS Gazebo
Summary
Chapter 4: Building a Simple Virtual Rover
Objectives
ROS, RViz, and Gazebo
Essential ROS Commands
Robot Visualization (RViz)
Catkin Workspace Revisited
The Relationship Between URDF and SDF
Building the Chassis
Using the ROSLAUNCH Command
Creating Wheels and Drives
Creating AI Rover’s Caster
Adding Color to the AI Rover (Optional)
Collision Properties
Testing the AI Rover’s Wheels
Physical Properties
Gazebo Introduction
Background Information on Gazebo
Starting Gazebo
Gazebo Environment Toolbar
The Invisible Joints Panel
The Gazebo Main Control Toolbar
URDF Transformation to SDF Gazebo
Checking the URDF Transformation to SDF Gazebo
First Controlled AI Rover Simulation in Gazebo
First Deep Learning Possibility
Moving the AI Rover with Joints Panel
Summary
Chapter 5: Adding Sensors to Our Simulation
Objectives
XML Macro Programming Language
More Examples of XML
The Rover Revisited
Modular Designed Rover
dimensions.xacro
chassisInertia.xacro
wheels.xacro
casterInertia.xacro
laserDimensions.xacro
cameraDimensions.xacro
IMUDimensions.xacro
Gazebo Plug-ins
Plug-in Types
Differential-Drive Controller (DDC) Plug-in
Laser Plug-in
Camera Plug-in
IMU Plug-in
Visuals Plug-in
Putting It All Together
ai_rover_remastered_plugins.xacro
ai_rover_remastered.xacro
RViz Launch File
Gazebo Launch File
Troubleshooting Xacro and Gazebo
Teleop Node for Rover Control
Transform (TF) Graph Visualization
Troubleshooting RViz Window Errors
Controlling the Rover
Drifting Issues with the Rover
Our First Python Controller
Building Our Environment
Summary
Chapter 6: Sense and Avoidance
Objectives
Understanding Coordinate Systems
Modeling the AI Rover World
Organizing the Project
Modeling the Catacombs (Simplified)
Laser Range-Finding Filter Settings
Laser Range-Finding Data
Obstacle Sense-and-Avoidance
Source Code Analysis
Interpreting the LiDAR Sensor Data
Sensing and Avoiding Obstacles
Executing the Avoidance Code
Summary
Chapter 7: Navigation, SLAM, and Goals
Objectives
Overview
Mission Types
Odometry
Rover’s Local Navigation
Rover’s Global Navigation
Getting the Rover Heading (Orientation)
Executing the rotateRobotOdom.py
Control Theory
Autonomous Navigation
Simultaneous Localization and Mapping (SLAM)
Installing SLAM and Associated Libraries
Setting Up SLAM
Setting Up the Noetic ROS Environment
Initializing the Project Workspace
Navigational Goals and Tasks
Importance of Maps
SLAM gMapping Introduction
Launching Our Rover
Creating ai_rover_world.launch
The slam_gmapping Launch File
Preparing slam_gmapping Package
Modify gmapping_demo.launch File
RViz gMapping
Final Launch Terminal Commands
RViz Mapping Configurations
Checking LaserScan Configurations
Checking Mapping Configurations
Saving RViz Configurations
Additional Noetic SLAM Information
The map_server ROS Node
Map Image Savings or Modifications
rover_map.pgm Map Image File Data
rover_map.yaml Map File Metadata
ROS Bags
Importance of ROS Bags
Localization (Finding Lost Rovers)
Adaptive Monte Carlo Localization (AMCL)
Configuring the AMCL ROS Node
Importance of Localization and AMCL
Visualizing the AMCL in RViz
Moving the Rover’s Pose with RViz
Programming Goal Poses for Rover
Noetic ROS Navigation Stack
Configuring the Navigation Stack
Summary
Chapter 8: OpenCV and Perception
Objectives
Overview
Introduction to Computer Vision
Solid-State Physics
Neurobiology
Robotic Navigation
What Is Computer Vision?
OpenCV
Images
Filters
Color Filters and Grayscale
Edge Detectors
NumPy, SciPy, OpenCV, and CV_Bridge
Testing the OpenCV CV_Bridge
CV_Bridge: The Link Between OpenCV and ROS
Acquiring Test Images
Edge Detection and LiDAR (Why)
Implementation (How)
Launching Python Files
Step 1: Data Pipeline
Step 2: Data Pipeline
Step 3: Data Pipeline
Building and Running the ROS Data Pipeline Application
Running the App in Three Different Terminals
Starting Your Data Pipeline with a ROS Launch File
Summary
Chapter 9: Reinforced Learning
REL Primer
Simulators for Emotion Recognition
Reinforcement Deep Learning
Computer Vision System
Flight-Path Analysis
Pilot’s Gesture Assignment
Reinforcement Learning Agent: Learning from Pilot’s Actions
Flight Simulator Game Framework
Summary
Policies and Value Functions
References
Chapter 10: Subsumption Cognitive Architecture
Cognitive Architectures for Autonomy
Subsumption Structure
Layers and Augmented Finite-State Machines
Examples Using a Cognitive Assumption Architecture
Controlling the Robotic Car
Controller Class and Object
Controller Category
Controller Type
Creating a Behavior-Based Robot
Other Cognitive Architectures
Reactive Cognitive Architecture
Canonical Operational Architecture
System and Technical Architectures
The Human Model
A Hierarchical Paradigm for Operational Architectures
Deliberative Architectures
Reactive Architectures
Hybrid Architectures
Summary
Task
References
Chapter 11: Geospatial Guidance for AI Rover
The Need for Geospatial Guidance
Why Does the AI Rover Need to Know Where It Is?
How Does GIS Help Our Land-based Rover?
Which GIS Software Package Do We Use, and Can It Be Used with an ROS-based Rover?
Can GIS Be Embedded Within Our AI-Enabled Rover?
Summary
Chapter 12: Noetic ROS Further Examined and Explained
Objectives
ROS Philosophy
ROS Fundamentals
Noetic ROS Catkin
Noetic ROS Workspace
Noetic ROS Packages
Noetic ROS rosrun
Building the Rover’s Brains
ROS1 Versus ROS2
Rover ROS1 or ROS2?
ROS1 Nodelets or ROS2 Components
ROS1 and ROS2 LaunchFiles
ROS1 and ROS2 Communications
ROS1 and ROS2 Services
ROS1 and ROS2 Actions
ROS1 and ROS2 Packages
ROS1 and ROS2 Command-Line Tools
ROS1 and ROS2 OS Support
ROS1 (ros1_bridge) Link with ROS2
ROS1, Ubuntu, Raspbian, and the Raspberry Pi 4
ROS2, Ubuntu, and Raspberry Pi 4
ROS1, ROS2, Raspberry Pi 4, and Rover
Summary
Chapter 13: Further Considerations
Designing Your First Mission
Manual Control
Simple Corridor on Flat Terrain
Complex-shaped Corridor with Uneven Terrain
Complex Open Corridor with Uneven Terrain and Obstacles
Additional Testing as Required
What to Do if the AI Rover Crashes
Mission Ideas
Zombie Hunter
Home Delivery
Home Security
Other Missions
Like It or Not, We Now Live in the Age of Skynet
Future Battlefields and Skies Will Have Unmanned Systems
Necessary Countermeasures
Final Considerations for More Advanced AI-Enabled Drones
Summary
References
Appendix A: Bayesian Deep Learning
Bayesian Networks at a Glance
What Are You? Two Camps . . .
Bayesian Decision Theory
Bayes Theorem
BBN Conditional Property
Mathematical Definition of Belief Networks
Summary
References
Appendix B: OpenAI Gym
Getting Started with OpenAI Gym
Installation
Environments
It Ought to Look Like This
Observations
Spaces
Available Environments
Background
Drone Gym Environment
Install OpenAI Gym
Dependencies
The Environment
In Real Life
gym-pybullet-drones
Why Reinforcement Learning of Quadrotor Control?
Overview
Performance
Requirements and Installation
On macOS and Ubuntu
On Windows
Examples
Experiments
Class BaseAviary
Creating New Aviaries
Action Spaces Examples
Observation Spaces Examples
Obstacles
Drag, Ground Effect, and Downwash Models
PID Control
Logger
ROS2 Python Wrapper
Desiderata/WIP
Citation
References
Appendix C: Introduction to the Future of AI & ML Research
Third Wave of AI
Index