This book illustrates basic principles, along with the development of advanced algorithms, for realizing smart robotic systems. It describes strategies by which a robot (manipulator, mobile robot, or quadrotor) can learn its own kinematics and dynamics from data. In this context, two major issues are addressed: stability of the systems and experimental validation. The learning algorithms and techniques covered in this book extend readily to other robotic systems. The book contains MATLAB-based examples and C code under the Robot Operating System (ROS) for experimental validation, so that readers can replicate these algorithms on their own robotic platforms.
Authors: Laxmidhar Behera, Swagat Kumar, Prem Kumar Patchaikani, Ranjith Ravindranathan Nair, Samrat Dutta
Publisher: CRC Press
Year: 2020
Language: English
Pages: xxiv+650
Cover
Half Title
Title Page
Copyright Page
Contents
Preface
Acknowledgment
Authors
1. Introduction
1.1 Vision-Based Control
1.2 Kinematic Control of a Redundant Manipulator
1.2.1 Redundancy Resolution using Null Space of the Pseudo-inverse
1.2.2 Extended Jacobian Method
1.2.3 Optimization Based Redundancy Resolution
1.2.4 Redundancy Resolution with Global Optimization
1.2.5 Neural Network Based Methods
1.3 Visual Servoing
1.3.1 Image Based Visual Servoing (IBVS)
1.3.2 Position Based Visual Servoing (PBVS)
1.3.3 2-1/2-D Visual Servoing
1.4 Visual Control of a Redundant Manipulator: Research Issues
1.5 Learning by Demonstration
1.5.1 DS-Based Motion Learning
1.6 Stability of Nonlinear Systems
1.7 Optimization Techniques
1.7.1 Genetic Algorithm
1.7.2 Expectation Maximization for Gaussian Mixture Model
1.8 Composition of the Book
Part I: Manipulators
2. Kinematic and Dynamic Models of Robot Manipulators
2.1 PowerCube Manipulator
2.2 Kinematic Configuration of the Manipulator
2.3 Estimating the Vision Space Motion with Camera Model
2.3.1 Transformation from Cartesian Space to Vision Space
2.3.2 The Camera Model
2.3.3 Computation of Image Feature Velocity in the Vision Space
2.4 Learning-Based Controller Architecture
2.5 Universal Robot (UR 10)
2.5.1 Mechatronic Design
2.5.1.1 Platform
2.5.1.2 End-Effector
2.5.1.3 Perception Apparatus
2.5.2 Kinematic Model
2.6 Barrett WAM Manipulator
2.6.1 Overview of the System
2.6.2 Experimental Setup
2.6.3 Dynamic Modeling
2.6.4 System Description and Modeling
2.6.5 State Space Representation
2.7 Summary
3. Hand-eye Coordination of a Robotic Arm using KSOM Network
3.1 Kohonen Self Organizing Map
3.1.1 Competitive Process
3.1.2 Cooperative Process
3.1.3 Adaptive Process
3.2 System Identification using KSOM
3.3 Introduction to Learning-Based Inverse Kinematic Control
3.3.1 The Network
3.3.2 The Learning Problem
3.3.3 The Approach
3.3.4 The Formulation of Cost Function
3.3.5 Weight Update Laws
3.4 Visual Motor Control of a Redundant Manipulator using KSOM Network
3.4.1 The Problem
3.5 KSOM with Sub-Clustering in Joint Angle Space
3.5.1 Network Architecture
3.5.2 Training Algorithm
3.5.3 Testing Phase
3.5.4 Redundancy Resolution
3.5.5 Tracking a Continuous Trajectory
3.6 Simulation and Results
3.6.1 Network Architecture and Workspace Dimensions
3.6.2 Training
3.6.3 Testing
3.6.3.1 Reaching Isolated Target Positions in the Workspace
3.6.3.2 Tracking a Straight Line Trajectory
3.6.3.3 Tracking an Elliptical Trajectory
3.6.4 Real-Time Experiment
3.6.4.1 Redundant Solutions
3.6.4.2 Tracking a Circular and a Straight Line Trajectory
3.6.4.3 Multi-Step Movement
3.7 Summary
4. Model-based Visual Servoing of a 7 DOF Manipulator
4.1 Introduction
4.2 Kinematic Control of a Manipulator
4.2.1 Kinematic Control of Redundant Manipulator
4.3 Visual Servoing
4.3.1 Estimating the Vision Space Motion with Camera Model
4.3.2 Transformation from Cartesian Space to Vision Space
4.3.3 The Camera Model
4.3.4 Computation of Image Feature Velocity in the Vision Space
4.4 Kinematic Control of a Manipulator Directly from Vision Space
4.5 Image Moments
4.6 Image Moment Velocity
4.7 A Pinhole Camera Projection
4.8 Image Moment Interaction Matrix
4.9 Experimental Results using a 7 DOF Manipulator
4.10 Summary
5. Learning-Based Visual Servoing
5.1 Introduction
5.2 Kinematic Control using KSOM
5.2.1 KSOM Architecture
5.2.2 KSOM: Weight Update
5.2.3 Comments on Existing KSOM Based Kinematic Control Schemes
5.3 Problem Definition
5.4 Analysis of Solution Learned Using KSOM
5.4.1 KSOM: An Estimate of Inverse Jacobian
5.4.2 Empirical Verification
5.4.2.1 Inverse Jacobian Evolution in Learning Phase
5.4.2.2 Testing Phase: Inverse Jacobian Estimation at each Operating Zone
5.4.2.3 Inference
5.5 KSOM in Closed Loop Visual Servoing
5.5.1 Stability Analysis
5.6 Redundancy Resolution
5.7 Results
5.7.1 Learning Inverse Kinematic Relationship using KSOM
5.7.2 Visual Servoing
5.7.3 Redundancy Resolution
5.7.3.1 Tracking a Straight Line
5.7.3.2 Tracking an Elliptical Trajectory
5.8 Summary
5.9 Reinforcement Learning-Based Optimal Redundancy Resolution Directly from the Vision Space
5.10 Introduction
5.11 Redundancy Resolution Problem from the Vision Space
5.12 SNAC Based Optimal Redundancy Resolution from Vision Space
5.12.1 Selection of Cost Function
5.12.2 Control Challenges
5.13 T-S Fuzzy Model-Based Critic Neural Network for Redundancy Resolution from Vision Space
5.13.1 Fuzzy Critic Model
5.13.2 Weight Update Law
5.13.3 Selection of Fuzzy Zones
5.13.4 Initialization of the Fuzzy Network Control
5.13.4.1 Remark
5.14 KSOM Based Critic Network for Redundancy Resolution from Vision Space
5.14.1 KSOM Critic Model
5.14.2 KSOM: Weight Update
5.14.3 Initialization of KSOM Network Control
5.15 Simulation Results
5.15.1 T-S Fuzzy Model
5.15.2 Kohonen’s Self-organizing Map
5.16 Real-Time Experiment
5.16.1 Tracking an Elliptical Trajectory
5.16.1.1 T-S Fuzzy Model
5.16.1.2 KSOM
5.16.2 Grasping a Ball with Hand-manipulator Setup
5.17 Summary
6. Visual Servoing using an Adaptive Distributed Takagi-Sugeno (T-S) Fuzzy Model
6.1 T-S Fuzzy Model
6.2 Adaptive Distributed T-S Fuzzy PD Controller
6.2.1 Offline Learning Algorithm
6.2.2 Online Adaptation Algorithm
6.2.3 Stability Analysis
6.3 Experimental Results
6.3.1 Visual Servoing for a Static Target
6.3.2 Compensation of Model Uncertainties
6.3.3 Visual Servoing for a Moving Target
6.4 Computational Complexity
6.5 Summary
7. Kinematic Control using Single Network Adaptive Critic
7.1 Introduction
7.1.1 Discrete-Time Optimal Control Problem
7.1.2 Adaptive Critic Based Control
7.1.2.1 Training of Action and Critic Network
7.1.3 Discrete-Time Single Network Adaptive Critic (DT-SNAC)
7.1.4 Choice of Critic Network Model
7.1.4.1 Costate Vector Modeling with MLN Critic Network
7.1.4.2 Costate Vector Modeling with T-S Fuzzy Model-Based Critic Network
7.2 Adaptive Critic Based Optimal Controller Design for Continuous-time Systems
7.2.1 Continuous-time Single Network Adaptive Critic (CT-SNAC)
7.2.2 Critic Network: Weight Update Law
7.2.3 Choice of Critic Network
7.2.3.1 Critic Network using MLN
7.2.3.2 T-S Fuzzy Model-Based Critic Network with Cluster of Local Quadratic Cost Functions
7.2.4 CT-SNAC
7.3 Discrete-Time Input Affine System Representation of Forward Kinematics
7.4 Modeling the Primary and Additional Tasks as an Integral Cost Function
7.4.1 Quadratic Cost Minimization (Global Minimum Norm Motion)
7.4.2 Joint Limit Avoidance
7.5 Single Network Adaptive Critic Based Optimal Redundancy Resolution
7.5.1 T-S Fuzzy Model-Based Critic Network for Closed Loop Positioning Task
7.5.2 Training Algorithm
7.6 Computational Complexity
7.7 Simulation Results
7.7.1 Global Minimum Norm Motion
7.7.2 Joint Limit Avoidance
7.8 Experimental Results
7.8.1 Global Minimum Norm Motion
7.8.2 Joint Limit Avoidance
7.9 Conclusion
8. Dynamic Control using Single Network Adaptive Critic
8.1 Introduction
8.2 Optimal Control Problem of Continuous Time Nonlinear System
8.2.1 Linear Quadratic Regulator
8.2.2 Hamilton-Jacobi-Bellman Equation
8.2.3 Optimal Control Law for Input Affine System
8.2.4 Adaptive Critic Concept
8.3 Policy Iteration and SNAC for Unknown Continuous Time Nonlinear Systems
8.3.1 Policy Iteration Scheme
8.3.2 Optimal Control Problem of an Unknown Dynamical System
8.3.3 Model Representation and Learning Scheme
8.3.3.1 TSK Fuzzy Representation of Nonlinear Dynamics
8.3.3.2 Learning Scheme for the TSK Fuzzy Model
8.3.4 Critic Design and Policy Update
8.3.4.1 Construction of Initial Critic Network using Lyapunov Based LMI
8.3.4.2 Lyapunov Function
8.3.4.3 Conditions for Stabilization
8.3.4.4 Design of Fitness Function
8.3.5 Learning Near-Optimal Controller
8.3.5.1 Update of Critic Network
8.3.5.2 Fitness Function for PI Based Training
8.3.6 Examples
8.3.6.1 Simulated Model
8.3.6.2 Example using Real Robot
8.4 Summary
9. Imitation Learning
9.1 Introduction
9.2 Dynamic Movement Primitives
9.2.1 Mathematical Formulations
9.2.1.1 Choice of Mean and Variance
9.2.1.2 Spatial and Temporal Scaling
9.2.2 Example
9.3 Motion Encoding using Gaussian Mixture Regression
9.3.1 SEDS: Stable Estimator of Dynamical Systems
9.3.1.1 Learning Model Parameters
9.3.1.2 Log-likelihood Cost
9.4 FuzzStaMP: Fuzzy Controller Regulated Stable Movement Primitives
9.4.1 Motion Modeling with C-FuzzStaMP
9.4.1.1 Fuzzy Lyapunov Function
9.4.1.2 Learning Fuzzy Controller Gains
9.4.1.3 Design of Fitness Function
9.4.1.4 Example
9.4.2 Motion Modeling with R-FuzzStaMP
9.4.2.1 Stability Analysis of the Motion System
9.4.2.2 Design of the Fuzzy Controller
9.4.3 Global Validity and Spatial Scaling
9.4.3.1 Examples
9.5 Learning Skills from Heterogeneous Demonstrations
9.5.1 Stability Analysis
9.5.1.1 Asymptotic Stability in the Demonstrated Region
9.5.1.2 Ensuring Asymptotic Stability outside Demonstrated Region
9.5.2 Learning Model Parameters from Demonstrations
9.5.2.1 Motion Modeling using GMR
9.5.2.2 Motion Modeling using LWPR
9.5.2.3 Motion Modeling using ε-SVR
9.5.2.4 Complete Pipeline
9.5.3 Spatial Error Calculation
9.5.4 Examples
9.5.4.1 Example of Monotonic and Non-monotonic State Energy
9.5.4.2 Example of Multitasking with Single and Multiple Task-equilibrium
9.5.5 Summary
10. Visual Perception
10.1 Introduction
10.2 Deep Neural Networks and Artificial Neural Networks
10.2.1 Neural Networks
10.2.1.1 Multi-layer Perceptron
10.2.1.2 MLP Implementation using TensorFlow
10.2.2 Deep Learning Techniques: An Overview
10.2.2.1 Convolutional Neural Network (Flow and Training with Back-propagation)
10.2.3 Different Architectures of Convolutional Neural Networks (CNNs)
10.3 Examples of Vision-Based Object Detection Techniques
10.3.1 Automatic Annotation of Object ROI
10.3.1.1 Image Acquisition
10.3.1.2 Manual Annotation
10.3.1.3 Augmentation and Clutter Generation
10.3.1.4 Two-class Classification Model using Deep Networks
10.3.1.5 Experimental Results and Discussions
10.3.2 Automatic Segmentation of Objects for Warehouse Automation
10.3.2.1 Network Architecture
10.3.2.2 Base Network
10.3.2.3 Single Shot Detection
10.3.3 Automatic Generation of Artificial Clutter
10.3.4 Multi-Class Segmentation using Proposed Network
10.4 Experimental Results
10.4.1 System Description
10.4.1.1 Server
10.4.2 Ground Truth Generation
10.4.3 Image Segmentation
10.5 Summary
11. Vision-Based Grasping
11.1 Introduction
11.2 Model-Based Grasping
11.2.1 Problem Statement
11.2.2 Hardware Setup
11.2.3 Dataset
11.2.4 Data Augmentation
11.2.5 Network Architecture and Training
11.2.6 Axis Assignment
11.2.7 Grasp Decide Index (GDI)
11.2.8 Final Pose Selection
11.2.9 Overall Pipeline and Result
11.3 Grasping without Object Models
11.3.1 Problem Definition
11.3.2 Proposed Method
11.3.2.1 Creating Continuous Surfaces in 3D Point Cloud
11.3.3 Finding Graspable Affordances
11.3.4 Experimental Results
11.3.4.1 Performance Measure
11.3.5 Grasping of Individual Objects
11.3.6 Grasping Objects in Clutter
11.3.7 Computation Time
11.4 Summary
12. Warehouse Automation: An Example
12.1 Introduction
12.2 Problem Definition
12.3 System Architecture
12.4 The Methods
12.4.1 System Calibration
12.4.2 Rack Detection
12.4.3 Object Recognition
12.4.4 Grasping
12.4.5 Motion Planning
12.4.6 End-Effector Design
12.4.6.1 Suction-based End-effector
12.4.6.2 Combining Gripping with Suction
12.4.7 Robot Manipulator Model
12.4.7.1 Null Space Optimization
12.4.7.2 Inverse Kinematics as a Control Problem
12.4.7.3 Damped Least Squares Method
12.5 Experimental Results
12.5.1 Response Time
12.5.2 Grasping and Suction
12.5.3 Object Recognition
12.5.4 Direction for Future Research
12.6 Summary
Part II: Mobile Robotics
13. Introduction to Mobile Robotics and Control
13.1 Introduction
13.2 System Model: Nonholonomic Mobile Robots
13.3 Robot Attitude
13.3.1 Rotation about Roll Axis
13.3.2 Rotation about Pitch Axis
13.3.3 Rotation about Yaw Axis
13.4 Composite Rotation
13.5 Coordinate System
13.5.1 Earth-Centered Earth-Fixed (ECEF) Coordinate System
13.6 Control Approaches
13.6.1 Feedback Linearization
13.6.2 Backstepping
13.6.3 Sliding Mode Control
13.6.4 Conventional SMC
13.6.5 Terminal SMC
13.6.6 Nonsingular TSMC (NTSMC)
13.6.7 Fast Nonsingular TSMC (FNTSMC)
13.6.8 Fractional Order SMC (FOSMC)
13.6.9 Higher Order SMC (HOSMC)
13.7 Summary
14. Multi-robot Formation
14.1 Introduction
14.2 Path Planning Schemes
14.3 Multi-Agent Formation Control
14.3.1 Fast Adaptive Gain NTSMC
14.3.2 Fast Adaptive Fuzzy NTSMC (FAFNTSMC)
14.3.3 Fault Detection, Isolation and Collision Avoidance Scheme
14.4 Experiments
14.5 Summary
15. Event Triggered Multi-Robot Consensus
15.1 Introduction to Event Triggered Control
15.2 Event Triggered Consensus
15.2.1 Preliminaries
15.2.2 Sliding Mode-Based Finite Time Consensus
15.3 Event Triggered Sliding Mode-based Consensus Algorithm
15.3.1 Consensus-based Tracking Control of Nonholonomic Multi-robot Systems
15.4 Experiments
15.5 Summary
16. Vision-Based Tracking for a Human Following Mobile Robot
16.1 Visual Tracking: Introduction
16.1.1 Difficulties in Visual Tracking
16.1.2 Required Features of Visual Tracking
16.1.3 Feature Descriptors for Visual Tracking
16.2 Human Tracking Algorithm using SURF Based Dynamic Object Model
16.2.1 Problem Definition
16.2.2 Object Model Description
16.2.2.1 Maintaining a Template Pool of Descriptors
16.2.3 The Tracking Algorithm
16.2.3.1 Step 1: Target Initialization
16.2.3.2 Step 2: Object Recognition and Template Pool Update
16.2.3.3 Step 3: Occlusion Detection, Target Window Prediction
16.2.4 SURF-Based Mean-Shift Algorithm
16.2.5 Modified Object Model Description
16.2.6 Modified Tracking Algorithm
16.3 Human Tracking Algorithm with the Detection of Pose Change due to Out-of-plane Rotations
16.3.1 Problem Definition
16.3.2 Tracking Algorithm
16.3.3 Template Initialization
16.3.4 Tracking
16.3.4.1 Scaling and Re-positioning the Tracking Window
16.3.5 Template Update Module
16.3.6 Error Recovery Module
16.3.6.1 KD-tree Classifier
16.3.6.2 Construction of KD-Tree
16.3.6.3 Dealing with Pose Change
16.3.6.4 Tracker Recovery from Full Occlusions
16.4 Human Tracking Algorithm Based on Optical Flow
16.4.1 The Template Pool and its Online Update
16.4.1.1 Selection of New Templates
16.4.2 Re-Initialization of Optical Flow Tracker
16.4.3 Detection of Partial and Full Occlusion
16.5 Visual Servo Controller
16.5.1 Kinematic Model of the Mobile Robot
16.5.2 Pinhole Camera Model
16.5.3 Problem Formulation
16.5.4 Visual Servo Control Design
16.5.5 Simulation Results
16.5.5.1 Example: Tracking an Object which Moves in a Circular Trajectory
16.6 Experimental Results
16.6.1 Experimental Results for the Human Tracking Algorithm Based on SURF-based Dynamic Object Model
16.6.2 Tracking Results
16.6.3 Human Following Robot
16.6.4 Discussion on Performance Comparison
16.6.5 Experimental Evaluation of Human Tracking Algorithm Based on Optical Flow
16.7 Summary
Exercises
Bibliography
Index