Human-in-the-loop Learning and Control for Robot Teleoperation presents recent research progress on robot teleoperation, covering human–robot interaction and learning and control for teleoperation, with extensions to intelligent learning techniques. The book integrates cutting-edge research on learning and control algorithms for robot teleoperation, neural learning control, wave variable enhancement, EMG-based teleoperation control, and other key aspects of robot technology, presenting implementation tactics, ample application examples, and illustrative interpretations. Robots have been used in various industrial processes to reduce labor costs and improve work efficiency. However, most robots are designed only for repetitive, fixed tasks, leaving a gap between their capabilities and the manufacturing results that human operators desire.
Authors: Chenguang Yang, Jing Luo, Ning Wang
Publisher: Academic Press/Elsevier
Year: 2023
Language: English
Pages: 260
Cover
Preface
Contents
Author biographies
1 Introduction
1.1 Overview of typical teleoperation systems
1.1.1 What is teleoperation in robotics?
1.1.2 Composition of a typical teleoperation system
1.2 Human–robot interaction in teleoperation
1.2.1 Why emphasize human–robot interaction?
1.2.2 Several types of unimodal feedback for better interaction
1.2.3 Multimodal interface for better teleoperation
1.3 Learning and control for teleoperation
1.3.1 Introduction of LfD
1.3.2 Skill learning in teleoperation
1.3.3 Control issues in teleoperation
1.4 Project cases of teleoperation
1.4.1 The Da Vinci surgical robot
1.4.2 The Canadian robotic arm (Canadarm)
1.4.3 The Kontur-2 Project
1.4.4 Robonaut
1.5 Conclusion
References
2 Software systems and platforms for teleoperation
2.1 Teleoperation platforms
2.1.1 Mobile robot
2.1.2 Baxter robot
2.1.3 KUKA LBR iiwa robot
2.1.4 Haptic devices
2.1.5 Sensors
2.2 Software systems
2.2.1 OpenHaptics toolkit
2.2.2 MATLAB® Robotics Toolbox
2.2.3 Robot Operating System
2.2.4 Gazebo
2.2.5 CoppeliaSim
2.3 Conclusion
References
3 Uncertainties compensation-based teleoperation control
3.1 Introduction
3.1.1 Wave variable method
3.1.2 Neural learning control
3.2 System description
3.2.1 Dynamics of teleoperation systems
3.2.2 Position-to-position control in teleoperation systems
3.2.3 Four-channel control in teleoperation systems
3.3 Neural learning control
3.3.1 Principle of RBFNN
3.3.2 Nonlinear dynamic model
3.3.3 Approximating the nonlinear dynamic model with RBFNN
3.4 Wave variable method
3.4.1 Wave variable correction-based method
3.4.2 Multiple channel-based wave variable method
3.5 Experimental case study
3.5.1 Experimental case with RBFNN
3.5.2 Experimental case with the wave variable method
3.6 Conclusion
References
4 User experience-enhanced teleoperation control
4.1 Introduction
4.2 Variable stiffness control and tremor attenuation
4.2.1 Description of the teleoperation robot system
4.2.2 Problem description
4.2.3 Design and analysis for teleoperation
4.3 Hybrid control
4.3.1 Control scheme
4.3.2 Virtual fixture
4.4 Variable stiffness control
4.4.1 Introduction of the integral Lyapunov–Krasovskii functional
4.4.2 Controller design
4.5 A VR-based teleoperation application
4.5.1 The coordinate system of Leap Motion
4.5.2 Coordinate conversion algorithm
4.6 Experimental case study
4.6.1 Experimental results of variable stiffness control and tremor suppression
4.6.2 Experimental results of variable stiffness control and virtual fixture
4.6.3 Experimental results of disturbance observer-enhanced variable stiffness controller
4.7 Conclusion
References
5 Shared control for teleoperation
5.1 Introduction
5.2 Collision avoidance control
5.2.1 Kinematic level
5.2.2 Dynamics level
5.3 EEG-based shared control
5.3.1 Visual fusion
5.3.2 Mind control
5.4 MR-based user interactive path planning
5.4.1 Mixed reality
5.4.2 VFH* algorithm
5.5 No-target obstacle avoidance with sEMG-based control
5.5.1 Obstacle detection and muscle stiffness extraction
5.5.2 No-target Bug algorithm
5.5.3 Motion control
5.6 APF-based hybrid shared control
5.6.1 Motion control
5.6.2 Hybrid shared control
5.7 Experimental case study
5.7.1 Experimental results of the collision avoidance control
5.7.2 Experimental results of the EEG-based shared control
5.7.3 Experimental results of the MR-based user interactive path planning
5.7.4 Experimental results of no-target obstacle avoidance with sEMG-based control
5.7.5 Experimental results of APF-based hybrid shared control
5.8 Conclusion
References
6 Human–robot interaction in teleoperation systems
6.1 Introduction
6.2 AR-based prediction model
6.2.1 Problem formulation
6.2.2 Prediction model of human motion
6.2.3 Virtual force model
6.2.4 Convergence analysis
6.3 EMG-based virtual fixture
6.3.1 Linear flexible guidance virtual fixture
6.3.2 Raw sEMG signal pre-processing
6.4 Experimental case study
6.4.1 Teleoperation experiments based on human motion prediction
6.4.2 Experiment of EMG-based virtual fixture
6.5 Conclusion
References
7 Task learning of teleoperation robot systems
7.1 Introduction
7.2 System description
7.3 Space vector approach
7.4 DTW-based demonstration data processing
7.5 Task learning
7.5.1 GMM and GMR
7.5.2 Extreme learning machine
7.5.3 Locally weighted regression (LWR)
7.5.4 Hidden semi-Markov model
7.6 Experimental case study
7.6.1 Cleaning experiment
7.6.2 Kinect-based teleoperation experiment
7.6.3 Cleaning experiment with LWR
7.6.4 Drawing experiment
7.7 Conclusion
References
Index