This book provides a comprehensive review and in-depth discussion of the state-of-the-art research literature and proposes energy-efficient computation offloading and resource management schemes for mobile edge computing (MEC), covering task offloading, channel allocation, frequency scaling and resource scheduling. Since the task arrival process and channel conditions are stochastic and dynamic, the authors first propose an energy-efficient dynamic computing offloading scheme that minimizes energy consumption while guaranteeing the delay performance of end devices. To further improve energy efficiency by taking tail energy into account, the authors present a computation offloading and frequency scaling scheme that jointly handles stochastic task allocation and CPU-cycle frequency scaling to minimize energy consumption while guaranteeing system stability. They also investigate delay-aware and energy-efficient computation offloading in a dynamic MEC system with multiple edge servers, and introduce an end-to-end deep reinforcement learning (DRL) approach that selects the best edge server for offloading and allocates the optimal computational resources so that the expected long-term utility is maximized. Finally, the authors study multi-task computation offloading in multi-access MEC via non-orthogonal multiple access (NOMA), taking time-varying channel conditions into account, and propose an online DRL-based algorithm to efficiently learn near-optimal offloading solutions.
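To make the local-versus-edge trade-off at the heart of these schemes concrete, the short Python sketch below compares local execution against offloading for a single task, using the standard models common in the MEC literature (dynamic CPU energy proportional to the square of the clock frequency per cycle, and a Shannon-capacity uplink rate). It is an illustration only, not the authors' EEDCO, COFSEE or DRL algorithms, which additionally handle stochastic arrivals, queue stability and multi-server, multi-access settings; all parameter values and helper names here are assumptions.

```python
# Illustrative sketch (not the book's algorithms): a per-task local-vs-edge
# offloading decision that minimizes device energy subject to a delay bound.
# Energy/rate models and all numeric parameters are assumed for illustration.
import math

def local_cost(cycles, f_local, kappa=1e-27):
    """Energy (J) and delay (s) of running `cycles` CPU cycles locally at
    frequency f_local (Hz); dynamic energy ~ kappa * f^2 per cycle."""
    energy = kappa * (f_local ** 2) * cycles
    delay = cycles / f_local
    return energy, delay

def offload_cost(bits, cycles, p_tx, bandwidth, gain, noise, f_edge):
    """Energy (J) and delay (s) of offloading: uplink transmission at a
    Shannon-capacity rate plus remote execution at the edge server."""
    rate = bandwidth * math.log2(1 + p_tx * gain / noise)   # bit/s
    t_tx = bits / rate
    t_exec = cycles / f_edge
    energy = p_tx * t_tx      # the device spends energy only while transmitting
    return energy, t_tx + t_exec

def decide(bits, cycles, deadline, **kw):
    """Pick the feasible option (delay <= deadline) with lower device energy."""
    e_l, d_l = local_cost(cycles, kw['f_local'])
    e_o, d_o = offload_cost(bits, cycles, kw['p_tx'], kw['bandwidth'],
                            kw['gain'], kw['noise'], kw['f_edge'])
    options = [(e, d, name) for e, d, name in
               [(e_l, d_l, 'local'), (e_o, d_o, 'offload')] if d <= deadline]
    return min(options)[2] if options else 'drop'

if __name__ == '__main__':
    print(decide(bits=2e6, cycles=1e9, deadline=0.5,
                 f_local=1e9, f_edge=10e9, p_tx=0.2,
                 bandwidth=10e6, gain=1e-6, noise=1e-9))
```

With these assumed parameters the sketch selects offloading: local execution needs one second at the 1 GHz device frequency and misses the 0.5 s deadline, while transmission plus edge execution finishes in roughly 0.13 s at a fraction of the device energy.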
Researchers working in mobile edge computing, task offloading and resource management, as well as advanced-level students in electrical and computer engineering, telecommunications, computer science or other related disciplines, will find this book useful as a reference. Professionals working within these related fields will also benefit from this book.
Author(s): Ying Chen, Ning Zhang, Yuan Wu, Sherman Shen
Series: Wireless Networks
Publisher: Springer
Year: 2022
Language: English
Pages: 166
City: Cham
Preface
Acknowledgements
Contents
Acronyms
1 Introduction
1.1 Background
1.1.1 Mobile Cloud Computing
1.1.1.1 Architecture of Mobile Cloud Computing
1.1.1.2 Characteristics of Mobile Cloud Computing
1.1.1.3 Cloudlet
1.1.1.4 Fog Computing
1.1.1.5 Data Security and Privacy Protection
1.1.1.6 Challenges of Mobile Cloud Computing
1.1.2 Mobile Edge Computing
1.1.2.1 Definition of Mobile Edge Computing
1.1.2.2 Architecture of Mobile Edge Computing
1.1.2.3 Advantages of Mobile Edge Computing
1.1.2.4 Applications of Mobile Edge Computing
1.1.2.5 Challenges of Mobile Edge Computing
1.1.3 Computation Offloading
1.1.3.1 Minimize Latency
1.1.3.2 Minimize Energy Consumption
1.1.3.3 Weighted Sum of Latency and Energy Consumption
1.2 Challenges
1.3 Contributions
1.4 Book Outline
References
2 Dynamic Computation Offloading for Energy Efficiency in Mobile Edge Computing
2.1 System Model and Problem Statement
2.1.1 Network Model
2.1.2 Task Offloading Model
2.1.3 Task Queuing Model
2.1.4 Energy Consumption Model
2.1.5 Problem Statement
2.2 EEDCO: Energy Efficient Dynamic Computing Offloading for Mobile Edge Computing
2.2.1 Joint Optimization of Energy and Queue
2.2.2 Dynamic Computation Offloading for Mobile Edge Computing
2.2.3 Trade-Off Between Queue Backlog and Energy Efficiency
2.2.4 Convergence and Complexity Analysis
2.3 Performance Evaluation
2.3.1 Impacts of Parameters
2.3.1.1 Effect of Tradeoff Parameter
2.3.1.2 Effect of Arrival Rate
2.3.1.3 Effect of Transmit Power
2.3.1.4 Effect of Channel Power Gain
2.3.1.5 Effect of Number of IoT Devices
2.3.2 Performance Comparison with EA and QW Schemes
2.4 Literature Review
2.5 Summary
References
3 Energy Efficient Offloading and Frequency Scaling for Internet of Things Devices
3.1 System Model and Problem Formulation
3.1.1 Network Model
3.1.2 Task Model
3.1.3 Queuing Model
3.1.4 Energy Consumption Model
3.1.5 Problem Formulation
3.2 COFSEE: Computation Offloading and Frequency Scaling for Energy Efficiency of Internet of Things Devices
3.2.1 Problem Transformation
3.2.2 Optimal Frequency Scaling
3.2.3 Local Computation Allocation
3.2.4 MEC Computation Allocation
3.2.5 Theoretical Analysis
3.3 Performance Evaluation
3.3.1 Impacts of System Parameters
3.3.1.1 Effect of Tradeoff Parameter V
3.3.1.2 Effect of Arrival Rate
3.3.1.3 Effect of Slot Length
3.3.2 Performance Comparison with RLE, RME and TS Schemes
3.4 Literature Review
3.5 Summary
References
4 Deep Reinforcement Learning for Delay-Aware and Energy-Efficient Computation Offloading
4.1 System Model and Problem Formulation
4.1.1 System Model
4.1.2 Problem Formulation
4.2 Proposed DRL Method
4.2.1 Data Preprocessing
4.2.2 DRL Model
4.2.2.1 Reinforcement Learning Framework
4.2.2.2 Deep Reinforcement Learning Model
4.2.3 Training
4.2.3.1 Initialization
4.2.3.2 Exploration and Data Acquisition
4.2.3.3 Replay Experience Buffer
4.2.3.4 Learning
4.2.3.5 Reward Clipping
4.3 Performance Evaluation
4.4 Literature Review
4.5 Summary
References
5 Energy-Efficient Multi-Task Multi-Access Computation Offloading via NOMA
5.1 System Model and Problem Formulation
5.1.1 Motivation
5.1.2 System Model
5.1.3 Problem Formulation
5.2 LEEMMO: Layered Energy-Efficient Multi-Task Multi-Access Algorithm
5.2.1 Layered Decomposition of Joint Optimization Problem
5.2.2 Proposed Subroutine for Solving Problem (TEM-E-Sub)
5.2.3 A Layered Algorithm for Solving Problem (TEM-E-Top)
5.2.4 DRL-Based Online Algorithm
5.3 Performance Evaluation
5.3.1 Impacts of Parameters
5.3.2 Performance Comparison with FDMA Based Offloading Schemes
5.4 Literature Review
5.5 Summary
References
6 Conclusion
6.1 Concluding Remarks
6.2 Future Directions
References