Dynamic Network Representation Based on Latent Factorization of Tensors


Dynamic networks are frequently encountered in real industrial applications such as the Internet of Things. A dynamic network is composed of numerous nodes and large-scale, real-time interactions among them, where each node denotes a specified entity, each directed link denotes a real-time interaction, and the strength of an interaction is quantified as the weight of a link. As the number of involved nodes grows drastically, it becomes impossible to observe their full interactions at each time slot, making the resultant dynamic network High-Dimensional and Incomplete (HDI). Despite its HDI nature, such a network with directed and weighted links contains rich knowledge regarding the involved nodes' various behavior patterns. It is therefore essential to study how to build efficient and effective representation learning models for acquiring this knowledge.
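To make the HDI notion concrete, the following minimal sketch (our illustration, not code from the book; the class name, node counts, and weights are invented) stores a dynamic network as a sparse third-order tensor keyed by (source node, target node, time slot), so that only observed interactions consume memory:

    # Sparse third-order tensor for an HDI dynamic network. Each observed
    # entry (source, target, time_slot) holds a directed link's weight;
    # unobserved entries are simply absent.
    class SparseHDITensor:
        def __init__(self):
            self.entries = {}  # (source, target, time_slot) -> link weight

        def observe(self, i, j, k, weight):
            self.entries[(i, j, k)] = weight

        def density(self, num_nodes, num_slots):
            # Fraction of observed entries; tiny for an HDI network.
            return len(self.entries) / (num_nodes * num_nodes * num_slots)

    # Example: 3 observed interactions among 1,000 nodes over 100 time slots.
    net = SparseHDITensor()
    net.observe(0, 42, 5, 0.8)   # node 0 -> node 42 at slot 5, weight 0.8
    net.observe(42, 7, 5, 1.5)
    net.observe(7, 0, 6, 0.3)
    print(f"density = {net.density(1000, 100):.2e}")  # ~3e-08: high-dimensional and incomplete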

In this book, we first model a dynamic network as an HDI tensor and present the basic latent factorization of tensors (LFT) model; a minimal sketch of its training loop follows this paragraph. We then propose four representative LFT-based network representation methods. The first integrates short-term bias, long-term bias, and preprocessing bias to precisely represent the volatility of network data. The second utilizes a proportional-integral-derivative (PID) controller to construct an adjusted instance error, thereby achieving a higher convergence rate. The third accounts for the non-negativity of fluctuating network data by constraining the latent features to be non-negative and incorporating extended linear biases. The fourth adopts an alternating direction method of multipliers (ADMM) framework to build a learning model that represents dynamic networks with high accuracy and efficiency.
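The sketch below illustrates only the basic LFT idea on which the four methods build: approximate each observed tensor entry by the inner product of three rank-R latent-feature vectors, trained by stochastic gradient descent (SGD) on observed entries only. Function names, hyper-parameter values, and the toy data are ours; the book's methods extend this core with biases, PID-adjusted errors, non-negativity constraints, or ADMM-based learning.

    # Basic LFT via SGD on observed entries of a sparse third-order tensor
    # (a sketch under our own naming and hyper-parameter assumptions).
    import numpy as np

    def lft_sgd(observed, num_i, num_j, num_k, rank=4, lr=0.01, lam=0.05, epochs=100):
        rng = np.random.default_rng(0)
        S = rng.uniform(0, 0.1, (num_i, rank))  # latent features of source nodes
        D = rng.uniform(0, 0.1, (num_j, rank))  # latent features of target nodes
        T = rng.uniform(0, 0.1, (num_k, rank))  # latent features of time slots
        for _ in range(epochs):
            for (i, j, k), y in observed.items():
                pred = np.sum(S[i] * D[j] * T[k])  # rank-R inner product
                err = y - pred                     # instance error on one observed entry
                # L2-regularized SGD updates on the three latent-feature vectors.
                S[i] += lr * (err * D[j] * T[k] - lam * S[i])
                D[j] += lr * (err * S[i] * T[k] - lam * D[j])
                T[k] += lr * (err * S[i] * D[j] - lam * T[k])
        return S, D, T

    # Usage: factorize a toy network, then estimate an unobserved link weight.
    observed = {(0, 42, 5): 0.8, (42, 7, 5): 1.5, (7, 0, 6): 0.3}
    S, D, T = lft_sgd(observed, num_i=50, num_j=50, num_k=10)
    print(np.sum(S[0] * D[7] * T[6]))  # predicted weight of link 0 -> 7 at slot 6

Training touches only the observed entries, so the cost per epoch scales with the number of known interactions rather than with the full tensor size, which is what makes latent factorization practical on HDI data.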

Author(s): Hao Wu, Xuke Wu, Xin Luo
Series: SpringerBriefs in Computer Science
Publisher: Springer
Year: 2023

Language: English
Pages: 88
City: Singapore

Preface
Contents
Chapter 1: Introduction
1.1 Overview
1.2 Formulating a Dynamic Network into an HDI Tensor
1.3 Latent Factorization of Tensors
1.4 Book Organization
References
Chapter 2: Multiple Biases-Incorporated Latent Factorization of Tensors
2.1 Overview
2.2 MBLFT Model
2.2.1 Short-Term Bias
2.2.2 Preprocessing Bias
2.2.3 Long-Term Bias
2.2.4 Parameter Learning Via SGD
2.3 Performance Analysis of MBLFT Model
2.3.1 MBLFT Algorithm Design
2.3.2 Effect of Short-Term Bias
2.3.3 Effect of Preprocessing Bias
2.3.4 Effect of Long-Term Bias
2.3.5 Comparison with State-of-the-Art Models
2.4 Summary
References
Chapter 3: PID-Incorporated Latent Factorization of Tensors
3.1 Overview
3.2 PLFT Model
3.2.1 A PID Controller
3.2.2 Objective Function
3.2.3 Parameter Learning Scheme
3.3 Performance Analysis of PLFT Model
3.3.1 PLFT Algorithm Design
3.3.2 Effects of Hyper-Parameters
3.3.3 Comparison with State-of-the-Art Models
3.4 Summary
References
Chapter 4: Diverse Biases Nonnegative Latent Factorization of Tensors
4.1 Overview
4.2 DBNT Model
4.2.1 Extended Linear Biases
4.2.2 Preprocessing Bias
4.2.3 Parameter Learning Via SLF-NMU
4.3 Performance Analysis of DBNT Model
4.3.1 DBNT Algorithm Design
4.3.2 Effects of Biases
4.3.3 Comparison with State-of-the-Art Models
4.4 Summary
References
Chapter 5: ADMM-Based Nonnegative Latent Factorization of Tensors
5.1 Overview
5.2 ANLT Model
5.2.1 Objective Function
5.2.2 Learning Scheme
5.2.3 ADMM-Based Learning Sequence
5.3 Performance Analysis of ANLT Model
5.3.1 ANLT Algorithm Design
5.3.2 Comparison with State-of-the-Art Models
5.4 Summary
References
Chapter 6: Perspectives and Conclusion
6.1 Perspectives
6.2 Conclusion
References