Using metaheuristics to enhance machine learning techniques has become increasingly popular and has achieved major successes in both supervised (classification and regression) and unsupervised (clustering and rule mining) problems. Furthermore, automatically generating programs via metaheuristics, as a form of evolutionary computation and swarm intelligence, has gained widespread popularity. This book investigates different ways of integrating metaheuristics into machine learning techniques, from both theoretical and practical standpoints. It explores how metaheuristics can be adapted to enhance machine learning tools and presents an overview of the main metaheuristic programming methods. Moreover, real-world applications are provided for illustration, e.g., in clustering, big data mining, machine health monitoring, underwater sonar target detection, and banking.
Editors: Mansour Eddaly, Bassem Jarboui, Patrick Siarry
Publisher: Springer
Year: 2023
Language: English
Pages: 223
Preface
Organization of the Book
Contents
List of Contributors
Acronyms
Part I Metaheuristics for Machine Learning: Theory and Reviews
1 From Metaheuristics to Automatic Programming
1.1 Introduction
1.2 Metaheuristics
1.2.1 Solution-Based Metaheuristics
1.2.1.1 Simulated Annealing
1.2.1.2 Tabu Search
1.2.1.3 Hill Climbing
1.2.1.4 Variable Neighborhood Search
1.2.1.5 Iterated Local Search
1.2.2 Population-Based Metaheuristics
1.2.2.1 Evolutionary Computation
1.2.2.2 Swarm Intelligence
1.3 Automatic Programming
1.3.1 Program Representation
1.3.1.1 Tree Structure Representations
1.3.1.2 Linear Representations
1.3.1.3 Other Representations
1.3.2 Genetic Programming
1.3.3 Immune Programming
1.3.4 Gene Expression Programming
1.3.5 Particle Swarm Programming
1.3.6 Artificial Bee Colony Programming
1.3.7 Ant Colony Programming
1.3.8 Other Automatic Programming Algorithms
1.3.8.1 Bacterial Programming
1.3.8.2 Iterated Local Search Programming
1.3.8.3 Tree-Based Differential Evolution
1.3.8.4 Estimation of Distribution Programming
1.3.8.5 Grammatical Fireworks Algorithm
1.3.8.6 Artificial Fish Swarm Programming
1.3.8.7 Variable Neighborhood Programming
1.4 Discussion and Challenges
1.5 Conclusions
References
2 Biclustering Algorithms Based on Metaheuristics: A Review
2.1 Introduction
2.2 Basic Preliminaries
2.2.1 Biclustering Definition
2.2.2 Classical Biclustering Approaches
2.2.2.1 Iterative Row and Column Clustering Combination
2.2.2.2 Divide-and-Conquer
2.2.2.3 Greedy Iterative Search
2.2.2.4 Exhaustive Bicluster Enumeration
2.2.2.5 Distribution Parameter Identification
2.2.3 Biclustering as an Optimization Problem
2.3 Main Components of Metaheuristics for Biclustering
2.3.1 Bicluster Encoding
2.3.1.1 Binary Bicluster Encoding (BBE)
2.3.1.2 Integer Bicluster Encoding (IBE)
2.3.2 Variation Operators
2.3.3 Crossover Operators
2.3.3.1 Single Crossover Operator (Seridi, 2011, 2015)
2.3.3.2 Bicluster Crossover Operator (Maatouk, 2019)
2.3.4 Mutation Operators
2.3.4.1 Random Mutation Operator (Maatouk, 2019)
2.3.4.2 CC-Based Mutation Operator (Seridi, 2011, 2015)
2.3.5 Objective Functions
2.3.5.1 Bicluster Size (BSize)
2.3.5.2 Bicluster Variance (VAR)
2.3.5.3 Mean Squared Residue (MSR)
2.3.5.4 Scaling Mean Squared Residue (SMSR)
2.3.5.5 Average Correlation Function (ACF)
2.3.5.6 Average Correlation Value (ACV)
2.3.5.7 Virtual Error (VE)
2.3.5.8 Coefficient of Variation Function (CVF)
2.4 Single-Objective Biclustering
2.4.1 Simulated Annealing
2.4.2 Genetic Algorithms
2.4.3 Scatter Search
2.4.4 Cuckoo Search
2.4.5 Particle Swarm Optimization
2.4.6 Hybrid Metaheuristic Approaches
2.4.7 Summary
2.5 Multi-Objective Biclustering
2.5.1 Multi-Objective Evolutionary Algorithms Based on NSGA-II
2.5.2 Multi-Objective Evolutionary Algorithms Based on SPEA2 and IBEA
2.5.3 Multi-Objective Particle Swarm Optimization
2.5.4 Other Multi-Objective Approaches
2.5.5 Summary
2.6 Discussion and Future Directions
2.6.1 Future Directions
2.7 Conclusion
References
3 A Metaheuristic Perspective on Learning Classifier Systems
3.1 Motivation and Structure
3.2 What Are Learning Classifier Systems?
3.2.1 Learning Tasks
3.2.2 LCS Models
3.2.3 Overall Algorithmic Structure of an LCS
3.2.4 LCSs in Metaheuristics Literature
3.3 ML Systems Similar to LCSs
3.3.1 Decision Trees
3.3.2 Mixture of Experts
3.3.3 Bagging and Boosting
3.3.4 Genetic Programming
3.4 The Role of Metaheuristics in LCSs
3.4.1 Four Types of LCS
3.4.2 Metaheuristic Solution Representation
3.4.3 Metaheuristic Operators
3.4.4 Typical Fitness Functions
3.5 Other Metaheuristics-Centric Rule-Based Machine Learning Systems
3.5.1 Approaches Based on Evolutionary Algorithms
3.5.2 Approaches Based on the Ant System
3.5.3 Approaches Based on Other Metaheuristics or Hybrids
3.5.4 Artificial Immune Systems
3.6 Future Work
3.7 Conclusion
References
Part II Metaheuristics for Machine Learning: Applications
4 Metaheuristic-Based Machine Learning Approach for Customer Segmentation
4.1 Introduction
4.2 Proposed Method
4.2.1 Feature Selection
4.2.2 Clustering
4.2.3 General Structure of the Proposed Hybrid Scheme
4.2.4 Classification Based on Clustering
4.3 Computational Experiments and Results
4.3.1 Experiment Setup
4.3.2 Considered Datasets
4.3.3 Parameter Settings
4.3.4 Results and Discussion
4.3.4.1 Experiment 1
4.3.4.2 Experiment 2
4.4 Conclusion and Scope for Future Work
References
5 Evolving Machine Learning-Based Classifiers by Metaheuristic Approach for Underwater Sonar Target Detection and Recognition
5.1 Introduction
5.2 Theory of SVM
5.3 Problem Definition
5.4 Experiments and Results
5.4.1 Laboratory Dataset
5.4.1.1 Noise and Reflection Removal Stages
5.4.2 Common Dataset 2015 (CDS2015)
5.4.3 Experimental Real-World Underwater Dataset
5.4.4 Experimental Results and Discussion
5.5 Conclusion
Availability of Data and Material
Code Availability
References
6 Solving the Quadratic Knapsack Problem Using GRASP
6.1 Introduction
6.2 Quadratic Knapsack Problem
6.3 Greedy Algorithm
6.4 GRASP
6.4.1 Randomization
6.4.2 Single Swap Local Search
6.4.3 Multi-Swap Local Search
6.4.4 Implementation
6.5 Instance Generation
6.5.1 Weight
6.5.2 Item Profit
6.5.3 Pair Profit
6.6 Computational Results
6.6.1 Comparison to Existing Methods
6.6.2 Comparison of Local Searches
6.6.3 Evaluation of Test Instances
6.7 Conclusions
References
7 Algorithm vs Processing Manipulation to Scale Genetic Programming to Big Data Mining
7.1 Introduction
7.2 Scaling GP to Big Data Mining
7.2.1 Data Manipulation
7.2.2 Processing Manipulation
7.2.3 Algorithm Manipulation
7.3 Scaling GP by Processing Manipulation: Horizontal Parallelization
7.3.1 Spark vs MapReduce
7.3.2 Parallelizing GP over a Distributed Environment
7.3.3 Example of Implementation
7.4 Scaling GP by Algorithm Manipulation: Learning Paradigm
7.4.1 Active Learning with One-Level/Multi-Level Dynamic Sampling
7.4.1.1 Example of Implementation over DEAP
7.4.2 Active Learning with Adaptive Sampling
7.5 Combining Horizontal Parallelization and Active Learning
7.5.1 Dynamic Sampling in Distributed GP
7.5.2 Example of Dynamic-Sampling Implementation with Parallelized GP
7.6 Discussion
7.7 Conclusion
References
8 Dynamic Assignment Problem of Parking Slots
8.1 Introduction
8.2 Mixed Integer Programming Formulation
8.3 Estimation of Distribution Algorithms
8.3.1 Univariate Models
8.3.1.1 Population-Based Incremental Learning
8.3.1.2 Stochastic Hill Climbing with Learning by Vectors of Normal Distributions
8.3.1.3 Univariate Marginal Distribution Algorithm
8.3.2 Bivariate Models
8.3.3 Multivariate Models
8.3.3.1 Estimation of Multivariate Normal Density Algorithms (EMNA)
8.3.3.2 Estimation of Gaussian Network Algorithms
8.3.3.3 Iterative Density-Estimation Evolutionary Algorithm
8.4 Estimation of Distribution Algorithm with Reinforcement Learning
8.5 Computational Results
8.6 Conclusion
References