AI/ML Techniques for Data Caching in Fog-Based IoT Systems

To handle the enormous volume of multimedia traffic and meet user QoE requirements in next-generation (5G) mobile networks, it is vital to design efficient content caching techniques
at the network edge, which is considered a key strategy for 5G. Recent developments in fog computing and Machine Learning (ML) enable efficient caching techniques for 5G that reduce service latency by providing computation and storage capacity at the network edge. Edge caching is also a promising way to reduce redundant data transmission and improve QoE. In this section, we overview the AI/ML techniques used for data caching in the fog-based IoT paradigm.

In [99], the authors present an adaptive caching technique based on Extreme Learning Machine (ELM) neural networks that estimates content popularity from the content's features, user
behavior, and the statistics of user requests. The scheme also uses mixed-integer linear programming to select physical cache sizes and determine where content is placed in the network.
The authors show that the proposed technique improves users' QoE and network performance compared with industry-standard caching schemes. Another study proposes a networking paradigm in which network nodes proactively cache carefully selected contents at the network edge; here, Collaborative Filtering (CF) strategies are used to predict the file popularity matrix.
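As an illustration of the CF step, the sketch below (our own construction, not code from the cited study) factorizes a sparsely observed user-file demand matrix into low-rank factors and reads the predicted per-file popularity off the reconstruction; the matrix sizes, rank, and learning rate are arbitrary choices.

```python
# Illustrative sketch (our construction): estimate a file-popularity matrix
# with plain matrix factorization, a common collaborative-filtering baseline.
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((20, 40))              # users x files: true demand scores (synthetic)
mask = rng.random(R.shape) < 0.3      # only ~30% of entries are observed
U = 0.1 * rng.random((20, 4))         # rank-4 latent user factors
V = 0.1 * rng.random((40, 4))         # rank-4 latent file factors

lr, reg = 0.02, 0.01
for _ in range(3000):                 # gradient descent on observed entries only
    err = mask * (R - U @ V.T)        # residuals on observed cells
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

popularity = (U @ V.T).mean(axis=0)   # predicted demand per file across all users
print("top-5 files to cache:", np.argsort(popularity)[::-1][:5])
```

The `mask` models exactly the data sparseness discussed next: most user-file pairs are never observed, which is what makes CF estimates noisy.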
Nevertheless, CF learning techniques are sub-optimal primarily because of data sparseness and cold-start problems, which remain important open challenges in ML. Similarly, transfer
learning has been applied to popularity estimation, with the most popular contents cached proactively at small Base Stations (BSs) until their storage is full. However, this may cause redundant caching:
each BS caches the most popular content independently, so the same content may be cached by several small BSs, resulting in low caching efficiency. In another work, a proactive caching strategy
based on mobile edge computing is proposed to minimize the average transmission cost and increase the cache hit rate. The authors use a transfer-learning approach to predict content popularity and a
greedy algorithm to solve the cache content placement problem (a sketch of such a greedy placement follows below). The results reveal that the proposed mechanism outperforms other content caching schemes, such as Randomized Replacement (RR) and a popularity-aware greedy strategy, in terms of average content delivery latency, transmission cost, and cache hit rate.
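To make the placement step concrete, here is a minimal sketch of a greedy cache-filling routine; the `Content` class and its `predicted_popularity` field are our own stand-ins for the output of a popularity estimator such as the transfer-learning predictors above.

```python
# Hypothetical greedy cache placement: fill a fixed-capacity edge cache with
# the contents offering the highest predicted popularity per unit of size.
from dataclasses import dataclass

@dataclass
class Content:
    name: str
    size_mb: float
    predicted_popularity: float  # stand-in for a CF / transfer-learning estimate

def greedy_placement(catalog: list[Content], capacity_mb: float) -> list[Content]:
    cached, used = [], 0.0
    # Rank by popularity density so small, popular items are preferred.
    for c in sorted(catalog, key=lambda c: c.predicted_popularity / c.size_mb,
                    reverse=True):
        if used + c.size_mb <= capacity_mb:
            cached.append(c)
            used += c.size_mb
    return cached

catalog = [Content("video_a", 120, 0.40), Content("video_b", 60, 0.25),
           Content("clip_c", 10, 0.20), Content("doc_d", 5, 0.15)]
for c in greedy_placement(catalog, capacity_mb=100):
    print("cache:", c.name)
```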

Traditional caching techniques generally need a large number of online optimization iterations to determine content placement and delivery, so they are computationally expensive. By using Deep Neural Networks (DNNs) to optimize caching at the network edge, offline training can replace heavy online computation: at run time, only a Deep Learning (DL)
inference step is needed to produce the optimization strategy. A DNN can be trained on solutions produced by heuristic or optimal algorithms to learn the cache policy [103], avoiding online optimization
iterations. In addition, since the output of the optimization problem for partial cache refreshing exhibits recognizable patterns, a multi-layer perceptron can be trained to take the current content popularity and the last content placement probability as input and output the cache refresh policy (a minimal sketch follows below). DNNs can thus be used to reduce the complexity of optimization algorithms. However, DNN-based techniques
can only be applied when an optimization algorithm for the original caching problem is available; they are therefore not self-adaptive, and their performance is bounded by the
fixed optimization algorithms they imitate.
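The following sketch illustrates the imitation idea under stated assumptions: a PyTorch MLP is trained offline against labels from a stand-in "teacher" policy (here, simply caching the top-k popular items); in the cited works the teacher would be a heuristic or optimal solver, and the catalogue size `N` is arbitrary.

```python
# Minimal sketch (our construction): train an MLP offline to imitate a
# teacher cache-refresh policy, so inference replaces online optimization.
import torch
import torch.nn as nn

N = 50  # assumed catalogue size
mlp = nn.Sequential(nn.Linear(2 * N, 128), nn.ReLU(),
                    nn.Linear(128, N), nn.Sigmoid())  # per-content cache probability
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def teacher_policy(popularity, k=10):
    # Stand-in "optimal" labels: cache the k currently most popular contents.
    labels = torch.zeros_like(popularity)
    labels.scatter_(1, popularity.topk(k, dim=1).indices, 1.0)
    return labels

for step in range(500):                    # offline training loop
    popularity = torch.rand(64, N)         # synthetic popularity snapshots
    last_placement = torch.rand(64, N)     # previous placement probabilities
    x = torch.cat([popularity, last_placement], dim=1)
    loss = loss_fn(mlp(x), teacher_policy(popularity))
    opt.zero_grad(); loss.backward(); opt.step()
# At run time a single forward pass yields the refresh policy -- no iterative solver.
```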
Moreover, DL can be used for customized caching at the network edge. For example, in [105], a multi-layer perceptron is deployed in the cloud to predict which contents will be requested, in order to minimize content-download delay for self-driving cars. The outputs of the multi-layer
perceptron are sent to the edge nodes, and each node caches the contents with the highest predicted request probabilities. On self-driving cars, Convolutional Neural
Networks (CNNs) can be used to predict the age and gender of the owner [105]. Once these features are identified, other ML algorithms such as binary classification and K-means clustering [106] can
determine which contents should be downloaded from the edge nodes to the car, as in the sketch below.
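A minimal sketch of that selection step, assuming (our assumption) that the content features and the predicted user profile live in the same synthetic feature space; `scikit-learn`'s `KMeans` stands in for the clustering algorithm of [106]:

```python
# Illustrative sketch: cluster catalogue items with K-means, then push the
# cluster closest to the predicted user profile to the vehicle.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
content_features = rng.random((200, 8))   # e.g., genre/topic embeddings (synthetic)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(content_features)

user_profile = rng.random(8)              # stand-in for CNN-predicted age/gender traits
target_cluster = km.predict(user_profile.reshape(1, -1))[0]
to_download = np.where(km.labels_ == target_cluster)[0]
print(f"edge node pushes {len(to_download)} items from cluster {target_cluster}")
```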
In addition, since users' willingness to access content differs across environments and changes over time [107], Recurrent Neural Networks (RNNs) can be used to predict users'
trajectories. Based on these predictions, the contents of interest to each user can be cached in advance on the edge node at each predicted location.
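A minimal LSTM-based sketch of such trajectory prediction, with an assumed grid of `NUM_CELLS` coverage cells (training on real mobility traces is omitted for brevity):

```python
# Minimal sketch (our construction): predict a user's next location cell from
# a window of recent cells, so the matching edge node can prefetch content.
import torch
import torch.nn as nn

NUM_CELLS = 64  # assumed number of coverage cells

class TrajectoryRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(NUM_CELLS, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_CELLS)

    def forward(self, cells):               # cells: (batch, seq_len) of cell ids
        h, _ = self.lstm(self.embed(cells))
        return self.head(h[:, -1])          # logits over the next cell

model = TrajectoryRNN()                     # would be trained on mobility traces
recent = torch.randint(0, NUM_CELLS, (1, 10))  # synthetic 10-step trajectory
next_cell = model(recent).argmax(dim=1).item()
print(f"prefetch this user's content at the edge node covering cell {next_cell}")
```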

Besides DNNs, Deep Reinforcement Learning (DRL) can be used to maximize long-term caching performance by treating the whole optimization problem end to end. The advantage of DRL lies in the fact that
the embedded DNNs can learn the key features of raw observation data; by combining DL and Reinforcement Learning (RL), DRL can improve cache management in the fog/edge computing paradigm directly from high-dimensional observations. For instance, one study uses Deep Deterministic Policy Gradient (DDPG) to train a DRL agent that improves the cache hit rate and makes appropriate cache replacement decisions. A single-base-station scenario is considered in which the DRL agent decides whether to cache requested contents or replace cached ones. In addition, in [110], the authors propose an algorithm to deal with the challenge of a large action space: a K-Nearest Neighbor (KNN) step maps the set of practical actions into a reduced set, narrowing the action space deliberately without discarding the optimal caching policy. The results reveal that the proposed
algorithm outperforms Deep Q-Learning (DQL) baselines, which search the whole action space, in terms of cache hit rate and runtime. Another study on DRL for caching in fog-based IoT proposes a DRL-based algorithm for coded caching in fog Radio Access Networks (RANs): the network controller allocates the limited cache space of the fog access
points to different coded files according to users' historical requests. Simulation results show that the proposed algorithm improves the successful transmission probability compared
with other ML algorithms such as Q-Learning. A toy sketch of DRL-style cache replacement is given below.
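The sketch below is our own toy construction, far simpler than the cited DDPG and KNN-based agents: on each cache miss, a small Q-network scores "evict slot i" versus "skip", and is trained online from a one-step hit/miss reward under synthetic Zipf traffic. The catalogue size, cache size, and reward design are all illustrative assumptions.

```python
# Toy DRL-style cache replacement (our construction, not the cited agents).
import numpy as np
import torch
import torch.nn as nn

N_CONTENTS, SLOTS, STEPS = 30, 5, 20000
p = 1.0 / np.arange(1, N_CONTENTS + 1); p /= p.sum()   # Zipf request popularity

# State: normalized request counts of the cached items plus the missed item.
qnet = nn.Sequential(nn.Linear(SLOTS + 1, 32), nn.ReLU(), nn.Linear(32, SLOTS + 1))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)

rng = np.random.default_rng(0)
cache = list(rng.choice(N_CONTENTS, SLOTS, replace=False))
counts, hits = np.zeros(N_CONTENTS), 0
for t in range(1, STEPS + 1):
    req = int(rng.choice(N_CONTENTS, p=p))
    counts[req] += 1
    if req in cache:
        hits += 1
        continue
    state = torch.tensor(counts[cache + [req]] / t, dtype=torch.float32)
    q = qnet(state)
    a = int(rng.integers(SLOTS + 1)) if rng.random() < 0.1 else int(q.argmax())
    if a < SLOTS:                    # actions 0..SLOTS-1 evict a slot; SLOTS = skip
        cache[a] = req
    lookahead = int(rng.choice(N_CONTENTS, p=p))   # sample a likely next request
    reward = 1.0 if lookahead in cache else 0.0    # one-step return, no discounting
    loss = (q[a] - reward) ** 2                    # TD(0)-style regression target
    opt.zero_grad(); loss.backward(); opt.step()
print(f"online hit rate: {hits / STEPS:.2f}")
```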
