
Game theory and evolutionary optimization approaches applied to resource allocation problems in computing environments: A survey

  Abstract: Today's intelligent computing environments, including the Internet of Things (IoT), Cloud Computing (CC), Fog Computing (FC), and Edge Computing (EC), allow organizations worldwide to optimize their resource allocation with respect to quality of service and energy consumption. Owing to the intense demands users place on resources and the real-time nature of the data, no comprehensive and integrated computing environment has yet provided a robust and reliable capability for proper resource allocation. Although traditional resource allocation approaches are efficient for small-scale resource providers with low-capacity hardware, they cannot adapt optimally to complex systems with dynamic computing resources and fierce competition for resources. To optimize resource allocation with minimal delay, low energy consumption, minimal computational complexity, high scalability, and better resource utilization efficiency, CC/FC/EC/IoT-based computing architectures should be designed intelligently. Therefore, the objective of this research is a comprehensive survey of resource allocation problems addressed by computational intelligence-based evolutionary optimization and mathematical game theory approaches in different computing environments, based on the latest scientific research achievements.

    Citation: Shahab Shamshirband, Javad Hassannataj Joloudari, Sahar Khanjani Shirkharkolaie, Sanaz Mojrian, Fatemeh Rahmani, Seyedakbar Mostafavi, Zulkefli Mansor. Game theory and evolutionary optimization approaches applied to resource allocation problems in computing environments: A survey[J]. Mathematical Biosciences and Engineering, 2021, 18(6): 9190-9232. doi: 10.3934/mbe.2021453


    Nowadays, researchers try to enhance the overall performance and QoS of IoT environments using intelligent mechanisms. In the IoT, handling all types of data and providing valuable, efficient services involve various challenges, because numerous devices produce different types of data at various frequencies. The advent of new paradigms such as CC, FC, and EC drives the development of the IoT through the World Wide Web [1,2,3,4]. The common goal of these computing environments is to deliver optimal resource allocation to consumers via the internet [5], although their structures and mechanisms differ.

    For allocation problems, a reasonable and optimal allocation must be obtained using a suitable resource allocation mechanism for services that can be orchestrated concurrently [6]. Resource allocation is a prominent subject in CC, where scarce resources are distributed [7]. Computing resources are dynamically allocated according to the conditions and priorities of users. Conventional resource management systems are not capable of processing resource allocation tasks and allocating existing resources dynamically in a CC environment. Dynamic allocation of resources has a high computational complexity owing to the complex procedure of assigning numerous versions of the identical work to various devices.

    Resource allocation is thus a significant challenge in CC [8]. Cloud computing is a model that provides easy access to a pool of changeable and configurable computing resources. In this model, servers and storage resources are provisioned on user demand through the World Wide Web, with minimal resource management effort and direct intervention by the service provider [9,10]. The advent of cloud computing, which is rapidly transforming the information technology landscape [11], has been built on virtual infrastructures [12], and its services are managed by professional providers [13]. Cloud computing has also evolved from computing models such as network computing, distributed computing, and parallel computing, with an emphasis on usability [11,14,15].

    Due to the heterogeneous nature of cloud computing, resource allocation is frequently based on two methods: system-centric and user-centric. The system-centric method optimizes system-wide performance criteria, e.g., total system throughput. The user-centric method mainly focuses on provisioning maximum utility for users based on QoS requirements [16]. Achieving the three goals of increasing resource utilization efficiency, increasing user satisfaction with the desired QoS, and maximizing the benefits of both fog service providers and users is a challenging resource allocation and scheduling problem [17,18].
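    To make the distinction concrete, the two methods can be contrasted in a minimal sketch. This is an illustrative toy model, not an algorithm from [16]: the capacity, task names, and demand figures are hypothetical. The system-centric policy greedily admits tasks to maximize total admitted throughput, while the user-centric policy divides capacity in proportion to each user's QoS demand.

```python
def system_centric(capacity, demands):
    """Greedily admit tasks (smallest demand first) to maximize admitted throughput."""
    allocation = {}
    remaining = capacity
    for task, demand in sorted(demands.items(), key=lambda kv: kv[1]):
        if demand <= remaining:
            allocation[task] = demand
            remaining -= demand
    return allocation

def user_centric(capacity, demands):
    """Split capacity proportionally to each user's QoS demand."""
    total = sum(demands.values())
    return {task: capacity * d / total for task, d in demands.items()}

demands = {"t1": 4.0, "t2": 2.0, "t3": 6.0}   # hypothetical task demands
print(system_centric(10.0, demands))  # admits t1 and t2 fully; t3 is rejected
print(user_centric(10.0, demands))    # every user gets a proportional share
```

    The system-centric policy maximizes how much work is admitted but may starve a large request entirely, whereas the user-centric policy serves everyone partially; the tension between the two is exactly the scheduling trade-off noted above.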

    On the other hand, cloud computing, with its three main types of services, i.e., infrastructure, platform, and software, and its key benefits (e.g., scalability and flexibility), still faces some challenges. The distance between the cloud and end devices can be a problem for delay-sensitive applications such as crisis management and content delivery.

    The integration of CC with IoT devices, called CoT, has been introduced in recent years to address the aforementioned challenges and has been the subject of much research. CoT simplifies the management of growing media content and other data and offers features such as pervasive access, service creation, service discovery, and resource provisioning. It is essential to decide which data to load into the CC without burdening the core network and the cloud [19]. Despite these improvements, a new paradigm called FC has been placed between the IoT and cloud levels to manage resources, perform data filtering, preprocessing, and security measures, provide low latency and location awareness, and improve service quality [20].

    According to Cisco's definition of FC, it adds cloud-layer services to the Local Area Network (LAN) so that operations occur close to the end-user, reducing both data processing time and network traffic overhead [21,22,23]. FC, known as "clouds at the edge," has been developed to improve QoS by allocating resources near devices, reducing computation latency and fronthaul traffic at network edges [17]. In addition, in FC, different edge nodes can collaborate to share computing, storage, and communication resources to perform some computing tasks locally, without interacting with the CC center through fronthaul links [24]. Indeed, the fog computing infrastructure, as a new form of distributed computing, supports the heterogeneity of fog devices, including end-user devices, access points, edge routers, and switches [25].

    The most essential element in the FC is the fog node, which expedites the implementation and deployment of applications in the IoT layer [22]. However, deploying fog computing in the IoT environment has its challenges [26]. According to the above description, a three-layer model of the IoT, FC, and CC is shown in Figure 1.

    Figure 1.  Three-layer model of the IoT, FC, and CC [1].

    Based on Figure 1, the main goal of the IoT and fog networks is to provide services in acceptable and agreed QoS metrics and low monetary cost [4,27]. Additionally, cloud computing shifts towards the edge to improve QoS for IoT devices.

    Nevertheless, the resource capacity of the fog network is constrained [28], and it is necessary to deploy IoT applications effectively, with exact QoS requirements, on the existing network infrastructures [29].

    Hence, an appropriate FC-based resource allocation model is necessary to cover the components of the fog node, so that different resources can be selected, after a transfer from the IoT layer to the fog layer, by designing resource allocation strategies [30]. In this way, data packets move to fog nodes at different points in the network, where they can be managed to host fog services combined with IoT applications, reducing latency and jitter and optimizing power consumption [22].

    To this end, a new computing paradigm, namely edge computing, has been introduced to solve the problems of battery lifetime constraints, response time, bandwidth cost, and data security and privacy.

    Edge computing comprises the enabling technologies that permit computation to be conducted at the edge of the network (e.g., at edge nodes), on downstream data on behalf of cloud services and on upstream data on behalf of IoT services. The edge computing paradigm can be used in place of FC, with the distinction that EC concentrates more on the IoT side, whereas FC concentrates more on the infrastructure side. EC is also considered as crucial as CC [31]. The EC environment is shown in Figure 2.

    Figure 2.  Edge computing environment [31].

    Based on Figure 2, in the EC environment, things play the role of data consumers and data providers.

    At the edge, things can behave in two ways: requesting services/content and conducting tasks offloaded from the CC. A rational resource allocation across the FC, EC, CC, and IoT is required to address problems such as energy consumption, optimal resource allocation, fog node lifetime, real-time interaction, latency, and jitter.

    Hence, in this article, we concentrate on innovative approaches to resource allocation problems, namely evolutionary optimization and game theory. Evolutionary optimization methods may optimize only a few resources, whereas the most-needed resources must be made available to consumers at the expected time [32].

    Evolutionary optimization is utilized to solve the problem of increasing the efficiency of host-centric applications and reducing the total power consumption in a data center [33]. For instance, in [34], a bio-inspired hybrid algorithm is applied to optimize resource utilization and minimize response time and processing costs.
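    As a concrete illustration of this style of search, and not the bio-inspired hybrid algorithm of [34] itself, the following toy (1+1) evolution strategy assigns tasks to hosts so as to minimize the makespan (the maximum host load); the task sizes, host count, and generation budget are hypothetical assumptions.

```python
import random

def makespan(assign, task_sizes, n_hosts):
    """Maximum total load over all hosts for a given task-to-host assignment."""
    loads = [0.0] * n_hosts
    for task, host in enumerate(assign):
        loads[host] += task_sizes[task]
    return max(loads)

def evolve(task_sizes, n_hosts, generations=500, seed=0):
    """(1+1)-ES: mutate one gene per generation, keep the child if no worse."""
    rng = random.Random(seed)
    n = len(task_sizes)
    best = [rng.randrange(n_hosts) for _ in range(n)]
    best_cost = makespan(best, task_sizes, n_hosts)
    for _ in range(generations):
        child = best[:]
        child[rng.randrange(n)] = rng.randrange(n_hosts)  # mutate one task's host
        cost = makespan(child, task_sizes, n_hosts)
        if cost <= best_cost:   # survivor selection: accept equal-or-better children
            best, best_cost = child, cost
    return best, best_cost

sizes = [5, 3, 8, 2, 7, 4]                  # hypothetical task sizes
assignment, cost = evolve(sizes, n_hosts=3)
print(assignment, cost)
```

    Real evolutionary schemes for resource allocation use populations, crossover, and multi-objective fitness (latency, cost, energy), but the mutate-evaluate-select loop above is the core mechanism they all share.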

    Besides optimization methods, game theory models have been developed to reach goals such as increasing resource utilization, providing high QoS for consumers, and allocating resources efficiently and fairly among cloud centers, cloud service providers, and data consumers [35].

    Meanwhile, game theory uses mathematical tools to model and analyze situations containing several decision-makers, called players. Each player has a set of actions and competes for shared and scarce network resources. The resulting model captures the interaction between the players by permitting each player to be affected by the actions of all players, in order to achieve a suitable allocation of resources based on the players' service requests [36,37].
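    The player/action setup described above can be sketched with a toy congestion game; the payoffs are illustrative assumptions, not the models of [36,37]. Each player picks one of two servers, a player's cost is the number of players sharing its chosen server, and repeated best responses stop when no player can improve, i.e., at a Nash equilibrium.

```python
def best_response_dynamics(n_players, n_servers=2, max_rounds=100):
    """Iterate best responses in a server-congestion game until no player moves."""
    choice = [0] * n_players                      # everyone starts on server 0
    for _ in range(max_rounds):
        changed = False
        for p in range(n_players):
            # cost of player p joining server s = players already there + itself
            loads = [sum(1 for q in range(n_players)
                         if q != p and choice[q] == s) + 1
                     for s in range(n_servers)]
            best = min(range(n_servers), key=lambda s: loads[s])
            if loads[best] < loads[choice[p]]:    # strictly improving deviation
                choice[p] = best
                changed = True
        if not changed:                           # no one can improve: Nash point
            return choice
    return choice

print(best_response_dynamics(4))   # a balanced split, e.g. two players per server
```

    Because each deviation strictly lowers the mover's congestion cost, the dynamics terminate at an equilibrium where the load is balanced, which is the fairness-and-efficiency outcome the surveyed game models aim for.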

    Therefore, in this article, two categories of approaches, game theory and evolutionary optimization, are investigated in various computing environments, including the IoT, CC, FC, MEC, VFC, Heterogeneous Relay Networks (HRNs), FC-IoT, HetNets, DCs, 5G HetNets, VEC, CC-IoT, CNs, 5G networks, LTE, CC/FC, HetNets-MEC, IoT-FC, and FC-IIoT.

    The innovations of this article are as follows:

    ● A comprehensive review of two categories of approaches, game theory and evolutionary optimization, for resource allocation in various intelligent computing environments.

    ● A comparison between past survey articles on resource allocation in computing environments and the present article.

    ● Suggestions for future research on resource allocation problems in computing environments.

    The rest of the article is organized as follows. Section 2 presents the research methodology, including a review of all published survey and research articles on resource allocation problems. Section 3 discusses the survey itself, with explanations, figures, and tables.

    A summary of the challenges identified in the studied research is given in Section 4. Finally, Section 5 presents conclusions and future research directions.

    In this article, all studies, in the form of survey articles and research articles, are investigated based on the research methodology. First, the survey articles published on resource allocation are described in detail in Subsection 2.1. Then, Subsection 2.2 explains the research methodology according to the studies recently carried out, in the form of research articles, on the resource allocation problem in various computing environments using the two categories of game theory and evolutionary optimization approaches.

    For the survey articles on resource allocation problems, a detailed description of the purposes, limitations, and computing environments of the studies is given. A comparison of them with the current article is also presented in the following.

    Yousafzai et al. [38] have investigated resource allocation problems in the CC environment. For the resource allocation process, cloud resource allocation strategies have been considered using a thematic classification with features such as design methods, optimization goals, techniques, and performance. In contrast, in this article, we investigate most of the computing environments considered in the field of resource allocation in recent years, using game theory and evolutionary optimization approaches.

    Ghobaei et al. [39] have examined resource management approaches for the FC environment in six classes, including application placement, resource scheduling, load balancing, task offloading, resource provisioning, and resource allocation. They have considered auction-based and optimization schemes for resource allocation. However, in this article, various computing environments in the field of resource allocation are investigated.

    Hameed et al. [40], Beloglazov et al. [41], and Shuja et al. [42] have conducted studies categorizing articles based on energy efficiency in the CC environment, while, in this article, the classification is performed across the various resource allocation approaches in different computing environments.

    Aceto et al. [43] have studied merely resource monitoring for the CC systems, while we focus more comprehensively on resource allocation in different computing environments.

    Jennings and Stadler [44] have presented a framework for cloud resource management. In this survey, by contrast, the existing literature is reviewed with regard to two categories of approaches: game theory and evolutionary optimization.

    Goyal and Dadizadeh [45] have discussed the implementation of parallel processing frameworks, e.g., Microsoft's Dryad and Google's MapReduce frameworks. However, in our study, resource allocation methods according to the latest scientific achievements published in the articles are discussed.

    Hussain et al. [46] have examined the work process of commercial cloud service providers and open-source cloud deployment results. However, in this article, two approaches to resource allocation problems in different computing environments are studied.

    Huang et al. [47] have conducted studies on dynamic resource allocation and task scheduling strategies to investigate how a Software-as-a-Service (SaaS)-based cloud computing system operates under infrastructures. But, in this survey, a review of resource allocation approaches in different computing environments is the subject of study.

    Ahmad et al. [48,49] have studied only the virtual machine migration optimization features for cloud data service operators. However, our study focuses on the latest resource allocation approaches, including game theory and evolutionary optimization approaches for a wide range of computing environments.

    Vinothina et al. [50] have discussed the classification of different strategies and resource allocation challenges and their effects on the cloud system. Regarding the strategy, they have mainly focused on Central Processing Unit (CPU) and memory resources. However, in the present article, we deal with various computing environments for resource allocation problems.

    Anuradha and Sumathi [51] have studied resource allocation approaches and techniques in CC. They have also compared the approaches with respect to their merits and demerits. The strategies they have reviewed include prediction algorithms for resource requirements and resource allocation algorithms. Finally, they have identified an efficient resource allocation strategy that effectively utilizes resources in an environment limited to cloud computing resources. In this article, by contrast, a review of game theory and evolutionary optimization approaches is carried out for resource allocation in different computing environments.

    Mohamaddiah et al. [33] have reviewed research on resource management, mainly resource allocation and resource monitoring strategies. They have investigated schemes to address resource allocation problems in the CC environment. However, we analyze most of the different computing environments by applying game theory and evolutionary optimization approaches to resource allocation problems.

    In a study by Mohan and Raj [7], different strategies for allocating network resources and their applications in the CC environment have been presented. The issue of network resource allocation in cloud computing, which is based on the differentially adapted dynamic proportions, has also been briefly described. However, in the current study, we review two categories of game theory and evolutionary optimization approaches for optimal resource allocation problems in different computing environments.

    In a study by Castaneda et al. [52], an overview and a survey of various methods have been presented to achieve common optimization tasks in the downlink of Multi-User Multiple-Input Multiple-Output (MU-MIMO) communication systems.

    Manvi and Shyam [53] have investigated important resource management methods, including resource provisioning, resource allocation, resource mapping, and resource matching. They have provided a comprehensive review of the techniques for Infrastructure-as-a-Service (IaaS) in the CC, and have also presented issues and open problems for further research.

    In a study by Su et al. [54], the methods and models of resource allocation algorithms in 5G network slicing have been examined. The fundamental ideas of Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) and their roles in network slicing have been introduced. The Management and Orchestration (MO) architecture of the network slicing that provides a basic framework for resource allocation algorithms has also been investigated. Subsequently, the categories of resources with the corresponding limited surface in the Radio Access Network (RAN) slicing and core network slicing are discussed.

    In the most recent survey, Saeik et al. [55] have examined three leading solution families, mathematical optimization, artificial intelligence, and control theory, to demonstrate how edge and/or cloud computing can be combined to simplify task offloading problems. Their study covers numerous viewpoints, such as IoT and 5G technologies, communication, caching, mobility management, application scenarios, remote integration, architecture, task offloading, models, and algorithms. The solutions are classified by optimization objective, algorithm, mobility, offloading, and granularity. The task offloading objectives consist of delay, energy, bandwidth/spectrum, load balancing, deployment cost, model accuracy, and multi-objective optimization, and are similar across the three solution subfields. The mobility, offloading, and optimization-objective parameters of the three subfields are essentially alike, but the algorithms used in their categorization differ. The authors compared the solutions on stability, low complexity, optimality, online training, reachability, and real-time decision features. Stability, optimality, and reachability perform well in the control theory solutions, while low complexity and optimality work well in mathematical optimization. Online training and reachability perform well in artificial intelligence, and real-time decision-making works well across all solutions.

    Hence, control theory techniques have been examined as an alternative way to handle the uncertainties of dynamic problems and to guarantee the essential stability of task offloading systems.

    In this subsection, according to Figure 3, the process of the proposed approach is presented for applying methods from two categories, game theory and evolutionary optimization, to resource allocation problems in intelligent computing environments.

    Figure 3.  The game theory and evolutionary optimization approaches for resource allocation in various computing environments.

    According to Figure 3, Subsections 2.2.1 and 2.2.2 explain in detail the research conducted in the field of resource allocation based on the game theory and evolutionary optimization approaches, respectively.

    Game-based theories provide a wide range of mathematical tools for modeling and analyzing interactions related to individual or group behaviors of users in computing environments [56]. Game theory [35] is an appropriate tool to model and examine the resource allocation problem in different computing environments. In general, game theory approaches help to simplify complex problems to a great extent. Therefore, in this article, according to the latest scientific achievements, we review recently published research articles on resource allocation using game theory techniques in various computing environments. In this subsection, we describe in detail the game theory models for resource allocation in the different computing environments specified in Figure 3.

    In a study by Zhang et al. [57], a three-layer hierarchical game framework has been proposed for resource management in multi-Data Service Operator (multi-DSO), multi-Fog Node (multi-FN), and multi-Authorized Data Service Subscriber (multi-ADSS) scenarios. In this framework, firstly, a Stackelberg game between DSOs and ADSSs is introduced, where each DSO acts as a leader and provides virtual services to ADSSs as followers. Then, a moral hazard model from contract theory has been adopted between DSOs and FNs to encourage FNs to provide adequate physical resources. Finally, a student-project matching game has been suggested for resource allocation based on physical and virtual resources.

    Klaimi et al. [58] have proposed a novel concept of VFC to solve the adaptive control problem for applications with various quality of service requirements and dynamic vehicle resources. An effective game theory method has been designed and a new algorithm has been introduced that evaluates all the features of the game to show that the Nash equilibrium criterion has always been maintained in the game. The proposed approach can improve the QoS, energy, and system efficiency compared to the Low-latency Queuing algorithm.

    In a study by Zhang et al. [59], a joint optimization framework has been proposed for all the Fog Nodes (FNs), Data Service Operators (DSOs), and Data Service Subscribers (DSSs) to attain optimal resource allocation approaches in IoT-fog networks. In this framework, a Stackelberg game has been formulated for analyzing the pricing problem of DSOs and the resource allocation problem of DSSs. Moreover, a many-to-many matching game has been utilized to deal with the pairing problem between DSOs and FNs, in which each DSO is aware of the anticipated amount of resources purchased by the DSSs. Another layer of many-to-many matching, between each paired FN and the server, has been used to address the FN-DSS pairing problem within the same DSO. The suggested framework can considerably enhance the performance of IoT-based network systems. Eventually, as the average workload arrival rate of the DSSs increases, the utility of the FNs first improves and then gradually converges to a fixed value when the number of FNs is fixed.
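    A hedged sketch of the leader-follower (Stackelberg) structure recurring in these studies follows. The utility forms are illustrative assumptions, not the exact models of [57,59]: a subscriber with valuation v maximizes v*ln(1 + x) - p*x, so its best response to price p is x*(p) = max(v/p - 1, 0), and the operator, moving first, picks the price that maximizes its profit given those anticipated responses (backward induction).

```python
def follower_demand(v, p):
    """Follower's best response: maximizer of v*ln(1 + x) - p*x over x >= 0."""
    return max(v / p - 1.0, 0.0)

def leader_profit(p, valuations, unit_cost):
    """Leader's profit at price p, paying unit_cost per unit of resource provisioned."""
    return sum((p - unit_cost) * follower_demand(v, p) for v in valuations)

def leader_best_price(valuations, unit_cost, grid):
    """Backward induction: search a price grid against the followers' best responses."""
    return max(grid, key=lambda p: leader_profit(p, valuations, unit_cost))

valuations = [2.0, 3.0, 5.0]                 # hypothetical subscriber valuations
grid = [0.5 + 0.1 * k for k in range(40)]    # candidate prices 0.5 .. 4.4
p_star = leader_best_price(valuations, 1.0, grid)
print(p_star, [round(follower_demand(v, p_star), 2) for v in valuations])
```

    The resulting (price, demands) pair is a Stackelberg equilibrium of the toy model: no follower can gain by changing its demand at the posted price, and the leader cannot gain by posting a different grid price given those responses.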

    In a study by Munir et al. [60], a hierarchical game theory algorithm has been proposed for optimal resource allocation in the process of a heterogeneous network with femtocell movements in the edge. The first game of the algorithm consists of Femtocell Access Points (FAPs), which play a non-cooperative game to select between open and closed access policies to increase their home subscriber rates. In the second game, Macrocell User Equipment (MUE) is provided to decide on the connection between the FAPs and the Macrocell Base Station (MBS) to maximize their rates and total network performance. The hierarchical game algorithm with a network-assisted user-centric scheme improves the performance of the 5G HetNets significantly compared to the closed and network-centric access policy approaches.

    Chen et al. [61] have proposed a game-theoretic framework, namely a multi-leader multi-follower Stackelberg game, where End Users (EUs) act as followers and Mobile Edge Clouds (MECs), or mobile edge computing nodes, act as leaders. The proposed framework is a solution for computing a Stackelberg equilibrium, where each MEC reaches maximum revenue and each end user attains maximum efficiency under resource budget limitations. The resource allocation and pricing problems are decomposed by converting the multiple resources into subsets, where each subset is treated as a single type of resource.

    A Stackelberg game framework is created for each sub-problem, where each player, namely each end-user, can maximize productivity by choosing a suitable strategy from the strategy space. Their study has demonstrated the existence of the Stackelberg equilibrium for each sub-game and provided algorithms for determining the Stackelberg equilibrium for each type of resource. As a result, it has been shown that an end-user with idle resources can act as an MEC.

    In a study by Liang et al. [62], a Stackelberg-model-based hierarchical game has been developed to deal with resource allocation in Heterogeneous Relay Networks (HRNs). The suggested game comprises two sub-games, the Backhaul-Level Game (BLG) and the Access-Level Game (ALG), where Relay Nodes (RNs) play the role of BLG leaders on backhaul links and Mobile Stations (MSs) act as ALG followers on access links. Leaders select optimal resource allocation strategies by estimating their accessibility to MSs, and followers subsequently give the best response to the leaders' strategies. As a result, this method can ensure the balance of power between backhaul links and access links, and enhance user data rates and the full use of system resources in HRNs.

    Nezarat and Dastghaibifard [63] have proposed an auction-based approach under non-cooperative game theory, combined with a Bayesian learning method, to determine the auction winner in a cloud computing environment. They have compared the proposed method with three algorithms, the cost-agnostic greedy, cost-agnostic provision, and Naïve Auction algorithms, in terms of workload and the number of tasks assigned to resources within their deadlines. The comparison shows that in the Naïve Auction method, many tasks can be auctioned, while only a small number of them obtain their desired resources. The other methods are capable of allocating more than 80% of tasks to resources; however, with the suggested method, an even higher percentage of tasks receive services, and the breach rate of the service level agreement is negligible. Finally, using non-cooperative game theory, resource allocation reaches a Nash equilibrium point in an incomplete-information environment, and the Bayesian learning forecast performs best in helping the auctioneer find the equilibrium price of resources.

    In another study, Nezarat and Dastghaibifard [64] have used a non-cooperative game-theoretic method with Bayesian learning, based on auction theory, to address the resource allocation problem in the cloud computing environment. The objective of the proposed method is to choose the best bidder to sell resources to in an incomplete-information environment. The experimental results have demonstrated that resource allocation among non-cooperative users can reach a Nash equilibrium point in an environment with incomplete information using the game-theoretic method. Also, the Bayesian learning forecast performs well in helping the auctioneer determine the equilibrium price of resources, so that the non-cooperative game-theoretic method combined with Bayesian learning can solve resource allocation problems in cloud computing.
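    The winner-determination step at the core of such auction-based allocation can be illustrated with a minimal sealed-bid (Vickrey) sketch; this is not the authors' Bayesian mechanism, only the basic auction logic it builds on. The highest bidder wins the resource but pays the second-highest bid, which makes truthful bidding a dominant strategy.

```python
def vickrey_auction(bids):
    """bids: dict mapping bidder id -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)   # highest bid first
    winner = ranked[0]
    # Second-price rule: the winner pays the runner-up's bid.
    price = bids[ranked[1]] if len(ranked) > 1 else bids[winner]
    return winner, price

winner, price = vickrey_auction({"u1": 4.0, "u2": 7.5, "u3": 6.0})
print(winner, price)  # u2 6.0
```

    In the surveyed work the auctioneer additionally learns the bidders' private valuations over repeated rounds (via Bayesian updating) rather than applying a fixed pricing rule.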

    Yang et al. [65] have used Stackelberg-based game theory to model the problem of minimizing energy consumption in DCs. In their model, the recipient of system requests, who acts as the leader, maximizes profit by adjusting the provision of resources, whereas the scheduler agents, who act as followers, choose resources to achieve optimal performance. The suggested method can significantly enhance energy efficiency in dynamic work scenarios without compromising any service level agreement in the DCs. Moreover, the power consumption of the STG (the label given to the proposed method), dynamic capacity provisioning, and dynamic voltage and frequency scaling schemes increases as the average response time decreases.
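    The leader-follower reasoning that recurs in these Stackelberg formulations can be sketched with a toy pricing game; the quadratic follower utility, unit cost, and valuation below are assumptions chosen for illustration, not any surveyed model. The leader anticipates the follower's best response (backward induction) and prices against it.

```python
def follower_best_response(price, value=10.0):
    # Follower maximizes value*d - d^2/2 - price*d  =>  d* = max(value - price, 0)
    return max(value - price, 0.0)

def leader_profit(price, cost=2.0, value=10.0):
    return (price - cost) * follower_best_response(price, value)

def stackelberg(cost=2.0, value=10.0, steps=10001):
    # Backward induction by grid search over the leader's price.
    grid = (cost + i * (value - cost) / (steps - 1) for i in range(steps))
    p_star = max(grid, key=lambda p: leader_profit(p, cost, value))
    return p_star, follower_best_response(p_star, value)

p_star, d_star = stackelberg()
print(p_star, d_star)  # 6.0 4.0 -- matches the analytic optimum p* = (value + cost) / 2
```

    The essential point, common to the surveyed formulations, is that the leader optimizes over the follower's reaction function rather than over the follower's actions directly.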

    Huang et al. [66] have designed a game-theoretic algorithm and a resource allocation protocol based on equilibrium derivations for intercellular Device-to-Device (D2D) resource allocation in CNs. This approach addresses the resource allocation problem for a D2D link in the common area of two neighboring cells according to the derivation and analysis of the Nash Equilibrium (NE). They have used the game-theoretic algorithm to study D2D communication conditions where the D2D link is placed in the overlapping area of two neighboring cells. According to the three proposed intercellular D2D scenarios, they have found that the presence of D2D links can make intercellular interference more severe. The experimental results have shown that the created resource allocation game model improves system performance over a static Cournot game [67], significantly increasing the sum rate and the sum rate gain.

    Huang et al. [68] have conducted a game-theoretic investigation of context-aware resource allocation for device-to-device communications in CC-IoT, in which resource allocation between the base stations of two cells is modeled as a cooperative game involving a device-to-device user pair. A scheme is designed for determining the bandwidth allocation at each base station and maximizing the overall performance of both stations in CC-IoT. This scheme has been combined with the protocol proposed in [66], and a context-aware device-to-device resource allocation scheme in CNs has been provided by selecting different approaches for various system conditions.

    Jie et al. [35] have modeled an optimal resource allocation problem using a two-stage Stackelberg game with interactions among three parties, i.e., cloud centers, fog service providers (fog nodes), and data users, in the FC-IIoT environment. They have presented three algorithms, namely the data users' best response algorithm, the optimal price strategy of all fog service providers, and the optimal bid strategy of the fog service provider, to achieve Stackelberg and Nash equilibria. As a result, applying the three algorithms, a Stackelberg equilibrium among data users, fog service providers, and cloud centers, as well as a Nash equilibrium among non-cooperative fog service providers, is achieved in the FC-IIoT environment.

    Song et al. [56] have utilized non-cooperative game, auction game, and Stackelberg game models for direct D2D communications, and cooperative game models, e.g., coalition formation games, for D2D Local Area Network (LAN) communications to solve radio resource allocation problems in D2D communication underlaying CNs. They have demonstrated the applications of game-theoretic models to radio resource allocation problems in D2D communication. According to the results, in the non-cooperative model each mobile makes its decision independently, which may cause intense data collisions. In the proposed coalition game-theoretic model, by contrast, the mobiles cooperate to increase the utility function, and the proposed model thus obtains better efficiency in cumulative service rate.

    Wei et al. [69] have suggested a Cloud resource Allocation Model based on an Imperfect Information Stackelberg Game (CSAM-IISG) for resource allocation in a cloud computing environment utilizing a hidden Markov model. With it, they have been able to raise the profits of both the resource provider and the user.

    Consequently, they have maximized profit for the infrastructure providers by designing a unit-price-based allocation model. They demonstrated that in the proposed game model the predicted price is near the actual transaction price but slightly below the actual value. Also, using the CSAM-IISG model, the bandwidth and CPU prices obtained were higher than in the traditional model and reached a Nash equilibrium. Moreover, the profit of the proposed model exceeded that of the traditional auction model for the same number of service providers. Finally, as the resources of infrastructure suppliers increase, the profit of the CSAM-IISG model improves more meaningfully than that of the traditional auction model, and the resource utilization of the infrastructure supplier is enhanced.

    Zhang et al. [70] have presented an interference-aware non-cooperative game-theoretic resource allocation framework introducing cross-tier/co-tier interference, incomplete channel state information, and energy harvesting in Simultaneous Wireless Information and Power Transfer (SWIPT)-enabled heterogeneous networks. In the proposed framework, the interference-aware resource allocation game includes two subgames: subchannel allocation and power allocation. The power allocation problem is formulated as a non-cooperative game with cross-tier/co-tier interference pricing under SWIPT in a heterogeneous small cell network. Also, subchannel allocation is formulated as a non-cooperative potential game that decreases the overall interference experienced by users. Iterative power optimization and iterative subchannel allocation algorithms have been designed to achieve the Nash equilibrium and reduce co-channel interference in small cells. They found that the proposed algorithms have much lower, polynomial complexity compared to the exhaustive search method. Regarding the capacity of the overall small cells, the convergence behavior of the proposed power optimization algorithm demonstrates that it converges after about four iterations, and the capacity improves as the number of small cells increases. Moreover, in an incomplete channel state information scenario with 50 small cells, the capacity of the small cells under the proposed algorithms is about 8% higher than under the fixed power scheme.
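    The iterative structure of such algorithms can be distilled into a best-response sketch; the log-rate utility, interference coefficient, and price below are assumptions, not the paper's exact formulation. Each player repeatedly maximizes its own utility given the other's power, and a fixed point of the joint best-response map is a Nash equilibrium.

```python
def best_response(q, c=0.5, noise=1.0, a=0.5, p_max=4.0):
    # Player utility: log(1 + p / (noise + a*q)) - c*p, concave in p.
    # Its unconstrained maximizer is p* = 1/c - (noise + a*q), clipped to [0, p_max].
    p_star = 1.0 / c - (noise + a * q)
    return min(max(p_star, 0.0), p_max)

def iterate_to_nash(p0=1.0, p1=1.0, iters=100):
    # Repeated simultaneous best responses; a fixed point is a Nash equilibrium.
    for _ in range(iters):
        p0, p1 = best_response(p1), best_response(p0)
    return p0, p1

p0, p1 = iterate_to_nash()
print(round(p0, 4), round(p1, 4))  # 0.6667 0.6667
```

    For these parameters the unique fixed point is p = 2/3 for each player; because the interference coefficient a = 0.5 makes the best-response map a contraction, the iteration converges regardless of the starting point.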

    Zhang et al. [71] have developed a distributed Joint Computation Offloading and Resource Allocation Optimization (JCORAO) scheme based on a distributed potential game in Heterogeneous Networks with Mobile Edge Computing (HetNets-MEC). They have also designed a sub-algorithm, called the cloud and wireless resource allocation algorithm, for the joint allocation of uplink sub-channels, uplink transmit power, and computing resources to offloading mobile terminals. The evaluation results have illustrated that the distributed JCORAO scheme provides better overall cost than the Local Execution Completely Algorithm (LECA), the cloud execution completely algorithm, and centralized JCORAO, while the LECA algorithm is less complex than the others. Furthermore, the distributed JCORAO scheme consumes the least energy and requires the least task completion time compared to the distributed computation offloading algorithm [72] and the energy-efficient dynamic offloading and resource scheduling scheme [73]. In other words, it has been shown that fractional frequency reuse based on the Hungarian method and graph coloring is more effective than the uniform zero frequency reuse method in reducing interference between neighboring mobile terminals. Finally, the existence of a Nash equilibrium has been proven in the HetNets-MEC environment.

    Li et al. [74] have developed a vehicular edge framework based on the Stackelberg game model to analyze the pricing problem of FNs and the data resource strategy for Data Service Agents (DSAs) via a distributed iteration algorithm, called the dynamic service area partitioning algorithm, to balance the DSA load and improve QoS. In this approach, they have combined MEC with the Internet of Vehicles, i.e., Vehicular Edge Computing (VEC). Simulation results have illustrated that the suggested framework can guarantee allocation efficiency for FN resources among vehicles. Also, using this framework, the optimal strategy has been obtained for the participants, and the subgame reaches a Nash equilibrium in the VEC environment. Moreover, their results demonstrate that the density of vehicles and the resource occupancy rate can affect the service area of the data service agent: the greater the vehicle density, the fewer the resources accessible to the server.

    Pillai and Rao [75] have developed a resource allocation mechanism, a coalition-formation model based on the uncertainty principle of game theory, for machines in the cloud computing environment. The experimental results have indicated that the proposed coalition-formation mechanism, designed for requests with unspecified task information, provides better utilization of resources for the machines on the cloud. According to the evaluations, the proposed mechanism achieves better performance by reducing task allocation time and resource wastage and increasing request satisfaction. Moreover, solving the optimization problem this way avoids the complexity of integer programming for coalition formation in a CC environment.

    Xu and Yu [76] have used a game-theoretic algorithm, a finite extensive game with complete information, to model the multiple resource allocation problem at the virtual machine level in CC. This resource allocation algorithm, called the Fairness-Utilization trade-off Game Algorithm (FUGA), supports fair resource allocation for users and efficient utilization of resources on each physical server. The experimental results have illustrated that the FUGA is superior to the Hadoop scheduler regarding fair resource allocation, and it can also allocate resources more efficiently compared to the allocation mechanism of the Google cluster and the First-Fit Algorithm. Finally, the FUGA leads to a Nash equilibrium decision in the CC environment.

    Zhou et al. [77] have investigated optimal computing resource allocation and task assignment problems in VFC. They have proposed an effective incentive mechanism based on a contract-theoretic model to encourage vehicles to share their resources. They have also developed a pricing-based stable contract-matching algorithm to address the task assignment problem. As a result, the proposed incentive mechanism improves social welfare and achieves near-optimal performance even under information asymmetry. Also, the suggested task assignment approach achieves low overall network delay, close to that of the optimal exhaustive search algorithm, while the total computational complexity is reduced significantly.

    In the last study, Wang et al. [78] have examined a two-user scenario of a non-orthogonal multiple access (NOMA)-based MEC network. They considered the users as leaders and the MEC server as the follower, and modeled a Stackelberg game based on the interaction between energy consumption and latency. In this game, the leaders aim to reduce the overall energy consumption of task offloading and local computing by optimizing the task assignment coefficients and transmit power. The follower's goal, on the other hand, is to decrease the overall execution time by assigning diverse computational resources to process the offloaded tasks. Moreover, a multi-user scenario has been handled by allocating the users to various sub-channels with a matching-based user pairing algorithm. Closed-Form Solutions (CFS) were derived for the optimization variables to achieve low-complexity solutions. The results demonstrate that the overall energy consumption decreased dramatically using the derived optimal solutions in the two-user scenario. As the maximum tolerance increases, the energy consumption of the CFS decreases, and more tasks are offloaded to the MEC server at a reduced rate. They also found that the energy consumption of the computing preference was affected when the maximum tolerance time equals 0.01 and 0.02 s. In the multi-user scenario, the user pairing algorithm can decrease energy consumption by allocating users to suitable sub-channels. In particular, they compared the average execution time of the user pairing algorithm and an exhaustive search-based algorithm and found that the suggested pairing algorithm can serve as a low-complexity, near-optimal algorithm.

    Nowadays, many consumers require resources such as devices, memory, processors, operating systems, etc., under a pay-as-you-go cost model, and meeting these requirements is extremely challenging.

    For instance, providers need to use Infrastructure as a Service to meet these requirements in a cloud computing environment. Meanwhile, the efficiency and effectiveness of cloud computing are directly tied to the effective utilization of resources. The main challenge in the cloud environment is that consumers' requests for access to unspecified resources keep changing, which makes resource request management more difficult. Hence, scheduling algorithms such as Round Robin, Throttled, etc., have been utilized for resource management and task scheduling. These algorithms are rule-based and suffer from under-utilization of resources, leading to overall delays in executing tasks. Furthermore, evolutionary optimization techniques such as particle swarm optimization, ant colony optimization, Cat Swarm Optimization (CSO), etc., are more appropriate for solving resource allocation problems, which are NP-hard [79,80,81,82].

    Following the computing environments specified in Figure 3, we summarize the state-of-the-art evolutionary optimization methods for resource allocation problems available in the literature.

    In a study by Lee et al. [83], a Heuristic Genetic Algorithm (HGA) has been presented to solve the resource allocation problem, i.e., allocating resources to activities so as to maximize a fitness measure. Various genetic algorithms have been investigated, and a heuristic genetic algorithm has been developed to improve the convergence rate on resource allocation problems. The results have indicated that, in terms of search efficiency, the proposed algorithm performs best compared to existing search algorithms such as the Order Representation Approach (ORA) and Simulated Annealing (SA). It should be noted that their study did not consider a network computing environment.

    Lee and Lee [84] have proposed a hybrid search algorithm with heuristics to address resource allocation problems, combining the advantages of the Ant Colony Optimization (ACO) algorithm and the Genetic Algorithm (GA) to explore the search space and obtain the best solution. The results have demonstrated that the suggested hybrid approach achieves better performance than evolutionary algorithms such as ACO, GA, and Simulated Annealing. It should be noted that their study was conducted without considering network computing environments.
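    The GA machinery shared by these approaches can be sketched on a small task-to-machine allocation instance; the encoding, operators, parameters, and task data below are illustrative choices, not those of any surveyed algorithm. A chromosome assigns each task to a machine, and fitness is the makespan to be minimized.

```python
import random

random.seed(42)
TASKS = [4, 7, 2, 9, 3, 6, 5, 8]   # task run times (assumed data)
MACHINES = 3

def makespan(chrom):
    # chrom[i] = machine assigned to task i; makespan = heaviest machine load.
    loads = [0] * MACHINES
    for task, m in zip(TASKS, chrom):
        loads[m] += task
    return max(loads)

def evolve(pop_size=40, generations=120, mutation=0.1):
    pop = [[random.randrange(MACHINES) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        elite = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(TASKS))    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:           # point mutation
                child[random.randrange(len(TASKS))] = random.randrange(MACHINES)
            children.append(child)
        pop = elite + children
    return min(pop, key=makespan)

best = evolve()
print(makespan(best))  # lower bound for this instance is 15
```

    For this instance the total load is 44 over 3 machines, so the makespan lower bound of 15 is achievable (e.g., {9, 6}, {8, 7}, {4, 2, 3, 5}), and the search closes in on it.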

    In research by Liu et al. [85], the definition and architecture of FC and its differences from other similar computing platforms have been studied. In the first stage, the architecture of FC has been presented from both the computational and network aspects. In the second stage, a framework has been proposed for resource allocation and delay reduction. Privacy and fault tolerance have also been taken into account in this framework through the optimization methods. Finally, the proposed framework has been evaluated using an application scenario and a genetic algorithm combined with a Dirichlet distribution approach in the FC environment. As a result, the proposed approach performs best in terms of reducing latency, optimizing resource allocation, and scheduling subtasks.

    Rafique et al. [34] have presented a Novel Bio-Inspired Hybrid Algorithm (NBIHA), including a combination of Modified Particle Swarm Optimization (MPSO) algorithm and Modified Cat Swarm Optimization (MCSO) algorithm. The developed method consists of task scheduling and resource allocation in fog computing to optimize resource utilization and minimize response time and processing costs. MPSO algorithm is used for task scheduling, and as a result, an effective load balance is created between fog nodes. Also, a combination of bio-inspired algorithms is utilized to attain efficient resource allocation. Experimental results have shown that the proposed method leads to the optimization of resource utilization and reduction of response time and power consumption compared to the latest benchmark scheduling algorithms such as the first come first served algorithm and shortest job first algorithm.
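    The particle-swarm mechanics underlying such MPSO variants can be sketched on a continuous toy problem: split one unit of CPU among three tasks to minimize a response-time-style cost sum(w_i / share_i). The weights and all PSO parameters below are assumptions for illustration, not the NBIHA's settings.

```python
import random

random.seed(1)
W = [1.0, 4.0, 9.0]   # task weights (assumed data)

def cost(x):
    # Shares are clipped positive, then normalized onto the simplex.
    shares = [max(v, 1e-6) for v in x]
    total = sum(shares)
    return sum(w * total / s for w, s in zip(W, shares))

def pso(n=30, iters=300, inertia=0.7, c1=1.5, c2=1.5):
    dim = len(W)
    pos = [[random.random() for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=cost)[:]             # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

g = pso()
print(round(cost(g), 2))  # approaches the analytic optimum 36.0
```

    By a Lagrangian argument the optimal shares are proportional to sqrt(w_i), i.e., (1/6, 2/6, 3/6) with cost 36, which the swarm approaches; the surveyed MPSO variants apply the same velocity/position update to discrete task-scheduling encodings.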

    In a study by Kim and Ko [86], a service resource allocation approach has been proposed to minimize data transfer between users' mobile devices and to efficiently address the restrictions of IoT environments. In this research, the resource allocation problem is cast as a degree-constrained minimum spanning tree problem. A genetic algorithm has been used to decrease the time required to find a near-optimal solution, and a fitness function and an encoding method have been defined to use the genetic algorithm effectively. According to this study, the proposed approach reaches a success rate of 97% in achieving near-optimal solutions, while requiring much less time than the brute-force scheme.

    Li et al. [24] have investigated task scheduling and heterogeneous resource allocation problems for multiple devices in the FC-IoT environment. In their study, a Non-Orthogonal Multiple Access (NOMA) deployment in IoT networks has been considered to support a large number of device connections and transmit large amounts of data with low latency and limited resources. It enables multiple IoT devices to send data simultaneously while optimizing the resource block allocation and transmit power of the various IoT devices with respect to the related QoS requirements. In addition, the optimization problem has been formulated as a mixed-integer nonlinear programming problem to reduce the system energy consumption. As this problem is NP-hard, an Improved Genetic Algorithm (IGA) has been introduced to deal with it. Simulation results have shown that the suggested approach performs effectively in terms of energy consumption, outage probability, average delay, and throughput in the FC-IoT environment.

    In a study by Chimakurthi [87], the problem of QoS-constrained resource allocation has been considered. Consumers desire to host their applications on a cloud provider with certain service level agreements on performance metrics such as response time and throughput. Because the data centers hosting these applications consume large amounts of energy and incur high operating costs, an energy-efficiency mechanism based on the ant colony framework has been proposed that allocates cloud resources to applications without breaking service level agreements.
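    The ant-colony idea behind such energy-aware allocation can be sketched as a consolidation problem: pack application loads onto as few servers as possible, since fewer active servers means less energy. The loads, capacity, and pheromone parameters below are illustrative assumptions, not the framework in [87].

```python
import random

random.seed(7)
LOADS = [5, 4, 3, 3, 2, 2, 1]   # assumed CPU demands of applications
CAPACITY = 10                   # per-server capacity
SERVERS = len(LOADS)            # worst case: one server per application

def construct(pheromone):
    # One ant builds an assignment, choosing a feasible server for each
    # application with probability proportional to its pheromone.
    loads = [0] * SERVERS
    assign = []
    for app, demand in enumerate(LOADS):
        feasible = [s for s in range(SERVERS) if loads[s] + demand <= CAPACITY]
        weights = [pheromone[app][s] for s in feasible]
        s = random.choices(feasible, weights=weights)[0]
        loads[s] += demand
        assign.append(s)
    return assign

def active_servers(assign):
    return len(set(assign))

def aco(ants=20, rounds=60, rho=0.1):
    pheromone = [[1.0] * SERVERS for _ in LOADS]
    best = list(range(SERVERS))          # trivial start: one app per server
    for _ in range(rounds):
        for _ in range(ants):
            a = construct(pheromone)
            if active_servers(a) < active_servers(best):
                best = a
        for row in pheromone:            # evaporation
            for s in range(SERVERS):
                row[s] *= 1.0 - rho
        for app, s in enumerate(best):   # reinforce the best-so-far packing
            pheromone[app][s] += 1.0
    return best

best = aco()
print(active_servers(best))
```

    Here the total load of 20 fits into two servers of capacity 10 (e.g., {5, 4, 1} and {3, 3, 2, 2}), so the best reachable count is 2; evaporation plus reinforcement of the best-so-far solution is the core ACO loop.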

    Han et al. [88] have proposed an online optimizer based on genetic algorithms for network slicing in 5G networks. The presented method encodes the slicing strategies into binary sequences to deal with the request-and-decision mechanism without prior knowledge of the utility model. Consequently, it supports heterogeneous slicing with high efficiency and strong robustness under high scalability and non-stationary service scenarios.

    Tang et al. [89] have investigated the problem of optimizing energy efficiency for the downlink of two-tier HetNets consisting of a single macro-cell and multiple pico-cells. The energy-efficient resource allocation problem is a mixed combinatorial and non-convex optimization problem that is very difficult to solve. Therefore, to reduce the computational complexity, the main problem with multiple inequality constraints has been decomposed into multiple optimization problems with single inequality constraints. A two-layer resource allocation algorithm based on the quasiconcavity of energy efficiency has been proposed, in which an inner layer obtains the maximum energy efficiency for a specified achievable rate, and an outer layer reaches the optimal energy efficiency through a gradient-based algorithm. Simulation results have confirmed the theoretical results and have shown that the suggested resource allocation algorithm can effectively achieve the desired energy efficiency in HetNets.

    In a study by Liu et al. [90], a genetic algorithm has been presented for resource allocation. In the proposed model, a new crossover operator has been developed to prevent the production of illegal chromosomes. Their model is capable of providing an optimal solution for the resource allocation problem. It should be noted that they have not considered the network computing environment for the resource allocation problem.

    In a study by Zhang et al. [91], a Joint cloud and wireless Resource Allocation algorithm based on Evolutionary Game (JRA-EG) has been developed, considering the energy consumption and time delay of mobile terminals as well as the monetary cost in the MEC environment. In their study, the stability of the evolutionary game model has been analyzed, and the evolutionary equilibrium is obtained by the replicator dynamics method in the MEC. The simulation results have illustrated that the JRA-EG algorithm converges to the evolutionary equilibrium very quickly and, in comparison with existing algorithms, can reduce time delay and energy consumption as the size of the input data grows.
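    The replicator-dynamics step used by evolutionary-game schemes such as JRA-EG can be sketched as follows; the linear congestion payoffs are assumed for illustration, not taken from [91]. A population of terminals chooses between two servers whose payoff falls as more users crowd onto them, and the share of a strategy grows whenever its payoff exceeds the population average.

```python
def payoff_a(x):      # payoff of server A when a fraction x uses it
    return 10.0 - 8.0 * x

def payoff_b(x):      # payoff of server B when a fraction 1 - x uses it
    return 8.0 - 4.0 * (1.0 - x)

def replicator(x0=0.9, step=0.01, iters=5000):
    x = x0
    for _ in range(iters):
        fa, fb = payoff_a(x), payoff_b(x)
        avg = x * fa + (1.0 - x) * fb
        x += step * x * (fa - avg)    # replicator update for strategy A
        x = min(max(x, 0.0), 1.0)
    return x

x_eq = replicator()
# Evolutionary equilibrium: payoff_a(x) == payoff_b(x)  =>  10 - 8x = 4 + 4x  =>  x = 0.5
print(round(x_eq, 3))  # 0.5
```

    The fixed point where the two payoffs equalize is exactly the evolutionary equilibrium the stability analysis in [91] targets.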

    Zheng and Wang [92] have proposed a Pareto-based Fruit Fly Optimization Algorithm (PFOA) to address the problem of Task Scheduling and Resource Allocating (TSRA) in the Cloud Computing (CC) environment. First, a heuristic based on the least-cost property has been proposed to initialize the population. Second, a resource reassignment operator has been designed to generate non-dominated solutions. Third, a critical path-based search operator has been designed to enhance the exploitation capability. Furthermore, a non-dominated sorting method based on the concept of Pareto optimality has been applied, and the PFOA uses visual memory to handle the multiple objectives of the TSRA problem. Lastly, the efficiency of the PFOA has been illustrated through comparative results and statistical analysis on several test examples in the CC environment. As a result, using the proposed PFOA, the makespan (Cmax) is decreased by 13.5% in instance one and 41.2% in instance two, and the total cost is reduced by 3.2% in instance one and 8.8% in instance two.
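    The non-dominated filtering at the heart of Pareto-based methods such as PFOA can be sketched in a few lines; the (makespan, cost) values are assumed data. A solution survives only if no other solution is at least as good in both objectives and strictly better in one.

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (makespan, cost) of candidate schedules -- illustrative values:
candidates = [(10, 9), (8, 12), (12, 7), (11, 10), (11, 11)]
print(pareto_front(candidates))  # [(10, 9), (8, 12), (12, 7)]
```

    The surviving set is the Pareto front the optimizer maintains; (11, 10) and (11, 11) are discarded because (10, 9) dominates both.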

    Guddeti and Buyya [93] have proposed a new hybrid Bio-Inspired heuristic algorithm, including MPSO combined with MCSO algorithm. The MPSO algorithm has been utilized for task scheduling and allocating to the virtual machines effectively. The hybrid MPSO-MCSO algorithm has been used for resource allocation and management based on the demanded tasks in the cloud computing environment.

    As a result, statistical hypothesis analysis shows that, compared to previous research and benchmark algorithms such as an exact branch-and-bound algorithm, Round Robin (RR), CSO, MPSO, and ACO, the proposed hybrid algorithm performs better: it improves reliability and flexibility, reduces execution time and average response time, and increases the efficient utilization of cloud resources by almost 12%. Moreover, the presented MPSO algorithm is more effective in terms of task scheduling in comparison with the mentioned algorithms in the CC environment.

    Arianyan et al. [94] have proposed an effective resource allocation scheme using genetic algorithms in cloud computing. In the experiments, they have considered various parameters influencing the final decision as inputs to the resource allocation algorithm and have examined their impact in different test scenarios. The results have shown that considering the Virtual Machine (VM) parameter in the decision algorithm saves more operating and maintenance costs. Also, the "best fit" scenario tends to leave more Physical Machines (PMs) vacant and turn them off, while the "most available resource" scenario tends to map VMs onto PMs with more available resources to reduce applications' response time.

    Beloglazov et al. [95] have investigated energy-aware resource allocation for effective management of data centers in cloud computing by presenting two algorithms, called Modified Best Fit Decreasing (MBFD) and Minimization of Migrations (MM) algorithms to achieve the minimum processor capacity for the selected virtual machine. Furthermore, compared to static resource allocation techniques, their approach reduces energy consumption costs associated with cloud data centers, which ultimately prevents breaches of service level agreements.
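    The MBFD idea can be sketched as follows: sort VMs by decreasing CPU demand and place each on the host whose power draw would increase the least. The linear power model and all numbers here are illustrative assumptions rather than the paper's exact parameters.

```python
P_IDLE, P_MAX, HOST_CAP = 100.0, 250.0, 100.0   # watts, CPU units (assumed)

def power(load):
    # Linear model: idle power once a host is on, plus a load-proportional term.
    return 0.0 if load == 0 else P_IDLE + (P_MAX - P_IDLE) * load / HOST_CAP

def mbfd(vms, hosts):
    loads = [0.0] * hosts
    placement = {}
    # Best Fit Decreasing: largest VMs first (assumes capacity suffices).
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        best, best_delta = None, float("inf")
        for h in range(hosts):
            if loads[h] + demand <= HOST_CAP:
                delta = power(loads[h] + demand) - power(loads[h])
                if delta < best_delta:
                    best, best_delta = h, delta
        loads[best] += demand
        placement[vm] = best
    return placement, loads

placement, loads = mbfd({"vm1": 60, "vm2": 30, "vm3": 25, "vm4": 20}, hosts=3)
print(placement)  # vm1, vm2 -> host 0; vm3, vm4 -> host 1; host 2 stays off
```

    Because switching on an empty host costs its full idle power, the rule naturally consolidates VMs onto already-active hosts, which is where the energy saving comes from.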

    Cao et al. [96] have developed a Cost-Oriented Model (COM) for optimal cloud computing resource allocation applying comprehensive numerical cases based on an actual pricing system for Amazon's public cloud for demand-side management. They have considered load specifications of computing applications and samples' features for cloud computing. To solve the COM model of the optimization problem, they have utilized two algorithms, including Modified Priority List (MPL) and Simulated Annealing (SA) algorithms. The MPL algorithm performs much more efficiently than the SA algorithm in solving the COM model with or without considering uncertainty. Also, the proposed COM model has acceptable performance in terms of minimum operational costs in the field of smart grids.

    Mata and Guardieiro [97] have proposed a GA-based approach for resource allocation in the LTE uplink. The performance of the presented algorithm has been compared to uplink scheduling algorithms such as Recursive Maximum Expansion (RME), Riding Peaks (RP), and RR in a simulation environment with video chat transfer scenarios. The results have demonstrated that the GA-based approach provides better video quality in the evaluation scenarios than the scheduling algorithms, and that the proposed GA can be an important tool for LTE uplink resource allocation across all the evaluated variables. Moreover, the GA performs better than RME, RP, and RR when the cumulative distribution function of the end-to-end delay is examined: with the GA, nearly 90% of the 60 users studied experience an end-to-end delay of up to 250 ms, whereas this value drops to 40% for the RP and RME algorithms and to 7% for RR.

    Hachicha et al. [98] have used a genetic algorithm for QoS-aware configurable resource allocation in cloud-based business processes, one that best fits tenant requirements regarding elasticity and shareability and optimizes the quality-of-service characteristics. The experimental results have illustrated that the suggested GA-based algorithm performs better than linear integer programming in scenarios with a large number of cloud resources. Moreover, when the number of cloud resources reaches 16–20, the proposed algorithm keeps the computation time nearly constant, whereas the computation time of linear integer programming grows very rapidly.

    Ma et al. [99] have proposed a fog computing model based on the multi-layer IoT, called the IoT-based Fog Computing Model (IoT-FCM), which allocates resources between the fog layer and the terminal layer using a genetic algorithm. They have used a multi-sink version of the Least Interference Beaconing Protocol (LIBP) [100,101,102] to decrease energy consumption and increase fault tolerance in the terminal layer. The simulation results have shown that the IoT-FCM model performs better than two max-min algorithms, namely a fog-oriented max-min algorithm and the conventional max-min algorithm. This model reduces the distance between fog nodes and terminals by 38% compared to the fog-oriented max-min algorithm and by 55% compared to the conventional max-min algorithm. In addition, the proposed model decreases energy consumption by 150 kWh compared to the two max-min algorithms and modifies LIBP by adding multiple sinks.

    Kumar et al. [103] have used a bio-inspired Cuckoo search algorithm for performance and power-aware resource allocation in a cloud computing environment to improve energy efficiency for cloud infrastructures. According to the simulation results, the developed approach compared to the first-fit decreasing algorithm leads to approximately 12% savings in energy consumption and improves the QoS.

    Rao and Cornelio [104] have proposed an optimal resource allocation approach for data-intensive workloads utilizing Topology-Aware Resource Allocation (TARA) and have applied an architecture under the "what if" methodology [105] using IaaS to address the optimal resource allocation problem in the cloud computing environment. The architecture has used a prediction engine with a MapReduce simulator to estimate the performance of a specific resource allocation. An evolutionary algorithm has been presented including a genetic algorithm and an evolution strategies algorithm to achieve an optimal solution in an ample search space, which leads to an optimal resource allocation with high reliability and low latency.

    The proposed TARA approach reduces job completion time compared to simple allocation policies and makes resource allocation more efficient owing to the two levels of algorithms performed in the prediction engine.

    Akintoye and Bagula [106] have improved the QoS level in the Cloud/Fog Computing (CC/FC) environment using an effective resource allocation strategy built on two models. They have used the Hungarian Algorithm-Based Binding Policy (HABBP) model as an innovative solution to the linear programming problem, namely a load balancing policy for binding cloudlets to virtual machines. Another model, called Genetic Algorithm-Based Virtual Machine Placement (GABVMP), has been applied to solve and optimize VM placement in the cloud computing environment. The results have shown that GABVMP achieves better performance than the greedy heuristic algorithms, i.e., the first-fit placement algorithm and the random placement algorithm, in terms of the consumption of Physical Machine-Switch links, which underlies the cost of placing VMs on PMs in the data center. In general, the quality of service has been improved with respect to the allocation cost in the CC/FC environment.

    Tsai [107] has proposed an efficient algorithm called Search Economics (SE) for IoT Resource Allocation (SEIRA) to solve resource allocation problems. For the hybrid SEIRA algorithm, he has utilized the concepts of meta-heuristic algorithm, i.e., SE [108], and data clustering algorithm, i.e., K-means [109], to deal with IoT resource allocation problems and to decrease the total communication cost between devices and gateways, as well as computational time during the convergence process for the meta-heuristic algorithm used in an IoT environment. The simulation results have demonstrated that the suggested hybrid SEIRA algorithm provides much better performance than the resource allocation algorithms, e.g., Simulated Annealing algorithm and Genetic Algorithm, regarding total data communication costs.
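The K-means step of the hybrid SEIRA approach can be illustrated with a pure-Python Lloyd iteration that groups IoT devices around gateways, so that each device communicates with a nearby gateway; the device positions and initial gateway locations below are hypothetical:

```python
def kmeans(points, centers, iters=20):
    """Lloyd-style K-means: repeatedly assign each device position to its
    nearest gateway (center), then move each gateway to the centroid of
    its cluster, shortening device-to-gateway communication distances."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: (p[0] - centers[i][0]) ** 2
                                      + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two tight groups of hypothetical device positions, two gateways.
devices = [(0, 0), (1, 0), (0, 1), (9, 9), (10, 9), (9, 10)]
centers, clusters = kmeans(devices, [(0, 0), (10, 10)])
print(centers)  # gateways settle near each device cluster's centroid
```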

    Sangaiah et al. [110] have used the Whale Optimization Algorithm (WOA) [111] to deal with resource allocation problems in the IoT, optimizing resource allocation and decreasing the overall communication cost between gateways and resources. Comparisons of the presented algorithm with the Genetic Algorithm and the SEIRA algorithm have indicated the proper performance of the WOA: across different benchmarks, the WOA outperforms the two mentioned algorithms regarding total communication cost in the IoT environment.

    Chaharsooghi and Kermani [112] have used a modified ACO algorithm [113] for a Multi-Objective Resource Allocation Problem (MORAP). They have tried to improve the efficiency and effectiveness of the algorithm by enhancing the learning of ants. Comparing the developed ACO algorithm with the Hybrid Genetic Algorithm (HGA) previously applied to MORAP has shown that the ACO-based method performs better than the genetic algorithm on a group of MORAP problems. Also, the presented ACO algorithm is better than the HGA in 50% of the non-dominated solutions. It should be noted that they have not considered the network computing environment for the multi-objective resource allocation problem.
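Since MORAP results are judged by their non-dominated solutions, a minimal Pareto filter makes the comparison criterion concrete; the (cost, makespan) pairs below are hypothetical outputs of two allocators:

```python
def non_dominated(solutions):
    """Return the non-dominated (Pareto) set for a minimization problem:
    a solution dominates another if it is no worse on every objective
    and strictly better on at least one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (cost, makespan) pairs; lower is better on both axes.
front = non_dominated([(3, 9), (4, 5), (6, 4), (5, 6), (7, 8)])
print(front)  # -> [(3, 9), (4, 5), (6, 4)]
```

Reporting "better in 50% of the non-dominated solutions" means that half of one algorithm's Pareto front dominates or matches the other's.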

    Choi and Lim [114] have proposed a new winner determination algorithm that accounts for the deadline constraints of jobs in a combinatorial auction mechanism, aiming at efficient resource allocation and reduction of the penalty cost in the CC-IoT. In their study, the execution time constraint of each job has been considered as the service level agreement constraint in the system. In the proposed system, the winners in each auction round are determined by job necessity according to the execution time deadline. The proposed winner determination approach has been compared with the typical mechanism. The results have indicated that the proposed optimization approach maximizes the provider's profit by considering penalty costs for service level agreement violations, and also reduces the penalty cost by considering execution time limits for optimal resource allocation in the cloud computing environment, using actual workload data.
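A deadline-aware winner determination rule of this flavor can be sketched as a greedy pass over the bids. This is a simplified stand-in for the combinatorial-auction mechanism, not the authors' algorithm, and the bid values are hypothetical:

```python
def determine_winners(bids, capacity):
    """Greedy deadline-aware winner determination: bids are tuples of
    (bidder, price, demand, deadline). Jobs with earlier deadlines win
    first, ties broken by higher price; winners consume provider capacity."""
    winners = []
    for bidder, price, demand, deadline in sorted(
            bids, key=lambda b: (b[3], -b[1])):
        if demand <= capacity:
            winners.append(bidder)
            capacity -= demand
    return winners

# Hypothetical bids: (bidder, price, demand, deadline).
bids = [("A", 10, 4, 5), ("B", 8, 3, 2), ("C", 12, 6, 2), ("D", 5, 1, 9)]
print(determine_winners(bids, 10))  # -> ['C', 'B', 'D']
```

Prioritizing urgent jobs reduces the expected penalty cost from missed execution time deadlines.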

    Yan et al. [115] have proposed two optimal heterogeneous resource allocation schemes for asynchronous Multiple Targets Tracking (MTT) applications in HRNs. The first scheme aims to minimize the overall resource consumption while achieving the predefined MTT Bayesian Cramér-Rao Lower Bound (BCRLB) thresholds and satisfying the system resource budgets. The second scheme is designed to minimize the overall MTT BCRLB and maximize the total MTT accuracy for the given resource budgets. The heterogeneous resource allocation schemes are then formulated as two convex optimization problems, and two methods, the dual ascent method and the block coordinate descent method, are designed to solve them. As a result, the heterogeneous resource allocation procedures may reach a smaller overall MTT BCRLB for specified resource budgets, or need fewer resources to provide identical tracking performance for multiple targets in HRNs.
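As a toy stand-in for the second scheme (minimizing tracking error under a resource budget, with a made-up separable objective rather than the actual BCRLB expressions), block coordinate descent over pairs of coordinates looks like this:

```python
def allocate(a, budget, iters=50):
    """Block coordinate descent for min sum(a_i / x_i) s.t. sum(x) = budget:
    tracking error a_i / x_i falls as more resource x_i goes to target i.
    Each step optimizes one pair of coordinates, keeping their sum fixed;
    minimizing a_i/x_i + a_j/(s - x_i) has the closed form below."""
    n = len(a)
    x = [budget / n] * n
    for _ in range(iters):
        for i in range(n):
            for j in range(i + 1, n):
                s = x[i] + x[j]
                r = a[i] ** 0.5 / (a[i] ** 0.5 + a[j] ** 0.5)
                x[i], x[j] = r * s, (1 - r) * s
    return x

# The optimum allocates resource proportional to sqrt(a_i).
x = allocate([1.0, 4.0], budget=3.0)
print(x)  # -> close to [1.0, 2.0]
```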

    The resource allocation problem in computing environments is one of the categories of resource management [39]. Resource allocation helps execute complex and extensive tasks that need large-scale computing using the computing resources available in the network. The most important feature of a distributed computing environment is providing fair resource allocation for resource users; the challenge that computing system users face is to utilize resources in real time and fairly. Service providers in these systems use various approaches to address problems such as resource utilization delay, high costs, and high jitter, as well as to select the appropriate machines and platforms that must be installed in computing systems to increase the quality of service. Moreover, an efficient resource allocation algorithm assigns resources to tasks so that energy efficiency in the data centers is increased and the makespan is decreased by reducing latency. Due to the increasing growth of new computing technologies, many of these problems have been solved to a large extent. In this article, we have investigated many of these computing technologies in the field of resource allocation using two categories of approaches: game theory and evolutionary optimization.

    Consequently, in this section, the articles on resource allocation in different computing environments are reported. Initially, the number of survey articles published in the field of resource allocation is presented by publication year in Figure 4. Then, we deal with the applications of the two categories of game theory and evolutionary optimization approaches for resource allocation in various computing environments.

    Figure 4.  The number of survey articles on resource allocation between 2009 and 2021.

    To the best of our knowledge, we have reviewed all the published survey articles on resource allocation optimization. According to Figure 4, the first survey article was published in 2009 and the latest in 2021. Figure 4 also shows the importance of resource allocation, considering the number of published articles. Most of the published survey articles on resource allocation date back to 2014, when four articles were published. From 2014 onwards, the number of published articles gradually decreased until 2021, in which only one survey article on resource allocation was published.

    One of the critical specifications of our survey article compared to previous survey articles on resource allocation is that we have investigated a variety of computing environments. However, in most of the recently published survey articles, only one distributed computing environment has been considered. A summary of the published survey articles on resource allocation is given in Table 1.

    Table 1.  Review of the published survey articles on resource allocation problem.
    No. Authors Objectives of the resource allocation problem Limitations Computing Environments
    1 Goyal and Dadizadeh, [45] Discussing the implementation of parallel processing frameworks (e.g., Microsoft's Dryad and Google's MapReduce frameworks) Only a cloud computing environment has been used. Cloud Computing
    2 Hameed et al., [40], Beloglazov et al., [41], and Shuja et al., [42] Classification of articles based on efficient energy Only a cloud computing environment has been used. Cloud Computing
    3 Vinothina et al., [50] Classification of different strategies and resource allocation challenges and their effects on the cloud system Only a cloud computing environment has been used. Cloud Computing
    4 Mohan and Raj, [7] Presenting a variety of network resource allocation approaches and their applications in the cloud computing environment Only a cloud computing environment has been used. Cloud Computing
    5 Aceto et al., [43] Review of articles in terms of resource monitoring for cloud computing systems Only a cloud computing environment has been used. Cloud Computing
    6 Hussain et al., [46] Review of the work process of commercial cloud service providers and open-source cloud deployment solutions Only a cloud computing environment has been used. Cloud Computing
    7 Huang et al., [47] Review of dynamic resource allocation and task scheduling schemes for SaaS-based cloud computing system Only a cloud computing environment has been used. Cloud Computing
    8 Anuradha and Sumathi, [51] Review of resource allocation approaches and techniques in cloud computing Only a cloud computing environment has been used. Cloud Computing
    9 Mohamaddiah et al., [33] Review of resource management, resource allocation, and resource monitoring strategies, as well as methods to address resource allocation problems in the cloud computing environment Only a cloud computing environment has been used. Cloud Computing
    10 Manvi and Shyam, [53] Review of important resource management methods, including resource provisioning, resource allocation, and resource adaption, and a comprehensive review of the techniques for IaaS in cloud computing Only a cloud computing environment has been used. Cloud Computing
    11 Jennings and Stadler, [44] Providing a framework for managing cloud resources Only a cloud computing environment has been used. Cloud Computing
    12 Ahmad et al., [48,49] Study of virtual machine migration optimization features for cloud data service operators Only a cloud computing environment has been used. Cloud Computing
    13 Castaneda et al., [52] Presenting methods to achieve a joint optimization task in the downlink of MU-MIMO communication systems No computing environment has been considered. None
    14 Yousafzai et al., [38] Utilizing a classification with features, including optimization goals, design strategies, optimization approaches, and valuable performances Only a cloud computing environment has been used. Cloud Computing
    15 Ghobaei et al., [39] Investigation of resource management schemes in six classes, including application placement, resource scheduling, load balancing, task offloading, resource provisioning, and resource allocation Only a fog computing environment has been used. Fog Computing
    16 Su et al., [54] Investigation of methods and models of resource allocation algorithms in 5G network slicing as well as studying MO architecture of network slicing by providing a basic framework of resource allocation algorithms Only 5G communication networks have been used but the different resource allocation problems were not investigated. 5G Network
    17 Saeik et al., [55] Consideration of three leading solutions, including mathematical optimization, artificial intelligence, and control theory, to simplify task offloading problems by joining Edge and/or Cloud In particular, their study did not investigate any survey paper on resource allocation problems in computing environments. EC, CC
    18 In this article Investigation of resource allocation approaches, including game theory and evolutionary optimization for resource allocation problems such as QoS, energy and power consumption, cost, computational complexity, average response time, Nash equilibrium, reliability, flexibility, and scalability, in various computing environments The control theory approaches, including linear optimal control theory (i.e., Linear Quadratic Regulator) and State Feedback Control, have not been investigated. Also, machine learning and deep learning methods have not been considered. IoT, CC, FC, MEC, VFC, Heterogeneous Relay Networks (HRNs), FC-IoT, HetNets, DCs, 5G HetNets, VEC, CC-IoT, CNs, 5G networks, LTE, CC/FC, Heterogeneous Radar Networks (HRNs), HetNets-MEC, IoT-FC, and FC-IIoT.


    According to Table 1, we can argue that none of the previous survey articles have considered the different resource allocation problems in various computing environments comprehensively.

    In our study, we have reviewed the latest research articles published in journals on resource allocation problems across a wide range of computing environments, based on two categories of approaches: game theory and evolutionary optimization. Some articles have not explicitly described future research areas on resource allocation, whereas the present study states prospects for future research.

    Therefore, we have reviewed the latest research articles in the field of resource allocation on various computing environments using two categories of approaches, game theory and evolutionary optimization, as presented in Tables 2 and 3, respectively.

    Table 2.  Review of the published research articles based on Game Theory models for resource allocation problems.
    No. References Year No. Citations-Publisher Technique Computing Paradigms Results
    1 Liang et al., [62] 2014 30-IEEE The Stackelberg model-based hierarchical game-Theoretic HRNs ● Guarantee the power consumption and throughput balance between access and backhaul links
    ● Increase the total system resource exploitation performance and the user data rates
    2 Huang et al., [66] 2014 21-IEEE A Game-Theoretic algorithm and protocol based on the equilibrium derivations Cellular Networks ● Increase the system performance, sum rate, and sum-rate gain
    ● Reach a Nash equilibrium
    3 Song et al., [56] 2014 342-IEEE Coalition game-theoretic Cellular Networks ● Increase the usefulness function
    ● Obtain a better efficiency in cumulative service rate
    4 Pillai and Rao, [75] 2014 125-IEEE Coalition formation model based on the uncertainty principle of game theory Cloud Computing ● Better utilization of resources with unspecified task information for the machines
    ● Achieve better performance by reducing task allocation time and resource wastage
    ● Increase the request satisfaction
    ● Avoid the complexity of integer programming by solving the optimization problem
    5 Xu and Yu, [76] 2014 64- Hindawi A fairness-utilization trade-off game algorithm Cloud Computing ● Achieve fair resource allocation
    ● Allocate resources more efficiently to make the trade-off between fairness and utilization
    ● Make a Nash equilibrium decision
    6 Nezarat and Dastghaibifard, [63] 2015 34- PloS An auction-based approach under non-cooperative game theory combined with Bayesian learning Cloud Computing ● Reach a Nash equilibrium point in an incomplete information environment
    ● Achieve the most appropriate performance to help the auctioneer gain the balance price of resources
    7 Huang et al., [68] 2015 18-IEEE A cooperative game among the stations Cloud Computing-based Internet of Things ● Maximize the efficiency of the overall performance and overall utilization
    8 Munir et al., [60] 2016 24-IEEE A Game Theoretical Network-Assisted User-Centric scheme 5G HetNets ● Maximize the sum rate and overall network performance of HetNets with less computational complexity
    9 Nezarat and Dastghaibifard [64] 2016 5-IGI-Global A non-cooperative game theory method with Bayesian learning Cloud Computing ● Achieve a Nash equilibrium point
    ● The most appropriate performance to aid an auctioneer to decide on the price balance of resources
    10 Yang et al. [65] 2016 56-IEEE Stackelberg-based game theory Data Centers ● Achieve a better energy efficiency
    ● Satisfy the minimum performance requirements identified through service level agreement
    ● Power consumption increases as the average response time is reduced (a trade-off)
    11 Wei et al., [69] 2016 274-IEEE A cloud resource allocation model based on an imperfect information Stackelberg game Cloud Computing ● Enhance the profit by increasing the resources of infrastructure suppliers
    ● Increase the bandwidth price and CPU price
    ● Reach a Nash equilibrium
    ● The predicted price is less than the actual value
    ● Enhance the resource utilization of the infrastructure supplier
    12 Zhang et al., [57] 2017 84-IEEE A three-layer hierarchical game framework under a Stackelberg sub-game Fog Computing ● The highest utility of the DSO in the independent payment plan
    ● Decrease in the utility of the DSO as the cost coefficients of a computing resource block in FNs increase
    13 Zhang et al., [59] 2017 218-IEEE A Joint Optimization Approach Combined with Stackelberg Game and Matching IoT-Fog Computing ● Increase the overall utility of all FNs
    ● Improve the utility of FNs, which gradually converges to a fixed value
    14 Zhang et al., [70] 2017 74-IEEE The interference-aware resource allocation non-cooperative game theory framework Heterogeneous Networks ● Obtain the Nash equilibrium for the power optimization and sub-channel allocation sub-games to decrease the co-channel interference of small cells
    ● The iterative power optimization algorithm converges after nearly four iterations
    ● Obtain much lower polynomial complexity
    ● The capacity of the overall small cells converges after nearly four iterations
    ● Better capacity performance as the number of small cells increases
    15 Klaimi et al., [58] 2018 17-IEEE A potential theoretical game along with a scheduling algorithm Vehicular Fog Computing ● Minimize the use of resources of the CPU and energy consumption for the vehicles.
    ● Increase the QoS in terms of latency and satisfy the high priority demands
    16 Zhang et al., [71] 2018 146-IEEE Distributed Joint computation offloading and resource allocation optimization scheme Heterogeneous Networks with Mobile Edge Computing ● Achieve less overall cost
    ● Reach the lowest energy consumption
    ● Requires the least task completion time
    ● Reducing interference between neighboring mobile terminals
    ● Achieve the Nash equilibrium
    17 Zhou et al., [77] 2019 139-IEEE A pricing-based stable contract-matching algorithm Vehicular Fog Computing ● Achieve social welfare by the proposed contract-matching algorithm
    ● Obtain low overall network delay
    ● Reduce the total computation complexity
    18 Chen et al., [61] 2020 21- Elsevier A multi-leader multi-follower Stackelberg game Mobile Edge Computing ● Determine the best resource demand strategy for an end-user
    ● Obtain an equilibrium price
    ● Determine the Stackelberg equilibrium for every resource type
    ● Demonstrate the role of an EU with useless resources as a MEC
    ● Scales properly as the system size increases
    19 Jie et al., [35] 2020 9-IEEE Double-stage Stackelberg game theory Fog Computing-based Industrial Internet of Things ● Reach a Stackelberg equilibrium among data users, Fog service providers, and the cloud center, and a Nash equilibrium among non-cooperative Fog service providers
    20 Li et al. [74] 2020 1-Hindawi A dynamic service area partitioning algorithm along with Stackelberg game Vehicular Edge Computing ● Guarantee the allocation efficiency for fog node resources between the cars
    ● Obtain Nash equilibrium of the game between Fog nodes
    ● Show the impact of vehicle density and resource occupancy rate on the service area of the data service agent
    21 Wang et al. [78] 2021 1-IEEE Stackelberg game, along with the matching-based user pairing algorithm Mobile Edge Computing ● Reduce the overall energy consumption by determining users into the suitable sub-channels
    ● Increase offloading time
    ● Decrease the overall execution time
    ● Achieve the low-complexity
    ● Offload more tasks to the MEC server at a decreased rate
    ● Achieve performance close to 80% of the global optimal solution

    Table 3.  Review of the published research articles based on Evolutionary optimization methods for resource allocation problems.
    No. References Year No. Citations-Publisher Technique Computing Paradigms Results
    1 Chimakurthi, [87] 2011 33- Arxiv Ant Colony Cloud Computing ● Improve the power and energy, response time using ant agents
    2 Arianyan et al., [94] 2012 12-IEEE Genetic algorithm Cloud Computing ● Save more operating and maintenance costs
    ● The "best fit" scenario tends to leave more Physical Machines vacant and turn such PMs off
    ● The "most available resource" scenario tends to map VMs onto PMs with more available resources to reduce applications' response time
    3 Beloglazov et al., [95] 2012 2941-Elsevier Hybrid MBFD-MM algorithm Cloud Computing ● The minimum processor capacity for the selected virtual machine.
    ● Reduce the energy consumption costs associated with cloud data centers
    ● Prevent breaches of service level agreements
    4 Rao and Cornelio, [104] 2012 6-IEEE Genetic algorithm Cloud Computing ● Achieve an optimal resource allocation with high reliability and low latency
    ● Reduce the TARA completion time
    5 Mata and Guardieiro, [97] 2014 8-IEEE Genetic algorithm LTE ● Better video quality
    ● The higher the total throughput of the user equipment
    ● Lower values of average end-to-end delay
    6 Kim and Ko, [86] 2015 40-IEEE A new encoding scheme with a genetic algorithm Internet of Things ● Reduce the running time
    ● Generate a near-optimal solution with a rate of 97%
    ● Decrease the data transmissions among gateways
    7 Tang et al., [1] 2015 112-IEEE A two-layer optimization algorithm Heterogeneous Networks ● Optimize and converge the energy efficiency and spectral efficiency
    ● Decrease the computational complexity
    ● Represent the quasiconcave function of the relationship between achievable rate and energy efficiency
    8 Zheng and Wang, [2] 2016 21-IEEE PFOA Cloud Computing ● Save the overall cost by 3.2% in instance 1 and 8.8% in instance 2
    ● Decrease the Cmax by 13.5% in instance 1 and 41.2% in instance 2 to optimize the makespan
    9 Cao et al., [96] 2016 138-IEEE A new cost-oriented optimization model based on a Modified Priority List Algorithm Cloud Computing ● The larger the peak-valley difference, the greater the cost reduction
    ● Scalability and effectiveness in decreasing cost
    ● Minimum operational costs
    10 Kumar et al., [103] 2016 7- MATEC A bio-inspired Cuckoo search algorithm Cloud Computing ● Save the energy consumption by approximately 12%
    ● Improve the QoS
    11 Choi and Lim, [114] 2016 54- Sage An optimization approach under a new winner determination algorithm with an investigation for deadline constraints of jobs Cloud Computing for Internet of Things ● Reduction of the penalty cost in the cloud computing environment for the IoT utilizing actual workload data
    ● Achieve the maximum profit of the provider by considering penalty costs for service level agreement violations
    12 Liu et al., [85] 2017 144-IEEE Genetic algorithm in combination with a Dirichlet distribution approach Fog Computing ● Reduce the latency
    ● Optimize the resource allocation and subtask scheduling
    13 Zhang et al., [91] 2017 22-IEEE JRA-EG Mobile Edge Computing ● Converge to the evolutionary equilibrium
    ● Optimize the energy consumption, reduce time delay and running time, and the least monetary cost
    14 Guddeti and Buyya, [93] 2017 56-IEEE MPSO combined with MCSO Cloud Computing ● Improving reliability and flexibility, reducing execution time and average response time, and also increasing the efficient utilization of cloud resources by almost 12%
    15 Hachicha et al., [98] 2017 4-IEEE Genetic algorithm Cloud Computing ● Improve the QoS
    ● By increasing the number of resources of the cloud, save the computation time efficiency nearly constant
    16 Han et al., [88] 2018 70-IEEE Online optimizer based on genetic algorithms 5G networks ● Increase the long-term network utility and a good scalability
    ● Achieving a satisfying proximate to the global optimum and a fast convergence by the genetic optimizer
    ● Represent timely conformity to environment shift
    17 Tsai [107] 2018 23-Elsevier A hybrid SEIRA algorithm Internet of Things ● Decrease the total communication cost between devices and gateways
    ● A heterogeneous environment is not investigated for scheduling
    ● Decrease the computational time during the convergence process in an IoT environment
    18 Rafique et al., [3] 2019 39-IEEE MPSO combined with MCSO Fog Computing ● Minimize the average response time, energy consumption, execution time, and processing cost of execution
    ● Optimize the resource exploitation and handling of the Fog resources
    19 Li et al., [24] 2019 34-IEEE IGA Fog Computing-Internet of Things ● Optimize the computation and communication resource allocation
    ● Reduce the energy consumption, average delay, and outage probability
    ● Increase the system throughput performance
    20 Ma et al., [99] 2019 24-Mdpi A multi-layer IoT-based fog computing model under a genetic algorithm Internet of Things-Fog Computing ● Reduce the distance between fog nodes and terminals by 38%
    ● Decrease the energy consumption by 150 KWh
    ● Modifying the LIBP by adding multiple sinks.
    21 Akintoye and Bagula, [106] 2019 12-Mdpi GABVMP Cloud/Fog Computing ● Improve the QoS level regarding allocation cost
    ● Better performance in the consumption of Physical Machine-Switch links under the cost of placing VMs on PMs
    ● Lower energy consumption with an increasing number of VMs
    22 Sangaiah et al., [110] 2020 53-Mdpi WOA Internet of Things ● Decrease the overall communication cost between gateways and resources
    23 Yan et al. [115] 2020 58-IEEE Optimal resource allocation schemes of the heterogeneous resource allocation based on the dual ascent and the block coordinate descent methods Heterogeneous Radar Networks ● Minimize the overall resource consumption by achieving the predefined MTT Bayesian Cramér-Rao Lower Bound thresholds
    ● Satisfying system resource budgets
    ● Minimize the overall MTT BCRLB and maximize the total MTT accuracy for the specified resource budgets
    ● Need fewer resources for providing the identical tracking performance due to multiple targets in HRNs


    Based on Tables 2 and 3, the most common problems related to resource allocation are the following:

    ● Improve the QoS of resources

    ● Low energy and power consumption

    ● Decrease the computational complexity

    ● Increase the long-term utility of the computing environments

    ● Minimize the average response time, computation time, and latency

    ● Decrease overall cost in the computing environments

    On the other hand, the key difference between game theory and evolutionary optimization methods, according to Tables 2 and 3, is that game theory methods can reach a Nash equilibrium, most commonly through the Stackelberg game framework.

    Moreover, items such as reliability, flexibility, scalability, the distance between fog nodes and terminals, and the data transmissions between gateways are considered by the evolutionary optimization methods. In general, however, optimization methods cannot provide a universally suitable solution for resource allocation problems across different computing environments [32,37].
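To make the Stackelberg mechanism concrete, consider a toy leader-follower pricing game solved by backward induction; the linear demand model and coefficients are hypothetical illustrations, not taken from any surveyed paper:

```python
def stackelberg_pricing(c, a):
    """Toy Stackelberg game: a provider (leader) sets unit price p, then a
    user (follower) buys q(p) = a - p resource units; provider profit is
    (p - c) * q(p) with unit cost c. Backward induction: substitute the
    follower's best response into the leader's profit and maximize."""
    p_star = (a + c) / 2          # d/dp [(p - c)(a - p)] = 0
    q_star = a - p_star           # follower's best response at p_star
    return p_star, q_star

p, q = stackelberg_pricing(c=2.0, a=10.0)
print(p, q)  # -> 6.0 4.0
```

At this equilibrium neither player can improve unilaterally, which is the Nash/Stackelberg property the surveyed game-theoretic schemes establish for their (far richer) utility functions.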

    In summary, in Figures 5 and 6, we illustrate the number of the published articles in recent years for the two approaches.

    Figure 5.  The number of research articles on resource allocation based on game theory models between 2014 and 2021.
    Figure 6.  The number of research articles on resource allocation based on evolutionary optimization methods between 2011 and 2020.

    According to Figure 5, we have presented research articles in the field of resource allocation on various computing environments using game theory models. We have observed that the first article was published in 2014, while the latest one was published in 2020.

    The most articles on resource allocation were published in 2014, with five published articles. In 2015, 2016, 2017, and 2018, the numbers of published articles were two, four, three, and two, respectively. Also, in 2020, three articles were published on resource allocation. Finally, one article was published in 2021.

    In Figure 6, the published research articles on resource allocation in various computing environments using evolutionary optimization methods are presented. The first and latest articles were published in 2011 and 2020, respectively. The most articles on resource allocation were published in 2016, 2017, and 2019, with four articles each. Also, one article was found in each of 2011 and 2014. Furthermore, in 2012, the number of articles reached three, and in 2015 and 2018, two articles were published in each year.

    Finally, in 2020, two articles were published. Furthermore, the number of published survey articles on resource allocation available in the databases of various journals is presented in Table 4.

    Table 4.  The number of survey articles published on existing resource allocation in the database of various journals.
    Journal database Number of published review articles
    IEEE Xplore 6
    Springer 5
    Science Direct 5


    According to Table 4, the number of published survey articles on resource allocation is nearly identical across the three databases: six in IEEE Xplore and five each in Springer and Science Direct. In addition to the journals of these databases, other venues such as the University of British Columbia Technical Report for Computer Science, Journal of Software, International Journal of Advanced Computer Science and Applications, and International Journal of Machine Learning and Computing have each published one survey article on the subject of resource allocation. Therefore, in this study, most of the 20 investigated survey articles are journal articles on resource allocation problems in computing environments.

    Moreover, the number of research articles on resource allocation available in the database of various journals is presented based on game theory and evolutionary optimization approaches in Tables 5 and 6, respectively.

    Table 5.  The number of research articles published on available resource allocation based on Game theory models from the database of various journals.
    Databases of Game theory models Number of published research articles
    IEEE Xplore 16
    Science Direct 1
    Hindawi 2
    PLOS 1
    IGI-Global 1

    Table 6.  The number of research articles published on available resource allocation based on Evolutionary optimization methods from the database of various journals.
    Databases of Evolutionary optimization methods Number of published research articles
    IEEE Xplore 15
    Science Direct 2
    Mdpi 3
    Sage 1
    Arxiv 1
    Atlantis Press MATEC Web of Conferences 1


    According to Tables 5 and 6, in this study, out of 44 published research articles on the game theory and evolutionary optimization approaches, the highest number of articles has been published in the IEEE Xplore database.

    In this section, the fundamental problems related to resource allocation of the game theory and evolutionary optimization approaches in different computing environments have been described with the following items:

    1) Game theory models

    In studies related to game theory models, the energy consumption, the quality of service, reaching a Nash equilibrium point of the game, the computational complexity, and the average response time were investigated as crucial problems for resource allocation in computing environments. Also, most of the game theory models support good convergence.

    ● Deep learning-based methods, such as deep reinforcement learning, supervised learning, and unsupervised learning underlying a Stackelberg game framework, need to be considered in terms of delay and energy consumption for validating the performance of cloud/fog/edge computing environments [116,117,118].

    ● The reliability and scalability parameters of Stackelberg-based games in computing environments also need to be investigated.
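    To make the Stackelberg structure concrete, the sketch below works through a minimal, hypothetical two-player pricing game: a provider (leader) sets a unit price for computing resources, and a user (follower) responds with the demand that maximizes a log-utility function. The utility form and the parameters a and unit_cost are illustrative assumptions, not taken from any surveyed paper; the leader's grid search simply recovers the closed-form Stackelberg equilibrium price sqrt(a * unit_cost).

    ```python
    import math

    def follower_best_response(price, a=10.0):
        # Follower maximizes a*ln(1+d) - price*d, giving d* = a/price - 1 (if positive).
        return max(a / price - 1.0, 0.0)

    def leader_profit(price, a=10.0, unit_cost=1.0):
        # Leader anticipates the follower's reaction when evaluating a price.
        d = follower_best_response(price, a)
        return (price - unit_cost) * d

    # Leader searches a price grid; the argmax approximates the Stackelberg price.
    prices = [0.5 + 0.01 * i for i in range(1, 1000)]
    best_price = max(prices, key=leader_profit)

    analytic = math.sqrt(10.0 * 1.0)   # closed-form equilibrium price for a=10, c=1
    assert abs(best_price - analytic) < 0.02
    ```

    At this point neither player can gain by deviating unilaterally given the leader-moves-first structure, which is precisely the equilibrium property the surveyed Stackelberg-based allocation schemes rely on.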

    2) Evolutionary optimization methods

    Evolutionary optimization methods improve the QoS, reduce the latency, the energy consumption, the processing cost of execution, the computational complexity, and the total communication cost between devices and gateways, and increase the scalability in computing environments.

    ● After reviewing the articles, we found that the reported results on resource allocation optimization using evolutionary optimization methods do not achieve a Nash equilibrium. Hence, the use of evolutionary optimization methods for determining Nash equilibria needs to be considered by researchers.

    ● Optimization methods such as the MPSO combined with the MCSO algorithm, the IGA, ant colony optimization, the cuckoo search algorithm, and the WOA have been applied to different resource allocation problems. These methods cannot guarantee an optimal solution for resource allocation problems in computing environments, and they also suffer from slower convergence.
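    As a concrete sketch of how such population-based heuristics behave on a resource allocation problem, the toy example below uses a plain particle swarm (not the MPSO/MCSO hybrids cited above) to split a CPU budget among three tasks so that total delay, modeled here simply as workload divided by allocated capacity, is minimized. The workloads, budget, and PSO coefficients are illustrative assumptions. By the Cauchy-Schwarz inequality, the optimal split is proportional to the square root of each workload (minimum total delay 1.6 for these numbers); the swarm approaches this value but, as the bullet above notes, such heuristics do not guarantee reaching the optimum.

    ```python
    import random

    random.seed(0)

    WEIGHTS = [4.0, 1.0, 1.0]   # hypothetical task workloads
    CAPACITY = 10.0             # total CPU budget to split among the tasks

    def total_delay(shares):
        # Normalize raw shares to the budget; each task's delay is workload/allocation.
        scale = CAPACITY / sum(shares)
        return sum(w / (x * scale) for w, x in zip(WEIGHTS, shares))

    # A minimal particle swarm over raw (positive) shares.
    n, dims = 20, len(WEIGHTS)
    pos = [[random.uniform(0.1, 1.0) for _ in range(dims)] for _ in range(n)]
    vel = [[0.0] * dims for _ in range(n)]
    pbest = [p[:] for p in pos]                 # each particle's best position so far
    gbest = min(pbest, key=total_delay)[:]      # swarm-wide best position so far

    for _ in range(200):
        for i in range(n):
            for d in range(dims):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                         # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] = max(pos[i][d] + vel[i][d], 1e-6)         # keep shares positive
            if total_delay(pos[i]) < total_delay(pbest[i]):
                pbest[i] = pos[i][:]
                if total_delay(pos[i]) < total_delay(gbest):
                    gbest = pos[i][:]
    ```

    After the run, the best-found allocation gives most of the budget to the heaviest task, approaching the analytic 5 : 2.5 : 2.5 split; restarting with a different seed can yield a slightly different (and possibly worse) result, illustrating the convergence caveat above.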

    In this article, a survey of the state of the art on resource allocation problems using game theory and evolutionary optimization approaches in various computing environments has been accomplished. The crucial goal of these approaches is to improve the QoS according to users' expected requirements; their other advantages are low energy consumption, minimum average response time, and low computational complexity. Besides, the main problem for the optimal allocation of resources using game theory models is achieving a Nash equilibrium, for which the most common solution is the Stackelberg game framework. However, the Nash equilibrium problem has not been addressed by researchers using optimization methods for resource allocation in computing environments. The structure of this article is as follows. First, the previous survey articles were reviewed. Second, we compared our survey with related work, as shown in Table 1; according to Table 1, our survey covers a wider range of computing environments and resource allocation problems than previous surveys. Then, the published research articles on resource allocation were reviewed in two categories, game theory and evolutionary optimization approaches for computing environments, as presented in Tables 2 and 3; these tables report the citation-publisher, technique, computing paradigms, and results of each article. In addition, the objectives and results of each article in solving resource allocation problems in intelligent computing environments were studied. Another contribution of this article is the count of survey articles by publication year, as demonstrated in Figure 4, which indicates the importance of the resource allocation problem in academic research.

    Finally, the numbers of published research articles on resource allocation using game theory and evolutionary optimization approaches are illustrated by publication year in Figures 5 and 6, respectively. As future work, artificial intelligence-based methods, including machine learning and deep learning, can be considered for various resource allocation problems in computing environments. In particular, deep learning-based methods such as deep reinforcement learning, deep convolutional neural networks, autoencoder neural networks, and generative adversarial networks are proposed for resource allocation problems. Also, control theory-based approaches for different categories of resource management, including resource allocation, resource scheduling, task offloading, application placement, load balancing, and resource provisioning in multiple computing environments, will be the subject of our future studies.

    This article has been funded by Dana Pecutan FTSM (PP-FTSM-2021), Universiti Kebangsaan Malaysia.

    The authors declare no conflicts of interest in this article.



    [1] C. Mouradian, D. Naboulsi, S. Yangui, R. H. Glitho, M. J. Morrow, P. A. Polakos, A comprehensive survey on fog computing: State-of-the-art and research challenges, IEEE Commun. Surv. Tutorials, 20 (2017), 416–464.
    [2] N. Abbas, Y. Zhang, A. Taherkordi, T. Skeie, Mobile edge computing: A survey, IEEE Int. Things, 5 (2017), 450–465.
    [3] W. Z. Khan, E. Ahmed, S. Hakak, I. Yaqoob, A. Ahmed, Edge computing: A survey, Future Gener. Comput. Syst., 97 (2019), 219–235. doi: 10.1016/j.future.2019.02.050
    [4] P. Mach, Z. Becvar, Mobile edge computing: A survey on architecture and computation offloading, IEEE Commun. Surv. Tutorials, 19 (2017), 1628–1656. doi: 10.1109/COMST.2017.2682318
    [5] S. Agarwal, S. Yadav, A. K. Yadav, An efficient architecture and algorithm for resource provisioning in fog computing, Int. J. Inf. Eng. Electron. Bus., 8 (2016), 48.
    [6] R. Buyya, C. S. Yeo, S. Venugopal, J. Broberg, I. Brandic, Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility, Future Gener. Comput. Syst., 25 (2009), 599–616. doi: 10.1016/j.future.2008.12.001
    [7] N. R. Mohan, E. B. Raj, Resource allocation techniques in cloud computing-research challenges for applications, in 2012 fourth international conference on computational intelligence and communication networks, IEEE, (2012), 556–560.
    [8] D. Ergu, G. Kou, Y. Peng, Y. Shi, Y. Shi, The analytic hierarchy process: task scheduling and resource allocation in cloud computing environment, J. Supercomput., 64 (2013), 835–848. doi: 10.1007/s11227-011-0625-1
    [9] P. Mell, T. Grance, The NIST definition of cloud computing, 2011.
    [10] F. Shahid, H. Ashraf, A. Ghani, S. A. K. Ghayyur, S. Shamshirband, E. Salwana, PSDS-proficient security over distributed storage: A method for data transmission in cloud, IEEE Access, 8 (2020), 118285–118298. doi: 10.1109/ACCESS.2020.3004433
    [11] A. Shawish, M. Salama, Cloud computing: paradigms and technologies, in Inter-cooperative collective intelligence: Techniques and applications: Springer, Berlin, Heidelberg, (2014), 39–67.
    [12] I. Foster, C. Kesselman, J. M. Nick, S. Tuecke, Grid services for distributed system integration, Computer, 35 (2002), 37–46.
    [13] S. J. Baek, S. M. Park, S. H. Yang, E. H. Song, Y. S. Jeong, Efficient server virtualization using grid service infrastructure, J. Inf. Process. Syst., 6 (2010), 553–562. doi: 10.3745/JIPS.2010.6.4.553
    [14] A. Jula, E. Sundararajan, Z. Othman, Cloud computing service composition: A systematic literature review, Expert Syst. Appl., 41 (2014), 3809–3824. doi: 10.1016/j.eswa.2013.12.017
    [15] H. R. Faragardi, A. Rajabi, R. Shojaee, T. Nolte, Towards energy-aware resource scheduling to maximize reliability in cloud computing systems, in 2013 IEEE 10th International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, IEEE, (2013), 1469–1479.
    [16] P. Samimi, Y. Teimouri, M. Mukhtar, A combinatorial double auction resource allocation model in cloud computing, Inf. Sci., 357 (2016), 201–216. doi: 10.1016/j.ins.2014.02.008
    [17] L. Ni, J. Zhang, C. Jiang, C. Yan, K. Yu, Resource allocation strategy in fog computing based on priced timed petri nets, IEEE Int. Things, 4 (2017), 1216–1228. doi: 10.1109/JIOT.2017.2709814
    [18] X. Zhao, S. S. Band, S. Elnaffar, M. Sookhak, A. Mosavi, E. Salwana, The implementation of border gateway protocol using software-defined networks: A systematic literature review, IEEE Access, 2021.
    [19] M. Aazam, E. N. Huh, Fog computing micro datacenter based dynamic resource estimation and pricing model for IoT, in 2015 IEEE 29th International Conference on Advanced Information Networking and Applications, IEEE, (2015), 687–694.
    [20] O. Skarlat, S. Schulte, M. Borkowski, P. Leitner, Resource provisioning for IoT services in the fog, in 2016 IEEE 9th international conference on service-oriented computing and applications (SOCA), IEEE, (2016), 32–39.
    [21] F. Bonomi, R. Milito, J. Zhu, S. Addepalli, Fog computing and its role in the internet of things, in Proceedings of the first edition of the MCC workshop on Mobile cloud computing, (2012), 13–16.
    [22] P. G. V. Naranjo, Z. Pooranian, M. Shojafar, M. Conti, R. Buyya, FOCAN: A Fog-supported smart city network architecture for management of applications in the Internet of Everything environments, J Parallel Distr. Comput., 132 (2019), 274–283. doi: 10.1016/j.jpdc.2018.07.003
    [23] B. Varghese, N. Wang, D. S. Nikolopoulos, R. Buyya, Feasibility of fog computing, in Handbook of Integration of Cloud Computing, Cyber Physical Systems and Internet of Things, Springer, (2020), 127–146.
    [24] X. Li, Y. Liu, H. Ji, H. Zhang, V. C. Leung, Optimizing resources allocation for fog computing-based Internet of Things networks, IEEE Access, 7 (2019), 64907–64922. doi: 10.1109/ACCESS.2019.2917557
    [25] I. Stojmenovic, S. Wen, The fog computing paradigm: Scenarios and security issues, in 2014 federated conference on computer science and information systems, IEEE, (2014), 1–8.
    [26] A. Singh, Y. Viniotis, Resource allocation for IoT applications in cloud environments, in 2017 International Conference on Computing, Networking and Communications (ICNC), IEEE, (2017), 719–723.
    [27] M. H. Homaei, E. Salwana, S. Shamshirband, An enhanced distributed data aggregation method in the Internet of Things, Sensors, 19 (2019), 3173. doi: 10.3390/s19143173
    [28] Y. Gu, Z. Chang, M. Pan, L. Song, Z. Han, Joint radio and computational resource allocation in IoT fog computing, IEEE Trans. Veh. Technol., 67 (2018), 7475–7484. doi: 10.1109/TVT.2018.2820838
    [29] S. F. Abedin, M. G. R. Alam, S. A. Kazmi, N. H. Tran, D. Niyato, C. S. Hong, Resource allocation for ultra-reliable and enhanced mobile broadband IoT applications in fog network, IEEE Trans. Commun., 67 (2018), 489–502.
    [30] X. Xu, S. Fu, Q. Cai, W. Tian, W. Liu, W. Dou, et al., Dynamic resource allocation for load balancing in fog environment, Wirel. Commun. Mob. Comput., 2018 (2018).
    [31] W. Shi, J. Cao, Q. Zhang, Y. Li, L. Xu, Edge computing: Vision and challenges, IEEE Int. Things, 3 (2016), 637–646. doi: 10.1109/JIOT.2016.2579198
    [32] B. Frankovič, I. Budinská, Advantages and disadvantages of heuristic and multi agents approaches to the solution of scheduling problem, IFAC Proc. Vol., 33 (2000), 367–372.
    [33] M. H. Mohamaddiah, A. Abdullah, S. Subramaniam, M. Hussin, A survey on resource allocation and monitoring in cloud computing, Int. J. Mach. Learn. Comput., 4 (2014), 31–38.
    [34] H. Rafique, M. A. Shah, S. U. Islam, T. Maqsood, S. Khan, C. Maple, A novel bio-inspired hybrid algorithm (NBIHA) for efficient resource management in fog computing, IEEE Access, 7 (2019), 115760–115773. doi: 10.1109/ACCESS.2019.2924958
    [35] Y. Jie, C. Guo, K. K. R. Choo, C. Z. Liu, M. Li, Game-theoretic resource allocation for fog-based industrial internet of things environment, IEEE Int. Things J., 7 (2020), 3041–3052. doi: 10.1109/JIOT.2020.2964590
    [36] R. Gibbons, A primer in game theory, 1992.
    [37] J. Moura, D. Hutchison, Game theory for multi-access edge computing: Survey, use cases, and future trends, IEEE Commun. Surv. Tutorials, 21 (2018), 260–288.
    [38] A. Yousafzai, A. Gani, R. M. Noor, M. Sookhak, H. Talebian, M. Shiraz, et al., Cloud resource allocation schemes: review, taxonomy, and opportunities, Knowl. Inf. Syst., 50 (2017), 347–381. doi: 10.1007/s10115-016-0951-y
    [39] M. Ghobaei-Arani, A. Souri, A. A. Rahmanian, Resource management approaches in fog computing: A comprehensive review, J Grid Comput., 18 (2020), 1–42. doi: 10.1007/s10723-019-09491-1
    [40] A. Hameed, A. Khoshkbarforoushha, R. Ranjan, P. P. Jayaraman, J. Kolodziej, P. Balaji, et al., A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems, Computing, 98 (2016), 751–774. doi: 10.1007/s00607-014-0407-8
    [41] A. Beloglazov, R. Buyya, Y. C. Lee, A. Zomaya, A taxonomy and survey of energy-efficient data centers and cloud computing systems, in Advances in computers, Elsevier, (2011), 47–111.
    [42] J. Shuja, K. Bilal, S. A. Madani, M. Othman, R. Ranjan, P. Balaji, et al., Survey of techniques and architectures for designing energy-efficient data centers, IEEE Syst. J., 10 (2014), 507–519.
    [43] G. Aceto, A. Botta, W. De Donato, A. Pescapè, Cloud monitoring: A survey, Comput. Network, 57 (2013), 2093–2115. doi: 10.1016/j.comnet.2013.04.001
    [44] B. Jennings, R. Stadler, Resource management in clouds: Survey and research challenges, J. Network Syst. Manag., 23 (2015), 567–619. doi: 10.1007/s10922-014-9307-7
    [45] A. Goyal, S. Dadizadeh, A survey on cloud computing, Univ. B. C. Tech. Rep. CS, 508 (2009), 55–58.
    [46] H. Hussain, S. U. R. Malik, A. Hameed, S. U. Khan, G. Bickler, N. Min-Allah, et al., A survey on resource allocation in high performance distributed computing systems, Parallel Comput., 39 (2013), 709–736. doi: 10.1016/j.parco.2013.09.009
    [47] L. Huang, H. S. Chen, T. T. Hu, Survey on resource allocation policy and job scheduling algorithms of cloud computing1, J. Softw., 8 (2013), 480–487.
    [48] R. W. Ahmad, A. Gani, S. H. A. Hamid, M. Shiraz, F. Xia, S. A. Madani, Virtual machine migration in cloud data centers: a review, taxonomy, and open research issues, J. Supercomput., 71 (2015), 2473–2515. doi: 10.1007/s11227-015-1400-5
    [49] R. W. Ahmad, A. Gani, S. H. A. Hamid, M. Shiraz, A. Yousafzai, F. Xia, A survey on virtual machine migration and server consolidation frameworks for cloud data centers, J. Network Comput. Appl., 52 (2015), 11–25. doi: 10.1016/j.jnca.2015.02.002
    [50] V. Vinothina, R. Sridaran, P. Ganapathi, A survey on resource allocation strategies in cloud computing, Int. J. Adv. Comput. Sci. Appl., 3 (2012), 97–104. doi: 10.5121/acij.2012.3511
    [51] V. Anuradha, D. Sumathi, A survey on resource allocation strategies in cloud computing, in International Conference on Information Communication and Embedded Systems (ICICES2014), IEEE, (2014), 1–7.
    [52] E. Castaneda, A. Silva, A. Gameiro, M. Kountouris, An overview on resource allocation techniques for multi-user MIMO systems, IEEE Commun. Surv. Tutorials, 19 (2016), 239–284.
    [53] S. S. Manvi, G. K. Shyam, Resource management for Infrastructure as a Service (IaaS) in cloud computing: A survey, J. Network Comput. Appl., 41 (2014), 424–440. doi: 10.1016/j.jnca.2013.10.004
    [54] R. Su, D. Zhang, R. Venkatesan, Z. Gong, C. Li, F. Ding, et al., Resource allocation for network slicing in 5G telecommunication networks: A survey of principles and models, IEEE Network, 33 (2019), 172–179. doi: 10.1109/MNET.2019.1900024
    [55] F. Saeik, M. Avgeris, D. Spatharakis, N. Santi, D. Dechouniotis, J. Violos, et al., Task offloading in Edge and Cloud Computing: A survey on mathematical, artificial intelligence and control theory solutions, Comput. Network, 195 (2021), 108177. doi: 10.1016/j.comnet.2021.108177
    [56] L. Song, D. Niyato, Z. Han, E. Hossain, Game-theoretic resource allocation methods for device-to-device communication, IEEE Wirel. Commun., 21 (2014), 136–144.
    [57] H. Zhang, Y. Zhang, Y. Gu, D. Niyato, Z. Han, A hierarchical game framework for resource management in fog computing, IEEE Commun. Mag., 55 (2017), 52–57.
    [58] J. Klaimi, S. M. Senouci, M. A. Messous, Theoretical game approach for mobile users resource management in a vehicular fog computing environment, in 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), IEEE, (2018), 452–457.
    [59] H. Zhang, Y. Xiao, S. Bu, D. Niyato, F. R. Yu, Z. Han, Computing resource allocation in three-tier IoT fog networks: A joint optimization approach combining Stackelberg game and matching, IEEE Int. Things, 4 (2017), 1204–1215. doi: 10.1109/JIOT.2017.2688925
    [60] H. Munir, S. A. Hassan, H. Pervaiz, Q. Ni, A game theoretical network-assisted user-centric design for resource allocation in 5G heterogeneous networks, in 2016 IEEE 83rd vehicular technology conference (VTC Spring), IEEE, (2016), 1–5.
    [61] Y. Chen, Z. Li, B. Yang, K. Nai, K. Li, A Stackelberg game approach to multiple resources allocation and pricing in mobile edge computing, Future Gener. Comput. Syst., 108 (2020), 273–287. doi: 10.1016/j.future.2020.02.045
    [62] L. Liang, G. Feng, Y. Jia, Game-theoretic hierarchical resource allocation for heterogeneous relay networks, IEEE Trans. Veh. Technol., 64 (2014), 1480–1492.
    [63] A. Nezarat, G. Dastghaibifard, Efficient nash equilibrium resource allocation based on game theory mechanism in cloud computing by using auction, PloS one, 10 (2015), e0138424. doi: 10.1371/journal.pone.0138424
    [64] A. Nezarat, G. Dastghaibifard, A game theoretic method for resource allocation in scientific cloud, Int. J. Cloud Appl. Comput. (IJCAC), 6 (2016), 15–41.
    [65] B. Yang, Z. Li, S. Chen, T. Wang, K. Li, Stackelberg game approach for energy-aware resource allocation in data centers, IEEE Trans. Parallel Distr. Syst., 27 (2016), 3646–3658. doi: 10.1109/TPDS.2016.2537809
    [66] J. Huang, Y. Zhao, K. Sohraby, Resource allocation for intercell device-to-device communication underlaying cellular network: A game-theoretic approach, in 2014 23rd international conference on computer communication and networks (ICCCN), IEEE, (2014), 1–8.
    [67] D. Niyato, E. Hossain, A game-theoretic approach to competitive spectrum sharing in cognitive radio networks, in 2007 IEEE Wireless Communications and Networking Conference, IEEE, (2007), 16–20.
    [68] J. Huang, Y. Yin, Q. Duan, H. Yan, A game-theoretic analysis on context-aware resource allocation for device-to-device communications in cloud-centric internet of things, in 2015 3rd International Conference on Future Internet of Things and Cloud, IEEE, (2015), 80–86.
    [69] W. Wei, X. Fan, H. Song, X. Fan, J. Yang, Imperfect information dynamic stackelberg game based resource allocation using hidden Markov for cloud computing, IEEE Trans. Serv. Comput., 11 (2016), 78–89.
    [70] H. Zhang, J. Du, J. Cheng, K. Long, V. C. Leung, Incomplete CSI based resource optimization in SWIPT enabled heterogeneous networks: A non-cooperative game theoretic approach, IEEE Trans. Wirel. Commun., 17 (2017), 1882–1892.
    [71] J. Zhang, W. Xia, F. Yan, L. Shen, Joint computation offloading and resource allocation optimization in heterogeneous networks with mobile edge computing, IEEE Access, 6 (2018), 19324–19337. doi: 10.1109/ACCESS.2018.2819690
    [72] X. Chen, L. Jiao, W. Li, X. Fu, Efficient multi-user computation offloading for mobile-edge cloud computing, IEEE ACM Trans. Network, 24 (2015), 2795–2808.
    [73] S. Guo, B. Xiao, Y. Yang, Y. Yang, Energy-efficient dynamic offloading and resource scheduling in mobile cloud computing, in IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on Computer Communications, IEEE, (2016), 1–9.
    [74] G. S. Li, Y. Zhang, M. L. Wang, J. H. Wu, Q. Y. Lin, X. F. Sheng, Resource Management Framework Based on the Stackelberg Game in Vehicular Edge Computing, Complexity, 2020 (2020).
    [75] P. S. Pillai, S. Rao, Resource allocation in cloud computing using the uncertainty principle of game theory, IEEE Syst. J., 10 (2016), 637–648. doi: 10.1109/JSYST.2014.2314861
    [76] X. Xu, H. Yu, A game theory approach to fair and efficient resource allocation in cloud computing, Math. Probl. Eng., 2014 (2014).
    [77] Z. Zhou, P. Liu, J. Feng, Y. Zhang, S. Mumtaz, J. Rodriguez, Computation resource allocation and task assignment optimization in vehicular fog computing: A contract-matching approach, IEEE Trans. Veh. Technol., 68 (2019), 3113–3125. doi: 10.1109/TVT.2019.2894851
    [78] K. Wang, Z. Ding, D. K. So, G. K. Karagiannidis, Stackelberg game of energy consumption and latency in MEC systems With NOMA, IEEE Trans. Commun., 69 (2021), 2191–2206. doi: 10.1109/TCOMM.2021.3049356
    [79] S. G. Domanal, R. M. R. Guddeti, R. Buyya, A hybrid bio-inspired algorithm for scheduling and resource management in cloud environment, IEEE Trans. Serv. Comput., 13 (2017), 3–15.
    [80] X. S. Yang, Nature-inspired Optimization Algorithms, Academic Press, 2020.
    [81] A. Arram, M. Ayob, A. Sulaiman, Hybrid bird mating optimizer with single-based algorithms for combinatorial optimization problems, IEEE Access, 9 (2021), 115972–115989. doi: 10.1109/ACCESS.2021.3096125
    [82] N. S. Jaddi, S. Abdullah, A novel auction-based optimization algorithm and its application in rough set feature selection, IEEE Access, 9 (2021), 106501–106514. doi: 10.1109/ACCESS.2021.3098808
    [83] Z. J. Lee, S. F. Su, C. Y. Lee, Y. S. Hung, A heuristic genetic algorithm for solving resource allocation problems, Knowl. Inf. Syst., 5 (2003), 503–511. doi: 10.1007/s10115-003-0082-0
    [84] Z. J. Lee, C. Y. Lee, A hybrid search algorithm with heuristics for resource allocation problem, Inf. Sci., 173 (2005), 155–167. doi: 10.1016/j.ins.2004.07.010
    [85] Y. Liu, J. E. Fieldsend, G. Min, A framework of fog computing: Architecture, challenges, and optimization, IEEE Access, 5 (2017), 25445–25454. doi: 10.1109/ACCESS.2017.2766923
    [86] M. Kim, I. Y. Ko, An efficient resource allocation approach based on a genetic algorithm for composite services in IoT environments, in 2015 IEEE International Conference on Web Services, IEEE, (2015), 543–550.
    [87] L. Chimakurthi, Power efficient resource allocation for clouds using ant colony framework, preprint, arXiv: 1102.2608.
    [88] B. Han, J. Lianghai, H. D. Schotten, Slice as an evolutionary service: Genetic optimization for inter–slice resource management in 5G networks, IEEE Access, 6 (2018), 33137–33147. doi: 10.1109/ACCESS.2018.2846543
    [89] J. Tang, D. K. So, E. Alsusa, K. A. Hamdi, A. Shojaeifard, Resource allocation for energy efficiency optimization in heterogeneous networks, IEEE J. Sel. Area Commun., 33 (2015), 2104–2117. doi: 10.1109/JSAC.2015.2435351
    [90] Y. Liu, S. L. Zhao, X. K. Du, S. Q. Li, Optimization of resource allocation in construction using genetic algorithms, in 2005 International Conference on Machine Learning and Cybernetics, IEEE, 6 (2005), 3428–3432.
    [91] J. Zhang, W. Xia, Z. Cheng, Q. Zou, B. Huang, F. Shen, et al., An evolutionary game for joint wireless and cloud resource allocation in mobile edge computing, IEEE, (2017), 1–6.
    [92] X. L. Zheng, L. Wang, A Pareto based fruit fly optimization algorithm for task scheduling and resource allocation in cloud computing environment, in 2017 9th International Conference on Wireless Communications and Signal Processing (WCSP), IEEE, (2016), 3393–3400.
    [93] R. M. Guddeti, R. Buyya, A hybrid bio-inspired algorithm for scheduling and resource management in cloud environment, IEEE Trans. Serv. Comput., 2017.
    [94] E. Arianyan, D. Maleki, A. Yari, I. Arianyan, Efficient resource allocation in cloud data centers through genetic algorithm, in 6th International Symposium on Telecommunications (IST), IEEE, (2012), 566–570.
    [95] A. Beloglazov, J. Abawajy, R. Buyya, Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing, Future Gener. Comput. Syst., 28 (2012), 755–768. doi: 10.1016/j.future.2011.04.017
    [96] Z. Cao, J. Lin, C. Wan, Y. Song, Y. Zhang, X. Wang, Optimal cloud computing resource allocation for demand side management in smart grid, IEEE Trans. Smart Grid, 8 (2016), 1943–1955.
    [97] S. H. da Mata, P. R. Guardieiro, A genetic algorithm based approach for resource allocation in LTE uplink, in 2014 International Telecommunications Symposium (ITS), IEEE, (2014), 1–5.
    [98] E. Hachicha, K. Yongsiriwit, M. Sellami, W. Gaaloul, Genetic-based configurable cloud resource allocation in QoS-aware business process development, in 2017 IEEE International Conference on Web Services (ICWS), IEEE, (2017), 836–839.
    [99] K. Ma, A. Bagula, C. Nyirenda, O. Ajayi, An iot-based fog computing model, Sensors, 19 (2019), 2783.
    [100] L. Ngqakaza, A. Bagula, Least path interference beaconing protocol (libp): A frugal routing protocol for the internet-of-things, in International Conference on Wired/Wireless Internet Communications, Springer, (2014), 148–161.
    [101] A. Bagula, D. Djenouri, E. Karbab, Ubiquitous sensor network management: The least interference beaconing model, in 2013 IEEE 24th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), IEEE, (2013), 2352–2356.
    [102] A. B. Bagula, D. Djenouri, E. Karbab, On the relevance of using interference and service differentiation routing in the internet-of-things, Int. Things, Smart Spaces Next Gener. Networking, Springer, (2013), 25–35.
    [103] R. Kumar, A. Kumar, A. Sharma, A bio-inspired approach for power and performance aware resource allocation in clouds, in MATEC Web of Conferences, EDP Sciences, 57 (2016), 02008.
    [104] J. J. Rao, K. V. Cornelio, An optimized resource allocation approach for data-Intensive workloads using topology-Aware resource allocation, in 2012 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM), IEEE, (2012), 1–4.
    [105] G. Lee, N. Tolia, P. Ranganathan, R. H. Katz, Topology-aware resource allocation for data-intensive workloads, in Proceedings of the first ACM asia-pacific workshop on Workshop on systems, (2010), 1–6.
    [106] S. B. Akintoye, A. Bagula, Improving quality-of-service in cloud/fog computing through efficient resource allocation, Sensors, 19 (2019), 1267. doi: 10.3390/s19061267
    [107] C. W. Tsai, SEIRA: An effective algorithm for IoT resource allocation problem, Comput. Commun., 119 (2018), 156–166. doi: 10.1016/j.comcom.2017.10.006
    [108] C. W. Tsai, An effective WSN deployment algorithm via search economics, Comput. Network, 101 (2016), 178–191. doi: 10.1016/j.comnet.2016.01.005
    [109] J. MacQueen, Some methods for classification and analysis of multivariate observations, in Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, Oakland, CA, USA, 1 (1967), 281–297.
    [110] A. K. Sangaiah, A. A. R. Hosseinabadi, M. B. Shareh, S. Y. Bozorgi Rad, A. Zolfagharian, N. Chilamkurti, IoT resource allocation and optimization based on heuristic algorithm, Sensors, 20 (2020), 539. doi: 10.3390/s20020539
    [111] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software, 95 (2016), 51–67. doi: 10.1016/j.advengsoft.2016.01.008
    [112] S. K. Chaharsooghi, A. H. M. Kermani, An effective ant colony optimization algorithm (ACO) for multi-objective resource allocation problem (MORAP), Appl. Math. Comput., 200 (2008), 167–177.
    [113] M. Dorigo, Optimization, Learning and Natural Algorithms, PhD Thesis, Politecnico di Milano, 1992.
    [114] Y. Choi, Y. Lim, Optimization approach for resource allocation on cloud computing for iot, Int. J. Distrib. Sens. Networks, 12 (2016), 3479247. doi: 10.1155/2016/3479247
    [115] J. Yan, W. Pu, S. Zhou, H. Liu, M. S. Greco, Optimal resource allocation for asynchronous multiple targets tracking in heterogeneous radar networks, IEEE Trans. Signal Process., 68 (2020), 4055–4068. doi: 10.1109/TSP.2020.3007313
    [116] K. Karthiban, J. S. Raj, An efficient green computing fair resource allocation in cloud computing using modified deep reinforcement learning algorithm, Soft Comput., (2020), 1–10.
    [117] H. Ye, G. Y. Li, B. H. F. Juang, Deep reinforcement learning based resource allocation for V2V communications, IEEE Trans. Veh. Technol., 68 (2019), 3163–3173. doi: 10.1109/TVT.2019.2897134
    [118] F. Hussain, S. A. Hassan, R. Hussain, E. Hossain, Machine learning for resource management in cellular and IoT networks: Potentials, current solutions, and open challenges, IEEE Commun. Surv. Tutorials, 22 (2020), 1251–1275. doi: 10.1109/COMST.2020.2964534
  • © 2021 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)