Synergistic Integration of Graph Neural Networks and Reinforcement Learning for Enhanced Dynamic Resource Allocation in Cloud Computing Environments

Authors

  • Anirudh Pratap Singh, GLA University, Mathura

DOI:

https://doi.org/10.64758/qva2e174

Keywords:

Graph Neural Networks (GNNs), Reinforcement Learning (RL), Resource Allocation, Cloud Computing, Dynamic Optimization, Deep Learning, Graph Representation, Multi-Agent Systems, Performance Optimization, Distributed Systems

Abstract

Cloud computing provides scalable and cost-effective infrastructure, but dynamic resource allocation remains a major challenge due to unpredictable workloads, heterogeneous resources, and diverse application requirements. Traditional static or rule-based methods lack adaptability, while conventional machine learning approaches often fail to capture complex dependencies. Reinforcement Learning (RL) offers adaptability but struggles with scalability in large state spaces, and existing graph-based methods rely heavily on manually defined relationships. To overcome these constraints, this paper presents an integrated scheme that combines Graph Neural Networks (GNNs) and RL to solve the dynamic resource allocation problem. The cloud infrastructure is modeled as a graph in which virtual machines, servers, and network devices are nodes and their dependencies are edges. The GNN learns deep features of these relationships, which an RL agent then uses to make allocation decisions. We evaluate the proposed GNN-RL framework in CloudSim, comparing it with static, threshold-based, and standalone RL approaches. The results show considerable improvements in resource utilization, task completion time, and power consumption. Our study demonstrates the value of the integrated GNN-RL framework as a scalable and flexible solution for resource management in cloud environments, enabling more intelligent and efficient cloud computing systems.
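The pipeline described in the abstract can be sketched in miniature. The sketch below is illustrative only and makes several simplifying assumptions not taken from the paper: a hand-built toy graph with scalar load features, a single mean-aggregation layer standing in for the trained GNN, and a greedy argmin policy standing in for the learned RL agent.

```python
# Toy sketch of the GNN-RL idea (assumptions: toy graph, scalar load
# features; one mean-aggregation layer stands in for the GNN, and a
# greedy argmin policy stands in for the trained RL agent).

def message_pass(features, edges):
    """One GNN-style layer: each node's new feature is the mean of
    its own feature and its neighbours' features."""
    neighbours = {n: [] for n in features}
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    updated = {}
    for node, feat in features.items():
        vals = [feat] + [features[m] for m in neighbours[node]]
        updated[node] = sum(vals) / len(vals)
    return updated

def allocate(features, servers):
    """Greedy stand-in for the RL policy: place the incoming task on
    the server whose smoothed load embedding is lowest."""
    return min(servers, key=lambda s: features[s])

# Nodes: two servers (s1, s2) and a switch (sw); feature = CPU load.
features = {"s1": 0.9, "s2": 0.2, "sw": 0.5}
edges = [("s1", "sw"), ("s2", "sw")]

embedded = message_pass(features, edges)
target = allocate(embedded, ["s1", "s2"])
print(target)  # → s2, the less-loaded server
```

In the full framework, the mean aggregation would be replaced by learned GNN layers and the argmin by an RL policy trained on a reward combining utilization, completion time, and power, but the data flow (graph features in, allocation decision out) is the same.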

Published

2025-04-01