Efficient cloud-based microservice scaling must account for the interdependencies between application components to avoid bottlenecks and to adapt swiftly to dynamically changing environments and user demands. Most of today's solutions are not adaptive enough, especially for large-scale microservice deployments. In this paper, we propose a novel solution leveraging Multi-Agent Deep Reinforcement Learning (MADRL). First, we define our model for horizontal scaling of microservices and formalize the problem. Second, we propose an algorithm based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG) to solve it. Third, we present a dedicated simulation environment in which arbitrary microservices can be created for testing purposes, and we carry out a comprehensive evaluation. We analyze the model's performance on microservices of different sizes, investigating its ability to optimize scaling while maintaining efficient resource utilization and application stability. Results show that our MADDPG-based RL algorithm outperforms the industry-standard approach provided by Kubernetes' HPA by at least 14% in terms of resource usage cost.
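The decentralized-actor structure implied by the abstract (one agent per microservice, each mapping local observations to a horizontal-scaling action) can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the service names, observation features, replica bounds, and the untrained linear policies standing in for the paper's neural networks are all hypothetical.

```python
import numpy as np

class ScalingActor:
    """Deterministic linear policy: local observation -> action in [-1, 1].

    In MADDPG each such actor would be a neural network trained against a
    centralized critic that sees the joint observations and actions of all
    agents; here a random linear map merely illustrates the interface.
    """
    def __init__(self, obs_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=obs_dim)

    def act(self, obs):
        return float(np.tanh(self.w @ obs))

def action_to_replicas(action, current, min_r=1, max_r=10):
    """Discretize a continuous action into a bounded replica-count change."""
    delta = int(round(action * 2))  # at most +/- 2 replicas per step
    return int(np.clip(current + delta, min_r, max_r))

# One agent per microservice; the observation features (CPU utilization,
# request rate, queue length) are hypothetical examples.
actors = {name: ScalingActor(obs_dim=3, seed=i)
          for i, name in enumerate(["frontend", "auth", "db"])}
replicas = {name: 3 for name in actors}
observations = {
    "frontend": np.array([0.9, 120.0, 8.0]),  # heavily loaded
    "auth":     np.array([0.2, 10.0, 0.0]),   # mostly idle
    "db":       np.array([0.5, 40.0, 1.0]),
}
scale = np.array([1.0, 100.0, 10.0])  # crude per-feature normalization

for name, actor in actors.items():
    action = actor.act(observations[name] / scale)
    replicas[name] = action_to_replicas(action, replicas[name])
```

Each agent acts only on its own service's observation, while the interdependencies between services would be captured during training through the centralized critic; that training loop is omitted here.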
A Multi-Agent Deep-Reinforcement Learning Approach for Application-Agnostic Microservice Scaling