Case Study: Leveraging CAST AI Workload Autoscaler to Optimize Bede’s Infrastructure

Published 24 October 2024

Bede Gaming provides a leading digital platform to some of the market’s biggest online gaming companies, including one of the world’s major lottery providers (OLG). This means Bede’s platform faces massive peaks in user traffic and relies on our infrastructure to meet those demands while maintaining a high-quality user experience and service availability.

We faced an age-old question: how do you strike the right balance between cost and performance? Overprovisioning Kubernetes workloads and adjusting their scale manually is one conservative way to manage it. Over time, however, this proved costly for our operations and prompted us to search for a better solution.

By utilising CAST AI, we have implemented autoscaling of our cloud infrastructure and resource management driven by real-time traffic levels. The result? 10-15% savings on cloud infrastructure costs, a reduced risk of human error, and less manual involvement from our platform teams, which frees our people to focus on higher-value initiatives and ultimately makes the backend platform service more efficient for our customers.

Following our integration, Bede Gaming’s CTO Dan Whiteley sat down with the CAST AI team to discuss Bede’s experience with the platform and the benefits it has delivered. You can read the full case study here:

Link: How Bede Gaming Optimizes Kubernetes Workloads With No Risk To Performance – CAST AI – Kubernetes Automation Platform