Anodot's cloud cost forecasts are 2.5x more accurate than Amazon Forecast's. Unreliable forecasting is a serious problem, as cloud costs make up a large share of operational budgets.
For many companies, the move to cloud computing has been a given. But unlike many other operational expenses, cloud computing is a highly variable cost. Many companies choose AWS to streamline fragmented processes, reduce costs, increase agility, and innovate faster. Beyond keeping each service running properly, one of the biggest challenges in managing AWS is monitoring, optimizing, and predicting costs. There is a myriad of services, such as storage, database, and compute, each with a complex pricing structure. Cloud costs also behave differently from other expenses: anomalies are hard to identify in real time, and spend is difficult to predict accurately.
With cloud costs taking up a large portion of operational budgets, it’s imperative that organizations have the tools to properly manage their usage. One example is the burgeoning FinOps field, where engineers, developers, and financial professionals are called upon to optimize cloud-related spending. In this context, forecasting is essential for more effective budget planning and resource allocation.
Forecasting Cloud Costs with AWS
Like most major cloud providers, AWS offers (mostly) free-to-use cost monitoring and forecasting tools. AWS Cost Explorer enables you to view and analyze your AWS Cost and Usage Reports (AWS CUR). You can also predict your overall cost associated with AWS services by creating a forecast in AWS Cost Explorer. But you can’t view historical data beyond 12 months or create forecasts for individual services.
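For the account-wide case, the Cost Explorer forecast can also be pulled programmatically. Below is a minimal sketch using boto3's Cost Explorer client, assuming configured AWS credentials; `forecast_window` and `get_total_cost_forecast` are our own illustrative helpers, not AWS names:

```python
from datetime import date, timedelta

def forecast_window(days_ahead=90, today=None):
    """Build the Start/End date strings Cost Explorer expects (illustrative helper)."""
    today = today or date.today()
    start = today + timedelta(days=1)        # a forecast period must start in the future
    end = today + timedelta(days=days_ahead)
    return start.isoformat(), end.isoformat()

def get_total_cost_forecast(days_ahead=90):
    """Ask Cost Explorer for an account-wide cost forecast (requires AWS credentials)."""
    import boto3  # only needed when actually calling AWS
    start, end = forecast_window(days_ahead)
    ce = boto3.client("ce")
    resp = ce.get_cost_forecast(
        TimePeriod={"Start": start, "End": end},
        Metric="UNBLENDED_COST",
        Granularity="MONTHLY",
    )
    return resp["Total"]["Amount"]
```

Note that this API forecasts total spend only; per-service forecasts are exactly what the heavier DataBrew-plus-Forecast pipeline described next is for.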
For longer periods and more granular forecasts, AWS recommends using AWS Cost and Usage Reports with AWS Glue DataBrew and Amazon Forecast. The AWS post provides a tutorial on how to use Amazon Forecast and DataBrew to transform AWS CUR data into the appropriate dataset format, which Forecast can later ingest to train a predictor and create a forecast for a given service or member account ID.
Forecasting Cloud Costs with Anodot
Anodot leverages its proven monitoring and forecasting to create a solution customized for cloud costs. AWS Cost Monitoring supplies built-in collectors to fetch CUR files from AWS. Its forecast can be as granular as desired, and the horizon can be changed according to the user’s needs.
Our team was curious how AWS and Anodot would perform, so we put both to the test on the same data from our own internal cloud services.
Benchmarking Anodot vs. AWS
While AWS provides daily cost reports, Cost Explorer, and ML forecasting, in most cases companies still struggle to accurately predict usage. The forecast service has to be run manually and can be labor-intensive. There is likely a way to build an automation workflow, but that is left to the user.
Anodot AWS Cost Monitoring is a fully automated solution. Forecasts are continuous and updated on an hourly basis, depending on the training data. Anodot analyzes more granular data and helps teams drill deeper into specific projections, including:
- time scales (daily/weekly/monthly/quarterly)
- cost per service
- cost per account
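To illustrate what drilling into such projections involves, here is a toy aggregation of CUR-style cost line items by arbitrary dimensions; the field names and values are our assumption for illustration, not Anodot's actual schema:

```python
from collections import defaultdict

def rollup(records, by=("date", "service")):
    """Sum cost line items by the chosen dimensions (field names are illustrative)."""
    totals = defaultdict(float)
    for rec in records:
        key = tuple(rec[dim] for dim in by)
        totals[key] += rec["cost"]
    return dict(totals)

# Hypothetical daily line items, in the spirit of CUR data.
lines = [
    {"date": "2023-05-01", "service": "AmazonEC2", "account": "111", "cost": 12.0},
    {"date": "2023-05-01", "service": "AmazonS3",  "account": "111", "cost": 3.5},
    {"date": "2023-05-01", "service": "AmazonEC2", "account": "222", "cost": 8.0},
]
```

Forecasting each of these rolled-up series separately is what makes per-service and per-account projections possible.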
For complete coverage, users can pair Anodot’s forecasting with anomaly detection, which alerts the necessary teams in real time of runaway cloud spend. The anomaly detection solution provides deep root cause analysis across all cloud resources to help accelerate remediation.
In this short benchmarking analysis, we measured Anodot’s performance against Amazon Forecast using our internal cloud spend data and compared the forecasts to results obtained several months later.
Ease of Use
To create a forecast in AWS, we performed the following workflow twice:
- Prepare/transform CUR files with AWS Glue DataBrew (0.5 hr).
- Import training data from Glue to Amazon Forecast (0.25 hr).
- Train the model (1.5 hr).
- Create service lookups and analyze results (0.75 hr).
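For reference, the dataset-import portion of the steps above can be sketched with boto3's Forecast client. The dataset name, attribute names, and S3 parameters are illustrative, and every call is asynchronous, so each step must complete before the next begins:

```python
# Target time-series schema for CUR-derived cost data. The attribute names are
# our assumption; Amazon Forecast expects timestamp/target/item roles.
CUR_SCHEMA = {
    "Attributes": [
        {"AttributeName": "timestamp",    "AttributeType": "timestamp"},
        {"AttributeName": "target_value", "AttributeType": "float"},   # daily cost
        {"AttributeName": "item_id",      "AttributeType": "string"},  # service or account
    ]
}

def run_forecast_import(s3_path, role_arn):
    """Sketch of the dataset creation and import steps (requires AWS credentials)."""
    import boto3  # only needed when actually calling AWS
    fc = boto3.client("forecast")
    ds = fc.create_dataset(
        DatasetName="cur_costs",
        Domain="CUSTOM",
        DatasetType="TARGET_TIME_SERIES",
        DataFrequency="D",
        Schema=CUR_SCHEMA,
    )
    fc.create_dataset_import_job(
        DatasetImportJobName="cur_import",
        DatasetArn=ds["DatasetArn"],
        DataSource={"S3Config": {"Path": s3_path, "RoleArn": role_arn}},
    )
    # ...then wait for the import, train a predictor, and create the forecast;
    # each stage must be polled for completion, which is where the hours go.
```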
Between data preparation and import, model training, lookup creation, and result analysis, forecasting in AWS took about three hours of net time per run.
On the Anodot platform this entire process exists as a turn-key product.
To create a forecast in Anodot, we performed the following workflow once:
- Upload the data to Anodot.
- Run a forecast task using the first training set.
- Retrain models using the second training set.
Training in Anodot took about 15 minutes of wall-clock time (with several machines working in parallel, equivalent to about four hours of machine time) and yields results that are ready for consumption. It’s a one-time process, and forecasts are generated automatically thereafter.
The third step is activated automatically, typically once a month in order to readjust the selection of models to significant changes in data. Since training was performed twice in the experiment, it took about 30 minutes overall.
With AWS, the user needs to load the data each time they want to forecast. In Anodot, the user loads historical data at the start and data continues to be streamed, so there’s no need to reload again.
As demonstrated below, Anodot outperformed AWS on both daily and monthly forecasts. In fact, Anodot’s forecast closely resembles the actual cloud costs (see the yellow and green lines in the graph below), and was 2.3 – 2.5x more accurate than Amazon Forecast.
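A multiplier like “2.5x more accurate” is typically a ratio of forecast errors. The self-contained sketch below computes such a ratio using MAPE (mean absolute percentage error); the numbers are purely illustrative, not our benchmark data:

```python
def mape(actual, forecast):
    """Mean absolute percentage error -- lower is better."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative daily costs and two competing forecasts.
actual     = [100.0, 110.0, 105.0, 120.0]
forecast_a = [ 98.0, 108.0, 107.0, 118.0]   # tracks closely
forecast_b = [ 90.0, 120.0,  95.0, 135.0]   # larger misses

err_a, err_b = mape(actual, forecast_a), mape(actual, forecast_b)
accuracy_ratio = err_b / err_a  # "X times more accurate", as an error ratio
```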
Why Anodot Achieves Greater Accuracy
There’s a big difference in the way Anodot and AWS apply their forecasting models. While both have several models at their disposal, AWS uses a one-size-fits-all approach, using one model to forecast across all services, whether or not it’s the best fit for all of them. The “best fit” model is offered as a recommendation but users can choose a different model.
Anodot automatically selects the best-fitting model for each individual cloud service, which makes a significant difference in improving accuracy per service and for the entire cloud bill.
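Per-series model selection can be illustrated with a simple backtest: hold out the tail of each service’s history, score every candidate model on it, and keep the winner. The two toy models below are stand-ins for illustration, not Anodot’s actual forecasting models:

```python
def last_value(history, horizon):
    """Naive model: repeat the most recent observation."""
    return [history[-1]] * horizon

def moving_average(history, horizon, window=3):
    """Smooth model: repeat the mean of the last few observations."""
    avg = sum(history[-window:]) / window
    return [avg] * horizon

def pick_model(history, holdout=3, candidates=(last_value, moving_average)):
    """Backtest each candidate on a holdout tail and keep the lowest-error one."""
    train, test = history[:-holdout], history[-holdout:]
    def err(model):
        pred = model(train, holdout)
        return sum(abs(p - t) for p, t in zip(pred, test))
    return min(candidates, key=err)
```

A steadily trending series favors the naive model here, while a noisy oscillating series favors the smoother one; applying this choice independently per service is the idea behind per-series fitting.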
A powerful advantage of Anodot is its proven and patented technology for anomaly detection. We leverage that capability in our forecasting models to accurately identify and remove the effect of anomalies that may cause forecast bias.
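As a simplified stand-in for that idea, the sketch below masks points that sit far from the series median (measured in MAD units) before the data reaches a forecasting model; the threshold and method are our assumptions, not Anodot’s patented algorithm:

```python
import statistics

def mask_anomalies(series, k=5.0):
    """Replace points far from the median (in MAD units) with the median,
    so a one-off spike does not bias the trained forecast."""
    med = statistics.median(series)
    mad = statistics.median(abs(x - med) for x in series) or 1e-9
    return [med if abs(x - med) / mad > k else x for x in series]
```

For example, a single $950 spike in an otherwise ~$100/day series would be masked out, while ordinary day-to-day variation passes through untouched.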
Context also influences results. Recurring events, such as holidays or regularly scheduled promotions, can help explain changes in data behavior and better enable the system to predict the effect they will have on future data. In Anodot it is very easy to add influencing events to the forecast model. It’s unclear, based on the AWS guide, whether it’s also possible to factor in events on Amazon Forecast.
Forecasting is critical for effective budget planning and resource allocation. FinOps teams can use forecasts to spot anticipated shifts in cloud usage early, before they disrupt the budget. However, a forecast is only as good as its accuracy. Machine learning has taken forecasting leaps and bounds beyond what was possible with traditional methods, but even among available ML solutions, results can vary.
When we ran the two leading forecast solutions head to head, Anodot’s forecasts proved 2.3 – 2.5x more accurate than those generated by Amazon Forecast. The two systems differ in model selection: Anodot chooses the best-fitting model for each service, while AWS uses a one-size-fits-all approach. Anodot also leverages its patented anomaly detection and factors in influencing events to yield greater accuracy.
Anodot proved easier to implement and continue to use. After the initial data is uploaded and trained, the process is automated and continuous, whereas AWS requires users to upload the data and go through the whole process of training and analysis each time they need to generate a forecast.
Anodot offers a business package for real-time monitoring of cloud usage and costs, available as a free trial here or on the AWS Marketplace. See how Anodot stacks up against Amazon Forecast on your own data – simply reach out, and we can also add the forecasting solution for a completely automated cloud cost analytics stack.