

Updated 27 Apr 2026 • 7 mins read

This guide covers the 8 prep steps I run through before locking in any cloud cost forecast. It is written for FinOps practitioners, engineering leads, and finance partners who keep getting blindsided by AWS, Azure, or GCP bills. By the end, you will know how to map cost to product, feature, environment, and customer so your forecasts hold up under scrutiny.
Early in a FinOps role, a quarterly cloud forecast missed actual spend by 38%, not because of flawed math, but because the groundwork was incomplete. The model relied on average billing instead of tying costs to products, features, and customers.
That pattern repeats across teams. Forecasts built on past spend alone often mislead, either constraining growth or triggering last-minute escalations.
Through experience leading cost intelligence at Opslyft and working with SaaS teams across AWS, Azure, and GCP, one lesson stands out: preparation determines accuracy.
This guide outlines eight essential steps to complete before building any forecast, turning unreliable estimates into numbers that leadership can trust.
Most cloud forecasts I audit fail for the same reason. The team treats the cloud bill as a single number that grows with revenue, then applies a flat percentage on top of the last quarter.
According to the FinOps Foundation's State of FinOps reports, forecasting has been ranked the top priority for FinOps practitioners for two consecutive years. That tells me how many teams are still struggling with this.
The fix is not a fancier model. It is unit economics. Once you know what it costs to serve one customer or run one feature, your forecast becomes a function of business inputs rather than a guess. With that framing in place, let me walk you through the 8 prep steps I rely on.
The first thing I do is split the total cloud bill by product. Most SaaS companies run multiple products on shared infrastructure, and the bill rarely arrives sliced that way.
I usually start with a tagging audit. If your tags are inconsistent, the rest of this exercise becomes a guessing game. I covered the practical side of this in our writeup on best practices for cloud cost allocation, which is worth a read before you start.
Once split, compare each product's cost against its revenue contribution. Some products run cheaply and earn a lot. Others quietly burn margin. You want this picture before you forecast anything.
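As a sketch of that comparison, assuming the tagging audit already gives you cost per product (the product names and figures below are hypothetical):

```python
# Hypothetical monthly cost (from tag-based allocation) and revenue per product.
products = {
    "analytics": {"cost": 42_000, "revenue": 310_000},
    "exports":   {"cost": 18_000, "revenue": 25_000},
    "search":    {"cost": 9_500,  "revenue": 140_000},
}

# Cost-to-revenue ratio per product: the margin picture you want
# before forecasting anything. Highest ratio first.
ranked = sorted(products.items(),
                key=lambda kv: kv[1]["cost"] / kv[1]["revenue"],
                reverse=True)
for name, p in ranked:
    print(f"{name:10s} cost/revenue = {p['cost'] / p['revenue']:.1%}")
```

Here "exports" would surface at a 72% cost-to-revenue ratio, the kind of quiet margin burn the text describes, while "search" runs under 7%.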
With product-level cost in hand, the next layer to peel back is the environment.
Most teams I work with run between four and six environments. Common ones include research, development, QA, staging, and production.
Production usually gets the attention, but in my experience, non-prod environments are where the real waste hides. I have seen dev clusters left running over weekends, staging databases provisioned at production size, and abandoned QA stacks that nobody owned.
Splitting cost by environment exposes those leaks fast. It also tells you something honest about your release pipeline. If staging costs more than production, that is usually a sign of process drift, not a strategic choice.
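A minimal sketch of that check, with hypothetical per-environment figures:

```python
# Hypothetical monthly spend by environment tag for one product.
env_cost = {"production": 58_000, "staging": 61_000,
            "qa": 12_000, "development": 19_000}

non_prod = sum(c for env, c in env_cost.items() if env != "production")
print(f"non-prod share of spend: {non_prod / sum(env_cost.values()):.0%}")

# Staging outspending production usually signals process drift, not strategy.
if env_cost["staging"] > env_cost["production"]:
    print("flag: staging > production; check for prod-sized staging resources")
```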
Once environments are clean, you can zoom one level deeper into the features running inside them.
This is the step most teams skip because it feels tedious. It is also the step that does the most for forecast accuracy.
Break each product into its core features and estimate the operating cost of each one. Search, recommendations, video transcoding, billing, exports, anything that holds a meaningful chunk of compute or storage. Tools like Opslyft let you align cloud spend with product, feature, environment, customer, and team, which makes this less painful than spreadsheet archaeology.
Where possible, layer environment on top. Knowing what a feature costs in production versus what it costs in QA tells you where to invest engineering hours.
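One way to sketch that feature-by-environment view (the feature names and costs are illustrative, standing in for whatever a cost-allocation tool or tag-based query gives you):

```python
from collections import defaultdict

# Hypothetical per-feature cost rows, split by environment.
feature_cost = [
    ("search",      "production", 14_000),
    ("search",      "qa",          6_500),
    ("transcoding", "production", 22_000),
    ("transcoding", "qa",          1_200),
]

# Pivot into feature -> {environment: cost}.
by_feature = defaultdict(dict)
for feature, env, cost in feature_cost:
    by_feature[feature][env] = cost

# A high non-prod/prod ratio marks the features worth an engineering pass.
for feature, envs in by_feature.items():
    print(f"{feature}: qa/prod = {envs['qa'] / envs['production']:.0%}")
```

In this toy data, "search" spends nearly half its production cost again in QA, while "transcoding" spends about 5%, so the engineering hours clearly belong on the former.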
That feature view sets you up to ask the next, often uncomfortable question. Who is consuming all this?
Engineers tend to focus on what they control, which is infrastructure. But customer behavior drives a huge portion of cloud spend, and ignoring it makes forecasts brittle.
Not all customers are equal. In one engagement, I found that 7% of customers were generating roughly 61% of the compute load. The forecast had assumed linear growth across the base. It was wrong by design.
Track per-customer usage at whatever granularity your data supports. Even rough cohorts, by plan tier, region, or contract size, beat a flat per-customer average. There are good ways to help engineers understand how their choices affect cloud costs, and customer-level visibility is one of them.
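A rough-cohort sketch of how that concentration shows up, assuming you can key per-customer usage by plan tier (customers and load figures below are invented):

```python
from collections import defaultdict

# Hypothetical per-customer monthly compute units, keyed by plan tier.
usage = {
    "cust_001": ("enterprise", 9_400),
    "cust_002": ("enterprise", 7_800),
    "cust_003": ("pro",          610),
    "cust_004": ("pro",          540),
    "cust_005": ("starter",       90),
    "cust_006": ("starter",       75),
}

# Aggregate load per cohort; even this coarse cut beats a flat average.
cohort_load = defaultdict(int)
for tier, units in usage.values():
    cohort_load[tier] += units

total = sum(cohort_load.values())
for tier, units in cohort_load.items():
    print(f"{tier:10s} {units / total:.0%} of compute load")
```

Even in this tiny sample, two of six customers carry over 90% of the load, which is exactly the kind of skew a linear-growth assumption misses.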
Once customers are mapped, you can step back and turn this raw data into something forecastable.
This is where the prep work starts paying off. With cost per product, feature, environment, and customer in hand, you can calculate the cost of delivering one unit of value.
That unit might be one API call, one transcoded minute, one active user, or one transaction processed. Pick what matters for your business and stick with it.
Segment your customers into 3 to 5 cohorts and calculate unit cost for each. A small business cohort might cost you $0.40 per active user per month, while an enterprise cohort might cost $2.10. Those two numbers do more for your forecast than any total-cost trendline ever will.
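The arithmetic is deliberately simple; a sketch using the two cohort figures from the text (the cost and user counts backing them are invented for illustration):

```python
# Unit cost = cohort's allocated cloud cost / units of value delivered.
cohorts = {
    "small_business": {"monthly_cost": 8_000,  "active_users": 20_000},
    "enterprise":     {"monthly_cost": 10_500, "active_users": 5_000},
}

def unit_cost(cohort):
    return cohort["monthly_cost"] / cohort["active_users"]

for name, c in cohorts.items():
    print(f"{name}: ${unit_cost(c):.2f} per active user per month")
```

That yields $0.40 per active user for the small-business cohort and $2.10 for enterprise, the two numbers the forecast is then built on.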
For a fuller treatment of the metrics that matter here, the FinOps KPIs that actually move the needle are a good companion read.
With unit economics defined, you finally have the inputs you need to start modeling scenarios.
Now your data becomes actionable. I run scenario models, not single-point forecasts, because cloud spend never moves in a straight line.
A typical scenario set I build covers three cases: a base case on current growth assumptions, a high-growth case where adoption accelerates, and a downside case where usage flattens.
For each scenario, I plug the unit economics from Step 5 into a simple spreadsheet or our forecasting module. I keep the time horizon short. A precise next-quarter forecast is far more useful than a fuzzy 12-month one.
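A minimal version of that spreadsheet logic, reusing the cohort unit costs from Step 5 (the growth assumptions per scenario are hypothetical, and treating the quarter as three months at the grown user count is a deliberate simplification):

```python
# Cohort unit economics from Step 5 ($ per active user per month).
unit_cost = {"small_business": 0.40, "enterprise": 2.10}
users_now = {"small_business": 20_000, "enterprise": 5_000}

# Hypothetical quarterly growth multipliers per scenario and cohort.
scenarios = {
    "base":        {"small_business": 1.05, "enterprise": 1.10},
    "high_growth": {"small_business": 1.15, "enterprise": 1.25},
    "downside":    {"small_business": 0.98, "enterprise": 1.00},
}

def quarterly_spend(growth):
    # Three months of spend at the grown user count, summed over cohorts.
    return sum(users_now[c] * growth[c] * unit_cost[c] * 3 for c in users_now)

for name, growth in scenarios.items():
    print(f"{name:12s} ${quarterly_spend(growth):,.0f}")
```

The point is that each scenario is a set of business inputs, not a percentage slapped on last quarter's bill.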
If you want a deeper foundation here, a beginner-friendly guide to cloud cost forecasting walks through the basic mechanics in detail.
Models give you the numbers. The next step is making sure those numbers actually get sharper over time.
Early forecasts will miss. Mine still do, just by less than they used to. The key is treating each forecast as an experiment.
A common mistake I see is relying only on the last 30 to 60 days. Cloud usage has seasonality: end-of-quarter spikes, holiday traffic, batch jobs that run monthly. Those patterns only show up if you look back 12 months or more.
Flexera's annual State of the Cloud report has flagged for years that wasted cloud spend hovers around 28 to 32% across surveyed organizations. That figure has not budged much, which tells me forecasts and the actions that follow them still have plenty of room to improve.
If your forecasts consistently overshoot or undershoot, that bias is a feature of your model, not noise. Adjust the assumption that is causing it. I keep a running log of forecast variance by scenario, and that log usually surfaces the broken assumption faster than any review meeting.
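A sketch of what that variance log can surface, with invented quarterly figures; a consistent sign on the error is the bias signal, not the size of any one miss:

```python
# Running log of forecast vs actual spend by quarter (hypothetical figures).
log = [
    {"quarter": "Q1", "forecast": 240_000, "actual": 262_000},
    {"quarter": "Q2", "forecast": 255_000, "actual": 275_000},
    {"quarter": "Q3", "forecast": 270_000, "actual": 288_000},
]

# Relative error per quarter; positive means the forecast undershot.
errors = [(e["actual"] - e["forecast"]) / e["forecast"] for e in log]
mean_error = sum(errors) / len(errors)

# Every error the same sign -> systematic bias in an assumption, not noise.
if all(e > 0 for e in errors):
    print(f"consistent undershoot, mean {mean_error:+.1%}: raise the growth assumption")
```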
Watch for warning signs that your cloud cost strategy is failing too. Repeatedly missing forecasts is one of them.
Sharper forecasts are only half the job. The other half is how you talk about them.
Even a great forecast is still an expectation. Cloud systems shift quickly, and a single architecture change can move the number by double digits.
When I share a forecast, I explicitly call out the top 3 to 5 variables that could move it: customer growth rate, feature launch dates, contract renewals, infra projects, and any pricing changes from AWS, Azure, or GCP. Each one gets a sensitivity figure: for example, if customer growth lands at 20% instead of 15%, spend increases roughly $45K per quarter.
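One way to sketch a sensitivity figure like that, varying one input while holding the rest fixed (the baseline user count and unit cost are invented to mirror the 15% versus 20% growth example):

```python
# Quarterly spend as a function of one variable, everything else held fixed.
def quarterly_spend(users, unit_cost, growth):
    # Three months of spend at the grown user count (a simplification).
    return users * (1 + growth) * unit_cost * 3

users, unit_cost = 100_000, 3.0   # hypothetical baseline inputs
base = quarterly_spend(users, unit_cost, 0.15)
high = quarterly_spend(users, unit_cost, 0.20)
print(f"growth 15% -> ${base:,.0f}; 20% -> ${high:,.0f}; delta ${high - base:,.0f}")
```

The delta, not the point estimate, is what goes in the sensitivity column next to each variable.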
This is the part finance partners genuinely appreciate. They are not looking for a single magic number. They are looking for a story they can defend in front of their own audience.
Now that the 8-step process is on the table, let me show you how this approach compares to the two more common ways teams forecast.
I have tried all three approaches in production. Here is how they actually stack up across the criteria that matter.
| Criteria | Top-Down | Bottom-Up | Unit Economics | Best For |
|---|---|---|---|---|
| Setup effort | Low | High | Medium | Time-pressed teams |
| Forecast accuracy | Low | Medium | High | Finance reviews |
| Handles new features | Poor | Medium | Strong | Active roadmaps |
| Handles customer mix shifts | Poor | Poor | Strong | Scaling SaaS |
| Tooling required | Spreadsheet | Tagging plus dashboards | Tagging plus cost intel platform | Varies |
| Time to first forecast | Hours | Weeks | 1 to 2 weeks | Quick wins vs depth |
| Best fit | Early-stage startups | Stable mid-market | Scaling SaaS or enterprise | Match to your stage |
Top-down works when you have one product and one customer profile. Bottom-up gets accurate but rots quickly when your product evolves. Unit economics is the only one I have seen hold up across product launches, customer cohort shifts, and pricing changes.
Now for the part of this debate that does not get said often enough.
Most FinOps content treats forecast accuracy as the goal. I disagree. The goal is forecast usefulness.
A 95% accurate forecast that nobody acts on is worth less than an 85% accurate forecast that triggers a rightsizing project, a contract negotiation, or a feature redesign. I would rather ship a less precise model that engineers and finance both engage with than chase decimal points alone.
The teams I see win at this treat the forecast as a conversation, not a deliverable. They review it monthly, change assumptions in the open, and build a shared muscle for talking about cloud cost trade-offs. That cultural piece is what makes the math worth doing.
Forecasting prep is foundational, but it has to feed into action. With that mindset baked in, the FAQ at the end of this piece covers the questions teams typically ask next.
Forecasting cloud costs is a craft, not a calculation. The 8 steps I walked through here are the prep work that separates a forecast people trust from one they tolerate.
Map your costs to products, environments, features, and customers. Translate that into unit economics. Build flexible scenario models. Track your variance honestly. Talk about variables, not single numbers.
In my experience, teams that get good at this stop being defensive about cloud spend and start treating it as a lever. That shift, more than any single dashboard, is what makes FinOps actually work inside an organisation.
If you want a hand applying this to your own environment, the Opslyft team works on this every day. Either way, start with Step 1 this week. The hardest part is just beginning.
**What is cloud cost forecasting, and why does it matter?**

Cloud cost forecasting is the practice of predicting future cloud spend based on usage, customer behavior, product roadmap, and architecture choices. In my work I treat it as a finance and engineering exercise combined. It matters because cloud bills can shift 20 to 40% quarter to quarter without warning, which kills hiring plans and feature launches. A good forecast lets leaders commit to budgets and engineering teams ship without constant fire drills.

**How accurate should a cloud cost forecast be?**

For mature FinOps teams, I aim for variance within 5 to 10% on a quarterly horizon. Anything tighter usually means I am overfitting to recent data, which breaks the moment a new feature ships. Annual forecasts realistically land within 15 to 20%. If your forecasts are off by 30% or more consistently, it is almost always because you skipped the unit economics work, not because the cloud is unpredictable.

**What data do I need before building a forecast?**

At minimum, I need 12 months of tagged cost data, a clear breakdown by product and environment, customer-level usage if possible, and the product roadmap. Without solid tagging your numbers will be averages dressed up as facts. I also gather pricing changes from AWS, Azure, or GCP, plus any committed-use or savings plan commitments, which can move the number meaningfully on their own.

**What is the difference between budgeting and forecasting?**

Budgeting sets a target you want to spend. Forecasting predicts what you will actually spend. I use both together. The forecast informs whether the budget is realistic, and the budget creates accountability when the forecast goes off track. Treating them as the same thing is one of the most common mistakes I see, and it usually results in budgets that get rewritten every quarter instead of held.