Your SLA says 99.9% uptime. Your status page shows a green checkmark. But do you actually know how much downtime that allows? Most people are surprised when they do the math -- the difference between 99.9% and 99.99% is not a rounding error, it is the difference between 43 minutes and 4 minutes of allowed downtime per month.

This guide covers the uptime percentage formula, a complete reference table for the "nines," what each level actually requires from your infrastructure, and how to calculate accurate uptime from real monitoring data.

The uptime percentage formula

The calculation itself is simple:

Uptime % = (Total time - Downtime) / Total time × 100

For a 30-day month (43,200 minutes), if your service was down for 45 minutes:

(43,200 - 45) / 43,200 × 100 = 99.896%

That is just below three nines. Whether that matters depends on your SLA commitments, but the point is that the formula gives you an objective number to work with instead of vague statements like "we had pretty good uptime this month."
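The formula above translates directly into code. A minimal sketch (the function name `uptime_percent` is ours, not part of any library):

```python
def uptime_percent(total_minutes: float, downtime_minutes: float) -> float:
    """Uptime % = (total time - downtime) / total time * 100."""
    return (total_minutes - downtime_minutes) / total_minutes * 100

# 45 minutes of downtime in a 30-day month (43,200 minutes)
print(round(uptime_percent(43_200, 45), 3))  # 99.896
```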

The nines table -- your complete reference

This table shows exactly how much downtime each uptime level allows across different time periods. Bookmark it -- you will come back to it more than you expect.

Uptime %   Name                     Downtime / Day   Downtime / Month   Downtime / Year
99%        Two nines                14m 24s          7h 18m             3d 15h 36m
99.5%      --                       7m 12s           3h 39m             1d 19h 48m
99.9%      Three nines              1m 26s           43m 50s            8h 45m 36s
99.95%     Three and a half nines   43s              21m 55s            4h 22m 48s
99.99%     Four nines               8.6s             4m 23s             52m 34s
99.999%    Five nines               0.86s            26s                5m 15s

The jump from three nines to four nines cuts your allowed downtime from 43 minutes per month to about 4 minutes. From four nines to five nines, it drops to 26 seconds per month. Each additional nine is not a small improvement -- it is an order of magnitude harder to achieve.
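You can derive these allowances yourself. A sketch (note: the table's monthly figures use an average month of about 730.5 hours, i.e. 365.25 days / 12, while this example uses a flat 720-hour 30-day month, so the numbers round slightly differently):

```python
def allowed_downtime_seconds(uptime_pct: float, period_hours: float) -> float:
    """Maximum downtime (in seconds) a given uptime % allows over a period."""
    return (1 - uptime_pct / 100) * period_hours * 3600

# Allowed downtime per 720-hour (30-day) month at each level
for pct in (99.0, 99.9, 99.99, 99.999):
    secs = allowed_downtime_seconds(pct, 720)
    print(f"{pct}% -> {secs / 60:.1f} minutes/month")  # e.g. 99.9% -> 43.2 minutes/month
```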

What each level of nines actually requires

The nines are not just numbers -- they correspond to fundamentally different infrastructure strategies. Each step up requires a disproportionate increase in complexity and cost.

99% -- two nines

A single server with manual restarts. No redundancy, no automated recovery. If the server crashes at 2 AM, it stays down until someone wakes up and fixes it. This is what you get with a basic VPS and no monitoring. For most production services, two nines is not a target -- it is a failure state.

99.9% -- three nines

The practical baseline for any production SaaS application. Achieving three nines requires health checks and automatic restarts, a load balancer with at least two application instances, database replication or managed database service, basic monitoring with fast alerting, and a team that responds to incidents within minutes. Most modern cloud deployments on AWS, GCP, or similar platforms can hit three nines without heroic effort.

99.99% -- four nines

Four nines is where things get serious. You have 4 minutes of allowed downtime per month, which means zero tolerance for manual intervention during failures. This level requires automated failover across availability zones, zero-downtime deployments (blue-green or canary), multi-region redundancy with active-active or fast failover, circuit breakers and graceful degradation for all dependencies, and runbooks for every known failure mode. Most companies that claim four nines achieve it by excluding planned maintenance windows from their calculations.

99.999% -- five nines

Five nines allows 26 seconds of downtime per month. A single slow deployment that takes 30 seconds to roll forward would blow the budget. This is the domain of core internet infrastructure, financial systems, and emergency services. Achieving it requires fully redundant everything (network, compute, storage, DNS), real-time data replication across geographically separated regions, automatic failover that completes in seconds, chaos engineering to continuously verify resilience, and a 24/7 operations team with sub-minute response times. For most SaaS products, five nines is not a realistic target and not worth the cost. Three nines with fast incident response is a better use of engineering time.

How to calculate your own uptime from monitoring data

If you are running an uptime monitoring service, you already have the data you need. The approach depends on whether you are working with check-level data or incident-level data.

From check results

The simplest method. Count the total number of checks in your time period, count how many succeeded, and divide:

Uptime % = (Successful checks / Total checks) × 100

If you ran 43,200 checks in a month (one per minute) and 15 failed, your uptime is (43,200 - 15) / 43,200 × 100 = 99.965%. This method is straightforward, but it samples rather than measures: an outage's true start and end fall somewhere between checks, so the check count can misstate real downtime by up to one interval at each boundary. Fifteen consecutive failed one-minute checks mean an outage of roughly 15 minutes, but the true duration could be anywhere from just over 14 to just under 16 minutes.
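The count-based calculation can be sketched as follows (the function name is ours, for illustration):

```python
def uptime_from_checks(total_checks: int, failed_checks: int) -> float:
    """Uptime % from check counts: successful checks / total checks * 100."""
    return (total_checks - failed_checks) / total_checks * 100

# 43,200 one-minute checks in a 30-day month, 15 failures
print(round(uptime_from_checks(43_200, 15), 3))  # 99.965
```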

From incident data

A more accurate approach. Sum the total duration of all incidents in the period and use the standard formula:

Total downtime = sum of (incident end time - incident start time) for all incidents
Uptime % = (Total time - Total downtime) / Total time × 100
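As a sketch, assuming incidents are available as (start, end) datetime pairs (the actual shape of your monitoring data may differ), the incident-based calculation looks like this. Incidents that straddle the period boundary are clipped to the period:

```python
from datetime import datetime, timedelta

def uptime_from_incidents(incidents, period_start, period_end):
    """Uptime % over a period, given a list of (start, end) incident pairs."""
    total = period_end - period_start
    downtime = sum(
        (min(end, period_end) - max(start, period_start) for start, end in incidents),
        timedelta(),  # clip each incident to the period, then sum durations
    )
    return (total - downtime) / total * 100

incidents = [
    (datetime(2024, 3, 4, 2, 10), datetime(2024, 3, 4, 2, 40)),   # 30 minutes
    (datetime(2024, 3, 18, 9, 0), datetime(2024, 3, 18, 9, 15)),  # 15 minutes
]
pct = uptime_from_incidents(incidents, datetime(2024, 3, 1), datetime(2024, 3, 31))
print(round(pct, 3))  # 45 minutes down in a 30-day month -> 99.896
```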

CronAlert tracks both check results and incidents, so you can calculate uptime either way. The incident-based calculation is what most SLAs reference, since it captures contiguous downtime periods rather than individual check failures.

Planned maintenance and uptime calculations

Here is where uptime calculations get contentious: does scheduled maintenance count as downtime?

The answer depends entirely on your SLA. There are two common approaches:

Maintenance counts as downtime. The strictest interpretation. If users cannot reach your service, it is down -- regardless of whether you planned it. This is the approach most users prefer, because from their perspective, "planned downtime" and "unplanned downtime" feel the same.

Maintenance is excluded. Many enterprise SLAs explicitly carve out maintenance windows. The adjusted formula becomes:

Uptime % = (Total time - Planned maintenance - Unplanned downtime) / (Total time - Planned maintenance) × 100
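A minimal sketch of the maintenance-excluded calculation, subtracting planned maintenance from both the measured window and the downtime so that maintenance alone never drags uptime below 100% (function name ours):

```python
def uptime_excluding_maintenance(total_min, unplanned_min, planned_min):
    """Uptime % with planned maintenance carved out of the measured window."""
    measured = total_min - planned_min  # maintenance minutes are out of scope
    return (measured - unplanned_min) / measured * 100

# 43,200-minute month, 240 minutes of maintenance, 30 minutes of real downtime
print(round(uptime_excluding_maintenance(43_200, 30, 240), 2))  # 99.93
```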

CronAlert supports maintenance windows that suppress alerts during scheduled work. When you configure a maintenance window, checks continue running (so you have data), but failures during the window do not trigger alerts or count toward incident records. This gives you clean uptime numbers that reflect real operational reliability.

Whichever approach you use, be consistent and transparent about it. If your status page claims 99.99% uptime but excludes 4 hours of monthly maintenance, your users will notice the discrepancy.

How CronAlert tracks uptime

CronAlert gives you three layers of uptime data, each useful for different purposes.

Check results are the raw data. Every check records the HTTP status code, response time, and pass/fail result. You can query these through the API or view them in the dashboard. Retention ranges from 7 days on the free plan to 1 year on Business.

Incidents are created when a monitor transitions from up to down, and resolved when it recovers. Each incident records the start time, end time, duration, and the check results that triggered it. This is the data you use for SLA calculations and post-incident reviews.

Status pages show a 90-day uptime history for each monitor you choose to make public. Your users can see at a glance whether your service has been reliable. If you have not set one up yet, our status page guide walks through the process.

SLA compliance -- verifying vendor promises and proving your own

Uptime percentages are not just internal metrics. They have contractual implications in two directions.

Verifying vendor SLAs

If you rely on a third-party service that promises 99.9% uptime, you should be monitoring it independently. Set up a CronAlert monitor pointing at their API endpoint or health check URL, and let the data accumulate. When they breach their SLA, you will have timestamped evidence for a service credit claim instead of relying on their self-reported status page.

Proving your own SLA

If you offer an SLA to your customers, monitoring data is your proof of compliance. CronAlert's incident history and check result logs provide an auditable trail showing exactly when outages occurred and how long they lasted. Export this data monthly to build a compliance record.

For teams that need detailed operational records, CronAlert's Business plan includes audit logging that tracks every configuration change alongside the uptime data, giving you a complete picture for compliance reviews.

Why monitoring interval matters for accurate uptime

Your monitoring interval directly affects the accuracy of your uptime percentage. This is not a theoretical concern -- it changes your numbers in meaningful ways.

With 3-minute check intervals (CronAlert's free plan), the smallest detectable downtime event is 3 minutes. A 90-second outage that starts and resolves between two checks will never be recorded. Over a month, you get 14,400 data points. That is enough to catch most real incidents, but short blips may be invisible.

With 1-minute check intervals (CronAlert's paid plans), you get 43,200 data points per month -- three times the resolution. You catch shorter incidents, your uptime percentage is more accurate, and your incident start/end times are precise to within a minute.

The practical impact: a service that shows 99.97% uptime with 3-minute checks might show 99.93% with 1-minute checks, because the finer resolution catches incidents that the coarser interval missed. Neither number is wrong, but the 1-minute number is closer to reality.
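You can see the sampling effect with a small simulation (a sketch, not CronAlert's detection logic): count how many fixed-interval checks land inside a given outage window.

```python
def checks_failed(outage_start_s, outage_len_s, interval_s, month_s=30 * 86400):
    """Count monitoring checks that land inside an outage window."""
    return sum(
        outage_start_s <= t < outage_start_s + outage_len_s
        for t in range(0, month_s, interval_s)
    )

# A 90-second outage starting 30 seconds after a check
print(checks_failed(30, 90, 180))  # 3-minute interval: 0 -- the outage is invisible
print(checks_failed(30, 90, 60))   # 1-minute interval: 1 -- the outage is recorded
```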

If you are serious about SLA compliance or reporting accurate uptime to customers, 1-minute intervals are worth the upgrade. See our setup guide for details on configuring check intervals.

Frequently asked questions

How do you calculate uptime percentage?

Uptime percentage = (Total time - Downtime) / Total time × 100. For example, if your service was down for 45 minutes in a 30-day month (43,200 minutes), your uptime is (43,200 - 45) / 43,200 × 100 = 99.896%. You can also calculate it from monitoring check results: (Successful checks / Total checks) × 100.

What does five nines (99.999%) uptime mean?

Five nines means your service can be down for no more than 26 seconds per month, or about 5 minutes and 15 seconds per year. Achieving this requires fully redundant infrastructure, automated failover that completes in seconds, multi-region architecture, and a 24/7 operations team. It is the standard for critical infrastructure like core internet services and financial systems, but overkill for most SaaS products.

Does planned maintenance count against uptime?

It depends on your SLA. Some SLAs explicitly exclude scheduled maintenance windows from uptime calculations, while others count all downtime regardless of cause. The stricter interpretation (maintenance counts) is more honest from the user's perspective. If your SLA excludes maintenance, use: Uptime % = (Total time - Planned maintenance - Unplanned downtime) / (Total time - Planned maintenance) × 100.

Why does monitoring interval matter for uptime accuracy?

Your monitoring interval determines the resolution of your uptime data. With 3-minute checks, a brief 90-second outage might fall entirely between two successful checks and never be recorded. With 1-minute checks, you get three times the data points and catch shorter incidents. For accurate SLA reporting, 1-minute intervals are recommended.

What is a good uptime percentage for a SaaS product?

99.9% (three nines) is the standard target for most SaaS products, allowing about 43 minutes of downtime per month. Critical infrastructure and enterprise services typically target 99.99% (four nines), which allows only about 4 minutes per month. The right target depends on your user expectations, SLA commitments, and how much you are willing to invest in redundancy.

Ready to start tracking your uptime with real data? Create a free CronAlert account and have your first monitor running in under 5 minutes.