Email is the lowest-common-denominator alert channel. Every team has email. Every monitoring tool supports it. There is no API key to rotate, no bot to install, no webhook signature to verify. Type an address into a form, click save, and you are done.

But email has trade-offs that newer channels do not. It is slower than push or webhook. It is filtered by spam systems that do not know your alert sender. It is invisible at 3am unless someone has email push notifications enabled on their phone. And it is easy to start ignoring once volume creeps up.

This post walks through how to set up email alerts in CronAlert correctly, the deliverability and noise problems to watch out for, and when email is the right channel versus a backup channel versus the wrong tool entirely.

How to add an email alert channel in CronAlert

Email alerts are free on every plan. Setup takes about 30 seconds.

  1. Go to Alert Channels in the app sidebar.
  2. Click Add channel and choose Email.
  3. Enter the recipient address (or a mailing list address — see below).
  4. Optional: give the channel a name like "Engineering on-call" so you remember what it does when you are looking at it six months from now.
  5. Save. CronAlert sends a verification email; click the link to confirm.
  6. Attach the channel to one or more monitors via the monitor settings page.

From now on, every state change on those monitors fires an email: down, recovered, SSL expiring, certificate error. You can also restrict which event types fire emails per channel — useful when you want recovery notifications to go to a quieter inbox.
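
If you would rather manage channels from code, the same setup can in principle be scripted. The sketch below is hypothetical: the base URL, endpoint path, payload fields, and auth header are assumptions made to illustrate the shape, not a published CronAlert API, so check the actual API reference before relying on it.

```python
import requests

# Hypothetical sketch only: the base URL, endpoint, payload fields, and auth
# header are assumptions, not a documented CronAlert API.
API_BASE = "https://api.cronalert.example/v1"
TOKEN = "your-api-token"

resp = requests.post(
    f"{API_BASE}/alert-channels",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "type": "email",
        "name": "Engineering on-call",   # same naming advice as step 4
        "recipient": "[email protected]",
        # Mirrors the per-channel event filtering described above:
        "events": ["down", "recovered", "ssl_expiring", "certificate_error"],
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```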

What an alert email looks like

A CronAlert downtime email contains the monitor name, the URL, the failure timestamp in your timezone, the status code or error message, and a one-click link back to the monitor's incident page. The subject line is structured for easy scanning and filtering — for example, [CronAlert] DOWN: api.example.com (502 Bad Gateway).

The structured subject line matters more than it seems. Mail clients and gateway rules can filter on it, mailing list archives are searchable by it, and a phone notification that shows the first 80 characters of the subject conveys the entire incident at a glance — you do not need to open the email to know what is broken.
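
Because the format is stable, the subject is also easy to parse mechanically, whether in a mail rule or a script. A minimal sketch in Python, assuming the "[CronAlert] STATE: target (detail)" shape shown above:

```python
import re

# Parses the "[CronAlert] STATE: target (detail)" subject shape shown above.
# The exact set of states is an assumption; DOWN is the documented example.
SUBJECT_RE = re.compile(
    r"^\[CronAlert\] (?P<state>[A-Z ]+): (?P<target>\S+) \((?P<detail>[^)]*)\)$"
)

def parse_alert_subject(subject: str) -> dict | None:
    """Return state, target, and detail from an alert subject, or None."""
    match = SUBJECT_RE.match(subject)
    return match.groupdict() if match else None

print(parse_alert_subject("[CronAlert] DOWN: api.example.com (502 Bad Gateway)"))
# {'state': 'DOWN', 'target': 'api.example.com', 'detail': '502 Bad Gateway'}
```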

When email is the right channel

Email alerts work well in a handful of cases:

  • Solo developers and small teams. One person, one inbox, push notifications on. Email is the simplest possible setup that does the job.
  • Audit and archive. Even teams that page through other channels keep email enabled because email is the most durable record of an incident — searchable, threadable, and easy to forward to a customer or attach to a postmortem ticket.
  • Group mailboxes for shared rotations. A support@ or oncall@ address that fans out to a small team is a reasonable single-channel setup for a non-paging tier of incidents.
  • Helpdesk and ticket integration. Tools like Zendesk, Linear, Jira Service Management, and Freshdesk all accept inbound email. Pointing your alert email at a ticket-creating address turns every outage into a ticket automatically — convenient if your incident workflow already lives in a ticketing tool.
  • Customer-facing notifications. If you want certain incidents to flow into a customer support queue or trigger a status page update, email is the universal glue.

When email is the wrong channel

Email alone is not a paging channel. The latency is fine — usually well under a minute end to end — but the human side breaks down in three predictable ways:

  • Inbox triage. If the on-call engineer triages email twice a day, your alert sits unread until then. The protocol is fast; the workflow is slow.
  • Notification fatigue. People mute email notifications on their phone because most email is not urgent. The alert arrives, the phone stays silent.
  • Threading and grouping. Most mail clients will collapse five "site is down" emails from the same sender into one thread, hiding new incidents under old ones. The alert arrives but is visually buried.

For 24/7 services with on-call rotations, email needs to be paired with something that can wake someone up. The standard pairings are PagerDuty for escalation, Slack for team awareness, push notifications for individual urgency, or SMS through a custom webhook integration.
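
Of those, SMS via a custom webhook is the one that needs glue code. A minimal sketch, assuming a JSON webhook payload with status, monitor_name, and error fields (the field names are assumptions; confirm them against the webhook payload docs) and Twilio for the SMS leg:

```python
from flask import Flask, request
from twilio.rest import Client

app = Flask(__name__)
twilio = Client("ACXXXXXXXX", "your-auth-token")  # your Twilio credentials

@app.route("/cronalert-webhook", methods=["POST"])
def forward_to_sms():
    # Assumed payload shape; confirm the field names against the webhook docs.
    event = request.get_json(force=True)
    if event.get("status") == "down":  # page on downtime only, not recovery
        twilio.messages.create(
            to="+15555550123",      # the on-call engineer's phone
            from_="+15555550100",   # your Twilio number
            body=f"DOWN: {event.get('monitor_name')} ({event.get('error')})",
        )
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```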

Deliverability: the part everyone gets wrong

The most common email-alert failure is not "the email never sent." It is "the email sent and ended up in junk, and nobody saw it for three hours." A few things cause this:

Corporate email gateways

Microsoft 365, Google Workspace, Mimecast, and Proofpoint all run aggressive spam filters that can block or quarantine email from senders they do not recognize. The first time CronAlert sends an alert to a corporate inbox, there is a non-trivial chance the gateway holds it for review.

Fix this once, up front. Add CronAlert's sending address and domain to your organization's allowlist at the gateway level. In Microsoft 365 this is the Tenant Allow/Block List under Defender. In Google Workspace it is the Email Allowlist under Gmail's spam, phishing, and malware settings. Send a test alert immediately after to confirm delivery.

Personal spam filters

Even after the gateway lets the mail through, an individual user's spam filter can flag it. The fix here is per-user: have each recipient mark the first alert as "Not spam" and add the sender to their contacts. After a few alerts the personal filter learns. Until then, the alerts may end up in junk.

Mailing list quirks

Sending alerts to a mailing list address like [email protected] requires that list to accept mail from external senders. Many corporate distribution lists reject external mail by default, which silently drops every alert. Test the list by sending a manual mail from outside your domain before you trust it for production alerts.
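
That manual test is easy to script with Python's standard library. A sketch, with the SMTP host, credentials, and list address as placeholders for your own:

```python
import smtplib
from email.message import EmailMessage

# The point of the test: the sender must be OUTSIDE your own domain.
msg = EmailMessage()
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "[CronAlert] deliverability test, please ignore"
msg.set_content("Testing that this list accepts mail from external senders.")

with smtplib.SMTP("smtp.external-provider.example", 587) as smtp:
    smtp.starttls()
    smtp.login("[email protected]", "app-password")
    smtp.send_message(msg)
```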

The acid test

Whenever you add a new email channel, immediately trigger a test alert and confirm it shows up in the right inbox, in the right folder, with notifications firing. Half-broken email channels are silent — there is no error message in CronAlert when a recipient's spam filter eats an alert. The only way to know is to verify by hand.
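
If you would rather verify by script than by eyeballing folders, here is a sketch using Python's standard imaplib. The host, credentials, and junk-folder name all vary by provider:

```python
import imaplib

HOST = "imap.example.com"        # your provider's IMAP server
USER = "[email protected]"
PASSWORD = "app-password"        # use an app password, not your real one

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    # Check the inbox and the junk folder; folder names vary by provider.
    for folder in ("INBOX", "Junk"):
        status, _ = imap.select(folder, readonly=True)
        if status != "OK":
            continue
        status, data = imap.search(None, '(SUBJECT "[CronAlert]")')
        hits = data[0].split() if status == "OK" else []
        print(f"{folder}: {len(hits)} CronAlert message(s)")
```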

Email plus a faster channel: the recommended pattern

For most teams beyond a single solo developer, the right setup is email and a faster channel, not email instead of one. Pair email with one of:

  • Slack or Discord for team awareness and quick discussion. See Slack alerts and Discord alerts.
  • Push notifications for individual urgency without phone-level complexity. Setup guides for iOS, Android, Mac, and Windows.
  • PagerDuty for on-call escalation with retry, acknowledgment, and rotation. See PagerDuty alerts.
  • Microsoft Teams for organizations that live in Teams. See Teams alerts.

The faster channel handles the page. Email handles the archive. When the postmortem happens a week later, the email thread has the timestamps, the status codes, and the order of events.

Reducing email alert noise

Once you have email alerts on a handful of monitors, noise becomes the problem. A flapping monitor sends "down then up then down" emails on a loop. A scheduled maintenance window fires "down, down, down" emails that everyone ignores. After two weeks of this, the team filters CronAlert to a folder and stops looking.

A few specific mitigations:

  • Multi-region quorum. Configure monitors to alert only when the failure is confirmed by 2+ regions, eliminating most network-blip false positives. See multi-region monitoring.
  • Consecutive-check verification. Require N consecutive failures before alerting (sketched after this list). CronAlert defaults to this; tune the threshold per monitor based on how stable the site is.
  • Maintenance windows. Suppress alerts during planned downtime. See maintenance windows.
  • Recovery emails on a separate channel. Send "down" alerts to your urgent channel and "recovery" alerts to a quieter one. Recovery is reassuring but rarely urgent.
  • Per-monitor channel routing. Critical monitors fire email plus push plus PagerDuty. Internal tools fire email only. Don't treat every monitor the same.
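
The consecutive-check idea from the list above is simple enough to sketch in a few lines. The threshold of 3 is an arbitrary example; CronAlert's own default and per-monitor tuning live in the monitor settings:

```python
from collections import defaultdict

FAILURE_THRESHOLD = 3  # arbitrary example; tune per monitor

# Streak of consecutive failed checks, per monitor name.
consecutive_failures: dict[str, int] = defaultdict(int)

def record_check(monitor: str, ok: bool) -> bool:
    """Record one check result. Return True when an alert should fire."""
    if ok:
        consecutive_failures[monitor] = 0
        return False
    consecutive_failures[monitor] += 1
    # Fire exactly once, when the streak first reaches the threshold.
    return consecutive_failures[monitor] == FAILURE_THRESHOLD
```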

The full noise-reduction playbook is in the alert fatigue guide.

Email alert anti-patterns

One inbox, no rules

Sending every alert from every monitor to one personal inbox with no filtering is the fastest way to ignore alerts. After a month, the alerts blend into the noise of regular email. Set up at least a folder rule that pulls CronAlert mail out of the main inbox into its own folder.

Email-only on a 24/7 service

If your business depends on the site being up at 3am, email alone is not enough. Add at least one channel that can wake someone up. The marginal cost is low and the difference between a 15-minute outage and a 5-hour outage is enormous. The math is in the cost of downtime guide.

A shared inbox with no rotation

If oncall@ goes to five people and there is no defined rotation, the bystander effect kicks in: everyone assumes someone else has it. Either route alerts to one person at a time (with a real rotation) or pair the shared inbox with a paging channel that has explicit on-call ownership.

Cron job heartbeats by email

Cron jobs that email "I ran successfully" every day are an anti-pattern. Nobody reads them. The signal you actually want is "the job did not run today" — which is what heartbeat monitoring provides. Email the failures, not the successes.
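
The shape of that pattern, as a sketch: wrap the job so it pings a heartbeat URL only on success, and let the monitor alert on silence. The URL and job path below are hypothetical:

```python
import subprocess
import urllib.request

# Hypothetical heartbeat URL; CronAlert would alert when pings stop arriving.
HEARTBEAT_URL = "https://cronalert.example/ping/your-check-id"

# Run the real job (path is hypothetical).
result = subprocess.run(["/usr/local/bin/nightly-backup.sh"])

if result.returncode == 0:
    # Success: ping the heartbeat. No email, no noise.
    urllib.request.urlopen(HEARTBEAT_URL, timeout=10)
# Failure: stay silent. The missing ping is what fires the alert.
```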

Frequently asked questions

Are email alerts fast enough for a real outage?

The protocol is fast — usually 5 to 60 seconds end to end. The bottleneck is the human, not the email. If the on-call engineer has push notifications enabled, email is fine. If email is checked twice a day, alerts will sit unread. For overnight on-call, pair email with push notifications, SMS, or a PagerDuty escalation.

Can I send uptime alerts to multiple email addresses?

Yes. Use a single mailing list address (for example, [email protected]) that fans out to the team, or create multiple email channels, each pointing at one person. Mailing lists are easier to manage; per-recipient channels are more reliable. Test deliverability either way.

What happens if my email provider is down during an outage?

The alert channel goes silent. This is the argument for multiple channels: pair email with at least one other (Slack, push, webhook) so a single provider issue cannot silence every alert at once.

Why are my uptime alert emails going to spam?

Usually because the corporate gateway or personal filter does not recognize the sender. Add CronAlert's sending address to your gateway allowlist, mark the first alert as "Not spam," and add the sender to your contacts. Test with a manual alert to confirm delivery.

Should I disable email alerts and only use Slack or PagerDuty?

No. Keep email enabled as a secondary channel even if it is not your primary one. It is the most durable archive of an incident — searchable, threadable, attachable to tickets. When Slack or PagerDuty has a control plane outage, email keeps working.

Set up email alerts in 30 seconds

Email alerts are free on every CronAlert plan, including the free tier. Create an account, add an email channel, attach it to your first monitor, and trigger a test to verify deliverability. Pair it with a faster channel — Slack, push, or PagerDuty — for anything that needs to wake someone up.

For the full alert-routing playbook, see how to reduce alert noise without missing real outages.