Opsgenie is one of the standard answers for "where should our pages go." It does on-call rotations, escalation policies, deduplication, native mobile push, and integrations with the chat and ticketing tools the rest of your team uses. The catch is that nothing in Opsgenie monitors anything itself — it routes alerts that other tools generate. Hooking up a real uptime monitor is the part that makes the whole thing useful.

This post walks through wiring CronAlert to Opsgenie end to end: setting up the API integration on the Opsgenie side, configuring the webhook channel on the CronAlert side, mapping the alert payload so Opsgenie gets useful information, deduplicating recurring alerts, and routing different monitor types to different teams.

Why route uptime alerts through Opsgenie

CronAlert can send alerts directly to email, Slack, Microsoft Teams, push notifications, and more. For most small teams that's enough. Opsgenie earns its place when one or more of these become true:

  • You have a real on-call rotation. Opsgenie handles the schedule, the timezone math, the holiday overrides, the weekly handoff. Encoding that in CronAlert channel routing is possible but tedious; Opsgenie does it natively.
  • You need escalation. If the primary on-call doesn't acknowledge within 5 minutes, page the secondary. If neither acknowledges within 15, page the manager. CronAlert can multicast to multiple channels, but it can't enforce escalation order with acknowledgments — that's an incident-management tool's job.
  • You want one alert UI for everything. Uptime alerts, application errors, infrastructure alerts, security alerts. Opsgenie collapses all of them into one stream with one set of rotations and one set of mobile notifications. Engineers don't need to know which tool generated the alert; they just need to know what's broken.
  • You're already on Atlassian. If your team uses Jira, Confluence, or Statuspage, the integration ergonomics with Opsgenie are tight — alerts can auto-create Jira issues, post to Statuspage, link to Confluence runbooks. The stack pays off.

If none of those apply, send alerts directly to a chat channel and skip the indirection. Slack alerts or Microsoft Teams alerts are a faster path for solo developers and small teams.

Step 1: Create an Opsgenie API integration

Opsgenie integrations are scoped to a team, so first decide which team should receive the alerts. For a single rotation across the whole company, use the default team. For a larger organization with multiple on-call rotations, create one integration per team.

  1. Open Opsgenie and go to Teams, then click the team that should receive the alerts.
  2. Open the Integrations tab and click Add integration.
  3. Search for and select API (not "Webhook" — the API integration is the inbound one). The API integration accepts JSON payloads from any source.
  4. Give the integration a name like CronAlert (production) or CronAlert (frontend team) so it's identifiable in alert metadata.
  5. Copy the API URL shown in the integration settings. It looks like https://api.opsgenie.com/v2/alerts with an API key. You'll paste this into CronAlert in the next step.
  6. Save the integration and verify it shows as Enabled.

On EU accounts the host is api.eu.opsgenie.com; double-check the URL Opsgenie shows you and don't hand-edit it.
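
Before wiring anything into CronAlert, it's worth confirming the key works by posting a throwaway alert straight to the Alert API. A minimal sketch, assuming Python with the requests library; the URL and key are placeholders for the values from your integration's settings page:

import requests

# Placeholder values: use the API URL and key from the integration you just created,
# and api.eu.opsgenie.com if you're on an EU account.
OPSGENIE_URL = "https://api.opsgenie.com/v2/alerts"
API_KEY = "YOUR_API_KEY"

resp = requests.post(
    OPSGENIE_URL,
    headers={
        "Authorization": f"GenieKey {API_KEY}",
        "Content-Type": "application/json",
    },
    json={"message": "CronAlert integration test (safe to close)"},
    timeout=10,
)

# Opsgenie accepts alert creation asynchronously; a 202 means the alert was queued.
print(resp.status_code, resp.text)

A 202 response means the integration is live and the test alert should appear on the team within seconds; an authentication error at this point usually means the key or the GenieKey scheme in the header is wrong.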

Step 2: Add the webhook channel in CronAlert

CronAlert sends Opsgenie alerts through its generic webhook alert channel. The webhook channel posts a JSON payload to the URL you configure, and the Opsgenie API integration accepts it directly.

  1. In CronAlert, go to Alert Channels in the app sidebar.
  2. Click Add channel and choose Webhook.
  3. Name the channel — for example "Opsgenie (production rotation)".
  4. Paste the Opsgenie API URL from Step 1 into the URL field.
  5. Set the request method to POST.
  6. Add a header: Authorization: GenieKey YOUR_API_KEY, where the key is the one Opsgenie generated for your integration. Do not include the literal word "Bearer" — Opsgenie uses its own GenieKey scheme.
  7. Set the content type to application/json.
  8. Use the JSON payload template described in the next section.
  9. Save and run the test fire — Opsgenie should immediately show a test alert in the integration team.

Step 3: Configure the alert payload

Opsgenie's API accepts a flexible JSON payload; the only required field is message, but a real production payload should include enough information for an on-call engineer to triage without leaving the alert. The right CronAlert webhook payload looks like this:

{
  "message": "{{monitor.name}} is DOWN",
  "alias": "cronalert-{{monitor.id}}",
  "description": "{{monitor.url}} returned {{check.statusCode}} ({{check.error}}) from {{check.region}} at {{check.checkedAt}}. Failed {{check.consecutiveFailures}} consecutive checks.",
  "priority": "P2",
  "source": "CronAlert",
  "tags": ["uptime", "{{monitor.tag}}"],
  "details": {
    "monitor_id": "{{monitor.id}}",
    "url": "{{monitor.url}}",
    "status_code": "{{check.statusCode}}",
    "error": "{{check.error}}",
    "region": "{{check.region}}",
    "incident_url": "https://cronalert.com/app/monitors/{{monitor.id}}"
  }
}

A few specific notes on the fields that matter most:

  • alias is the deduplication key. Including the monitor ID means repeated "down" alerts from the same monitor are collapsed into a single open alert in Opsgenie. The recovery alert closes the same alias automatically. A quick way to verify the dedup behavior is sketched after this list.
  • priority drives Opsgenie's escalation rules. Production monitors should be P2 by default. Reserve P1 for the small set of monitors where any failure is automatically a customer-facing incident — payment processing, primary login, the API your customers integrate against.
  • tags let you write Opsgenie routing rules that match on tag instead of integration. Useful if you want one integration but multiple downstream rotations.
  • details shows up as structured key-value pairs in the Opsgenie alert UI and on the mobile app. Include enough that the on-call engineer can decide whether to escalate without opening a browser.
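
To confirm the alias-based deduplication is behaving, fetch the open alert by its alias and look at its count, which rises each time a duplicate is folded in. A small sketch, assuming Python with requests; the monitor ID and key are made-up values:

import requests

API_KEY = "YOUR_API_KEY"
ALIAS = "cronalert-123"  # what the alias template produces for a hypothetical monitor 123

resp = requests.get(
    f"https://api.opsgenie.com/v2/alerts/{ALIAS}",
    params={"identifierType": "alias"},
    headers={"Authorization": f"GenieKey {API_KEY}"},
    timeout=10,
)
alert = resp.json()["data"]

# A flapping monitor shows up as one open alert with a rising count,
# not a stack of separate pages.
print(alert["status"], alert["count"], alert["acknowledged"])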

Step 4: Send a recovery (close) action

By default, the webhook above creates an alert when a monitor goes down but does nothing when it recovers. Opsgenie supports a "close" action on the same Alert API: a POST to /v2/alerts/{alias}/close?identifierType=alias. The cleanest pattern is to add a second CronAlert webhook channel specifically for recovery events:

  1. Create a second webhook channel called "Opsgenie (recovery)".
  2. Set the URL to https://api.opsgenie.com/v2/alerts/cronalert-{{monitor.id}}/close?identifierType=alias. CronAlert substitutes the monitor ID at send time.
  3. Use the same Authorization header as the down channel.
  4. Set a minimal POST body: { "source": "CronAlert", "note": "Monitor recovered at {{check.checkedAt}}." }
  5. In each monitor's alert configuration, attach the down channel to "down" events and the recovery channel to "up" events.

This pattern keeps Opsgenie in sync with CronAlert state without spamming new alerts on recovery. The on-call engineer sees the alert close itself; the activity log shows when, from where, and why.
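
For reference, the request that recovery channel ends up making is equivalent to this sketch (the monitor ID and key are placeholders; in the channel itself, CronAlert fills in the real monitor ID at send time):

import requests

API_KEY = "YOUR_API_KEY"
MONITOR_ID = "123"  # placeholder; the channel URL template substitutes the real monitor ID

resp = requests.post(
    f"https://api.opsgenie.com/v2/alerts/cronalert-{MONITOR_ID}/close",
    params={"identifierType": "alias"},
    headers={
        "Authorization": f"GenieKey {API_KEY}",
        "Content-Type": "application/json",
    },
    json={"source": "CronAlert", "note": "Monitor recovered."},
    timeout=10,
)

# Close actions are accepted asynchronously, the same as alert creation.
print(resp.status_code)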

Routing different monitors to different teams

Most organizations beyond a certain size want different monitor failures to page different teams. The pattern is straightforward:

  1. Create one Opsgenie API integration per receiving team. Each gives you a different URL and key.
  2. Create one CronAlert webhook channel per integration, named for the team — "Opsgenie (frontend)", "Opsgenie (platform)", "Opsgenie (billing)".
  3. Attach the right channel to each monitor based on what it monitors. A login page goes to the frontend channel; a database health endpoint goes to the platform channel; a Stripe webhook receiver goes to the billing channel.

This is more durable than encoding routing inside Opsgenie's routing rules — the routing is visible in CronAlert's monitor configuration, and the on-call team's Opsgenie view only contains the alerts they actually own.
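
Once each team has its own integration, it's worth confirming every key is live before trusting the routing. A quick sketch, assuming Python with requests; the team names and keys are placeholders:

import requests

# Placeholder keys; each comes from that team's own API integration in Opsgenie.
TEAM_KEYS = {
    "frontend": "FRONTEND_TEAM_API_KEY",
    "platform": "PLATFORM_TEAM_API_KEY",
    "billing": "BILLING_TEAM_API_KEY",
}

for team, key in TEAM_KEYS.items():
    resp = requests.post(
        "https://api.opsgenie.com/v2/alerts",
        headers={"Authorization": f"GenieKey {key}", "Content-Type": "application/json"},
        # P5 is Opsgenie's lowest priority, which keeps the routing test low-noise.
        json={"message": f"CronAlert routing test for the {team} team", "priority": "P5"},
        timeout=10,
    )
    print(team, resp.status_code)

Each test alert should land on the corresponding team; close them once you've confirmed the routing.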

Setting escalation policies

Opsgenie's escalation policies live entirely on the Opsgenie side, but they're worth setting up correctly because the wrong defaults make uptime alerts feel either too noisy or too easy to miss.

  • Acknowledgment timeout: 5 minutes. If the primary on-call doesn't acknowledge within 5 minutes, escalate to the secondary. Shorter than that and you'll page two people for every alert; longer than that and a real outage stalls.
  • Multi-channel notification on escalation. Push, SMS, and voice on the same alert when it escalates. The first level can be push-only; escalation should reach for the louder channels.
  • Suppress reminders for acknowledged alerts. Once the on-call engineer acknowledges, stop the reminders. The default of "remind every 15 minutes until close" produces noise during long incidents.
  • Schedule overrides for known windows. If your on-call rotation has a known weekly handoff or a holiday rotation, encode it in Opsgenie schedules rather than swapping channels in CronAlert. Opsgenie is designed for this.

Reducing false-positive pages

Opsgenie pages the on-call engineer for every CronAlert webhook fire. If CronAlert fires false positives, Opsgenie pages on false positives. The deduplication on alias helps a lot — a flapping monitor pages once, not ten times — but the right pattern is to prevent the false positive at the source.

  • Multi-region quorum. Configure the monitor to require failure in at least 2 of 5 regions before alerting. Single-region blips never page Opsgenie. See multi-region monitoring.
  • Consecutive-check thresholds. Require N consecutive failures before the alert fires. CronAlert uses a sensible default; tune per monitor based on how stable the target is.
  • Maintenance windows. Don't page on planned downtime. Set a maintenance window for known deploy times, weekly database windows, or third-party scheduled outages.
  • Tier the priorities. Don't page P1 for staging or internal tools. Use P3 or P4 with a 30-minute escalation timeout for those, so they show up in the alert stream without paging the on-call rotation immediately.

The general playbook for cutting noise without missing real outages is in how to reduce alert fatigue.

Testing the integration end to end

Before you trust the integration in production, verify it once with a real-looking test:

  1. Create a deliberately broken monitor in CronAlert — a URL like https://example.invalid/healthz that will fail every check. Set it to a 1-minute interval.
  2. Attach the Opsgenie webhook channel to it.
  3. Wait for the consecutive-failure threshold to trip. Confirm Opsgenie creates an alert with the right priority, the right team assignment, and the right escalation policy (the sketch after this list shows a way to check from outside the Opsgenie UI).
  4. Acknowledge the alert in Opsgenie. Confirm escalation stops.
  5. Make the monitor pass again (for example, point it at a URL that returns 200). Confirm the recovery webhook fires and Opsgenie closes the alert automatically, then delete the test monitor.
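
If you'd rather verify step 3 from outside the Opsgenie UI, listing open alerts by tag works too. A sketch, assuming Python with requests and the "uptime" tag from the payload template earlier in this post; adjust the query if your tags differ:

import requests

API_KEY = "YOUR_API_KEY"

# "status:open AND tag:uptime" uses Opsgenie's alert search syntax and matches
# the tag set in the webhook payload template.
resp = requests.get(
    "https://api.opsgenie.com/v2/alerts",
    params={"query": "status:open AND tag:uptime", "limit": 5},
    headers={"Authorization": f"GenieKey {API_KEY}"},
    timeout=10,
)

for alert in resp.json()["data"]:
    print(alert["createdAt"], alert["priority"], alert["message"])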

Untested alert paths fail at the worst possible time. Five minutes of testing here saves an incident later.

Frequently asked questions

What's the difference between using Opsgenie vs PagerDuty?

Functionally similar for paging. Opsgenie integrates more tightly with the Atlassian stack (Jira, Statuspage, Confluence); PagerDuty is the larger standalone vendor. Pick whichever your team already pays for or whichever fits the rest of your stack.

Does CronAlert have a native Opsgenie integration?

CronAlert uses its generic webhook channel, which posts JSON directly to Opsgenie's API integration. No separate integration type needed — the webhook covers it. The same approach works for Splunk On-Call, incident.io, FireHydrant, and other incident management tools.

How do I deduplicate repeated alerts in Opsgenie?

Use the monitor ID as the alias in the webhook payload. Opsgenie collapses repeated alerts with the same alias into one open alert, and the recovery webhook closes the same alias automatically.

Can I route different monitors to different Opsgenie teams?

Yes. Create one Opsgenie API integration per team, one CronAlert webhook channel per integration, and attach the right channel to each monitor.

What priority should I set for uptime alerts in Opsgenie?

P2 by default for production monitor failures, with escalation to P1 if not acknowledged. P3 or P4 for staging and internal tools. Reserve P1 for monitors where any failure is automatically a customer-facing incident.

Wire up Opsgenie in 10 minutes

The full setup is one Opsgenie API integration, one CronAlert webhook channel, an optional second channel for recovery actions, and a test monitor to verify the path. Total time is about ten minutes if you have the Opsgenie API key handy.

Create a CronAlert account and start with a single monitor wired to a single Opsgenie team before scaling up. For related alert routing playbooks, see PagerDuty alerts, Microsoft Teams alerts, incident response for small teams, and reducing alert fatigue.