Last month a team I know deployed a new payment webhook handler to production. The deploy succeeded. The service started. Nobody created a monitor for it. Three days later, the endpoint started returning 502s after a database connection pool change. It was down for eleven hours before a customer support ticket surfaced the problem. The fix took four minutes. The detection took eleven hours.
This is the gap that CI/CD-integrated monitoring closes. When monitor creation is a manual step that happens after deploy -- or more often, a manual step that someone forgets -- you end up with unmonitored production services. The fix is straightforward: make monitoring part of the deployment itself, so every service that goes live is automatically watched.
Why manual monitor creation does not scale
If your team ships one service a quarter, clicking through a dashboard to set up a monitor is fine. But most teams are deploying far more frequently than that. Microservices multiply. Staging environments spin up and down. Feature branches get preview deployments. Each one needs monitoring, and each one is easy to forget.
The problems compound:
- Coverage gaps -- new services go unmonitored until someone remembers to add them. That gap can last hours or weeks.
- Stale monitors -- services get renamed, URLs change, endpoints move behind a new API gateway. The monitor still points to the old URL and reports "up" because the old URL returns a redirect or a default page.
- Orphaned monitors -- decommissioned services leave behind monitors that clutter the dashboard and waste check quota.
- Inconsistent configuration -- one engineer sets up keyword validation and custom headers. Another just enters the URL. Your monitoring coverage varies by who happened to create each monitor.
The solution is treating monitoring like infrastructure: defined in code, deployed automatically, and torn down when no longer needed. If you are already familiar with what uptime monitoring does, the next step is automating it.
CronAlert REST API basics
CronAlert exposes a REST API that supports everything the dashboard does -- creating, updating, listing, and deleting monitors programmatically. The key endpoints for CI/CD integration are:
| Method | Endpoint | Purpose |
|---|---|---|
| GET | /api/v1/monitors | List monitors (filter by name/URL to check if one exists) |
| POST | /api/v1/monitors | Create a new monitor |
| PUT | /api/v1/monitors/:id | Update an existing monitor |
| DELETE | /api/v1/monitors/:id | Delete a monitor |
| GET | /api/v1/monitors/:id/checks | Get recent check results (for post-deploy verification) |
Authentication is via Bearer token. Generate an API key at Settings > API Keys and pass it in the Authorization header. Write operations require a Pro plan or higher. For a full walkthrough of every endpoint, see the REST API guide.
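Most of the scripts in this guide hinge on one idiom: list all monitors, then use jq to pull out the id of the one watching a given URL. Here it is in isolation, run against a hypothetical response body (the id value and the `data` wrapper are assumptions matching the shape the later examples rely on):

```shell
# Hypothetical /api/v1/monitors response body -- id and field names are illustrative
RESPONSE='{"data":[{"id":"mon_123","name":"Checkout API","url":"https://checkout.example.com/health"}]}'
TARGET_URL="https://checkout.example.com/health"

# Select the monitor whose url matches, and print its id
MONITOR_ID=$(echo "$RESPONSE" | jq -r ".data[] | select(.url == \"$TARGET_URL\") | .id")
echo "$MONITOR_ID"   # prints mon_123
```

An empty result means no monitor exists for that URL yet, which is exactly the branch condition the pipeline examples below use.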
GitHub Actions: create monitors on deploy
Here is a GitHub Actions workflow that creates or updates a CronAlert monitor after a successful deployment. Store your API key as a repository secret named CRONALERT_API_KEY.
# .github/workflows/deploy.yml
name: Deploy and Monitor

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Deploy to production
        run: ./deploy.sh

      - name: Create or update CronAlert monitor
        env:
          CRONALERT_KEY: ${{ secrets.CRONALERT_API_KEY }}
          SERVICE_NAME: "checkout-api"
          SERVICE_URL: "https://checkout.example.com/health"
        run: |
          # Check if monitor already exists
          EXISTING=$(curl -s https://cronalert.com/api/v1/monitors \
            -H "Authorization: Bearer $CRONALERT_KEY" \
            | jq -r ".data[] | select(.url == \"$SERVICE_URL\") | .id")

          if [ -n "$EXISTING" ]; then
            echo "Updating existing monitor $EXISTING"
            curl -s -X PUT "https://cronalert.com/api/v1/monitors/$EXISTING" \
              -H "Authorization: Bearer $CRONALERT_KEY" \
              -H "Content-Type: application/json" \
              -d "{
                \"name\": \"$SERVICE_NAME\",
                \"url\": \"$SERVICE_URL\",
                \"method\": \"GET\",
                \"expectedStatusCode\": 200
              }"
          else
            echo "Creating new monitor for $SERVICE_NAME"
            curl -s -X POST https://cronalert.com/api/v1/monitors \
              -H "Authorization: Bearer $CRONALERT_KEY" \
              -H "Content-Type: application/json" \
              -d "{
                \"name\": \"$SERVICE_NAME\",
                \"url\": \"$SERVICE_URL\",
                \"method\": \"GET\",
                \"expectedStatusCode\": 200
              }"
          fi

This checks whether a monitor for the URL already exists. If it does, the monitor gets updated (useful if you changed the service name or other properties). If it does not, a new one is created. No duplicates, no gaps.
GitLab CI: the same pattern
In GitLab CI, the logic is identical -- the syntax just differs. Store your API key as a CI/CD variable named CRONALERT_API_KEY in your project settings.
# .gitlab-ci.yml
stages:
  - deploy
  - monitor

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - main

setup_monitoring:
  stage: monitor
  needs: [deploy_production]
  script:
    - |
      EXISTING=$(curl -s https://cronalert.com/api/v1/monitors \
        -H "Authorization: Bearer $CRONALERT_API_KEY" \
        | jq -r '.data[] | select(.url == "https://api.example.com/health") | .id')

      if [ -n "$EXISTING" ]; then
        curl -s -X PUT "https://cronalert.com/api/v1/monitors/$EXISTING" \
          -H "Authorization: Bearer $CRONALERT_API_KEY" \
          -H "Content-Type: application/json" \
          -d '{
            "name": "Production API",
            "url": "https://api.example.com/health",
            "method": "GET",
            "expectedStatusCode": 200
          }'
      else
        curl -s -X POST https://cronalert.com/api/v1/monitors \
          -H "Authorization: Bearer $CRONALERT_API_KEY" \
          -H "Content-Type: application/json" \
          -d '{
            "name": "Production API",
            "url": "https://api.example.com/health",
            "method": "GET",
            "expectedStatusCode": 200
          }'
      fi
  only:
    - main
The needs directive ensures the monitoring step only runs after a successful deployment. If the deploy fails, no monitor gets created for a service that is not actually running.
Generic CI/CD integration with curl
Not using GitHub or GitLab? The pattern works anywhere you can run a shell script -- CircleCI, Jenkins, Buildkite, Bitbucket Pipelines, or a plain bash deploy script. Here is a reusable function:
#!/bin/bash
# ensure-monitor.sh -- create or update a CronAlert monitor
# Usage: ./ensure-monitor.sh "Service Name" "https://example.com/health"

SERVICE_NAME="$1"
SERVICE_URL="$2"
API_KEY="${CRONALERT_API_KEY:?Set CRONALERT_API_KEY environment variable}"
BASE_URL="https://cronalert.com/api/v1"

# Find existing monitor by URL
MONITOR_ID=$(curl -s "$BASE_URL/monitors" \
  -H "Authorization: Bearer $API_KEY" \
  | jq -r ".data[] | select(.url == \"$SERVICE_URL\") | .id")

if [ -n "$MONITOR_ID" ]; then
  echo "Updating monitor $MONITOR_ID for $SERVICE_NAME"
  curl -sf -X PUT "$BASE_URL/monitors/$MONITOR_ID" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -d "{
      \"name\": \"$SERVICE_NAME\",
      \"url\": \"$SERVICE_URL\",
      \"method\": \"GET\",
      \"expectedStatusCode\": 200
    }"
else
  echo "Creating monitor for $SERVICE_NAME"
  curl -sf -X POST "$BASE_URL/monitors" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -d "{
      \"name\": \"$SERVICE_NAME\",
      \"url\": \"$SERVICE_URL\",
      \"method\": \"GET\",
      \"expectedStatusCode\": 200
    }"
fi
Call it from any pipeline step: ./ensure-monitor.sh "Checkout API" "https://checkout.example.com/health". The -s flag suppresses curl's progress output, and -f makes curl return a non-zero exit code when the server responds with an HTTP error, so your pipeline step fails if the API call does not succeed.
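To see why -f matters: without it, curl prints the error response and still exits 0, so the CI runner would mark the step green even though the monitor was never created. A minimal sketch, using a stub function standing in for a curl -sf call that hit an HTTP error (22 is the exit code curl -f produces in that case):

```shell
# Stub standing in for `curl -sf ...` against an endpoint that returned HTTP 4xx/5xx;
# 22 is curl's documented exit code for -f with an HTTP error response.
fail_like_curl() { return 22; }

if fail_like_curl; then
  STEP_RESULT="passed"
else
  STEP_RESULT="failed"
fi
echo "step $STEP_RESULT"   # prints "step failed" -- the real curl -sf would fail the pipeline step the same way
```

Without -f, the stub would return 0 and a broken API call would go unnoticed until the coverage gap bites.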
Monitoring as code
Hard-coding URLs and service names in pipeline YAML works for a single service. For a team running dozens of services, you want monitor definitions stored alongside the code they watch. Create a monitors.json file in each service repo:
[
  {
    "name": "Checkout API - Health",
    "url": "https://checkout.example.com/health",
    "method": "GET",
    "expectedStatusCode": 200
  },
  {
    "name": "Checkout API - Create Order",
    "url": "https://checkout.example.com/api/orders",
    "method": "POST",
    "expectedStatusCode": 401
  }
]

Then add a sync script to your pipeline that reads the file and ensures each monitor exists:
#!/bin/bash
# sync-monitors.sh -- sync monitors from monitors.json

API_KEY="${CRONALERT_API_KEY:?Set CRONALERT_API_KEY}"
BASE_URL="https://cronalert.com/api/v1"

# Get all existing monitors
EXISTING=$(curl -s "$BASE_URL/monitors" \
  -H "Authorization: Bearer $API_KEY")

# Loop through each monitor definition
jq -c '.[]' monitors.json | while read -r MONITOR; do
  URL=$(echo "$MONITOR" | jq -r '.url')
  NAME=$(echo "$MONITOR" | jq -r '.name')

  # Check if this URL is already monitored
  MONITOR_ID=$(echo "$EXISTING" | jq -r ".data[] | select(.url == \"$URL\") | .id")

  if [ -n "$MONITOR_ID" ]; then
    echo "Updating: $NAME"
    curl -sf -X PUT "$BASE_URL/monitors/$MONITOR_ID" \
      -H "Authorization: Bearer $API_KEY" \
      -H "Content-Type: application/json" \
      -d "$MONITOR"
  else
    echo "Creating: $NAME"
    curl -sf -X POST "$BASE_URL/monitors" \
      -H "Authorization: Bearer $API_KEY" \
      -H "Content-Type: application/json" \
      -d "$MONITOR"
  fi
done

echo "Monitor sync complete"

This approach has several advantages. Monitor definitions are version-controlled, so changes go through code review. They live next to the service they monitor, so they are easy to find and update. And the sync script is idempotent -- running it twice produces the same result, which is exactly what you want in a CI/CD pipeline.
For teams managing monitors across multiple services, team monitoring keeps everyone looking at the same dashboard regardless of who created each monitor.
Tearing down monitors when services are decommissioned
Creating monitors automatically is only half the problem. When you shut down a service, its monitor should go too. Orphaned monitors clutter your dashboard, burn through your monitor quota, and generate false alerts for URLs that intentionally no longer exist.
Add a teardown step to your decommission process:
#!/bin/bash
# teardown-monitor.sh -- delete monitor for a decommissioned service
# Usage: ./teardown-monitor.sh "https://old-service.example.com/health"

SERVICE_URL="$1"
API_KEY="${CRONALERT_API_KEY:?Set CRONALERT_API_KEY}"
BASE_URL="https://cronalert.com/api/v1"

MONITOR_ID=$(curl -s "$BASE_URL/monitors" \
  -H "Authorization: Bearer $API_KEY" \
  | jq -r ".data[] | select(.url == \"$SERVICE_URL\") | .id")

if [ -n "$MONITOR_ID" ]; then
  curl -sf -X DELETE "$BASE_URL/monitors/$MONITOR_ID" \
    -H "Authorization: Bearer $API_KEY"
  echo "Deleted monitor $MONITOR_ID for $SERVICE_URL"
else
  echo "No monitor found for $SERVICE_URL"
fi
If you are using the monitoring-as-code approach with monitors.json, the sync script can also be extended to delete monitors that are no longer in the file. Compare the URLs in the file against the URLs in CronAlert, and delete any that exist in CronAlert but not in the file. Just be careful to scope this to monitors created by the pipeline -- you do not want to accidentally delete monitors that a teammate created manually in the dashboard.
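The comparison itself can be done entirely in jq. Here is a sketch with sample data standing in for monitors.json and the GET /api/v1/monitors response (ids and URLs are illustrative), with the destructive DELETE call left commented out:

```shell
# Sample data: URLs declared in monitors.json, and the live monitor list from the API
DESIRED='["https://checkout.example.com/health","https://checkout.example.com/api/orders"]'
LIVE='{"data":[{"id":"mon_1","url":"https://checkout.example.com/health"},{"id":"mon_2","url":"https://checkout.example.com/old"}]}'

# IDs of monitors whose URL is no longer declared in monitors.json
STALE_IDS=$(echo "$LIVE" | jq -r --argjson desired "$DESIRED" \
  '.data[] | select(.url as $u | $desired | index($u) | not) | .id')
echo "Would delete: $STALE_IDS"   # prints "Would delete: mon_2"

# In a real pipeline, loop and delete -- after filtering to pipeline-created monitors only:
# for ID in $STALE_IDS; do
#   curl -sf -X DELETE "$BASE_URL/monitors/$ID" -H "Authorization: Bearer $API_KEY"
# done
```

A name prefix or a dedicated API key per pipeline gives you a safe way to scope the deletion to monitors the pipeline itself created.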
Post-deploy verification checks
Beyond creating monitors, your CI/CD pipeline can use CronAlert to verify that a deploy actually worked. The pattern: deploy, wait for a check cycle, then pull the latest check result and verify it passed.
#!/bin/bash
# verify-deploy.sh -- confirm service is healthy after deploy

MONITOR_ID="MONITOR_ID"   # replace with the ID of this service's monitor
API_KEY="${CRONALERT_API_KEY:?Set CRONALERT_API_KEY}"
MAX_RETRIES=5
RETRY_DELAY=60

for i in $(seq 1 $MAX_RETRIES); do
  echo "Check attempt $i of $MAX_RETRIES..."
  sleep $RETRY_DELAY

  RESULT=$(curl -s "https://cronalert.com/api/v1/monitors/$MONITOR_ID/checks?limit=1" \
    -H "Authorization: Bearer $API_KEY")
  STATUS=$(echo "$RESULT" | jq -r '.data[0].statusCode')
  CHECKED_AT=$(echo "$RESULT" | jq -r '.data[0].checkedAt')

  echo "Latest check at $CHECKED_AT returned status $STATUS"

  if [ "$STATUS" = "200" ]; then
    echo "Post-deploy verification passed"
    exit 0
  fi
done

echo "Post-deploy verification FAILED after $MAX_RETRIES attempts"
exit 1

This turns your monitoring system into an automated quality gate. If the health check fails after deploy, the pipeline fails, and you can wire that into a rollback or alert. This is particularly valuable for API endpoint monitoring -- verifying not just that the server started, but that your endpoints are actually responding correctly.
For even more confidence, add keyword monitoring to your health check monitor. Instead of just verifying a 200 status code, confirm the response body contains "status":"ok" or whatever your health endpoint returns when all dependencies are connected.
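The same body check can also live in the verification script itself. A sketch against a sample response body (the JSON shape is whatever your health endpoint actually returns; this one is assumed for illustration):

```shell
# Sample health endpoint response -- shape assumed for illustration
BODY='{"status":"ok","dependencies":{"db":"connected"}}'

# Pass only if the keyword is present, mirroring what keyword monitoring checks
if echo "$BODY" | grep -q '"status":"ok"'; then
  KEYWORD_RESULT="found"
else
  KEYWORD_RESULT="missing"
fi
echo "keyword $KEYWORD_RESULT"   # prints "keyword found"
```

A 200 with the wrong body (say, a default error page served by a misconfigured gateway) fails this check even though the status-code check would pass.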
Practical tips
A few things to keep in mind when wiring monitoring into your pipelines:
- Never hardcode API keys. Use your CI provider's secret management -- GitHub Actions secrets, GitLab CI variables, CircleCI contexts, etc. Treat your CronAlert API key with the same care as database credentials.
- Use separate API keys per pipeline. If one gets compromised or rotated, you do not break every pipeline at once. CronAlert lets you create multiple keys at Settings > API Keys.
- Match monitors by URL, not name. Names can change across deploys. The URL is the stable identifier. Always look up existing monitors by URL before deciding whether to create or update.
- Add the monitoring step after the health check. Do not create or update a monitor until you have confirmed the service is actually running. Otherwise you might create a monitor for a service that failed to start.
- Log the API responses. If a monitor creation fails, you want to know why. Capture and log the response body from CronAlert so pipeline failures are easy to debug.
If you are setting up monitoring for the first time, our getting started guide covers the basics of what to monitor and how to configure alerts. For organizations, team monitoring lets multiple engineers share the same monitors and alert channels.
FAQ
Do I need a paid plan to create monitors from CI/CD?
Yes. The CronAlert REST API is read-only on the free plan. Write operations -- creating, updating, and deleting monitors -- require a Pro plan or higher. Your existing API key gets full write permissions the moment you upgrade; there is no need to regenerate it. See the API documentation for the full list of endpoints and plan requirements.
Will my pipeline create duplicate monitors on every deploy?
Not if you check first. The examples in this guide all follow the same pattern: list existing monitors with GET /api/v1/monitors, check whether one with the target URL already exists, and branch to either update or create. This makes the operation idempotent -- deploy ten times and you still have one monitor, with the latest configuration.
Can I store my CronAlert API key in GitHub Actions secrets?
Yes, and you should. Store the key as a repository secret (e.g., CRONALERT_API_KEY) and reference it as ${{ secrets.CRONALERT_API_KEY }} in your workflow file. Never commit API keys to your repository. The same principle applies to GitLab CI variables, CircleCI contexts, Bitbucket Pipelines secure variables, or any other CI provider's secret management.
How do I monitor services that are only accessible internally?
CronAlert checks URLs from external Cloudflare edge locations, so it can only reach publicly accessible endpoints. For internal services, expose a lightweight /health endpoint through your load balancer or API gateway. You can restrict access using authentication headers (which CronAlert supports as custom headers on monitors) or IP allowlists. This gives you external verification that the full request path -- DNS, load balancer, application -- is working.
What happens to check history when I delete a monitor?
Deleting a monitor permanently removes all associated check results and incident history. If you are decommissioning a service but want to keep historical data, export the check history via the REST API before deleting the monitor. Alternatively, you can pause the monitor instead of deleting it -- it will stop running checks but retain all historical data and remain visible in your dashboard.
Start automating
Monitoring that depends on someone remembering to click a button is monitoring that will have gaps. Adding a few lines to your CI/CD pipeline closes those gaps permanently. Every deploy creates or updates the right monitors. Every decommission cleans them up. Every post-deploy step verifies the service is actually healthy. Your monitoring stays in sync with your infrastructure without anyone thinking about it.
Create a free CronAlert account to get started. The free plan includes 25 monitors with read-only API access -- enough to prototype your integration. When you are ready to automate monitor creation from your pipelines, Pro starts at $4/month with full read-write API access.