Alert Configurations
Connect scrape jobs, gather jobs, and rules with alert configurations to start detecting version lag.
An alert configuration is the bridge between your data sources and your monitoring rules. It connects three things: a scrape job (what version is deployed), a gather job (what version is available), and a rule (how to measure the gap).
How the pieces fit together
```
Scrape Job ───────┐
(deployed version)│
                  ├──▶ Alert Config ──▶ Alert
Gather Job ───────┤
(latest release)  │
                  │
Rule ─────────────┘
(threshold)
```
- The scrape job provides the currently deployed version by reading it from a file in your repository.
- The gather job provides the latest available version by checking the upstream source (GitHub releases, Helm repos).
- The rule defines the thresholds that determine whether the gap between those two versions is acceptable.
When all three produce data, Planekeeper evaluates the rule and creates an alert if the gap between the deployed and latest versions exceeds a threshold.
One alert per config
Each alert configuration produces at most one active alert at any time. When a new scrape finds an updated deployed version or a new gather fetches a newer upstream release, the existing alert is updated in place rather than creating a duplicate.
This means:
- You will never see multiple active alerts from the same config.
- The alert always reflects the current state of your deployment relative to upstream.
- When the deployed version is updated and no longer violates the rule, the alert resolves automatically.
Create an alert config
Navigate to Alert Configs in the sidebar.
Click Create Alert Config.
Fill in the form:
| Field | Description |
|---|---|
| Name | A descriptive label, e.g., “ArgoCD Helm Chart - Days Behind” |
| Scrape Job | Select the scrape job that tracks the deployed version |
| Gather Job | Select the gather job that tracks upstream releases |
| Rule | Select the monitoring rule to apply |

Click Save.
Evaluation triggers
Alert configs are evaluated automatically. You do not need to trigger evaluation manually. Evaluation runs when:
- The linked scrape job completes and discovers a version.
- The linked gather job completes and fetches new releases.
- The alert config itself is created, updated, or toggled active.
After evaluation, one of three things happens:
| Outcome | What happens |
|---|---|
| New violation detected | A new alert is created with the appropriate severity |
| Existing alert updated | The alert’s version data and severity are refreshed |
| No longer violating | The existing alert is automatically resolved |
Common patterns
Monitor one app with multiple rules
Create separate alert configs that link the same scrape job and gather job to different rules. For example:
- Nginx - Days Behind: scrape(nginx) + gather(nginx) + rule(90/180/365 days)
- Nginx - Majors Behind: scrape(nginx) + gather(nginx) + rule(1/2/3 majors)
Each config produces its own independent alert, giving you visibility into both time-based and version-based staleness.
Monitor multiple environments
If you have separate scrape jobs for staging and production (pointing at different repositories or branches), create separate alert configs for each:
- Redis Production - Days Behind: scrape(prod-redis) + gather(redis) + rule(days)
- Redis Staging - Days Behind: scrape(staging-redis) + gather(redis) + rule(days)
Both configs share the same gather job and rule but track different deployment environments.
Share gather jobs across configs
A single gather job fetches upstream releases once. Multiple alert configs can reference the same gather job with different scrape jobs and rules. This avoids duplicate API calls to upstream sources.
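The sharing behaves like a cache keyed on the gather job: however many configs reference it, the upstream source is fetched once. An illustrative sketch (the counter and version strings are made up):

```python
fetch_count: dict[str, int] = {}

def releases_for(gather_job: str, cache: dict[str, list[str]]) -> list[str]:
    """Return upstream releases for a gather job, fetching at most once."""
    if gather_job not in cache:
        fetch_count[gather_job] = fetch_count.get(gather_job, 0) + 1
        # Stand-in for the real upstream call (GitHub releases, Helm repo, ...)
        cache[gather_job] = ["1.2.0", "1.1.0", "1.0.0"]
    return cache[gather_job]

# Three configs, two of which share the nginx gather job.
cache: dict[str, list[str]] = {}
for scrape, gather, rule in [
    ("nginx", "nginx", "days"),
    ("nginx", "nginx", "majors"),
    ("redis", "redis", "days"),
]:
    releases_for(gather, cache)

print(fetch_count)  # nginx fetched once despite two configs referencing it
```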
Toggle an alert config
Each alert config has an active/inactive toggle. Click the toggle badge on the list page to switch the state.
- Active configs are evaluated whenever their linked jobs complete.
- Inactive configs are skipped during evaluation. Existing alerts from inactive configs remain until the config is reactivated or the alert is manually resolved.
Uniqueness constraint
Each combination of scrape job, gather job, and rule is unique within an organization. You cannot create two alert configs with the same three-way link. If you need different monitoring for the same combination, create a new rule with different thresholds.
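Conceptually, the constraint is a uniqueness key over the three-way link, something like this hypothetical sketch:

```python
def try_add_config(existing: set[tuple[str, str, str]],
                   scrape_job: str, gather_job: str, rule: str) -> bool:
    """Reject a config whose (scrape, gather, rule) link already exists."""
    key = (scrape_job, gather_job, rule)
    if key in existing:
        return False  # duplicate three-way link
    existing.add(key)
    return True

configs: set[tuple[str, str, str]] = set()
try_add_config(configs, "nginx", "nginx", "days-90-180-365")  # True
try_add_config(configs, "nginx", "nginx", "days-90-180-365")  # False: duplicate
try_add_config(configs, "nginx", "nginx", "majors-1-2-3")     # True: different rule
```

Swapping in a different rule (the third call) creates a distinct key, which is why a new rule is the way to get different monitoring for the same job pair.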