Planekeeper is currently in alpha development. Features and APIs may change. Feedback is welcome! Request early access to get started.

Alert Configurations

Connect scrape jobs, gather jobs, and rules with alert configurations to start detecting version lag.

An alert configuration is the bridge between your data sources and your monitoring rules. It connects three things: a scrape job (what version is deployed), a gather job (what version is available), and a rule (how to measure the gap).

How the pieces fit together

Scrape Job ───────┐
(deployed version)│
                  ├──▶ Alert Config ──▶ Alert
Gather Job ───────┤
(latest release)  │
                  │
Rule ─────────────┘
(threshold)
  • The scrape job provides the currently deployed version by reading it from a file in your repository.
  • The gather job provides the latest available version by checking the upstream source (GitHub releases, Helm repos).
  • The rule defines the thresholds that determine whether the gap between those two versions is acceptable.

When all three produce data, Planekeeper evaluates the rule and creates an alert if the gap between the deployed and latest versions crosses a threshold.

One alert per config

Each alert configuration produces at most one active alert at any time. When a new scrape finds an updated deployed version or a new gather fetches a newer upstream release, the existing alert is updated in place rather than duplicated.

This means:

  • You will never see multiple active alerts from the same config.
  • The alert always reflects the current state of your deployment relative to upstream.
  • When the deployed version is updated and no longer violates the rule, the alert resolves automatically.
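The update-in-place behavior is essentially an upsert keyed by the config. The sketch below is hypothetical, with invented names (`apply_evaluation`, `active_alerts`); it only illustrates the lifecycle described above: create once, update in place, auto-resolve when the violation clears.

```python
# Hypothetical sketch of the one-alert-per-config rule. Each config id maps
# to at most one active alert; all names here are illustrative assumptions.
active_alerts: dict = {}  # config_id -> the single active alert for that config

def apply_evaluation(config_id: str, severity) -> str:
    """Upsert or resolve the single alert owned by this config."""
    if severity is None:
        if config_id in active_alerts:
            del active_alerts[config_id]  # deployed version caught up: auto-resolve
            return "resolved"
        return "no-op"
    if config_id in active_alerts:
        active_alerts[config_id]["severity"] = severity  # update in place, no duplicate
        return "updated"
    active_alerts[config_id] = {"severity": severity}  # first violation: create
    return "created"
```

Because the alert is keyed by the config, repeated evaluations can never accumulate duplicates for the same config.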

Create an alert config

  1. Navigate to Alert Configs in the sidebar.

  2. Click Create Alert Config.

  3. Fill in the form:

    Field        Description
    Name         A descriptive label, e.g., “ArgoCD Helm Chart - Days Behind”
    Scrape Job   Select the scrape job that tracks the deployed version
    Gather Job   Select the gather job that tracks upstream releases
    Rule         Select the monitoring rule to apply
  4. Click Save.

Tip:
Name your alert configs descriptively. Include the application name and the type of check, e.g., “Kubernetes Dashboard - Majors Behind” or “Cert Manager Helm - Days Behind”. This makes the alerts list easier to scan.

Evaluation triggers

Alert configs are evaluated automatically. You do not need to trigger evaluation manually. Evaluation runs when:

  • The linked scrape job completes and discovers a version.
  • The linked gather job completes and fetches new releases.
  • The alert config itself is created, updated, or toggled active.

After evaluation, one of three things happens:

Outcome                   What happens
New violation detected    A new alert is created with the appropriate severity
Existing alert updated    The alert’s version data and severity are refreshed
No longer violating       The existing alert is automatically resolved

Common patterns

Monitor one app with multiple rules

Create separate alert configs that link the same scrape job and gather job to different rules. For example:

  • Nginx - Days Behind: scrape(nginx) + gather(nginx) + rule(90/180/365 days)
  • Nginx - Majors Behind: scrape(nginx) + gather(nginx) + rule(1/2/3 majors)

Each config produces its own independent alert, giving you visibility into both time-based and version-based staleness.
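As plain data, the two example configs above differ only in their rule. The field names in this sketch are assumptions for illustration, not Planekeeper's actual schema.

```python
# Illustrative data only: two configs reusing the same scrape and gather jobs
# with different rules. Field names are assumed, not Planekeeper's schema.
alert_configs = [
    {
        "name": "Nginx - Days Behind",
        "scrape_job": "nginx",
        "gather_job": "nginx",
        "rule": {"type": "days_behind", "thresholds": [90, 180, 365]},
    },
    {
        "name": "Nginx - Majors Behind",
        "scrape_job": "nginx",
        "gather_job": "nginx",
        "rule": {"type": "majors_behind", "thresholds": [1, 2, 3]},
    },
]
```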

Monitor multiple environments

If you have separate scrape jobs for staging and production (pointing at different repositories or branches), create separate alert configs for each:

  • Redis Production - Days Behind: scrape(prod-redis) + gather(redis) + rule(days)
  • Redis Staging - Days Behind: scrape(staging-redis) + gather(redis) + rule(days)

Both configs share the same gather job and rule but track different deployment environments.

Share gather jobs across configs

A single gather job fetches upstream releases once. Multiple alert configs can reference the same gather job with different scrape jobs and rules. This avoids duplicate API calls to upstream sources.

Toggle an alert config

Each alert config has an active/inactive toggle. Click the toggle badge on the list page to switch the state.

  • Active configs are evaluated whenever their linked jobs complete.
  • Inactive configs are skipped during evaluation. Existing alerts from inactive configs remain until the config is reactivated or the alert is manually resolved.
Warning:
Toggling an alert config to inactive does not resolve its existing alert. The alert stays in its current state. Toggle the config back to active and let it re-evaluate to clear the alert, or acknowledge it manually.
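The effect of the toggle can be sketched as a simple gate on evaluation. This is a hypothetical illustration (the function and field names are invented): inactive configs are skipped entirely, which is why their existing alerts keep their current state.

```python
# Hypothetical sketch of how the active toggle might gate evaluation.
# Function and field names are assumptions for illustration.
def evaluate_active_configs(configs: list, evaluate) -> dict:
    """Evaluate only active configs; inactive ones are skipped entirely."""
    results = {}
    for config in configs:
        if not config["active"]:
            continue  # skipped: any existing alert keeps its current state
        results[config["name"]] = evaluate(config)
    return results

configs = [
    {"name": "Nginx - Days Behind", "active": True},
    {"name": "Redis - Days Behind", "active": False},
]
results = evaluate_active_configs(configs, lambda c: "evaluated")
# only the active config appears in results
```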

Uniqueness constraint

Each combination of scrape job, gather job, and rule is unique within an organization. You cannot create two alert configs with the same three-way link. If you need different monitoring for the same combination, create a new rule with different thresholds.
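The constraint amounts to a uniqueness check on the three-way key. The sketch below is hypothetical (Planekeeper enforces this server-side; the function name and key shape are assumptions), but it shows why two configs with an identical scrape/gather/rule link are rejected while changing any one of the three is allowed.

```python
# Hypothetical sketch of the three-way uniqueness constraint.
# Names and key shape are illustrative assumptions.
def assert_unique_link(existing_configs: list, new_config: dict) -> None:
    """Reject a new config that repeats an existing scrape/gather/rule link."""
    key = (new_config["scrape_job"], new_config["gather_job"], new_config["rule"])
    taken = {(c["scrape_job"], c["gather_job"], c["rule"]) for c in existing_configs}
    if key in taken:
        raise ValueError(f"an alert config already links {key}")

existing = [{"scrape_job": "nginx", "gather_job": "nginx", "rule": "days-90"}]
# A different rule on the same scrape/gather pair is a new, valid link:
assert_unique_link(existing, {"scrape_job": "nginx", "gather_job": "nginx",
                              "rule": "majors-1"})
```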