Alerts
Understand the alert lifecycle, manage severity levels, and acknowledge or resolve version lag alerts.
Alerts are the output of Planekeeper’s monitoring pipeline. When a deployed version exceeds the thresholds defined in a monitoring rule, an alert is created. Alerts update automatically as conditions change and resolve on their own when the underlying issue is fixed.
Alert lifecycle
Every alert follows a predictable progression through these states:
Created ──▶ Escalated (optional) ──▶ Acknowledged ──▶ Resolved
Created
An alert is created when a rule evaluation detects that the deployed version exceeds at least the moderate threshold. The alert captures the deployed version, the latest upstream version, the severity level, and how far behind the deployment is.
Escalated
If conditions worsen (for example, a new upstream release ships and the gap increases), the severity can escalate from moderate to high, or from high to critical. Escalation updates the existing alert in place; any prior acknowledgment is reset when the deployed version changes (see Alert escalation below).
Acknowledged
A team member marks the alert as acknowledged to signal that the gap is known and being addressed. Acknowledging an alert does not resolve it – the alert stays active and continues to update if conditions change.
Resolved
An alert resolves automatically when the deployed version no longer violates the rule’s thresholds. This typically happens when the scrape job detects that the deployment was updated to a recent enough version. Resolved alerts are preserved in history for auditing.
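The lifecycle above can be sketched as a small state object. This is a minimal illustration only; the field and method names are hypothetical, not Planekeeper's actual API:

```python
# Minimal sketch of the alert lifecycle states described above.
# Field and method names are hypothetical, not Planekeeper's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    severity: str                       # moderate, high, or critical
    deployed_version: str
    latest_version: str
    acknowledged_by: Optional[str] = None
    resolved: bool = False

    def escalate(self, new_severity: str) -> None:
        """Escalation updates the existing alert in place."""
        self.severity = new_severity

    def acknowledge(self, user: str) -> None:
        """Acknowledging marks the alert as reviewed; it stays active."""
        self.acknowledged_by = user

    def resolve(self) -> None:
        """Resolution keeps the alert around for history (soft delete)."""
        self.resolved = True

alert = Alert("moderate", "1.2.0", "2.0.0")
alert.acknowledge("sam")     # still active, just marked as reviewed
alert.escalate("high")       # same alert, new severity
alert.resolve()              # preserved in history, not deleted
```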
Severity levels
| Severity | Meaning |
|---|---|
| Moderate | The deployed version is falling behind but within tolerable limits. Plan an update. |
| High | The version gap is significant. Prioritize an update soon. |
| Critical | The deployment is dangerously behind. Immediate attention required. |
Severity is determined by the thresholds configured on the monitoring rule. The highest matching threshold wins.
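The "highest matching threshold wins" rule can be sketched as follows. The threshold values and field names here are illustrative, not Planekeeper's actual rule schema:

```python
# Sketch of severity selection: the highest severity whose threshold the
# measured lag meets or exceeds wins. Threshold values are illustrative.
SEVERITY_ORDER = ["moderate", "high", "critical"]

def determine_severity(lag, thresholds):
    """Return the highest matching severity, or None if no threshold is met."""
    matched = None
    for level in SEVERITY_ORDER:        # checked lowest to highest
        if level in thresholds and lag >= thresholds[level]:
            matched = level             # higher matches overwrite lower ones
    return matched

# Example: a days-behind rule with three thresholds.
thresholds = {"moderate": 30, "high": 90, "critical": 180}
print(determine_severity(45, thresholds))   # moderate
print(determine_severity(200, thresholds))  # critical
print(determine_severity(10, thresholds))   # None -> no alert created
```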
View alerts
Navigate to Alerts in the sidebar to see all active alerts. The list shows:
- Alert name (from the alert config)
- Severity badge (moderate, high, or critical)
- Deployed version and latest version
- Behind-by value (days, major versions, or minor versions)
- Acknowledgment status
Use the severity filter to focus on a specific level. Toggle Unacknowledged only to see alerts that have not been reviewed yet.
Alert details
Click an alert to view its full details, including:
- The linked scrape job, gather job, and rule
- Version comparison information
- Acknowledgment history
Acknowledge alerts
From the UI
- Navigate to Alerts.
- Click the alert you want to acknowledge.
- Click Acknowledge.
The alert remains active but is marked as reviewed. Other team members can see that someone has looked at it.
From a notification link
Alert notifications can include an acknowledge URL. Clicking the link acknowledges the alert directly without navigating the UI.
By default, acknowledge links require the user to be logged in so Planekeeper can record who acknowledged the alert. If the user is not logged in, they are redirected to the login page and sent back to complete the acknowledgment after signing in.
Channels can be configured to allow anonymous acknowledgments by unchecking Require login to acknowledge alerts on the channel. When anonymous mode is enabled, anyone with the link can acknowledge the alert without logging in, but no user identity is recorded.
See Acknowledgment authentication for details on configuring this per channel.
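The decision flow for acknowledge links can be sketched like this. The function and field names are hypothetical, not Planekeeper's actual code:

```python
# Sketch of the acknowledge-link flow described above.
# Function and field names are hypothetical, not Planekeeper's code.
def handle_ack_link(alert_id, user, require_login):
    """Return the action an acknowledge link should take."""
    if require_login and user is None:
        # Not logged in: redirect to login, then return to finish the ack.
        return {"action": "redirect_to_login",
                "next": "/alerts/%s/ack" % alert_id}
    # Logged-in acks record who acknowledged; anonymous mode records nothing.
    return {"action": "acknowledge",
            "alert_id": alert_id,
            "acknowledged_by": user}

print(handle_ack_link("a1", None, require_login=True))   # redirect first
print(handle_ack_link("a1", "sam", require_login=True))  # ack as "sam"
print(handle_ack_link("a1", None, require_login=False))  # anonymous ack
```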
Bulk acknowledgment
- Navigate to Alerts.
- Select multiple alerts using the checkboxes.
- Click Acknowledge Selected.
Unacknowledging
If an alert was acknowledged prematurely, click Unacknowledge on the alert detail page to reset it. This signals to the team that the alert still needs attention.
Manually resolve an alert
If you want to close an alert without waiting for auto-resolution (for example, the alert config is no longer relevant, or the version was updated outside of Planekeeper’s visibility):
- Navigate to Alerts and click the alert.
- Click the Resolve button.
- Confirm the action in the dialog.
The alert is soft-deleted with a resolution timestamp and moves to the resolved history. A resolved notification is sent to configured channels.
Bulk resolve
Select multiple alerts using the checkboxes on the list page, then click Resolve Selected to resolve them in a single operation. Use the checkbox in the table header to select all visible items. Bulk resolve follows the same rules as individual resolution – each alert is soft-deleted with a resolution timestamp and a resolved notification is sent for each one.
Auto-resolution
Alerts resolve themselves when the underlying version lag drops below the rule’s lowest threshold. This happens automatically during the next rule evaluation after a scrape job detects an updated deployment.
What triggers resolution:
- The deployed version is updated to a version that no longer violates any threshold.
- The alert config is re-evaluated and finds no violation.
What happens on resolution:
- The alert is soft-deleted with a resolution timestamp.
- A notification is sent to configured channels (if notification rules are set up).
- The alert moves to the resolved history, accessible from the alerts page.
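The re-evaluation step can be sketched as a single check against the rule's lowest threshold. The names and threshold values here are illustrative:

```python
# Sketch of the re-evaluation step: an active alert auto-resolves once the
# measured lag no longer reaches the rule's lowest threshold.
# Names and threshold values are illustrative.
def reevaluate(lag, thresholds, alert_active):
    """Return what the evaluator should do after a fresh scrape."""
    violated = lag >= min(thresholds.values())
    if alert_active and not violated:
        return "resolve"    # soft-delete + resolved notification
    if alert_active:
        return "keep"       # still violating; may escalate instead
    return "create" if violated else "none"

th = {"moderate": 30, "high": 90}
print(reevaluate(10, th, alert_active=True))   # resolve
print(reevaluate(50, th, alert_active=True))   # keep
```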
Alert escalation
When an existing alert’s severity increases – for example, from moderate to high because more time has passed or a new major version was released upstream – the alert is updated in place with the new severity.
Escalation:
- Updates the severity badge and behind-by value.
- Resets the acknowledgment if the deployed version changed (acknowledgment is always cleared when the deployed version changes, whether or not severity escalated).
- Triggers an alert.escalated notification to configured channels.
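The escalation rules above can be sketched as follows. The data shape and names are hypothetical; note in particular that the acknowledgment is cleared on a deployed-version change, not by the severity increase itself:

```python
# Sketch of the in-place update rules above. Names are hypothetical.
def severity_rank(s):
    return ["moderate", "high", "critical"].index(s)

def apply_update(alert, new_severity, new_behind_by, new_deployed_version):
    """Update an active alert in place; return notification events."""
    events = []
    if new_deployed_version != alert["deployed_version"]:
        alert["acknowledged_by"] = None     # always cleared on version change
        alert["deployed_version"] = new_deployed_version
    if severity_rank(new_severity) > severity_rank(alert["severity"]):
        events.append("alert.escalated")    # notify configured channels
    alert["severity"] = new_severity
    alert["behind_by"] = new_behind_by
    return events

alert = {"severity": "moderate", "behind_by": 2,
         "deployed_version": "1.2.0", "acknowledged_by": "sam"}
events = apply_update(alert, "high", 3, "1.2.0")
print(events)                     # ['alert.escalated']
print(alert["acknowledged_by"])   # 'sam' -- version unchanged, ack kept
```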
Resolved alert history
Resolved alerts are not deleted. They are preserved with a resolution timestamp and can be viewed separately.
Navigate to Alerts and switch to the Resolved tab to see past alerts. This history is useful for:
- Auditing how long alerts were active before resolution.
- Understanding which deployments are frequently behind.
- Reviewing patterns in how versions fall behind across your organization.
Special cases
Version not found
If the deployed version does not appear in the upstream release history (for a days-behind rule), the alert is automatically set to critical severity. This typically means the version is very old and has been pruned from the release list, or there is a mismatch in version formatting.
Version parse failure
If the deployed or latest version cannot be parsed as a semantic version (for majors-behind or minors-behind rules), the alert is set to critical severity. Check that the version format matches what the upstream source publishes.
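The critical fallback for both special cases can be sketched like this. The regex, thresholds, and names are illustrative, not Planekeeper's actual parsing logic:

```python
# Sketch of the parse-failure fallback: a version string that cannot be
# parsed as semver forces critical severity. The regex and the
# majors-behind thresholds below are illustrative only.
import re

SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)")

def severity_for_versions(deployed, latest):
    """Return severity for a majors-behind comparison, or None."""
    d, l = SEMVER.match(deployed), SEMVER.match(latest)
    if d is None or l is None:
        return "critical"                  # unparseable -> critical
    majors_behind = int(l.group(1)) - int(d.group(1))
    if majors_behind >= 2:                 # illustrative thresholds
        return "critical"
    if majors_behind == 1:
        return "high"
    return None

print(severity_for_versions("not-a-version", "3.0.0"))  # critical
print(severity_for_versions("1.0.0", "2.1.0"))          # high
```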