Planekeeper is currently in alpha development. Features and APIs may change. Feedback is welcome! Request early access to get started.

Frequently asked questions

Answers to frequently asked questions about scheduling, agents, private repos, rate limits, alerts, notifications, and more.

How often are jobs scheduled?

You control the schedule with a cron expression on each job. Common intervals range from every 15 minutes to once per week. Jobs without a cron schedule run once and stay in “completed” status until you manually trigger them with Run Now.

The scheduler checks for jobs ready to run every 30 seconds. There may be a brief delay between the scheduled time and when an agent picks up the job.
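
The polling behavior described above can be sketched as follows. This is an illustration only, not Planekeeper's actual scheduler code; the `status` and `next_run_at` fields are assumed names based on the description.

```python
from datetime import datetime, timedelta, timezone

def due_jobs(jobs, now=None):
    """Return pending jobs whose scheduled time has arrived (simplified sketch)."""
    now = now or datetime.now(timezone.utc)
    return [j for j in jobs
            if j["status"] == "pending" and j["next_run_at"] <= now]

# A scheduler polling like this every 30 seconds means a job can start
# up to ~30 seconds after its cron time.
now = datetime(2026, 1, 15, 9, 30, tzinfo=timezone.utc)
jobs = [
    {"id": 1, "status": "pending", "next_run_at": now - timedelta(seconds=10)},
    {"id": 2, "status": "pending", "next_run_at": now + timedelta(minutes=5)},
]
print([j["id"] for j in due_jobs(jobs, now)])  # [1]
```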


What happens if an agent goes offline?

Jobs claimed by the offline agent are automatically recovered:

  • The orphan cleanup service runs every 2 minutes and resets any job claimed by an agent that is no longer sending heartbeats
  • The stale job detector resets any job stuck in “in_progress” for more than 1 hour

Once reset, the job returns to “pending” status and is picked up by the next available agent. No data is lost.
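
The two recovery paths can be sketched together like this. The field names, the 2-minute heartbeat cutoff, and the 1-hour stale limit are assumptions taken from the description above, not Planekeeper's real implementation.

```python
from datetime import datetime, timedelta, timezone

HEARTBEAT_TIMEOUT = timedelta(minutes=2)  # assumed cutoff for "no longer sending heartbeats"
STALE_LIMIT = timedelta(hours=1)          # stale job detector threshold

def recover(jobs, agents, now):
    """Reset orphaned or stale in-progress jobs back to 'pending' (simplified sketch)."""
    for job in jobs:
        if job["status"] != "in_progress":
            continue
        agent = agents.get(job["agent_id"])
        orphaned = agent is None or now - agent["last_heartbeat"] > HEARTBEAT_TIMEOUT
        stale = now - job["started_at"] > STALE_LIMIT
        if orphaned or stale:
            job["status"] = "pending"
            job["agent_id"] = None
    return jobs

now = datetime(2026, 1, 15, 9, 30, tzinfo=timezone.utc)
agents = {"a1": {"last_heartbeat": now - timedelta(minutes=5)}}  # agent went silent
jobs = [{"id": 7, "status": "in_progress", "agent_id": "a1",
         "started_at": now - timedelta(minutes=10)}]
recover(jobs, agents, now)
print(jobs[0]["status"])  # pending
```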


Can I monitor private repositories?

Yes. Credentials are configured in the agent’s config.yaml file. See Access private repositories for step-by-step instructions.


How do I increase GitHub API rate limits?

Set the GITHUB_TOKEN environment variable in your Planekeeper deployment. Without a token, GitHub allows 60 API requests per hour. With a token, the limit increases to 5,000 per hour.

The token only needs public repository access – no special scopes are required for reading public releases. Generate a fine-grained personal access token with no additional permissions.
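
As a sketch of how a client might attach the token, the snippet below builds request headers conditionally. The `Bearer` scheme is GitHub's documented authentication header; the helper function itself is hypothetical.

```python
import os

def github_headers():
    """Build GitHub API request headers, attaching GITHUB_TOKEN when present.

    Unauthenticated requests are limited to 60/hour; with a token the
    limit rises to 5,000/hour.
    """
    headers = {"Accept": "application/vnd.github+json"}
    token = os.environ.get("GITHUB_TOKEN")
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers
```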


What is the difference between gather and scrape jobs?

Gather jobs fetch the list of available releases from an upstream source (GitHub releases or Helm chart repositories). They answer the question: “What versions are available?”

Scrape jobs clone a Git repository you control and extract a version string from a file. They answer the question: “What version do we have deployed?”

An alert config connects these two pieces with a rule: “How far behind are we?”
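
To make the relationship concrete, here is a toy version of a "how far behind" rule. The function name and the naive version parsing are illustrative assumptions, not Planekeeper's rule engine.

```python
def majors_behind(deployed, available):
    """Major versions separating the deployed version (scrape job result)
    from the newest available version (gather job result). Naive semver
    parsing for illustration only."""
    major = lambda v: int(v.split(".")[0])
    newest = max(available, key=major)
    return major(newest) - major(deployed)

print(majors_behind("2.4.0", ["2.5.1", "3.0.0", "4.1.0"]))  # 2
```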


How do alerts get resolved automatically?

Alerts resolve when the rule evaluation determines the deployed version no longer violates the thresholds. This happens automatically:

  1. The scrape job discovers an updated version
  2. The rule engine re-evaluates the alert config
  3. If the new version is within the rule’s thresholds, the alert is resolved (soft-deleted with a resolved_at timestamp)
  4. An alert.resolved notification is sent to configured channels

You do not need to manually close or dismiss alerts. Updating your deployed software and letting the scrape job detect the change is sufficient.
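
Steps 1 through 4 can be sketched as a single re-evaluation pass. Everything here (the field names, the event string handling) is a simplified assumption based on the list above.

```python
from datetime import datetime, timezone

def reevaluate(alert, behind, threshold):
    """Resolve an open alert once the deployed version is back within the
    rule's threshold (simplified sketch of the auto-resolution steps)."""
    events = []
    if alert.get("resolved_at") is None and behind <= threshold:
        alert["resolved_at"] = datetime.now(timezone.utc).isoformat()  # soft delete
        events.append("alert.resolved")  # would be sent to configured channels
    return alert, events

# After a scrape run detects the upgrade, the alert resolves on its own:
alert, events = reevaluate({"resolved_at": None}, behind=0, threshold=1)
print(events)  # ['alert.resolved']
```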


Can I use multiple notification channels?

Yes. Create multiple notification channels (for example, one for Slack and one for Discord) and set up notification rules that route different events or severity levels to different channels.

Common patterns:

  • Critical: PagerDuty or Slack #ops-critical
  • High: Slack #ops-alerts
  • Moderate: Discord #monitoring

Each notification rule can target a specific channel. You can also set a default channel in Settings that is used when a rule does not specify one.
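
The routing pattern amounts to a lookup with a fallback. The mapping below mirrors the example table above; the channel identifiers and function are hypothetical.

```python
# Hypothetical severity-to-channel routing with a default fallback.
ROUTES = {
    "critical": "slack-ops-critical",
    "high": "slack-ops-alerts",
    "moderate": "discord-monitoring",
}
DEFAULT_CHANNEL = "slack-general"  # the default channel set in Settings

def route(severity):
    """Pick the channel for an event, falling back to the default
    when no rule targets this severity."""
    return ROUTES.get(severity, DEFAULT_CHANNEL)

print(route("critical"))  # slack-ops-critical
print(route("low"))       # slack-general
```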


What happens to failed notifications?

Failed notification deliveries are retried automatically with exponential backoff:

  • Short-term (attempts 1-4): 10s, 30s, 1m, 5m
  • Mid-term (attempts 5-8): 15m, 30m, 1h, 2h
  • Long-term (attempts 9-12): 4h intervals

After 12 attempts (approximately 24 hours of retrying), the delivery moves to a dead letter queue.

Coming soon: the dead letter queue UI is planned for a future release. Deliveries that exhaust all retry attempts are already tracked internally as dead letters, but viewing and retrying them from the UI is not yet available.

Non-retryable errors (HTTP 4xx responses except 429) skip retries and go directly to dead letter.
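
The retry schedule and the non-retryable check can be expressed directly from the tables above. The function names are illustrative, not Planekeeper's internals.

```python
# Delays in seconds, taken from the retry schedule above.
SHORT = [10, 30, 60, 300]        # attempts 1-4: 10s, 30s, 1m, 5m
MID = [900, 1800, 3600, 7200]    # attempts 5-8: 15m, 30m, 1h, 2h
LONG = [14400] * 4               # attempts 9-12: 4h intervals

def retry_delay(attempt):
    """Seconds to wait before the given attempt (1-based).
    None means the delivery moves to the dead letter queue."""
    schedule = SHORT + MID + LONG
    if attempt > len(schedule):
        return None
    return schedule[attempt - 1]

def is_retryable(status_code):
    """4xx responses skip retries, except 429 (rate limited)."""
    return not (400 <= status_code < 500 and status_code != 429)
```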


How do I rotate API keys?

  1. Create a new API key from the API Keys page
  2. Update the agent’s AGENT_API_KEY environment variable with the new key
  3. Restart the agent
  4. Verify the agent connects successfully (check the service health page)
  5. Deactivate the old key

Create the new key before deactivating the old one to avoid downtime.


What does “stable_only” mean?

When stable_only is enabled on a rule, prerelease versions are excluded when determining the “latest” upstream version. A version is considered a prerelease if it contains any of these strings (case-insensitive):

alpha, beta, rc, dev, snapshot, canary, nightly, pre

For example, if the newest releases are 5.0.0-rc1, 4.2.0, and 4.1.0:

  • With stable_only off: latest is 5.0.0-rc1
  • With stable_only on: latest is 4.2.0
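
The detection logic is a case-insensitive substring check, which can be sketched as follows. The marker list comes from the text above; the function names are illustrative.

```python
PRERELEASE_MARKERS = ("alpha", "beta", "rc", "dev",
                      "snapshot", "canary", "nightly", "pre")

def is_prerelease(version):
    """True if the version contains any prerelease marker (case-insensitive)."""
    v = version.lower()
    return any(m in v for m in PRERELEASE_MARKERS)

def latest(versions, stable_only):
    """Pick the newest version, optionally skipping prereleases.
    Assumes `versions` is sorted newest-first, as in the example above."""
    for v in versions:
        if not stable_only or not is_prerelease(v):
            return v
    return None

releases = ["5.0.0-rc1", "4.2.0", "4.1.0"]
print(latest(releases, stable_only=False))  # 5.0.0-rc1
print(latest(releases, stable_only=True))   # 4.2.0
```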

Enable this when your deployments follow stable release channels and you do not want to be alerted about prerelease versions.


Can I monitor the same artifact with different rules?

Yes. Create multiple alert configs that use the same scrape job and gather job but different rules. For example, you could monitor a single deployment with both a days_behind rule and a majors_behind rule simultaneously. Each alert config generates its own independent alert.


What timezone are timestamps displayed in?

All data is stored in UTC in the database. The UI automatically converts timestamps to your browser’s local timezone using JavaScript.

  • Date + time values (e.g., job creation time, alert timestamps) are shown in your browser’s local timezone with a timezone abbreviation (e.g., “1/15/26, 9:30 AM EST”)
  • Date-only values (e.g., release dates) always display as the UTC calendar date to avoid confusing date shifts across timezones
  • If JavaScript is disabled, timestamps display in UTC with a “UTC” label
  • API responses always return timestamps in UTC using RFC 3339 format (e.g., 2026-01-15T14:30:00Z)

The display format follows your browser’s locale settings, so dates appear in your familiar format (e.g., 1/15/26 for US, 15/01/26 for UK).
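
If you consume the API directly, converting the RFC 3339 UTC timestamps yourself looks like this. The helper function is a sketch; only the timestamp format is from the API description above.

```python
from datetime import datetime, timezone

def to_local(rfc3339, tz=None):
    """Parse an RFC 3339 UTC timestamp and convert it for display.
    The trailing "Z" is replaced so fromisoformat() also works on
    Python versions before 3.11."""
    utc = datetime.fromisoformat(rfc3339.replace("Z", "+00:00"))
    return utc.astimezone(tz)  # tz=None uses the system's local timezone

ts = to_local("2026-01-15T14:30:00Z", tz=timezone.utc)
print(ts.isoformat())  # 2026-01-15T14:30:00+00:00
```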


How do I see resolved alerts?

Resolved alerts are preserved in the system for auditing purposes. Navigate to Alerts and use the filter to show resolved alerts. Resolved alerts display the timestamp when they were resolved and the version that caused the resolution.