Planekeeper is currently in alpha development. Features and APIs may change. Feedback is welcome! Request early access to get started.

Quickstart

End-to-end walkthrough to monitor a Helm chart version with Planekeeper in under 10 minutes.

This walkthrough sets up a complete monitoring pipeline for a Helm chart. By the end, you will have Planekeeper checking the Argo CD Helm chart version deployed in your repository against the latest upstream releases.

Prerequisites:

  • A Planekeeper account with an active organization. See Account setup if you have not done this yet.
  • An API key. Create one from the API Keys page and save it — it is only shown once.

Step 1: Set up and verify an agent

An agent polls the Planekeeper API for tasks — scraping your repositories for version info and executing gather jobs. You need a running agent before jobs can execute.

1a. Create the agent directory

Create a new directory for the agent and add the following files:

docker-compose.yml

name: planekeeper-client

services:
  clientagent:
    image: sqljames/planekeeper-agent:latest
    container_name: planekeeper-client-agent
    restart: unless-stopped
    environment:
      - AGENT_SERVER_URL=${AGENT_SERVER_URL}
      - AGENT_API_KEY=${AGENT_API_KEY}
      - AGENT_POLL_INTERVAL=${AGENT_POLL_INTERVAL:-30}
      - GITHUB_TOKEN=${GITHUB_TOKEN:-}
    volumes:
      - agent-git-cache:/tmp/planekeeper
      # Agent config for private repository credentials
      - ./config.yaml:/etc/planekeeper/config.yaml:ro
      # Uncomment to mount an SSH key file for private repos
      # - ~/.ssh/id_ed25519:/ssh/id_ed25519:ro

volumes:
  agent-git-cache:

.env

# Required: Planekeeper API endpoint (must include /api/v1/agent)
AGENT_SERVER_URL=https://www.planekeeper.com/api/v1/agent

# Required: The API key you created in the prerequisites
AGENT_API_KEY=pk_xxx

# Optional: Poll interval in seconds (default: 30)
AGENT_POLL_INTERVAL=30

# Optional: GitHub token for higher rate limits (60/hr without, 5000/hr with)
# Create at: https://github.com/settings/tokens
GITHUB_TOKEN=

Replace the pk_xxx placeholder with the API key you created in the prerequisites.
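Before starting the agent, it can save a debugging round-trip to confirm the key was actually filled in. A minimal sketch (assuming .env sits in the current directory; this check is not part of Planekeeper itself):

```shell
# Sketch: fail fast if the API key is unset or still the placeholder.
if [ -f .env ]; then
  set -a; . ./.env; set +a   # export everything defined in .env
fi
if [ -z "${AGENT_API_KEY:-}" ] || [ "${AGENT_API_KEY:-}" = "pk_xxx" ]; then
  status="missing"
  echo "AGENT_API_KEY is unset or still the pk_xxx placeholder -- edit .env first"
else
  status="ok"
  echo "AGENT_API_KEY looks set"
fi
```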

1b. Configure credentials for private repos (optional)

If your scrape jobs need to access private Git repositories, create a config.yaml in the same directory:

agent:
  credentials:
    # SSH key (file-based - mount the key in docker-compose.yml)
    # my_ssh_key:
    #   type: ssh_key
    #   private_key_file: /ssh/id_ed25519
    #   passphrase: ""  # optional

    # SSH key (inline - embed the PEM content directly)
    # my_inline_key:
    #   type: ssh_key
    #   private_key: |
    #     -----BEGIN OPENSSH PRIVATE KEY-----
    #     ...key content...
    #     -----END OPENSSH PRIVATE KEY-----
    #   passphrase: ""  # optional

    # HTTPS personal access token (GitHub, GitLab, etc.)
    # github_pat:
    #   type: https_pat
    #   token: ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # OCI container registry (Docker Hub, GHCR, quay.io, etc.)
    # dockerhub:
    #   type: registry_basic
    #   username: myuser
    #   password: dckr_pat_xxxxxxxxxxxx

Uncomment and configure the credential types you need. Each credential is referenced by name when creating scrape jobs — the agent advertises its available credentials so jobs are only assigned to agents that have the required credential.

See Private repo access for the full credential reference.

1c. Start the agent

docker compose up -d
docker compose logs -f

You should see a heartbeat message confirming the agent has registered with the server.

1d. Verify in the UI

  1. Open the Agents page from the sidebar.
  2. Your agent should appear with a recent heartbeat.
warning
Without a running agent, gather and scrape jobs stay in pending status indefinitely. Confirm agent connectivity before proceeding.

Step 2: Browse upstream releases

Planekeeper ships with global gather jobs that track popular upstream projects out of the box. These run automatically and their releases are available to all organizations.

  1. Open the Releases page from the sidebar.
  2. Set the Scope filter to All or Global.
  3. Search for argo-cd — you should see Argo CD releases already populated with version numbers and publication dates.

These global releases are ready to use in your alert configs. No need to create a gather job for projects that are already tracked.
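"Latest" here means highest under semantic-version ordering, not lexical ordering — 5.51.4 is newer than 5.9.1 even though it sorts earlier as a string. A quick illustration using GNU sort's version sort (the version numbers are illustrative):

```shell
# Version-sort beats string-sort for release numbers:
# lexically "5.9.1" > "5.51.4", but as versions 5.51.4 is newest.
latest=$(printf '%s\n' 5.9.1 5.51.4 5.10.0 | sort -V | tail -n1)
echo "latest: $latest"
```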

success
Want to track a project that isn’t in the global list? Create your own gather job — see Gather jobs for details.

Step 3: Create a scrape job

A scrape job reads a configuration file in your Git repository and extracts the version you currently have deployed.

  1. Open the Scrape Jobs page from the sidebar.
  2. Click Create Scrape Job.
  3. Fill in the form:
    • Name: Production Argo CD Version
    • Repository URL: https://github.com/your-org/your-infra-repo.git
    • Ref (Branch/Tag): main
    • Target file: charts/argo-cd/Chart.yaml
    • Parser type: yq
    • Parse expression: .dependencies[0].version
    • Schedule: 0 */6 * * * (every 6 hours)
  4. Click Create Scrape Job.
info
If your repository is private, select a credential name that matches a credential configured on your agent. See Private repo access for credential setup.
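As a concrete illustration, the parse expression .dependencies[0].version pulls the chart version out of a Chart.yaml that wraps Argo CD as a dependency. The exact file layout below is hypothetical — adjust the path and array index to match your repository:

```yaml
# Hypothetical charts/argo-cd/Chart.yaml in your infra repo
apiVersion: v2
name: argo-cd-wrapper
version: 1.0.0
dependencies:
  - name: argo-cd
    version: 5.51.4    # .dependencies[0].version resolves to this value
    repository: https://argoproj.github.io/argo-helm
```

If you have yq (v4) installed, you can dry-run the expression locally with yq '.dependencies[0].version' charts/argo-cd/Chart.yaml before creating the job.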

Step 4: Verify the version snapshot

After the scrape job completes:

  1. Open the scrape job detail page.
  2. Check the Version History section at the bottom.
  3. Confirm that a version snapshot appears (for example, 5.51.4).

Each time the scrape job runs, it creates a new snapshot, building a history of your deployed version over time.

Step 5: Choose a rule

A rule defines how to measure staleness and what severity to assign at each threshold.

Planekeeper ships with built-in rules that cover common monitoring scenarios. For example, the minors_behind rule type compares your version against the latest upstream release and counts how many minor versions behind you are. You can set thresholds to trigger moderate, high, and critical alerts based on how far behind you are.

If a built-in rule fits your needs, you can use it directly when creating an alert config in the next step. Otherwise, create a custom rule:

  1. Click Create Rule.
  2. Fill in the form:
    • Name: Helm Chart Currency
    • Rule type: minors_behind
    • Moderate threshold: 3 (3 minor versions behind)
    • High threshold: 5 (5 minor versions behind)
    • Critical threshold: 10 (10 minor versions behind)
    • Stable releases only: Check this box
  3. Click Create Rule.

This rule triggers a moderate alert when your deployed version is 3 or more minor versions behind, escalates to high at 5, and marks it critical at 10.
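The threshold logic can be sketched in a few lines of shell. This is an illustration of the escalation behavior, not Planekeeper's implementation, and it assumes the deployed and latest versions share the same major version (the version numbers are made up):

```shell
# Sketch: count minor versions behind and map the count to a severity.
deployed="2.46.1"
latest="2.51.4"
# Extract the minor component (second dot-separated field) of each version.
behind=$(( $(echo "$latest" | cut -d. -f2) - $(echo "$deployed" | cut -d. -f2) ))
if   [ "$behind" -ge 10 ]; then severity="critical"
elif [ "$behind" -ge 5 ];  then severity="high"
elif [ "$behind" -ge 3 ];  then severity="moderate"
else severity="ok"
fi
echo "$behind minors behind -> $severity"
```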

success
Built-in rules are a great starting point. You can always create custom rules later with different thresholds or rule types. See Rules for details.

Step 6: Create an alert config

An alert config ties together a scrape job, a gather job, and a rule. It is the connection point that makes Planekeeper evaluate your deployed version against the latest releases.

  1. Open the Alert Configs page from the sidebar.
  2. Click Create Config.
  3. Fill in the form:
    • Name: Argo CD Version Check
    • Scrape job: Select Production Argo CD Version
    • Gather job: Select the global Argo CD gather job
    • Rule: Select a built-in rule or your custom Helm Chart Currency rule
  4. Click Create Config.
success
Creating an alert config immediately triggers a rule evaluation. If the gather job has releases and the scrape job has completed, you will see an alert within seconds.

Step 7: Check for alerts

  1. Open the Alerts page from the sidebar.
  2. If your deployed version is behind, an alert appears with the appropriate severity level.
  3. The alert shows:
    • Your deployed version
    • The latest upstream version
    • How far behind you are (in minor versions)
    • The severity level based on your rule thresholds

To acknowledge an alert, click the Acknowledge button. This marks it as reviewed without resolving it. The alert resolves automatically when a future scrape detects that you have updated to a version that no longer violates the rule.

What you built

Global Gather Job         Scrape Job
(upstream releases)       (deployed version)
        \                     /
         \                   /
          \                 /
           Alert Config
           (links both + rule)
                |
                v
            Alert
       (severity-graded)

The global gather job keeps upstream releases up to date automatically. Your scrape job runs every 6 hours to check your deployed version. Whenever new data arrives, the rules engine re-evaluates and updates alerts automatically.

Next steps