Installation

Full installation guide with all configuration options.

This guide covers installation from prerequisites through custom configuration, verification, and uninstallation.

Prerequisites

| Requirement | Minimum version |
| --- | --- |
| Kubernetes | 1.28+ |
| Helm | 3.12+ |
| kubectl | 1.28+ |
| Container runtime | containerd, CRI-O |
| Storage | Default StorageClass with dynamic provisioning (for memory + Tempo PVCs) |

Required secrets (created in the agents namespace):

  • At least one LLM API key (Anthropic, OpenAI, or Google) in a Kubernetes Secret

Full install (all components)

The default install deploys the operator, console, memory service, and Tempo:

helm install agentops oci://ghcr.io/samyn92/charts/agentops-platform \
  --namespace agent-system --create-namespace

This creates two namespaces:

| Namespace | Contents |
| --- | --- |
| agent-system | Operator, console, Tempo |
| agents | Memory service, agent pods, AgentTool/AgentResource/Channel workloads |

Minimal install (operator only)

If you only need the operator and will run agents without the console, memory, or tracing:

helm install agentops oci://ghcr.io/samyn92/charts/agentops-platform \
  --namespace agent-system --create-namespace \
  --set agentops-console.enabled=false \
  --set memory.enabled=false \
  --set tempo.enabled=false

The operator installs the CRDs and watches for Agent, AgentRun, AgentTool, AgentResource, and Channel resources. Agents will still function — they just won’t have memory integration or a web UI.
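As a sketch of what the operator reconciles, a minimal Agent might look like the following. Apart from spec.providerRefs (covered under Model providers below), the field names here are assumptions; check the installed CRD schema for the authoritative shape:

# agent.yaml — minimal sketch; fields other than providerRefs are assumptions
apiVersion: agents.agentops.io/v1alpha1
kind: Agent
metadata:
  name: hello-agent
  namespace: agents
spec:
  providerRefs:
    - name: anthropic   # a Provider CR, created as shown under "Model providers"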

Custom configuration

Using a values file

Create a values-override.yaml and pass it at install time:

helm install agentops oci://ghcr.io/samyn92/charts/agentops-platform \
  --namespace agent-system --create-namespace \
  -f values-override.yaml

Model providers

LLM API keys are configured per-agent in the Agent CR, not at the platform level. Create secrets in the agents namespace:

# Anthropic
kubectl create secret generic llm-api-keys \
  --namespace agents \
  --from-literal=ANTHROPIC_API_KEY=sk-ant-...

# OpenAI
kubectl create secret generic llm-api-keys \
  --namespace agents \
  --from-literal=OPENAI_API_KEY=sk-...

# Multiple providers in one secret
kubectl create secret generic llm-api-keys \
  --namespace agents \
  --from-literal=ANTHROPIC_API_KEY=sk-ant-... \
  --from-literal=OPENAI_API_KEY=sk-...

Agents reference providers via Provider CRs. Create one per backend:

# providers.yaml
apiVersion: agents.agentops.io/v1alpha1
kind: Provider
metadata:
  name: anthropic
  namespace: agents
spec:
  type: anthropic
  apiKeySecret:
    name: llm-api-keys
    key: ANTHROPIC_API_KEY
---
apiVersion: agents.agentops.io/v1alpha1
kind: Provider
metadata:
  name: openai
  namespace: agents
spec:
  type: openai
  apiKeySecret:
    name: llm-api-keys
    key: OPENAI_API_KEY

Apply it:

kubectl apply -f providers.yaml

Then reference them from agents via spec.providerRefs:

spec:
  providerRefs:
    - name: anthropic
    - name: openai
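
The prerequisites also list Google as a supported provider. A Google Provider CR presumably follows the same pattern; the type value and key name below are assumptions, not confirmed by this guide:

# google-provider.yaml — sketch; the type value and key name are assumptions
apiVersion: agents.agentops.io/v1alpha1
kind: Provider
metadata:
  name: google
  namespace: agents
spec:
  type: google
  apiKeySecret:
    name: llm-api-keys
    key: GOOGLE_API_KEY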

Image pull secrets

For private registries, configure global pull secrets:

# values-override.yaml
global:
  imagePullSecrets:
    - name: ghcr-pull-secret

Create the secret beforehand:

kubectl create secret docker-registry ghcr-pull-secret \
  --namespace agent-system \
  --docker-server=ghcr.io \
  --docker-username=YOUR_USER \
  --docker-password=YOUR_PAT

Console ingress

Enable ingress for external access to the console:

# values-override.yaml
agentops-console:
  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: agentops.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: agentops-tls
        hosts:
          - agentops.example.com

Console environment (Tempo + Memory URLs)

The console BFF needs to know where Tempo and the memory service live. The service names embed the Helm release name (agentops in the example below) and the chart name, so they depend on your release name and namespace. Set them explicitly:

# values-override.yaml
agentops-console:
  env:
    - name: TEMPO_URL
      value: "http://agentops-agentops-platform-tempo.agent-system.svc.cluster.local:3200"
    - name: ENGRAM_URL_OVERRIDE
      value: "http://agentops-agentops-platform-memory.agents.svc.cluster.local:7437"

Memory service

Configure persistence size and image tag:

# values-override.yaml
memory:
  image:
    tag: "0.2.0"
  persistence:
    size: 5Gi
    storageClassName: "local-path"

Agent namespace

Change the namespace where agent workloads are deployed. Per-agent secrets and Provider CRs should then be created in that namespace instead of agents:

# values-override.yaml
agentNamespace: my-agents
createNamespace: true

Helm values reference

Key values for the umbrella chart:

| Value | Default | Description |
| --- | --- | --- |
| agentNamespace | agents | Namespace for agent workloads |
| createNamespace | true | Create the agent namespace |
| global.imagePullSecrets | [] | Image pull secrets for all components |

Operator

| Value | Default | Description |
| --- | --- | --- |
| agentops-operator.enabled | true | Deploy the operator |
| agentops-operator.image.repository | ghcr.io/samyn92/agentops-operator | Operator image |
| agentops-operator.image.tag | "" (chart appVersion) | Operator image tag |

Console

| Value | Default | Description |
| --- | --- | --- |
| agentops-console.enabled | true | Deploy the console |
| agentops-console.image.repository | ghcr.io/samyn92/agentops-console | Console image |
| agentops-console.image.tag | "" (chart appVersion) | Console image tag |
| agentops-console.ingress.enabled | false | Enable console ingress |
| agentops-console.ingress.className | "" | Ingress class name |
| agentops-console.ingress.hosts | [{host: agentops.local, ...}] | Ingress host configuration |
| agentops-console.service.type | ClusterIP | Console service type |
| agentops-console.service.port | 80 | Console service port |
| agentops-console.console.namespace | "" | Restrict console to a single namespace |

Memory

| Value | Default | Description |
| --- | --- | --- |
| memory.enabled | true | Deploy the memory service |
| memory.image.repository | ghcr.io/samyn92/agentops-memory | Memory image |
| memory.image.tag | 0.2.0 | Memory image tag |
| memory.persistence.size | 1Gi | SQLite PVC size |
| memory.persistence.storageClassName | "" (cluster default) | Storage class |

Tempo

| Value | Default | Description |
| --- | --- | --- |
| tempo.enabled | true | Deploy Grafana Tempo |
| tempo.tempo.retention | 72h | Trace retention period |
| tempo.persistence.size | 5Gi | Tempo storage PVC size |
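
Pulling a few of these together, a combined override file might look like this (the specific values are purely illustrative):

# values-override.yaml — illustrative combination of the values above
agentNamespace: agents
global:
  imagePullSecrets:
    - name: ghcr-pull-secret
agentops-console:
  ingress:
    enabled: true
    className: nginx
memory:
  persistence:
    size: 5Gi
tempo:
  tempo:
    retention: 168h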

Verifying the installation

Check all pods

kubectl get pods -n agent-system
kubectl get pods -n agents

Verify CRDs are installed

kubectl get crds | grep agentops

Expected output:

agents.agents.agentops.io              2026-01-01T00:00:00Z
agentruns.agents.agentops.io           2026-01-01T00:00:00Z
agenttools.agents.agentops.io          2026-01-01T00:00:00Z
agentresources.agents.agentops.io      2026-01-01T00:00:00Z
channels.agents.agentops.io            2026-01-01T00:00:00Z

Verify the memory service

kubectl get pods -n agents -l app=agentops-memory
kubectl logs -n agents -l app=agentops-memory --tail=5

Test console connectivity

kubectl port-forward svc/agentops-agentops-console -n agent-system 8080:80
# Open http://localhost:8080

Upgrading

helm upgrade agentops oci://ghcr.io/samyn92/charts/agentops-platform \
  --namespace agent-system \
  -f values-override.yaml

CRDs are upgraded automatically by the operator subchart. Existing agents continue running — the operator reconciles them against the new version.

Uninstalling

# Remove all agents and tools first
kubectl delete agents,agentruns,agenttools,agentresources,channels --all -n agents

# Uninstall the platform
helm uninstall agentops --namespace agent-system

# Clean up namespaces (optional)
kubectl delete namespace agent-system
kubectl delete namespace agents

# Remove CRDs (optional — this deletes all AgentOps resources cluster-wide)
kubectl delete crds \
  agents.agents.agentops.io \
  agentruns.agents.agentops.io \
  agenttools.agents.agentops.io \
  agentresources.agents.agentops.io \
  channels.agents.agentops.io