# [TUTORIAL] Deploying ActivePieces on Kubernetes
This guide explains how to deploy ActivePieces on a Kubernetes cluster using PostgreSQL, Redis, Istio, and Cert-Manager.
## Prerequisites

Before deploying, ensure you have:

- A running Kubernetes cluster
- `kubectl` installed and configured
- Istio installed (for gateway and traffic management)
- Cert-Manager installed (for SSL/TLS certificate management)
- The CloudNativePG operator installed (the PostgreSQL manifests below use `postgresql.cnpg.io/v1`)
- The spotahome redis-operator installed (the Redis manifest below uses `databases.spotahome.com/v1`)
- A storage backend (S3, Ceph, or other) for PostgreSQL backups
- A valid domain (e.g., `ap.example.com`)
## Step 1: Deploy PostgreSQL

Make sure the `automation` namespace exists (`kubectl create namespace automation`). Then create a file named `pg.yaml`, paste the following, and apply it with `kubectl apply -f pg.yaml`. Replace the S3 bucket and endpoint placeholders with your own backup storage details:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: activepieces-pg
  namespace: automation
spec:
  instances: 1
  primaryUpdateStrategy: unsupervised
  storage:
    size: 20Gi
  enableSuperuserAccess: true
  monitoring:
    enablePodMonitor: true
  backup:
    barmanObjectStore:
      destinationPath: s3://your-s3-bucket/backup/
      endpointURL: http://your-s3-endpoint
      wal:
        compression: bzip2
      s3Credentials:
        accessKeyId:
          name: s3-credentials
          key: AWS_ACCESS_KEY_ID
        secretAccessKey:
          name: s3-credentials
          key: AWS_SECRET_ACCESS_KEY
  postgresql:
    parameters:
      max_wal_size: "512MB"
      min_wal_size: "32MB"
      wal_keep_size: "256MB"
      # archive_mode is intentionally not set here: it is a fixed parameter
      # managed by CloudNativePG, and turning it off would break the
      # barmanObjectStore backups configured above
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: backup-activepieces-pg
  namespace: automation
spec:
  # CloudNativePG uses a six-field cron expression (seconds first):
  # this runs every day at midnight
  schedule: "0 0 0 * * *"
  backupOwnerReference: self
  cluster:
    name: activepieces-pg
```
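The backup configuration above references a secret named `s3-credentials` that this guide never creates. A minimal sketch, assuming static S3 access keys (create it in the `automation` namespace before applying `pg.yaml`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  namespace: automation
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: your-access-key-id
  AWS_SECRET_ACCESS_KEY: your-secret-access-key
```

Using `stringData` lets you write the values in plain text; Kubernetes base64-encodes them on admission.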
## Step 2: Deploy Redis Failover

Create a file named `redis.yaml`, paste the following, and apply it with `kubectl apply -f redis.yaml`. The `RedisFailover` resource is managed by the spotahome redis-operator:

```yaml
apiVersion: databases.spotahome.com/v1
kind: RedisFailover
metadata:
  name: activepieces-redis
  namespace: automation
spec:
  sentinel:
    replicas: 3
  redis:
    replicas: 3
---
apiVersion: v1
kind: Service
metadata:
  name: activepieces-redis-master
  namespace: automation
spec:
  type: ClusterIP
  selector:
    # without a selector this Service would match no pods; these labels are
    # applied by the redis-operator, so verify them against your operator
    # version with: kubectl get pods -n automation --show-labels
    app.kubernetes.io/component: redis
    redisfailovers.databases.spotahome.com/name: activepieces-redis
    redisfailovers-role: master
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
```

This Service is what ActivePieces will connect to; its selector targets the pod the operator currently labels as master.
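Before wiring ActivePieces to Redis, you can sanity-check the master Service with a throwaway `redis-cli` pod. A sketch using the service name from the manifest above (this requires a working cluster and pulls the public `redis:7` image):

```shell
# launch a one-off pod, ping the master service, then clean the pod up;
# a healthy master answers PONG
kubectl run redis-check --rm -it --restart=Never -n automation \
  --image=redis:7 -- \
  redis-cli -h activepieces-redis-master.automation.svc.cluster.local ping
```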
## Step 3: Deploy ActivePieces

Create a file named `activepieces.yaml`, paste the following, and apply it with `kubectl apply -f activepieces.yaml`. Replace `ap.example.com` with your own domain:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activepieces
  namespace: automation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activepieces
  template:
    metadata:
      labels:
        app: activepieces
    spec:
      containers:
        - name: activepieces
          # consider pinning a specific release instead of :latest
          image: ghcr.io/activepieces/activepieces:latest
          ports:
            - name: http
              containerPort: 80
          env:
            - name: AP_FRONTEND_URL
              value: "https://ap.example.com/"
            - name: AP_CLOUD_AUTH_ENABLED
              value: "true"
            - name: AP_REDIS_HOST
              value: activepieces-redis-master.automation.svc.cluster.local
            - name: AP_DB_TYPE
              value: POSTGRES
            - name: AP_POSTGRES_HOST
              # CloudNativePG exposes the primary through the <cluster>-rw
              # service; a bare "activepieces-pg" service is not created
              value: "activepieces-pg-rw.automation.svc.cluster.local"
            - name: AP_POSTGRES_PORT
              value: "5432"
---
apiVersion: v1
kind: Service
metadata:
  name: activepieces
  namespace: automation
spec:
  selector:
    app: activepieces
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```
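As written, the deployment sets no database name or credentials and no application secrets. ActivePieces also reads `AP_POSTGRES_DATABASE`, `AP_POSTGRES_USERNAME`, `AP_POSTGRES_PASSWORD`, `AP_ENCRYPTION_KEY`, and `AP_JWT_SECRET` from the environment. A sketch of additional `env` entries, assuming CloudNativePG's default convention of an `app` database with an auto-generated `activepieces-pg-app` secret (verify both in your cluster); `activepieces-secrets` is a hypothetical secret you would create yourself:

```yaml
# extra entries for the activepieces container's env: list
- name: AP_POSTGRES_DATABASE
  value: app                       # CloudNativePG's default application database
- name: AP_POSTGRES_USERNAME
  valueFrom:
    secretKeyRef:
      name: activepieces-pg-app    # secret generated by CloudNativePG
      key: username
- name: AP_POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: activepieces-pg-app
      key: password
- name: AP_ENCRYPTION_KEY
  valueFrom:
    secretKeyRef:
      name: activepieces-secrets   # hypothetical secret; generate the value
      key: encryptionKey           # with something like: openssl rand -hex 16
- name: AP_JWT_SECRET
  valueFrom:
    secretKeyRef:
      name: activepieces-secrets
      key: jwtSecret
```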
## Step 4: Configure Istio Gateway and VirtualService

Create a file named `gateway.yaml`, paste the following, and apply it with `kubectl apply -f gateway.yaml`. The Certificate is created in `istio-system` because, with `SIMPLE` TLS, the ingress gateway reads the `ap-tls` secret from its own namespace:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: activepieces
  namespace: istio-system
spec:
  secretName: ap-tls
  duration: 2160h  # 90 days
  dnsNames:
    - "ap.example.com"
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ap-gateway
  namespace: automation
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http   # Istio requires a name on each server port
        protocol: HTTP
      hosts:
        - "ap.example.com"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - "ap.example.com"
      tls:
        mode: SIMPLE
        credentialName: ap-tls
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ap-vs
  namespace: automation
spec:
  hosts:
    - "ap.example.com"
  gateways:
    - ap-gateway
  http:
    - route:
        - destination:
            host: activepieces
            port:
              number: 80
```

This assumes a `letsencrypt-prod` ClusterIssuer already exists in your cluster; creating one is part of the Cert-Manager setup listed in the prerequisites.
## Step 5: Verify Deployment

Check that all pods are running:

```shell
kubectl get pods -n automation
```

Check that the services are exposed:

```shell
kubectl get svc -n automation
```
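Once everything is applied you can also confirm that the certificate was issued and that the app answers over HTTPS. This sketch assumes your DNS already points `ap.example.com` at the Istio ingress gateway's external IP:

```shell
# the certificate should show READY=True once Let's Encrypt has issued it
kubectl get certificate activepieces -n istio-system

# the app should respond through the Istio gateway over HTTPS
curl -I https://ap.example.com/
```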