Kubernetes Deployment

For Platform Teams

This guide covers deploying Olytix Core to Kubernetes for production environments. Learn how to use Helm charts for streamlined deployment or native Kubernetes manifests for full control.

Prerequisites

  • Kubernetes 1.25 or later
  • kubectl configured for your cluster
  • Helm 3.10 or later (for Helm deployments)
  • Persistent storage provisioner (for stateful components)

# Verify prerequisites
kubectl version --client
helm version
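If you want to gate automation on the 1.25 minimum, `sort -V` gives a simple semver-style comparison. A minimal sketch; the `current` value is a placeholder for the `gitVersion` reported by `kubectl version --client -o json`:

```shell
# Compare a version string against the 1.25 minimum using sort -V.
# "current" is a stand-in -- substitute your actual kubectl client version.
required="1.25.0"
current="v1.28.3"
lowest=$(printf '%s\n%s\n' "$required" "${current#v}" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "kubectl meets the minimum"
else
  echo "kubectl is too old" >&2
fi
```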

Helm Chart Deployment

Add the Olytix Core Helm Repository

# Add the Olytix Helm repository
helm repo add olytix https://charts.olytix.net
helm repo update

# Search for available versions
helm search repo olytix/olytix-core --versions

Basic Installation

# Install with default values
helm install olytix-core olytix/olytix-core \
  --namespace olytix-core \
  --create-namespace

# Check deployment status
kubectl get pods -n olytix-core

Production Installation

Create a values file for production configuration:

# values-production.yaml
replicaCount: 3

image:
  repository: olytix/olytix-core
  tag: "1.2.0"
  pullPolicy: IfNotPresent

resources:
  limits:
    cpu: 2000m
    memory: 4Gi
  requests:
    cpu: 500m
    memory: 1Gi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
  targetMemoryUtilizationPercentage: 80

# Database configuration
database:
  host: postgres.database.svc.cluster.local
  port: 5432
  name: olytix-core_analytics
  existingSecret: olytix-core-database-credentials
  existingSecretUsernameKey: username
  existingSecretPasswordKey: password

# Redis configuration
redis:
  enabled: true
  architecture: standalone
  auth:
    enabled: true
    existingSecret: olytix-core-redis-credentials
  resources:
    limits:
      cpu: 500m
      memory: 1Gi

# Ingress configuration
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
  hosts:
    - host: olytix-core.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: olytix-core-tls
      hosts:
        - olytix-core.example.com

# Worker configuration
worker:
  enabled: true
  replicaCount: 2
  resources:
    limits:
      cpu: 2000m
      memory: 4Gi
    requests:
      cpu: 500m
      memory: 1Gi

# Scheduler configuration
scheduler:
  enabled: true

# Monitoring
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s

# Pod disruption budget
podDisruptionBudget:
  enabled: true
  minAvailable: 2

# Service account
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT:role/olytix-core-role

# Environment variables
env:
  OLYTIX_LOG_LEVEL: INFO
  OLYTIX_LOG_FORMAT: json
  OLYTIX_METRICS__ENABLED: "true"

Install with production values:

# Create namespace
kubectl create namespace olytix-core

# Create secrets
kubectl create secret generic olytix-core-database-credentials \
  --namespace olytix-core \
  --from-literal=username=olytix-core_prod \
  --from-literal=password=your_secure_password

kubectl create secret generic olytix-core-redis-credentials \
  --namespace olytix-core \
  --from-literal=redis-password=your_redis_password

# Install with production values
helm install olytix-core olytix/olytix-core \
  --namespace olytix-core \
  --values values-production.yaml \
  --wait

# Verify deployment
kubectl get all -n olytix-core
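Typing `your_secure_password` on the command line leaves the real value in shell history. One common pattern (an illustration, not an Olytix requirement) is to generate a random credential into a variable first:

```shell
# Generate a 32-character alphanumeric password from the kernel CSPRNG
DB_PASSWORD="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)"

# Then pass it when creating the secret (shown as a comment so the
# snippet stays cluster-independent):
#   kubectl create secret generic olytix-core-database-credentials \
#     --namespace olytix-core \
#     --from-literal=username=olytix-core_prod \
#     --from-literal=password="$DB_PASSWORD"
echo "${#DB_PASSWORD}"  # → 32
```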

Upgrade and Rollback

# Upgrade to new version
helm upgrade olytix-core olytix/olytix-core \
  --namespace olytix-core \
  --values values-production.yaml \
  --set image.tag=1.3.0

# Rollback if needed
helm rollback olytix-core 1 --namespace olytix-core

# View history
helm history olytix-core --namespace olytix-core
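For production upgrades you may prefer Helm's `--atomic` flag, which rolls the release back automatically if it does not become ready within the timeout, instead of requiring a manual `helm rollback`. A sketch (cluster-dependent, so run against your own environment):

```shell
# Upgrade with automatic rollback on failure
helm upgrade olytix-core olytix/olytix-core \
  --namespace olytix-core \
  --values values-production.yaml \
  --set image.tag=1.3.0 \
  --atomic \
  --timeout 10m
```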

Native Kubernetes Manifests

For full control, use native Kubernetes manifests.

Namespace and ConfigMap

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: olytix-core
  labels:
    app.kubernetes.io/name: olytix-core
---
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: olytix-core-config
  namespace: olytix-core
data:
  OLYTIX_LOG_LEVEL: "INFO"
  OLYTIX_LOG_FORMAT: "json"
  OLYTIX_SERVER__HOST: "0.0.0.0"
  OLYTIX_SERVER__PORT: "8000"
  OLYTIX_SERVER__WORKERS: "4"
  OLYTIX_METRICS__ENABLED: "true"
  OLYTIX_DATABASE__PORT: "5432"
  OLYTIX_DATABASE__NAME: "olytix-core_analytics"

Secrets

# secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: olytix-core-secrets
  namespace: olytix-core
type: Opaque
stringData:
  OLYTIX_DATABASE__USER: olytix-core_prod
  OLYTIX_DATABASE__PASSWORD: your_secure_password
  OLYTIX_SECRET_KEY: your_secret_key
  OLYTIX_API_KEY: your_api_key
---
apiVersion: v1
kind: Secret
metadata:
  name: olytix-core-redis-secret
  namespace: olytix-core
type: Opaque
stringData:
  REDIS_PASSWORD: your_redis_password
  # Referenced by the API Deployment below as OLYTIX_REDIS__URL;
  # adjust the host to match your Redis service.
  REDIS_URL: redis://:your_redis_password@redis.olytix-core.svc.cluster.local:6379/0
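These manifests use `stringData`, which accepts plaintext; the API server stores the values base64-encoded under `data`, which is what `kubectl get secret -o yaml` returns. Decoding recovers the original value:

```shell
# What the API server stores for OLYTIX_DATABASE__PASSWORD under .data:
printf '%s' 'your_secure_password' | base64
# → eW91cl9zZWN1cmVfcGFzc3dvcmQ=

# Decoding a retrieved value recovers the plaintext:
printf '%s' 'eW91cl9zZWN1cmVfcGFzc3dvcmQ=' | base64 -d
# → your_secure_password
```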

Deployment

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: olytix-core-api
  namespace: olytix-core
  labels:
    app.kubernetes.io/name: olytix-core
    app.kubernetes.io/component: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: olytix-core
      app.kubernetes.io/component: api
  template:
    metadata:
      labels:
        app.kubernetes.io/name: olytix-core
        app.kubernetes.io/component: api
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8000"
        prometheus.io/path: "/metrics"
    spec:
      serviceAccountName: olytix-core
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: olytix-core
          image: olytix/olytix-core:1.2.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8000
              protocol: TCP
          envFrom:
            - configMapRef:
                name: olytix-core-config
            - secretRef:
                name: olytix-core-secrets
          env:
            - name: OLYTIX_DATABASE__HOST
              value: postgres.database.svc.cluster.local
            - name: OLYTIX_REDIS__URL
              valueFrom:
                secretKeyRef:
                  name: olytix-core-redis-secret
                  key: REDIS_URL
          resources:
            limits:
              cpu: 2000m
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 1Gi
          livenessProbe:
            httpGet:
              path: /health/live
              port: http
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health/ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: project-volume
              mountPath: /workspace
              readOnly: true
      volumes:
        - name: project-volume
          persistentVolumeClaim:
            claimName: olytix-core-project-pvc
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: olytix-core
                    app.kubernetes.io/component: api
                topologyKey: kubernetes.io/hostname

Worker Deployment

# worker-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: olytix-core-worker
  namespace: olytix-core
  labels:
    app.kubernetes.io/name: olytix-core
    app.kubernetes.io/component: worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: olytix-core
      app.kubernetes.io/component: worker
  template:
    metadata:
      labels:
        app.kubernetes.io/name: olytix-core
        app.kubernetes.io/component: worker
    spec:
      serviceAccountName: olytix-core
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
      containers:
        - name: worker
          image: olytix/olytix-core:1.2.0
          command: ["celery", "-A", "src.olytix-core.tasks", "worker", "--loglevel=info", "--concurrency=4"]
          envFrom:
            - configMapRef:
                name: olytix-core-config
            - secretRef:
                name: olytix-core-secrets
          env:
            - name: OLYTIX_DATABASE__HOST
              value: postgres.database.svc.cluster.local
          resources:
            limits:
              cpu: 2000m
              memory: 4Gi
            requests:
              cpu: 500m
              memory: 1Gi
          livenessProbe:
            exec:
              command:
                - celery
                - -A
                - src.olytix-core.tasks
                - inspect
                - ping
            initialDelaySeconds: 30
            periodSeconds: 60
            timeoutSeconds: 10

Service

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: olytix-core-api
  namespace: olytix-core
  labels:
    app.kubernetes.io/name: olytix-core
    app.kubernetes.io/component: api
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: olytix-core
    app.kubernetes.io/component: api

Ingress

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: olytix-core-ingress
  namespace: olytix-core
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
spec:
  # The kubernetes.io/ingress.class annotation is deprecated;
  # use spec.ingressClassName on Kubernetes 1.25+.
  ingressClassName: nginx
  tls:
    - hosts:
        - olytix-core.example.com
      secretName: olytix-core-tls
  rules:
    - host: olytix-core.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: olytix-core-api
                port:
                  number: 80

HorizontalPodAutoscaler

# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: olytix-core-api-hpa
  namespace: olytix-core
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: olytix-core-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
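The 70% CPU target feeds the standard HPA formula, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped to [minReplicas, maxReplicas]. A quick worked example in shell arithmetic (the utilization values are illustrative):

```shell
# HPA formula: desired = ceil(current_replicas * current_util / target_util)
current_replicas=3
current_cpu=90   # observed average utilization (%), illustrative
target_cpu=70    # averageUtilization from hpa.yaml
# Integer ceiling: (a + b - 1) / b
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"  # → 4
```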

PodDisruptionBudget

# pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: olytix-core-api-pdb
  namespace: olytix-core
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: olytix-core
      app.kubernetes.io/component: api

ServiceAccount and RBAC

# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: olytix-core
  namespace: olytix-core
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: olytix-core-role
  namespace: olytix-core
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: olytix-core-rolebinding
  namespace: olytix-core
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: olytix-core-role
subjects:
  - kind: ServiceAccount
    name: olytix-core
    namespace: olytix-core

Cloud-Specific Configurations

AWS EKS

# AWS ALB Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: olytix-core-ingress
  namespace: olytix-core
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:region:account:certificate/id
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/healthcheck-path: /health/ready
spec:
  rules:
    - host: olytix-core.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: olytix-core-api
                port:
                  number: 80

Google GKE

# GKE Ingress with managed certificate
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: olytix-core-ingress
  namespace: olytix-core
  annotations:
    kubernetes.io/ingress.class: gce
    kubernetes.io/ingress.global-static-ip-name: olytix-core-ip
    networking.gke.io/managed-certificates: olytix-core-certificate
spec:
  rules:
    - host: olytix-core.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: olytix-core-api
                port:
                  number: 80
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: olytix-core-certificate
  namespace: olytix-core
spec:
  domains:
    - olytix-core.example.com

Azure AKS

# Azure Application Gateway Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: olytix-core-ingress
  namespace: olytix-core
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
    appgw.ingress.kubernetes.io/health-probe-path: /health/ready
spec:
  tls:
    - hosts:
        - olytix-core.example.com
      secretName: olytix-core-tls
  rules:
    - host: olytix-core.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: olytix-core-api
                port:
                  number: 80

Operations

Deploy All Manifests

# Apply all manifests
kubectl apply -f namespace.yaml
kubectl apply -f configmap.yaml
kubectl apply -f secrets.yaml
kubectl apply -f rbac.yaml
kubectl apply -f deployment.yaml
kubectl apply -f worker-deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
kubectl apply -f hpa.yaml
kubectl apply -f pdb.yaml

# Or apply entire directory
kubectl apply -f ./manifests/
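Before applying for real, the manifests can be validated without changing cluster state; `--dry-run=server` exercises the API server's schema and admission checks. A sketch to run against your own cluster:

```shell
# Validate manifests server-side without persisting anything
kubectl apply -f ./manifests/ --dry-run=server

# Then apply and wait for the API deployment to converge
kubectl apply -f ./manifests/
kubectl rollout status deployment/olytix-core-api -n olytix-core --timeout=300s
```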

Monitoring Deployment

# Check pod status
kubectl get pods -n olytix-core -w

# View logs
kubectl logs -n olytix-core -l app.kubernetes.io/name=olytix-core -f

# Describe pod for troubleshooting
kubectl describe pod -n olytix-core <pod-name>

# Check HPA status
kubectl get hpa -n olytix-core

Rolling Updates

# Update image
kubectl set image deployment/olytix-core-api \
  olytix-core=olytix/olytix-core:1.3.0 \
  -n olytix-core

# Watch rollout
kubectl rollout status deployment/olytix-core-api -n olytix-core

# Rollback if needed
kubectl rollout undo deployment/olytix-core-api -n olytix-core

Next Steps