Zilla Platform Deployment on Amazon EKS
This guide walks through deploying the Zilla Platform on Amazon EKS using the umbrella Helm chart.
The umbrella chart installs all core platform components (Console, Management, and Control) into a single Kubernetes namespace.
Prerequisites
AWS
- An Amazon EKS cluster running in a VPC with private subnets
- IAM permissions to create and manage roles, policies, security groups, and VPC endpoints
- An ACM TLS wildcard certificate for your domain in the same AWS region as the cluster
- AWS Load Balancer Controller installed to provision Application Load Balancers
- VPC interface endpoints for STS and EC2 when using private subnets
- Amazon EBS for persistent storage (gp3 recommended)
- A registered domain with DNS management access
- Route 53 DNS:
- Public hosted zone for internet-facing endpoints
- Private hosted zone associated with the EKS VPC for internal endpoints
Local
- AWS CLI configured with appropriate permissions
- eksctl CLI installed
- kubectl CLI installed
- helm CLI installed (v3+)
- jq installed (for API initialization)
- Permission to modify local DNS resolution files (for optional /etc/hosts-based testing)
Install Required Tools (macOS)
brew install awscli eksctl kubectl helm jq
Get Started
Authenticate with AWS
aws configure
Verify your identity:
aws sts get-caller-identity
Request ACM Certificate
Request a wildcard TLS certificate for your domain:
aws acm request-certificate \
--domain-name "*.<DOMAIN>" \
--validation-method DNS \
--region us-east-1
Complete DNS validation in the AWS Console or via CLI, then get the certificate ARN:
aws acm list-certificates --region us-east-1
Define Environment Variables
Export the following variables or place them in a configuration file:
export AWS_REGION="<AWS_REGION>"
export CLUSTER_NAME="<CLUSTER_NAME>"
export DOMAIN="<CUSTOM_DOMAIN>"
export ACM_CERTIFICATE_ARN="arn:aws:acm:${AWS_REGION}:<ACCOUNT_ID>:certificate/<CERT_ID>"
export ZILLA_PLATFORM_LICENSE_KEY="<LICENSE_KEY>"
AWS Infrastructure Prerequisites
The following AWS resources are required to deploy Zilla Platform on Amazon EKS.
Create an EKS Cluster
Ensure an Amazon EKS cluster is available with worker nodes and OIDC enabled. The cluster must have the core EKS add-ons installed for networking and DNS.
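If a cluster does not exist yet, a minimal eksctl invocation along these lines can create one with OIDC enabled; the node count and instance type below are illustrative placeholders, not required values:

```bash
# Sketch only: creates a cluster with an OIDC provider and a small managed node group.
# Adjust node count and instance type to your sizing requirements.
eksctl create cluster \
  --name "${CLUSTER_NAME}" \
  --region "${AWS_REGION}" \
  --nodes 3 \
  --node-type m5.large \
  --with-oidc
```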
Configure kubectl to use the target cluster:
aws eks update-kubeconfig \
--name ${CLUSTER_NAME} \
--region ${AWS_REGION}
kubectl config use-context <AWS_EKS_ARN>
Verify cluster access and node readiness:
kubectl cluster-info
kubectl get nodes
If the cluster does not exist or nodes are not available, follow the steps here: Create an EKS Cluster
Install AWS Load Balancer Controller
The AWS Load Balancer Controller is required for ALB Ingress.
Verify that the controller service account exists:
kubectl get serviceaccount -n kube-system aws-load-balancer-controller
Verify installation:
kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
If the controller is not installed or not running correctly, follow the setup guide: Install AWS Load Balancer Controller
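For reference, a typical Helm-based installation of the controller looks roughly like the sketch below; it assumes the IAM role and aws-load-balancer-controller service account (IRSA) were already created as described in the setup guide:

```bash
# Sketch only: install the AWS Load Balancer Controller from the eks-charts repository.
# Assumes an existing IRSA-backed service account named aws-load-balancer-controller.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName="${CLUSTER_NAME}" \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```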
Create VPC Endpoints (Required for Private Subnets)
EKS nodes in private subnets need VPC endpoints to access AWS services.
Retrieve the VPC and cluster security group details:
VPC_ID=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} \
--region ${AWS_REGION} \
--query 'cluster.resourcesVpcConfig.vpcId' \
--output text)
SG_ID=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} \
--region ${AWS_REGION} \
--query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' \
--output text)
echo "VPC ID: ${VPC_ID}"
echo "Security Group: ${SG_ID}"Ensure the cluster security group allows inbound HTTPS traffic from the VPC CIDR. This is required for communication with interface endpoints.
Create the required interface endpoints with Private DNS enabled.
- STS endpoint, required for IAM Roles for Service Accounts (IRSA)
- EC2 endpoint, required for the Amazon EBS CSI driver
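A sketch for creating both endpoints follows; the private subnet IDs are placeholders you must substitute, and Private DNS is enabled so the default AWS service hostnames resolve to the endpoints:

```bash
# Sketch only: create STS and EC2 interface endpoints in the cluster VPC.
# Replace <PRIVATE_SUBNET_IDS> with the space-separated IDs of your private subnets.
for SERVICE in sts ec2; do
  aws ec2 create-vpc-endpoint \
    --vpc-id ${VPC_ID} \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.${AWS_REGION}.${SERVICE} \
    --subnet-ids <PRIVATE_SUBNET_IDS> \
    --security-group-ids ${SG_ID} \
    --private-dns-enabled \
    --region ${AWS_REGION}
done
```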
Confirm that the interface endpoints are available:
aws ec2 describe-vpc-endpoints \
--filters "Name=vpc-id,Values=${VPC_ID}" \
--query 'VpcEndpoints[*].[ServiceName,State]' \
--output table --region ${AWS_REGION}
If the endpoints are missing or not available, follow the full setup guide: Create VPC Endpoints
Install EBS CSI Driver (Required for Persistence)
The Amazon EBS CSI Driver is required to provision persistent volumes for stateful Zilla Platform components such as PostgreSQL and Kafka.
Fetch the IAM role ARN created for the EBS CSI controller:
ROLE_ARN=$(aws cloudformation describe-stacks \
--stack-name eksctl-${CLUSTER_NAME}-addon-iamserviceaccount-kube-system-ebs-csi-controller-sa \
--query 'Stacks[0].Outputs[?OutputKey==`Role1`].OutputValue' \
--output text --region ${AWS_REGION})
echo "Role ARN: $ROLE_ARN"Verify that the EBS CSI Driver add-on is installed on the cluster:
aws eks list-addons \
--cluster-name ${CLUSTER_NAME} \
--region ${AWS_REGION}
Confirm that the EBS CSI Driver pods are running and healthy:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
Verify that a default storage class backed by EBS exists:
kubectl get storageclass
You should see a gp3 storage class marked as (default).
If the driver, IAM role, or storage class is not present, follow the full setup guide: Install EBS CSI Driver
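If only the default storage class is missing, a minimal gp3 class can be created with a sketch like this (the class name and default annotation are illustrative, not mandated by the platform):

```bash
# Sketch only: define a default gp3 storage class backed by the EBS CSI driver.
kubectl apply -f - << 'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
EOF
```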
Prepare Configuration
Generate JWT Keys
The platform uses JWTs for internal authentication. Generate an RSA key pair if one does not already exist:
# Generate private key
openssl genrsa -out jwt.key 2048
# Generate public key
openssl rsa -in jwt.key -pubout -out jwt.pub
Create values.yaml
Create a custom Helm values file for the EKS deployment:
values.yaml
cat > values.yaml << 'EOF'
global:
  licenseKey: "" # Set via --set or environment variable
  domain: "example.com"
  jwt:
    secretName: "zilla-platform-jwt"
  ingress:
    certificateArn: "" # Set via --set or environment variable
  kubernetes:
    distribution: eks
management:
  smtp:
    host: "smtp.example.com"
    port: "587"
    fromAddress: "noreply@example.com"
    fromName: "Zilla Platform"
  postgres:
    persistence:
      enabled: true
      size: 20Gi
      storageClass: gp3
control:
  kafka:
    persistence:
      enabled: true
      size: 20Gi
      storageClass: gp3
EOF
The umbrella chart automatically:
- Constructs hostnames from the domain value. Example: console.dev.aklivity.io
- Configures ALB Ingress resources when className: alb is used
- Creates:
  - Internet-facing ALBs for Console and Management
  - Internal ALBs for Control and Platform
Configure SMTP with Amazon SES
Verify an Email Address
Verify the email address that will be used to send emails.
aws ses verify-email-identity \
--email-address your-email@example.com \
--region ${AWS_REGION}
Check the inbox for the verification email and follow the confirmation link.
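You can also confirm the verification status from the CLI; the identity below is the example sender address:

```bash
# Sketch only: check whether the sender identity has completed verification.
aws ses get-identity-verification-attributes \
  --identities your-email@example.com \
  --region ${AWS_REGION}
```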
Create SMTP Credentials
You can create SMTP credentials using either the AWS Console or the AWS CLI.
Navigate to Amazon SES → SMTP settings → Create SMTP credentials.
The console generates SMTP-ready credentials that can be used directly.
aws iam create-user --user-name ses-smtp-user
aws iam attach-user-policy \
--user-name ses-smtp-user \
--policy-arn arn:aws:iam::aws:policy/AmazonSESFullAccess
aws iam create-access-key --user-name ses-smtp-user
CLI-generated credentials require SMTP password conversion.
Convert IAM Secret Access Key to SMTP Password
python3 -c "import hmac, hashlib, base64; secret='<SECRET_ACCESS_KEY>'; region='<AWS_REGION>'; signature=hmac.new(('AWS4'+secret).encode(),'11111111'.encode(),hashlib.sha256).digest(); signature=hmac.new(signature,region.encode(),hashlib.sha256).digest(); signature=hmac.new(signature,'ses'.encode(),hashlib.sha256).digest(); signature=hmac.new(signature,'aws4_request'.encode(),hashlib.sha256).digest(); signature=hmac.new(signature,'SendRawEmail'.encode(),hashlib.sha256).digest(); print(base64.b64encode(bytes([0x04])+signature).decode())"Replace
<SECRET_ACCESS_KEY>&<AWS_REGION>with the IAM secret access key and region.
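The same derivation, spelled out as a short script for readability (the algorithm is identical to the one-liner above; the secret and region values are placeholders):

```bash
SECRET="<SECRET_ACCESS_KEY>"
REGION="<AWS_REGION>"

python3 - "$SECRET" "$REGION" << 'EOF'
import sys, hmac, hashlib, base64

secret, region = sys.argv[1], sys.argv[2]

def sign(key, msg):
    # HMAC-SHA256 step used repeatedly in the SigV4-style derivation
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

signature = sign(("AWS4" + secret).encode(), "11111111")
signature = sign(signature, region)
signature = sign(signature, "ses")
signature = sign(signature, "aws4_request")
signature = sign(signature, "SendRawEmail")

# The SES SMTP password is the version byte 0x04 plus the signature, base64-encoded
print(base64.b64encode(bytes([0x04]) + signature).decode())
EOF
```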
Configure SMTP in values.yaml
Update the Helm values file with your SMTP settings:
management:
  smtp:
    host: "email-smtp.us-east-1.amazonaws.com"
    port: "587"
    username: "<SMTP_USERNAME>"
    password: "<SMTP_PASSWORD>"
    fromAddress: "your-email@example.com"
    fromName: "Zilla Platform"
Info
Sandbox Mode: New SES accounts can only send to verified email addresses.
Use the same verified email for fromAddress and recipient addresses during testing.
Request production access to remove this restriction.
Deploy the Platform
Create Namespace and Secrets
Create a namespace and a shared JWT secret used by Management and Control:
kubectl create namespace zilla-platform
kubectl create secret generic zilla-platform-jwt \
-n zilla-platform \
--from-file=publicKey=jwt.pub \
--from-file=privateKey=jwt.key
Install the Umbrella Chart
Deploy all platform components with a single Helm release:
helm install zilla-platform oci://ghcr.io/aklivity/charts/zilla-platform-umbrella \
-n zilla-platform \
-f values.yaml \
--set zilla-platform-console.domain="${DOMAIN}" \
--set zilla-platform-console.ingress.certificateArn="${ACM_CERTIFICATE_ARN}" \
--set zilla-platform-management.domain="${DOMAIN}" \
--set zilla-platform-management.licenseKey="${ZILLA_PLATFORM_LICENSE_KEY}" \
--set zilla-platform-management.jwt.issuer="http://zilla-platform-control:8081" \
--set zilla-platform-management.ingress.certificateArn="${ACM_CERTIFICATE_ARN}" \
--set zilla-platform-control.domain="${DOMAIN}" \
--set zilla-platform-control.licenseKey="${ZILLA_PLATFORM_LICENSE_KEY}" \
--set zilla-platform-control.jwt.issuer="http://zilla-platform-control:8081" \
--set zilla-platform-control.ingress.certificateArn="${ACM_CERTIFICATE_ARN}" \
--set zilla-platform-control.gatewayIngress.certificateArn="${ACM_CERTIFICATE_ARN}" \
--wait --timeout 15m
Verify Deployment
Wait for all pods to become ready:
kubectl wait \
--namespace zilla-platform \
--for=condition=ready pod \
--all \
--timeout=600s
Configure DNS
Get Ingress Addresses
Retrieve the ALB addresses for each ingress.
NAMESPACE=zilla-platform
kubectl get ingress zilla-console -n $NAMESPACE -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
kubectl get ingress zilla-management -n $NAMESPACE -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
kubectl get ingress zilla-control -n $NAMESPACE -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
kubectl get ingress zilla-platform -n $NAMESPACE -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
Configure DNS Records
Private Hosted Zone
The control and platform ingresses are exposed through internal ALBs (scheme: internal), so they are only reachable from within the VPC.
To make control.<domain> and platform.<domain> resolvable for pods and services inside the cluster, create (or reuse) a Route 53 Private Hosted Zone and associate it with the EKS VPC.
Step 1: Get your VPC ID
VPC_ID=$(aws eks describe-cluster \
--name ${CLUSTER_NAME} \
--region ${AWS_REGION} \
--query 'cluster.resourcesVpcConfig.vpcId' \
--output text)
Step 2: Create or Associate Private Hosted Zone
aws route53 create-hosted-zone \
--name ${DOMAIN} \
--caller-reference "zilla-platform-$(date +%s)" \
--hosted-zone-config PrivateZone=true \
--vpc VPCRegion=${AWS_REGION},VPCId=${VPC_ID}
If you already have a private hosted zone, find it and associate it with your EKS VPC:
aws route53 list-hosted-zones \
--query 'HostedZones[?Config.PrivateZone==`true`].[Id,Name]' \
--output table
export HOSTED_ZONE_ID="<ZONE_ID>"
Example: export HOSTED_ZONE_ID="Z0123456789ABCDEFGHIJ"
aws route53 associate-vpc-with-hosted-zone \
--hosted-zone-id ${HOSTED_ZONE_ID} \
--vpc VPCRegion=${AWS_REGION},VPCId=${VPC_ID}
Step 3: Create private DNS records
The platform uses two types of ALBs:
- Public ALBs: console and management, accessible from the internet
- Internal ALBs: control and platform, only accessible within the VPC
Step 3a: Get ALB Hostnames
export DOMAIN="<DOMAIN>"
# Public ALBs (console, management)
CONSOLE_ALB=$(kubectl get ingress zilla-console -n zilla-platform -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
MANAGEMENT_ALB=$(kubectl get ingress zilla-management -n zilla-platform -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Internal ALBs (control, platform)
CONTROL_ALB=$(kubectl get ingress zilla-control -n zilla-platform -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
PLATFORM_ALB=$(kubectl get ingress zilla-platform -n zilla-platform -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Console ALB: $CONSOLE_ALB"
echo "Management ALB: $MANAGEMENT_ALB"
echo "Control ALB: $CONTROL_ALB"
echo "Platform ALB: $PLATFORM_ALB"Step 3b: Create Private DNS Records (control, platform)
These records go in your private hosted zone:
# Get private hosted zone ID
export PRIVATE_HOSTED_ZONE_ID=$(aws route53 list-hosted-zones --query "HostedZones[?Name=='${DOMAIN}.' && Config.PrivateZone==\`true\`].Id" --output text | sed 's|/hostedzone/||')
echo "Private Hosted Zone ID: $PRIVATE_HOSTED_ZONE_ID"
# Create private CNAME records
aws route53 change-resource-record-sets --hosted-zone-id ${PRIVATE_HOSTED_ZONE_ID} --change-batch '{
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "control.'${DOMAIN}'",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{"Value": "'${CONTROL_ALB}'"}]
}
},
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "platform.'${DOMAIN}'",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{"Value": "'${PLATFORM_ALB}'"}]
}
}
]
}'
Note
Without the private hosted zone associated with your EKS VPC, internal services (control & platform) will not be resolvable, and the platform components will fail to communicate with each other.
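To confirm in-cluster resolution after the records propagate, you can run a short-lived test pod; the pod name and busybox image are arbitrary choices for this check:

```bash
# Sketch only: resolve the internal control hostname from inside the cluster.
kubectl run dns-check -n zilla-platform --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup control.${DOMAIN}
```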
Public DNS
Create CNAME records for internet-facing services:
| Record | Type | Value |
|---|---|---|
| console.dev.aklivity.io | CNAME | (console ingress ALB hostname) |
| management.dev.aklivity.io | CNAME | (management ingress ALB hostname) |
export DOMAIN="<DOMAIN>"
# Get ALB hostnames
CONSOLE_ALB=$(kubectl get ingress zilla-console -n zilla-platform -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
MANAGEMENT_ALB=$(kubectl get ingress zilla-management -n zilla-platform -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Resolve ALB hostnames to IP addresses
CONSOLE_IP=$(dig +short ${CONSOLE_ALB} | grep -E '^[0-9]+\.' | head -1)
MANAGEMENT_IP=$(dig +short ${MANAGEMENT_ALB} | grep -E '^[0-9]+\.' | head -1)
Create Public DNS Records (console, management)
Create DNS records in your public hosted zone so the Console and Management endpoints are reachable.
# Get public hosted zone ID
export PUBLIC_HOSTED_ZONE_ID=$(aws route53 list-hosted-zones --query "HostedZones[?Name=='${DOMAIN}.' && Config.PrivateZone==\`false\`].Id" --output text | sed 's|/hostedzone/||')
echo "Public Hosted Zone ID: $PUBLIC_HOSTED_ZONE_ID"
# Create public CNAME records
aws route53 change-resource-record-sets --hosted-zone-id ${PUBLIC_HOSTED_ZONE_ID} --change-batch '{
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "console.'${DOMAIN}'",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{"Value": "'${CONSOLE_ALB}'"}]
}
},
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "management.'${DOMAIN}'",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{"Value": "'${MANAGEMENT_ALB}'"}]
}
}
]
}'
Local Testing with /etc/hosts
As an alternative to configuring public DNS, you may temporarily map the Console and Management hostnames on your local machine (/etc/hosts) to validate that the endpoints are reachable.
This is optional and intended only for quick verification during setup.
sudo sed -i '' "/console.${DOMAIN}/d; /management.${DOMAIN}/d" /etc/hosts
echo -e "${CONSOLE_IP} console.${DOMAIN}\n${MANAGEMENT_IP} management.${DOMAIN}" | sudo tee -a /etc/hostsVerify Deployment
Check All Pods
kubectl get pods -n zilla-platform
Check Ingresses
kubectl get ingress -A
Access the Console
Open your browser and navigate to:
https://console.<DOMAIN>
Get started with one-time administrator registration and access the Zilla Platform.
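If the page does not load, you can sanity-check that the endpoint responds over HTTPS from your machine; the exact status code returned depends on the Console's redirect behavior:

```bash
# Sketch only: print the HTTP status code returned by the Console endpoint.
curl -s -o /dev/null -w '%{http_code}\n' https://console.${DOMAIN}
```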
Create an Environment & Deploy Gateway
After completing your admin onboarding, the next step is to create an Environment and attach your Gateway & Kafka Cluster.
Follow the guide here to create an Environment and generate the Bootstrap token required to attach the Gateway to the Platform.
Info
Creating an environment generates the ZILLA_PLATFORM_BOOTSTRAP_TOKEN, which is later used to launch and attach the Zilla Platform Gateway.
Deploy the Gateway
Deploy the Zilla Platform Gateway by following the Gateway deployment guide.
Consume an API Product
To validate access to deployed API Products, follow the steps in the Consume an API Product guide.
Security
For production use, additional security configuration is recommended.
Zilla Platform includes Istio mTLS by default for internal service communication.
Production environments should also consider network policies, external databases, authenticated Kafka access, and secure handling of bootstrap tokens.
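As an illustration of the network-policy item only, a default-deny ingress policy scoped to the platform namespace could serve as a starting point; you would add explicit allow rules for ALB target traffic and inter-component communication before using anything like this in production:

```bash
# Sketch only: deny all inbound traffic to pods in the namespace until allow rules are added.
kubectl apply -n zilla-platform -f - << 'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:
    - Ingress       # no ingress rules listed, so all inbound traffic is denied
EOF
```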
Check out this guide for detailed steps.
Uninstall
helm uninstall zilla-platform -n zilla-platform
kubectl delete namespace zilla-platform
Resources
Troubleshooting Guide: Common issues and debugging steps.