Quickstart
The Zilla Platform Quickstart allows you to spin up a complete Zilla Platform locally in just a few minutes.
This Docker Compose bundle sets up the management console, control plane, and supporting services so you can explore the Zilla Platform end-to-end.
A key capability demonstrated in this Quickstart is event validation using an AsyncAPI specification, allowing the Zilla Gateway to validate messages without relying on a Schema Registry.
Prerequisites
- Docker Engine 24.0+
- Docker Compose plugin version 2.34.0 or later
- At least 4 vCPUs and 4 GB RAM available for containers
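You can confirm your installed versions from a terminal:

docker --version
docker compose version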
License Information
The Zilla Platform Quickstart comes with a preconfigured trial license, so you can get started without any additional license setup.
Info
For all other deployment scenarios, a valid license is required.
Please contact Aklivity Support to obtain and configure a license for your environment.
Start the Zilla Platform
This Quickstart sets up the Zilla Platform with all dependencies required to manage and explore the system locally.
Start the Zilla Platform stack using the published Docker Compose bundle:
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart up --wait

Once the stack is ready, open the Zilla Platform Management Console in your browser.
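If you want to verify that all services reached a healthy state before continuing, recent Docker Compose releases (2.34.0+, as required above) accept the same OCI reference for status commands; a quick check might look like:

docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart ps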
Admin onboarding
Get started with Admin onboarding.
The Zilla Platform provides a streamlined, one-time admin registration to set up your organization and environment.
Setup Data Plane
After completing your admin onboarding, the next step is to create an Environment and attach your Gateway, Kafka Cluster, and Schema Registry.
Create Docker Compose File
Create a Docker Compose file named compose.env.local.yaml with the following content:
name: quickstart-env
include:
  - oci://ghcr.io/aklivity/zilla-platform/quickstart/env

Info
This configuration pulls in the official Zilla Platform Quickstart environment from the GitHub Container Registry, bundling the Gateway, Kafka, and Schema Registry.
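To preview which services the include resolves to before starting anything, you can render the merged configuration (the exact service names depend on the published bundle):

docker compose -f compose.env.local.yaml config --services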
Start the Environment
Export the bootstrap token and start the environment:
export ZILLA_PLATFORM_BOOTSTRAP_TOKEN=$(docker run --rm -v quickstart_bootstrap-data:/bootstrap alpine:3.20 cat /bootstrap/token.txt)

docker compose -f compose.env.local.yaml up --wait

You should now have a TLS-enabled Zilla Platform Gateway running with the generated certificates.
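If the environment fails to attach the Gateway, first confirm the bootstrap token was actually captured. A minimal sanity check (it fails fast if the variable is unset or empty, and prints only a byte count rather than the secret itself):

echo "${ZILLA_PLATFORM_BOOTSTRAP_TOKEN:?bootstrap token is empty}" | wc -c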
Info
The Quickstart includes a pre-created Environment. The Zilla Platform Gateway is automatically attached using the ZILLA_PLATFORM_BOOTSTRAP_TOKEN.
Build First API Product
With your platform running and your environment and Gateway ready, you can now create your first API Product.
This Quickstart guides you through the essential steps to make your API Product available for consumption in minutes.
Extract and Create a Spec
Extract an API specification from your Kafka topics to define your API structure and message schema.
You’ll select the Quickstart Environment, choose the Kafka service, and select the orders.created topic to generate your API spec.
Refer to the Extract Specifications guide for step-by-step details.
Create an API Product
Next, create an API Product within the same Catalog.
Refer to the API Product guide for step-by-step details.
Name the API Product:
Orders API

Note
You must name the API Product Orders API, as that matches the hostname alias configured in the Quickstart Gateway container.
This ensures the Gateway can resolve and route requests for this API Product correctly.
You’ll select the workspace, project, and spec, then configure server details (specify your Gateway information) and associate one or more plans.
When configuring your server, you’ll need to define connection details for the Kafka Bootstrap Server.
Kafka Bootstrap Server:
staging.platform.net:9094

Note
The Schema Registry URL is omitted to demonstrate AsyncAPI-based event validation.
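Optionally, you can confirm that the Gateway hostname alias resolves from inside the Compose network by running a lookup through the bundled kafka-init container (a sketch that follows the same run pattern as the client commands later in this guide):

docker compose -f compose.env.local.yaml run --rm kafka-init 'getent hosts staging.platform.net'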
Deploy the API Product
Once the API Product is defined, deploy it using one of the servers specified during creation. This makes your API Product available for consumption.
You’ll select a server and initiate the deployment to publish your API Product.
Refer to the API Product Deploy guide for step-by-step details.
Create an Application
Finally, create an Application to generate an API Key and Secret Key required to access your deployed API Product.
You’ll name your application, associate it with an API Product, and retrieve credentials (API Key & Secret Key) for API access.
Refer to the Application guide for step-by-step details.
Consume API Product
Once your API Product is deployed, you can interact with it using a Kafka client to produce and consume events securely.
Kafka Client Properties
Export API Key & Secret Key:
export ACCESS_KEY="<API_KEY>"
export SECRET_KEY="<SECRET_KEY>"

Info
You can find the Kafka Bootstrap server under:
APIs & Apps → Applications → [application] → Connection Guide → Connection Endpoints
You can find credentials under:
APIs & Apps → Applications → [application] → Connection Guide → Credentials → Kafka Client Properties
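For reference, the inline producer and consumer flags used below correspond to the following standard Kafka client settings in client.properties form (a sketch; the truststore path and password match the Quickstart's generated certificates):

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<API_KEY>" password="<SECRET_KEY>";
ssl.truststore.location=/etc/tls/client/trust.jks
ssl.truststore.password=generated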
Producing and Consuming Events
Use the following commands to produce and consume events from your API Product.

Valid Event
echo '{"orderId":"test-123","status":"created","timestamp":1234567890000}' | \
docker compose \
-f compose.env.local.yaml \
run --rm kafka-init \
'/opt/bitnami/kafka/bin/kafka-console-producer.sh \
--bootstrap-server orders-api-v0.staging.platform.net:9094 \
--topic orders.created \
--producer-property security.protocol=SASL_SSL \
--producer-property sasl.mechanism=PLAIN \
--producer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"'"${ACCESS_KEY}"'\" password=\"'"${SECRET_KEY}"'\";" \
--producer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--producer-property ssl.truststore.password=generated'

docker compose \
-f compose.env.local.yaml \
run --rm kafka-init \
'/opt/bitnami/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server orders-api-v0.staging.platform.net:9094 \
--topic orders.created \
--from-beginning \
--consumer-property security.protocol=SASL_SSL \
--consumer-property sasl.mechanism=PLAIN \
--consumer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"'"${ACCESS_KEY}"'\" password=\"'"${SECRET_KEY}"'\";" \
--consumer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--consumer-property ssl.truststore.password=generated'

Invalid Event

echo '{"orderId":"test-123","status":"created"}' | \
docker compose \
-f compose.env.local.yaml \
run --rm kafka-init \
'/opt/bitnami/kafka/bin/kafka-console-producer.sh \
--bootstrap-server orders-api-v0.staging.platform.net:9094 \
--topic orders.created \
--producer-property security.protocol=SASL_SSL \
--producer-property sasl.mechanism=PLAIN \
--producer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"'"${ACCESS_KEY}"'\" password=\"'"${SECRET_KEY}"'\";" \
--producer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--producer-property ssl.truststore.password=generated'

Because the timestamp field is required by the AsyncAPI specification, this event fails validation.
The Zilla Gateway validates the event against the AsyncAPI spec at the edge and rejects it before it reaches Kafka, even without consulting a Schema Registry.
Expected Output
org.apache.kafka.common.InvalidRecordException: This record has failed the validation on broker and hence will be rejected.

Even though this Quickstart includes a Schema Registry, it is not required for validation.
Zilla can enforce message correctness directly from the schema defined in the AsyncAPI specification.
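For illustration, the part of the extracted AsyncAPI document that drives this check might declare the payload schema along these lines (a hypothetical sketch; the spec actually generated from your topic may differ in structure and field types):

# Hypothetical AsyncAPI excerpt (illustrative only)
channels:
  orders.created:
    messages:
      orderCreated:
        payload:
          type: object
          properties:
            orderId:
              type: string
            status:
              type: string
            timestamp:
              type: integer
          required:
            - orderId
            - status
            - timestamp   # omitting this field is what makes the invalid event fail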
Stop the Stack
To stop the Quickstart environment:

docker compose -f compose.env.local.yaml down

To stop the Zilla Platform but keep all data:

docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart down

To wipe all persisted data and start fresh:

docker compose -f compose.env.local.yaml down --volumes --remove-orphans
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart down --volumes --remove-orphans