Quickstart
Get the Zilla Platform running locally and publish your first governed API Product in under 15 minutes.
This Docker Compose bundle boots the management console, control plane, and all supporting services so you can explore the full platform end-to-end. No cloud account or license required.
What you'll build: an Orders API product backed by a Kafka topic, secured with API key authentication, and validated against an AsyncAPI specification, all without a Schema Registry.
Prerequisites
- Docker Engine 24.0+ with the Docker Compose plugin 2.34.0+
- At least 4 vCPUs and 4 GB RAM allocated to Docker
License
The Quickstart ships with a preconfigured trial license. No setup needed.
Info
All other deployment environments require a valid license. Contact Aklivity Support to obtain one.
Start the Zilla Platform
Pull and start the full platform stack — management console, control plane, Gateway, Kafka, and Schema Registry:
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart up --wait && \
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart/env up --wait
Once the stack is ready, open the Zilla Platform Management Console in your browser.
Info
The Quickstart Environment is pre-created. The Gateway auto-registers via ZILLA_PLATFORM_BOOTSTRAP_TOKEN.
You now have a TLS-enabled Zilla Platform Gateway running with auto-generated certificates.
Admin Setup
The first time you open the console you'll go through a one-time admin registration to create your organization and initial environment.
Refer to the Admin Onboarding guide for step-by-step details.
Publish an API Product
With the platform and data plane running, you can design, deploy, and secure your first API Product.
Extract a spec
Extract a specification from your Kafka topics to define the API structure and message schema.
Select the Quickstart Environment, choose the Kafka service, and select the topic orders.created to generate your API spec.
Refer to the Extract Specifications guide for step-by-step details.
Create the API product
Create an API Product inside the same Catalog.
Name it exactly:
Orders API
Note
The name Orders API must match exactly. It maps to the hostname alias pre-configured in the Quickstart Gateway container, which the Gateway uses to resolve and route requests to the correct API Product.
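The exact name-to-alias mapping is internal to the Gateway, but as an illustration of why the name must match exactly, a hypothetical sketch of a slug-style mapping (the `product_alias` helper and its logic are assumptions for illustration only; the resulting hostname matches the bootstrap server used later in this guide):

```python
# Hypothetical sketch: how a product name like "Orders API" could map to a
# hostname alias such as "orders-api-v0.staging.platform.net". The real
# mapping is internal to the Gateway; this only illustrates why the product
# name must match exactly for requests to route correctly.
import re

def product_alias(name: str, version: int, domain: str) -> str:
    """Slugify the product name and prepend it to the environment domain."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return f"{slug}-v{version}.{domain}"

print(product_alias("Orders API", 0, "staging.platform.net"))
# -> orders-api-v0.staging.platform.net
```

A different name would yield a different alias, which the Quickstart Gateway has no route for.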
When configuring the server, select your Gateway and set the Kafka Bootstrap Server to:
staging.platform.net:9094
Note
Leave the Schema Registry URL blank. This demonstrates AsyncAPI-based event validation, where the Zilla Gateway validates messages directly against the spec with no Schema Registry required.
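The Gateway derives this validation from the AsyncAPI spec itself. As a rough local sketch of the same kind of check (the field names come from the order event used later in this guide; the validation logic here is a simplification, not the Gateway's implementation):

```python
# Illustrative sketch of the spec-based check the Gateway applies at the
# edge: a produced message must carry the required fields declared in the
# AsyncAPI message schema, with no Schema Registry involved. Field names
# match the order event used in this guide; the logic is a simplification.
import json

REQUIRED = {"orderId": str, "status": str, "timestamp": int}

def validate_order(raw: bytes) -> bool:
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return all(isinstance(event.get(k), t) for k, t in REQUIRED.items())

valid = b'{"orderId":"test-123","status":"created","timestamp":1234567890000}'
invalid = b'{"orderId":"test-123","status":"created"}'  # missing timestamp

print(validate_order(valid))    # True  -> forwarded to Kafka
print(validate_order(invalid))  # False -> rejected at the edge
```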
Refer to the API Product guide for full configuration details.
Deploy
Select a server and deploy the API Product to make it available for consumption.
Refer to the API Product Deploy guide for step-by-step details.
Consume the API product
Subscribe
Create a Subscription for your Application to generate the credentials needed to access the deployed API Product.
Refer to the Subscriptions guide for step-by-step details.
Export credentials
Once the Subscription is created, export the API Key and Secret Key:
export ACCESS_KEY="<API_KEY>"
export SECRET_KEY="<SECRET_KEY>"
Info
Find your connection details in the console under:
APIs & Apps → Applications → [application] → [Subscription] → Connection Guide
- Bootstrap server: Connection Endpoints tab
- Credentials: Credentials → Kafka Client Properties tab
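These credentials plug into standard Kafka client properties. A minimal sketch that assembles the same SASL_SSL settings used by the console producer and consumer commands below, reading the keys exported above (property names are standard Kafka client settings; the endpoint and truststore values are the ones from this Quickstart):

```python
# Sketch: build the SASL_SSL client properties for the deployed Orders API
# from the exported keys. Property names are standard Kafka client settings;
# the bootstrap server and truststore values come from this Quickstart.
import os

access_key = os.environ.get("ACCESS_KEY", "<API_KEY>")
secret_key = os.environ.get("SECRET_KEY", "<SECRET_KEY>")

props = {
    "bootstrap.servers": "orders-api-v0.staging.platform.net:9094",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.jaas.config": (
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        f'username="{access_key}" password="{secret_key}";'
    ),
    "ssl.truststore.location": "/etc/tls/client/trust.jks",
    "ssl.truststore.password": "generated",
}

for key, value in props.items():
    print(f"{key}={value}")
```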
Produce and consume
Use the tabs below to produce events, consume them, and observe schema validation in action.
Send a valid order event to the orders.created topic:
echo '{"orderId":"test-123","status":"created","timestamp":1234567890000}' | \
docker compose \
-f oci://ghcr.io/aklivity/zilla-platform/quickstart/env \
run --rm kafka-init \
'/opt/bitnami/kafka/bin/kafka-console-producer.sh \
--bootstrap-server orders-api-v0.staging.platform.net:9094 \
--topic orders.created \
--producer-property security.protocol=SASL_SSL \
--producer-property sasl.mechanism=PLAIN \
--producer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"'"${ACCESS_KEY}"'\" password=\"'"${SECRET_KEY}"'\";" \
--producer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--producer-property ssl.truststore.password=generated'
Read events from the orders.created topic from the beginning:
docker compose \
-f oci://ghcr.io/aklivity/zilla-platform/quickstart/env \
run --rm kafka-init \
'/opt/bitnami/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server orders-api-v0.staging.platform.net:9094 \
--topic orders.created \
--from-beginning \
--consumer-property security.protocol=SASL_SSL \
--consumer-property sasl.mechanism=PLAIN \
--consumer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"'"${ACCESS_KEY}"'\" password=\"'"${SECRET_KEY}"'\";" \
--consumer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--consumer-property ssl.truststore.password=generated'
This event is missing the required timestamp field. The Zilla Gateway rejects it at the edge before it reaches Kafka, with no Schema Registry involved.
echo '{"orderId":"test-123","status":"created"}' | \
docker compose \
-f oci://ghcr.io/aklivity/zilla-platform/quickstart/env \
run --rm kafka-init \
'/opt/bitnami/kafka/bin/kafka-console-producer.sh \
--bootstrap-server orders-api-v0.staging.platform.net:9094 \
--topic orders.created \
--producer-property security.protocol=SASL_SSL \
--producer-property sasl.mechanism=PLAIN \
--producer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"'"${ACCESS_KEY}"'\" password=\"'"${SECRET_KEY}"'\";" \
--producer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--producer-property ssl.truststore.password=generated'
Expected error:
org.apache.kafka.common.InvalidRecordException: This record has failed the validation on broker and hence will be rejected.
Stop the stack
Choose the appropriate command based on what you want to preserve.
Stop the data plane environment only (keeps platform data):
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart/env down
Stop the Zilla Platform (keeps all persisted data for next time):
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart down
Wipe everything and start fresh (removes all volumes):
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart/env down --volumes --remove-orphans && \
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart down --volumes --remove-orphans