Quickstart
The Zilla Platform Quickstart allows you to spin up a complete Zilla Platform locally in just a few minutes.
This Docker Compose bundle sets up the management console, control plane, and supporting services so you can explore the Zilla Platform end-to-end.
Prerequisites
- Docker Engine 24.0+
- Docker Compose plugin version 2.34.0 or later (see the version check below)
- At least 4 vCPUs and 4 GB RAM available for containers
- A valid Zilla Platform License
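To confirm the Docker prerequisites are met, check the installed versions with the standard Docker CLI:

```bash
# Docker Engine version (should be 24.0 or newer)
docker --version

# Docker Compose plugin version (should be 2.34.0 or newer,
# as required for the oci:// compose references used in this quickstart)
docker compose version
```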
Get a License
Use this link to request a license key.
Our team will reach out with the appropriate license and activation instructions.
Set the License Key
Pass the license to the stack using the ZILLA_PLATFORM_LICENSE_KEY environment variable; every container reads the same license key from it.
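Under the hood, each Compose service simply forwards this variable from your shell. A minimal sketch, with a hypothetical service name rather than the actual quickstart definition:

```yaml
# Sketch: forwarding the license key from the host shell into a container.
# The service name "zilla-console" is hypothetical.
services:
  zilla-console:
    environment:
      ZILLA_PLATFORM_LICENSE_KEY: ${ZILLA_PLATFORM_LICENSE_KEY}
```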
If the Zilla Platform or Gateway detects an invalid or missing license key, the service displays the following error message on startup and halts:

```
License is invalid, contact support@aklivity.io to request a new license
```

Start the Zilla Platform
Set the License Key
Set your Zilla Platform license key using the ZILLA_PLATFORM_LICENSE_KEY environment variable.
```bash
export ZILLA_PLATFORM_LICENSE_KEY=<license>
```

Launch the Quickstart
This Quickstart sets up the Zilla Platform with all dependencies required to manage and explore the system locally.
Start the Zilla Platform stack using the published Docker Compose bundle:
```bash
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart up --wait
```

Once the stack is ready, open the Zilla Platform Management Console in your browser.
Admin onboarding
Get started with Admin onboarding.
The Zilla Platform provides a streamlined, one-time admin registration to set up your organization and environment.
Set Up Data Plane
After completing your admin onboarding, the next step is to create an Environment and attach your Gateway, Kafka Cluster, and Schema Registry.
Follow the guide here to create an Environment and generate the Bootstrap token required to attach a Gateway to the Platform.
Info
Creating an environment generates the ZILLA_PLATFORM_BOOTSTRAP_TOKEN, which is later used to launch and attach the Zilla Platform Gateway.
Use Quickstart Environment
The Quickstart Environment is a ready-to-run sandbox that bundles:
- A Zilla Platform Gateway
- A Kafka Cluster
- A Schema Registry
Create Docker Compose File
Create a Docker Compose file named compose.env.local.yaml with the following content:
```yaml
name: quickstart-env
include:
  - oci://ghcr.io/aklivity/zilla-platform/quickstart/env
```

This configuration references the official Zilla Quickstart environment from GitHub Container Registry.
Start the Environment
Before starting the environment, export your license and bootstrap credentials:
```bash
export ZILLA_PLATFORM_LICENSE_KEY=<your-license-key>
export ZILLA_PLATFORM_BOOTSTRAP_TOKEN=<your-bootstrap-token>
```

Start the environment and wait for all services to become ready:

```bash
docker compose -f compose.env.local.yaml up --wait
```

You should now have a TLS-enabled Zilla Platform Gateway running with the generated certificates.
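To confirm all services came up healthy, you can list them with Docker Compose:

```bash
# Every quickstart service should report a running (healthy) state
docker compose -f compose.env.local.yaml ps
```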
Attach Kafka Cluster to Zilla Platform
Once the Quickstart Environment is up and running, the next step is to attach your Kafka Cluster to the Zilla Platform.
Add a Service: Under the environment created in the previous step, add a new Kafka service.
Enter Kafka Cluster Details: Provide the connection details for your Kafka cluster:

```
kafka.env.data.plane.net:9092
```

Add Schema Registry: Provide the Schema Registry information:

```
http://schema-registry.env.data.plane.net:8081
```
For detailed configuration and additional options, see the Service Setup Guide.
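As an optional sanity check, you can list the cluster's topics through the bundled kafka-tools service. This is a sketch: it assumes the kafka-tools image includes the standard kafka-topics CLI and that the internal listener above accepts unauthenticated connections from inside the quickstart network:

```bash
# List topics on the quickstart Kafka cluster from inside the kafka-tools container
docker compose -f compose.env.local.yaml run --rm kafka-tools \
  kafka-topics --bootstrap-server kafka.env.data.plane.net:9092 --list
```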
Build First API Product
With your platform up and running, and your environment and gateways ready, you can now create your first API Product.
This quickstart will guide you through the essential steps to make your API product available for consumption in minutes.
Create a Workspace
Start by creating a Workspace to organize your API projects and related assets.
You’ll be prompted to provide a name and description to create a new workspace.
Refer to the Workspaces guide for step-by-step details.
Create a Project
Within your workspace, create a Project to group specifications.
You’ll specify a project name and select the desired protocol (for example, Kafka) during setup.
Refer to the Projects guide for step-by-step details.
Extract and Create a Spec
Extract an API specification from your Kafka topics to define your API structure and message schema.
You’ll select an environment, choose the Kafka service, and specify the topics & schemas to generate your API spec.
Refer to the Extract Specifications guide for step-by-step details.
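For orientation, here is a simplified sketch of the kind of spec an extraction might produce for an orders.created topic; the structure and field names below are illustrative, not the actual generated output:

```yaml
# Illustrative AsyncAPI-style extract for the orders.created topic (not actual output)
asyncapi: 3.0.0
info:
  title: Orders API
  version: 1.0.0
channels:
  orders.created:
    address: orders.created
    messages:
      orderCreated:
        payload:
          type: object
          required: [orderId, status, timestamp]
          properties:
            orderId: { type: string }
            status: { type: string }
            timestamp: { type: integer }
```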
Create an API Catalog
Create an API Catalog to organize and manage your API Products and Plans.
You’ll provide a name and description to create a new catalog.
Refer to the API Catalog guide for step-by-step details.
Create a Plan
Under your Catalog, create a Plan that defines security & rate-limit policies for consumers.
You can configure rate limiting (for example, 1000 MB/s or a custom limit) and choose a security method such as API Key.
Refer to the Plans guide for step-by-step details.
Create an API Product
Next, create an API Product within the same Catalog.
Refer to the API Product guide for step-by-step details.
Name the API Product Orders API.
Note
The API Product must be named Orders API, as that name matches the hostname alias configured in the Quickstart Gateway container.
This ensures the Gateway can resolve and route requests for this API Product correctly.
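One way such a hostname alias can be wired up is a Compose network alias; the sketch below is purely illustrative (service name and alias invented), not the actual quickstart gateway definition:

```yaml
# Sketch: a Compose network alias lets the gateway container answer for the
# API Product's hostname. Service name and alias here are hypothetical.
services:
  gateway:
    networks:
      default:
        aliases:
          - orders-api.env.data.plane.net
```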
You’ll select the workspace, project, and spec, then configure server details (specify your Gateway information) and associate one or more plans.
When configuring your server, you’ll need to define connection details for the Kafka Bootstrap Server and, optionally, the Schema Registry URL.
Example:
```
Kafka Bootstrap Server: {server}.staging.platform.net:9094
Schema Registry URL: https://{server}.staging.platform.net:8081
```

Deploy the API Product
Once the API Product is defined, deploy it using one of the servers specified during creation. This makes your API Product available for consumption.
You’ll select a server and initiate the deployment to publish your API Product.
Refer to the API Product Deploy guide for step-by-step details.
Create an Application
Finally, create an Application to generate an API Key and Secret Key required to access your deployed API Product.
You’ll name your application, associate it with an API Product, and retrieve credentials (API Key & Secret Key) for API access.
Refer to the Application guide for step-by-step details.
Consume API Product
Once your API Product is deployed, you can interact with it using a Kafka client to produce and consume events securely. This section walks you through setting up a Kafka client, connecting with the credentials generated from your application, and verifying message flow through the Zilla Platform.
Kafka Client Properties
You can find TLS & SASL information under:
APIs & Apps → Applications → [application] → Connection Guide → Credentials → Kafka Client Properties
This configuration enables secure TLS communication and authenticates the client using your API key and secret key.
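The downloaded file typically takes a shape like the following; this is a sketch assuming SASL/PLAIN with the API key as the username and the secret key as the password (the file generated by the console is authoritative):

```properties
# Sketch of Kafka client properties: TLS transport plus SASL/PLAIN authentication
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" \
  password="<SECRET_KEY>";
```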
You can find Kafka Bootstrap server & Schema Registry URL under:
APIs & Apps → Applications → [application] → Connection Guide → Connection Endpoints
Producing and Consuming Events
Export the API Key & Secret Key:

```bash
export ACCESS_KEY="<API_KEY>"
export SECRET_KEY="<SECRET_KEY>"
```

Export the Kafka bootstrap server & Schema Registry URL:

```bash
export KAFKA_BOOTSTRAP_SERVER="<KAFKA_BOOTSTRAP_SERVER>"
export SCHEMA_REGISTRY_URL="<SCHEMA_REGISTRY_URL>"
```

Use the following commands to produce and consume events from your API Product.
echo '{"orderId":"test-123","status":"created","timestamp":1234567890000}' | \
docker compose \
-f compose.env.local.yaml \
run --rm kafka-tools \
kafka-json-schema-console-producer \
--bootstrap-server ${KAFKA_BOOTSTRAP_SERVER} \
--topic orders.created \
--producer-property security.protocol=SASL_SSL \
--producer-property sasl.mechanism=PLAIN \
--producer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"${ACCESS_KEY}\" password=\"${SECRET_KEY}\";" \
--producer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--producer-property ssl.truststore.password=generated \
--property schema.registry.url=${SCHEMA_REGISTRY_URL} \
--property schema.registry.ssl.truststore.location=/etc/tls/client/trust.jks \
--property schema.registry.ssl.truststore.password=generated \
--property basic.auth.credentials.source=USER_INFO \
--property schema.registry.basic.auth.user.info="${ACCESS_KEY}:${SECRET_KEY}" \
--property auto.register.schemas=false \
--property use.latest.version=true \
--property value.schema.file=/etc/schemas/orders.schema.jsondocker compose \
-f compose.env.local.yaml \
run --rm kafka-tools \
kafka-json-schema-console-consumer \
--bootstrap-server ${KAFKA_BOOTSTRAP_SERVER} \
--topic orders.created \
--from-beginning \
--consumer-property security.protocol=SASL_SSL \
--consumer-property sasl.mechanism=PLAIN \
--consumer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"${ACCESS_KEY}\" password=\"${SECRET_KEY}\";" \
--consumer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--consumer-property ssl.truststore.password=generated \
--property schema.registry.url=${SCHEMA_REGISTRY_URL} \
--property schema.registry.ssl.truststore.location=/etc/tls/client/trust.jks \
--property schema.registry.ssl.truststore.password=generated \
--property basic.auth.credentials.source=USER_INFO \
--property schema.registry.basic.auth.user.info="${ACCESS_KEY}:${SECRET_KEY}"Produce Invalid Events
echo '{"orderId":"test-123","status":"created"}' | \
docker compose \
-f compose.env.local.yaml \
run --rm kafka-init \
'/opt/bitnami/kafka/bin/kafka-console-producer.sh \
--bootstrap-server ${KAFKA_BOOTSTRAP_SERVER} \
--topic orders.created \
--producer-property security.protocol=SASL_SSL \
--producer-property sasl.mechanism=PLAIN \
--producer-property sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username=\"'"${ACCESS_KEY}"'\" password=\"'"${SECRET_KEY}"'\";" \
--producer-property ssl.truststore.location=/etc/tls/client/trust.jks \
--producer-property ssl.truststore.password=generated'Since the timestamp field is missing, this event does not comply with the registered schema and will be rejected by the Zilla Gateway before reaching Kafka.
Expected Output
```
org.apache.kafka.common.InvalidRecordException: This record has failed the validation on broker and hence will be rejected.
```

Stop the Stack
To stop the Quickstart environment:

```bash
docker compose -f compose.env.local.yaml down
```

To stop the Zilla Platform but keep all data:

```bash
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart down
```

To wipe all persisted data and start fresh:

```bash
docker compose -f oci://ghcr.io/aklivity/zilla-platform/quickstart down --volumes --remove-orphans
```