Gravitee and Kafka

Gravitee is one of the first API Management platforms to support not only synchronous REST APIs and asynchronous APIs, but also native integration with event brokers such as Kafka and MQTT. In this blog we dive into the integration between Gravitee and Kafka and create a simple API that acts as an interface for a Kafka topic.

16 Jan 2024

We did our project with Gravitee version 3.19. As of Gravitee version 4, the Kafka features are no longer available in the free version; they require an upgrade to the Enterprise Edition.

Set up Gravitee for use with Kafka

To make use of all the Kafka features that Gravitee provides, you need to install the Kafka plugins. In our case, Gravitee was installed with Docker Compose, which means the plugin folders need to be created on your local machine and configured in the docker-compose.yaml file. The exact instructions for adding the plugins directory to the APIM Docker containers can be found here:

We also added a kafka-zookeeper container to our docker-compose and ended up with the docker-compose.yaml attached, which you can use to replicate our demo. The plugins required to get Kafka working are gravitee-connector-kafka, gravitee-apim-plugin-endpoint-kafka and gravitee-endpoint-kafka-advanced. Download them and put the .zip files in the corresponding local directories that you created:

• apim-gateway/plugins

• apim-management-api/plugins
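The relevant part of our docker-compose.yaml looked roughly like the fragment below. Service names, image tags and the container-side plugin paths are assumptions based on our demo setup, so adapt them to your environment:

```yaml
services:
  gateway:
    image: graviteeio/apim-gateway:3.19
    volumes:
      # extra plugins downloaded as .zip files go here
      - ./apim-gateway/plugins:/opt/graviteeio-gateway/plugins-ext
    environment:
      # scan both the bundled plugins and the extra plugin directory
      - gravitee_plugins_path_0=/opt/graviteeio-gateway/plugins
      - gravitee_plugins_path_1=/opt/graviteeio-gateway/plugins-ext

  management_api:
    image: graviteeio/apim-management-api:3.19
    volumes:
      - ./apim-management-api/plugins:/opt/graviteeio-management-api/plugins-ext
    environment:
      - gravitee_plugins_path_0=/opt/graviteeio-management-api/plugins
      - gravitee_plugins_path_1=/opt/graviteeio-management-api/plugins-ext
```

Both containers pick up the extra plugins on restart; check the startup logs to verify the Kafka plugins were loaded.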

Create an API

Once the plugins have been installed, we can start creating the API itself. We will create a very simple API that listens on the path ‘/v1/test’ and integrates with the Kafka server in the kafka-zookeeper container.

1. Under APIs menu, select +Add Api – Continue in the Wizard

2. See screenshot below:

3. Add target backend URL (here it is the container name and the port kafka is listening on):

4. Create a Plan

Skip the Documentation step, or import a definition

5. Create the API

6. Add methods to the API

7. Set endpoints to Kafka type

8. Deploy your API:

Setup of Kafka properties

In the basic configuration of your API, you only add the Kafka server and port; other properties like the topic, partition, offset, group ID and client ID can be defined with any of the following methods.

• URL path: the last part of the path is used as the topic name, e.g. with /v1/api/topic, the topic name is topic for the API with path /v1/api

• HTTP header: use the HTTP header “x-gravitee-kafka-topic” to specify the topic

• Internal context attribute: in the design of the API you can use the attribute “gravitee.attribute.kafka.topic” to set the topic name statically for the API
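The first two methods can be exercised straight from the command line. The gateway URL, context path and topic name below are assumptions from our demo setup; the script only prints the curl commands so you can copy them against your own gateway:

```shell
#!/bin/sh
GATEWAY="http://localhost:8082"   # default APIM gateway port in our setup
TOPIC="demo"                      # Kafka topic to publish to

# Topic taken from the last URL path segment (API context path is /v1/test):
echo "curl -X POST ${GATEWAY}/v1/test/${TOPIC} -d '{\"greeting\":\"hello\"}'"

# Topic passed explicitly via the dedicated HTTP header:
echo "curl -X POST ${GATEWAY}/v1/test -H 'x-gravitee-kafka-topic: ${TOPIC}' -d '{\"greeting\":\"hello\"}'"
```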

Test the API

POST a message to Kafka

Result in the consumer-console:
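To watch the message arrive, we ran a console consumer inside the Kafka container. The container and topic names are assumptions from our compose file; the script prints the command rather than executing it, since it needs the running container:

```shell
#!/bin/sh
CONTAINER="kafka-zookeeper"   # name of the Kafka container in our compose file
TOPIC="demo"                  # topic the API publishes to

# Follow the topic from the beginning to verify the POSTed message landed:
echo "docker exec -it ${CONTAINER} kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic ${TOPIC} --from-beginning"
```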

Listen on a topic with a websocket

If you create a GET method on an API that has a Kafka backend, this endpoint becomes available as a websocket.

To test it, we use Postman: create a websocket listener there and point it to the API’s endpoint.

If you use the POST API to post a message to the specified topic, it will now be listed in the responses:
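If you prefer the command line over Postman, a generic websocket client such as websocat can play the same role. The URLs are assumptions from our demo setup; the script prints the two commands to run in separate terminals:

```shell
#!/bin/sh
WS_URL="ws://localhost:8082/v1/test"     # GET method exposed as a websocket
HTTP_URL="http://localhost:8082/v1/test" # POST method for publishing

# Terminal 1: subscribe; every message on the topic is pushed down the socket
echo "websocat ${WS_URL}"

# Terminal 2: publish via POST; the message should appear in terminal 1
echo "curl -X POST ${HTTP_URL} -d '{\"greeting\":\"hello\"}'"
```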