Spring Boot - Consume Message Through Kafka, Save into ElasticSearch, and Plot into Grafana
In this article, we will build a program that produces data with a Kafka Producer, consumes it with a Kafka Consumer, saves it into Elasticsearch, and finally plots that JSON data on a Grafana dashboard. Start by configuring all the required software and tools.
Requirements
- Java 8
- Spring Boot 2.7.0
- Kafka 3.2.0
- Elasticsearch 7.17.3
Note: Why do we mention the versions?
Version compatibility is of paramount importance with Elasticsearch and Spring Boot. If your Elasticsearch version doesn't match your Spring Boot version (or vice versa), you will run into problems configuring the two together. Below is the version compatibility matrix:
| Spring Data Elasticsearch | Elasticsearch | Spring Framework | Spring Boot |
|---|---|---|---|
| 4.4.X | 7.17.3 | 5.3.X | 2.7.X |
| 4.3.X | 7.15.2 | 5.3.X | 2.6.X |
| 4.2.X | 7.12.0 | 5.3.X | 2.5.X |
| 4.1.X | 7.9.3 | 5.3.2 | 2.4.X |
| 4.0.X | 7.6.2 | 5.2.12 | 2.3.X |
| 3.2.X | 6.8.12 | 5.2.12 | 2.2.X |
| 3.1.X | 6.2.2 | 5.1.19 | 2.1.X |
| 3.0.X | 5.5.0 | 5.0.13 | 2.0.X |
| 2.1.X | 2.4.0 | 4.3.25 | 1.5.X |
Downloads: get Kafka and Elasticsearch from the official Apache Kafka and Elastic download pages, then extract them on your system.
Create Kafka Producer Application
Create a Spring Boot project named Producer.
ProducerApplication.java
Java
package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ProducerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ProducerApplication.class, args);
    }
}
KafkaProducerConfig.java
In this class, we provide the Kafka producer configuration: the broker address and the key/value serializers.
Java
package com.example.demo.config;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

import com.example.demo.model.User;

@Configuration
public class KafkaProducerConfig {

    // Producer factory: String keys, User values serialized as JSON
    @Bean
    public ProducerFactory<String, User> userProducerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                JsonSerializer.class.getName());
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, User> userKafkaTemplate() {
        return new KafkaTemplate<>(userProducerFactory());
    }
}
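By default, Spring's JsonSerializer also embeds type information in the record headers so that a consumer can pick the target class automatically. Since our consumer below declares its target type explicitly, you can optionally turn those headers off; this one-liner inside userProducerFactory() is an optional tweak, not something the rest of the tutorial depends on:
Java
// Optional: stop JsonSerializer from writing type-info headers; safe here
// because the consumer is built with an explicit JsonDeserializer<>(User.class).
configProps.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);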
User.java
The model class that holds the user information we send over Kafka.
Java
package com.example.demo.model;

public class User {

    int id;
    String name;
    String pdate;

    public User() {
        super();
    }

    public User(int id, String name, String pdate) {
        super();
        this.id = id;
        this.name = name;
        this.pdate = pdate;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getPdate() {
        return pdate;
    }

    public void setPdate(String pdate) {
        this.pdate = pdate;
    }
}
KafkaService.java
This service class uses the KafkaTemplate to publish User objects to the topic for the consumer.
Java
package com.example.demo.service;

import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

import com.example.demo.model.User;

@Service
public class KafkaService {

    private final Logger LOG = LoggerFactory.getLogger(KafkaService.class);

    @Autowired
    private KafkaTemplate<String, User> kafkaTemplate;

    String kafkaTopic = "gfg";

    // Publish a single user to the "gfg" topic
    public void send(User user) {
        LOG.info("Sending User Json Serializer : {}", user);
        kafkaTemplate.send(kafkaTopic, user);
    }

    // Publish a list of users, one record per user
    public void sendList(List<User> userList) {
        LOG.info("Sending UserList Json Serializer : {}", userList);
        for (User user : userList) {
            kafkaTemplate.send(kafkaTopic, user);
        }
    }
}
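Note that kafkaTemplate.send() is asynchronous and returns a future immediately. If you want confirmation of delivery, spring-kafka 2.x returns a ListenableFuture you can attach a callback to; the method below is a hypothetical variant of send(), shown only as a sketch:
Java
// Optional variant of send() that logs the broker acknowledgement or the failure.
public void sendWithCallback(User user) {
    kafkaTemplate.send(kafkaTopic, user).addCallback(
            result -> LOG.info("Delivered {} to {}", user, result.getRecordMetadata()),
            ex -> LOG.error("Failed to deliver {}", user, ex));
}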
ProducerController.java
Java
package com.example.demo.controller;

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import com.example.demo.model.User;
import com.example.demo.service.KafkaService;

@RestController
public class ProducerController {

    @Autowired
    KafkaService kafkaProducer;

    // Accepts a single user as JSON and publishes it
    @PostMapping("/producer")
    public String sendMessage(@RequestBody User user) {
        kafkaProducer.send(user);
        return "Message sent successfully to the Kafka topic gfg";
    }

    // Accepts a JSON array of users and publishes each one
    @PostMapping("/producerlist")
    public String sendMessage(@RequestBody List<User> user) {
        kafkaProducer.sendList(user);
        return "Message sent successfully to the Kafka topic gfg";
    }
}
pom.xml
XML
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.0</version>
        <relativePath/>
    </parent>
    <groupId>com.example</groupId>
    <artifactId>Producer</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>Producer</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
application.properties
server.port=1234
Create Kafka Consumer and Configure the Elasticsearch Application
Create another Spring Boot application named ElasticConsumer.
ConsumerApplication.java
Java
package com.example.demo;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.example.demo.model.User;
import com.example.demo.service.KafkaUserService;

@SpringBootApplication
@RestController
public class ConsumerApplication {

    @Autowired
    KafkaUserService kafkaUserService;

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }

    // Consume User records from the "gfg" topic and persist them in Elasticsearch
    @KafkaListener(topics = "gfg", groupId = "gfg-group")
    public void listen(User user) {
        System.out.println("Received User information : " + user.toString());
        try {
            kafkaUserService.saveUser(user);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // REST endpoint to read back everything that was indexed
    @GetMapping("/getElasticUserFromKafka")
    public Iterable<User> findAllUser() {
        return kafkaUserService.findAllUsers();
    }
}
KafkaConsumerConfig.java
Java
package com.example.demo.config;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

import com.example.demo.model.User;

@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    // Consumer factory: String keys, JSON payloads deserialized into User
    @Bean
    public ConsumerFactory<String, User> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "gfg-group");
        return new DefaultKafkaConsumerFactory<>(props,
                new StringDeserializer(), new JsonDeserializer<>(User.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, User>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, User> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
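One caveat: the producer's JsonSerializer embeds the sender's class name in a type header, and JsonDeserializer rejects classes from packages it does not trust. In this tutorial both apps keep User in com.example.demo.model, so the default setup works; if your packages differ, a variant of consumerFactory() like the sketch below (ignoring the type headers and trusting your own model package) avoids deserialization errors. This is an illustration under those assumptions, not part of the original configuration:
Java
// Variant of consumerFactory(): ignore the producer's type headers and always
// deserialize payloads into our local User class.
@Bean
public ConsumerFactory<String, User> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "gfg-group");
    // 'false' = do not use type headers even if they are present
    JsonDeserializer<User> valueDeserializer = new JsonDeserializer<>(User.class, false);
    valueDeserializer.addTrustedPackages("com.example.demo.model");
    return new DefaultKafkaConsumerFactory<>(props,
            new StringDeserializer(), valueDeserializer);
}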
User.java
@Document – specifies the Elasticsearch index the entity is stored in.
@Id – maps the field to the document's _id, which is unique for each record.
@Field – declares the Elasticsearch field type (text, date, etc.) of a property.
Java
package com.example.demo.model;

import java.util.Date;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

import com.google.gson.Gson;

@Document(indexName = "kafkauser")
public class User {

    @Id
    int id;

    @Field(type = FieldType.Text, name = "name")
    String name;

    @Field(type = FieldType.Date, name = "pdate")
    Date pdate;

    public User() {
        super();
    }

    public User(int id, String name, Date pdate) {
        super();
        this.id = id;
        this.name = name;
        this.pdate = pdate;
    }

    public Date getPdate() {
        return pdate;
    }

    public void setPdate(Date pdate) {
        this.pdate = pdate;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return new Gson().toJson(this);
    }
}
KafkaUserRepository.java
Java
package com.example.demo.repository;

import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;
import org.springframework.stereotype.Repository;

import com.example.demo.model.User;

@Repository
public interface KafkaUserRepository extends ElasticsearchRepository<User, String> {
}
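ElasticsearchRepository already gives us save() and findAll(), which is all this tutorial needs. Spring Data can also derive simple queries from method names; the finder below is a hypothetical addition to the interface, shown only to illustrate the pattern:
Java
// Hypothetical derived query: Spring Data builds a match on the "name"
// field from the method name. (Requires: import java.util.List;)
List<User> findByName(String name);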
KafkaUserService.java
Java
package com.example.demo.service;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.example.demo.model.User;
import com.example.demo.repository.KafkaUserRepository;

@Service
public class KafkaUserService {

    @Autowired
    private KafkaUserRepository edao;

    // Index a single user document in Elasticsearch
    public void saveUser(User user) {
        edao.save(user);
    }

    // Fetch every user document from the "kafkauser" index
    public Iterable<User> findAllUsers() {
        return edao.findAll();
    }
}
How to Run?
Elasticsearch:
- Run elasticsearch.bat using cmd -> E:\elasticsearch-7.17.3-windows-x86_64\elasticsearch-7.17.3\bin> elasticsearch.bat
- Open a browser and go to http://localhost:9200
The output looks like this:
{
"name": "DESKTOP-S6UTE8M",
"cluster_name": "elasticsearch",
"cluster_uuid": "VDlwyl2WQhCX7_lLwWm9Kg",
"version": {
"number": "7.17.3",
"build_flavor": "default",
"build_type": "zip",
"build_hash": "5ad023604c8d7416c9eb6c0eadb62b14e766caff",
"build_date": "2022-04-19T08:11:19.070913226Z",
"build_snapshot": false,
"lucene_version": "8.11.1",
"minimum_wire_compatibility_version": "6.8.0",
"minimum_index_compatibility_version": "6.0.0-beta1"
},
"tagline": "You Know, for Search"
}
Kafka:
For Kafka, first start ZooKeeper and then the Kafka server. On Windows we run the .bat scripts, and on Linux the corresponding .sh scripts:
- Open cmd
- Navigate under kafka folder
- E:\kafka_2.12-3.2.0\kafka_2.12-3.2.0> .\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
- E:\kafka_2.12-3.2.0\kafka_2.12-3.2.0> .\bin\windows\kafka-server-start.bat .\config\server.properties
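If automatic topic creation is disabled on your broker (auto.create.topics.enable=false), create the gfg topic manually before producing; the command below assumes the same Windows layout as above:
- E:\kafka_2.12-3.2.0\kafka_2.12-3.2.0> .\bin\windows\kafka-topics.bat --create --topic gfg --bootstrap-server localhost:9092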
pom.xml
XML
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.0</version>
        <relativePath/>
    </parent>
    <groupId>com.example</groupId>
    <artifactId>Consumer</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>Consumer</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
Run the Producer and ElasticConsumer Spring Applications
Send JSON data using Postman. The Producer app publishes each request to Kafka; the ElasticConsumer app receives it, prints it to its console, and saves it into Elasticsearch as a JSON document.
Producer App APIs:
- Send single object -> http://localhost:1234/producer
- Send list of objects -> http://localhost:1234/producerlist
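For example, a request body for /producer might look like the one below. The values are illustrative; note that pdate should be in a format Jackson can parse into the consumer's Date field, such as ISO-8601:
{
    "id": 1,
    "name": "geek",
    "pdate": "2022-07-10T10:15:30.000+00:00"
}
For /producerlist, send a JSON array of such objects.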
ElasticConsumer app APIs
- Fetch all records from Elasticsearch -> http://localhost:8080/getElasticUserFromKafka (the consumer app runs on the default port 8080)
Grafana Dashboard
Grafana runs on http://localhost:3000. To plot the saved data, configure an Elasticsearch data source that points at http://localhost:9200 with index name kafkauser and time field pdate, then build a dashboard panel on top of it.
Some Elasticsearch APIs
- Show the records of an index -> GET http://localhost:9200/<index_name>/_search
- Delete an index -> DELETE http://localhost:9200/<index_name>
- List all indices -> GET http://localhost:9200/_cat/indices
- Show the mapping (schema) of an index -> GET http://localhost:9200/<index_name>/_mapping
For this tutorial, GET http://localhost:9200/kafkauser/_search shows the user documents saved by the consumer.