From the course: Apache Flink: Real-Time Data Engineering


Using a Kafka streaming source

- [Instructor] In this video, I will demonstrate using a Kafka streaming source in Flink. Before we proceed, we need to update our pom.xml to add a dependency for the Kafka connector in Flink. To run this example, you need a running instance of Kafka with two topics created in it, namely flink.kafka.streaming.source and flink.kafka.streaming.sink. For data generation, we first need a Kafka data stream generator. Let's now review the Kafka stream data generator class under the data sources package. The Kafka connection is set up by specifying a broker list. The code generates the same audit trail CSV record as the file stream data generator, creates a producer record, and then publishes the record to Kafka. Let me now switch to the windowing operations class under chapter three to show how to consume a Kafka data stream in Flink. We first set up the streaming environment as before. To connect to…
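The pom.xml change mentioned at the start of the transcript would look roughly like the fragment below. This is a sketch: the exact artifact name and version depend on your Flink release (older releases use Scala-suffixed artifacts such as flink-connector-kafka_2.12), so treat both as placeholders to be matched against your project's Flink version.

```xml
<!-- Kafka connector dependency for Flink (artifact name and version are
     assumptions; align them with the Flink version used in the course). -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>${flink.version}</version>
</dependency>
```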
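The data generator the transcript reviews can be sketched as a plain Kafka producer. This is a minimal illustration, not the course's exact class: the broker address, topic name, and CSV fields are assumptions, and only the structure (set up a broker list, build a CSV record, create a producer record, publish it) follows the transcript.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaStreamDataGenerator {
    public static void main(String[] args) {
        // Set up the Kafka connection by specifying a broker list.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                // Build a CSV audit-trail record (fields are illustrative).
                String csvRecord = String.join(",",
                        String.valueOf(i),
                        "user_" + (i % 5),
                        "UPDATE",
                        String.valueOf(System.currentTimeMillis()));

                // Create a producer record and publish it to Kafka.
                ProducerRecord<String, String> record = new ProducerRecord<>(
                        "flink.kafka.streaming.source", csvRecord);
                producer.send(record);
            }
        }
    }
}
```

Running this requires a Kafka broker listening on localhost:9092 with the topic already created, as the transcript notes.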
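The transcript cuts off just as it begins wiring the consumer side, so the continuation below is a hedged sketch of the common pattern for consuming a Kafka topic in a Flink job: set up the streaming environment, configure a Kafka consumer with the broker list and a string deserialization schema, and add it as a source. The FlinkKafkaConsumer API shown here existed in the Flink releases of that era; newer Flink versions replace it with the KafkaSource builder.

```java
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaConsumeSketch {
    public static void main(String[] args) throws Exception {
        // Set up the streaming environment as before.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Connect to Kafka: broker list and consumer group are assumptions.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "flink.learn.realtime");

        // Read each Kafka record as a plain CSV string.
        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
                "flink.kafka.streaming.source",
                new SimpleStringSchema(),
                props);

        DataStream<String> auditTrail = env.addSource(consumer);
        auditTrail.print();

        env.execute("Kafka Streaming Source Example");
    }
}
```

From here the stream can feed the windowing operations the transcript refers to, just like a file-based source.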