Find out why Apache Kafka is an excellent choice as a distributed data-streaming platform
Learn the major CLI tools: kafka-topics, kafka-console-producer, kafka-console-consumer, kafka-consumer-groups, and kafka-configs (a quick sketch of these commands follows this list)
Get a solid grip on Apache Kafka by working through a variety of real-world examples
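As a taste of the CLI coverage, here is a minimal sketch of those five tools in action. It assumes a broker already listening on localhost:9092; the topic name demo-topic and consumer group demo-group are placeholders, and exact script names vary by distribution (the Apache tarball ships them as .sh scripts, with .bat equivalents for Windows):

# Create a topic with three partitions (assumes a single-broker cluster)
kafka-topics.sh --bootstrap-server localhost:9092 --create --topic demo-topic --partitions 3 --replication-factor 1

# Produce messages typed on standard input
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic demo-topic

# Consume the topic from the beginning as part of a consumer group
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic demo-topic --from-beginning --group demo-group

# Describe the group's committed offsets and lag
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group demo-group

# Show per-topic configuration overrides
kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name demo-topic --describe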
The high throughput and low latency of Apache Kafka have made it one of the leading distributed data-streaming technologies in the enterprise; it is used by many Fortune 500 companies such as Netflix, Airbnb, Uber, Walmart, and LinkedIn. If you want to develop the Apache Kafka skills needed to stream data with ease, then this course is for you.
The course starts by explaining the architecture of the Apache Kafka ecosystem before covering core Kafka concepts such as topics, partitions, brokers, replicas, producers, and consumers. Next, you will use the native Kafka binaries to launch your own Kafka cluster on Windows, Mac OS X, and Linux.
As you advance, you will get hands-on experience with the Kafka command-line interface (CLI) and learn how to create producers and consumers in Java to interact with Kafka. Next, you will build a real-world project that uses Wikimedia as a data source for a producer and OpenSearch as a sink for your consumers. Moving on, you will get to grips with advanced APIs such as Kafka Connect and Kafka Streams and work through a case study of real-world Kafka applications. Finally, you will get an overview of advanced Kafka for administrators and explore advanced topic configurations.
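To give a flavour of the Java producer API used in that part of the course, here is a minimal, self-contained sketch (not the course's own code); the broker address localhost:9092 and the topic name demo-topic are assumptions for illustration:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DemoProducer {
    public static void main(String[] args) {
        // Minimal configuration: where the broker lives and how to serialize keys and values
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");              // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Send one record to an assumed topic, then flush and close the producer
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("demo-topic", "key1", "hello from Java"));
            producer.flush();
        }
    }
}

A consumer follows the same pattern with KafkaConsumer, a group.id, and a poll loop; the course builds both out in full alongside the Wikimedia-to-OpenSearch project.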
By the end of this course, you will be well versed in how Apache Kafka 3.0 plays an important role in data-streaming applications.
This course is for developers who want to learn the fundamentals of Apache Kafka, software architects who want to understand how Kafka fits into their solution architecture, or anyone looking to understand how Apache Kafka works as a distributed system. Basic knowledge of Java and the Linux command line will be beneficial for understanding the concepts covered in the course.