Kafka
Master the distributed event streaming platform that powers real-time data pipelines at the world's largest companies. From decoupled microservices to event sourcing at scale.
Core Architecture
Topics, partitions, offsets, and records — the primitives you must internalize before anything else.
Brokers & Replication
The physical layer — how Kafka stores data, replicates it, and survives failures.
Producers & Consumers
How data flows in and out — batching, partitioning, consumer groups, and rebalancing.
Streams & Patterns
Real-world usage — event sourcing, CQRS, CDC, sagas, and Kafka Streams processing.
Operations & Tuning
Schema Registry, performance tuning, monitoring, security, and when NOT to use Kafka.
Why Kafka?
Traditional message queues delete messages once they are consumed. Kafka retains them for a configurable retention period, or indefinitely. This single design decision — the durable, replayable, append-only log — unlocks event sourcing, stream processing, and decoupled architectures that scale to millions of events per second.
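The idea can be sketched with a toy in-memory model (this is not the real Kafka client API, just an illustration of the append-only, replayable log):

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    offset: int
    value: str

@dataclass
class Log:
    """Toy single-partition log: append-only, immutable, replayable."""
    records: list = field(default_factory=list)

    def append(self, value: str) -> int:
        offset = len(self.records)
        self.records.append(Record(offset, value))
        return offset

    def read(self, from_offset: int):
        # Reading does not delete anything: any consumer can
        # replay the log from any offset it chooses.
        return self.records[from_offset:]

log = Log()
for event in ["user_created", "email_changed", "user_deleted"]:
    log.append(event)

# A late-joining consumer replays the full history from offset 0.
replay = [r.value for r in log.read(0)]

# Another consumer resumes from its committed offset (say, 2).
resume = [r.value for r in log.read(2)]
```

Contrast this with a classic queue, where reading `user_created` would remove it and no later consumer could ever see it again.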
- ✓ Append-only commit log — immutable, ordered, replayable from any offset.
- ✓ Millions of messages per second with horizontal scaling via partitions.
- ✓ Consumer groups enable independent, parallel processing with automatic load balancing.
- ✓ Three roles in one: message queue, event streaming platform, and durable storage layer.
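The load-balancing point above can be sketched in a few lines. This is a simplified round-robin assignment, loosely modeling what Kafka's group coordinator does (the real assignors — range, round-robin, sticky — are configurable and more involved):

```python
def assign_partitions(partitions, consumers):
    """Spread partitions round-robin across the consumers in a group."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# Six partitions shared by two consumers in one group: three each.
group = assign_partitions(range(6), ["consumer-a", "consumer-b"])
# → {'consumer-a': [0, 2, 4], 'consumer-b': [1, 3, 5]}

# A third consumer joins the group: a rebalance redistributes
# partitions so each consumer owns two.
rebalanced = assign_partitions(range(6), ["consumer-a", "consumer-b", "consumer-c"])
```

Each partition is owned by exactly one consumer in the group at a time, which is what makes the parallelism independent: consumers never contend for the same records.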