Apache Kafka is publish-subscribe messaging rethought as a
distributed commit log.
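The commit-log idea can be sketched in a few lines: an append-only
sequence of messages, each addressed by its offset. This is an
illustrative toy, not Kafka's API; real Kafka persists log segments
to disk and replicates them across brokers.

```python
class CommitLog:
    """Minimal in-memory sketch of an append-only commit log."""

    def __init__(self):
        # offset i holds the i-th appended message
        self._records = []

    def append(self, message: bytes) -> int:
        """Append a message and return the offset it was written at."""
        self._records.append(message)
        return len(self._records) - 1

    def read(self, offset: int, max_messages: int = 10) -> list:
        """Read up to max_messages starting at the given offset."""
        return self._records[offset:offset + max_messages]
```

Consumers track their own offset and pull forward from it, which is
what lets many independent readers share one log.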
Fast
A single Kafka broker can handle hundreds of megabytes
of reads and writes per second from thousands of clients.
Scalable
Kafka is designed to allow a single cluster to serve as
the central data backbone for a large organization. It
can be elastically and transparently expanded without
downtime. Data streams are partitioned and spread over a
cluster of machines so that a stream can grow larger than
the capacity of any single machine, and so that clusters of
coordinated consumers can share the work of processing it.
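Partitioning usually works by hashing each message's key, so the same
key always lands on the same partition. A simplified sketch, using
CRC-32 for the hash (Kafka's default partitioner actually uses
murmur2, but the principle is identical):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Map a message key to a partition deterministically.

    Illustrative stand-in for Kafka's default partitioner: hashing
    guarantees that every message with the same key is routed to
    the same partition, preserving per-key ordering.
    """
    return zlib.crc32(key) % num_partitions
```

Because routing depends only on the key and the partition count,
any producer computes the same placement without coordination.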
Durable
Messages are persisted on disk and replicated within the
cluster to prevent data loss. Each broker can handle
terabytes of messages without performance impact.
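The replication guarantee can be sketched as an acknowledgement rule:
a write succeeds only once enough replicas have stored it. The
`min_insync` name echoes Kafka's min.insync.replicas setting, but the
code below is a toy model, not Kafka's implementation.

```python
def replicate(message: bytes, replicas: list, min_insync: int) -> bool:
    """Append a message to every reachable replica; succeed only if
    at least min_insync replicas acknowledged the write.

    Each replica is modeled as {"alive": bool, "log": list}; in a
    real cluster the append is a network call that may fail.
    """
    acks = 0
    for replica in replicas:
        if replica["alive"]:  # a down broker cannot acknowledge
            replica["log"].append(message)
            acks += 1
    return acks >= min_insync
```

With this rule a message is only reported as committed once it exists
on multiple machines, so losing one broker loses no data.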
Distributed by Design
Kafka has a modern cluster-centric design that offers
strong durability and fault-tolerance guarantees.