NestJS materials

An awesome doc of NestJS tips, tricks, and notes: things that are not in the official docs (or are there, but not obviously), figured out through experimentation and put to use.


Kafka

Event streaming

Event streaming use cases

Key capabilities

  1. Pub/sub pattern.
  2. Storing streams of events durably and reliably.

    > [!NOTE]
    >
    > Kafka’s performance is effectively constant with respect to data size, so storing data for a long time is perfectly fine.

  3. Live or retrospective processing.
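The capabilities above can be sketched as an append-only log where each consumer tracks its own offset: the same stored events can be processed live or replayed later. This is a minimal in-memory model for illustration, not Kafka's actual API (the `EventLog` class and its method names are made up here):

```typescript
// Illustrative model of an event stream: durable, ordered, replayable.
interface EventRecord {
  key: string;
  value: string;
  timestamp: number;
}

class EventLog {
  private records: EventRecord[] = [];

  // Append an event and return its offset in the log.
  append(key: string, value: string): number {
    this.records.push({ key, value, timestamp: Date.now() });
    return this.records.length - 1;
  }

  // Read from a given offset; a consumer starting at 0 replays all history.
  readFrom(offset: number): EventRecord[] {
    return this.records.slice(offset);
  }
}

const log = new EventLog();
log.append("order-1", "created");
log.append("order-1", "paid");

// A consumer that subscribes late can still replay everything from offset 0.
const replay = log.readFrom(0).map((r) => r.value);
console.log(replay); // ["created", "paid"]
```

Because reads are driven by a consumer-held offset rather than by deleting messages, many independent subscribers can consume the same topic at their own pace.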

How it works

How Kafka works infographic

Glossary

# Topic:
A channel for categorizing events.
A topic is similar to a folder in a filesystem.
Multi-producer and multi-subscriber.
Every topic can be replicated, even across geo-regions or datacenters, so that there are always multiple brokers that have a copy of the data. A common production setting is a replication factor of 3, i.e., there will always be three copies of your data.
# Event:
AKA record or message.
Usually has a key, a value, a timestamp, and optional metadata headers.
Similar to the files in a folder (topic).
Can be read as often as needed; events are not deleted after consumption, and Kafka can also be configured to guarantee exactly-once processing.
# Partitioning:
Topics are partitioned.
A topic is spread over a number of "buckets" located on different Kafka brokers.
Important for scalability, because it allows client apps to read/write data from/to many brokers at the same time.
# Producer:
Client apps that publish (write) events to Kafka topics.
# Consumer:
Client apps that receive data from Kafka topics by subscribing to the events.
<dt id="serverDefinition"> # Server: </dt>
Kafka runs as a cluster of one or more servers that can span multiple datacenters or cloud regions.
Some of these servers form the storage layer, called the brokers.
Others manage tasks such as data distribution.
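Tying the glossary together: a producer usually routes a record to a partition by hashing its key, so records with the same key always land in the same partition, which preserves per-key ordering while spreading different keys across brokers. A rough sketch (the `partitionFor` function and its hash are illustrative; Kafka's default partitioner uses murmur2, not this hash):

```typescript
// Deterministic key-to-partition mapping: same key, same partition.
function partitionFor(key: string, numPartitions: number): number {
  // Simple djb2-style string hash, kept unsigned with >>> 0.
  let hash = 5381;
  for (let i = 0; i < key.length; i++) {
    hash = ((hash << 5) + hash + key.charCodeAt(i)) >>> 0;
  }
  return hash % numPartitions;
}

const partitions = 3;

// Every event for "user-42" goes to the same bucket, so a single consumer
// sees that user's events in order.
console.log(partitionFor("user-42", partitions) === partitionFor("user-42", partitions)); // true
```

Records without a key are instead spread across partitions (e.g. round-robin), trading per-key ordering for even load.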

Docker wurstmeister/kafka