...
- All messages from the same partition are processed in the same order in which they were produced.
- One event_consumer can process multiple partitions at the same time, but each partition must be processed by only a single event_consumer at any given time. Proper routing is guaranteed by kafka_consumer.
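The per-partition ordering guarantee comes from key-based routing: messages with the same key always land in the same partition. A minimal sketch, assuming the usual "hash the key, mod the partition count" strategy (Kafka's default partitioner actually uses murmur2; the stable md5-based hash here is only for illustration, and `route` is a hypothetical name):

```python
import hashlib

def route(key: str, num_partitions: int) -> int:
    # Stable hash, so the same key (e.g. a patient id) always maps to
    # the same partition, which is what preserves per-key ordering.
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

Because the mapping is deterministic, all events for one patient form a single ordered stream inside one partition.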
- kafka_consumer - reads events from the Kafka topic, proxies them to event_consumer workers, and commits offsets back to Kafka.
- event_consumer - implements the processing business logic for each specific message type.
- event_consumer is idempotent. If it fails to process one of the events in a batch, it crashes and restarts; the offset is not committed, so the same batch of events is delivered to the consumer again and reprocessed with the same result.
- Messages are processed by workers one-by-one, but offsets are committed to Kafka in batches.
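The crash-and-redeliver cycle above can be sketched with an in-memory stand-in for the Kafka loop (`process_batch`, `idempotent_handler`, and `TransientError` are hypothetical names used for illustration, not part of the actual codebase):

```python
class TransientError(Exception):
    pass

def process_batch(events, handler, committed_offset):
    """Process events one-by-one; 'commit' (return the new offset) only
    after every event in the batch has succeeded."""
    for event in events:
        handler(event)                      # event_consumer business logic
    return committed_offset + len(events)   # one commit per batch

# Demonstrate redelivery: the handler fails mid-batch on the first
# attempt, the offset stays put, and the same batch is processed again.
# Because the handler is idempotent, the retry yields the same end state.
seen = set()
attempts = {"n": 0}

def idempotent_handler(event):
    attempts["n"] += 1
    if attempts["n"] == 2:                  # simulated crash on 2nd event
        raise TransientError(event)
    seen.add(event)                         # a set, so reprocessing is a no-op

offset = 0
try:
    offset = process_batch(["e1", "e2"], idempotent_handler, offset)
except TransientError:
    pass                                    # consumer restarts; offset unchanged

offset = process_batch(["e1", "e2"], idempotent_handler, offset)  # redelivery
```

After the retry, the offset has advanced past the whole batch and each event's effect was applied exactly once.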
- Any state change of the patient data in MongoDB is performed inside a transaction.
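A minimal sketch of the all-or-nothing semantics this gives the event_consumer. The real service would wrap the writes in a MongoDB transaction (e.g. pymongo's `client.start_session()` with `session.start_transaction()`); here an in-memory dict stands in for the patient document, and `patient_transaction` is a hypothetical helper:

```python
import copy
from contextlib import contextmanager

@contextmanager
def patient_transaction(doc):
    """Apply all mutations to `doc` atomically: roll back on any error."""
    snapshot = copy.deepcopy(doc)
    try:
        yield doc
    except Exception:
        doc.clear()
        doc.update(snapshot)    # undo every partial change
        raise

patient = {"name": "A", "visits": 1}

# A failure mid-update leaves the document untouched...
try:
    with patient_transaction(patient) as p:
        p["visits"] += 1
        raise RuntimeError("mid-update failure")
except RuntimeError:
    pass

# ...while a clean run commits the change.
with patient_transaction(patient) as p:
    p["visits"] += 1
```

Combined with the idempotent event_consumer, this ensures a crashed batch never leaves a patient document half-updated.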