...
- All messages from the same partition are processed in the same order in which they were produced.
- One event_consumer can process multiple partitions at the same time, but each partition must be processed by a single event_consumer at a time. Proper routing is guaranteed by kafka_consumer: event_consumer is not connected to any partition or to Kafka at all, it is just a worker that processes whatever you send to it, so you can scale it without fear. Message order per partition is guaranteed by the kafka_consumer service, which acts as a message router and cannot be scaled (see the routing sketch after this list).
- kafka_consumer - reads events from a Kafka topic, proxies them to event_consumer workers, and commits offsets back to Kafka.
- event_consumer - implements all the business logic for processing each specific message type.
- event_consumer is idempotent. If event_consumer fails to process one of the events in a batch, it crashes and restarts: kafka_consumer restarts the process that served this particular event_consumer worker, and the offset is not committed to Kafka for the whole batch of events. The same batch is therefore received by kafka_consumer again and reprocessed, but with the same result (see the idempotent handler sketch below).
- Messages are processed by workers one by one, but offsets are committed to Kafka in batches.
- Any state change of the patient data in MongoDB is performed inside a transaction (see the transaction sketch below).
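
A minimal sketch of the kafka_consumer routing loop, assuming the confluent-kafka client; the topic name, worker count, and in-memory queue wiring are illustrative assumptions, not the actual service code. The point is the routing rule: each partition maps to exactly one worker, so a worker may serve several partitions while per-partition order is preserved, and offsets are committed for the whole batch at once.

```python
# Hypothetical sketch of the kafka_consumer router (not the real service).
from confluent_kafka import Consumer
import queue

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "kafka_consumer",
    "enable.auto.commit": False,             # offsets are committed manually, per batch
})
consumer.subscribe(["patient-events"])       # hypothetical topic name

# One in-memory queue per event_consumer worker (assumed wiring).
workers = [queue.Queue() for _ in range(4)]

while True:
    batch = consumer.consume(num_messages=100, timeout=1.0)
    if not batch:
        continue
    for msg in batch:
        if msg.error():
            raise RuntimeError(msg.error())
        # Same partition -> same worker, so per-partition order is preserved,
        # while one worker can still serve several partitions.
        workers[msg.partition() % len(workers)].put(msg)
    # ... wait until every worker has processed its share of the batch ...
    consumer.commit(asynchronous=False)      # commit the whole batch at once
```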
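A minimal sketch of an idempotent event_consumer handler, assuming pymongo and a hypothetical event shape with `event_id`, `patient_id`, and `data` fields; the real handlers dispatch on message type. Because the write is an upsert keyed by the event id, replaying the same batch after a crash produces the same state.

```python
# Hypothetical sketch of an idempotent event handler (assumed schema).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
events = client.app.processed_events               # hypothetical db/collection

def handle(event: dict) -> None:
    # Upsert keyed by event_id: processing the same event twice writes
    # the same document, so reprocessing an uncommitted batch is safe.
    events.update_one(
        {"_id": event["event_id"]},
        {"$set": {"patient_id": event["patient_id"], "data": event["data"]}},
        upsert=True,
    )
```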
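And a minimal sketch of a transactional patient-state change, assuming a pymongo client connected to a replica set (MongoDB transactions require one); the collection names and the audit-log write are hypothetical. Both writes commit together or not at all, which is what keeps patient data consistent when a worker crashes mid-event.

```python
# Hypothetical sketch of a transactional state change (assumed collections).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # assumed URI
db = client.app

def apply_patient_update(patient_id: str, changes: dict) -> None:
    with client.start_session() as session:
        # Everything inside the transaction commits atomically.
        with session.start_transaction():
            db.patients.update_one(
                {"_id": patient_id}, {"$set": changes}, session=session
            )
            db.audit_log.insert_one(
                {"patient_id": patient_id, "changes": changes}, session=session
            )
```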