
  1. All messages from the same partition are processed in the same order as they were produced.
  2. One event_consumer can process multiple partitions at the same time, but each partition should be processed by only a single event_consumer at a time. event_consumer is not connected to any partition or to Kafka; it is just a worker that processes whatever is sent to it, so it can be scaled without fear. Message order per partition is guaranteed by the kafka_consumer service, which acts as a message router and cannot be scaled.
  3. kafka_consumer - reads events from a Kafka topic, proxies them to event_consumer workers, and commits offsets to Kafka.
  4. event_consumer - implements the processing business logic for each specific message type.
  5. event_consumer is idempotent. If event_consumer fails to process one of the events in a batch, it crashes and restarts: kafka_consumer restarts the process that served that event_consumer worker, and the offset is not committed to Kafka for the whole batch. The same batch of events is therefore delivered to kafka_consumer again and reprocessed, but with the same result.
  6. Messages are processed by workers one by one, but offsets are committed to Kafka in batches.
  7. Any state change of the patient data in MongoDB is performed in a transaction.
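The routing rule in points 1-3 can be sketched in plain Python. This is a toy model, not the actual kafka_consumer implementation: the class and method names are hypothetical, and the partition-to-worker mapping is a simple modulo for illustration. The key property it demonstrates is that one worker may serve several partitions, but each partition always lands on the same single worker, so per-partition order survives even as workers scale.

```python
# Hypothetical sketch of kafka_consumer's routing; not the real service.
from collections import defaultdict

class KafkaConsumerRouter:
    """Routes messages so that each partition maps to exactly one worker queue."""

    def __init__(self, num_workers):
        self.num_workers = num_workers
        self.queues = defaultdict(list)  # worker_id -> ordered list of messages

    def route(self, partition, message):
        # Stable mapping: a given partition always goes to the same worker,
        # though one worker may serve several partitions.
        worker_id = partition % self.num_workers
        self.queues[worker_id].append((partition, message))
        return worker_id

router = KafkaConsumerRouter(num_workers=2)
for partition, msg in [(0, "a"), (1, "b"), (2, "c"), (0, "d")]:
    router.route(partition, msg)

# Partitions 0 and 2 share worker 0; partition 1 goes to worker 1.
# Within each partition, messages stay in production order.
```

Because event_consumer workers never talk to Kafka directly, adding workers only changes the modulo base here; it never reorders messages within a partition.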
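The crash-and-redeliver behavior in points 5 and 6 can be sketched as follows. Again this is a toy model with hypothetical names and no real Kafka client: events are processed one by one, the offset is advanced only after the whole batch succeeds, and the idempotent (upsert-style) write makes reprocessing a redelivered batch produce the same end state.

```python
# Hypothetical sketch of batch offset commits with an idempotent worker.
processed = {}  # event_id -> payload; upsert semantics stand in for real storage

def process_event(event_id, payload, fail_on=None):
    if event_id == fail_on:
        raise RuntimeError(f"worker crashed on event {event_id}")
    processed[event_id] = payload  # reprocessing yields the same state

def consume_batch(batch, committed_offset, fail_on=None):
    """Process a batch of (offset, (event_id, payload)) pairs.

    Returns the new committed offset. On any failure the offset stays put,
    so the entire batch is redelivered and retried."""
    try:
        for offset, (event_id, payload) in batch:
            process_event(event_id, payload, fail_on=fail_on)
    except RuntimeError:
        return committed_offset  # nothing committed for this batch
    return batch[-1][0] + 1      # commit once, after the whole batch succeeds

batch = [(10, ("e1", "x")), (11, ("e2", "y")), (12, ("e3", "z"))]

# First attempt: the worker crashes on e2, so the offset does not move,
# even though e1 was already processed.
offset = consume_batch(batch, committed_offset=10, fail_on="e2")
# The same batch is redelivered; e1 is reprocessed with the same result.
offset = consume_batch(batch, committed_offset=offset)
```

The combination of at-least-once delivery (uncommitted offset means redelivery) and idempotent processing is what makes the crash-and-restart strategy safe.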