Event producing
Only basic validation is performed at the event producing stage:
- JSON validation
- a job is created in MongoDB
- a message of the relevant type is produced to the Kafka medical_events topic
- the medical_events topic has multiple partitions
- messages related to the same patient must always be sent to the same partition (see the producer sketch after this list)
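The partition-by-patient requirement can be met by keying every message with the patient id, since Kafka's default partitioner hashes the key. Below is a minimal sketch of the producing stage, assuming kafka-python, pymongo and jsonschema; the schema, database and collection names are illustrative and not the project's actual ones.

```python
# Producing stage sketch: basic validation, job record, keyed Kafka message.
import json
import uuid

from jsonschema import validate          # basic JSON validation
from kafka import KafkaProducer          # pip install kafka-python
from pymongo import MongoClient

# Simplified schema; only basic structural validation happens at this stage.
JOB_SCHEMA = {"type": "object", "required": ["patient_id", "payload"]}

mongo = MongoClient("mongodb://localhost:27017")
producer = KafkaProducer(bootstrap_servers="localhost:9092")

def produce_event(message_type: str, event: dict) -> str:
    # 1. Basic JSON validation only; business validation happens in the consumer.
    validate(instance=event, schema=JOB_SCHEMA)

    # 2. Create a job record in MongoDB so the client can poll its status later.
    job = {"_id": str(uuid.uuid4()), "type": message_type, "status": "pending"}
    mongo.medical_events_db.jobs.insert_one(job)

    # 3. Produce a message of the relevant type to the medical_events topic.
    #    Keying by patient_id makes all messages for the same patient hash to
    #    the same partition, which preserves their relative order.
    payload = {"job_id": job["_id"], "type": message_type, "event": event}
    producer.send(
        "medical_events",
        key=event["patient_id"].encode("utf-8"),
        value=json.dumps(payload).encode("utf-8"),
    )
    producer.flush()
    return job["_id"]
```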
Supported message types (can be extended):
approval_create_job, approval_resend_job, diagnostic_report_package_cancel_job, diagnostic_report_package_create_job, episode_cancel_job, episode_close_job, episode_create_job, episode_update_job, package_cancel_job, package_create_job, service_request_cancel_job, service_request_close_job, service_request_complete_job, service_request_create_job, service_request_process_job, service_request_recall_job, service_request_release_job, service_request_use_job
Event processing
- All messages from the same partition are processed in the same order in which they were produced.
- One event consumer can process multiple partitions at the same time, but each partition must be processed by only a single event_consumer at any given time. Proper routing is guaranteed by kafka_consumer.
- kafka_consumer - reads events from the Kafka topic, proxies them to event_consumer workers, and commits offsets to Kafka (see the consumer-loop sketch after this list).
- event_consumer - implements all processing business logic for each specific message type.
- event_consumer is idempotent. If it fails to process one of the events in a batch, it crashes and restarts without committing the offset, so the consumer receives the same batch of events again and reprocesses it with the same result.
- Messages are processed by workers one by one, but offsets are committed to Kafka in batches.
- Any state change of patient data in MongoDB is performed in a transaction (see the transaction sketch after this list).
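A minimal sketch of how kafka_consumer and event_consumer could cooperate, assuming kafka-python; the group id, handler names and message layout are illustrative. The offset is committed only after the whole polled batch has been processed, so a crash mid-batch leads to redelivery of that batch, which the idempotent handlers tolerate.

```python
# kafka_consumer / event_consumer split: poll a batch, process one by one,
# commit offsets for the whole batch only when every event succeeded.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "medical_events",
    bootstrap_servers="localhost:9092",
    group_id="event_consumer_group",
    enable_auto_commit=False,          # offsets are committed explicitly, per batch
)

def handle_episode_create(message: dict) -> None:
    ...  # idempotent business logic for episode_create_job (illustrative)

# One handler per supported message type; shown with a single entry here.
HANDLERS = {"episode_create_job": handle_episode_create}

def event_consumer(message: dict) -> None:
    """Dispatch one message to the business logic for its type; must be idempotent."""
    HANDLERS[message["type"]](message)

while True:
    # kafka_consumer: read a batch of events from the topic ...
    batch = consumer.poll(timeout_ms=1000, max_records=100)
    for tp, records in batch.items():
        for record in records:
            # ... and proxy them to event_consumer one by one, in partition order.
            # If this raises, the process crashes and restarts without committing,
            # so the same batch is redelivered and reprocessed with the same result.
            event_consumer(json.loads(record.value))
    if batch:
        consumer.commit()              # commit offsets for the whole batch at once
```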
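A minimal sketch of a transactional state change, assuming pymongo against a replica set (MongoDB multi-document transactions require one); the collection layout and field names are illustrative. Updating the patient data and the job status in one transaction keeps them consistent even if the consumer crashes mid-processing.

```python
# Transactional state change: patient data and job status updated atomically.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client.medical_events_db

def close_episode(patient_id: str, episode_id: str, job_id: str) -> None:
    # Any state change of patient data happens inside a MongoDB transaction.
    with client.start_session() as session:
        with session.start_transaction():
            db.patients.update_one(
                {"_id": patient_id},
                {"$set": {f"episodes.{episode_id}.status": "closed"}},
                session=session,
            )
            db.jobs.update_one(
                {"_id": job_id},
                {"$set": {"status": "processed"}},
                session=session,
            )
```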