Event producing

Only basic validation is performed at the event-producing stage. The producing flow (see the sketch after this list):

  1. The payload is validated against a JSON schema.
  2. A job is created in MongoDB.
  3. A message with the relevant type is produced to the Kafka medical_events topic.
  4. The medical_events topic has multiple partitions.
  5. Messages related to the same patient must always be sent to the same partition.
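
A minimal sketch of the producing flow, assuming a Python implementation with the kafka-python, pymongo, and jsonschema libraries; the job document shape, message envelope, and connection strings are illustrative, not the actual implementation:

Code Block
import json
import uuid

from jsonschema import validate  # step 1: JSON validation
from kafka import KafkaProducer
from pymongo import MongoClient

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=str.encode,
    value_serializer=lambda v: json.dumps(v).encode(),
)
jobs = MongoClient("mongodb://localhost:27017")["medical_events"]["jobs"]

def produce_event(message_type: str, patient_id: str, payload: dict, schema: dict) -> str:
    # 1. Basic validation: raises jsonschema.ValidationError on bad input.
    validate(instance=payload, schema=schema)

    # 2. Create a job in MongoDB so the client can poll its status.
    job_id = str(uuid.uuid4())
    jobs.insert_one({"_id": job_id, "status": "pending", "type": message_type})

    # 3. Produce a message of the relevant type to the medical_events topic.
    #    Keying by patient_id makes Kafka's default partitioner hash every
    #    message for the same patient to the same partition, which preserves
    #    per-patient ordering.
    producer.send(
        "medical_events",
        key=patient_id,
        value={"job_id": job_id, "type": message_type, "payload": payload},
    )
    producer.flush()
    return job_id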

Supported message types (can be extended):

Code Block
approval_create_job
approval_resend_job
diagnostic_report_package_cancel_job
diagnostic_report_package_create_job
episode_cancel_job
episode_close_job
episode_create_job
episode_update_job
package_cancel_job
package_create_job
service_request_cancel_job
service_request_close_job
service_request_complete_job
service_request_create_job
service_request_process_job
service_request_recall_job
service_request_release_job
service_request_use_job
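
For illustration, an event_consumer could route each message type to its business logic through a dispatch table; the handler names below are hypothetical, and only two of the supported types are shown:

Code Block
from typing import Callable, Dict

def handle_episode_create(payload: dict) -> None:
    ...  # business logic for episode_create_job goes here

def handle_episode_close(payload: dict) -> None:
    ...  # business logic for episode_close_job goes here

# One entry per supported message type; extending the list means
# registering another handler here.
HANDLERS: Dict[str, Callable[[dict], None]] = {
    "episode_create_job": handle_episode_create,
    "episode_close_job": handle_episode_close,
}

def dispatch(message: dict) -> None:
    # An unknown type raises KeyError instead of being silently dropped.
    HANDLERS[message["type"]](message["payload"])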

Event processing

  1. All messages from the same partition are processed in the same order in which they were produced.
  2. One event consumer can process multiple partitions at the same time, but each partition must be processed by only a single event_consumer at any given moment.
  3. kafka_consumer reads events from the Kafka topic, proxies them to event_consumer workers, and commits offsets back to Kafka (see the sketch after this list).
  4. event_consumer implements the processing business logic for each specific message type.
  5. event_consumer is idempotent. If it fails to process one of the events in a batch, it crashes and restarts. The offset is not committed, so the consumer receives the same batch again and reprocesses it, with the same result.
  6. Messages are processed by workers one by one, but offsets are committed to Kafka in batches.
  7. Any state change of patient data in MongoDB is performed in a transaction.
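
A minimal sketch of the kafka_consumer / event_consumer loop under these rules, again assuming kafka-python and pymongo; the consumer group id, database names, and job-document shape are illustrative:

Code Block
import json

from kafka import KafkaConsumer
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["medical_events"]

consumer = KafkaConsumer(
    "medical_events",
    bootstrap_servers="localhost:9092",
    group_id="event_consumers",   # Kafka assigns each partition to one group member
    enable_auto_commit=False,     # offsets are committed manually, per batch
    value_deserializer=lambda v: json.loads(v.decode()),
)

def event_consumer(message: dict) -> None:
    """Idempotent business logic; patient-data changes run in a transaction."""
    with client.start_session() as session:
        with session.start_transaction():  # requires a MongoDB replica set
            # The filter matches only a pending job, so replaying the same
            # message after a crash is a no-op: same batch, same result.
            db.jobs.update_one(
                {"_id": message["job_id"], "status": "pending"},
                {"$set": {"status": "processed"}},
                session=session,
            )
            # ...apply the patient-data changes for message["type"] here...

while True:
    batch = consumer.poll(timeout_ms=1000)
    for _partition, records in batch.items():  # per-partition order preserved
        for record in records:                 # processed one by one
            # Any exception crashes the process before the commit below, so
            # the uncommitted batch is redelivered and reprocessed on restart.
            event_consumer(record.value)
    consumer.commit()  # offsets committed to Kafka in batches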