ЕСОЗ - public documentation


Event producing

Only basic validation is performed at the event producing stage. The producing flow is as follows (a sketch follows this list):

  1. The incoming JSON is validated.
  2. A job is created in MongoDB.
  3. A message of the relevant type is produced to the Kafka medical_events topic.
  4. The medical_events topic has multiple partitions.
  5. Messages related to the same patient must always be sent to the same partition.
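
A minimal producing sketch in Python, assuming the kafka-python and pymongo clients. The medical_events topic name comes from this page; the broker address, jobs collection name, and message fields are illustrative assumptions. Keying every message by patient_id is what routes all of one patient's messages to the same partition:

    import json
    import uuid
    from datetime import datetime, timezone

    from kafka import KafkaProducer
    from pymongo import MongoClient

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        key_serializer=lambda k: k.encode("utf-8"),
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    jobs = MongoClient()["medical_events"]["jobs"]  # collection name is an assumption

    def produce_event(patient_id: str, message_type: str, payload: dict) -> str:
        # 1. JSON validation is assumed to have happened upstream
        # 2. create a job in MongoDB so its status can be polled later
        job_id = str(uuid.uuid4())
        jobs.insert_one({
            "_id": job_id,
            "type": message_type,            # e.g. "episode_create_job"
            "status": "pending",
            "inserted_at": datetime.now(timezone.utc),
        })
        # 3. produce a message of the relevant type to the medical_events topic;
        #    the default partitioner hashes the key, so one patient_id always
        #    maps to the same partition, preserving per-patient ordering
        producer.send(
            "medical_events",
            key=patient_id,
            value={"job_id": job_id, "type": message_type, "payload": payload},
        )
        return job_id

The returned job_id is what a client would poll to learn the processing outcome once the consumers (described below) have handled the message.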

Supported message types (can be extended):

approval_create_job
approval_resend_job
diagnostic_report_package_cancel_job
diagnostic_report_package_create_job
episode_cancel_job
episode_close_job
episode_create_job
episode_update_job
package_cancel_job
package_create_job
service_request_cancel_job
service_request_close_job
service_request_complete_job
service_request_create_job
service_request_process_job
service_request_recall_job
service_request_release_job
service_request_use_job

Event processing

  1. All messages from the same partition are processed in the same order as they were produced.
  2. event_consumer is not connected to Kafka or to any particular partition; it is just a worker that processes whatever is sent to it, so it can be scaled without fear. Message order per partition is guaranteed by the kafka_consumer service, which acts as a message router and cannot be scaled.
  3. kafka_consumer reads events from the Kafka topic, proxies them to event_consumer workers, and commits offsets back to Kafka (see the consumer loop sketch after this list).
  4. kafka_consumer commits offsets in batches, only after all messages from a batch have been processed.
  5. event_consumer implements all the processing business logic for every specific message type.
  6. event_consumer is idempotent. If event_consumer fails to process an event, it crashes and restarts; kafka_consumer restarts the process that served this particular event_consumer worker, and the offset for the whole batch of events is not committed to Kafka. The same batch is therefore received by kafka_consumer again and reprocessed, with the same result (see the idempotent handler sketch below).
  7. Messages are processed by workers (event_consumer) one by one, but committed to Kafka in batches.
  8. Any state change of patient data in MongoDB is performed in a transaction.
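
A minimal sketch of the kafka_consumer side, again in Python with kafka-python. Auto-commit is disabled so that offsets are committed manually, once per fully processed batch; the group id, broker address, and the dispatch stub standing in for the real proxying to event_consumer workers are assumptions:

    from kafka import KafkaConsumer

    def dispatch_to_event_consumer(key: bytes, value: bytes) -> None:
        ...  # hand the message to an event_consumer worker (stub)

    consumer = KafkaConsumer(
        "medical_events",
        bootstrap_servers="localhost:9092",
        group_id="event_processing",
        enable_auto_commit=False,  # commit manually, in batches
    )

    while True:
        # poll() returns one batch of records per assigned partition
        batch = consumer.poll(timeout_ms=1000)
        for tp, records in batch.items():
            for record in records:
                # records within a partition are handled strictly in order
                dispatch_to_event_consumer(record.key, record.value)
        if batch:
            # commit only after every message in the batch has been processed;
            # a crash before this line means the whole batch is redelivered
            consumer.commit()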
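
And a sketch of an idempotent event_consumer handler, assuming pymongo against a MongoDB replica set (multi-document transactions require one). The job document shape, status values, and the per-type business logic stub are illustrative:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    db = client["medical_events"]

    def apply_business_logic(db, event, session) -> None:
        ...  # dispatch on event["type"], e.g. "episode_create_job" (stub)

    def handle_event(event: dict) -> None:
        job = db.jobs.find_one({"_id": event["job_id"]})
        if job and job["status"] == "processed":
            # this event was already handled on a previous delivery of the
            # batch; skipping it makes reprocessing yield the same result
            return
        # any state change of patient data happens inside one transaction,
        # so a crash mid-processing leaves no half-applied state
        with client.start_session() as session:
            with session.start_transaction():
                apply_business_logic(db, event, session)
                db.jobs.update_one(
                    {"_id": event["job_id"]},
                    {"$set": {"status": "processed"}},
                    session=session,
                )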