...
Only basic validation is performed at the event-producing stage:
- JSON validation
- job hash check
- If there is no job with the same hash in status Pending, a new job is created in MongoDB.
- Otherwise, we respond with 202 (Accepted) but return the previously created job id; no new job is created in MongoDB.
- A message with the relevant type is produced to the Kafka medical_events topic.
- The medical_events topic has multiple partitions.
- Messages related to the same patient must always be sent to the same partition (see the producer sketch after this list).
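The producing stage can be illustrated with a minimal sketch. It is not the service's actual code: pymongo and kafka-python stand in for whatever clients are really used, and the collection name, field names, hashing scheme, and connection settings are assumptions. It only shows the two guarantees above: the dedup-by-hash check against Pending jobs, and keying the Kafka message by patient id so the default key-hash partitioner keeps one patient's messages on one partition.

```python
import hashlib
import json

from kafka import KafkaProducer
from pymongo import MongoClient

STATUS_PENDING = 0

# Hypothetical connection settings and collection name.
jobs = MongoClient("mongodb://localhost:27017").medical_events.jobs

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)


def produce_event(event_type: str, patient_id: str, payload: dict) -> dict:
    """Deduplicate the job by hash, store it in MongoDB, and publish the event."""
    # Job hash over the request payload (the real hashing scheme is an assumption).
    job_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()

    # A Pending job with the same hash already exists: respond 202 with the
    # previously created job id and do not create a new job.
    existing = jobs.find_one({"hash": job_hash, "status": STATUS_PENDING})
    if existing:
        return {"status_code": 202, "job_id": str(existing["_id"])}

    job_id = jobs.insert_one(
        {"hash": job_hash, "status": STATUS_PENDING, "type": event_type}
    ).inserted_id

    # Key the message by patient id: Kafka's default key-hash partitioner then
    # routes all messages of the same patient to the same partition.
    producer.send(
        "medical_events",
        key=patient_id,
        value={"job_id": str(job_id), "type": event_type, "payload": payload},
    )
    return {"status_code": 202, "job_id": str(job_id)}
```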
...
Supported event types:

Event type | Description |
---|---|
approval_create_job | Event for creating an Approval |
approval_resend_job | Event for resending an Approval |
diagnostic_report_package_cancel_job | Event for canceling a Diagnostic report package |
diagnostic_report_package_create_job | Event for creating a Diagnostic report package |
episode_cancel_job | Event for canceling an Episode |
episode_close_job | Event for closing an Episode |
episode_create_job | Event for creating an Episode |
episode_update_job | Event for updating an Episode |
package_cancel_job | Event for canceling a Package |
package_create_job | Event for creating a Package |
procedure_cancel_job | Event for canceling a Procedure |
procedure_create_job | Event for creating a Procedure |
service_request_cancel_job | Event for canceling a Service request |
service_request_close_job | Event for closing a Service request |
service_request_complete_job | Event for completing a Service request |
service_request_create_job | Event for creating a Service request |
service_request_process_job | Event for processing a Service request |
service_request_recall_job | Event for recalling a Service request |
service_request_release_job | Event for releasing a Service request |
service_request_resend_job | Event for resending a Service request |
service_request_use_job | Event for using a Service request |
Event processing
- All messages from the same partition are processed in the same order in which they were produced.
- event_consumer is not connected to any partitions or to Kafka; it is just a worker that processes whatever is sent to it, so it can be scaled without fear. Message order per partition is guaranteed by the kafka_consumer service, which acts as a message router and cannot be scaled.
- kafka_consumer - reads events from the Kafka topic, proxies them to event_consumer workers, and commits offsets to Kafka.
- kafka_consumer commits offsets in batches, not one-by-one, after all messages from the batch have been processed.
- event_consumer - implements all the processing business logic for every specific message type.
- event_consumer is idempotent. If event_consumer fails to process an event, it crashes and restarts. kafka_consumer will restart the process that served this particular event_consumer worker, and the offset will not be committed to Kafka for the whole batch of events. The same batch of events will therefore be received by kafka_consumer again and reprocessed, but with the same result.
- Messages are processed by workers (event_consumer) one-by-one, but offsets are committed to Kafka in batches (see the consumer sketch after this list).
- Any state change of the patient data in MongoDB is performed within a transaction.
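A minimal sketch of the kafka_consumer loop, under the same assumptions (kafka-python instead of the real client, a hypothetical process_event stand-in for the event_consumer worker call): messages are handled one-by-one in partition order, and offsets are committed only after the whole polled batch has been processed, so a crash before the commit causes the entire batch to be redelivered and reprocessed with the same result.

```python
import json

from kafka import KafkaConsumer


def process_event(event: dict) -> None:
    """Stand-in for dispatching the message to an event_consumer worker.

    Must be idempotent: the whole batch may be redelivered after a crash.
    """
    ...


consumer = KafkaConsumer(
    "medical_events",
    bootstrap_servers="localhost:9092",
    group_id="kafka_consumer",        # hypothetical group id
    enable_auto_commit=False,         # offsets are committed manually, in batches
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

while True:
    # Poll a batch of records from the partitions assigned to this consumer.
    batch = consumer.poll(timeout_ms=1000, max_records=100)

    for _partition, records in batch.items():
        for record in records:
            # One-by-one, in the order the messages were produced per partition.
            process_event(record.value)

    if batch:
        # Commit offsets only after every message in the batch has been processed.
        # If processing crashes above, nothing is committed and the same batch is
        # consumed and reprocessed after restart.
        consumer.commit()
```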
Async job processing status model
Possible statuses
Status | Description |
---|---|
0 - Pending | Job status after successful creation |
1 - Processed | Job status after successful processing |
2 - Failed | Job status after an expected error (e.g. 409 or 422) |
3 - Failed_with_error | Job status after a system error (e.g. 500) |
Possible transitions between statuses
From | To | Description |
---|---|---|
Pending | Processed | Job successfully processed |
Pending | Failed | Job failed due to user error |
Pending | Failed_with_error | Job failed due to system error |
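The status model can be summarized in a small sketch; the numeric codes come from the table above, while the enum and transition map themselves are illustrative rather than the service's actual types.

```python
from enum import IntEnum


class JobStatus(IntEnum):
    PENDING = 0            # job created successfully
    PROCESSED = 1          # processed successfully
    FAILED = 2             # expected error (e.g. 409 or 422)
    FAILED_WITH_ERROR = 3  # system error (e.g. 500)


# A job only ever moves out of Pending; Processed and both failed states are terminal.
ALLOWED_TRANSITIONS = {
    JobStatus.PENDING: {
        JobStatus.PROCESSED,
        JobStatus.FAILED,
        JobStatus.FAILED_WITH_ERROR,
    },
}


def can_transition(src: JobStatus, dst: JobStatus) -> bool:
    """Return True if the status change is allowed by the model above."""
    return dst in ALLOWED_TRANSITIONS.get(src, set())
```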