ЕСОЗ - Public Documentation


Event producing

Only basic validation is performed at the event-producing stage (a sketch of the flow follows this list):

  1. JSON validation
  2. job hash check
    1. if there is no job with the same hash in status pending, a new job is created in MongoDB
    2. otherwise, the API responds with 202 (Accepted) but with the previously created job id; no new job is created in MongoDB
  3. a message of the relevant type is produced to the Kafka medical_events topic
  4. the medical_events topic has multiple partitions
  5. messages related to the same patient must always be sent to the same partition
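
Taken together, these steps form a small producing pipeline. Below is a minimal sketch of it in Python, assuming kafka-python and pymongo; the collection name, hash scheme and payload shape are illustrative assumptions, not the actual ЕСОЗ implementation. Keying each message by patient id is what lets Kafka's default partitioner route all of one patient's messages to the same partition.

    import hashlib
    import json

    from kafka import KafkaProducer
    from pymongo import MongoClient

    mongo = MongoClient("mongodb://localhost:27017")
    jobs = mongo.medical_events.jobs  # hypothetical job collection

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        key_serializer=lambda k: k.encode("utf-8"),
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def produce_job(message_type: str, patient_id: str, payload: dict) -> str:
        """Create a job unless a pending job with the same hash exists,
        then produce the event to the medical_events topic."""
        job_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode("utf-8")
        ).hexdigest()

        # Steps 2.1 / 2.2: job hash check - reuse a pending job with the same hash.
        existing = jobs.find_one({"hash": job_hash, "status": "pending"})
        if existing is not None:
            return str(existing["_id"])  # respond 202 with the previous job id

        job_id = jobs.insert_one(
            {"hash": job_hash, "status": "pending", "type": message_type}
        ).inserted_id

        # The key makes Kafka's default (hash) partitioner send every
        # message for this patient to the same partition.
        producer.send(
            "medical_events",
            key=patient_id,
            value={"job_id": str(job_id), "type": message_type, "payload": payload},
        )
        return str(job_id)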

Supported message types (can be extended; a dispatch sketch follows this list):

approval_create_job: event for creating an Approval
approval_resend_job: event for resending an Approval
diagnostic_report_package_cancel_job: event for canceling a Diagnostic report package
diagnostic_report_package_create_job: event for creating a Diagnostic report package
episode_cancel_job: event for canceling an Episode
episode_close_job: event for closing an Episode
episode_create_job: event for creating an Episode
episode_update_job: event for updating an Episode
package_cancel_job: event for canceling a Package
package_create_job: event for creating a Package
procedure_cancel_job: event for canceling a Procedure
procedure_create_job: event for creating a Procedure
service_request_cancel_job: event for canceling a Service request
service_request_close_job: event for closing a Service request
service_request_complete_job: event for completing a Service request
service_request_create_job: event for creating a Service request
service_request_process_job: event for processing a Service request
service_request_recall_job: event for recalling a Service request
service_request_release_job: event for releasing a Service request
service_request_resend_job: event for resending a Service request
service_request_use_job: event for using a Service request
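
Since the list of types can be extended, one natural shape for the consumer side is a dispatch table from message type to handler. The sketch below is purely illustrative: the handler names and signatures are assumptions, not the actual event_consumer API.

    def handle_episode_create(payload: dict) -> None:
        ...  # business logic for creating an Episode

    def handle_episode_close(payload: dict) -> None:
        ...  # business logic for closing an Episode

    # One entry per supported message type; extending the system means
    # registering a new handler here.
    HANDLERS = {
        "episode_create_job": handle_episode_create,
        "episode_close_job": handle_episode_close,
        # ...the remaining job types from the list above
    }

    def dispatch(message: dict) -> None:
        HANDLERS[message["type"]](message["payload"])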

Event processing

  1. All messages from the same partition are processed in the same order as they were produced.
  2. event_consumer is not connected to any partitions or to Kafka; it is just a worker that processes whatever is sent to it, so it can be scaled without fear. Message order per partition is guaranteed by the kafka_consumer service, which acts as a message router and cannot be scaled.
  3. kafka_consumer reads events from the Kafka topic, proxies them to event_consumer workers, and commits offsets to Kafka.
  4. kafka_consumer commits offsets in batches, after all the messages from a batch have been processed.
  5. event_consumer implements all the processing business logic for every specific message type.
  6. event_consumer is idempotent. If event_consumer fails to process an event, it crashes and restarts. kafka_consumer restarts the process that served this particular event_consumer worker, and the offset for the whole batch of events is not committed to Kafka. The same batch is therefore delivered to kafka_consumer again and reprocessed, with the same result.
  7. Messages are processed by workers (event_consumer) one by one, but are committed to Kafka in batches (see the sketch after this list).
  8. Any state change of the patient data in MongoDB is performed in a transaction.
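
The sketch below combines rules 3, 4, 6 and 8, assuming kafka-python and pymongo; apart from the library calls, every name and the processing body are assumptions, not the real services. Offsets are committed manually only after the whole polled batch has been processed, and each event is handled inside a MongoDB transaction, so a redelivered batch is simply reprocessed with the same result.

    import json

    from kafka import KafkaConsumer
    from pymongo import MongoClient

    mongo = MongoClient("mongodb://localhost:27017")  # transactions require a replica set

    consumer = KafkaConsumer(
        "medical_events",
        bootstrap_servers="localhost:9092",
        group_id="kafka_consumer",
        enable_auto_commit=False,  # offsets are committed manually, per batch
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    def process(event: dict) -> None:
        """Stand-in for an event_consumer worker: idempotent and transactional."""
        with mongo.start_session() as session:
            with session.start_transaction():
                # All patient-data state changes happen inside one transaction,
                # so a crash mid-event leaves no partial writes and replaying
                # the event yields the same result.
                mongo.medical_events.jobs.update_one(
                    {"_id": event["job_id"]},
                    {"$set": {"status": "processed"}},
                    session=session,
                )

    while True:
        batch = consumer.poll(timeout_ms=1000, max_records=100)
        for records in batch.values():
            for record in records:  # in production order within a partition
                process(record.value)
        if batch:
            # Commit only after every message in the batch has been processed;
            # after a crash the uncommitted batch is redelivered in full.
            consumer.commit()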

Async job processing status model

Possible statuses

Status            | Description
------------------|------------------------------------------------------------
Pending           | Job status after successful creation
Processed         | Job status after successful processing
Failed            | Job status after an expected error (e.g. error 409 or 422)
Failed_with_error | Job status after a system error (e.g. error 500)

Possible transitions between statuses

From    | To                | Description
--------|-------------------|----------------------------------
Pending | Processed         | Job successfully processed
Pending | Failed            | Job failed due to a user error
Pending | Failed_with_error | Job failed due to a system error
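
Since Pending is the only non-terminal status, the whole model reduces to a one-entry transition map. A minimal sketch of a guard over these rules (illustrative only; the stored status values are assumed to be lowercase):

    # Pending is the only status with outgoing transitions; the rest are terminal.
    ALLOWED_TRANSITIONS = {
        "pending": {"processed", "failed", "failed_with_error"},
    }

    def transition(job: dict, new_status: str) -> None:
        allowed = ALLOWED_TRANSITIONS.get(job["status"], set())
        if new_status not in allowed:
            raise ValueError(f"illegal transition {job['status']} -> {new_status}")
        job["status"] = new_status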