
Object               Volume
Patients             50.000.000
Visit                up to 40.000.000.000
Episode              up to 10.000.000.000
Encounter            up to 40.000.000.000
Observation          up to 500.000.000.000
Condition            up to 500.000.000.000
Allergy intolerance  1.000.000.000
Immunization         1.000.000.000

For the volumes prediction and the HW-sizing calculation based on it, refer to the medical events sizing, sheet "Prod -

...

Model4"
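A quick back-of-the-envelope check of the predicted volumes in the table above (the per-patient averages are derived here for illustration; they are not stated in the sizing sheet):

```python
# Totals taken from the volumes table above ("up to" upper bounds);
# the per-patient averages are derived, not taken from the sizing sheet.
patients = 50_000_000
encounters = 40_000_000_000
observations = 500_000_000_000

encounters_per_patient = encounters // patients
observations_per_patient = observations // patients

print(encounters_per_patient)    # 800
print(observations_per_patient)  # 10000
```

The resulting ~800 encounters per patient is consistent with the ~700-1000 encounter limit for a single patient's medical data profile noted in the constraints below.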


Async job processing sequence diagram

...

  1. Validation
    1. JSON validation
    2. create a job in MongoDB
    3. a package_create_job message is created in the Kafka medical_events topic
  2. package_create_job processing
    1. check that no job is currently being processed for the patient_id
      1. if none, put the patient_id into the processing queue (required to guarantee that a new request from the same patient will not be processed until the previous one has been completed)
      2. else -
    2. decode signed_content
    3. validation
    4. if validation failed:
      1. update job.status == failed
      2. error code == 4xx
    5. save signed_content
    6. If error -
      1. update job.status == failed_with_error
      2. error_code == 500
    7. else - create a package_save_patient message in the Kafka first_medical_events topic
  3. package_save_patient processing
    1. save patient data to medical_data MongoDB collection
    2. If error -
      1. update job.status == failed_with_error
      2. error_code == 500
    3. else - create a package_save_conditions message in the Kafka second_medical_events topic
  4. package_save_conditions processing
    1. save conditions to conditions MongoDB collection
    2. If error - consumer fails and after restart should process the same message
    3. else - create a package_save_observations message in the Kafka second_medical_events topic
  5. package_save_observations processing
    1. save observations to the observations MongoDB collection
    2. If error - consumer fails and after restart should process the same message
    3. else - update_job_status message is created
  6. Update job status processing
    1. remove patient_id from processing queue
    2. update job status to processed
    3. If error - consumer fails and after restart should process the same message
    4. else - done
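The per-patient serialization guard from steps 2.1 and 6.1 can be sketched in-memory as follows. This is a minimal illustration only: in the real system the processing queue would live in shared storage (e.g. MongoDB), and all class and method names here are assumptions, not taken from the actual codebase.

```python
class PatientProcessingQueue:
    """In-memory sketch of the per-patient processing queue.

    Guarantees that a new request from a patient is not processed
    until the previous one has completed (step 2.1); the job-status
    processor removes the patient_id when the job finishes (step 6.1).
    """

    def __init__(self):
        self._in_progress = set()

    def try_acquire(self, patient_id):
        # Refuse to start a new job while a previous job for the
        # same patient is still being processed.
        if patient_id in self._in_progress:
            return False
        self._in_progress.add(patient_id)
        return True

    def release(self, patient_id):
        # Called when updating the job status to "processed".
        self._in_progress.discard(patient_id)


queue = PatientProcessingQueue()
assert queue.try_acquire("patient-1") is True
assert queue.try_acquire("patient-1") is False  # previous job still running
queue.release("patient-1")
assert queue.try_acquire("patient-1") is True   # free again after release
```

A distributed implementation would additionally need a timeout or lease on each entry so that a crashed consumer cannot block a patient's queue forever.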

...

  1. Medical data profile of one patient (encounters, visits, episodes, allergy intolerance, immunizations) shouldn't exceed 16 Mbytes (~700-1000 encounters)
  2. Medical data can be accessed via the API only by patient_id and within the patient context. There is no way to get the medical data of multiple patients via a single call
  3. The system's architecture allows operation in 24/7 mode, but if the master MongoDB node goes down, there might be downtime (up to several minutes). We still need system maintenance time slots.
  4. MongoDB limitations. The most important ones:
    1. The maximum BSON document size is 16 megabytes.
    2. MongoDB supports no more than 100 levels of nesting for BSON documents.
    3. The total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.
    4. A single collection can have no more than 64 indexes.
    5. There can be no more than 31 fields in a compound index.
    6. Multikey indexes cannot cover queries over array field(s).
    7. Database size: The MMAPv1 storage engine limits each database to no more than 16000 data files. This means that a single MMAPv1 database has a maximum size of 32TB. Setting the storage.mmapv1.smallFiles option reduces this limit to 8TB.

    8. Data size: Using the MMAPv1 storage engine, a single mongod instance cannot manage a data set that exceeds the maximum virtual memory address space provided by the underlying operating system. For Linux with journaling, this is 64 TB
    9. Replica sets can have up to 50 members.
    10. Covered queries in sharded clusters: Starting in MongoDB 3.0, an index cannot cover a query on a sharded collection when run against a mongos if the index does not contain the shard key, with the following exception for the _id index: if a query on a sharded collection only specifies a condition on the _id field and returns only the _id field, the _id index can cover the query when run against a mongos even if the _id field is not the shard key.
    11. MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.
    12. A shard key cannot exceed 512 bytes.

    13. Shard Key Index Type

      A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

      A shard key index cannot be an index that specifies a multikey index, a text index or a geospatial index on the shard key fields.

    14. Shard Key is Immutable

...