Analysing medical imaging data, such as Magnetic Resonance Imaging (MRI), from large cohort studies is time-consuming. Models for automated data processing are therefore being developed to assist physicians and reduce their workload. This model uses Deep Learning (DL) to locate the body landmarks in MRI images that are required to assess adipose tissue (AT) distributions in diabetes mellitus type 2. The network is based on HCR-Net and tailored to this task: it uses a hybrid architecture that combines classification and regression. The model is trained with a grid search to optimise its hyperparameters, and it accurately predicts the separating lines between body parts on the test data.
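The listing does not include the HCR-Net code itself, but the idea of a shared backbone feeding separate classification and regression heads can be sketched as below. This is a minimal sketch assuming PyTorch; the class name, layer sizes, and the five-landmark output are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of a hybrid classification/regression network (assumed
# architecture, NOT the published HCR-Net). Layer sizes and the five-landmark
# output are illustrative only.
import torch
import torch.nn as nn


class HybridLandmarkNet(nn.Module):
    """Shared CNN backbone with two heads: a classification head that says
    whether each landmark is visible in the slice, and a regression head that
    predicts the position of the corresponding separating line."""

    def __init__(self, num_landmarks: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_landmarks)  # presence logits
        self.regressor = nn.Linear(32, num_landmarks)   # line positions

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)
        return self.classifier(features), self.regressor(features)


# Dummy forward pass on a single-channel 128x128 MRI slice.
logits, positions = HybridLandmarkNet()(torch.randn(1, 1, 128, 128))
```

In practice the two heads would be trained jointly, for example with a cross-entropy loss on the classification output and a mean-squared-error loss on the regression output, and the grid search mentioned above would sweep hyperparameters such as the learning rate and batch size.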
Input Variables : MRI images
Output Variables : Body parts of interest (wrists, shoulder joints, diaphragm/liver dome, hips, ankles)
Statistical : Somers' D | Accuracy | Precision and Recall | Confusion Matrix | F1 Score | ROC and AUC | Prevalence | Detection Rate | Balanced Accuracy | Cohen's Kappa | Concordance | Gini Coefficient | KS Statistic | Youden's J Index (a few of these are worked through in the sketch after this listing)
Infrastructure : Log Bytes | Logging/User/IAMPolicy | Logging/User/VPN | CPU Utilization | Memory Usage | Error Count | Prediction Count | Prediction Latencies | Private Endpoint Prediction Latencies | Private Endpoint Response Count
Visit Model : github.com
Additional links : ieeexplore.ieee.org
Model Category : Public
Date Published : October 2020
Healthcare Domain : Medical Technology Provider | Medical Imaging | Image Processing
Code : github.com
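As a small worked example for the statistical metrics listed above, a few of them follow directly from a binary confusion matrix. The counts below are illustrative assumptions only, not results reported for this model.

```python
# Illustrative metric calculations from an assumed binary confusion matrix
# (tp/fp/fn/tn counts are made up for the example).
def summary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)   # recall / true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / total
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_o = accuracy
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": sensitivity,
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
        "balanced_accuracy": (sensitivity + specificity) / 2,
        "youdens_j": sensitivity + specificity - 1,
        "cohens_kappa": (p_o - p_e) / (1 - p_e),
    }


print(summary_metrics(tp=90, fp=10, fn=5, tn=95))
```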