Automated segmentation of gastrointestinal organs using the TATIMPA network: a novel robust comparative deep learning approach with integrated multi-pyramidal attention
Singh D.P., Banerjee T., Chandar K.P., Tanwar V.S., Tummala S., Narayan Y., Byrapuneni L.P., Singh R.M., Charan P., Swain D.
Network Modeling Analysis in Health Informatics and Bioinformatics, 2026, DOI Link
Sensitive and precise segmentation of the stomach, small bowel, and large bowel is fundamental to enhancing diagnostic accuracy, optimizing treatment strategies, and facilitating surgical procedures. The variability and complexity of the GI tract’s anatomy pose challenges to traditional segmentation approaches. In this work, we present the TATIMPA (Tri Attribute T-Net Integrated Multi-Pyramidal Attention) network, an advanced encoder-decoder deep learning system with multi-scale attention and residual learning tailored for organ segmentation. The proposed model features tri-attribute feature extraction, dilated convolutions, and a novel hybrid loss function, Banerjee’s Coefficient, which improves segmentation boundary delineation and morphological coherence. Trained and validated on a specially curated medical dataset of annotated gastrointestinal images, TATIMPA achieved training Dice Coefficient scores of 0.9932 for the stomach, 0.9952 for the small bowel, and 0.9952 for the large bowel. The model’s testing and validation scores remained comparably high, with recall values over 0.99 in all phases. The model also surpassed state-of-the-art U-Net, ER-Net, and Res-UNet based implementations on the key benchmark metrics: Dice, Jaccard Index, and F2 score. Robustness and consistency were assessed using ANOVA, Friedman, and Bland–Altman tests, confirming generalizability and independence from the specific sample population. These results position TATIMPA as a strong candidate for clinical use in automated diagnosis, preoperative planning, and longitudinal patient monitoring, improving precision and reliability in gastrointestinal healthcare.
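The Dice and Jaccard metrics reported above have standard definitions for binary masks; a minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard = |A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```

The small `eps` guards against division by zero when both masks are empty.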
Lightweight Residual Multi-Head Convolution with Channel Attention (ResMHCNN) for End-to-End Classification of Medical Images
Tummala S., Chauhdary S.H., Singh V., Kumar R., Kadry S., Kim J.
CMES - Computer Modeling in Engineering and Sciences, 2025, DOI Link
Lightweight deep learning models are increasingly required in resource-constrained environments such as mobile devices and the Internet of Medical Things (IoMT). Multi-head convolution with channel attention can facilitate learning activations relevant to different kernel sizes within a multi-head convolutional layer. Therefore, this study investigates the capability of novel lightweight models incorporating residual multi-head convolution with channel attention (ResMHCNN) blocks to classify medical images. We introduced three novel lightweight deep learning models (BT-Net, LCC-Net, and BC-Net) utilizing the ResMHCNN block as their backbone. These models were cross-validated and tested on three publicly available medical image datasets: a brain tumor dataset from Figshare consisting of T1-weighted magnetic resonance imaging slices of meningioma, glioma, and pituitary tumors; the LC25000 dataset, which includes microscopic images of lung and colon cancers; and the BreaKHis dataset, containing benign and malignant breast microscopic images. The lightweight models achieved accuracies of 96.9% for 3-class brain tumor classification using BT-Net, and 99.7% for 5-class lung and colon cancer classification using LCC-Net. For 2-class breast cancer classification, BC-Net achieved an accuracy of 96.7%. The parameter counts for the proposed lightweight models—LCC-Net, BC-Net, and BT-Net—are 0.528, 0.226, and 1.154 million, respectively. The presented lightweight models, featuring ResMHCNN blocks, may be effectively employed for accurate medical image classification. In the future, these models might be tested for viability in resource-constrained systems such as mobile devices and IoMT platforms.
A Hybrid Deep Learning Model for Enhanced Structural Damage Detection: Integrating ResNet50, GoogLeNet, and Attention Mechanisms †
Singh V., Baral A., Kumar R., Tummala S., Noori M., Yadav S.V., Kang S., Zhao W.
Sensors, 2024, DOI Link
Quick and accurate structural damage detection is essential for maintaining the safety and integrity of infrastructure, especially following natural disasters. Traditional methods of damage assessment, which rely on manual inspections, can be labor-intensive and subject to human error. This paper introduces a hybrid deep learning model that combines the capabilities of ResNet50 and GoogLeNet, further enhanced by a convolutional block attention module (CBAM), to improve both accuracy and performance in detecting structural damage. For training, a diverse dataset of images depicting both damaged and undamaged structural cases was used. To further enhance robustness, data augmentation techniques were also employed. Precision, recall, F1-score, and accuracy were employed to evaluate the effectiveness of the introduced model. Our findings indicate that the hybrid deep neural network introduced in this study significantly outperformed standalone architectures such as ResNet50 and GoogLeNet, making it a highly effective solution for applications in disaster response and infrastructure maintenance.
A 3D Sparse Autoencoder for Fully Automated Quality Control of Affine Registrations in Big Data Brain MRI Studies
Thadikemalla V.S.G., Focke N.K., Tummala S.
Journal of Imaging Informatics in Medicine, 2024, DOI Link
This paper presents a fully automated pipeline using a sparse convolutional autoencoder for quality control (QC) of affine registrations in large-scale T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) studies. Here, a customized 3D convolutional encoder-decoder (autoencoder) framework is proposed, and the network is trained in a fully unsupervised manner. For cross-validating the proposed model, we used 1000 correctly aligned MRI images from the human connectome project young adult (HCP-YA) dataset. We proposed that the quality of the registration is proportional to the reconstruction error of the autoencoder. Further, to make this method applicable to unseen datasets, we proposed a dataset-specific optimal threshold calculation (using the reconstruction error) from ROC analysis, which requires a subset of the correctly aligned images and artificially generated misalignments specific to that dataset. The calculated optimal threshold is used for testing the quality of the remaining affine registrations from the corresponding datasets. The proposed framework was tested on four unseen datasets: autism brain imaging data exchange (ABIDE I, 215 subjects), information eXtraction from images (IXI, 577 subjects), Open Access Series of Imaging Studies (OASIS4, 646 subjects), and the “Food and Brain” study (77 subjects). The framework achieved excellent performance for T1w and T2w affine registrations, with an accuracy of 100% for HCP-YA. On the four unseen datasets, it obtained accuracies of 81.81% for ABIDE I (T1w only), 93.45% (T1w) and 81.75% (T2w) for OASIS4, 92.59% for the “Food and Brain” study (T1w only), and 88–97% for IXI (both T1w and T2w, stratified by scanner vendor and magnetic field strength). Moreover, the real failures from the “Food and Brain” and OASIS4 datasets were detected with sensitivities of 100% and 80% for T1w and T2w, respectively. In addition, AUCs of > 0.88 were obtained in all scenarios during threshold calculation on the four test sets.
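The dataset-specific threshold selection described above can be sketched as follows, assuming a simple Youden's-J criterion over reconstruction errors (the exact ROC procedure in the paper may differ; names are illustrative):

```python
import numpy as np

def optimal_threshold(errors_aligned, errors_misaligned):
    """Pick the reconstruction-error cutoff maximizing Youden's J =
    sensitivity + specificity - 1, scanning all observed error values."""
    errors = np.concatenate([errors_aligned, errors_misaligned])
    labels = np.concatenate([np.zeros(len(errors_aligned)),
                             np.ones(len(errors_misaligned))])
    best_t, best_j = None, -1.0
    for t in np.unique(errors):
        pred = errors >= t                 # flag as misaligned above the cutoff
        tpr = pred[labels == 1].mean()     # sensitivity on misaligned images
        tnr = (~pred)[labels == 0].mean()  # specificity on aligned images
        j = tpr + tnr - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t
```

Registrations from the remaining, unlabeled images would then be flagged when their reconstruction error exceeds the returned cutoff.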
An Explainable Classification Method Based on Complex Scaling in Histopathology Images for Lung and Colon Cancer
Tummala S., Kadry S., Nadeem A., Rauf H.T., Gul N.
Diagnostics, 2023, DOI Link
Lung and colon cancers are among the leading causes of human mortality and morbidity. Early diagnostic work-up of these diseases includes radiography, ultrasound, magnetic resonance imaging, and computed tomography. Certain blood tumor markers for carcinoma of the lung and colon also aid in the diagnosis. Despite laboratory and diagnostic imaging, histopathology remains the gold standard, providing cell-level images of the tissue under examination. Reading these images demands a large amount of a histopathologist’s time, and conventional diagnostic methods also involve high-end equipment. This leads to a limited number of patients receiving a final diagnosis and early treatment, in addition to the risk of inter-observer errors. In recent years, deep learning has shown promising results in the medical field, helping with early diagnosis and treatment according to disease severity. We propose an automated method for detecting lung (lung adenocarcinoma, lung benign, and lung squamous cell carcinoma) and colon (colon adenocarcinoma and colon benign) cancer subtypes from LC25000 histopathology images using EfficientNetV2 models that have been cross-validated and tested fivefold. EfficientNetV2 is a state-of-the-art deep learning architecture based on the principles of compound scaling and progressive learning; its large, medium, and small variants were evaluated. An accuracy of 99.97%, AUC of 99.99%, F1-score of 99.97%, balanced accuracy of 99.97%, and Matthew’s correlation coefficient of 99.96% were obtained on the test set using the EfficientNetV2-L model for the 5-class classification of lung and colon cancers, outperforming the existing methods. Using gradCAM, we created visual saliency maps to precisely locate the vital regions in the test histopathology images where the models put more attention during cancer subtype predictions. These visual saliency maps may assist pathologists in designing better treatment strategies. Therefore, the proposed pipeline could be used in clinical settings for fully automated, explainable lung and colon cancer detection from histopathology images.
EfficientNetV2 Based Ensemble Model for Quality Estimation of Diabetic Retinopathy Images from DeepDRiD
Tummala S., Thadikemalla V.S.G., Kadry S., Sharaf M., Rauf H.T.
Diagnostics, 2023, DOI Link
Diabetic retinopathy (DR) is one of the major complications caused by diabetes and is usually identified from retinal fundus images. Screening of DR from digital fundus images could be time-consuming and error-prone for ophthalmologists. For efficient DR screening, good quality of the fundus image is essential and thereby reduces diagnostic errors. Hence, in this work, an automated method for quality estimation (QE) of digital fundus images using an ensemble of recent state-of-the-art EfficientNetV2 deep neural network models is proposed. The ensemble method was cross-validated and tested on one of the largest openly available datasets, the Deep Diabetic Retinopathy Image Dataset (DeepDRiD). We obtained a test accuracy of 75% for the QE, outperforming the existing methods on the DeepDRiD. Hence, the proposed ensemble method may be a potential tool for automated QE of fundus images and could be handy to ophthalmologists.
Few-shot learning using explainable Siamese twin network for the automated classification of blood cells
Medical and Biological Engineering and Computing, 2023, DOI Link
Automated classification of blood cells from microscopic images is an interesting research area owing to advances in efficient neural network models. Existing deep learning methods rely on large datasets for network training, and generating such large datasets can be time-consuming. Further, explainability via class activation mapping is required for a better understanding of model predictions. Therefore, we developed a Siamese twin network (STN) model based on contrastive learning that trains on relatively few images for the classification of healthy peripheral blood cells, using EfficientNet-B3 as the base model. In this study, a total of 17,092 publicly accessible cell histology images were analyzed, of which 6% were used for STN training, 6% for few-shot validation, and the remaining 88% for few-shot testing. The proposed architecture demonstrates percent accuracies of 97.00, 98.78, 94.59, 95.70, 98.86, 97.09, 99.71, and 96.30 during 8-way 5-shot testing for the classification of basophils, eosinophils, immature granulocytes, erythroblasts, lymphocytes, monocytes, platelets, and neutrophils, respectively. Further, we propose a novel class activation mapping scheme that highlights the important regions in the test image for STN model interpretability. Overall, the proposed framework could be used for fully automated, self-exploratory classification of healthy peripheral blood cells. Graphical abstract: the proposed framework demonstrates Siamese twin network training and 8-way k-shot testing, where the values indicate the amount of dissimilarity.
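The contrastive-learning objective mentioned above can be illustrated with the standard contrastive loss on an embedding pair; this is a generic sketch using an L1 embedding distance, not the paper's implementation:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
    """Contrastive loss on one embedding pair: pull same-class pairs
    together, push different-class pairs at least `margin` apart.
    Distance is the L1 norm between the two embedding vectors."""
    d = np.abs(emb_a - emb_b).sum()
    if same_class:
        return d ** 2                      # penalize any separation
    return max(0.0, margin - d) ** 2       # penalize only if too close
```

During training, positive (same cell type) and negative (different cell type) image pairs contribute through the two branches of this loss, which is what lets the network learn from relatively few labeled images.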
BreaST-Net: Multi-Class Classification of Breast Cancer from Histopathological Images Using Ensemble of Swin Transformers
Tummala S., Kim J., Kadry S.
Mathematics, 2022, DOI Link
Breast cancer (BC) is one of the deadliest forms of cancer, causing substantial mortality in the female population worldwide. The standard imaging procedures for screening BC are mammography and ultrasonography. However, these imaging procedures cannot differentiate subtypes of benign and malignant cancers. Here, histopathology images could provide better sensitivity toward benign and malignant cancer subtypes. Recently, vision transformers have been gaining attention in medical imaging due to their success in various computer vision tasks. The Swin transformer (SwinT) is a variant of the vision transformer that works on the concept of non-overlapping shifted windows and is a proven method for various vision detection tasks. Thus, in this study, we investigated the ability of an ensemble of SwinTs in the two-class classification of benign vs. malignant cancers and the eight-class classification of four benign and four malignant subtypes, using the openly available BreaKHis dataset containing 7909 histopathology images acquired at zoom factors of 40×, 100×, 200×, and 400×. The ensemble of SwinTs (tiny, small, base, and large) demonstrated an average test accuracy of 96.0% for the eight-class and 99.6% for the two-class classification, outperforming all previous works. Thus, an ensemble of SwinTs could identify BC subtypes from histopathological images and may ease pathologists’ workload.
Classification of Brain Tumor from Magnetic Resonance Imaging Using Vision Transformers Ensembling
Tummala S., Kadry S., Bukhari S.A.C., Rauf H.T.
Current Oncology, 2022, DOI Link
The automated classification of brain tumors plays an important role in supporting radiologists in decision making. Recently, vision transformer (ViT)-based deep neural network architectures have gained attention in the computer vision research domain owing to the tremendous success of transformer models in natural language processing. Hence, in this study, the ability of an ensemble of standard ViT models for the diagnosis of brain tumors from T1-weighted (T1w) magnetic resonance imaging (MRI) is investigated. ViT models (B/16, B/32, L/16, and L/32) pretrained and finetuned on ImageNet were adopted for the classification task. A brain tumor dataset from figshare, consisting of 3064 T1w contrast-enhanced (CE) MRI slices with meningiomas, gliomas, and pituitary tumors, was used for cross-validation and testing of the ensemble ViT model’s ability to perform a three-class classification task. The best individual model was L/32, with an overall test accuracy of 98.2% at 384 × 384 resolution. The ensemble of all four ViT models demonstrated an overall testing accuracy of 98.7% at the same resolution, outperforming the individual models at both resolutions and their ensemble at 224 × 224 resolution. In conclusion, an ensemble of ViT models could be deployed for the computer-aided diagnosis of brain tumors based on T1w CE MRI, easing radiologists’ workload.
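Ensembling of model outputs, as used above, commonly means averaging the per-class probabilities of the member models before taking the argmax; a minimal sketch (not necessarily the paper's exact scheme):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the softmax outputs of several models, then take argmax.
    prob_list: list of (n_samples, n_classes) probability arrays,
    one per model in the ensemble."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg.argmax(axis=1)
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh an uncertain one, which is one reason such ensembles can beat their best individual member.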
3D Deep Convolutional Neural Network for Detection of Anomalous Rigid and Affine Registrations in Big Data Brain MRI
Tummala J., Mareedu M., Alla R.N., Kunderu L.S.D., Tummala S.
2021 IEEE Bombay Section Signature Conference, IBSSC 2021, 2021, DOI Link
Registration to a reference image is an important step during preprocessing of structural brain magnetic resonance imaging (MRI) in group studies for disease diagnosis and prognosis. Manual quality control (QC) of this many images is time-consuming, tedious, and requires prior expertise. Owing to the availability of free MRI datasets and recent advances in computational infrastructure and deep learning frameworks, it is now feasible to train a deep learning model on larger datasets. To facilitate fully automatic QC in large-scale MRI studies, we proposed 3D deep convolutional neural network models for checking rigid and affine registrations of T1-weighted and T2-weighted MRI data. Because this is a supervised learning approach, five artificially misaligned images were generated for each image type and registration type. The proposed models were cross-validated and tested on the IXI dataset, consisting of 580 T1w and 576 T2w images, 80 percent of which were used for cross-validation and the remainder for testing. Performance metrics such as accuracy, F1-score, recall, precision, and specificity demonstrated values greater than or equal to 0.99. Therefore, the models could be deployed for fully automatic QC of rigid and affine registrations in big-data structural MRI processing pipelines.
Deep Learning Framework using Siamese Neural Network for Diagnosis of Autism from Brain Magnetic Resonance Imaging
Tummala S.
2021 6th International Conference for Convergence in Technology, I2CT 2021, 2021, DOI Link
Autism spectrum disorder (ASD) is characterized by structural and functional brain changes in areas that support memory, attention, and social interaction, functions that are deficient in the disorder. The aim of this research is to develop a deep learning framework using Siamese neural networks for computer-aided diagnosis of ASD using T1-weighted magnetic resonance imaging (MRI) of 102 control subjects and 112 ASD patients from the autism brain imaging data exchange. Preprocessing of the images involves reorientation to a standard space and cropping, followed by affine registration to a template. A Siamese neural network (SNN) with a pretrained ResNet50 model was employed for this study. After preprocessing, the affine-registered images are downsampled and reshaped to match the required input size of ResNet50. Further, 1070 positive and negative image pairs are formed for training and validation of the SNN model. The final layer of ResNet50 is globally averaged, and an extra dense layer is added that represents the input image embedding. The L1-distance is then computed between the embeddings of the two inputs and used to backpropagate the error computed with the contrastive loss function. The quality metrics used during 5-fold stratified cross-validation are accuracy, recall, precision, and F1-score, and these metrics reached a value of 0.99 during validation. Therefore, the developed SNN-based tool could be used for the diagnosis of autism from T1-weighted MRI.
Machine learning framework for fully automatic quality checking of rigid and affine registrations in big data brain MRI
Tummala S., Focke N.K.
Proceedings - International Symposium on Biomedical Imaging, 2021, DOI Link
Rigid and affine registrations to a common template are essential steps during pre-processing of brain structural magnetic resonance imaging (MRI) data. Manual quality control (QC) of these registrations is quite tedious if the data contains several thousand images. Therefore, we propose a machine learning (ML) framework for fully automatic QC of these registrations via global and local computation of similarity functions such as normalized cross-correlation, normalized mutual information, and correlation ratio, using these as features for training different ML classifiers. To facilitate supervised learning, misaligned images are generated. A structural MRI dataset consisting of 215 subjects from the autism brain imaging data exchange is used for 5-fold cross-validation and testing. ML models based on local costs performed better than models with global costs. Local-cost-based random forest and AdaBoost models reached testing F1-scores and balanced accuracies of 0.98 and 0.95, respectively, for QC of both rigid and affine registrations.
Fully automated quality control of rigid and affine registrations of T1w and T2w MRI in big data using machine learning
Tummala S., Thadikemalla V.S.G., Kreilkamp B.A.K., Dam E.B., Focke N.K.
Computers in Biology and Medicine, 2021, DOI Link
Background: Magnetic resonance imaging (MRI)-based morphometry and relaxometry are proven methods for the structural assessment of the human brain in several neurological disorders. These procedures are generally based on T1-weighted (T1w) and/or T2-weighted (T2w) MRI scans, and rigid and affine registrations to a standard template(s) are essential steps in such studies. Therefore, a fully automatic quality control (QC) of these registrations is necessary in big data scenarios to ensure that they are suitable for subsequent processing. Method: A supervised machine learning (ML) framework is proposed by computing similarity metrics such as normalized cross-correlation, normalized mutual information, and correlation ratio locally. We have used these as candidate features for cross-validation and testing of different ML classifiers. For 5-fold repeated stratified grid search cross-validation, 400 correctly aligned and 2000 randomly generated misaligned images were used from the human connectome project young adult (HCP-YA) dataset. To test the cross-validated models, the datasets from autism brain imaging data exchange (ABIDE I) and information eXtraction from images (IXI) were used. Results: The ensemble classifiers, random forest and AdaBoost, yielded the best performance with F1-scores, balanced accuracies, and Matthews correlation coefficients in the range of 0.95–1.00 during cross-validation. The predictive accuracies reached 0.99 on Test set #1 (ABIDE I), and 0.99 without and 0.96 with noise on Test set #2 (IXI, stratified with respect to scanner vendor and field strength). Conclusions: The cross-validated and tested ML models could be used for QC of both T1w and T2w rigid and affine registrations in large-scale MRI studies.
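The locally computed similarity metrics described above can be illustrated with normalized cross-correlation over image sub-blocks; a simplified 2D sketch (the paper works with 3D volumes, and the block size here is an assumption):

```python
import numpy as np

def normalized_cross_correlation(a, b, eps=1e-8):
    """Global NCC between two images; a local variant applies this
    within sub-blocks and uses the values as classifier features."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + eps))

def local_ncc_features(img, ref, block=32):
    """Compute NCC per non-overlapping block; returns a feature vector
    suitable as input to an ML classifier (e.g., random forest)."""
    feats = []
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            feats.append(normalized_cross_correlation(
                img[i:i + block, j:j + block], ref[i:i + block, j:j + block]))
    return np.array(feats)
```

A well-registered image yields block-wise NCC values near 1 against the template, while a misaligned one produces low values in some blocks, which is what makes local costs more discriminative than a single global score.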
Brain tissue entropy changes in patients with autism spectrum disorder
Tummala S.
Lecture Notes in Computational Vision and Biomechanics, 2019, DOI Link
Autism Spectrum Disorder (ASD) is accompanied by brain tissue changes in areas that control behavior, cognition, and motor functions, all of which are deficient in the disorder. The objective of this research was to evaluate brain structural changes in ASD patients compared to control subjects using voxel-by-voxel image entropy from T1-weighted imaging data of 115 ASD and 105 control subjects from the autism brain imaging data exchange. For all subjects, entropy maps were calculated, normalized to a common space, and smoothed. The entropy maps were then compared at each voxel between groups using analysis of covariance (covariates: age, gender). Increased entropy in ASD patients, indicating chronic injury, emerged in several vital regions including frontal, temporal, and parietal lobe regions, the corpus callosum, cingulate cortices, and hippocampi. The entropy procedure showed a significant effect size and demonstrated widespread changes in sites that control social behavior, cognition, and motor activities, suggesting severe damage in those areas. The neuropathological mechanisms contributing to tissue injury remain unclear and are possibly due to factors including genetics and atypical early brain growth during childhood.
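The voxel-by-voxel image entropy used above is typically Shannon entropy of a local intensity histogram; a minimal sketch (the bin count is an assumption, not from the paper):

```python
import numpy as np

def image_entropy(patch, bins=32):
    """Shannon entropy (in bits) of a patch's intensity histogram.
    Uniformly spread intensities give high entropy; a constant patch
    gives zero."""
    hist, _ = np.histogram(patch, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 stays finite
    return float(-(p * np.log2(p)).sum())
```

Applying this to a small neighborhood around each voxel produces the entropy map, which is then normalized and smoothed before the group comparison.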
Gender Differences in Knee Joint Congruity Quantified from MRI: A Validation Study with Data from Center for Clinical and Basic Research and Osteoarthritis Initiative
Tummala S., Schiphof D., Byrjalsen I., Dam E.B.
Cartilage, 2018, DOI Link
Objective: Gender is a risk factor in the onset of osteoarthritis (OA). The aim of the study was to investigate gender differences in contact area (CA) and congruity index (CI) in the medial tibiofemoral (MTF) joint in 2 different cohorts, quantified automatically from magnetic resonance imaging (MRI). Design: The CA and CI markers were validated on 2 different data sets from the Center for Clinical and Basic Research (CCBR) and the Osteoarthritis Initiative (OAI). The CCBR cohort consisted of 159 subjects and the OAI subcohort of 1,436 subjects. From the MTF joint, the contact area was located and quantified using the Euclidean distance transform. Furthermore, the CI was quantified over the contact area by assessing agreement of the first- and second-order general surface features. The gender differences in CA and CI values were then evaluated at different stages of radiographic OA. Results: Female CAs were significantly higher than male CAs after normalization, while male CIs were significantly higher than female CIs after correcting for age and body mass index (P < 0.05), consistent across the 2 data sets. For the OAI data set, the gender differences were present at all stages of radiographic OA. Conclusion: This study demonstrated gender differences in CA and CI in MTF joints. The higher normalized CA and lower CI values in female knees may be linked with the increased risk of incident radiographic OA in females. These differences may help further understand the gender differences and/or establish gender-specific treatment strategies.
Non-Gaussian diffusion imaging shows brain myelin and axonal changes in obstructive sleep apnea
Tummala S., Roy B., Vig R., Park B., Kang D.W., Woo M.A., Aysola R., Harper R.M., Kumar R.
Journal of Computer Assisted Tomography, 2017, DOI Link
Objective: Obstructive sleep apnea (OSA) is accompanied by brain changes in areas that regulate autonomic, cognitive, and mood functions, which were initially examined by Gaussian-based diffusion tensor imaging measures, but can be better assessed with non-Gaussian measures. We aimed to evaluate axonal and myelin changes in OSA using axial (AK) and radial kurtosis (RK) measures. Materials and Methods: We acquired diffusion kurtosis imaging data from 22 OSA and 26 control subjects; AK and RK maps were calculated, normalized, smoothed, and compared between groups using analysis of covariance. Results: Increased AK, indicating axonal changes, emerged in the insula, hippocampus, amygdala, dorsolateral pons, and cerebellar peduncles, showing more axonal injury over previously identified damage. Higher RK, showing myelin changes, appeared in the hippocampus, amygdala, temporal and frontal lobes, insula, midline pons, and cerebellar peduncles, showing more widespread myelin damage over previously identified injury. Conclusions: AK and RK measures showed widespread changes over Gaussian-based techniques, suggesting a more sensitive nature of kurtoses to injury.
Associations between brain white matter integrity and disease severity in obstructive sleep apnea
Tummala S., Roy B., Park B., Kang D.W., Woo M.A., Harper R.M., Kumar R.
Journal of Neuroscience Research, 2016, DOI Link
Obstructive sleep apnea (OSA) is characterized by recurrent upper airway blockage, with continued diaphragmatic efforts to breathe during sleep. Brain structural changes in OSA appear in various regions, including white matter sites that mediate autonomic, mood, cognitive, and respiratory control. However, the relationships between brain white matter changes and disease severity in OSA are unclear. This study examines associations between an index of tissue integrity, magnetization transfer (MT) ratio values (which show MT between free and proton pools associated with tissue membranes and macromolecules), and disease severity (apnea-hypopnea index [AHI]) in OSA subjects. We collected whole-brain MT imaging data from 19 newly diagnosed, treatment-naïve OSA subjects (50.4 ± 8.6 years of age, 13 males, AHI 39.7 ± 24.3 events/hr), using a 3.0-Tesla MRI scanner. With these data, whole-brain MT ratio maps were calculated, normalized to common space, smoothed, and correlated with AHI scores using partial correlation analyses (covariates, age and gender; P < 0.005). Multiple brain sites in OSA subjects, including superior and inferior frontal regions, ventral medial prefrontal cortex and nearby white matter, midfrontal white matter, insula, cingulate and cingulum bundle, internal and external capsules, caudate nuclei and putamen, basal forebrain, hypothalamus, corpus callosum, and temporal regions, showed principally lateralized negative correlations (P < 0.005). These regions showed significant correlations even with correction for multiple comparisons (cluster-level, family-wise error, P < 0.05), except for a few superior frontal areas. Predominantly negative correlations emerged between local MT values and OSA disease severity, indicating the potential usefulness of MT imaging for examining the OSA condition. These findings indicate that OSA severity plays a significant role in white matter injury.
Global and regional brain non-Gaussian diffusion changes in newly diagnosed patients with obstructive sleep apnea
Tummala S., Palomares J., Kang D.W., Park B., Woo M.A., Harper R.M., Kumar R.
Sleep, 2016, DOI Link
Study Objectives: Obstructive sleep apnea (OSA) patients show brain structural injury and functional deficits in autonomic, affective, and cognitive regulatory sites, as revealed by mean diffusivity (MD) and other imaging procedures. The time course and nature of gray and white matter injury can be revealed in more detail with mean kurtosis (MK) procedures, which can differentiate acute from chronic injury, and better show extent of damage over MD procedures. Our objective was to examine global and regional MK changes in newly diagnosed OSA, relative to control subjects. Methods: Two diffusion kurtosis image series were collected from 22 recently-diagnosed, treatment-naïve OSA and 26 control subjects using a 3.0-Tesla MRI scanner. MK maps were generated, normalized to a common space, smoothed, and compared voxel-by-voxel between groups using analysis of covariance (covariates; age, sex). Results: No age or sex differences appeared, but body mass index, sleep, neuropsychologic, and cognitive scores significantly differed between groups. MK values were significantly increased globally in OSA over controls, and in multiple localized sites, including the basal forebrain, extending to the hypothalamus, hippocampus, thalamus, insular cortices, basal ganglia, limbic regions, cerebellar areas, parietal cortices, ventral temporal lobe, ventrolateral medulla, and midline pons. Multiple sites, including the insular cortices, ventrolateral medulla, and midline pons showed more injury over previously identified damage with MD procedures, with damage often lateralized. Conclusions: Global mean kurtosis values are significantly increased in obstructive sleep apnea (OSA), suggesting acute tissue injury, and these changes are principally localized in critical sites mediating deficient functions in the condition. The mechanisms for injury likely include altered perfusion and hypoxemia-induced processes, leading to acute tissue changes in recently diagnosed OSA.
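For context, the mean kurtosis metric discussed above derives from the standard diffusion kurtosis imaging (DKI) signal model (the equation below is the textbook DKI expansion, not reproduced from the abstract):

```latex
\ln S(b) = \ln S_0 - b\,D_{\mathrm{app}} + \tfrac{1}{6}\, b^{2}\, D_{\mathrm{app}}^{2}\, K_{\mathrm{app}}
```

Here \(S(b)\) is the diffusion-weighted signal at b-value \(b\), \(D_{\mathrm{app}}\) the apparent diffusivity, and \(K_{\mathrm{app}}\) the apparent kurtosis along one encoding direction; mean kurtosis (MK) averages \(K_{\mathrm{app}}\) over all diffusion-encoding directions.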
Water Exchange across the Blood-Brain Barrier in Obstructive Sleep Apnea: An MRI Diffusion-Weighted Pseudo-Continuous Arterial Spin Labeling Study
Palomares J.A., Tummala S., Wang D.J.J., Park B., Woo M.A., Kang D.W., Lawrence K.S.S., Harper R.M., Kumar R.
Journal of Neuroimaging, 2015, DOI Link
Obstructive sleep apnea (OSA) subjects show brain injury in sites that control autonomic, cognitive, and mood functions that are deficient in the condition. The processes contributing to injury may include altered blood-brain barrier (BBB) actions. Our aim was to examine BBB function, based on diffusion-weighted pseudo-continuous arterial spin labeling (DW-pCASL) procedures, in OSA compared to controls. We performed DW-pCASL imaging in nine OSA and nine control subjects on a 3.0-Tesla MRI scanner. Global mean gray and white matter arterial transit time (ATT, an index of large artery integrity), water exchange rate across the BBB (Kw, BBB function), DW-pCASL ratio, and cerebral blood flow (CBF) values were compared between OSA and control subjects. Results: Global mean gray and white matter ATT (OSA vs. controls; gray matter, 1.691 ± .120 vs. 1.658 ± .109 seconds, P = .49; white matter, 1.700 ± .115 vs. 1.650 ± .114 seconds, P = .44) and CBF values (gray matter, 57.4 ± 15.8 vs. 58.2 ± 10.7 ml/100 g/min, P = .67; white matter, 24.2 ± 7.0 vs. 24.6 ± 6.7 ml/100 g/min, P = .91) did not differ significantly, but global gray and white matter Kw (gray matter, 158.0 ± 28.9 vs. 220.8 ± 40.6 min⁻¹, P = .002; white matter, 177.5 ± 57.2 vs. 261.1 ± 51.0 min⁻¹, P = .006) and DW-pCASL ratio (gray matter, .727 ± .076 vs. .823 ± .069, P = .011; white matter, .722 ± .144 vs. .888 ± .100, P = .004) values were significantly reduced in OSA over controls. OSA subjects show compromised BBB function, but intact large artery integrity. The BBB alterations may introduce neural damage contributing to abnormal functions in OSA, and suggest a need to repair BBB function with strategies commonly used in other fields.
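The reported gray-matter Kw group difference can be checked directly from the summary statistics quoted in the abstract above (158.0 ± 28.9 vs. 220.8 ± 40.6 min⁻¹, n = 9 per group). The abstract does not state which test was used; a pooled two-sample t-test is assumed here purely for illustration.

```python
# Reproduce the gray-matter Kw comparison from the reported summary stats.
# A pooled two-sample t-test is an assumption, not the paper's stated method.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(mean1=158.0, std1=28.9, nobs1=9,   # OSA
                            mean2=220.8, std2=40.6, nobs2=9)   # controls
# t is negative (OSA lower); p lands close to the reported P = .002
```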
Gradient competition anisotropy for centerline extraction and segmentation of spinal cords
Law M.W.K., Garvin G.J., Tummala S., Tay K., Leung A.E., Li S.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2013, DOI Link
Centerline extraction and segmentation of the spinal cord, an intensity-varying, elliptical curvilinear structure subject to strong neighboring disturbance, are extremely challenging. This study proposes the gradient competition anisotropy technique to perform spinal cord centerline extraction and segmentation. The contribution of the proposed method is threefold: (1) The gradient competition descriptor compares the image gradients obtained at different detection scales to suppress neighboring disturbance. It reliably recognizes the curvilinearity and orientations of elliptical curvilinear objects. (2) The orientation coherence anisotropy analyzes the detection responses offered by the gradient competition descriptor. It enforces structure orientation consistency to withstand the strong disturbance introduced by high-contrast neighboring objects during centerline extraction. (3) The intensity coherence segmentation quantifies the intensity difference between the centerline and the voxels in its vicinity. It effectively removes the object intensity variation along the structure to accurately delineate the target structure. Together, these components constitute the gradient competition anisotropy method, which can robustly and accurately detect the centerline and boundary of the spinal cord. The method is validated and compared on 25 clinical datasets, demonstrating that it is well suited to spinal cord centerline extraction and segmentation. © 2013 Springer-Verlag.
Automatic quantification of tibio-femoral contact area and congruity
Tummala S., Nielsen M., Lillholm M., Christiansen C., Dam E.B.
IEEE Transactions on Medical Imaging, 2012, DOI Link
We present methods to quantify the medial tibio-femoral (MTF) joint contact area (CA) and congruity index (CI) from low-field magnetic resonance imaging (MRI). Firstly, based on the segmented MTF cartilage compartments, we computed the contact area using the Euclidean distance transformation. The CA was defined as the area of the tibial superior surface and the femoral inferior surface that are less than a voxel width apart. Furthermore, the CI is computed point-by-point by assessing the first- and second-order general surface features over the contact area. Mathematically, it is the inverse distance between the local normal vectors (first-order features) scaled by the local normal curvatures (second-order features) along the local direction of principal knee motion, in a local reference coordinate system formed by the directions of principal curvature and the surface normal vector. The abilities of the CA and the CI for diagnosing osteoarthritis (OA) at different levels (disease severity was assessed using the Kellgren and Lawrence Index, KL) were cross-validated on 288 knees at baseline. Longitudinal analysis was performed on 245 knees. The precision quantified on 31 scan-rescan pairs (RMS CV) was 13.7% for CA and 7.5% for CI. The CA increased with onset of the disease and then decreased with OA progression. The CI was highest in healthy knees and decreased with the onset of OA and further with disease progression. The CI showed an AUC of 0.69 (p < 0.0001) for separating KL = 0 and KL > 0. For separating KL ≤ 1 and KL > 1 knees, the AUC for CI was 0.73 (p < 0.0001). The CA demonstrated longitudinal responsiveness (SRM) at all stages of OA, whereas the CI did for advanced OA only. Eventually, the quantified CA and CI might be suitable to help explain OA onset, diagnose (early) OA, and measure the efficacy of DMOADs in clinical trials. © 2012 IEEE.
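The contact-area rule described above (surface voxels of the two compartments less than a voxel width apart) can be sketched with a Euclidean distance transform. This is an illustrative reading, not the authors' code: the mask layout, the `<=` tolerance for face-adjacent voxels, and the crude voxel-counting area estimate are assumptions.

```python
# Sketch of the EDT-based contact-area rule (illustrative, not the paper's code).
import numpy as np
from scipy import ndimage

def contact_area(tibial_mask, femoral_mask, voxel_width=1.0):
    """Binary 3-D masks of the two cartilage compartments on the same grid."""
    # Distance from every voxel to the nearest femoral cartilage voxel
    dist_to_femoral = ndimage.distance_transform_edt(~femoral_mask,
                                                     sampling=voxel_width)
    # Tibial surface voxels: tibial voxels with at least one non-tibial neighbor
    surface = tibial_mask & ~ndimage.binary_erosion(tibial_mask)
    # Contact: surface voxels within one voxel width (center-to-center) of the
    # femur; <= keeps face-adjacent voxels, whose surfaces are in contact
    contact = surface & (dist_to_femoral <= voxel_width)
    return contact.sum() * voxel_width ** 2   # crude area estimate
```

For two slabs stacked face-to-face, only the tibial surface plane touching the femoral slab is counted.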
Automatic quantification of congruity from knee MRI
Tummala S., Dam E.B., Nielsen M.
Computational Biomechanics for Medicine: Deformation and Flow, 2012, DOI Link
Biomechanical factors may play a critical role in the initiation and progression of osteoarthritis (OA). We present a method to quantify the medial tibiofemoral (MTF) congruity from low-field magnetic resonance imaging (MRI). Firstly, the MTF cartilage compartments were segmented fully automatically using a voxel quantification approach. Further, the contact area (CA) was computed using the Euclidean distance transformation by setting the voxel width as threshold. Eventually, the congruity index (CI) was computed point-by-point over the CA as the inverse distance between the local normal vectors scaled by the local normal curvatures along the local direction of principal knee motion. The CI quantification method was cross-validated on various OA diagnosis tasks. Healthy knees were more congruent than knees with OA. These quantification methods might be suitable to help explain the onset and diagnosis of OA.
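One plausible reading of the point-wise congruity index described above can be sketched as follows. The exact formula is not spelled out in the abstract, so the combination used here (inverse of the normal-vector distance scaled by the curvature mismatch) and all names are assumptions for illustration only.

```python
# Hypothetical sketch of a point-wise congruity index: NOT the authors' exact
# formula. Assumes matched tibial/femoral samples over the contact area, with
# one set of normals flipped so corresponding normals are comparable.
import numpy as np

def congruity_index(n_tib, n_fem, k_tib, k_fem, eps=1e-6):
    """n_*: (N, 3) unit surface normals; k_*: (N,) normal curvatures along
    the principal direction of knee motion."""
    normal_dist = np.linalg.norm(n_tib - n_fem, axis=1)   # first-order term
    curvature_scale = np.abs(k_tib - k_fem)               # second-order term
    # Higher values = more congruent (well-matched normals and curvatures)
    return 1.0 / (eps + curvature_scale * normal_dist)
```

Perfectly matched surfaces give a very large CI; mismatched normals or curvatures reduce it, consistent with healthy knees scoring as more congruent.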
Diagnosis of osteoarthritis by cartilage surface smoothness quantified automatically from knee MRI
Tummala S., Bay-Jensen A.-C., Karsdal M.A., Dam E.B.
Cartilage, 2011, DOI Link
Objective: We investigated whether surface smoothness of articular cartilage in the medial tibiofemoral compartment quantified from magnetic resonance imaging (MRI) could be appropriate as a diagnostic marker of osteoarthritis (OA). Method: At baseline, 159 community-based subjects aged 21 to 81 with normal or OA-affected knees were recruited to provide a broad range of OA states. Smoothness was quantified using an automatic framework from low-field MRI in the tibial, femoral, and femoral subcompartments. Diagnostic ability of smoothness was evaluated by comparison with conventional OA markers, specifically cartilage volume from MRI, joint space width (JSW) from radiographs, and pain scores. Results: A total of 140 subjects concluded the 21-month study. Cartilage smoothness provided diagnostic ability in all compartments (P < 0.0001). The diagnostic smoothness markers performed at least comparably to JSW and were superior to volume markers (e.g., the AUC for femoral smoothness of 0.80 was higher than the 0.57 for volume, P < 0.0001, and marginally higher than 0.73 for JSW, P = 0.25). The smoothness markers allowed diagnostic detection of pain presence (P < 0.05) and showed some correlation with pain severity (e.g., r = -0.32). The longitudinal change in smoothness was correlated with cartilage loss (r up to 0.60, P < 0.0001 in all compartments). Conclusions: This study demonstrated the potential of cartilage smoothness markers for diagnosis of moderate radiographic OA. Furthermore, correlations between smoothness and pain values, and between smoothness loss and cartilage loss, supported a link to progression of OA. Thereby, smoothness markers may allow detection and monitoring of OA, supplementing currently accepted markers. © The Author(s) 2011.
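The AUC values compared above (e.g., 0.80 for femoral smoothness vs. 0.57 for volume) summarize how well a marker orders OA and healthy knees. As a generic numeric illustration (not tied to the study's data), the AUC can be computed from per-knee marker values and labels via its rank (Mann-Whitney) formulation:

```python
# Generic AUC from marker values and binary labels (illustrative only).
import numpy as np

def auc(marker, label):
    """marker: (N,) values; label: (N,) with 1 = OA, 0 = healthy."""
    pos, neg = marker[label == 1], marker[label == 0]
    # Fraction of (OA, healthy) pairs the marker orders correctly, ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A marker that perfectly separates the groups scores 1.0; a chance-level marker scores about 0.5.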
Surface smoothness: Cartilage biomarkers for knee OA beyond the radiologist
Tummala S., Dam E.B.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 2010, DOI Link
Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers. © 2010 Copyright SPIE - The International Society for Optical Engineering.
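The curvature-flow smoothness quantification described above can be illustrated with a toy 2-D analogue. This sketch is an assumption, not the published pipeline: it smooths a closed contour by repeated Laplacian (curve-shortening) steps, a standard discrete curvature flow for curves, and scores smoothness by how little the contour moves under the flow.

```python
# Toy 2-D analogue of a curvature-flow smoothness score (illustrative only).
import numpy as np

def smoothness_score(points, steps=20, dt=0.25):
    """points: (N, 2) ordered vertices of a closed contour."""
    p = points.astype(float).copy()
    for _ in range(steps):
        # Each vertex moves toward the midpoint of its neighbors:
        # a discrete Laplacian / curve-shortening (curvature flow) step
        neighbors = 0.5 * (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0))
        p += dt * (neighbors - p)
    # A rough surface moves a lot under the flow; a smooth one barely moves
    displacement = np.linalg.norm(points - p, axis=1).mean()
    return 1.0 / (1.0 + displacement)   # closer to 1 = smoother
```

A clean circle scores higher than the same circle with added vertex noise, mirroring the intuition that rough (fibrillated) cartilage surfaces yield lower smoothness.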