Purpose This study compares 3 deep learning models (UNet, TransUNet, and MIST) for left atrium (LA) segmentation of cardiac computed tomography (CT) images from patients with congenital heart disease (CHD). It investigates how architectural variations in the MIST model, such as spatial squeeze-and-excitation attention, affect the Dice score and the 95th percentile Hausdorff distance (HD95).
Methods We analyzed 108 publicly available, de-identified CT volumes from the ImageCHD dataset. Volumes underwent resampling, intensity normalization, and data augmentation. UNet, TransUNet, and MIST models were trained using 80% of 97 cases, with the remaining 20% employed for validation. Eleven cases were reserved for testing. Performance was evaluated using the Dice score (measuring overlap accuracy) and HD95 (reflecting boundary accuracy). Statistical comparisons were performed via one-way repeated measures analysis of variance.
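For readers unfamiliar with the two metrics, the following is a minimal sketch (not the study's implementation) of how the Dice score and HD95 can be computed for a pair of binary masks; the surface extraction, the default isotropic spacing, and the NumPy/SciPy functions used are assumptions of this sketch.

```python
# Minimal sketch: Dice score (volumetric overlap) and HD95 (95th percentile
# of symmetric surface distances) for two binary 3D masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return float(2.0 * np.logical_and(pred, gt).sum() / denom) if denom else 1.0

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_surf = pred ^ binary_erosion(pred)          # surface voxels of each mask
    gt_surf = gt ^ binary_erosion(gt)
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    d = np.concatenate([dist_to_gt[pred_surf], dist_to_pred[gt_surf]])
    return float(np.percentile(d, 95))               # discard the worst 5% of distances
```

Dice rewards overall overlap, whereas HD95 penalizes boundary errors while ignoring the most extreme 5% of surface distances, which is why the two metrics can rank models differently.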
Results MIST achieved the highest mean Dice score (0.74; 95% confidence interval, 0.67–0.81), significantly outperforming TransUNet (0.53; P<0.001) and UNet (0.49; P<0.001). Regarding HD95, TransUNet (9.09 mm) and MIST (5.77 mm) similarly outperformed UNet (27.49 mm; P<0.0001). In ablation experiments, the inclusion of spatial attention did not further enhance the MIST model’s performance, suggesting redundancy with existing attention mechanisms. However, the integration of multi-scale features and refined skip connections consistently improved segmentation accuracy and boundary delineation.
Conclusion MIST demonstrated superior LA segmentation, highlighting the benefits of its integrated multi-scale features and optimized architecture. Nevertheless, its computational overhead complicates practical clinical deployment. Our findings underscore the value of advanced hybrid models in cardiac imaging, providing improved reliability for CHD evaluation. Future studies should balance segmentation accuracy with feasible clinical implementation.
Purpose This study developed and evaluated a feature-based ensemble model integrating the synthetic minority oversampling technique (SMOTE) and random undersampling (RUS) methods with a random forest approach to address class imbalance in machine learning for early diabetes detection, aiming to improve predictive performance.
Methods Using the Scikit-learn diabetes dataset (442 samples, 10 features), we binarized the target variable (diabetes progression) at the 75th percentile and split it 80:20 using stratified sampling. The training set was balanced to a 1:2 minority-to-majority ratio via SMOTE (sampling ratio, 0.6) and RUS (sampling ratio, 0.66). A feature-based ensemble model was constructed by training random forest classifiers on 10 two-feature subsets, selected based on feature importance, and combining their outputs using soft voting. Performance was compared against 13 baseline models, using accuracy and area under the curve (AUC) as metrics on the imbalanced test set.
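A minimal sketch of the resampling and ensembling steps described above, assuming scikit-learn and imbalanced-learn; the random seeds and the rule used to pair features into 10 two-feature subsets are illustrative choices, not the authors'.

```python
# Sketch: SMOTE then random undersampling on the training set only, followed
# by a soft-voting ensemble of random forests trained on two-feature subsets.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

X, y = load_diabetes(return_X_y=True)
y = (y >= np.percentile(y, 75)).astype(int)                      # binarize at the 75th percentile
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

X_tr, y_tr = SMOTE(sampling_strategy=0.6, random_state=42).fit_resample(X_tr, y_tr)
X_tr, y_tr = RandomUnderSampler(sampling_strategy=0.66, random_state=42).fit_resample(X_tr, y_tr)

ranker = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)    # feature-importance ranking
order = np.argsort(ranker.feature_importances_)[::-1]
subsets = [order[[i, (i + 1) % len(order)]] for i in range(10)]     # illustrative pairing rule

models = [RandomForestClassifier(random_state=42).fit(X_tr[:, s], y_tr) for s in subsets]
proba = np.mean([m.predict_proba(X_te[:, s])[:, 1] for m, s in zip(models, subsets)], axis=0)
accuracy = np.mean((proba >= 0.5) == y_te)                          # soft voting on the test set
```

Resampling is applied only to the training split, so the test set retains the original class imbalance on which the reported metrics are computed.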
Results The feature-based ensemble model and balanced random forest both achieved the highest accuracy (0.8764), followed by the fully connected neural network (0.8700). The ensemble model had an excellent AUC (0.9227), while k-nearest neighbors had the lowest accuracy (0.8427). Visualizations confirmed its superior discriminative ability, especially for the minority (high-risk) class, which is a critical factor in medical contexts.
Conclusion Integrating SMOTE, RUS, and feature-based ensemble learning improved classification performance in imbalanced diabetes datasets by delivering robust accuracy and high recall for the minority class. This approach outperforms traditional resampling techniques and deep learning models, offering a scalable and interpretable solution for early diabetes prediction and potentially other medical applications.
Purpose Accurate prediction of blood glucose variability is crucial for effective diabetes management, as both hypoglycemia and hyperglycemia are associated with increased morbidity and mortality. However, conventional predictive models rely primarily on patient-specific biometric data, often neglecting the influence of patient–provider interactions, which can significantly impact outcomes. This study introduces Cyclic Dual Latent Discovery (CDLD), a deep learning framework that explicitly models patient–provider interactions to improve prediction of blood glucose levels. By leveraging a real-world intensive care unit (ICU) dataset, the model captures latent attributes of both patients and providers, thus improving forecasting accuracy.
Methods ICU patient records were obtained from the MIMIC-IV v3.0 critical care database, including 5,014 instances of patient–provider interaction. The CDLD model uses a cyclic training mechanism that alternately updates patient and provider latent representations to optimize predictive performance. During preprocessing, all numeric features were normalized, and extreme glucose values were capped at 500 mg/dL to mitigate the effect of outliers.
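Because the CDLD architecture is not specified at the code level here, the following PyTorch fragment is only an illustrative sketch of a cyclic update scheme: patient and provider latent vectors live in embedding tables and are refined in alternating phases. The layer sizes, optimizers, regression head, and variable names are assumptions, not the authors' implementation.

```python
# Illustrative sketch of cyclic dual-latent training: alternate between
# updating patient-side and provider-side parameters on each batch.
import torch
import torch.nn as nn

class CDLDSketch(nn.Module):
    def __init__(self, n_patients, n_providers, n_features, latent_dim=16):
        super().__init__()
        self.patient = nn.Embedding(n_patients, latent_dim)    # latent patient attributes
        self.provider = nn.Embedding(n_providers, latent_dim)  # latent provider attributes
        self.head = nn.Sequential(nn.Linear(2 * latent_dim + n_features, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, pid, vid, x):
        z = torch.cat([self.patient(pid), self.provider(vid), x], dim=-1)
        return self.head(z).squeeze(-1)                         # predicted normalized glucose

def cyclic_step(model, batch, loss_fn, opt_patient, opt_provider):
    # opt_patient is assumed to hold the patient embedding plus the head;
    # opt_provider holds the provider embedding.
    pid, vid, x, y = batch
    model.provider.requires_grad_(False)                        # phase 1: freeze provider latents
    opt_patient.zero_grad()
    loss = loss_fn(model(pid, vid, x), y)
    loss.backward()
    opt_patient.step()
    model.provider.requires_grad_(True)

    model.patient.requires_grad_(False)                         # phase 2: freeze patient latents
    opt_provider.zero_grad()
    loss = loss_fn(model(pid, vid, x), y)
    loss.backward()
    opt_provider.step()
    model.patient.requires_grad_(True)
    return loss.item()
```

Here x would carry the normalized numeric features, with glucose capped at 500 mg/dL before normalization as described above.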
Results CDLD outperformed conventional models, achieving a root mean square error of 0.0852 on the validation set and 0.0899 on the test set, indicating good generalization. The model effectively captured latent patient–provider interaction patterns, yielding more accurate glucose variability predictions than baseline approaches.
Conclusion Integrating patient–provider interaction modeling into predictive frameworks can increase blood glucose prediction accuracy. The CDLD model offers a novel approach to diabetes management, potentially paving the way for artificial intelligence-driven personalized treatment strategies.
Purpose This study aimed to leverage Shapley additive explanation (SHAP)-based feature engineering to predict appendix cancer. Traditional models often lack transparency, hindering clinical adoption. We propose a framework that integrates SHAP for feature selection, construction, and weighting to enhance accuracy and clinical relevance.
Methods Data from the Kaggle Appendix Cancer Prediction dataset (260,000 samples, 21 features) were used in this prediction study conducted from January through March 2025, in accordance with TRIPOD-AI guidelines. Preprocessing involved label encoding, SMOTE (synthetic minority over-sampling technique) to address class imbalance, and an 80:20 train-test split. Baseline models (random forest, XGBoost, LightGBM) were compared; LightGBM was selected for its superior performance (accuracy=0.8794). SHAP analysis identified key features and guided 3 engineering steps: selection of the top 15 features, construction of interaction-based features (e.g., chronic severity), and feature weighting based on SHAP values. Performance was evaluated using accuracy, precision, recall, and F1-score.
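A hedged sketch of the 3 SHAP-guided steps, assuming the lightgbm and shap packages; the single interaction feature, the weighting scheme, and the handling of shap_values output (which varies by shap version for binary classifiers) are assumptions rather than the study's exact procedure.

```python
# Sketch: SHAP-based feature selection, construction, and weighting for LightGBM.
import numpy as np
import lightgbm as lgb
import shap

def shap_feature_engineering(X_train, y_train, X_test, top_k=15):
    base = lgb.LGBMClassifier(random_state=42).fit(X_train, y_train)
    sv = shap.TreeExplainer(base).shap_values(X_train)
    sv = sv[1] if isinstance(sv, list) else sv              # binary-class output convention varies
    importance = np.abs(sv).mean(axis=0)                    # mean |SHAP| per feature
    top = np.argsort(importance)[::-1][:top_k]              # step 1: selection of top features
    weights = importance[top] / importance[top].sum()

    def build(X):
        Xs = X[:, top]
        interaction = (Xs[:, 0] * Xs[:, 1]).reshape(-1, 1)  # step 2: one illustrative interaction feature
        return np.hstack([Xs * weights, interaction])       # step 3: SHAP-value weighting

    model = lgb.LGBMClassifier(random_state=42).fit(build(X_train), y_train)
    return model, build(X_test)
```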
Results Four LightGBM model configurations were evaluated: baseline (accuracy=0.8794, F1-score=0.8691), feature selection (accuracy=0.8968, F1-score=0.8860), feature construction (accuracy=0.8980, F1-score=0.8872), and feature weighting (accuracy=0.8986, F1-score=0.8877). SHAP-based engineering yielded performance improvements, with feature weighting achieving the highest precision (0.9940). Key features (e.g., red blood cell count and chronic severity) contributed to predictions while maintaining interpretability.
Conclusion The SHAP-based framework substantially improved the accuracy and transparency of appendix cancer predictions using LightGBM (F1-score=0.8877). This approach bridges the gap between predictive power and clinical interpretability, offering a scalable model for rare disease prediction. Future validation with real-world data is recommended to ensure generalizability.
This review examines the bidirectional relationship between periodontitis and systemic health conditions, offering an integrated perspective based on current evidence. It synthesizes epidemiological data, biological mechanisms, and clinical implications to support collaborative care strategies recognizing oral health as a key component of overall wellness. Periodontitis affects 7.4% to 11.2% of adults worldwide, and its prevalence increases with age. Beyond its local effects, including gingival inflammation, periodontal pocket formation, and alveolar bone loss, periodontitis is associated with various systemic conditions. Emerging evidence has established links with obesity, diabetes mellitus, cardiovascular disease, chronic kidney disease, inflammatory bowel disease, rheumatoid arthritis, respiratory diseases, adverse pregnancy outcomes, certain malignancies, neurodegenerative diseases, psychological disorders, and autoimmune conditions. These associations are mediated by 3 primary mechanisms: dysbiotic oral biofilms, chronic low-grade systemic inflammation, and the dissemination of periodontal pathogens throughout the body. The pathophysiology involves elevated levels of pro-inflammatory cytokines (including interleukin 6, tumor necrosis factor alpha, and C-reactive protein), impaired immune function, oxidative stress, and molecular mimicry. Periodontal pathogens, particularly Porphyromonas gingivalis, are crucial in initiating and sustaining systemic inflammatory responses. Treatment of periodontitis has demonstrated measurable improvements in numerous systemic conditions, emphasizing the clinical significance of these interconnections. Periodontitis should be understood as more than just a localized oral disease; it significantly contributes to the overall systemic inflammatory burden, with implications for general health. An integrated, multidisciplinary approach to prevention, early detection, and comprehensive treatment is vital for optimal patient outcomes. Healthcare providers should acknowledge oral health as an essential element of systemic well-being.
Purpose This study aimed to identify the types of human rights violations and the associated psychological trauma experienced by North Korean defectors. It also examined the impact of trauma on the defectors’ interpersonal relationships, employment, and overall quality of life, while evaluating existing psychological support policies to suggest potential improvements.
Methods A multidisciplinary research team conducted an observational survey and in-depth interviews with approximately 300 North Korean defectors residing in South Korea from June to September 2017. Standardized measurement tools, including the Post-Traumatic Stress Disorder (PTSD) Checklist (PCL-5), Patient Health Questionnaire-9 (PHQ-9), Generalized Anxiety Disorder Scale-7 (GAD-7), and Short Form-8 Health Survey (SF-8), were employed. Statistical analyses consisted of frequency analysis, cross-tabulation, factor analysis, and logistic regression.
Results The findings revealed a high prevalence of human rights violations, such as public executions (82%), forced self-criticism (82.3%), and severe starvation or illness (62.7%). Additionally, there were elevated rates of PTSD (56%), severe depression (28.3%), anxiety (25%), and insomnia (23.3%). Defectors who resided in China before entering South Korea reported significantly worse mental health outcomes and a lower quality of life. Moreover, trauma was strongly and negatively correlated with social adjustment, interpersonal relationships, employment stability, and overall well-being.
Conclusion An urgent revision of existing policies is needed to incorporate specialized, trauma-informed care infrastructures within medical institutions. Furthermore, broad societal education to reduce stigma and enhance integration efforts is essential to effectively support the psychological well-being and social integration of North Korean defectors.
Heart failure (HF) represents a significant global health burden characterized by high morbidity, mortality, and healthcare utilization. Traditional in-person care models face considerable limitations in providing continuous monitoring and timely interventions for HF patients. Telemedicine—defined as the remote delivery of healthcare via information and communication technologies—has emerged as a promising solution to these challenges. This review examines the evolution, current applications, clinical evidence, limitations, and future directions of telemedicine in HF management. Evidence from randomized controlled trials and meta-analyses indicates that telemedicine interventions can improve guideline-directed medical therapy implementation, reduce hospitalization rates, improve patient engagement, and potentially decrease mortality among HF patients. Remote monitoring systems that track vital signs, symptoms, and medication adherence allow for the early detection of clinical deterioration, enabling timely interventions before decompensation occurs. Despite these benefits, telemedicine implementation faces several barriers, including technological limitations, reimbursement issues, digital literacy gaps, and challenges in integrating workflows. Future directions include developing standardized guidelines, designing patient-centered technologies, and establishing hybrid care models that combine virtual and in-person approaches. As healthcare systems worldwide seek more efficient and effective strategies for managing the growing population of individuals with HF, telemedicine offers a solution that may significantly improve patient outcomes and quality of life.
Purpose This study aimed to investigate whether proteins present in the molting membranes of third-stage (L3) Anisakis larvae could serve as potential risk factors for allergic reactions.
Methods Third-stage larvae (L3) of Anisakis spp. were primarily collected from mackerels and cultured in vitro to yield both molting membranes and fourth-stage (L4) larvae. Major soluble proteins in the molting membranes were identified using SDS-PAGE (sodium dodecyl sulfate–polyacrylamide gel electrophoresis). Crude antigens extracted from L3, L4, and the molting membranes were subsequently evaluated by western blotting using sera from Anisakis-infected rabbits and patients with eosinophilia.
Results Antigens derived from the molting membranes reacted with sera from Anisakis-infected rabbits as well as with sera from 7 patients with eosinophilia of unknown origin. These findings suggest that unidentified proteins in the molting membranes of Anisakis L3 may contribute to early allergic reactions, particularly in patients sensitized by specific molecular components.
Conclusion Our results indicate that proteins present in the molting membranes of third-stage Anisakis spp. larvae may be associated with allergic responses. Further studies are required to confirm the correlation between these membranes and Anisakis-induced allergies.
Recent advancements in tuberculosis treatment research emphasize innovative strategies that enhance treatment efficacy, reduce adverse effects, and adhere to patient-centered care principles. As tuberculosis remains a significant global health challenge, integrating new and repurposed drugs presents promising avenues for more effective management, particularly against drug-resistant strains. Recently, the spectrum concept in tuberculosis infection and disease has emerged, underscoring the need for research aimed at developing treatment plans specific to each stage of the disease. The application of precision medicine to tailor treatments to individual patient profiles is crucial for addressing the diverse and complex nature of tuberculosis infections. Such personalized approaches are essential for optimizing therapeutic outcomes and improving patient adherence—both of which are vital for global tuberculosis eradication efforts. The role of tuberculosis cohort studies is also emphasized, as they provide critical data to support the development of these tailored treatment plans and deepen our understanding of disease progression and treatment response. To advance these innovations, a robust tuberculosis policy framework is required to foster the integration of research findings into practice, ensuring that treatment innovations are effectively translated into improved health outcomes worldwide.
Purpose The standardized uptake value (SUV) is a key quantitative index in nuclear medicine imaging; however, variations in region‐of‐interest (ROI) determination exist across institutions. This study aims to standardize SUV evaluation by introducing a deep learning‐based quantitative analysis method that enhances diagnostic and prognostic accuracy.
Methods We used the Swin UNETR model to automatically segment key organs (breast, liver, spleen, and bone marrow) critical for breast cancer prognosis. Tumor segmentation was performed iteratively based on predefined SUV thresholds, and prognostic information was extracted from the liver, spleen, and bone marrow (reticuloendothelial system). The artificial intelligence training process employed 3 datasets: a training dataset (40 patients), a validation dataset (10 patients), and an independent test dataset (10 patients). To validate our approach, we compared the SUV values obtained using our method with those produced by commercial software.
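As an illustration of the downstream SUV read-out (not the study's software), whole-organ SUVmax and SUVmean and a threshold-based tumor mask could be extracted from a predicted segmentation as below; the SUV threshold of 2.5 and the array conventions are assumptions.

```python
# Sketch: whole-organ SUV statistics and a simple SUV-threshold tumor mask.
import numpy as np

def organ_suv_stats(suv: np.ndarray, organ_mask: np.ndarray) -> tuple[float, float]:
    vals = suv[organ_mask.astype(bool)]            # SUV values inside the segmented organ
    return float(vals.max()), float(vals.mean())   # SUVmax, SUVmean

def threshold_tumor_mask(suv: np.ndarray, organ_mask: np.ndarray, threshold: float = 2.5) -> np.ndarray:
    # Flag voxels above a predefined SUV threshold within the organ; the value
    # 2.5 is illustrative, not the study's threshold.
    return np.logical_and(organ_mask.astype(bool), suv >= threshold)
```

Averaging over the entire segmented organ, rather than a single manually placed ROI, is what removes the inter-institution variability in ROI placement noted above.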
Results In a dataset of 10 patients, our method achieved an auto‐segmentation accuracy of 0.9311 for all target organs. Comparison of maximum SUV and mean SUV values from our automated segmentation with those from traditional single‐ROI methods revealed differences of 0.19 and 0.16, respectively, demonstrating improved reliability and accuracy in whole‐organ SUV analysis.
Conclusion This study successfully standardized SUV calculation in nuclear medicine imaging through deep learning‐based automated organ segmentation and SUV analysis, significantly enhancing accuracy in predicting breast cancer prognosis.
Purpose This study aimed to analyze dementia-related death statistics in Korea between 2013 and 2023.
Methods The analysis utilized microdata from Statistics Korea’s cause-of-death statistics. Among all recorded deaths, those related to dementia were extracted and analyzed using the underlying cause-of-death codes from the International Classification of Diseases, 10th revision.
Results The number of dementia-related deaths increased from 8,688 in 2013 to 14,402 in 2023. The crude death rate rose from 17.2 per 100,000 in 2013 to 28.2 per 100,000 in 2023, although the age-standardized death rate declined from 9.7 to 8.7 over the same period. The dementia death rate was 2.1 times higher in women than in men, and mortality among individuals aged 85 and older exceeded 976 per 100,000. By specific cause, Alzheimer’s disease accounted for 77.1% of all dementia deaths, and by place, the majority occurred in hospitals (76.2%), followed by residential institutions including nursing homes (15.3%) in 2023.
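For readers unfamiliar with the two rates, the short sketch below uses purely hypothetical counts to show how each is computed and why the crude rate can exceed, and keep rising relative to, the age-standardized rate in an aging population; none of the figures are Statistics Korea data.

```python
# Sketch: crude vs. age-standardized death rate (direct standardization).
deaths     = {"0-64": 400, "65-84": 5000, "85+": 9000}      # hypothetical dementia deaths
population = {"0-64": 44e6, "65-84": 6.1e6, "85+": 0.9e6}   # hypothetical population counts
standard   = {"0-64": 0.93, "65-84": 0.06, "85+": 0.01}     # fixed standard-population weights

crude = sum(deaths.values()) / sum(population.values()) * 1e5
standardized = sum(standard[a] * (deaths[a] / population[a]) * 1e5 for a in deaths)
print(f"crude {crude:.1f} vs age-standardized {standardized:.1f} per 100,000")
```

Because the standard weights fix the age structure, the standardized rate reflects changes in age-specific risk, while the crude rate also absorbs the growing share of older age groups.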
Conclusion The rising mortality associated with dementia, especially Alzheimer’s disease, highlights a growing public health concern in Korea. These findings support the need for enhanced prevention efforts, improved quality of care, and targeted policies addressing the complexities of dementia management. It is anticipated that this empirical analysis will contribute to reducing the social burden of dementia.
The Mycobacterium avium complex (MAC), comprising M. avium and M. intracellulare, constitutes the predominant cause of nontuberculous mycobacterial pulmonary disease (NTM-PD) in Korea, followed by the M. abscessus complex. Its global prevalence is increasing, as shown by a marked rise in Korea from 11.4 to 56.7 per 100,000 individuals between 2010 and 2021, surpassing the incidence of tuberculosis. Among the older adult population (aged ≥65 years), the prevalence escalated from 41.9 to 163.1 per 100,000, accounting for 47.6% of cases by 2021. Treatment should be individualized based on prognostic indicators, including cavitary disease, low body mass index, and positive sputum smears for acid-fast bacilli. Current therapeutic guidelines recommend a 3-drug regimen—consisting of a macrolide, rifampin, and ethambutol—administered for a minimum of 12 months following culture conversion. Nevertheless, treatment success rates are only roughly 60%, and over 30% of patients experience recurrence. This is often attributable to reinfection rather than relapse. Antimicrobial susceptibility testing for clarithromycin and amikacin is essential, as resistance significantly worsens prognosis. Ethambutol plays a crucial role in preventing the development of macrolide resistance, whereas the inclusion of rifampin remains a subject of ongoing debate. Emerging therapeutic strategies suggest daily dosing for milder cases, increased azithromycin dosing, and the substitution of rifampin with clofazimine in severe presentations. Surgical resection achieves a notable sputum conversion rate of approximately 93% in eligible candidates. For refractory MAC-PD, adjunctive therapy with amikacin is advised, coupled with strategies to reduce environmental exposure. Despite advancements in therapeutic approaches, patient outcomes remain suboptimal, highlighting the urgent need for novel interventions.
Lung cancer remains a leading cause of cancer-related mortality worldwide. Low-dose computed tomography (LDCT) screening has demonstrated efficacy in reducing lung cancer mortality by enabling early detection. In several countries, including Korea, LDCT-based screening for high-risk populations has been incorporated into national healthcare policies. However, in regions with a high tuberculosis (TB) burden, the effectiveness of LDCT screening for lung cancer may be influenced by TB-related pulmonary changes. Studies indicate that the screen-positive rate in TB-endemic areas differs from that in low-TB prevalence regions. A critical challenge is the differentiation between lung cancer lesions and TB-related abnormalities, which can contribute to false-positive findings and increase the likelihood of unnecessary invasive procedures. Additionally, structural lung damage from prior TB infections can alter LDCT interpretation, potentially reducing diagnostic accuracy. Nontuberculous mycobacterial infections further complicate this issue, as their radiologic features frequently overlap with those of TB and lung cancer, necessitating additional microbiologic confirmation. Future research incorporating artificial intelligence and biomarkers may enhance diagnostic precision and facilitate a more personalized approach to lung cancer screening in TB-endemic settings.
Chronic obstructive pulmonary disease (COPD) is a leading cause of respiratory morbidity and mortality, most often linked to smoking. However, growing evidence indicates that previous tuberculosis (TB) infection is also a critical risk factor for COPD. This review aims to provide a comprehensive perspective on tuberculosis-associated COPD (TB-COPD), covering its epidemiologic significance, pathogenesis, clinical characteristics, and current management approaches. TB-COPD is characterized by persistent inflammatory responses, altered immune pathways, and extensive structural lung damage, manifested as cavitation, fibrosis, and airway remodeling. Multiple epidemiologic studies have shown that individuals with a history of TB have a significantly higher likelihood of developing COPD and experiencing worse outcomes, such as increased breathlessness and frequent exacerbations. Key pathogenic mechanisms include elevated matrix metalloproteinase activity and excessive neutrophil-driven inflammation, which lead to alveolar destruction, fibrotic scarring, and the development of bronchiectasis. Treatment generally follows current COPD guidelines, advocating the use of long-acting bronchodilators and the selective application of inhaled corticosteroids. Studies have demonstrated that indacaterol significantly improves lung function and respiratory symptoms, while long-acting muscarinic antagonists have shown survival benefits.