Objective We propose a multi-feature fusion model that combines manually extracted features and deep learning features from endoscopic images to grade the rebleeding risk of peptic ulcers.
Methods Based on the endoscopic appearance of peptic ulcers, color features were extracted to distinguish actively bleeding ulcers (Forrest I) from non-bleeding ulcers (Forrest II and III), while edge and texture features were used to describe the morphology and appearance of ulcers of different grades. By integrating deep features extracted by a deep learning network with the manually extracted visual features, a multi-feature representation of the endoscopic images was constructed to predict the rebleeding risk of peptic ulcers.
Results On a dataset of 3573 Forrest-classified images from 708 patients, the proposed multi-feature fusion model achieved an accuracy of 74.94% in the 6-level rebleeding risk classification task, outperforming experienced physicians, whose classification accuracy was 59.9% (P<0.05). The F1-scores of the model for identifying Forrest Ib, IIa, and III ulcers were 90.16%, 75.44%, and 77.13%, respectively, indicating particularly good performance for Forrest Ib ulcers. Compared with the first reported model for peptic ulcer rebleeding classification, the proposed model improved the F1-score by 5.8%. In the simplified 3-level risk classification task (high risk, low risk, and no endoscopic treatment required), the model achieved F1-scores of 93.74%, 81.30%, and 73.59%, respectively.
Conclusions The proposed multi-feature fusion model, which integrates deep features from CNNs with manually extracted visual features, effectively improves the accuracy of rebleeding risk classification for peptic ulcers, thus providing an efficient diagnostic tool for clinical assessment of rebleeding risk.
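The fusion step described above (concatenating CNN deep features with handcrafted color, edge, and texture descriptors into a single representation) can be sketched as follows. This is a minimal illustration only: the function name, the per-branch L2 normalization, and all feature dimensions are assumptions for the example, not details taken from the paper.

```python
import numpy as np

def fuse_features(deep_feat, color_hist, edge_desc, texture_desc):
    """Concatenate deep CNN features with handcrafted visual features.

    Each branch is L2-normalized before concatenation so that neither the
    deep nor the handcrafted part dominates the fused vector (an assumed,
    common choice; the paper's exact fusion scheme may differ).
    """
    manual = np.concatenate([color_hist, edge_desc, texture_desc])
    deep_feat = deep_feat / (np.linalg.norm(deep_feat) + 1e-8)
    manual = manual / (np.linalg.norm(manual) + 1e-8)
    return np.concatenate([deep_feat, manual])

# Illustrative dimensions: 512-D deep features, 48-D color histogram,
# 16-D edge descriptor, 32-D texture descriptor -> 608-D fused vector.
fused = fuse_features(np.random.rand(512), np.random.rand(48),
                      np.random.rand(16), np.random.rand(32))
print(fused.shape)  # (608,)
```

The fused vector would then be passed to the classifier that predicts the Forrest grade.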
The overall performance of the grading model was evaluated using accuracy, precision, recall, F1-score, and the receiver operating characteristic (ROC) curve. To jointly measure the precision and recall of the grading model at each risk level, the F1-score was used to evaluate the prediction performance for each ulcer grade. The formulas for these metrics are given in Equations (4) to (7).
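Assuming Equations (4) to (7) are the standard definitions of accuracy, precision, recall, and F1-score, the per-grade evaluation can be sketched as below; the function name and the toy Forrest labels are illustrative, not from the paper.

```python
def per_class_metrics(y_true, y_pred, labels):
    """Accuracy plus per-class (precision, recall, F1) in a one-vs-rest manner.

    Standard definitions assumed:
      precision = TP / (TP + FP), recall = TP / (TP + FN),
      F1 = 2 * precision * recall / (precision + recall).
    """
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    scores = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores[c] = (precision, recall, f1)
    return accuracy, scores

# Toy example with hypothetical Forrest grades (not the paper's data):
y_true = ["Ia", "Ib", "Ib", "IIa", "III", "III"]
y_pred = ["Ib", "Ib", "Ib", "IIa", "III", "IIa"]
acc, scores = per_class_metrics(y_true, y_pred, ["Ia", "Ib", "IIa", "III"])
print(round(acc, 4), round(scores["Ib"][2], 4))  # 0.6667 0.8
```

In practice one would compute these over the held-out test predictions for all six Forrest grades and report the per-grade F1-scores, as done in the Results.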