XIA Chun-chao, ZHANG Kai, LIU Xiu-min, et al. The Clinical Effectiveness of Neural Network-based Boundary Recognition of Upper Abdominal Organs on CT Images[J]. Journal of Sichuan University (Medical Sciences), 2021, 52(2): 306-310. DOI: 10.12182/20210360201


The Clinical Effectiveness of Neural Network-based Boundary Recognition of Upper Abdominal Organs on CT Images


    Abstract:
      Objective  To assess the clinical effectiveness of neural network-based boundary recognition of upper abdominal organs on CT images using different combinations of consecutive slices.
      Methods  A total of 2 000 patients who underwent contrast-enhanced CT scans covering the upper abdomen between March 2018 and March 2019, with image quality meeting the requirements for clinical diagnosis, were included in the study. Eight boundary slices of the main upper abdominal organs (the upper and lower edges of the liver, the upper and lower edges of the spleen, the lower edge of the left kidney, the lower edge of the right kidney, the lower edge of the stomach, and the lower edge of the gallbladder) were labeled. Models were trained (with training, validation, and test sets) using different neural network methods and different slice combinations, and the accuracy of boundary recognition was assessed. Furthermore, clinical data from 50 cases were used to test the accuracy and clinical effectiveness of the model.
      Results  The fusion model created by integrating the two models with different weight ratios yielded the highest accuracy, followed by the EfficientNet-b3 model, with the Xception model showing the lowest accuracy. For each model, the boundary recognition accuracy of the 5-slice input was higher than that of the 3-slice input, and that of the 1-slice input was the lowest. With 5 continuous slices, the recognition accuracy of the fusion model for the upper edge of the liver, the lower edge of the liver, the upper edge of the spleen, the lower edge of the spleen, the lower edge of the left kidney, the lower edge of the right kidney, the lower edge of the stomach, and the lower edge of the gallbladder was 91%, 87%, 92%, 85%, 92%, 95%, 76%, and 74%, respectively. On the 50 clinical test cases, the fusion model achieved 88%, 86%, 88%, 80%, 82%, 80%, 69%, and 65% accuracy for the 8 boundaries, respectively, and the rates of meeting clinical application requirements were 98%, 98%, 95%, 98%, 99%, 98%, 80%, and 77%, respectively.
      Conclusion  By incorporating the logic of boundary changes across continuous slices, the fusion model, which integrates the two base models with different weight ratios, achieves the highest accuracy in identifying the boundaries of upper abdominal organs on CT images and demonstrates high examination effectiveness in clinical practice.
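The abstract does not give implementation details of how the Xception and EfficientNet-b3 outputs are combined. A minimal sketch of the general weighted-fusion idea is shown below, assuming (hypothetically) that each model emits per-slice class probabilities and that the fusion weight `w` is a tunable hyperparameter; the actual weighting scheme used in the paper may differ.

```python
import numpy as np

def fuse_predictions(p_model_a, p_model_b, w=0.5):
    """Weighted fusion of two models' probability outputs.

    p_model_a, p_model_b: arrays of shape (n_samples, n_classes),
        e.g. per-slice class probabilities from Xception and EfficientNet-b3.
    w: weight given to model A; model B receives (1 - w).

    Returns the predicted class index for each sample.
    """
    fused = w * p_model_a + (1.0 - w) * p_model_b
    return fused.argmax(axis=1)

# Hypothetical probabilities for 3 slices over 2 classes (boundary / non-boundary)
p_a = np.array([[0.6, 0.4], [0.3, 0.7], [0.55, 0.45]])
p_b = np.array([[0.4, 0.6], [0.2, 0.8], [0.70, 0.30]])

preds = fuse_predictions(p_a, p_b, w=0.3)  # favors model B
```

In practice, `w` would be selected on the validation set (e.g. by a grid search over weight ratios), which matches the paper's description of integrating the two models "according to different weight ratios".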

     

© 2021 Editorial Office of Journal of Sichuan University (Medical Sciences). All rights reserved.

Open Access  This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), which permits third parties to freely share articles published in this journal (i.e., copy and redistribute the material in any medium or format) and adapt them (i.e., remix, transform, or build upon the material), provided that appropriate credit is given, a link to this license is provided, and any changes made to the original are indicated; the material may not be used for commercial purposes. For details of the CC BY-NC 4.0 license, visit https://creativecommons.org/licenses/by-nc/4.0

