Abstract:
Objective To evaluate different methods of artificial intelligence (AI)-assisted Ki-67 scoring in clinical invasive ductal carcinoma (IDC) of the breast and to compare their results.
Methods A total of 100 diagnosed IDC cases were collected, including H&E-stained and immunohistochemical Ki-67-stained slides together with the diagnostic results. The slides were scanned into whole slide images (WSIs), which were then scored by AI using two methods. The first was fully automatic AI counting, in which an automatic Ki-67 diagnosis system counted over the entire WSI. The second was semi-automatic AI counting, which required manual selection of the areas to be counted, followed by automatic counting with an intelligent microscope. The pathologists' diagnostic results were taken as the results of purely manual counting. The Ki-67 scores obtained by manual counting (the pathological diagnosis results), semi-automatic AI counting and automatic AI counting were then compared pairwise and classified into three levels of difference: ≤10%, >10%−<30% and ≥30%. The intra-class correlation coefficient (ICC) was used to evaluate the correlation.
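The pairwise comparison described above — banding the absolute score differences and computing an ICC — can be sketched in Python. This is a minimal illustration only; the abstract does not specify which ICC model was used, so the two-way random-effects, absolute-agreement form ICC(2,1) below is an assumption, and the function names are hypothetical.

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores: n_subjects x k_raters matrix of Ki-67 percentages."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)
    # Two-way ANOVA sums of squares
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((subj_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((rater_means - grand) ** 2).sum()  # between raters
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def difference_band(a, b):
    """Classify the absolute difference between two Ki-67 scores (in %)
    into the three bands used in the study."""
    d = abs(a - b)
    if d <= 10:
        return "<=10%"
    elif d < 30:
        return ">10%-<30%"
    return ">=30%"
```

Applied to the 100 cases, `difference_band` would yield the case counts per band reported in the Results, and `icc2_1` the corresponding ICC for each pair of counting methods.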
Results Automatic AI counting of Ki-67 took 5−8 minutes per case, semi-automatic AI counting took 2−3 minutes per case, and manual counting took 1−3 minutes per case. When the two AI counting methods were compared, all differences in Ki-67 scores were within 10% (100% of cases), with an ICC of 0.992. Between manual counting and semi-automatic AI counting, the difference was ≤10% in 60 cases (60%), >10%−<30% in 37 cases (37%), and ≥30% in only 3 cases (3%), with an ICC of 0.724. Between automatic AI and manual counting, 78 cases (78%) had a difference of ≤10%, 17 cases (17%) had a difference of >10%−<30%, and 5 cases (5%) had a difference of ≥30%, with an ICC of 0.720. The ICC values showed little difference between the results of the two AI counting methods, indicating good repeatability, whereas the repeatability between AI counting and manual counting was less than ideal.
Conclusion Fully automatic AI counting has the advantage of requiring less manpower, since the pathologist is involved only to verify the diagnostic results at the end. The semi-automatic method, however, is better suited to the diagnostic habits of pathologists and has a shorter turnaround time than the fully automatic method. Furthermore, despite the high repeatability between the two AI methods, AI counting cannot fully substitute for pathologists and should instead be viewed as a powerful auxiliary tool.