[1] Sung H, Ferlay J, Siegel RL, et al. Global Cancer Statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries [J]. CA Cancer J Clin, 2021, 71(3): 209-249.
[2] Petersen PE. Oral cancer prevention and control-The approach of the World Health Organization [J]. Oral Oncol, 2009, 45(4-5): 454-460.
[3] McCullough MJ, Prasad G, Farah CS. Oral mucosal malignancy and potentially malignant lesions: an update on the epidemiology, risk factors, diagnosis and management [J]. Aust Dent J, 2010, 55 Suppl 1: 61-65.
[4] Speight PM, Epstein J, Kujan O, et al. Screening for oral cancer-a perspective from the Global Oral Cancer Forum [J]. Oral Surg Oral Med Oral Pathol Oral Radiol, 2017, 123(6): 680-687.
[5] Pandey M, Choudhury H, Ying JNS, et al. Mucoadhesive nanocarriers as a promising strategy to enhance intracellular delivery against oral cavity carcinoma [J]. Pharmaceutics, 2022, 14(4): 795.
[6] Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis [J]. Med Image Anal, 2017, 42: 60-88.
[7] Afify HM, Mohammed KK, Ella Hassanien A. Novel prediction model on OSCC histopathological images via deep transfer learning combined with Grad-CAM interpretation [J]. Biomed Signal Process Control, 2023, 83: 104704.
[8] Das N, Hussain E, Mahanta LB. Automated classification of cells into multiple classes in epithelial tissue of oral squamous cell carcinoma using transfer learning and convolutional neural network [J]. Neural Netw, 2020, 128: 47-60.
[9] Warin K, Limprasert W, Suebnukarn S, et al. Performance of deep convolutional neural network for classification and detection of oral potentially malignant disorders in photographic images [J]. Int J Oral Maxillofac Surg, 2022, 51(5): 699-704.
[10] Al-hammuri K, Gebali F, Kanan A, et al. Vision transformer architecture and applications in digital health: a tutorial and survey [J]. Vis Comput Ind Biomed Art, 2023, 6(1): 14.
[11] Chandrashekar HS, Geetha Kiran A, Murali S, et al. Oral Images Dataset [DB/OL]. Mendeley Data, 2021.
[12] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16×16 words: Transformers for image recognition at scale [J]. arXiv, 2021.
[13] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition [J]. arXiv, 2015.
[14] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[15] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks [C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2261-2269.
[16] Tan M, Le QV. EfficientNetV2: Smaller models and faster training [J]. arXiv, 2021.
[17] Shamshad F, Khan S, Zamir SW, et al. Transformers in medical imaging: A survey [J]. Med Image Anal, 2023, 88: 102802.
[18] Qiu Z, Xu J, Peng J, et al. Cross-channel dynamic spatial-spectral fusion transformer for hyperspectral image classification [J]. IEEE Trans Geosci Remote Sens, 2023, 61: 1-12.
[19] Marzouk R, Alabdulkreem E, Dhahbi S, et al. Deep transfer learning driven oral cancer detection and classification model [J]. Comput Mater Contin, 2022, 73(2): 3905-3920.