White blood cell classification: Convolutional Neural Network (CNN) and Vision Transformer (ViT) under medical microscope

Abstract

Deep learning (DL) has made significant advances in computer vision with the advent of vision transformers (ViTs). Unlike convolutional neural networks (CNNs), ViTs use self-attention to extract both local and global features from image data and then apply residual connections to feed these features into a fully connected multilayer perceptron (MLP) head. In hospitals, hematologists prepare peripheral blood smears (PBSs) and read them under a medical microscope to detect blood disorders such as leukemia. However, this task is time-consuming and prone to human error. This study investigated transfer learning with the Google ViT and ImageNet-pretrained CNNs to automate the reading of PBSs. Two online PBS datasets, PBC and BCCD, were transformed into balanced datasets to examine the effect of data volume and noise on both types of network. The PBC results showed that the Google ViT is an excellent DL solution under data scarcity. The BCCD results showed that the Google ViT outperforms ImageNet CNNs on unclean, noisy image data because it extracts both global and local features and uses residual connections, despite the additional time and computational overhead.
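
The sketch below illustrates the kind of transfer-learning setup the abstract describes: fine-tuning a Google ViT checkpoint for white blood cell classification. It is a minimal example, not the authors' code; the checkpoint name (google/vit-base-patch16-224-in21k), the eight-class PBC label count, the dataset path, and all hyperparameters are assumptions, and the ImageNet CNN baselines would use the same loop with a different backbone.

```python
# Minimal sketch (assumed setup, not the paper's exact pipeline): fine-tune a
# Google ViT checkpoint on a PBS dataset arranged as ImageFolder class folders.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from transformers import ViTForImageClassification

NUM_CLASSES = 8  # assumption: one label per PBC cell type

# Generic 224x224 preprocessing; the paper's augmentation may differ.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

train_set = datasets.ImageFolder("pbc/train", transform=preprocess)  # hypothetical path
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load the pretrained ViT backbone and attach a new classification head.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=NUM_CLASSES,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device).train()

for epoch in range(3):  # illustrative epoch count
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        outputs = model(pixel_values=images, labels=labels)  # returns loss + logits
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```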

Publication
Algorithms
Mohamad Abou Ali
Postdoctoral Researcher

I develop generalizable deep learning methods for biomedical imaging, advancing diagnostic accuracy, data augmentation, and intelligent healthcare systems.

Fadi Dornaika
Ikerbasque Research Professor

Ikerbasque Research Professor with expertise in computer vision, machine learning, and pattern recognition.

Ignacio Arganda-Carreras
Ikerbasque Research Associate Professor

My research interests include image processing, computer vision, and deep learning for biomedical applications.