BIOIMAGING 2018 Abstracts


Full Papers
Paper Nr: 4
Title:

Optimising Graphical Techniques Applied to Irreversible Tracers

Authors:

Yasser Alzamil, Yulia Hicks, Xin Yang and Christopher Marshall

Abstract: Graphical analysis techniques are often applied to positron emission tomography (PET) images to estimate physiological parameters. Patlak analysis is primarily used to obtain the rate constant (Ki) that indicates the transfer of a tracer from plasma to the irreversible compartment and ultimately describes how the tracer binds to the targeted tissue. One of the most common issues associated with Patlak analysis is the introduction of statistical noise that affects the slope of the graphical plot and causes bias. In this study, several statistical methods are proposed and applied to PET time activity curves (TACs) for both reversible and irreversible regions that are involved in the equation. A dynamic PET imaging simulator for the Patlak model was used to evaluate the statistical methods employed to reduce the bias introduced in the acquired data.
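The influx constant Ki described above is, in the standard Patlak formulation, the slope of a linear fit to transformed time-activity data. A minimal sketch of that slope estimate (the function name and the synthetic inputs are illustrative, not the authors' implementation):

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma):
    """Estimate the Patlak influx constant Ki by linear regression.

    x = cumulative integral of the plasma input / instantaneous plasma activity
    y = tissue activity / plasma activity; Ki is the slope of y versus x,
    and the intercept approximates the initial distribution volume V0.
    """
    # cumulative trapezoidal integral of the plasma curve
    auc = np.concatenate([[0.0], np.cumsum((c_plasma[1:] + c_plasma[:-1]) / 2.0 * np.diff(t))])
    x = auc / c_plasma
    y = c_tissue / c_plasma
    ki, v0 = np.polyfit(x, y, 1)  # slope = Ki, intercept = V0
    return ki, v0
```

On noise-free synthetic TACs the fit recovers the simulated Ki exactly; the statistical methods evaluated in the paper address the bias this slope acquires when noise enters the plasma and tissue curves.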

Paper Nr: 7
Title:

Patch-based Carcinoma Detection on Confocal Laser Endomicroscopy Images - A Cross-site Robustness Assessment

Authors:

Marc Aubreville, Miguel Goncalves, Christian Knipfer, Nicolai Oetter, Tobias Würfl, Helmut Neumann, Florian Stelzle, Christopher Bohr and Andreas Maier

Abstract: Deep learning technologies such as convolutional neural networks (CNN) provide powerful methods for image recognition and have recently been employed in the field of automated carcinoma detection in confocal laser endomicroscopy (CLE) images. CLE is a (sub-)surface microscopic imaging technique that reaches magnifications of up to 1000x and is thus suitable for in vivo structural tissue analysis. In this work, we aim to evaluate the prospects of a previously developed deep learning-based algorithm targeted at the identification of oral squamous cell carcinoma with regard to its generalization to further anatomical locations of squamous cell carcinomas in the head and neck area. We applied the algorithm to images acquired from the vocal fold area of five patients with histologically verified squamous cell carcinoma and presumably healthy control images of the clinically normal contralateral vocal cord. We find that the network trained on the oral cavity data reaches an accuracy of 89.45% and an area-under-the-curve (AUC) value of 0.955 when applied to the vocal cord data. Compared to the state of the art, we achieve very similar results, yet with an algorithm that was trained on a completely disjoint data set. Concatenating both data sets yielded further improvements in cross-validation, with an accuracy of 90.81% and an AUC of 0.970. In this study, for the first time to our knowledge, a deep learning mechanism for the identification of oral carcinomas using CLE images was applied to other disciplines in the head and neck area. This study shows the prospect of the algorithmic approach to generalize well to other malignant entities of the head and neck, regardless of the anatomical location, and furthermore in an examiner-independent manner.

Paper Nr: 15
Title:

Graph-Cut Segmentation of Retinal Layers from OCT Images

Authors:

Bashir Isa Dodo, Yongmin Li, Khalid Eltayef and Xiaohui Liu

Abstract: The segmentation of the various retinal layers is vital for diagnosing and tracking the progress of medication for various ocular diseases. Due to the complexity of retinal structures, the tediousness of manual segmentation and variation between specialists, many methods have been proposed to aid this analysis. However, image artifacts, in addition to inhomogeneity in pathological structures, remain a challenge that negatively influences the performance of segmentation algorithms. Previous approaches typically pre-process the images or adapt the segmentation model to handle such obstructions, but this remains an area of active research, especially for graph-based algorithms. In this paper we present an automatic retinal layer segmentation method, which combines fuzzy histogram hyperbolization and graph-cut methods to segment 8 boundaries and 7 layers of the retina on 150 OCT B-scan images, 50 each from the temporal, nasal and central foveal regions. Our method shows positive results, with additional tolerance and adaptability to contour variance and pathological inconsistency of the retinal structures in all regions.
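The fuzzy histogram hyperbolization step mentioned above maps grey levels through fuzzy memberships and an exponential stretch to enhance contrast before graph cut. A rough sketch of one common formulation (the fuzzifier `beta` and the exact transform are assumptions, not taken from the paper):

```python
import numpy as np

def fuzzy_hyperbolization(img, beta=0.9, levels=256):
    """Fuzzy histogram hyperbolization (illustrative sketch).

    Grey levels are converted to fuzzy memberships in [0, 1], raised to a
    fuzzifier exponent beta, and passed through a hyperbolic (exponential)
    transform that stretches contrast over the full grey-level range.
    Assumes a non-constant input image.
    """
    img = np.asarray(img, dtype=float)
    mu = (img - img.min()) / (img.max() - img.min())  # membership function
    return (levels - 1) / (np.exp(-1.0) - 1.0) * (np.exp(-mu ** beta) - 1.0)
```

The transform is monotonic, maps the darkest pixel to 0 and the brightest to `levels - 1`, and spreads mid-range intensities, which tends to sharpen layer boundaries for the subsequent graph-cut step.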

Paper Nr: 17
Title:

EndoCal 10 Obturation Voids in Root Canal and Isthmus of a Human Premolar: A Synchrotron micro-CT Imaging Study

Authors:

Assem Hedayat and Pengyu Wu

Abstract: The objective of this research is to detect and characterize voids in an EndoCal 10 obturated human premolar using synchrotron-radiation-based micro-computed tomography (SRµCT) and 3D visualization, and to investigate the extent of voids present in a fine structure such as an isthmus following obturation. We scanned an extracted human premolar that was obturated with EndoCal 10 using the Bio-Medical Imaging and Therapy (BMIT) 05ID-2 beamline at the Canadian Light Source. We applied the non-destructive monochromatic X-ray beam at 47 keV, and compiled 4.3 µm pixel size images utilizing an AA-40 (Hamamatsu) beam monitor synchronized with a Hamamatsu C9300-124 charge-coupled camera. We used Fiji for reconstructing the images and Avizo 9.0 for 3D rendering. The results showed voids in different parts of the obturation as well as a partially obturated isthmus. Micro-CT and 3D visualization show that voids exist in the pulp chamber, root canal, and isthmus following obturation with EndoCal 10 during endodontic therapy. We categorized the isthmus as type V. The study also highlights the reasons contributing to the difficulty of obturating the isthmus: variation in the isthmus diameter, its irregular branching, the presence of pulp tissue, and its angular orientation with respect to the root canals all impede the flow of EndoCal 10 through it.

Paper Nr: 24
Title:

Optimized KinectFusion Algorithm for 3D Scanning Applications

Authors:

Faraj Alhwarin, Stefan Schiffer, Alexander Ferrein and Ingrid Scholl

Abstract: KinectFusion is an effective way to reconstruct indoor scenes. It takes a depth image stream and uses the iterative closest point (ICP) method to estimate the camera motion, then merges the images into a volume to construct a 3D model. The model accuracy is not satisfactory for certain applications, such as scanning a human body to provide information about bone structure health. For one thing, camera noise and noise in the ICP method limit the accuracy; for another, the error in estimating the global camera poses accumulates. In this paper, we present a method to optimize KinectFusion for 3D scanning in the above scenarios. We aim to reduce the influence of noise on camera pose tracking. The idea is as follows: in our application scenarios we can always assume that either the camera rotates around the object to be scanned or the object rotates in front of the camera. In both cases, the relative camera/object pose lies on a 3D circle, so the camera motion can be described as a rotation around a fixed axis passing through a fixed point. Since the axis and the center of rotation are always fixed, the error-averaging principle can be utilized to reduce the noise impact and hence to enhance the 3D model accuracy of the scanned object.
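The error-averaging principle for a fixed rotation axis can be illustrated by extracting the axis from many noisy per-frame rotation estimates and averaging them. A toy sketch, not the paper's implementation (the Rodrigues construction and the noise model are illustrative assumptions):

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix about a given axis (Rodrigues' formula)."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def rotation_axis(R):
    """Rotation axis from a 3x3 rotation matrix (valid for angle not 0 or pi)."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def averaged_axis(rotations):
    """Average the per-frame axis estimates; with a truly fixed axis, the
    independent noise in each estimate averages out."""
    axes = np.array([rotation_axis(R) for R in rotations])
    mean = axes.mean(axis=0)
    return mean / np.linalg.norm(mean)
```

With a fixed true axis and independent per-frame noise, the averaged estimate converges toward the true axis roughly as 1/sqrt(N), which is the effect the paper exploits to stabilize pose tracking.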

Paper Nr: 29
Title:

Colorectal Cancer Classification using Deep Convolutional Networks - An Experimental Study

Authors:

Francesco Ponzio, Enrico Macii, Elisa Ficarra and Santa Di Cataldo

Abstract: The analysis of histological samples is of paramount importance for the early diagnosis of colorectal cancer (CRC). Traditional visual assessment is time-consuming and highly unreliable because of the subjectivity of the evaluation. On the other hand, automated analysis is extremely challenging due to the variability of the architectural and colouring characteristics of histological images. In this work, we propose a deep learning technique based on Convolutional Neural Networks (CNNs) to differentiate adenocarcinomas from healthy tissues and benign lesions. Fully training the CNN on a large set of annotated CRC samples provides good classification accuracy (around 90% in our tests), but has the drawback of a very computationally intensive training procedure. Hence, in our work we also investigate transfer learning approaches, based on CNN models pre-trained on a completely different dataset (i.e., ImageNet). In our results, transfer learning considerably outperforms the CNN fully trained on CRC samples, obtaining an accuracy of about 96% on the same test dataset.
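The transfer-learning pattern evaluated above, freezing pre-trained convolutional features and training only a new classifier head, can be caricatured with a fixed random feature extractor standing in for the pre-trained CNN. Everything below is an illustrative stand-in, not the paper's pipeline:

```python
import numpy as np

def extract_features(x, w_frozen):
    """Frozen 'pre-trained' feature extractor (stand-in for CNN layers):
    a fixed projection followed by ReLU; its weights are never updated."""
    return np.maximum(x @ w_frozen, 0.0)

def train_head(feats, labels, lr=0.1, epochs=200):
    """Train only the classification head (logistic regression) on top of
    the frozen features, by full-batch gradient descent."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.clip(feats @ w + b, -30.0, 30.0)  # clip for numerical safety
        p = 1.0 / (1.0 + np.exp(-z))
        grad = p - labels
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b
```

Only the small head is optimized, which mirrors why the transfer-learning route in the paper is far cheaper than fully training the CNN on CRC samples.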

Paper Nr: 35
Title:

Spot Detection in Microscopy Images using Convolutional Neural Network with Sliding-Window Approach

Authors:

Matsilele Mabaso, Daniel Withey and Bhekisipho Twala

Abstract: Robust spot detection in microscopy image analysis is a critical prerequisite in many biomedical applications. Various approaches that automatically detect spots have been proposed to improve the analysis of biological images. In this paper, we propose an approach based on a Convolutional Neural Network (conv-net) that automatically detects spots using a sliding-window approach. In this framework, a supervised CNN is trained to identify spots in image patches. A sliding window is then applied to test images containing multiple spots, and each window is passed to the CNN classifier to check whether it contains a spot. The resulting detections from multiple windows are post-processed with overlap suppression to remove duplicates. The proposed approach was compared to two other popular conv-nets, namely GoogLeNet and AlexNet, using two types of synthetic images. The experimental results indicate that the proposed methodology provides fast spot detection with precision, recall and F-score values comparable to those of state-of-the-art pre-trained conv-net methods. This demonstrates that, rather than training a conv-net from scratch, fine-tuned pre-trained conv-net models can be used for the task of spot detection.
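The sliding-window-plus-suppression pipeline can be sketched as follows, with a trivial mean-intensity score standing in for the trained CNN classifier (window size, stride, and threshold are illustrative, not the paper's settings):

```python
import numpy as np

def sliding_window_detect(img, win=5, stride=1, score_fn=None, thresh=0.5):
    """Slide a window over the image, score each patch with a classifier
    (a CNN in the paper; any patch-scoring function here), and keep
    windows scoring above the threshold as candidate detections."""
    if score_fn is None:
        score_fn = lambda patch: patch.mean()  # toy stand-in for the CNN
    dets = []
    for r in range(0, img.shape[0] - win + 1, stride):
        for c in range(0, img.shape[1] - win + 1, stride):
            s = score_fn(img[r:r + win, c:c + win])
            if s > thresh:
                dets.append((s, r, c))
    return dets

def suppress_overlaps(dets, win=5):
    """Greedy overlap suppression: keep the highest-scoring window and
    drop any remaining window that overlaps a kept one."""
    kept = []
    for s, r, c in sorted(dets, reverse=True):
        if all(abs(r - kr) >= win or abs(c - kc) >= win for _, kr, kc in kept):
            kept.append((s, r, c))
    return kept
```

On an image with two well-separated bright spots, the detector fires on several overlapping windows per spot and the suppression step reduces them to one detection each.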

Short Papers
Paper Nr: 8
Title:

Brain Tumor Segmentation in Magnetic Resonance Images using Genetic Algorithm Clustering and AdaBoost Classifier

Authors:

Gustavo C. Oliveira, Renato Varoto and Alberto Cliquet Jr.

Abstract: We present a technique for automatic brain tumor segmentation in magnetic resonance images, combining a modified version of a Genetic Algorithm Clustering method with an AdaBoost classifier. In a group of 42 FLAIR images, segmentations produced by the algorithm were compared to ground truth produced by radiologists. The mean Dice similarity coefficient (DSC) reached by the algorithm was 70.3%. In most cases, the AdaBoost classifier increased the quality of the segmentation, improving the DSC by about 10% on average. Our implementation of the Genetic Algorithm Clustering method improves on the original method: the use of a fixed, small number of groups and a smaller population requires less computational effort, and adaptive restriction in the initial segmentation is achieved by using the information of the groups with the highest and second-highest mean intensities. By exploiting the intensity and spatial information of the pixels, the AdaBoost classifier improved the segmentation results.
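The Dice similarity coefficient used for evaluation above is a straightforward overlap measure between two binary masks, DSC = 2|A ∩ B| / (|A| + |B|):

```python
import numpy as np

def dice_coefficient(seg, truth):
    """Dice similarity coefficient between two binary masks."""
    seg = np.asarray(seg, bool)
    truth = np.asarray(truth, bool)
    denom = seg.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, truth).sum() / denom
```

A DSC of 1 means the algorithm's mask and the radiologists' ground truth coincide exactly; 0 means no overlap.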

Paper Nr: 13
Title:

Multispectral 3D Surface Scanning System RoScan and its Application in Inflammation Monitoring and Quantification

Authors:

Adam Chromy

Abstract: This paper presents the experimental multispectral 3D surface scanning system RoScan, which captures 3D models of a surface containing a spatial representation of the object together with the colour, temperature and roughness of each point of the surface. Such models are provided with an accuracy of up to 0.12 mm and a thermal resolution of 0.05 °C, which makes the system suitable for 3D thermal body scanning in medicine. Basic principles, parameters and functional capabilities are discussed, and tools developed for data analysis are presented. The RoScan system is suitable for the early detection of inflamed regions and their objective quantification. It can also be used to evaluate treatment suitability or for monitoring during a recovery process. To show this, a case study is presented in which inflammation related to eczema caused by an allergic reaction is monitored. The development of the inflammation is studied using RoScan during eczema growth and after the application of two different external dermatologic agents: Protopic® 0.1% topical ointment and an ointment of shea butter and coconut oil. On this particular subject, the measured characteristics demonstrated a stronger effect of Protopic® 0.1% on eczema healing, as the inflammation in the area treated with this agent started to recover earlier and culminated at a lower temperature gradient than with the second ointment.

Paper Nr: 19
Title:

Algorithm for Simple Automated Breast MRI Deformation Modelling

Authors:

Marta Danch-Wierzchowska, Damian Borys and Andrzej Swierniak

Abstract: The increasing incidence of breast cancer has driven the development of patient-specific treatment planning procedures. The most effective tool for breast cancer visualisation is Magnetic Resonance Imaging (MRI). However, MRI scans represent patient data in the prone position, with the breast placed in signal enhancement coils, while other procedures, e.g. surgery and PET-CT (Positron Emission Tomography fused with Computed Tomography), are performed with the patient in the supine position. The larger the breast, the more its shape differs between patient positions, which affects its internal structure and the tumour location. In this paper, we present our method for automated breast model deformation, which is based on a prone MRI dataset. The proposed algorithm obtains a reliable breast model in the supine position in a few simple steps, without manual intervention.

Paper Nr: 21
Title:

Semi-automatic CT Image Segmentation using Random Forests Learned from Partial Annotations

Authors:

Oldřich Kodym and Michal Španěl

Abstract: Human tissue segmentation is a critical step not only in the visualization and diagnostic process but also in pre-operative planning and custom implant engineering. Manual segmentation of three-dimensional data obtained through CT scanning is a very time-demanding task for clinical experts, and therefore automation of this process is required. Results of fully automatic approaches often lack the required precision in cases of non-standard treatment, which is often when computer planning matters most, and thus semi-automatic approaches requiring a certain level of expert interaction are being designed. This work presents a semi-automatic 3D segmentation method applicable to arbitrary tissue that takes several manually annotated slices as input. These slices are used to train a random forest classifier to predict the annotation for the remaining part of the CT scan, and the final segmentation is obtained using the graph-cut method. The precision of the proposed method is evaluated on various CT datasets against fully expert-annotated segmentations of these tissues. The Dice coefficient of overlap is 0.976 ± 0.014 for hard tissue segmentation and 0.978 ± 0.008 for kidney segmentation, achieving results competitive with task-specific methods.
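The overall idea, learn from a few annotated slices and predict the rest of the volume, can be sketched with a per-voxel intensity model standing in for the random forest (the graph-cut refinement is omitted, and all names are illustrative):

```python
import numpy as np

def train_intensity_model(annotated_slices, annotations):
    """Learn per-class intensity statistics from the annotated slices
    (a crude stand-in for the random forest in the paper)."""
    feats = np.concatenate([s.ravel() for s in annotated_slices])
    labels = np.concatenate([a.ravel() for a in annotations])
    return {c: feats[labels == c].mean() for c in np.unique(labels)}

def predict_slice(model, slc):
    """Label each voxel with the class whose mean intensity is nearest;
    the paper refines such unary predictions with a graph cut."""
    classes = sorted(model)
    means = np.array([model[c] for c in classes])
    idx = np.abs(slc.ravel()[:, None] - means[None, :]).argmin(axis=1)
    return np.array(classes)[idx].reshape(slc.shape)
```

A real random forest would use richer per-voxel features (texture, neighbourhood statistics) rather than raw intensity, but the train-on-partial-annotation, predict-the-rest workflow is the same.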

Paper Nr: 26
Title:

Video-based Patient Monitoring System - Application of the System in Intensive Care Unit

Authors:

Vladimir Kublanov, Konstantin Purtov and Mikhail Kontorovich

Abstract: The paper presents a video-based monitoring system to assess physiological parameters and patient state in the intensive care unit. It measures thoracic and abdominal breathing movements, remote plethysmography signals, tissue perfusion, patient activity and changes in psycho-emotional state. Thus, the system provides a comprehensive contactless assessment of patient state. The system works under the usual illumination conditions of an intensive care unit and consists of a personal computer with specialized software and two low-cost Logitech C920 webcams with RGB sensors (8 bit per channel), 30 Hz sampling frequency and 640x480 pixel resolution. The webcams were placed at a distance of 80 cm above the patient’s body. The software provides automatic assessment of psychophysiological parameters and determination of the following patterns: heart rate, heart rate variability, asystole and arrhythmias, breathing rate, spontaneous breathing recovery, breathing muscle tone, patient consciousness recovery, motor activity and control of ventilation parameters. The proposed system can be used as an additional diagnostic tool alongside anesthesia equipment for non-invasive patient monitoring in the intensive care unit.

Paper Nr: 34
Title:

CNN based Mitotic HEp-2 Cell Image Detection

Authors:

Krati Gupta, Arnav Bhavsar and Anil K. Sao

Abstract: We propose a Convolutional Neural Network (CNN) framework to detect individual mitotic HEp-2 cells against non-mitotic cells, which is important for Computer-Aided Detection (CAD) systems for auto-immune disease diagnosis. A key aspect of detecting mitotic HEp-2 cells is capturing the distinctive appearance differences between the mitotic and non-mitotic classes, which are represented through features learned by a pre-trained CNN. We especially focus on gauging the effectiveness of features learned at different CNN layers, combined with a traditional Support Vector Machine (SVM) classifier. We also address the sample skew between the classes. Importantly, we compare and discuss the performance of the learned feature representations, and show that some of these features are indeed very effective in discriminating mitotic and non-mitotic cells. We demonstrate high classification performance using the proposed framework.

Posters
Paper Nr: 9
Title:

The Face Recognition Processes - Neurofuzzy Approach

Authors:

Wojciech Biniek, Edward Puchała and Maria Bujnowska-Fedak

Abstract: The paper deals with a novel neuro-fuzzy approach to the face recognition problem. The proposed method consists of two steps. The first is image preprocessing (face detection and landmark extraction). In particular, this concerns points on the face such as the corners of the mouth and points along the eyebrows, eyes, nose and jaw. In the second step, based on the extracted features, a neuro-fuzzy system recognizes to whom the detected face belongs. Classical fuzzy controllers need expert knowledge to define the set of rules and/or the defuzzification process. The main concept of the neuro-fuzzy approach is to replace the expert with neural networks. This paper shows that a neuro-fuzzy system can handle the face recognition process and provide better results than other popular techniques.

Paper Nr: 11
Title:

Learning Rigid Image Registration - Utilizing Convolutional Neural Networks for Medical Image Registration

Authors:

J. M. Sloan, K. A. Goatman and J. P. Siebert

Abstract: Many traditional computer vision tasks, such as segmentation, have seen large step-changes in accuracy and/or speed with the application of Convolutional Neural Networks (CNNs). Image registration, the alignment of two or more images to a common space, is a fundamental step in many medical imaging workflows. In this paper we investigate whether these techniques can also bring tangible benefits to the registration task. We describe and evaluate the use of convolutional neural networks (CNNs) for both mono- and multi-modality registration and compare their performance to more traditional schemes, namely multi-scale, iterative registration. The paper also investigates incorporating inverse consistency of the learned spatial transformations to impose additional constraints on the network during training, and examines any resulting benefit in accuracy. The approaches are validated with a series of artificial mono-modal registration tasks utilizing T1-weighted MR brain images from the Open Access Series of Imaging Studies (OASIS) study and the IXI brain development dataset, and a series of real multi-modality registration tasks using T1-weighted and T2-weighted MR brain images from the 2015 Ischemic Stroke Lesion Segmentation (ISLES) challenge. The results demonstrate that CNNs give excellent performance for both mono- and multi-modality head and neck registration compared to the baseline method, with significantly fewer outliers and lower mean errors.

Paper Nr: 12
Title:

Robust Plant Segmentation from Challenging Background with a Multiband Acquisition and a Supervised Machine Learning Algorithm

Authors:

Taha Jerbi, Aaron Velez Ramirez and Dominique Van Der Straeten

Abstract: Remote sensing through imaging forms the basis of non-invasive plant phenotyping and has numerous applications in fundamental plant science as well as in agriculture. Plant segmentation is a challenging task, especially when the image background presents difficulties such as the presence of algae and moss or, more generally, when the background contains large colour variability. In this work, we present a method based on multiband images to construct a machine learning model that separates the plant from a background containing soil and algae/moss. Our experiments show that the method successfully separates plant parts from the image background. The method improves on previous methods proposed in the literature, especially on data containing a complex background.

Paper Nr: 20
Title:

Internet Addiction: Features of Neuroimaging Diagnosis

Authors:

A. Efimtsev, B. Litvintcev, A. Sokolov, A. Petrov, O. Shemchuk, N. Semibratov and V. Fokin

Abstract: Internet addiction (netoholism, dependence on social networks) is one of the most common types of non-chemical addiction. Existing research indicates that the consequences of non-chemical addictions negatively affect mental and somatic health; nevertheless, the mechanisms of their occurrence and pathogenesis have not yet been adequately studied. The purpose was to assess functional changes in the brain of young people who are addicted to the Internet. fMRI in the group of subjects showed activation sites in response to the presentation of visual stimuli (static images) located in the cerebellar hemispheres, in the occipital region of both hemispheres, in the superior and inferior parietal lobules, in the angular gyrus, in the superior and middle frontal gyri, at the temporal pole, and in the left precentral area. The results indicate that heavy use of the Internet and social networks can contribute to the development of social maladaptation with subsequent negative effects on the processes of neuromediation. The involvement of particular brain structures in this process suggests a likely influence of the Internet on the development of non-chemical addiction.

Paper Nr: 27
Title:

An Automated Method for Generating Training Sets for Deep Learning based Image Registration

Authors:

Masato Ito and Fumihiko Ino

Abstract: In this paper, we propose an automated method for generating the training sets required for deep learning based image registration. The proposed method minimizes the effort of supervised learning by automatically generating thousands of training sets from a small number of seed sets, i.e., tens of deformation vector fields obtained with a conventional registration method. To automate this procedure, we solve an inverse problem instead of a direct problem: we produce a floating image by applying a deformation vector field F to a reference image and let the inverse of F be the ground truth for this image pair. In experiments, the proposed method took 33 minutes to produce 169,890 training sets from approximately 670,000 2-D magnetic resonance (MR) images and 30 seed sets. We further trained GoogLeNet with these training sets and performed holdout validation to compare the proposed method with the conventional registration method in terms of recall and precision. As a result, the proposed method increased recall and precision from 50% to 80%, demonstrating the impact of deep learning on image registration problems.
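The inverse-problem trick can be illustrated in 1-D: warp a reference signal with a field F to synthesize the floating image, and label the pair with the inverse of F. This sketch assumes small deformations, where -F serves as a first-order approximation of the true inverse (the paper works with full 2-D deformation vector fields):

```python
import numpy as np

def warp_1d(image, field):
    """Apply a deformation field (per-sample displacement) to a 1-D image
    using linear interpolation; out-of-range samples are clamped."""
    coords = np.arange(len(image)) + field
    return np.interp(coords, np.arange(len(image)), image)

def make_training_pair(reference, field):
    """Generate a (floating image, ground-truth field) pair: warp the
    reference with F, and use -F as the label for the pair."""
    floating = warp_1d(reference, field)
    return floating, -field
```

Applying the ground-truth field back to the floating image recovers the reference (away from the clamped boundary), which is exactly the property a registration network is trained to reproduce.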

Paper Nr: 30
Title:

Efficacy of Cone Beam Computed Tomography (CBCT) in Diagnosis of Disease Lesions in Paranasal Sinuses

Authors:

Edward Kijak

Abstract: Different techniques of X-ray imaging often confirm the suspected diagnosis, but not infrequently redirect the diagnostic process to other areas. Modern ultrahigh-resolution volumetric tomography, also called cone beam computed tomography (CBCT), is one of the most innovative techniques and is able to visualise anatomical structures that conventional techniques cannot. Differential diagnosis of TMJ dysfunction is particularly difficult due to the number of factors that influence the generation of symptoms. Overlapping symptoms that mask the main disease mean that frequently, without additional examinations, it is not possible to describe the type and extent of the disease unequivocally. The study assesses the usefulness of volumetric tomography (CBCT) in the incidental detection of maxillary sinus lesions in patients with temporomandibular joint dysfunction. The analysis was performed on the basis of 249 volumetric tomography studies. The facial part of the skull was imaged with an i-CAT Next Generation (ISI) scanner with a large field of view (FOV) of 17 cm x 23 cm. It was found that a significant number of patients (almost half) with TMD show changes in the paranasal sinuses. Based on these observations, the relevance and legitimacy of the tested technique as an aid in diagnosing stomatognathic system diseases was analysed.

Paper Nr: 31
Title:

Evaluation of Radiomic Features Stability When Deformable Image Registration Is Applied

Authors:

Kuei-Ting Chou, Kujtim Latifi, Eduardo G. Moros, Vladimir Feygelman, Tzung-Chi Huang, Thomas J. Dilling, Bradford Perez and Geoffrey G. Zhang

Abstract: Radiomic features are currently being evaluated as potential imaging biomarkers. Deformable image registration (DIR) is now routinely applied in many medical imaging applications. Usually, DIR is applied in one of two ways: a) mapping the surface of a contoured volume, or b) mapping the image intensities. This study investigated radiomic feature stability when DIR is applied in these two ways, using four-dimensional computed tomography (4DCT) data. DIR was applied between the inspiration and expiration phases of 4DCT datasets. Radiomic features were extracted from (1) the expiration phases of 25 lung cancer 4DCT datasets within the contoured tumor volumes, (2) the inspiration phases with the mapped tumor volumes, and (3) the inspiration phases deformed to the corresponding expiration phases of the original contoured volumes. The mean variation and the concordance correlation coefficient (CCC) between these 3 sets of features were analyzed. Many features were found to be unstable (mean variation > 50% or CCC < 0.5) when DIR was applied in either way. Caution is needed in radiomic feature applications when DIR is necessary.
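The concordance correlation coefficient used as the stability criterion above (Lin's CCC) can be computed directly from the paired feature values:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two series of
    feature values: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Unlike Pearson correlation, it penalizes both location and scale shifts."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

A feature reproduced exactly across the two DIR mappings gives CCC = 1, while a systematic offset between the mapped and original feature values drives the CCC toward 0, which is how it falls below the paper's 0.5 stability threshold.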

Paper Nr: 33
Title:

Dense 3D Reconstruction of Endoscopic Polyp

Authors:

Ankur Deka, Yuji Iwahori, M. K. Bhuyan, Pradipta Sasmal and Kunio Kasugai

Abstract: This paper proposes a model for 3D reconstruction of polyps in endoscopic scenes. The 3D shape of a polyp enables better understanding of the medical condition and can help predict abnormalities like cancer. While there has been significant progress in monocular shape recovery, the same hasn’t been the case with endoscopic images due to challenges like specular regions. We take advantage of the advances in shape recovery and suitably apply these, with modifications, to the scenario of endoscopic images. The model operates on two nearby video frames. ORB features are detected and tracked to compute camera motion and an initial rough depth estimate. This is followed by a dense pixelwise operation which yields a dense depth map of the scene. Our method shows positive results and strong correspondence with the ground truth.
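The initial rough depth step can be caricatured with the rectified-stereo parallax relation z = f·b/d. This is a deliberate simplification: the paper estimates general camera motion from tracked ORB features rather than assuming a pure horizontal translation between the two frames:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rough depth (metres) of a feature from its parallax between two
    frames, under a rectified-stereo approximation: z = f * b / d.
    disparity_px: pixel shift of the tracked feature between frames;
    focal_px: focal length in pixels; baseline_m: camera translation."""
    return focal_px * baseline_m / disparity_px
```

Features that shift more between the two frames are closer to the camera; such sparse per-feature depths seed the subsequent dense pixelwise refinement.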