WP5: Data Flow and Quality
The DoMore! project handles a huge number of images. The success of the project depends on these images having sufficient quality to ensure that the algorithms work.
Our priority now is on understanding and controlling the various factors that influence the image quality of the scanner, such as magnification, resolution, image compression, dark current, glare, diffraction and, not least, focus. A main challenge is to understand and mimic how a human distinguishes a focused from an unfocused image.
The scanner company is responsible for producing images with reliable and stable quality, but experience has taught us that scanners more or less frequently produce images that are partially or totally out of focus. Manual control detects many of these cases. However, an automatic algorithm could increase the likelihood of detecting unfocused images and thus ensure a better result in the final analysis. In Tasks 5.1 and 5.2, focusing quality is the common denominator. The effort has therefore been to establish an automatic grading of the focus quality of images produced by microscopes and scanners. Two different approaches have been used.
For microscopy imaging, we have the possibility of adjusting the image focus, i.e., performing autofocus. In autofocusing, a quick search is done to find the lens position that gives the best-focused image according to some criterion. This can be done sequentially for a small increment in the lens position, or iteratively for decreasing increments, moving the lens back and forth to the position where a given focus measure of the image reaches its maximum. Even though autofocusing has been a long-standing topic, most of the papers published on the subject are devoted to proposing a new method that is marginally different from previous ones and testing the performance on just a few selected images. A few surveys and comparisons of focus measures exist, but the focusing problem is generally tied to the imaging modality and field of application.
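The coarse-to-fine lens search described above can be sketched as follows. Here `capture` is a hypothetical function standing in for the microscope's lens-positioning and acquisition interface, and the gradient-variance criterion is just one of many possible focus measures:

```python
import numpy as np

def focus_measure(image):
    """Variance of the gradient magnitude; higher means sharper.
    A placeholder criterion for illustration only."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    return float(np.var(np.hypot(gx, gy)))

def autofocus(capture, z_min, z_max, coarse_step, refinements=3):
    """Coarse-to-fine search for the lens position maximizing a focus
    measure. `capture(z)` is a hypothetical callable returning the image
    acquired at lens position z; the real interface depends on the
    microscope driver."""
    step = coarse_step
    lo, hi = z_min, z_max
    best_z = lo
    for _ in range(refinements):
        positions = np.arange(lo, hi + step / 2, step)
        scores = [focus_measure(capture(z)) for z in positions]
        best_z = float(positions[int(np.argmax(scores))])
        # Narrow the interval around the current best and halve the step.
        lo, hi = best_z - step, best_z + step
        step /= 2.0
    return best_z
```

The same loop structure applies whether the focus measure is gradient-based or Laplacian-based; only `focus_measure` changes.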
Our task is to find an optimally focused microscopy image of cells, containing structures at several scales with a variety of grey level gradients, where the final image processing step is a statistical and/or structural texture analysis of the grey level morphology of the cell nuclei. Thus, we not only need to find the “best” focusing algorithm, but also to investigate whether the choice of focusing algorithm can introduce selection effects in the subsequent texture analysis.
To determine the performance of various focusing metrics, a thorough literature study was performed. The first draft of a report on different focusing algorithms suitable for cell nuclei microscopy imaging has been completed. Based on the report, a pilot study has been carried out. In the study, five of the most promising focusing algorithms were implemented and tested on sets of microscopy images where, for the same sample, the focus had been varied in a controlled way over a large number of possible focus depths. The preliminary results show that several of the metrics performed well, i.e., identified a focus depth that coincided with the one picked by a trained user, but that the Energy of Laplacian may be the most promising one. To continue this study, new data needed to be recorded.
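As an illustration of the kind of metric studied, a minimal Energy of Laplacian measure might look like the following. This is a sketch using a plain 3×3 Laplacian on the image interior; the implementations tested in the pilot study may differ in filter choice and normalization:

```python
import numpy as np

def energy_of_laplacian(image):
    """Energy of Laplacian focus measure: the sum of squared responses
    of a discrete Laplacian filter. Sharper images have stronger second
    derivatives and therefore a higher score."""
    img = np.asarray(image, dtype=float)
    # 3x3 Laplacian applied via shifted copies (valid interior only).
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
           + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
    return float(np.sum(lap ** 2))
```

A defocused version of the same field of view yields a markedly lower score than the focused one, which is what makes the measure usable for ranking focus depths.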
In DoMore!, images from more than 60,000 slides are scanned. Ideally, all of these images are of high quality, with high contrast and perfect focus, as they form the foundation of further analysis. Based on the knowledge gained in Task 5.1, we wanted to develop a tool to monitor the image quality. The scanner produces an image, and the task was to determine whether this image was unfocused. A trained human can reliably establish this in a couple of seconds. Having a computer program do the same is more challenging.
A larger dataset was collected to evaluate various metrics and ideas. It consisted of 90 pairs of scans collected on three different scanners. Each pair consisted of an unfocused scan (detected manually) and the same slide rescanned and verified to have acceptable focus quality. From each scanner (Aperio, Hamamatsu XR and Hamamatsu NZ), ten scans from each of three different tumor types (prostate, colorectal and lung) were collected. The scans were manually annotated to identify areas of different tissue types (tumor, connective/muscle/fatty tissue, necrotic areas). The annotation was performed at up to 20 times zoom level.
The current status of the project is that, so far, no metric has been found that can separate all instances of focused from unfocused images. Our priority now is on understanding and mimicking how a human distinguishes a focused from an unfocused image.
The desired outcome of this project is an algorithm that determines whether a given scan is of good enough quality to be included in the subsequent analysis, or whether the slide should be rescanned or removed from the analysis. The algorithm should work both for full scans and for tiles, and ideally be independent of scanner and tissue type. We have a good indication that it is possible to establish such an algorithm for a given scanner and tissue type. Whether it can be made independent of scanner and tissue type remains to be proven.
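A tile-based decision rule of the kind aimed for could be sketched as follows. The focus measure, the score threshold and the 5% cut-off are all illustrative placeholders; in practice they would need calibration per scanner and tissue type:

```python
import numpy as np

def grade_scan(tiles, focus_measure, threshold, max_bad_fraction=0.05):
    """Flag a scan for rescanning if too many tiles fall below a focus
    threshold.

    `tiles` is an iterable of tile images, `focus_measure` any function
    mapping a tile to a sharpness score. The threshold and the 5%
    cut-off are illustrative values, not calibrated ones.
    """
    scores = np.array([focus_measure(t) for t in tiles])
    bad_fraction = float(np.mean(scores < threshold))
    return {"bad_fraction": bad_fraction,
            "rescan": bad_fraction > max_bad_fraction}
```

Working per tile keeps the rule applicable to partial defocus, where only a region of an otherwise acceptable scan is blurred.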
In the following, we will continue working on a global algorithm for scanned slides. When this is established, we will return to microscopy images and take up again our project on auto-focusing algorithms. We will then investigate the relationship between focusing measures and texture features.
Another important task will be to closely examine all the other parameters that influence image quality, with the goal of reducing the coefficient of variation of the integrated optical density measured in scanner images, which is 1.5 times higher than for microscope images.
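For reference, the coefficient of variation is simply the standard deviation divided by the mean, which makes it a scale-free way to compare IOD stability across imaging platforms. A minimal sketch, with synthetic numbers in place of real measurements:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = standard deviation / mean. Being dimensionless, it allows
    spread to be compared between measurements on different scales,
    e.g. IOD from scanner images versus microscope images."""
    v = np.asarray(values, dtype=float)
    return float(np.std(v) / np.mean(v))
```

A CV of scanner IOD measurements 1.5 times that of the microscope values would thus indicate proportionally larger relative spread, independent of any absolute intensity offset between the two platforms.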
Oslo, 15th February 2019