Supervised learning, in which an algorithm is trained on manually labeled examples, is a highly successful technique for the automated segmentation of biomedical images. A major drawback of such techniques, however, is that they assume the training images to be representative of the test images to be segmented. Their performance therefore deteriorates when they are trained on images obtained with different scanners or scanning parameters than the test images: voxel intensity distributions differ between scanners, which can greatly degrade segmentation performance. Yet ample training images (with manual segmentations) acquired with the same scanner and scanning parameters as the test image may not always be available.
In the project “Transfer Learning in Biomedical Image Analysis” we investigate whether so-called transfer-learning techniques can solve the problem of training on unrepresentative images. Transfer learning is a relatively young field of machine learning comprising techniques that handle differences between training and test data. We investigate transfer-learning methods that compensate for distribution differences between training and test data at different stages of the classification framework. So far, we have investigated transfer classifiers, weighting of training images, transfer kernel learning, and a feature-space transformation that maps the feature distributions of the training images onto those of the test images. On a variety of MR brain-segmentation tasks, we showed that the proposed methods can bring substantial improvements when training and test data differ in scanner or patient characteristics.
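As an illustration of the feature-space transformation idea, a minimal sketch is given below. It is not the project's actual method; it simply shifts and rescales each training feature so that its mean and standard deviation match those computed from the (unlabeled) test features, which is one simple way to reduce scanner-induced distribution differences. The function name `match_feature_distributions` and the toy intensity values are assumptions for illustration only.

```python
import numpy as np

def match_feature_distributions(X_train, X_test):
    """Shift and scale each training feature so its mean and standard
    deviation match those of the (unlabeled) test features.

    X_train, X_test: arrays of shape (n_samples, n_features), e.g. voxel
    intensity features extracted from images of two different scanners.
    """
    mu_tr, sd_tr = X_train.mean(axis=0), X_train.std(axis=0)
    mu_te, sd_te = X_test.mean(axis=0), X_test.std(axis=0)
    sd_tr = np.where(sd_tr == 0, 1.0, sd_tr)  # guard against constant features
    # Standardize with training statistics, then rescale to test statistics.
    return (X_train - mu_tr) / sd_tr * sd_te + mu_te

# Toy example: features whose intensity distribution differs between
# a "training scanner" and a "test scanner" (values are illustrative).
rng = np.random.default_rng(0)
X_train = rng.normal(loc=100.0, scale=15.0, size=(500, 3))
X_test = rng.normal(loc=60.0, scale=10.0, size=(500, 3))
X_adapted = match_feature_distributions(X_train, X_test)
```

After this transformation, a classifier trained on `X_adapted` sees feature statistics consistent with the test data, rather than with the original training scanner.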