Journal article
Multi-expert fusion: An ensemble learning framework to segment 3D TRUS prostate images
Abstract:

Purpose: Prostate segmentation of 3D TRUS images is a prerequisite for several diagnostic and therapeutic applications. Unfortunately, this difficult task suffers from high intra- and inter-observer variability, even for experienced urologists and radiologists. Automatic segmentation algorithms could therefore provide significant clinical added value.

Methods: This paper introduces a new deep segmentation architecture consisting of two main phases: view-specific segmentation of 2D slices and their fusion. The segmentation phase is based on three segmentation networks trained in parallel on specific slice viewing directions: axial, coronal, and sagittal. The proposed fusion network is then fed with the outputs of the segmentation networks and trained to produce three confidence maps. These maps correspond to the local trust granted by the fusion network to each view-specific segmentation network. Finally, for a given slice, the segmentation is computed by combining these confidence maps with their corresponding segmentations. The 3D segmentation of the prostate is obtained by re-stacking all the segmented slices into a volume.

Results: The approach was evaluated on a database of 100 patients with several combinations of network architectures (for both the segmentation phase and the fusion phase) to show the flexibility and reliability of the framework. It was also compared to STAPLE, to a majority voting strategy, and to a direct 3D approach tested on the same database; the new method outperforms these three approaches on all evaluation criteria. Finally, the results of the Multi-eXpert Fusion (MXF) framework compare favorably with other state-of-the-art methods, even though those methods typically work on smaller databases.

Conclusions: We proposed a novel MXF framework to segment 3D TRUS images of the prostate. The main feature of this approach is the fusion of expert network results at the pixel level using computed confidence maps. Experiments conducted on a clinical database have shown the robustness and flexibility of this approach and its superiority over state-of-the-art approaches. Finally, the MXF framework demonstrated its ability to capture and preserve the underlying gland structures, particularly in the base and apex regions.
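The pixel-level fusion described above can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction, not the paper's implementation: it assumes the three expert networks emit soft probability maps and that the fusion network's raw confidence scores are normalized per pixel with a softmax (the abstract does not specify the normalization); the function name `fuse_views` and all array shapes are illustrative.

```python
import numpy as np

def fuse_views(segmentations, confidences):
    """Fuse view-specific segmentations with per-pixel confidence maps.

    segmentations: (3, H, W) soft prostate probabilities from the axial,
                   coronal, and sagittal expert networks (assumed inputs).
    confidences:   (3, H, W) raw confidence scores from the fusion network,
                   one map per expert.
    Returns a fused (H, W) probability map.
    """
    # Per-pixel softmax so the three expert weights sum to 1 at each pixel
    # (subtracting the max is a standard numerical-stability trick).
    c = np.exp(confidences - confidences.max(axis=0, keepdims=True))
    weights = c / c.sum(axis=0, keepdims=True)
    # Pixel-wise weighted combination of the expert segmentations.
    return (weights * segmentations).sum(axis=0)
```

With per-pixel weights, the fusion can locally favor whichever viewing direction is most reliable, e.g. one expert near the base and another near the apex, which a global vote such as STAPLE or majority voting cannot do.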