ABSTRACT
In many applications of radar imagery, a fundamental problem is the analysis and interpretation of the images provided, in particular the recognition of moving and/or fixed targets. This task has become more difficult due to the large volume of radar data, which has led to the use of automatic processing and target recognition methods. The aim of this study is to explore data fusion in SAR (Synthetic Aperture Radar) image classifiers. To this end, we propose a new approach that combines three CNN (Convolutional Neural Network) architectures with several fusion rules. First, we train three deep learning architectures, namely a basic CNN, Xception, and AlexNet. Then, two fusion techniques are proposed: the first applies the majority rule, and the second uses a neural network to combine the decision outputs of the three elementary classifiers into a final decision. To evaluate and validate the proposed approach, the MSTAR (Moving and Stationary Target Acquisition and Recognition) dataset is used. Both fusion techniques improve the recognition rate, with a final accuracy of 99.59% for the majority rule and 99.51% for the neural network-based rule, surpassing the accuracy of each individual CNN.
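As a rough illustration of the majority-rule fusion described above, the sketch below combines the hard (argmax) decisions of three classifiers by per-sample voting. It is a minimal sketch under assumptions: the function name, the class labels, and the example prediction arrays are hypothetical and not taken from the paper.

```python
import numpy as np

def majority_vote(pred_cnn, pred_xception, pred_alexnet):
    """Fuse per-sample class predictions from three elementary classifiers.

    Each argument is a 1-D array of predicted class indices
    (e.g. the argmax over each CNN's softmax output).
    Ties are broken toward the lowest class index by np.bincount.
    """
    preds = np.stack([pred_cnn, pred_xception, pred_alexnet], axis=0)
    # For each test sample (column), count votes per class and keep the winner.
    return np.array([np.bincount(sample).argmax() for sample in preds.T])

# Hypothetical example: 4 test samples, class indices for 3 MSTAR-style targets.
p1 = np.array([0, 1, 2, 1])   # basic CNN
p2 = np.array([0, 1, 1, 1])   # Xception
p3 = np.array([2, 1, 2, 0])   # AlexNet
print(majority_vote(p1, p2, p3))  # -> [0 1 2 1]
```

The neural network-based rule would instead feed the three decision outputs (or their score vectors) into a small trainable network that produces the final class, rather than counting votes.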