Multimodal Learning for Classification of Solar Radio Spectrum

Chen, Z., Ma, L., Xu, L., Weng, Y. and Yan, Y.H. (2015) Multimodal Learning for Classification of Solar Radio Spectrum. In: IEEE International Conference on Systems, Man, and Cybernetics (SMC), City University of Hong Kong, 09-12 Oct 2015.

Full-text not available from this repository.


This paper presents the first attempt to use a multimodal learning method for representation learning of solar radio spectra. The solar radio signals sensed from different frequency channels, which exhibit different characteristics, are treated as different modalities. We employ a multimodal neural network to learn representations of the solar radio spectrum that distinguish the differences between modalities and capture the interactions among them. The original solar radio spectra are first pre-processed (normalization, denoising, channel competition, etc.) before being fed into the multimodal learning network. Experimental results demonstrate that the proposed multimodal learning network learns the representation of the solar radio spectrum more effectively and improves classification accuracy.
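The core idea described above — encoding each frequency channel with its own sub-network and then fusing the per-modality representations — can be sketched as follows. This is a minimal illustrative forward pass, not the paper's actual architecture: the layer sizes, the tanh activations, the three-way softmax output, and all function names are assumptions made for the example.

```python
import math
import random

random.seed(0)

def linear(x, w, b):
    """Dense layer y = W x + b, with W stored as a list of rows."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def tanh_vec(x):
    return [math.tanh(v) for v in x]

def init(rows, cols):
    """Small random weight matrix (illustrative initialization)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def multimodal_forward(modalities, encoders, fusion_w, fusion_b):
    """Encode each modality separately, concatenate, then fuse and classify."""
    encoded = []
    for x, (w, b) in zip(modalities, encoders):
        # Per-modality encoder: each frequency channel gets its own weights.
        encoded.extend(tanh_vec(linear(x, w, b)))
    # Fusion layer sees all modalities at once, so it can model interactions.
    logits = linear(encoded, fusion_w, fusion_b)
    m = max(logits)  # numerically stable softmax over three illustrative classes
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [v / s for v in exps]

# Two toy "frequency channel" spectra of 8 samples each (random stand-ins
# for pre-processed solar radio spectrum slices).
low_band = [random.random() for _ in range(8)]
high_band = [random.random() for _ in range(8)]
encoders = [(init(4, 8), [0.0] * 4), (init(4, 8), [0.0] * 4)]
probs = multimodal_forward([low_band, high_band], encoders, init(3, 8), [0.0] * 3)
print([round(p, 3) for p in probs])  # class probabilities, summing to 1
```

In a real system the encoders and fusion layer would be trained jointly by backpropagation on labelled spectra; the sketch only shows how the modality-specific and shared stages fit together.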

Item Type: Conference or Workshop Item (UNSPECIFIED)
Subjects: Research Publications
Departments: College of Physical and Applied Sciences > School of Computer Science
Date Deposited: 03 Mar 2016 04:41
Last Modified: 11 Mar 2016 03:16
ISSN: 1062-922X
URI: http://e.bangor.ac.uk/id/eprint/6291
DOI: 10.1109/SMC.2015.187
Publisher: IEEE
