To address the difficulty of distinguishing feature importance caused by the close coupling between convolutional neural network models and their input data, a feature dimensionality augmentation method is proposed that analyzes the importance of input features from the network's output. First, each feature of the input sample is assigned a vector from a standard orthonormal basis of a high-dimensional Euclidean space, yielding a dimensionality-augmented representation of the input. Second, the computation of the convolutional neural network is extended to this high-dimensional Euclidean space and applied to the augmented representation. Finally, the correspondence between the orthonormal basis vectors and the input features in the computed output is analyzed to determine the influence weight of each input feature on the result. Experiments show that the weights obtained in this way effectively reflect the influence of the input features on the convolutional neural network.
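The abstract does not spell out the full algorithm, but the core idea can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's exact construction: a toy two-layer dense network stands in for the CNN, the "rescale" rule used to carry the per-feature decomposition through ReLU is our own choice, and all variable names (W1, W2, lifted, y_lift, and so on) are hypothetical. The sketch lifts each input feature x_i onto its own orthonormal basis vector e_i, lets the linear layers distribute exactly over this decomposition, and reads the surviving coefficients at the output as influence weights.

```python
import numpy as np

# Illustrative sketch only: a toy dense network stands in for the CNN, and
# the ReLU "rescale" rule below is an assumption, not the paper's exact rule.

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(size=(n_hid, n_in))   # hypothetical layer weights
W2 = rng.normal(size=(n_out, n_hid))
x = rng.normal(size=n_in)             # one input sample

# Dimensionality augmentation: feature i is lifted to x_i * e_i, so row i of
# the lifted input tracks the contribution carried by basis vector e_i.
lifted = np.diag(x)                   # shape (n_in, n_in)

# Linear layers distribute exactly over the per-feature decomposition.
h_lift = lifted @ W1.T                # (n_in, n_hid)

# Carry the decomposition through ReLU by rescaling each unit's contributions
# so they still sum to the true activation (assumed rule).
z = W1 @ x
scale = np.divide(np.maximum(z, 0.0), z, out=np.zeros_like(z), where=z != 0)
h_lift *= scale                       # broadcasts over hidden units

y_lift = h_lift @ W2.T                # (n_in, n_out)

# Bookkeeping check: summing contributions over all features recovers the
# ordinary forward pass.
y = W2 @ np.maximum(z, 0.0)
assert np.allclose(y_lift.sum(axis=0), y)

# Influence weight of input feature i on output c, read off from the
# coefficient of basis vector e_i in the output decomposition.
weights = np.abs(y_lift)
print(weights)
```

The assertion checks the bookkeeping: summing the per-feature contributions at the output reproduces the ordinary forward pass, so each row of y_lift can be read as the share of the result carried by one input feature's basis vector.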