College of Computer and Information Technology, Xinyang Normal University, Xinyang 464000, China
Article History
Published: 2024-10-17
Issue Date: 2026-04-17
Abstract
A food recognition method based on an improved adversarial erasing technique is proposed, which progressively obtains discriminative regions. The method identifies each discriminative region using the Otsu algorithm and morphological operations, thereby reducing noise interference. To validate its effectiveness, comparative experiments were conducted on the Sushi-50, ETH Food-101, and Vireo-172 datasets against methods reported in the literature. The results demonstrate that the proposed method more effectively mitigates interference from complex backgrounds in food images, thereby improving food recognition performance. On the ETH Food-101 dataset, the method improves Top-1 and Top-5 accuracy over ResNet-50 by 2.6 and 0.8 percentage points, respectively.
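The paper's exact implementation is not reproduced here, but the region-extraction step summarized in the abstract (Otsu thresholding followed by morphological cleanup) can be sketched roughly as follows. This is a minimal NumPy sketch under two assumptions not stated in the abstract: the activation map is normalized to [0, 1], and a 3×3 structuring element is used for the morphological opening.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method on an image normalized to [0, 1]; returns the threshold
    that maximizes between-class variance over a 256-bin histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    hist = hist.astype(np.float64)
    total = hist.sum()
    cum = np.cumsum(hist)                        # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))  # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum[t], total - cum[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (cum_mean[-1] - cum_mean[t]) / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t / 255.0

def _reduce_3x3(mask, combine, init):
    """Apply a 3x3 neighborhood reduction (shared by erode/dilate)."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.full(mask.shape, init, dtype=bool)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out = combine(out, p[dy:dy + h, dx:dx + w])
    return out

def erode(mask):   # pixel survives only if its whole 3x3 neighborhood is set
    return _reduce_3x3(mask, np.logical_and, True)

def dilate(mask):  # pixel is set if any pixel in its 3x3 neighborhood is set
    return _reduce_3x3(mask, np.logical_or, False)

# Toy usage: threshold a synthetic activation map, then apply a
# morphological opening (erosion then dilation) to suppress small
# noisy responses, keeping only a compact discriminative region.
rng = np.random.default_rng(0)
cam = rng.random((7, 7))
t = otsu_threshold(cam)
region = dilate(erode(cam >= t))
print(t, region.shape)
```

The opening (erosion followed by dilation) is what removes isolated noisy activations while preserving the bulk of the thresholded region; this matches the abstract's stated goal of reducing noise interference.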
First, a class activation map (CAM) is generated. CAM is a visualization technique that highlights the regions a classification network relies on to identify the target, and it is produced from the last convolutional layer of the CNN. A training image is fed into the model, and global average pooling (GAP) is applied to the last convolutional layer to obtain a feature vector, which is connected to the model's output layer; the CAM is then computed as

$$M_c(x, y) = \sum_k w_k^c f_k(x, y)$$

where $f_k(x, y)$ is the activation of the $k$-th feature map of the last convolutional layer at spatial location $(x, y)$, and $w_k^c$ is the output-layer weight connecting feature map $k$ to class $c$.
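The CAM computation described above can be illustrated with a short NumPy sketch. The array names and shapes below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def compute_cam(feature_maps, fc_weights, class_idx):
    """Compute a class activation map.

    feature_maps: (K, H, W) activations of the last conv layer.
    fc_weights:   (num_classes, K) weights of the output layer that
                  follows global average pooling (GAP).
    class_idx:    target class c.
    """
    # Weighted sum over feature maps: M_c(x, y) = sum_k w_k^c * f_k(x, y)
    w_c = fc_weights[class_idx]                    # (K,)
    cam = np.tensordot(w_c, feature_maps, axes=1)  # (H, W)
    # Normalize to [0, 1] so the map can be visualized or thresholded
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy usage with random activations (shapes are assumptions):
rng = np.random.default_rng(0)
feats = rng.random((512, 7, 7))   # K=512 feature maps, 7x7 spatial grid
weights = rng.random((101, 512))  # e.g. 101 classes, as in ETH Food-101
cam = compute_cam(feats, weights, class_idx=3)
print(cam.shape)
```

Because GAP is linear, projecting the output-layer weights back onto the feature maps in this way localizes the evidence the classifier used, which is what the adversarial erasing step then operates on.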