1. College of Computer and Control Engineering, Northeast Forestry University, Harbin 150040, China
2. Feline Research Center of National Forestry and Grassland Administration, College of Wildlife and Protected Area, Northeast Forestry University, Harbin 150040, China
3. Inner Mongolia Forestry Industry Group, Yakeshi 022150, China
4. Inner Mongolia Hanma National Nature Reserve, Genhe 022359, China
As an integral part of the ecosystem, wild Cervidae animals play a crucial role in maintaining ecological balance. Unmanned aerial vehicle (UAV) imaging technology has become increasingly mature in wildlife monitoring. However, owing to natural lighting conditions and the complex, changeable wild environment, it is difficult to obtain high-quality cervid images with single-spectrum imaging. This paper therefore proposes an image fusion algorithm based on the DenseFuse network. Using the multispectral imaging equipment carried by UAVs, the algorithm fuses infrared images with visible-light images while preserving the contour information of the infrared images and the appearance information of the visible-light images, thereby improving the quality of monitoring images. On a wild cervid image dataset, multiple image fusion strategies are tested and the fusion effects for infrared and visible-light images are compared in detail. The experimental results show that the l1-norm fusion strategy achieves the best comprehensive evaluation, with the average information entropy of the fused images reaching 6.965. These results indicate that the proposed UAV multi-source image fusion algorithm can provide reliable technical support for wildlife monitoring.
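The information entropy metric used above to score the fused images can be computed directly from the gray-level histogram. The sketch below (function name and the 8-bit grayscale assumption are illustrative, not from the paper) shows the standard Shannon-entropy calculation:

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy (in bits) of an 8-bit grayscale image.

    Higher entropy indicates the fused image carries more information.
    """
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()          # gray-level probabilities
    p = p[p > 0]                   # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

# A uniform random 8-bit image approaches the maximum of 8 bits.
rng = np.random.default_rng(0)
noise = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(image_entropy(noise))  # close to 8.0
```

A constant image scores 0 bits; the 6.965 reported in the abstract would thus indicate fused images that retain a large share of the 8-bit information capacity.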
At present, visible-light imaging is the most commonly used UAV-based technique in animal monitoring. For example, UAV visible-light images have been used to count greater white-fronted geese (Anser albifrons) [6], sheep [7], and cattle [8]. Combined with artificial intelligence, analysis of UAV visible-light imagery has enabled effective monitoring of African wildlife [9], mugger crocodiles (Crocodylus palustris) [10], and desert animals [11]. Notably, in the above studies the UAVs mostly collected animal data over open terrain, where animals were rarely occluded by the environment. In densely vegetated areas such as forests, however, wild cervids are difficult to monitor completely with UAV visible-light imaging because of tree occlusion. Compared with conventional visible-light imaging, infrared thermal imaging relies on the distinctive physical penetration characteristics of infrared radiation and maintains stable detection capability under poor illumination and canopy occlusion. This thermal-radiation sensing mechanism overcomes the optical limitations of visible-light imaging; in wildlife monitoring in particular, when an animal's coloration forms optical camouflage against the background, or when visible-light monitoring fails at night, infrared imaging can still achieve effective identification by detecting the temperature difference between the target's body surface and the environment (mammalian body surfaces are typically 5-10 °C warmer than their surroundings) [12]. Lyu et al. [13] detected deer in infrared images with a Faster R-CNN network and, to address the low resolution of infrared imagery, integrated small-scale anchor boxes and multi-scale feature maps to improve the accuracy of small-object detection. However, because infrared imaging mainly detects the heat emitted by objects, infrared images lack the texture details of wild cervids. Visible light, by contrast, provides high-resolution texture detail. Therefore, to meet the complementary imaging needs of wild-cervid monitoring scenarios, a multispectral image fusion technique is urgently needed that fuses visible-light and thermal infrared images of wild cervids, ensuring stable identification of the animal under complex forest-canopy conditions while effectively preserving key species-discriminating features such as antler texture and coat markings.
Image fusion merges image information acquired by different sensors into a single image [14-15], improving image quality and enriching image content. A fused infrared and visible-light image can therefore combine the advantages of both sources, preserving clear contour features and richer detail while maintaining resolution. Li et al. [16] proposed a deep-learning-based infrared and visible image fusion algorithm in which the source images are decomposed into base and high-frequency parts that are fused with different strategies. Liu et al. [17] improved infrared image contrast with an adaptive algorithm, although some image details may be compressed or lost. To retain more source-image information in the fused result, Yang et al. [18] proposed an end-to-end dual-fusion-path generative adversarial network in which both source images are fed directly into every layer of the network to extract more source-image features. The DAPR-Net fusion model proposed by Wang et al. [19] obtains clearer target details and more explicit target information and, through its dual-attention feature extraction module (AFEM), enhances detection in low-light scenes, although the visible-light images themselves perform poorly at night. Xie et al. [20] fused infrared and visible images with a binocular heterogeneous imaging system and a dual-scale fusion algorithm, obtaining fused images with richer information and better quality.
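The l1-norm fusion strategy of DenseFuse [24], which this paper adopts, fuses the encoder feature maps of the two source images by weighting them with their channel-wise l1-norm activity levels. The sketch below is a NumPy illustration of that idea (array shapes and names are assumptions for the example, not the paper's implementation): the activity map is the per-pixel l1-norm over channels, smoothed by a 3x3 box average, and the fused features are the activity-weighted sum of the two inputs.

```python
import numpy as np

def l1_fusion(feat_ir: np.ndarray, feat_vis: np.ndarray) -> np.ndarray:
    """Fuse two encoder feature maps of shape (C, H, W) with the
    l1-norm strategy: activity = channel-wise l1-norm, smoothed by a
    3x3 box average; output = activity-weighted sum of the inputs."""
    def activity(feat):
        a = np.abs(feat).sum(axis=0)        # l1-norm over channels -> (H, W)
        pad = np.pad(a, 1, mode="edge")     # 3x3 box-average smoothing
        return sum(pad[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    a_ir, a_vis = activity(feat_ir), activity(feat_vis)
    w_ir = a_ir / (a_ir + a_vis + 1e-12)    # per-pixel normalized weights
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis
```

In DenseFuse the fused feature map is then passed to the decoder, which reconstructs the final fused image.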
[1] DU N, FATHOLLAHI-FARD A M, WONG K Y. Wildlife resource conservation and utilization for achieving sustainable development in China: main barriers and problem identification[J]. Environmental Science and Pollution Research, 2023. DOI: 10.1007/s11356-023-26982-7.
[2] LINNELL J D C, CRETOIS B, NILSEN E B, et al. The challenges and opportunities of coexisting with wild ungulates in the human-dominated landscapes of Europe’s Anthropocene[J]. Biological Conservation, 2020, 244: 108500.
[3] FORSYTH D M, COMTE S, DAVIS N E, et al. Methodology matters when estimating deer abundance: a global systematic review and recommendations for improvements[J]. The Journal of Wildlife Management, 2022, 86(4): e22207.
[4] DE KOCK M E, POHŮNEK V, HEJCMANOVÁ P. Semi-automated detection of ungulates using UAV imagery and reflective spectrometry[J]. Journal of Environmental Management, 2022, 320: 115807.
[5] LI X H, HUANG H L, SAVKIN A V. Autonomous navigation of an aerial drone to observe a group of wild animals with reduced visual disturbance[J]. IEEE Systems Journal, 2022, 16(2): 3339-3348.
[6] OGAWA K, LIN Y T, TAKEDA H, et al. Automated counting wild birds on UAV image using deep learning[C]//2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 11-16, 2021. Brussels: IEEE, 2021: 5259-5262.
[7] SARWAR F, GRIFFIN A, REHMAN S U, et al. Detecting sheep in UAV images[J]. Computers and Electronics in Agriculture, 2021, 187: 106219.
[8] DE LIMA WEBER F, DE MORAES WEBER V A, DE MORAES P H, et al. Counting cattle in UAV images using convolutional neural network[J]. Remote Sensing Applications: Society and Environment, 2023, 29: 100900.
[9] PETSO T, JAMISOLA R S, MPOELENG D, et al. Individual animal and herd identification using custom YOLO v3 and v4 with images taken from a UAV camera at different altitudes[C]//2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), October 22-24, 2021. Nanjing: IEEE, 2021: 33-39.
[10] DESAI B, PATEL A, PATEL V, et al. Identification of free-ranging mugger crocodiles by applying deep learning methods on UAV imagery[J]. Ecological Informatics, 2022, 72: 101874.
[11] CHEN C R, EDIRISINGHE E A, LEONCE A, et al. Deep neural networks based multiclass animal detection and classification in drone imagery[C]//2023 International Symposium on Networks, Computers and Communications (ISNCC), October 23-26, 2023. Doha: IEEE, 2023: 1-8.
[12] LI W X, CHEN Q, GU G H, et al. Visible-infrared image matching based on parameter-free attention mechanism and target-aware graph attention mechanism[J]. Expert Systems with Applications, 2024, 238: 122038.
[13] LYU H T, QIU F, AN L, et al. Deer survey from drone thermal imagery using enhanced Faster R-CNN based on ResNets and FPN[J]. Ecological Informatics, 2024, 79: 102383.
[14] MA J Y, MA Y, LI C. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178.
[15] JIN X, JIANG Q, YAO S W, et al. A survey of infrared and visual image fusion methods[J]. Infrared Physics & Technology, 2017, 85: 478-501.
[16] LI H, WU X J, KITTLER J. Infrared and visible image fusion using a deep learning framework[C]//2018 24th International Conference on Pattern Recognition (ICPR), August 20-24, 2018. Beijing: IEEE, 2018: 2705-2710.
[17] LIU F, GUAN S, YU K K, et al. Infrared target detection based on the fusion of mask R-CNN and image enhancement network[C]//2022 China Automation Congress (CAC), November 25-27, 2022. Xiamen: IEEE, 2022: 2011-2016.
[18] YANG S, TIAN L F, LIANG J M, et al. Infrared and visible image fusion based on improved dual path generation adversarial network[J]. Journal of Electronics & Information Technology, 2023, 45(8): 3012-3021.
[19] WANG Y T, LIU Z M, WAN Y P, et al. Target detection under low light conditions based on visible and infrared images[J]. Computer Engineering, 2024, 50(8): 270-281.
[20] XIE Y B, CHENG J, ZHOU S, et al. Research on the fast fusion algorithm of true colour of infrared and visible images under night vision environment[J]. Laser & Infrared, 2024, 54(1): 136-147.
[24] LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.
[25] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[26] ZHAO H, GALLO O, FROSIO I, et al. Loss functions for image restoration with neural networks[J]. IEEE Transactions on Computational Imaging, 2017, 3(1): 47-57.
[27] CHEN X Q, ZHANG Q Y, LIN M H, et al. No-reference color image quality assessment: from entropy to perceptual quality[J]. EURASIP Journal on Image and Video Processing, 2019, 2019(1): 77.