To address the problems that existing video-based traffic parameter extraction methods rely heavily on manual labeling and that a single viewpoint cannot effectively correct the dynamic driving deviation of vehicles in the scene, a multi-view video traffic parameter extraction method based on the crossable lane boundary line is proposed. The method consists of an automatic label-point generation module and a multi-view deviation-correction module. The label-point generation module splits the lane boundary line into equal-length segments to build reference blocks, from which label points are generated automatically. The multi-view deviation-correction module introduces several mappings between vehicle positions and the lane boundary line, together with a speed-correction method based on the probability density function of average speed, to correct the two types of deviation produced by dynamically moving vehicles. Experimental results on a public dataset and a field-measured dataset show that the speed-extraction accuracy of the proposed method is better than that of other speed-measurement methods, and that the method generalizes well.
This paper proposes a multi-view video vehicle-speed extraction method built on the crossable lane boundary line (Multi-View Crossing Lane Boundary, MV-CLB); the overall framework is shown in Fig. 1. MV-CLB consists of an automatic label-point generation module and a multi-view deviation-correction speed-measurement module. Taking the input video frames, the label-point generation module uses the pixel positions of the lane boundary line as its basis: by automatically splitting the crossable lane boundary into equal-length segments, it establishes a metric reference and generates new label points that convert the two-dimensional image into three-dimensional space. The multi-view deviation-correction module corrects the positional deviation between vehicles and the lane boundary line in multi-view scenes through three mappings: foot-of-perpendicular mapping (LL-TR), parallel mapping (LL-PA), and mixed mapping (LL-MX). Combined with a probability-density-function correction for vehicles that do not land exactly on a label point, the module finally achieves effective extraction of video traffic-speed parameters in multi-view scenes.
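To make the pipeline concrete, the following is a minimal Python sketch of the two modules under stated assumptions: the dash endpoints of the crossable lane boundary have already been detected in pixel coordinates, each dash has a known real-world length (e.g., from the road-marking standard), and per-frame vehicle positions come from an external detector and tracker. All names (generate_label_points, foot_of_perpendicular, estimate_speed_kmh, pdf_corrected_speed) are illustrative rather than taken from the paper's implementation; only the LL-TR (foot-of-perpendicular) mapping is sketched, LL-PA and LL-MX are omitted, and a simple Gaussian weighting stands in for the paper's average-speed probability density function.

import numpy as np

def generate_label_points(dash_endpoints_px, dash_length_m):
    """Label-point auto-generation: the crossable lane boundary is split
    into equal-length dashes, so each detected dash endpoint becomes a
    label point with a known real-world chainage along the boundary."""
    points = np.asarray(dash_endpoints_px, dtype=float)   # (N, 2) pixels
    chainage_m = dash_length_m * np.arange(len(points))   # cumulative metres
    return points, chainage_m

def foot_of_perpendicular(p, a, b):
    """LL-TR-style mapping: drop a perpendicular from vehicle point p onto
    the boundary segment a-b, cancelling the lateral offset between the
    vehicle trajectory and the lane boundary line."""
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return a + np.clip(t, 0.0, 1.0) * ab

def estimate_speed_kmh(track_px, points, chainage_m, fps):
    """Project every tracked vehicle position onto the boundary,
    interpolate its chainage between adjacent label points, and
    difference the first and last chainage over the elapsed time."""
    chainages = []
    for p in np.asarray(track_px, dtype=float):
        i = int(np.argmin(np.linalg.norm(points[:-1] - p, axis=1)))
        foot = foot_of_perpendicular(p, points[i], points[i + 1])
        frac = (np.linalg.norm(foot - points[i])
                / np.linalg.norm(points[i + 1] - points[i]))
        chainages.append(chainage_m[i]
                         + frac * (chainage_m[i + 1] - chainage_m[i]))
    dt_s = (len(chainages) - 1) / fps
    return (chainages[-1] - chainages[0]) / dt_s * 3.6  # m/s -> km/h

def pdf_corrected_speed(interval_speeds_kmh):
    """PDF-based correction (sketch): weight per-interval speed samples by
    a Gaussian density centred on their mean, so intervals where the
    vehicle does not land exactly on a label point are down-weighted."""
    v = np.asarray(interval_speeds_kmh, dtype=float)
    mu, sigma = v.mean(), v.std() + 1e-9
    w = np.exp(-0.5 * ((v - mu) / sigma) ** 2)
    return float(np.sum(w * v) / np.sum(w))

In this sketch, using the boundary's own equal-length dashes as the metric reference is what removes the manual-calibration step: the image-to-world scale is recovered per segment from the road markings themselves rather than from hand-measured calibration points.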