ISSN 0253-2778

CN 34-1054/N

Open Access · JUSTC Original Paper

A multi-target tracking algorithm based on feature point trajectories

Cite this: https://doi.org/10.3969/j.issn.0253-2778.2020.06.002
  • Received Date: 25 October 2019
  • Accepted Date: 13 May 2020
  • Revised Date: 13 May 2020
  • Publish Date: 30 June 2020
  • In a continuous video stream, the multi-target tracking task is to determine the positions of the targets of interest in each frame. However, tracking algorithms suffer from many challenging issues, such as appearance variation, lighting changes, occlusion, and cluttered backgrounds; among these, occlusion has the most negative impact on tracking performance. Therefore, a tracking algorithm based on feature point trajectories is proposed to solve the tracking problem where multiple targets may occlude each other. The main idea of the proposed algorithm is to introduce a delay during tracking: when processing the current frame, the next N frames are acquired in advance; feature points are extracted from these frames and connected to form feature trajectories, and the positions of the targets N frames later are estimated from the obtained trajectories. With the future positions of the targets predicted, their motion can be analyzed so as to precisely determine their locations in the current frame. Experiments show that this algorithm can effectively deal with occlusion. Moreover, its complexity is lower than that of many traditional algorithms, which guarantees real-time tracking on low-end processors in practical applications.
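To make the lookahead idea concrete, the following is a minimal sketch in Python with OpenCV, not the authors' implementation: it assumes a buffer already holding the current frame plus the next N grayscale frames, tracks Shi-Tomasi corners through the buffer with pyramidal Lucas-Kanade optical flow to form feature trajectories, and shifts the target box by the median trajectory displacement to predict its position N frames ahead. The function name, parameter values, and the median-shift rule are illustrative assumptions, and the motion-analysis step that refines the current-frame location is omitted.

    # Hypothetical sketch of the lookahead idea described in the abstract:
    # buffer the next N frames, link feature points into trajectories with
    # pyramidal Lucas-Kanade optical flow, and shift a target's box by the
    # median trajectory displacement. Not the authors' implementation.
    import cv2
    import numpy as np

    def predict_box_after_n_frames(frames, box, n):
        """frames: list of >= n+1 grayscale images (current frame first);
        box: (x, y, w, h) of the target in frames[0]."""
        x, y, w, h = box
        prev = frames[0]
        # Detect Shi-Tomasi corners inside the target box only.
        mask = np.zeros_like(prev)
        mask[y:y + h, x:x + w] = 255
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                                      minDistance=5, mask=mask)
        if pts is None:
            return box
        starts = pts.reshape(-1, 2)
        cur = starts.copy()
        alive = np.ones(len(cur), dtype=bool)
        # Extend each trajectory frame by frame through the lookahead buffer.
        for k in range(1, n + 1):
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(
                prev, frames[k], cur.reshape(-1, 1, 2).astype(np.float32), None)
            alive &= (status.reshape(-1) == 1)
            cur[alive] = nxt.reshape(-1, 2)[alive]
            prev = frames[k]
        if not alive.any():
            return box
        # Median displacement of surviving trajectories -> predicted box shift.
        disp = np.median(cur[alive] - starts[alive], axis=0)
        return (int(x + disp[0]), int(y + disp[1]), w, h)

With a 25 fps stream and N = 10, for example, this lookahead buffer corresponds to a tracking delay of 0.4 s, which is the kind of delay the abstract refers to as the price of the added robustness to occlusion.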

