Wifi MAC Address Tagging Assisted Fast Surveillance Video Retrieval System

Conventional public safety surveillance camera systems require 24/7 monitoring by security officers using a video wall display installed in the control room. When a crime or incident is reported, all recorded surveillance video streams near the incident area are played back simultaneously...

Bibliographic Details
Main Author: Tan, Kien Long
Format: Thesis
Language: English
Published: 2020
Subjects:
Online Access:http://eprints.utem.edu.my/id/eprint/25453/1/Wifi%20Mac%20Address%20Tagging%20Assisted%20Fast%20Surveillance%20Video%20Retrieval%20System.pdf
http://eprints.utem.edu.my/id/eprint/25453/2/Wifi%20Mac%20Address%20Tagging%20Assisted%20Fast%20Surveillance%20Video%20Retrieval%20System.pdf
id my-utem-ep.25453
record_format uketd_dc
institution Universiti Teknikal Malaysia Melaka
collection UTeM Repository
language English
advisor Lim, Kim Chuan

topic T Technology (General)
spellingShingle T Technology (General)
Tan, Kien Long
Wifi MAC Address Tagging Assisted Fast Surveillance Video Retrieval System
description Conventional public safety surveillance camera systems require 24/7 monitoring by security officers using a video wall display installed in the control room. When a crime or incident is reported, all recorded surveillance video streams near the incident area are played back simultaneously on the video wall to help locate the target person. Security officers can fast-forward the playback to speed up the search, but this requires massive manpower when hundreds of video streams or multiple target persons must be examined on the video wall. Even today, with a Graphics Processing Unit (GPU) able to run a person search deep neural network model that automatically searches for the target person in a large video database, the search can take hours or even days to complete. This research aims to determine how to prioritize the surveillance camera video frames to be processed by the person search deep neural network model, so as to reduce the time taken to find the target person in the next camera (the cameras that may have recorded the target person according to the walkway topology). Thanks to advances in artificial intelligence, a person search deep neural network model trained to correctly match a person among thousands of identities can be used to automate the person search process. The person search matching process requires the person in the image to be detected before matching can be carried out. Eight deep neural network based object detection models were re-trained on 55,272 labelled persons to determine a suitable object detection model to replace the person detection part of the person search model. As a result, Model 3 (Darkflow) was found to provide a reasonable speed/accuracy trade-off for person detection (0.62 mAP and 0.04 s mean inference time). To further reduce the time required for automated person search without scaling up the computing hardware, additional metadata (the WiFi MAC address of a smartphone) collected during the occurrence of the incident can be used to prioritize the retrieval of surveillance video frames for subsequent person search. Three ways of retrieving surveillance video are compared, in terms of the time taken to find the target person, on a testbed constructed in UTeM. The developed WiFi-sniffer-enabled surveillance camera, with 3-stage WiFi frame inspection and filtering based on the collected WiFi signal strength, is able to tag the collected WiFi MAC addresses to the surveillance video frames according to the time at which each MAC address is sniffed. Using the formulated mathematical model, the proposed WiFi MAC address tagging assisted fast surveillance video retrieval method performs 9.6 times better in single-person search and 6.2 times better in multiple-person search, provided the WiFi MAC address of the target's smartphone is sniffed by the WiFi sniffer of the surveillance camera. Based on these results, the proposed fast video retrieval system with MAC address tagging is shown to take less time to find the target person in the next camera compared to a video retrieval system without MAC address tagging. Further research is needed to determine how to prioritize WiFi MAC address searching when multiple WiFi MAC addresses are sniffed.
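The core retrieval idea in the abstract above, tagging each surveillance video frame with the WiFi MAC addresses sniffed around its capture time and then sending frames carrying the target's MAC address to the person search model first, can be illustrated with a minimal sketch. The Python below is an illustrative assumption, not the thesis implementation: the names (FrameRecord, tag_frames, prioritise) and the tagging window TAG_WINDOW_S are hypothetical, and the 3-stage frame inspection and RSSI filtering described in the thesis are not reproduced here.

```python
# Minimal sketch (assumed, not the thesis implementation) of WiFi MAC address
# tagging assisted frame prioritisation for person search.
from dataclasses import dataclass, field
from typing import List, Set, Tuple

TAG_WINDOW_S = 2.0  # hypothetical window for associating a sniff with a frame


@dataclass
class FrameRecord:
    camera_id: str
    timestamp: float                                   # frame capture time (seconds)
    mac_tags: Set[str] = field(default_factory=set)    # MAC addresses sniffed near this time


def tag_frames(frames: List[FrameRecord], sniffs: List[Tuple[float, str]]) -> None:
    """Attach each sniffed (timestamp, mac) pair to frames captured within TAG_WINDOW_S."""
    for sniff_time, mac in sniffs:
        for frame in frames:
            if abs(frame.timestamp - sniff_time) <= TAG_WINDOW_S:
                frame.mac_tags.add(mac)


def prioritise(frames: List[FrameRecord], target_mac: str) -> List[FrameRecord]:
    """Frames tagged with the target MAC are queued for the person search model first."""
    tagged = [f for f in frames if target_mac in f.mac_tags]
    untagged = [f for f in frames if target_mac not in f.mac_tags]
    # Within each group, keep chronological order.
    return sorted(tagged, key=lambda f: f.timestamp) + sorted(untagged, key=lambda f: f.timestamp)


if __name__ == "__main__":
    frames = [FrameRecord("cam-2", float(t)) for t in range(0, 60, 5)]
    sniffs = [(25.3, "aa:bb:cc:dd:ee:ff"), (30.1, "aa:bb:cc:dd:ee:ff")]
    tag_frames(frames, sniffs)
    queue = prioritise(frames, "aa:bb:cc:dd:ee:ff")
    print([f.timestamp for f in queue[:3]])  # tagged frames (25.0, 30.0) come first
```

Under this kind of prioritisation, the person search model inspects only the MAC-tagged frames first, which is the mechanism behind the 9.6× (single person) and 6.2× (multiple persons) reductions in retrieval time reported by the thesis's mathematical model, provided the target's smartphone MAC address was actually sniffed.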
format Thesis
qualification_name Master of Philosophy (M.Phil.)
qualification_level Master's degree
author Tan, Kien Long
author_facet Tan, Kien Long
author_sort Tan, Kien Long
title Wifi MAC Address Tagging Assisted Fast Surveillance Video Retrieval System
title_short Wifi MAC Address Tagging Assisted Fast Surveillance Video Retrieval System
title_full Wifi MAC Address Tagging Assisted Fast Surveillance Video Retrieval System
title_fullStr Wifi MAC Address Tagging Assisted Fast Surveillance Video Retrieval System
title_full_unstemmed Wifi MAC Address Tagging Assisted Fast Surveillance Video Retrieval System
title_sort wifi mac address tagging assisted fast surveillance video retrieval system
granting_institution Universiti Teknikal Malaysia Melaka
granting_department Faculty of Electronics and Computer Engineering
publishDate 2020
url http://eprints.utem.edu.my/id/eprint/25453/1/Wifi%20Mac%20Address%20Tagging%20Assisted%20Fast%20Surveillance%20Video%20Retrieval%20System.pdf
http://eprints.utem.edu.my/id/eprint/25453/2/Wifi%20Mac%20Address%20Tagging%20Assisted%20Fast%20Surveillance%20Video%20Retrieval%20System.pdf
_version_ 1747834132076953600
spelling my-utem-ep.25453 2021-12-12T22:38:38Z Wifi MAC Address Tagging Assisted Fast Surveillance Video Retrieval System 2020 Tan, Kien Long T Technology (General) TK Electrical engineering. Electronics Nuclear engineering
2020 Thesis http://eprints.utem.edu.my/id/eprint/25453/ http://eprints.utem.edu.my/id/eprint/25453/1/Wifi%20Mac%20Address%20Tagging%20Assisted%20Fast%20Surveillance%20Video%20Retrieval%20System.pdf text en public http://eprints.utem.edu.my/id/eprint/25453/2/Wifi%20Mac%20Address%20Tagging%20Assisted%20Fast%20Surveillance%20Video%20Retrieval%20System.pdf text en validuser https://plh.utem.edu.my/cgi-bin/koha/opac-detail.pl?biblionumber=119760 mphil masters Universiti Teknikal Malaysia Melaka Faculty of Electronics and Computer Engineering Lim, Kim Chuan