Cooperative Perception Technology of Autonomous Driving in the Internet of Vehicles Environment: A Review
Abstract
1. Introduction
- Insufficient environmental perception information. The perception module of an autonomous vehicle relies primarily on numerous onboard sensors, such as LiDAR, cameras, and millimeter-wave radar [6]. However, factors such as sensor characteristics, obstacle occlusion, illumination, and bad weather limit the vehicle’s perception range, leaving blind spots in the field of view and making it difficult to provide complete perception information for autonomous driving; as a result, an autonomous vehicle may fail to detect imminent danger in time. For example, in 2016, the perception system of a Tesla Model S mistakenly identified the white side of a tractor-trailer turning left in front of it as the sky, resulting in a fatal accident [7]. In 2019, the Autopilot system of a Tesla Model 3 traveling at high speed failed to identify a vehicle crossing perpendicular to its path and to make a braking decision, resulting in a serious traffic accident [8].
- Difficulty processing large volumes of multisource, heterogeneous sensor data in real time on in-vehicle computing systems. The various sensor data types have no unified format, which complicates fusion processing. The data volumes are also large: for the object recognition network of an RGB camera, an image resolution of 320 × 320 pixels and a data generation frequency of 50 Hz already yield about 14 MB/s [9], as the sketch below illustrates. The usual remedy, equipping autonomous vehicles with high-performance computers to process such massive amounts of data, greatly increases vehicle cost, and the diversity of multisource, heterogeneous data formats further increases the difficulty of data processing.
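A quick back-of-the-envelope check of this figure (our own sketch in Python; the 3-bytes-per-pixel uncompressed RGB assumption is ours and is not stated in [9]):

```python
# Rough data rate of a raw RGB camera stream, as cited above.
# Assumption (ours): uncompressed 8-bit RGB, i.e., 3 bytes per pixel.
WIDTH, HEIGHT = 320, 320   # image resolution in pixels
BYTES_PER_PIXEL = 3        # one byte each for R, G, B
FRAME_RATE_HZ = 50         # frames generated per second

bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * FRAME_RATE_HZ
print(f"{bytes_per_second / 2**20:.1f} MiB/s")  # ~14.6 MiB/s, i.e., the ~14 MB/s cited
```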
2. Cooperative Perception Information Fusion
2.1. Image Fusion
2.2. Point Cloud Fusion
2.3. Image–Point Fusion
2.4. Summary
3. Cooperative Perception Information-Sharing
3.1. Cooperative Perception Information-Sharing Network
3.1.1. DSRC
3.1.2. C-V2X
3.1.3. Hybrid Architecture
3.2. Cooperative Perception Information-Sharing Strategy
- Its absolute position has changed by more than 4 m since the last time its information was included in a CPM.
- Its absolute speed has changed by more than 0.5 m/s since the last time its information was included in a CPM.
- The last time the detected object was included in a CPM was one or more seconds ago.
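These triggering conditions translate directly into a per-object check. The sketch below is our own Python illustration, not ETSI reference code; only the thresholds come from the rules listed above.

```python
import math
from dataclasses import dataclass

# Thresholds taken from the ETSI object-inclusion rules listed above.
POSITION_DELTA_M = 4.0   # absolute position change (m)
SPEED_DELTA_MPS = 0.5    # absolute speed change (m/s)
MAX_SILENCE_S = 1.0      # max time since last inclusion in a CPM (s)

@dataclass
class TrackedObject:
    x: float              # current position (m)
    y: float
    speed: float          # current absolute speed (m/s)
    last_x: float         # state recorded when last included in a CPM
    last_y: float
    last_speed: float
    last_cpm_time: float  # timestamp of last inclusion (s)

def include_in_cpm(obj: TrackedObject, now: float) -> bool:
    """Return True if any of the three triggering conditions is met."""
    moved = math.hypot(obj.x - obj.last_x, obj.y - obj.last_y) > POSITION_DELTA_M
    speed_changed = abs(obj.speed - obj.last_speed) > SPEED_DELTA_MPS
    stale = (now - obj.last_cpm_time) >= MAX_SILENCE_S
    return moved or speed_changed or stale
```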
3.3. The Effect of Network Performance on Cooperative Perception
3.3.1. Latency
3.3.2. Packet Loss Rate
3.3.3. Congestion Control
3.4. Summary
4. Discussion
4.1. Cooperative Perception Information Fusion
4.2. Efficient and Reliable Information-Sharing Strategy
4.3. Vehicle Mobility
4.4. Security
5. Conclusions and Outlook
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
IoV | Internet of Vehicles |
V2V | Vehicle-to-vehicle |
V2I | Vehicle-to-infrastructure |
MEC | Multi-access edge computing |
ROI | Region of interest |
CPM | Cooperative perception message |
C-V2X | Cellular vehicle-to-everything |
DSRC | Dedicated short-range communication |
WAVE | Wireless access in vehicular environments |
3GPP | Third Generation Partnership Project |
ETSI | European Telecommunications Standards Institute |
LoS | Line-of-sight |
NLoS | Non-line-of-sight |
PDR | Packet delivery ratio |
DCC | Decentralized congestion control |
PLR | Packet loss rate |
References
- Marti, E.; De Miguel, M.A.; Garcia, F.; Perez, J. A review of sensor technologies for perception in automated driving. IEEE Intell. Transp. Syst. Mag. 2019, 11, 94–108.
- Shuttleworth, J. SAE Standard News: J3016 Automated-Driving Graphic Update. 2019. Available online: https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic (accessed on 18 November 2020).
- Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140.
- Duan, X.; Jiang, H.; Tian, D.; Zou, T.; Zhou, J.; Cao, Y. V2I based environment perception for autonomous vehicles at intersections. China Commun. 2021, 18, 1–12.
- Lv, P.; Xu, J.; Li, T.; Xu, W. Survey on edge computing technology for autonomous driving. J. Commun. 2021, 42, 190–208.
- Wang, Z.; Wu, Y.; Niu, Q. Multi-sensor fusion in automated driving: A survey. IEEE Access 2020, 8, 2847–2868.
- Tesla Driver Killed in Crash with Autopilot Active, NHTSA Investigating. Available online: https://www.theverge.com/2016/6/30/12072408/tesla-autopilot-car-crash-death-autonomous-model-s (accessed on 30 June 2016).
- Tesla’s Latest Autopilot Death Looks Just Like a Prior Crash. Available online: https://www.wired.com/story/teslas-latest-autopilot-death-looks-like-prior-crash/ (accessed on 16 May 2019).
- Zhang, Y.; Zhang, S.; Zhang, Y.; Ji, J.; Duan, Y.; Huang, Y.; Peng, J.; Zhang, Y. Multi-modality fusion perception and computing in autonomous driving. J. Comput. Res. Dev. 2020, 57, 1781–1799.
- Ding, Z.; Xiang, J. Overview of intelligent vehicle infrastructure cooperative simulation technology for IoVs and automatic driving. World Electr. Veh. J. 2021, 12, 222.
- Mo, Y.; Zhang, P.; Chen, Z.; Ran, B. A method of vehicle-infrastructure cooperative perception based vehicle state information fusion using improved Kalman filter. Multimed. Tools Appl. 2021, 81, 4603–4620.
- Lian, Y.; Qian, L.; Ding, L.; Yang, F.; Guan, Y. Semantic fusion infrastructure for unmanned vehicle system based on cooperative 5G MEC. In Proceedings of the 2020 IEEE/CIC International Conference on Communications in China (ICCC), Chongqing, China, 9–11 August 2020; pp. 202–207.
- Lv, P.; He, Y.; Han, J.; Xu, J. Objects perceptibility prediction model based on machine learning for V2I communication load reduction. In Proceedings of the 16th International Conference on Wireless Algorithms, Systems, and Applications (WASA), Nanjing, China, 25–27 June 2021; pp. 521–528.
- Emara, M.; Filippou, M.C.; Sabella, D. MEC-assisted end-to-end latency evaluations for C-V2X communications. In Proceedings of the 2018 European Conference on Networks and Communications (EuCNC), Ljubljana, Slovenia, 18–21 June 2018.
- Ma, H.; Li, S.; Zhang, E.; Lv, Z.; Hu, J.; Wei, X. Cooperative autonomous driving oriented MEC-aided 5G-V2X: Prototype system design, field tests and AI-based optimization tools. IEEE Access 2020, 8, 54288–54302.
- Rosique, F.; Navarro, P.J.; Fernández, C.; Padilla, A. A systematic review of perception system and simulators for autonomous vehicles research. Sensors 2019, 19, 648.
- Li, Y.; Niu, J.; Ouyang, Z. Fusion strategy of multi-sensor based object detection for self-driving vehicles. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC), Beirut, Lebanon, 29 June–2 July 2020; pp. 1549–1554.
- Jisen, W. A study on target recognition algorithm based on 3D point cloud and feature fusion. In Proceedings of the 2021 IEEE 4th International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), Shenyang, China, 19–21 November 2021; pp. 630–633.
- Theis, L.; Shi, W.; Cunningham, A.; Huszár, F. Lossy image compression with compressive autoencoders. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017; pp. 1–19.
- Xu, D.; Lu, G.; Yang, R.; Timofte, R. Learned image and video compression with deep neural networks. In Proceedings of the 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP), Macau, China, 1–4 December 2020; pp. 1–3.
- Löhdefink, J.; Bär, A.; Schmidt, N.M.; Hüger, F.; Schlicht, P.; Fingscheidt, T. Focussing learned image compression to semantic classes for V2X applications. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1641–1648.
- Rippel, O.; Bourdev, L. Real-time adaptive image compression. In Proceedings of the International Conference on Machine Learning (ICML), Sydney, Australia, 6–11 August 2017; pp. 2922–2930.
- Lv, P.; Li, K.; Xu, J.; Li, T.; Chen, N. Cooperative sensing information transmission load optimization for automated vehicles. Chin. J. Comput. 2021, 44, 1984–1997.
- Xiao, Z.; Mo, Z.; Jiang, K.; Yang, D. Multimedia fusion at semantic level in vehicle cooperative perception. In Proceedings of the 2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), San Diego, CA, USA, 23–27 July 2018; pp. 1–6.
- Sridhar, S.; Eskandarian, A. Cooperative perception in autonomous ground vehicles using a mobile-robot testbed. IET Intell. Transp. Syst. 2019, 13, 1545–1556.
- Kim, S.W.; Qin, B.; Chong, Z.J.; Shen, X.; Liu, W.; Ang, M.H.; Frazzoli, E.; Rus, D. Multi-vehicle cooperative driving using cooperative perception: Design and experimental validation. IEEE Trans. Intell. Transp. Syst. 2014, 16, 663–680.
- Liu, W.; Ma, Y.; Gao, M.; Duan, S.; Wei, L. Cooperative visual augmentation algorithm of intelligent vehicle based on inter-vehicle image fusion. Appl. Sci. 2021, 11, 11917.
- Shi, J.; Wang, W.; Wang, X.; Sun, H.; Lan, X.; Xin, J.; Zheng, N. Leveraging spatio-temporal evidence and independent vision channel to improve multi-sensor fusion for vehicle environmental perception. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Suzhou, China, 26–30 June 2018; pp. 591–596.
- Chen, Q.; Tang, S.; Yang, Q.; Fu, S. Cooper: Cooperative perception for connected autonomous vehicles based on 3D point clouds. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–10 July 2019; pp. 514–524.
- Ye, E.; Spiegel, P.; Althoff, M. Cooperative raw sensor data fusion for ground truth generation in autonomous driving. In Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece, 20–23 September 2020; pp. 1–7.
- Chen, Q.; Ma, X.; Tang, S.; Guo, J.; Yang, Q.; Fu, S. F-cooper: Feature based cooperative perception for autonomous vehicle edge computing system using 3D point clouds. In Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, New York, NY, USA, 7–9 November 2019; pp. 88–100.
- Shangguan, W.; Du, Y.; Chai, L. Interactive perception-based multiple object tracking via CVIS and AV. IEEE Access 2019, 7, 121907–121921.
- Asvadi, A.; Girão, P.; Peixoto, P.; Nunes, U. 3D object tracking using RGB and LIDAR data. In Proceedings of the IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016; pp. 1255–1260.
- Arnold, E.; Dianati, M.; de Temple, R.; Fallah, S. Cooperative perception for 3D object detection in driving scenarios using infrastructure sensors. IEEE Trans. Intell. Transp. Syst. 2020, 23, 1852–1864.
- Elfring, J.; Appeldoorn, R.; Van den Dries, S.; Kwakkernaat, M. Effective world modeling: Multisensor data fusion methodology for automated driving. Sensors 2016, 16, 1668.
- Cui, Y.; Chen, R.; Chu, W.; Chen, L.; Tian, D.; Li, Y.; Cao, D. Deep learning for image and point cloud fusion in autonomous driving: A review. IEEE Trans. Intell. Transp. Syst. 2021, 23, 722–739.
- Yu, Y.; Liu, X.; Xu, C.; Cheng, X. Multi-sensor data fusion algorithm based on the improved weighting factor. J. Phys. 2021, 1754, 37–51.
- Jiang, Q.; Zhang, L.; Meng, D. Target detection algorithm based on MMW radar and camera fusion. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 1–6.
- Ji, Z.; Prokhorov, D. Radar-vision fusion for object classification. In Proceedings of the 2008 11th International Conference on Information Fusion, Cologne, Germany, 30 June–3 July 2008; pp. 1–7.
- Kocić, J.; Jovičić, N.; Drndarević, V. Sensors and sensor fusion in autonomous vehicles. In Proceedings of the 2018 26th Telecommunications Forum (TELFOR), Belgrade, Serbia, 20–21 November 2018; pp. 420–425.
- Han, S.; Wang, X.; Xu, L.; Sun, H.; Zheng, N. Frontal object perception for intelligent vehicles based on radar and camera fusion. In Proceedings of the 2016 35th Chinese Control Conference (CCC), Chengdu, China, 27–29 July 2016; pp. 4003–4008.
- Zhang, X.; Zhou, M.; Qiu, P.; Huang, Y.; Li, J. Radar and vision fusion for the real-time obstacle detection and identification. Ind. Robot. Int. J. Robot. Res. Appl. 2019, 2007, 233.
- Zeng, S.; Zhang, W.; Litkouhi, B.B. Fusion of obstacle detection using radar and camera. Pat. US 2016, 9, B2.
- Jha, H.; Lodhi, V.; Chakravarty, D. Object detection and identification using vision and radar data fusion system for ground-based navigation. In Proceedings of the 2019 6th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 7–8 March 2019; pp. 590–593.
- Lekic, V.; Babic, Z. Automotive radar and camera fusion using generative adversarial networks. Comput. Vis. Image Underst. 2019, 184, 1–8.
- Wang, X.; Xu, L.; Sun, H.; Xin, J.; Zheng, N. On-road vehicle detection and tracking using MMW radar and monovision fusion. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2075–2084.
- Fu, Y.; Tian, D.; Duan, X.; Zhou, J.; Lang, P.; Lin, C.; You, X. A camera-radar fusion method based on edge computing. In Proceedings of the 2020 IEEE International Conference on Edge Computing (EDGE), Beijing, China, 5–10 September 2020; pp. 9–14.
- Fan, J.; Huo, T.; Li, X. A review of one-stage detection algorithms in autonomous driving. In Proceedings of the 2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI), Hangzhou, China, 18–20 December 2020; pp. 210–214.
- Şahin, M.Ş.; Acarman, T. An object segmentation approach for mobile lidar point clouds. In Proceedings of the 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, 25–27 March 2021; pp. 554–560.
- Wang, L.; Zhang, Z.; Di, X.; Tian, J. A roadside camera-radar sensing fusion system for intelligent transportation. In Proceedings of the 2020 17th European Radar Conference (EuRAD), Utrecht, The Netherlands, 16–18 September 2020; pp. 282–285.
- Saito, M.; Shen, S.; Ito, T. Interpolation method for sparse point cloud at long distance using sensor fusion with LiDAR and camera. In Proceedings of the 2021 IEEE CPMT Symposium Japan (ICSJ), Kyoto, Japan, 10–12 November 2021; pp. 116–117.
- Lee, G.H.; Kwon, K.H.; Kim, M.Y. Ambient environment recognition algorithm fusing vision and LiDAR sensors for robust multi-channel V2X system. In Proceedings of the 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), Split, Croatia, 2–5 July 2019; pp. 98–101.
- Lee, G.H.; Choi, J.D.; Lee, J.H.; Kim, M.Y. Object detection using vision and LiDAR sensor fusion for multi-channel V2X system. In Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21 February 2020; pp. 1–5.
- Gu, S.; Yang, J.; Kong, H. A cascaded LiDAR-camera fusion network for road detection. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 13308–13314.
- Allig, C.; Wanielik, G. Alignment of perception information for cooperative perception. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1849–1854.
- Zhang, J.W.; Liu, T.J.; Li, R.G.; Liu, D.; Zhan, J.L.; Kan, H.W. A temporal calibration method for multi-sensor fusion of autonomous vehicles. Automot. Eng. 2022, 44, 215–224.
- Seeliger, F.; Dietmayer, K. Inter-vehicle information-fusion with shared perception information. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), Qingdao, China, 8–11 October 2014; pp. 2087–2093.
- Rauch, A.; Klanner, F.; Rasshofer, R.; Dietmayer, K. Car2x-based perception in a high-level fusion architecture for cooperative perception systems. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; pp. 270–275.
- Shanzhi, C.; Yan, S.; Jinling, H. Cellular vehicle to everything (C-V2X): A review. Sci. Found. China 2020, 34, 179–185.
- Zhou, H.; Xu, W.; Chen, J.; Wang, W. Evolutionary V2X technologies toward the Internet of vehicles: Challenges and opportunities. Proc. IEEE 2020, 108, 308–323.
- Lu, N.; Cheng, N.; Zhang, N.; Shen, X.; Mark, J.W. Connected vehicles: Solutions and challenges. IEEE Internet Things J. 2014, 1, 289–299.
- Kenney, J.B. Dedicated short-range communications (DSRC) standards in the United States. Proc. IEEE 2011, 99, 1162–1182.
- Maglogiannis, V.; Naudts, D.; Hadiwardoyo, S.; Akker, D.V.; Barja, J.M.; Moerman, I. Experimental V2X evaluation for C-V2X and ITS-G5 technologies in a real-life highway environment. IEEE Trans. Netw. Serv. Manag. 2022, 19, 1521–1538.
- Wei, S.G.; Yu, D.; Guo, C.L.; Shu, W.W. Survey of connected automated vehicle perception mode: From autonomy to interaction. IET Intell. Transp. Syst. 2019, 13, 495–505.
- Abdelkader, G.; Elgazzar, K.; Khamis, A. Connected vehicles: Technology review, state of the art, challenges and opportunities. Sensors 2021, 21, 7712.
- Naik, G.; Choudhury, B.; Park, J.M. IEEE 802.11bd & 5G NR V2X: Evolution of radio access technologies for V2X communications. IEEE Access 2019, 7, 70169–70184.
- Abdel Hakeem, S.A.; Hady, A.A.; Kim, H.W. 5G-V2X: Standardization, architecture, use cases, network-slicing, and edge-computing. Wirel. Netw. 2020, 26, 6015–6041.
- Bazzi, A.; Berthet, A.O.; Campolo, C.; Masini, B.M.; Molinaro, A.; Zanella, A. On the design of sidelink for cellular V2X: A literature review and outlook for future. IEEE Access 2021, 9, 97953–97980.
- 5GAA. V2X Technology Benchmark Testing. Available online: https://www.fcc.gov/ecfs/filing/109271050222769 (accessed on 28 September 2018).
- Choi, J.; Va, V.; Gonzalez-Prelcic, N.; Daniels, R.; Bhat, C.R.; Heath, R.W. Millimeter-wave vehicular communication to support massive automotive sensing. IEEE Commun. Mag. 2016, 54, 160–167.
- Garcia-Roger, D.; González, E.E.; Martín-Sacristán, D.; Monserrat, J.F. V2X support in 3GPP specifications: From 4G to 5G and beyond. IEEE Access 2020, 8, 190946–190963.
- Abboud, K.; Omar, H.A.; Zhuang, W. Interworking of DSRC and cellular network technologies for V2X communications: A survey. IEEE Trans. Veh. Technol. 2016, 65, 9457–9470.
- Shuguang, L.; Zhenxing, Y. Architecture and key technologies of the V2X-based vehicle networking. In Proceedings of the 2019 IEEE 2nd International Conference on Electronics and Communication Engineering (ICECE), Harbin, China, 20–22 January 2019; pp. 148–152.
- Mir, Z.H.; Toutouh, J.; Filali, F.; Ko, Y.B. Enabling DSRC and C-V2X integrated hybrid vehicular networks: Architecture and protocol. IEEE Access 2020, 8, 180909–180927.
- Shen, X.; Li, J.; Chen, L.; Chen, J.; He, S. Heterogeneous LTE/DSRC approach to support real-time vehicular communications. In Proceedings of the 2018 10th International Conference on Advanced Infocomm Technology (ICAIT), Stockholm, Sweden, 12–15 August 2018; pp. 122–127.
- Zhu, X.; Yuan, S.; Zhao, P. Research and application on key technologies of 5G and C-V2X intelligent converged network based on MEC. In Proceedings of the 2021 IEEE International Conference on Power Electronics, Computer Applications (ICPECA), Shenyang, China, 22–24 January 2021; pp. 175–179.
- Fukatsu, R.; Sakaguchi, K. Millimeter-wave V2V communications with cooperative perception for automated driving. In Proceedings of the 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), Kuala Lumpur, Malaysia, 22–25 April 2019; pp. 1–5.
- Fukatsu, R.; Sakaguchi, K. Automated driving with cooperative perception using millimeter-wave V2I communications for safe and efficient passing through intersections. In Proceedings of the 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), Helsinki, Finland, 25–28 April 2021; pp. 1–5.
- Miucic, R.; Sheikh, A.; Medenica, Z.; Kunde, R. V2X applications using collaborative perception. In Proceedings of the 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), Chicago, IL, USA, 27–30 August 2018; pp. 1–6.
- Li, T.; Han, X.; Ma, J. Cooperative perception for estimating and predicting microscopic traffic states to manage connected and automated traffic. IEEE Trans. Intell. Transp. Syst. 2021, 1–14.
- Wang, Y.; De Veciana, G.; Shimizu, T.; Lu, H. Performance and scaling of collaborative sensing and networking for automated driving applications. In Proceedings of the 2018 IEEE International Conference on Communications Workshops (ICC Workshops), Kansas City, MO, USA, 20–24 May 2018; pp. 1–6.
- Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Generation of cooperative perception messages for connected and automated vehicles. IEEE Trans. Veh. Technol. 2020, 69, 16336–16341.
- ETSI. Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Analysis of the Collective Perception Service (CPS). ETSI TR 103 562, 2019.
- Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Analysis of message generation rules for collective perception in connected and automated driving. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 134–139.
- Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Redundancy mitigation in cooperative perception for connected and automated vehicles. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020; pp. 1–5.
- Rauch, A.; Klanner, F.; Dietmayer, K. Analysis of V2X communication parameters for the development of a fusion architecture for cooperative perception systems. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 685–690.
- Coll-Perales, B.; Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Context-based broadcast acknowledgement for enhanced reliability of cooperative V2X messages. In Proceedings of the 2020 Forum on Integrated and Sustainable Transportation Systems (FISTS), Delft, The Netherlands, 3–5 November 2020; pp. 393–398.
- Basagni, S.; Bölöni, L.; Gjanci, P.; Petrioli, C.; Phillips, C.A.; Turgut, D. Maximizing the value of sensed information in underwater wireless sensor networks via an autonomous underwater vehicle. In Proceedings of the 2014 IEEE Conference on Computer Communications (INFOCOM), Toronto, ON, Canada, 27 April–2 May 2014; pp. 988–996.
- Zou, P.; Ozel, O.; Subramaniam, S. On age and value of information in status update systems. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Korea, 25–28 May 2020; pp. 1–6.
- Higuchi, T.; Giordani, M.; Zanella, A.; Zorzi, M.; Altintas, O. Value-anticipating V2V communications for cooperative perception. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1947–1952.
- Aoki, S.; Higuchi, T.; Altintas, O. Cooperative perception with deep reinforcement learning for connected vehicles. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 328–334.
- Rahal, A.J.; de Veciana, G.; Shimizu, T.; Lu, H. Optimizing timely coverage in communication constrained collaborative sensing systems. In Proceedings of the 2020 18th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOPT), Volos, Greece, 15–19 June 2020; pp. 1–8.
- Talak, R.; Karaman, S.; Modiano, E. Optimizing information freshness in wireless networks under general interference constraints. IEEE/ACM Trans. Netw. 2019, 28, 15–28.
- Malik, R.Q.; Ramli, K.N.; Kareem, Z.H.; Habelalmatee, M.I.; Abbas, H. A review on vehicle-to-infrastructure communication system: Requirement and applications. In Proceedings of the 2020 3rd International Conference on Engineering Technology and its Applications (IICETA), Najaf, Iraq, 6–7 September 2020; pp. 159–163.
- Noh, S.; An, K.; Han, W. Toward highly automated driving by vehicle-to-infrastructure communications. In Proceedings of the 15th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea, 13–16 October 2015; pp. 2016–2021.
- Balador, A.; Cinque, E.; Pratesi, M.; Valentini, F.; Bai, C.; Gómez, A.A.; Mohammadi, M. Survey on decentralized congestion control methods for vehicular communication. Veh. Commun. 2021, 33, 100394.
- Xu, Q.; Pan, J.A.; Li, K.Q.; Wang, J.Q.; Wu, X.B. Design of connected vehicle controller under cloud control scenes with unreliable communication. Automot. Eng. 2021, 43, 527–536.
- Chang, X.Y.; Xu, Q.; Li, K.Q.; Bian, Y.; Han, H.; Zhang, J. Analysis of intelligent and connected vehicle control under communication delay and packet loss. China J. Highw. Transp. 2019, 32, 216–225.
- Liu, D.B.; Zhang, X.R.; Wang, R.M.; Li, X.C.; Xu, Z.G. DSRC-based vehicle network communication performance in closed field test. Chin. J. Automot. Eng. 2020, 56, 180–187.
- Bae, J.K.; Park, M.C.; Yang, E.J.; Seo, D.W. Implementation and performance evaluation for DSRC-based vehicular communication system. IEEE Access 2021, 9, 6878–6887.
- Xu, K.; Wang, M.; Ge, Y.; Yu, R.; Wang, J.; Zhang, J. C-V2X large-scale test network transmission performance data analysis method. In Proceedings of the 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), Shenyang, China, 20–22 October 2021; pp. 1334–1339.
- Lee, T.K.; Chen, J.J.; Tseng, Y.C.; Lin, C.K. Effect of packet loss and delay on V2X data fusion. In Proceedings of the 2020 21st Asia-Pacific Network Operations and Management Symposium (APNOMS), Daegu, Korea, 22–25 September 2020; pp. 302–305.
- Lee, T.K.; Kuo, Y.C.; Huang, S.H.; Wang, G.S.; Lin, C.Y.; Tseng, Y.C. Augmenting car surrounding information by inter-vehicle data fusion. In Proceedings of the 2019 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Korea, 25–28 May 2019; pp. 1–6.
- Xiong, G.; Yang, T.; Li, M.; Zhang, Y.; Song, W.; Gong, J. A novel V2X-based pedestrian collision avoidance system and the effects analysis of communication delay and packet loss on its application. In Proceedings of the 2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Madrid, Spain, 12–14 September 2018; pp. 1–6.
- ETSI. Decentralized Congestion Control Mechanisms for Intelligent Transport Systems Operating in the 5 GHz Range; Access Layer Part. ETSI TS 102 687, 2018.
- Günther, H.J.; Riebl, R.; Wolf, L.; Facchi, C. The effect of decentralized congestion control on collective perception in dense traffic scenarios. Comput. Commun. 2018, 122, 76–83.
- Delooz, Q.; Festag, A. Network load adaptation for collective perception in V2X communications. In Proceedings of the 2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE), Graz, Austria, 7–9 March 2019; pp. 1–6.
- Huang, H.; Fang, W.; Li, H. Performance modelling of V2V based collective perceptions in connected and autonomous vehicles. In Proceedings of the 2019 IEEE 44th Conference on Local Computer Networks (LCN), Osnabrueck, Germany, 14–17 October 2019; pp. 356–363.
- Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Cooperative perception for connected and automated vehicles: Evaluation and impact of congestion control. IEEE Access 2020, 8, 197665–197683.
- Günther, H.J.; Riebl, R.; Wolf, L.; Facchi, C. Collective perception and decentralized congestion control in vehicular ad-hoc networks. In Proceedings of the 2016 IEEE Vehicular Networking Conference (VNC), Columbus, OH, USA, 8–10 December 2016; pp. 1–8.
- Furukawa, K.; Takai, M.; Ishihara, S. Controlling sensor data dissemination method for collective perception in VANET. In Proceedings of the 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), Kyoto, Japan, 11–15 March 2019; pp. 753–758.
- Furukawa, K.; Takai, M.; Ishihara, S. Controlling sensing information dissemination for collective perception in VANET. In Proceedings of the 16th ITS Asia-Pacific Forum, Fukuoka, Japan, 8–10 May 2018.
- Sepulcre, M.; Mira, J.; Thandavarayan, G.; Gozalvez, J. Is packet dropping a suitable congestion control mechanism for vehicular networks? In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020; pp. 1–5.
- Zhu, C.; Tao, J.; Pastor, G.; Xiao, Y.; Ji, Y.; Zhou, Q.; Li, Y.; Ylä-Jääski, A. Folo: Latency and quality optimized task allocation in vehicular fog computing. IEEE Internet Things J. 2018, 6, 4150–4161.
- Zhou, S.; Netalkar, P.P.; Chang, Y.; Xu, Y.; Chao, J. The MEC-based architecture design for low-latency and fast hand-off vehicular networking. In Proceedings of the 2018 IEEE 88th Vehicular Technology Conference (VTC-Fall), Chicago, IL, USA, 27–30 August 2018; pp. 1–7.
- Lin, C.C.; Deng, D.J.; Yao, C.C. Resource allocation in vehicular cloud computing systems with heterogeneous vehicles and roadside units. IEEE Internet Things J. 2017, 5, 3692–3700.
- Zheng, K.; Meng, H.; Chatzimisios, P.; Lei, L.; Shen, X. An SMDP-based resource allocation in vehicular cloud computing systems. IEEE Trans. Ind. Electron. 2015, 62, 7920–7928.
- Zhang, J.Y.; Li, F.; Li, R.X.; Li, Y.L.; Song, J.Q.; Zhang, Q.Y. Research on identity authentication in V2X communications based on elliptic curve encryption algorithm. Automot. Eng. 2020, 42, 27–32.
- Song, L.; Han, Q.; Liu, J. Investigate key management and authentication models in VANETs. In Proceedings of the 2011 International Conference on Electronics, Communications and Control (ICECC), Las Vegas, NV, USA, 27–29 April 2011; pp. 1516–1519.
- Han, X.; Tian, D.; Sheng, Z.; Duan, X.; Zhou, J.; Long, K.; Chen, M.; Leung, C.M. Reliability-aware joint optimization for cooperative vehicular communication and computing. IEEE Trans. Intell. Transp. Syst. 2020, 22, 5437–5446.
- Xu, C.; Liu, H.; Li, P.; Wang, P. A remote attestation security model based on privacy-preserving block-chain for V2X. IEEE Access 2018, 6, 67809–67818.
Factor | Camera | LiDAR | Radar | Fusion Only | Fusion and V2X |
---|---|---|---|---|---|
Range | ~ | ~ | √ | ~ | √ |
Resolution | √ | ~ | × | √ | √ |
Distance Accuracy | ~ | √ | √ | ~ | √ |
Velocity | ~ | × | √ | √ | √ |
Color Perception (e.g., traffic lights) | √ | × | × | √ | √ |
Object Detection | ~ | √ | √ | ~ | √ |
Object Classification | √ | ~ | × | √ | √ |
Lane Detection | √ | × | × | √ | √ |
Obstacle Edge Detection | √ | √ | × | √ | √ |
Illumination Conditions | × | √ | √ | ~ | √ |
Weather Conditions | × | ~ | √ | ~ | √ |
(√ = good performance; ~ = moderate performance; × = poor performance.)
Authors, Year | Key Research Points | Findings | Remarks |
---|---|---|---|
Lian et al. 2020 [15] | Mapped perception data from multiple roadside cameras to form a global semantic description. | Detection speed was improved by about 45%, and detection accuracy by about 10%. | Distributed, interactive fusion deployment of sensors enables a wider range of cooperative perception without increasing computing time. |
Löhdefink et al. 2020 [21] | Used a lossy learned image compression method to relieve pressure on wireless communication channels. | Image compression requirements were high, and late fusion of cascaded segmentation masks gave the best results. | Transmitting processed data can effectively reduce the load on the wireless channel. |
Lv et al. 2021 [23] | Based on the principle of separating static background from dynamic foreground, the dynamic foreground was extracted, and background and foreground were re-fused by a generative adversarial network. | The processing time of perception information was reduced to 27.7% of the original. | |
Xiao et al. 2018 [24] | A bird’s-eye view generated by integrating the perception information of other vehicles expanded the perception range; sharing processed image information reduced the network burden. | Solved the problem of obstacle occlusion and reduced the transmitted data volume. | The perception range is limited by the communication distance. |
Sridhar et al. 2019 [25] | Utilized image feature point matching for data fusion to form vehicle cooperative perception with a common field of view. | Fused perception information from other vehicles and converted it into the ego vehicle’s coordinate system. | Cooperative perception can effectively expand the perception range of vehicles. |
Liu et al. 2021 [27] | Used feature point matching to estimate geometric transformation parameters to eliminate perception blind spots in congestion. | The intersection-over-union value was increased by 2–3 times. | Effectively addressed obstacle occlusion but ignored the problem of viewing angle. |
Authors, Year | Key Research Points | Findings | Remarks |
---|---|---|---|
Chen et al. 2019 [29] | Shared raw point cloud data for the first time and analyzed the impact of communication cost and robustness to positioning errors on cooperative perception. | Sparse point clouds negatively affect perception. | Data-level fusion. |
Ye et al. 2020 [30] | Fused raw sensor data from multiple vehicles to overcome occlusion and sensor resolution degradation with distance. | Fusing sensor data from multiple viewpoints improved perception accuracy and range. | Data-level fusion. |
Chen et al. 2019 [31] | A feature-level fusion scheme was proposed, and the tradeoffs between processing time, bandwidth usage, and detection performance were analyzed. | The detection accuracy within 20 m was improved by about 10%. | Feature-level fusion. |
Shangguan et al. 2019 [32] | Integrated the point cloud data of multiple objects, continuously perceiving the position of surrounding vehicles under limited LiDAR perception and V2V communication failure. | Cooperative perception object detection was more stable than LiDAR-only and V2V-only methods. | Feature-level fusion. |
Arnold et al. 2020 [34] | Proposed early and late fusion schemes for single-modal point cloud data to more accurately estimate the bounding box of the detection target. | The recall rate of cooperative perception target detection was as high as 95%. | Data-level fusion detected better than decision-level fusion but imposed a much higher communication load. |
Authors, Year | Key Research Points | Findings | Remarks |
---|---|---|---|
Jiang et al. 2019 [35] | Used millimeter-wave radar to filter targets and map them to the image to obtain regions of interest, and weighted the detected and estimated values of the two sensors to improve perception accuracy. | Effectively detected small targets in foggy weather. | Strong anti-interference ability; however, the detection frequency was low and could not meet real-time requirements. |
Fu et al. 2020 [43] | A fusion perception method combining roadside cameras and millimeter-wave radar was proposed, with a Kalman filter used to evaluate the quality of the perception results. | Good detection results in both the horizontal and vertical directions. | No actual deployment. |
Wang et al. 2020 [46] | Filtered background objects detected by radar in real road scenes to achieve automatic calibration of multiple sensors. | Fast, automatic acquisition of roadside fused perception information. | Future work: combining depth information to display detection results as 3D boxes. |
Saito et al. 2021 [51] | Projected the point cloud data to the pixel coordinate system of the next point cloud frame and performed 3D reconstruction, improving the accuracy of target detection. | Improved target shape recovery rate and discernible distance. | Future work: real-time models and panoramic cameras to expand the fusion range. |
Duan et al. 2021 [4] | An image–point cloud cooperative perception system was proposed, which sends the detected objects within the perception range to the vehicle. | Effectively extended the detection range. | A large amount of computation and poor real-time performance. |
Gu et al. 2021 [54] | Used point cloud–image concatenation to form a cascaded fusion network with a point cloud single-modality branch and a point cloud–image multimodality branch. | The multimodal branch adapts to more environmental changes. | Improved road detection accuracy with good real-time performance. |
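Several of the methods above hinge on projecting LiDAR points into a camera’s pixel coordinate system (e.g., the frame-to-frame projection of Saito et al. [51]). The sketch below shows the standard pinhole projection; the function name and calibration matrices are our own placeholders rather than code from any cited system.

```python
import numpy as np

def project_lidar_to_image(points_lidar: np.ndarray,
                           T_cam_lidar: np.ndarray,
                           K: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform (LiDAR frame -> camera frame).
    K:           3x3 camera intrinsic matrix.
    Returns an Mx2 array of (u, v) pixels for the points in front of the
    camera (z > 0); points behind the camera are dropped.
    """
    n = points_lidar.shape[0]
    homogeneous = np.hstack([points_lidar, np.ones((n, 1))])  # N x 4
    cam = (T_cam_lidar @ homogeneous.T).T[:, :3]              # N x 3, camera frame
    cam = cam[cam[:, 2] > 0]                                  # keep points in front
    uv = (K @ cam.T).T                                        # perspective projection
    return uv[:, :2] / uv[:, 2:3]                             # normalize by depth
```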
Methodology | Advantages | Disadvantages | Conclusion |
---|---|---|---|
Image fusion | High resolution, richer perception information, good target classification, mature algorithms, low deployment cost. | Insufficient depth information about targets; strongly affected by illumination and adverse weather. | The image–point cloud fusion scheme performs best. |
Point cloud fusion | High spatial resolution, rich 3D information, wide detection range, good positioning and ranging performance. | Poor target classification, large data volumes, easily affected by adverse weather, expensive. | |
Image–point cloud fusion | Combines the complementary advantages of images and point clouds: high resolution, high perception accuracy, richer information, strong anti-interference. | Large data volumes and complex algorithms. | |
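As a minimal illustration of the feature-level fusion summarized above (in the spirit of F-Cooper [31], but a simplified sketch under our own assumptions, not the authors’ implementation), spatially aligned bird’s-eye-view feature maps from two vehicles can be merged by an element-wise maximum before the detection head, so occluded regions seen only by the cooperating vehicle survive the merge:

```python
import numpy as np

def fuse_feature_maps(f_ego: np.ndarray, f_coop: np.ndarray) -> np.ndarray:
    """Element-wise max fusion of two spatially aligned voxel feature maps.

    Both maps are assumed to be (C, H, W) arrays already transformed into
    the ego vehicle's coordinate frame; overlapping cells keep the stronger
    activation.
    """
    assert f_ego.shape == f_coop.shape
    return np.maximum(f_ego, f_coop)

# Toy usage: 64-channel feature maps over a 200 x 200 bird's-eye-view grid.
rng = np.random.default_rng(0)
fused = fuse_feature_maps(rng.random((64, 200, 200)), rng.random((64, 200, 200)))
print(fused.shape)  # (64, 200, 200)
```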
Factors | Authors, Year | Key Research Points | Findings |
---|---|---|---|
Vehicle Mobility | Zhu et al. 2021 [76] | A network architecture fusing MEC and C-V2X was proposed, which reduces network transmission delay and improves the reliability of the perception system. | Distributed computing deployment can effectively reduce interaction delay. |
| Fukatsu et al. 2019 [77] | Explored the network data-rate requirements of different driving speeds. | The larger the bandwidth, the better the cooperative perception effect. |
| Fukatsu et al. 2021 [78] | Analyzed the data rate required to achieve cooperative perception at different driving speeds. | Derived the transmission data rate required for safe driving at different speeds. |
Traffic Density and Market Penetration | Miucic et al. 2018 [79] | Different sensor and communication equipment deployment schemes can effectively improve the scope of cooperative perception. | Different sensor combinations can compensate for low market penetration. |
| Li et al. 2021 [80] | Analyzed the impact of market penetration on location and velocity estimates and forecasts. | At a market penetration rate of 50%, the estimation accuracy of vehicle position and speed is 80–90%. |
| Wang et al. 2018 [81] | Discussed the communication capacity requirements for cooperative perception under different traffic densities and market penetration rates. | V2I traffic from the CPM exchange peaks at about 50% penetration. |
Sharing Strategy | Authors, Year | Purposes | Findings | Remarks |
---|---|---|---|---|
CPM generation rules | Thandavarayan et al. 2020 [84,85] | Optimized the CPM generation strategy formulated by ETSI to reduce redundant information. | Dynamic CPM generation strategy. | Optimizing the CPM generation strategy can effectively reduce redundant information. |
CPM value and freshness | Coll-Perales et al. 2020 [87] | Designed a context-based acknowledgement mechanism through which the transmitting vehicle can selectively request confirmation of specified or critical broadcast information to reduce communication load. | Realized correct reception of information through message acknowledgement. | Transmitted critical sensory data, reducing communication load. |
| Higuchi et al. 2019 [90] | Decided whether to send a CPM by predicting its importance to the receiver, reducing the communication load. | Leveraged value prediction networks and assessed the validity of information. | Shared perception information based on information importance and freshness. |
| Aoki et al. 2020 [91] | Leveraged deep reinforcement learning to select which data to transfer. | Detection accuracy increased by 12%, and the packet reception rate increased by 27%. | |
| Rahal et al. 2020 [92] | Proposed enhancing the freshness of perception information to improve the timeliness and accuracy of cooperative perception. | Optimized the information update rate. | |
Authors, Year | Key Research Points | Remarks |
---|---|---|
Liu et al. 2020 [99] | Analyzed the factors affecting DSRC communication performance. | Communication distance and occlusion are the main causes of degraded DSRC communication performance; selective deployment of roadside equipment can effectively improve it. |
Bae et al. 2021 [100] | Analyzed the impact of communication distance on packet reception rates in LoS and NLoS test scenarios. | Communication distance strongly affects the packet reception rate: the greater the distance, the lower the reception rate. |
Lee et al. 2020 [102] | Analyzed the impact of PLR and delay on V2X data fusion. | By predicting data changes and using historical data, the accuracy of data fusion can be improved; detection accuracy was nearly 50% higher than without compensation in lossy networks. |
Xiong et al. 2018 [104] | Evaluated the impact of latency and packet loss on the safety of Internet of Vehicles applications. | The higher the PLR, the lower the safety. The smaller the initial speed, the lower the latency limit. |
Thandavarayan et al. 2020 [109] | Investigated the impact of congestion control on cooperative perception using the DCC framework. | Combining congestion control functions at the access and facilities layers can improve the perception achieved with cooperative perception, ensure timely transmission of information, and significantly improve the object perception rate. |
Günther et al. 2016 [110] | Selected the best DCC variant and message format to maximize vehicle awareness. | The amount of data generated by cooperative perception can easily lead to channel congestion, which leaves sensing information outdated and reduces its accuracy. |
Furukawa et al. 2019 [111] | Used vehicle position relationships and road structure to dynamically adjust the sensor data transmission rate, improving the transmission rate of useful information. | Selecting high-probability vehicles to broadcast data and prioritizing data from other vehicles’ blind spots reduces radio traffic and enhances the real-time situational awareness of other vehicles. |
Sepulcre et al. 2020 [113] | Investigated whether packet dropping is a suitable congestion control mechanism for vehicular networks. | Controlling how a vehicle drops packets can reduce the packet flow onto the wireless channel, but dropped packets are never delivered, lowering application performance. |
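The congestion control behavior discussed above can be pictured with a simple reactive-DCC-style sketch: the measured channel busy ratio (CBR) selects a minimum gap between consecutive CPM transmissions. The state table below is illustrative only; the normative parameters are those of ETSI TS 102 687.

```python
# Illustrative reactive DCC: map the measured channel busy ratio (CBR) to a
# minimum interval between consecutive transmissions. The breakpoints and
# intervals below are made up for illustration; the normative values are
# specified in ETSI TS 102 687.
DCC_STATES = [
    (0.30, 0.10),  # CBR < 30% -> relaxed:     up to 10 messages/s
    (0.50, 0.20),  # CBR < 50% -> active:      up to 5 messages/s
    (0.65, 0.50),  # CBR < 65% -> active:      up to 2 messages/s
    (1.01, 1.00),  # otherwise -> restrictive: 1 message/s
]

def min_tx_interval(cbr: float) -> float:
    """Return the minimum allowed interval (s) between transmissions."""
    for cbr_limit, interval in DCC_STATES:
        if cbr < cbr_limit:
            return interval
    return DCC_STATES[-1][1]

def may_transmit(now: float, last_tx_time: float, cbr: float) -> bool:
    """Gatekeeper a facilities-layer service would consult before sending."""
    return (now - last_tx_time) >= min_tx_interval(cbr)
```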
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).