Abstract
This study proposes a multi-objective deep reinforcement learning (DRL) approach to adaptive traffic signal control (ATSC) that optimizes intersection control strategies for safety, efficiency, and decarbonization simultaneously. Traditional ATSC methods typically prioritize traffic efficiency alone and often struggle to adapt to dynamic, real-time traffic conditions. To address these challenges, the study develops a DRL-based ATSC algorithm built on the Dueling Double Deep Q-Network (D3QN) framework and evaluates it on a simulated intersection in Changsha, China. The proposed algorithm outperforms both traditional ATSC and an efficiency-only DRL-based ATSC baseline, reducing traffic conflicts by more than 16% and carbon emissions by 4%. In terms of traffic efficiency, it reduces waiting time by 18% relative to traditional ATSC, while showing a marginal increase (0.64%) relative to the efficiency-only DRL baseline; this small gap reflects the trade-off between efficiency and the safety and decarbonization objectives. The proposed approach performs especially well under high traffic demand, across all three objectives. These findings advance traffic control systems by offering a practical and effective solution for optimizing signal control strategies in real-world traffic conditions.
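To make the multi-objective idea concrete, the sketch below shows one common way such objectives are combined in DRL-based signal control: a scalarized reward that penalizes waiting time (efficiency), traffic conflicts (safety), and CO2 output (decarbonization), paired with a tabular Double Q-learning update (the "double" estimator that D3QN also builds on). All weights, state names, and dynamics here are illustrative assumptions, not the paper's actual implementation or values.

```python
# Illustrative sketch only: scalarized multi-objective reward plus a
# Double Q-learning update. Weights and the toy intersection are assumptions.

WEIGHTS = {"waiting": 1.0, "conflicts": 2.0, "emissions": 0.5}  # assumed weights

def multi_objective_reward(waiting_s: float, conflicts: int, emissions_kg: float) -> float:
    """Negative weighted sum: less delay, fewer conflicts, lower CO2 -> higher reward."""
    return -(WEIGHTS["waiting"] * waiting_s
             + WEIGHTS["conflicts"] * conflicts
             + WEIGHTS["emissions"] * emissions_kg)

def double_q_update(q_a, q_b, state, action, r, next_state, alpha=0.1, gamma=0.95):
    """One Double Q-learning step: q_a picks the greedy next action, q_b evaluates
    it, which reduces the overestimation bias of plain Q-learning."""
    greedy = max(q_a[next_state], key=q_a[next_state].get)
    q_a[state][action] += alpha * (r + gamma * q_b[next_state][greedy] - q_a[state][action])

if __name__ == "__main__":
    states = ["NS_queue", "EW_queue"]          # which approach holds a queue
    actions = ["serve_NS", "serve_EW"]         # which phase to activate
    q_a = {s: {a: 0.0 for a in actions} for s in states}
    q_b = {s: {a: 0.0 for a in actions} for s in states}
    # Serving the queued approach yields a better (less negative) reward.
    r_good = multi_objective_reward(waiting_s=5, conflicts=0, emissions_kg=1.0)
    r_bad = multi_objective_reward(waiting_s=30, conflicts=2, emissions_kg=3.0)
    for _ in range(200):
        double_q_update(q_a, q_b, "NS_queue", "serve_NS", r_good, "EW_queue")
        double_q_update(q_b, q_a, "NS_queue", "serve_NS", r_good, "EW_queue")
        double_q_update(q_a, q_b, "NS_queue", "serve_EW", r_bad, "NS_queue")
        double_q_update(q_b, q_a, "NS_queue", "serve_EW", r_bad, "NS_queue")
    # The controller learns to prefer serving the queued approach.
    assert q_a["NS_queue"]["serve_NS"] > q_a["NS_queue"]["serve_EW"]
```

In the actual paper the tabular update is replaced by a D3QN (a dueling network head on top of the double estimator), but the reward-scalarization structure above is the part that makes the control multi-objective.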