Traffic Signal Control with Cell Transmission Model Using Reinforcement Learning for Total Delay Minimisation

  • Pitipong Chanloha, Chulalongkorn University
  • Jatuporn Chinrungrueng, National Science and Technology Development Agency
  • Wipawee Usaha
  • Chaodit Aswakul, Chulalongkorn University

Abstract

This paper proposes a new framework to control traffic signal lights by applying an automated goal-directed learning and decision-making scheme, namely the reinforcement learning (RL) method, to seek the best possible traffic signal actions upon changes of the network state modelled by the signalised cell transmission model (CTM). This paper employs Q-learning, one of the RL tools, to find the traffic signal solution because of its adaptability in finding a real-time solution as the state changes. The goal is for RL to minimise the total network delay. Surprisingly, using the total network delay as the reward function did not yield results as good as initially expected. Rather, both simulation and mathematical derivation results confirm that using the newly proposed red light delay as the RL reward function gives better performance than using the total network delay. The investigated scenarios include situations where the summation of overall traffic demands exceeds the maximum flow capacity. Reported results show that the proposed framework, using RL and CTM at the macroscopic level, can computationally efficiently find a control solution close to the best periodic signal solution (BPSS) obtained by brute-force search. For the practical case study conducted with the AIMSUN microscopic traffic simulator, the proposed CTM-based RL shows that the average delay can be significantly reduced, by 40% with a bus lane and 38% without a bus lane, in comparison with the currently used traffic signal strategy. Therefore, the CTM-based RL algorithm could be a useful tool to adjust traffic signal timing in practice.
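To illustrate the kind of Q-learning loop the abstract describes, the following is a minimal tabular sketch. The state discretisation (queue-occupancy bins per approach), the two-action set (keep or switch the green phase), and the learning parameters are assumptions for illustration only; they are not the paper's actual CTM state space or reward definition, in which the reward would be the proposed red light delay computed from the CTM cell occupancies.

```python
import random

# Hypothetical tabular Q-learning sketch for two-phase signal control.
# States: tuples of discretised queue levels on the competing approaches.
# Actions: 0 = keep the current green phase, 1 = switch phases.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {}  # maps (state, action) -> estimated action value


def q(state, action):
    """Look up a Q-value, defaulting to 0.0 for unseen pairs."""
    return Q.get((state, action), 0.0)


def choose_action(state):
    """Epsilon-greedy selection over the two signal actions."""
    if random.random() < EPSILON:
        return random.choice([0, 1])
    return max([0, 1], key=lambda a: q(state, a))


def update(state, action, reward, next_state):
    """Standard one-step Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q(next_state, a) for a in [0, 1])
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action)
    )
```

In the paper's setting, the environment transition between `state` and `next_state` would be generated by stepping the signalised CTM for one decision interval, and `reward` would be the (negated) red light delay accumulated over that interval, so that maximising reward minimises delay.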

Published

2015-07-19

How to Cite

CHANLOHA, Pitipong et al. Traffic Signal Control with Cell Transmission Model Using Reinforcement Learning for Total Delay Minimisation. INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, [S.l.], v. 10, n. 5, p. 627-642, July 2015. ISSN 1841-9844. doi: https://doi.org/10.15837/ijccc.2015.5.2025.

Keywords

Traffic Signal Control (TSC), Cell Transmission Model (CTM), Reinforcement Learning (RL).