
Boin et al. Intell Robot 2022;2(2):145-67                   Intelligence & Robotics
               DOI: 10.20517/ir.2022.11


               Research Article                                                              Open Access



               AVDDPG – Federated reinforcement learning applied
               to autonomous platoon control


               Christian Boin, Lei Lei, Simon X. Yang

               School of Engineering, University of Guelph, Guelph, ON N1G 2W1, Canada.
               Correspondence to: Dr. Lei Lei, School of Engineering, University of Guelph, 50 Stone Road East, Guelph, ON N1G 2W1, Canada.
               E-mail: leil@uoguelph.ca
               How to cite this article: Boin C, Lei L, Yang SX. AVDDPG – Federated reinforcement learning applied to autonomous platoon control.
               Intell Robot 2022;2(2):145-66. http://dx.doi.org/10.20517/ir.2022.11
               Received: 27 Mar 2022  First Decision:  Revised:  Accepted: 20 May 2022  Published: 30 May 2022
               Academic Editors: Xin Xu, Wai Lok Woo Copy Editor: Jia-Xin Zhang  Production Editor: Jia-Xin Zhang



               Abstract
               Since 2016, federated learning (FL) has been an evolving topic of discussion in the artificial intelligence (AI) research
               community. Applications of FL led to the development and study of federated reinforcement learning (FRL). Few
               works exist on the topic of FRL applied to autonomous vehicle (AV) platoons. In addition, most FRL works choose a
               single aggregation method (usually weight or gradient aggregation). We explore FRL's effectiveness as a means to
               improve AV platooning by designing and implementing an FRL framework atop a custom AV platoon environment.
               The application of FRL in AV platooning is studied under two scenarios: (1) Inter-platoon FRL (Inter-FRL), where FRL
               is applied to AVs across different platoons; (2) Intra-platoon FRL (Intra-FRL), where FRL is applied to AVs within a
               single platoon. Both Inter-FRL and Intra-FRL are applied to a custom AV platooning environment using both gradient
               and weight aggregation to observe the performance effects FRL can have on AV platoons relative to an AV platooning
               environment trained without FRL. It is concluded that Intra-FRL using weight aggregation (Intra-FRLWA) provides the
               best performance for controlling an AV platoon. In addition, we found that weight aggregation in FRL for AV platooning
               provides performance gains relative to gradient aggregation. Finally, a performance analysis is conducted for
               Intra-FRLWA versus a platooning environment without FRL for platoons of 3, 4 and 5 vehicles. It is concluded
               that Intra-FRLWA largely outperforms the platooning environment trained without FRL.
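
               To make the distinction between the two aggregation schemes concrete, the sketch below is a simplified NumPy
               illustration (not the authors' implementation; parameter names and the learning rate are hypothetical). Weight
               aggregation averages each agent's model parameters directly, FedAvg-style, while gradient aggregation averages
               the agents' local gradients and applies a single update step to the shared parameters.

                   import numpy as np

                   def weight_aggregation(agent_params):
                       # Weight aggregation: element-wise average of each parameter
                       # tensor across all participating agents (vehicles).
                       return {k: np.mean([p[k] for p in agent_params], axis=0)
                               for k in agent_params[0]}

                   def gradient_aggregation(agent_grads, global_params, lr=1e-3):
                       # Gradient aggregation: average the local gradients, then apply
                       # one gradient-descent step to the shared global parameters.
                       avg = {k: np.mean([g[k] for g in agent_grads], axis=0)
                              for k in agent_grads[0]}
                       return {k: global_params[k] - lr * avg[k] for k in global_params}

                   # Hypothetical usage: three follower AVs, each holding a tiny actor layer.
                   rng = np.random.default_rng(0)
                   agents = [{"w": rng.normal(size=(4, 2)), "b": np.zeros(2)} for _ in range(3)]
                   shared = weight_aggregation(agents)            # weight-aggregation sync
                   grads = [{"w": rng.normal(size=(4, 2)), "b": rng.normal(size=2)} for _ in range(3)]
                   shared = gradient_aggregation(grads, shared)   # gradient-aggregation step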

               Keywords: Deep reinforcement learning, autonomous driving, federated reinforcement learning, platooning







                           © The Author(s) 2022. Open Access This article is licensed under a Creative Commons Attribution 4.0
                           International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use,
                sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you
                give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate
                if changes were made.






