Optimizing the Neural Architecture of Reinforcement Learning Agents

N. Mazyavkina, S. Moustafa, I. Trofimov, E. Burnaev

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Reinforcement learning (RL) has enjoyed significant progress over recent years. One of the most important steps forward was the wide adoption of neural networks. However, the architectures of these neural networks are quite simple and are typically constructed manually. In this work, we study recently proposed neural architecture search (NAS) methods for optimizing the architecture of RL agents. We create two search spaces for the neural architectures and test two NAS methods: Efficient Neural Architecture Search (ENAS) and Single-Path One-Shot (SPOS). We then carry out experiments on the Atari benchmark and conclude that modern NAS methods find architectures of RL agents that outperform a manually selected one.
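To illustrate the search procedure the abstract describes, the following is a minimal sketch of SPOS-style single-path sampling over a layer-wise search space. The search space, operation names, and scoring function here are illustrative assumptions for a toy example, not the paper's actual search spaces or evaluation protocol (SPOS additionally trains a shared supernet, which is omitted here).

```python
import random

# Hypothetical search space for an Atari agent's convolutional torso:
# each layer position offers a set of candidate operations. The names
# are illustrative assumptions, not the paper's exact choices.
SEARCH_SPACE = [
    ["conv3x3", "conv5x5", "maxpool3x3"],   # layer 1 candidates
    ["conv3x3", "conv5x5", "sep_conv3x3"],  # layer 2 candidates
    ["conv3x3", "skip", "sep_conv5x5"],     # layer 3 candidates
]


def sample_path(space, rng=random):
    """SPOS-style uniform sampling: pick one candidate op per layer."""
    return [rng.choice(choices) for choices in space]


def search(space, evaluate, n_samples=100, seed=0):
    """Sketch of the search phase: repeatedly sample single paths and
    keep the architecture with the best evaluation score. In the real
    method, `evaluate` would measure agent performance using weights
    inherited from a trained supernet."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(n_samples):
        arch = sample_path(space, rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

As a toy usage, a mock scorer that rewards `conv5x5` layers would steer the search toward paths containing that operation; in practice the score would come from evaluating the RL agent on the benchmark.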

Original language: English
Title of host publication: Intelligent Computing - Proceedings of the 2021 Computing Conference
Editors: Kohei Arai
Publisher: Springer Nature
Number of pages: 16
ISBN (Electronic): 9783030801250
ISBN (Print): 9783030801250
Publication status: Published - 2021
Event: Computing Conference 2021 - Virtual, Online
Duration: 15 Jul 2021 - 16 Jul 2021

Publication series

Name: Intelligent Computing - Proceedings of the 2021 Computing Conference

Conference: Computing Conference 2021
City: Virtual, Online


Keywords

  • Atari
  • AutoML
  • Neural architecture search
  • Reinforcement learning


