Call for Papers

This workshop aims to reinforce the connection between reinforcement learning and control theory by bringing together researchers from both fields. In particular, we invite contributions on all fundamental and theoretical aspects, with a special emphasis on topics that connect the two fields and provide new perspectives. Contributions that bridge theory and applications are also welcome. We believe that significant progress in tackling large-scale applications can only be achieved through collaborative efforts and a mutual understanding of each field's strengths and approaches. Our workshop is dedicated to fostering dialogue and collaboration, paving the way for breakthroughs on complex dynamic programming and interactive decision-making problems.

We invite researchers to submit papers on the topics listed below. All accepted papers will be presented as posters or selected for contributed talks. There will be no proceedings; however, accepted papers will be made available through the OpenReview website. We allow submission of published or already peer-reviewed papers, but we ask authors to indicate this in the submission form for transparency.

Note on parallel submissions to the ICML workshop on ‘Aligning Reinforcement Learning Experimentalists and Theorists’: We ask authors to submit each work to only one of the two workshops, and we reserve the right to coordinate between the workshops and accept a work at only one of them.

Topics

Technical topics include, but are not limited to, the following aspects:

  • Performance measures and guarantees: Stability, robustness, regret bounds, sample complexity, stochastic vs. non-stochastic approaches, MDPs, etc.
  • Fundamental assumptions: Linear and non-linear systems, excitation, stability, etc.
  • Fundamental limits: Results that mathematically characterize the difficulty of a given problem; statistical, information-theoretic, and computational lower bounds.
  • Computational aspects: Efficient algorithms, computational hardness, approximations, etc.
  • Topology: Continuous state and action spaces vs. discrete spaces; discrete- and continuous-time analysis.
  • Models: Bandits, Markov decision processes, linear and nonlinear control, partial observability, POMDPs, partial monitoring, etc.
  • Data Acquisition & Exploration: Exploration-exploitation trade-offs, pure-exploration, experimental design.
  • Offline vs. online: Open-loop and closed-loop control; offline, online, and hybrid reinforcement learning.
  • Planning and learned search: Dynamic programming, tree search and planning algorithms.
  • Target applications: Formalization of applications such as autonomous vehicles, robots, industrial processes, recommender systems, internet routing, hardware optimization, hyperparameter optimization, AutoML, etc.
  • Benchmarks: Evaluation of algorithms and theoretical results on a suitable collection of problems.

Important Dates

Submission Deadline (extended): May 29, 2024 (AoE)
Acceptance Notification: June 17, 2024 (AoE)
Camera Ready: July 27, 2024 (AoE)
Workshop: July 27, 2024 (Saturday)

Submission Instructions

Formatting: We offer two submission tracks, short and long format:

  • Short format: Up to 4 pages plus references and appendix.
  • Long format: Up to 8 pages plus references and appendix.

Submission: via our OpenReview site
Template: You may use either a double-column or a single-column format. If you use the single-column (NeurIPS-style) template, long papers may be up to 9 pages.
Reviews: Submissions will be evaluated by a reviewing committee. There will be a single round of reviews and no author response.