nlin.AO

4 posts

arXiv:2501.00160v1 Announce Type: new Abstract: Multi-Agent Reinforcement Learning involves agents that learn together in a shared environment, leading to emergent dynamics sensitive to initial conditions and parameter variations. A Dynamical Systems approach, which studies the evolution of multi-component systems over time, has uncovered some of the underlying dynamics by constructing deterministic approximation models of stochastic algorithms. In this work, we demonstrate that even in the simplest case of independent Q-learning with a Boltzmann exploration policy, significant discrepancies arise between the actual algorithm and previous approximations. We elaborate on why these models actually approximate interesting variants rather than the original incremental algorithm. To explain the discrepancies, we introduce a new discrete-time approximation model that explicitly accounts for agents' update frequencies within the learning process and show that its dynamics fundamentally differ from the simplified dynamics of prior models. We illustrate the usefulness of our approach by applying it to the question of spontaneous cooperation in social dilemmas, specifically the Prisoner's Dilemma as the simplest case study. We identify conditions under which the learning behaviour appears as long-term stable cooperation from an external perspective. However, our model shows that this behaviour is merely a metastable transient phase and not a true equilibrium, making it exploitable. We further exemplify how specific parameter settings can significantly exacerbate the moving target problem in independent learning. Through a systematic analysis of our model, we show that increasing the discount factor induces oscillations, preventing convergence to a joint policy. These oscillations arise from a supercritical Neimark-Sacker bifurcation, which transforms the unique stable fixed point into an unstable focus surrounded by a stable limit cycle.

David Goll, Jobst Heitzig, Wolfram Barfuss (1/3/2025)
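The building blocks the abstract refers to — an incremental Q-learning update and a Boltzmann (softmax) exploration policy — can be sketched as follows. This is a minimal illustration, not the authors' model; the function names and the hyperparameters `temperature`, `alpha`, and `gamma` are generic assumptions.

```python
import numpy as np

def boltzmann_policy(q_values, temperature):
    """Softmax over Q-values; higher temperature means more exploration."""
    z = q_values / temperature
    z = z - z.max()              # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

def q_update(q, action, reward, next_q, alpha, gamma):
    """One incremental Q-learning step for a single independent agent."""
    target = reward + gamma * next_q.max()
    q[action] += alpha * (target - q[action])
    return q
```

In a repeated Prisoner's Dilemma, each agent would keep one Q-vector over {cooperate, defect} and sample its action from `boltzmann_policy`; the deterministic approximations the paper critiques replace these stochastic, asynchronous updates with their expected dynamics.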

arXiv:2412.18549v1 Announce Type: cross Abstract: The collection of updated data on social contact patterns following the COVID-19 pandemic disruptions is crucial for future epidemiological assessments and evaluating non-pharmaceutical interventions (NPIs) based on physical distancing. We conducted two waves of an online survey in March 2022 and March 2023 in Italy, gathering data from a representative population sample on direct (verbal/physical interactions) and indirect (prolonged co-location in indoor spaces) contacts. Using a generalized linear mixed model, we examined determinants of individuals' total social contacts and evaluated the potential impact of work-from-home and distance learning on the transmissibility of respiratory pathogens. In-person attendance at work or school emerged as a primary driver of social contacts. Adults attending in person reported a mean of 1.69 (95% CI: 1.56-1.84) times the contacts of those staying home; among children and adolescents, this ratio increased to 2.38 (95% CI: 1.98-2.87). We estimated that suspending all non-essential work alone would marginally reduce transmissibility. However, combining distance learning for all education levels with work-from-home policies could decrease transmissibility by up to 23.7% (95% CI: 18.2%-29.0%). Extending these measures to early childcare services would yield only minimal additional benefits. These results provide useful data for modelling the transmission of respiratory pathogens in Italy after the end of the COVID-19 emergency. They also provide insights into the potential epidemiological effectiveness of social distancing interventions targeting work and school attendance, supporting considerations on the balance between the expected benefits and their heavy societal costs.

Lorenzo Lucchini, Valentina Marziano, Filippo Trentini, Chiara Chiavenna, Elena D'Agnese, Vittoria Offeddu, Mattia Manica, Piero Poletti, Duilio Balsamo, Giorgio Guzzetta, Marco Aielli, Alessia Melegaro, Stefano Merler (12/25/2024)
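As a rough back-of-envelope illustration (not the paper's actual method, which fits a generalized linear mixed model to survey data), a contact-rate ratio such as the adults' 1.69 can be converted into a relative change in mean contacts when some fraction of a group stays home, under the strong simplifying assumption that transmissibility scales linearly with mean contact rate:

```python
def relative_mean_contacts(ratio, frac_in_person):
    """Mean contacts relative to a baseline where everyone attends in person.

    ratio:          contacts of in-person attendees relative to those at home
                    (e.g. 1.69 for adults in the survey)
    frac_in_person: fraction of the group still attending in person
    """
    at_home = 1.0  # contacts of at-home individuals, normalized to 1
    mixed = frac_in_person * ratio + (1.0 - frac_in_person) * at_home
    return mixed / ratio
```

For example, `relative_mean_contacts(1.69, 0.0)` is about 0.59, a larger reduction than the paper's model-based estimates because the actual analysis accounts for population structure, essential activities, and which contacts matter for transmission.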

arXiv:2412.16249v1 Announce Type: new Abstract: Behavioral experiments on the ultimatum game (UG) reveal that we humans prefer fair acts, which contradicts the prediction made in orthodox Economics. Existing explanations, however, are mostly attributed to exogenous factors within the imitation learning framework. Here, we adopt the reinforcement learning paradigm, where individuals make their moves aiming to maximize their accumulated rewards. Specifically, we apply Q-learning to the UG, where each player is assigned two Q-tables to guide decisions for the roles of proposer and responder. In a two-player scenario, fairness emerges prominently when both experiences and future rewards are appreciated. In particular, the probability of successful deals increases with higher offers, which aligns with observations in behavioral experiments. Our mechanism analysis reveals that the system undergoes two phases, eventually stabilizing into fair or rational strategies. These results are robust when the rotating role assignment is replaced by a random or fixed manner, or when the scenario is extended to a latticed population. Our findings thus conclude that the endogenous factor is sufficient to explain the emergence of fairness; exogenous factors are not needed.

Guozhong Zheng, Jiqiang Zhang, Xin Ou, Shengfeng Deng, Li Chen (12/24/2024)
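The two-Q-table setup described above can be sketched as below. The discretization into `N_LEVELS` offer levels, the ε-greedy exploration, and all hyperparameter values are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LEVELS, ALPHA, GAMMA, EPS = 10, 0.1, 0.9, 0.1  # illustrative hyperparameters

def new_player():
    """Two Q-tables per player: one for each role."""
    return {"prop": np.zeros(N_LEVELS),        # proposer: Q over offer levels
            "resp": np.zeros((N_LEVELS, 2))}   # responder: Q over (offer seen, reject/accept)

def eps_greedy(q):
    return int(rng.integers(len(q))) if rng.random() < EPS else int(np.argmax(q))

def play_round(proposer, responder):
    """One ultimatum-game round with stateless Q-updates for both roles."""
    offer = eps_greedy(proposer["prop"])        # proposer offers offer/N_LEVELS of the pie
    act = eps_greedy(responder["resp"][offer])  # responder: 0 = reject, 1 = accept
    r_p, r_r = (N_LEVELS - offer, offer) if act == 1 else (0, 0)
    # incremental updates; GAMMA > 0 lets anticipated future rewards shape choices
    proposer["prop"][offer] += ALPHA * (
        r_p + GAMMA * proposer["prop"].max() - proposer["prop"][offer])
    responder["resp"][offer, act] += ALPHA * (
        r_r + GAMMA * responder["resp"][offer].max() - responder["resp"][offer, act])
    return offer, act
```

Rotating role assignment, the paper's baseline protocol, corresponds to alternating `play_round(a, b)` and `play_round(b, a)` across rounds.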

arXiv:2403.17392v3 Announce Type: replace Abstract: Cyborg insects refer to hybrid robots that integrate living insects with miniature electronic controllers to enable robotic-like programmable control. These creatures exhibit advantages over conventional robots in adaptation to complex terrain and sustained energy efficiency. Nevertheless, there is a lack of literature on the control of multi-cyborg systems. This research gap is due to the difficulty of coordinating the movements of a cyborg system in the presence of the insects' inherent individual variability in their reactions to control input. To address this issue, we propose a swarm navigation algorithm and validate it in experiments. This research advances swarm robotics by integrating biological organisms with control theory to develop intelligent autonomous systems for real-world applications.

Yang Bai, Phuoc Thanh Tran Ngoc, Huu Duoc Nguyen, Duc Long Le, Quang Huy Ha, Kazuki Kai, Yu Xiang See To, Yaosheng Deng, Jie Song, Naoki Wakamiya, Hirotaka Sato, Masaki Ogura (12/24/2024)