Latest Posts

arXiv:2503.09635v1 Announce Type: new Abstract: We present FPGS, a feed-forward photorealistic style transfer method for large-scale radiance fields represented by Gaussian Splatting. FPGS stylizes large-scale 3D scenes with arbitrary, multiple style reference images without additional optimization while preserving the multi-view consistency and real-time rendering speed of 3D Gaussians. Prior arts required tedious per-style optimization or a time-consuming per-scene training stage and were limited to small-scale 3D scenes. FPGS efficiently stylizes large-scale 3D scenes by introducing a style-decomposed 3D feature field, which inherits AdaIN's feed-forward stylization machinery and supports arbitrary style reference images. Furthermore, FPGS supports multi-reference stylization with semantic correspondence matching and local AdaIN, which adds diverse user control over 3D scene styles. FPGS also preserves multi-view consistency by applying the semantic matching and style transfer processes directly to queried features in 3D space. In experiments, we demonstrate that FPGS achieves favorable photorealistic-quality scene stylization for large-scale static and dynamic 3D scenes with diverse reference images. Project page: https://kim-geonu.github.io/FPGS/

GeonU Kim, Kim Youwang, Lee Hyoseok, Tae-Hyun Oh · 3/14/2025
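
FPGS inherits AdaIN's feed-forward stylization machinery, which matches channel-wise statistics of content features to those of a style reference. Below is a minimal AdaIN sketch for intuition only; the (N, C) feature shapes and the idea of applying it to features queried in 3D space are illustrative assumptions, not the FPGS implementation.

```python
# Minimal AdaIN sketch: shift/scale content features so their per-channel
# statistics match a style reference. Shapes and feature sources are assumed.
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """content, style: (N, C) feature matrices."""
    c_mean, c_std = content.mean(0, keepdim=True), content.std(0, keepdim=True) + eps
    s_mean, s_std = style.mean(0, keepdim=True), style.std(0, keepdim=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean

# Toy usage: stylize features of 10,000 queried 3D points with one style reference.
content_feats = torch.randn(10_000, 256)
style_feats = torch.randn(4_096, 256)
stylized = adain(content_feats, style_feats)
```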

arXiv:2503.09631v1 Announce Type: new Abstract: We present V2M4, a novel 4D reconstruction method that directly generates a usable 4D mesh animation asset from a single monocular video. Unlike existing approaches that rely on priors from multi-view image and video generation models, our method is based on native 3D mesh generation models. Naively applying 3D mesh generation models to generate a mesh for each frame in a 4D task can lead to issues such as incorrect mesh poses, misalignment of mesh appearance, and inconsistencies in mesh geometry and texture maps. To address these problems, we propose a structured workflow that includes camera search and mesh reposing, condition embedding optimization for mesh appearance refinement, pairwise mesh registration for topology consistency, and global texture map optimization for texture consistency. Our method outputs high-quality 4D animated assets that are compatible with mainstream graphics and game software. Experimental results across a variety of animation types and motion amplitudes demonstrate the generalization and effectiveness of our method. Project page: https://windvchen.github.io/V2M4/.

Jianqi Chen, Biao Zhang, Xiangjun Tang, Peter Wonka · 3/14/2025

arXiv:2503.09632v1 Announce Type: new Abstract: Dexterous telemanipulation critically relies on continuous and stable tracking of the human operator's commands to ensure robust operation. Vision-based tracking methods are widely used but suffer from low stability due to anomalies such as occlusions, inadequate lighting, and loss of sight. Traditional filtering, regression, and interpolation methods are commonly used to compensate for missing explicit information such as angles and positions. These approaches are restricted to low-dimensional data and often result in information loss compared to the original high-dimensional image and video data. Recent advances in diffusion-based approaches, which can operate on high-dimensional data, have achieved remarkable success in video reconstruction and generation. However, these methods have not been fully explored for continuous control tasks in robotics. This work introduces the Diffusion-Enhanced Telemanipulation (DET) framework, which incorporates a Frame-Difference Detection (FDD) technique to identify and segment anomalies in video streams. These anomalous clips are replaced after reconstruction using diffusion models, ensuring robust telemanipulation performance under challenging visual conditions. We validate this approach in various anomaly scenarios and compare it with baseline methods. Experiments show that DET achieves an average RMSE reduction of 17.2% compared to cubic spline interpolation and 51.1% compared to FFT-based interpolation across different occlusion durations.

Haoyang Wang, Haoran Guo, Lingfeng Tao, Zhengxiong Li · 3/14/2025
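
The abstract describes Frame-Difference Detection (FDD) for locating anomalous clips before diffusion-based reconstruction. The sketch below is one plausible frame-difference heuristic, flagging frames whose inter-frame change deviates strongly from the sequence statistics; the actual FDD criterion is not specified in the abstract, so the rule and the parameter k are assumptions.

```python
# Hedged frame-difference anomaly sketch: flag frames whose inter-frame change
# deviates from the clip's statistics (e.g., an occlusion onset or dropout).
import numpy as np

def detect_anomalous_frames(video: np.ndarray, k: float = 3.0) -> np.ndarray:
    """video: (T, H, W) grayscale frames in [0, 1]. Returns a boolean mask of length T."""
    diffs = np.abs(np.diff(video.astype(np.float32), axis=0)).mean(axis=(1, 2))  # (T-1,)
    mu, sigma = diffs.mean(), diffs.std() + 1e-8
    spikes = np.abs(diffs - mu) > k * sigma   # abrupt change relative to the rest of the clip
    mask = np.zeros(video.shape[0], dtype=bool)
    mask[1:] |= spikes                        # mark the frame after each abrupt jump
    return mask

frames = np.random.rand(120, 64, 64).astype(np.float32)   # placeholder video stream
flags = detect_anomalous_frames(frames)
# Flagged clips would then be re-synthesized by a video diffusion model before
# pose extraction for telemanipulation.
```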

arXiv:2503.09634v1 Announce Type: new Abstract: Structural and appearance changes in brain imaging over time are crucial indicators of neurodevelopment and neurodegeneration. The rapid advancement of large-scale generative models provides a promising backbone for modeling these complex global and local changes in brain images, such as transforming the age of a source image to a target age. However, current generative models, typically trained on independently and identically distributed (i.i.d.) data, may struggle to maintain intra-subject spatiotemporal consistency during transformations. We propose the Identity-Preserving Longitudinal Diffusion Model (IP-LDM), designed to accurately transform brain ages while preserving subject identity. Our approach involves first extracting the identity representation from the source image. Then, conditioned on the target age, the latent diffusion model learns to generate the age-transformed target image. To ensure consistency within the same subject over time, we regularize the identity representation using a triplet contrastive formulation. Our experiments on both elderly and infant brain datasets demonstrate that our model outperforms existing conditional generative models, producing realistic age transformations while preserving intra-subject identity.

Gexin Huang, Zhangsihao Yang, Yalin Wang, Guido Gerig, Mengwei Ren, Xiaoxiao Li · 3/14/2025
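
The identity representation is regularized with a triplet contrastive formulation. A minimal sketch of such a regularizer is shown below, treating two scans of the same subject as anchor/positive and another subject as negative; the cosine-distance form and margin value are assumptions, not the paper's exact loss.

```python
# Sketch of a triplet-style identity regularizer: pull embeddings of the same
# subject (at different ages) together, push other subjects away by a margin.
import torch
import torch.nn.functional as F

def identity_triplet_loss(anchor, positive, negative, margin: float = 0.2):
    """anchor/positive: same subject at different ages; negative: another subject. All (B, D)."""
    d_pos = 1.0 - F.cosine_similarity(anchor, positive)   # distance to the same subject
    d_neg = 1.0 - F.cosine_similarity(anchor, negative)   # distance to a different subject
    return F.relu(d_pos - d_neg + margin).mean()

anchor, positive, negative = (F.normalize(torch.randn(8, 128), dim=1) for _ in range(3))
loss = identity_triplet_loss(anchor, positive, negative)
```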

arXiv:2503.09624v1 Announce Type: new Abstract: This paper presents the Adaptive Personalized Control System (APECS) architecture, a novel framework for human-in-the-loop control. An architecture is developed that defines appropriate constraints for the system objectives. A method for enacting Lipschitz and sector bounds on the resulting controller is derived to ensure desirable control properties. An analysis of worst-case loss functions and the optimal loss-function weighting is performed to implement an effective training scheme. Finally, simulations are carried out to demonstrate the effectiveness of the proposed architecture, which yields a 4.5% performance increase over the human operator and a 9% increase over an unconstrained feedforward neural network trained in the same way.

Marius F. R. Juston, Alex Gisi, William R. Norris, Dustin Nottage, Ahmet Soylemezoglu · 3/14/2025
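
The abstract derives Lipschitz and sector bounds for the learned controller without detailing the mechanism. One standard way to impose such bounds on a feedforward network, sketched below under that assumption, is spectral normalization of each linear layer (bounding the Lipschitz constant) together with a scaled tanh output (keeping the control in a sector [-u_max, u_max]); this is not necessarily the APECS construction.

```python
# Illustrative bounded controller: spectral-normalized layers + scaled tanh output.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

u_max = 1.0                                        # actuator limit (illustrative)
net = nn.Sequential(
    spectral_norm(nn.Linear(6, 64)), nn.Tanh(),    # spectral norm caps each layer's gain at 1
    spectral_norm(nn.Linear(64, 64)), nn.Tanh(),   # tanh is 1-Lipschitz, so the composition is too
    spectral_norm(nn.Linear(64, 2)),
)

def controller(x: torch.Tensor) -> torch.Tensor:
    return u_max * torch.tanh(net(x))              # output confined to the sector [-u_max, u_max]

u = controller(torch.randn(1, 6))
```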

arXiv:2503.09626v1 Announce Type: new Abstract: Social bot detection is crucial for mitigating misinformation, online manipulation, and coordinated inauthentic behavior. While existing neural network-based detectors perform well on benchmarks, they struggle with generalization due to distribution shifts across datasets and frequently produce overconfident predictions for out-of-distribution accounts beyond the training data. To address this, we introduce a novel Uncertainty Estimation for Social Bot Detection (UESBD) framework, which quantifies the predictive uncertainty of detectors beyond mere classification. For this task, we propose Robust Multi-modal Neural Processes (RMNP), which aims to enhance the robustness of multi-modal neural processes to modality inconsistencies caused by social bot camouflage. RMNP first learns unimodal representations through modality-specific encoders. Then, unimodal attentive neural processes are employed to encode the Gaussian distribution of the unimodal latent variables. Furthermore, since social bots can steal human features to camouflage themselves, causing certain modalities to provide conflicting information, we introduce an evidential gating network to explicitly model the reliability of each modality. The joint latent distribution is learned through a generalized product of experts, which takes the reliability of each modality into consideration during fusion. The final prediction is obtained through Monte Carlo sampling of the joint latent distribution followed by a decoder. Experiments on three real-world benchmarks show the effectiveness of RMNP in classification and uncertainty estimation, as well as its robustness to modality conflicts.

Qi Wu, Yingguang Yang, hao liu, Hao Peng, Buyun He, Yutong Xia, Yong Liao · 3/14/2025
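
The joint latent is formed through a generalized product of experts weighted by modality reliability. The sketch below fuses diagonal Gaussian unimodal latents with reliability-scaled precisions; in RMNP the reliabilities come from the evidential gating network, whereas here they are supplied directly as placeholder scalars.

```python
# Generalized product of Gaussian experts with per-modality reliability weights.
import numpy as np

def generalized_poe(means, variances, reliabilities):
    """means, variances: lists of (D,) arrays per modality; reliabilities: scalars in [0, 1]."""
    precisions = [w / v for w, v in zip(reliabilities, variances)]   # reliability-scaled precisions
    joint_prec = np.sum(precisions, axis=0) + 1e-12
    joint_mean = np.sum([p * m for p, m in zip(precisions, means)], axis=0) / joint_prec
    return joint_mean, 1.0 / joint_prec

mu, var = generalized_poe(
    means=[np.zeros(16), np.ones(16)],
    variances=[np.full(16, 0.5), np.full(16, 2.0)],
    reliabilities=[0.9, 0.3],   # a camouflaged (unreliable) modality is down-weighted
)
```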

arXiv:2503.09630v1 Announce Type: new Abstract: Diffusion models have transformed image generation, yet controlling their outputs for diverse applications, including content moderation and creative customization, remains challenging. Existing approaches usually require task-specific training and struggle to generalize across both concrete (e.g., objects) and abstract (e.g., styles) concepts. We propose CASteer (Cross-Attention Steering), a training-free framework for controllable image generation that uses steering vectors to dynamically influence a diffusion model's hidden representations. CASteer computes these vectors offline by averaging activations from concept-specific generated images, then applies them during inference via a dynamic heuristic that activates modifications only when necessary, removing concepts from affected images or adding them to unaffected ones. This approach enables precise control over a wide range of tasks, including removing harmful content, adding desired attributes, replacing objects, and altering styles, all without model retraining. CASteer handles both concrete and abstract concepts, outperforming state-of-the-art techniques across multiple diffusion models while preserving unrelated content and minimizing unintended effects.

Tatiana Gaintseva, Chengcheng Ma, Ziquan Liu, Martin Benning, Gregory Slabaugh, Jiankang Deng, Ismail Elezi · 3/14/2025
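
CASteer builds steering vectors offline by averaging activations from concept-specific generations and applies them during inference only when needed. The simplified sketch below captures that idea on a generic hidden representation; the hook point, gating rule, and scale alpha are stand-ins for the paper's cross-attention mechanism and dynamic heuristic.

```python
# Simplified activation steering: a concept direction from averaged activations,
# applied only to samples that already project onto that direction.
import torch

def build_steering_vector(acts_with: torch.Tensor, acts_without: torch.Tensor) -> torch.Tensor:
    """acts_*: (N, D) hidden activations collected offline."""
    return acts_with.mean(0) - acts_without.mean(0)

def steer(hidden: torch.Tensor, v: torch.Tensor, alpha: float, threshold: float = 0.0) -> torch.Tensor:
    v_hat = v / (v.norm() + 1e-8)
    proj = hidden @ v_hat                        # (B,) concept strength per sample
    gate = (proj > threshold).float().unsqueeze(-1)
    return hidden + alpha * gate * v_hat         # alpha < 0 removes the concept, alpha > 0 adds it

v = build_steering_vector(torch.randn(64, 512) + 1.0, torch.randn(64, 512))
steered = steer(torch.randn(4, 512), v, alpha=-2.0)   # suppress the concept
```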

arXiv:2503.04305v2 Announce Type: replace Abstract: Recent developments in computer science and artificial intelligence have also contributed to the legal domain, as revealed by the number and range of related publications and applications. Machine and deep learning models require a considerable amount of domain-specific data for training and comparison purposes in order to attain high performance in the legal domain. Semantic resources such as ontologies are also valuable for building large-scale computational legal systems and for ensuring the interoperability of such systems. Considering these aspects, we present an up-to-date review of the literature on datasets, benchmarks, and ontologies proposed for computational law. We believe that this comprehensive and recent review will help researchers and practitioners when developing and testing approaches and systems for computational law.

Dilek Küçük, Fazli Can · 3/14/2025

arXiv:2503.09617v1 Announce Type: new Abstract: Large Language Models (LLMs) are rapidly saturating existing benchmarks, necessitating new open-ended evaluations. We introduce the Factorio Learning Environment (FLE), based on the game of Factorio, which tests agents in long-term planning, program synthesis, and resource optimization. FLE provides exponentially scaling challenges -- from basic automation to complex factories processing millions of resource units per second. We provide two settings: (1) lab-play, consisting of eight structured tasks with fixed resources, and (2) open-play, with the unbounded task of building the largest factory on a procedurally generated map. We demonstrate across both settings that models still lack strong spatial reasoning. In lab-play, we find that LLMs exhibit promising short-horizon skills, yet are unable to operate effectively in constrained environments, reflecting limitations in error analysis. In open-play, while LLMs discover automation strategies that improve growth (e.g., electric-powered drilling), they fail to achieve complex automation (e.g., electronic-circuit manufacturing).

Jack Hopkins, Mart Bakler, Akbir Khan · 3/14/2025

arXiv:2503.09612v1 Announce Type: new Abstract: Technology can pose significant risks to a wide array of vulnerable populations. However, by addressing the challenges and opportunities in technology design, research, and deployment, we can create systems that benefit everyone, fostering a society where even the most vulnerable are empowered and supported.

Pamela Wisniewski (Vanderbilt University), Katie Siek (Indiana University Bloomington), Kevin Butler (University of Florida), Gabrielle Allen (University of Wyoming), Weisong Shi (University of Delaware), Manish Parashar (University of Utah) · 3/14/2025

arXiv:2503.09622v1 Announce Type: new Abstract: Due to dynamic variations such as changing payloads, aerodynamic disturbances, and varying platforms, robust quadrotor trajectory tracking remains challenging. To address these challenges, we present a deep reinforcement learning (DRL) framework that achieves physical dynamics invariance by directly optimizing force/torque inputs, eliminating the need for traditional intermediate control layers. Our architecture integrates a temporal trajectory encoder, which processes finite-horizon reference positions/velocities, with a latent dynamics encoder trained on historical state-action pairs to model platform-specific characteristics. Additionally, we introduce scale-aware dynamics randomization parameterized by the quadrotor's arm length, enabling our approach to maintain stability across drones spanning from 30 g to 2.1 kg and outperform other DRL baselines by 85% in tracking accuracy. Extensive real-world validation of our approach on the Crazyflie 2.1 quadrotor, encompassing over 200 flights, demonstrates robust adaptation to wind, ground effects, and swinging payloads while achieving less than 0.05 m RMSE at speeds up to 2.0 m/s. This work introduces a universal quadrotor control paradigm that compensates for dynamic discrepancies across varied conditions and scales, paving the way for more resilient aerial systems.

Varad Vaidya, Jishnu Keshavan · 3/14/2025
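
Scale-aware dynamics randomization is parameterized by arm length. The sketch below samples platforms by drawing an arm length and scaling mass, inertia, and thrust limits with simple geometric power laws; the exponents and coefficient values are assumptions chosen only to roughly span the 30 g to 2.1 kg range mentioned in the abstract.

```python
# Hedged scale-aware randomization: arm length drives mass/inertia/thrust scaling.
import numpy as np

def sample_quadrotor(rng: np.random.Generator, l_min: float = 0.04, l_max: float = 0.25) -> dict:
    L = rng.uniform(l_min, l_max)              # arm length [m]
    density_factor = rng.uniform(0.7, 1.3)     # per-sample variation
    mass = 300.0 * density_factor * L**3       # ~30 g near L=0.046 m, ~2 kg near L=0.19 m
    inertia = mass * L**2 * np.array([0.5, 0.5, 1.0])
    max_thrust = rng.uniform(1.8, 3.0) * mass * 9.81   # thrust-to-weight margin
    return {"arm_length": L, "mass": mass, "inertia": inertia, "max_thrust": max_thrust}

rng = np.random.default_rng(0)
platforms = [sample_quadrotor(rng) for _ in range(4)]
```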

arXiv:2503.10401v1 Announce Type: new Abstract: This paper presents a novel method for assistive load carrying using quadruped robots. The controller uses proprioceptive sensor data to estimate the external base wrench, which is used for precise control of the robot's acceleration during payload transport. The acceleration is controlled using a combination of admittance control and a Control Barrier Function (CBF) based quadratic program (QP). The proposed controller rejects disturbances and maintains consistent performance under varying load conditions. Additionally, the built-in CBF guarantees collision avoidance with the collaborative agent in front of the robot. The efficacy of the overall controller is shown through implementation on physical hardware as well as in numerical simulations. The proposed control framework aims to enhance the quadruped robot's ability to perform assistive tasks in various scenarios, from industrial applications to search and rescue operations.

Nimesh Khandelwal, Amritanshu Manu, Shakti S. Gupta, Mangal Kothari, Prashanth Krishnamurthy, Farshad Khorrami · 3/14/2025
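
The acceleration command combines admittance control with a CBF-based QP. The sketch below shows only the admittance half, mapping the estimated external base wrench to a desired acceleration through virtual mass and damping; the gains are placeholders, and the CBF-QP safety filter that would then minimally adjust this acceleration is omitted.

```python
# Admittance sketch: external wrench -> desired base acceleration via virtual dynamics.
import numpy as np

M_D = np.diag([20.0, 20.0, 20.0])   # virtual mass (illustrative)
D_D = np.diag([15.0, 15.0, 15.0])   # virtual damping (illustrative)

def admittance_accel(wrench_ext: np.ndarray, base_vel: np.ndarray) -> np.ndarray:
    """wrench_ext, base_vel: (3,) planar force/velocity estimates. Returns desired acceleration."""
    return np.linalg.solve(M_D, wrench_ext - D_D @ base_vel)

a_des = admittance_accel(np.array([12.0, 0.0, 0.0]), np.zeros(3))
# A CBF-based QP would then filter a_des so a clearance constraint h(x) >= 0
# to the collaborating agent in front of the robot remains satisfied.
```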

arXiv:2502.20524v2 Announce Type: replace Abstract: Systems with a high number of inputs compared to the degrees of freedom (e.g. a mobile robot with Mecanum wheels) often have a minimal set of energy-efficient inputs needed to achieve a main task (e.g. position tracking) and a set of energy-intense inputs needed to achieve an additional auxiliary task (e.g. orientation tracking). This letter presents a unified control scheme, derived through feedback linearization, that can switch between two modes: an energy-saving mode, which tracks the main task using only the energy-efficient inputs while forcing the energy-intense inputs to zero, and a dexterous mode, which also uses the energy-intense inputs to track the auxiliary task as needed. The proposed control guarantees the exponential tracking of the main task and that the dynamics associated with the main task evolve independently of the a priori unknown switching signal. When the control is operating in dexterous mode, the exponential tracking of the auxiliary task is also guaranteed. Numerical simulations on an omnidirectional Mecanum wheel robot validate the effectiveness of the proposed approach and demonstrate the effect of the switching signal on the exponential tracking behavior of the main and auxiliary tasks.

Mirko Mizzoni, Pieter van Goor, Antonio Franchi · 3/14/2025

arXiv:2503.10567v1 Announce Type: new Abstract: Training a model that effectively handles both common and rare data (i.e., achieving performance fairness) is crucial in federated learning (FL). While existing fair FL methods have shown effectiveness, they remain vulnerable to mislabeled data. Ensuring robustness in fair FL is therefore essential. However, fairness and robustness inherently compete, which causes robust strategies to hinder fairness. In this paper, we attribute this competition to the homogeneity in loss patterns exhibited by rare and mislabeled data clients, preventing existing loss-based fair and robust FL methods from effectively distinguishing and handling these two distinct client types. To address this, we propose performance-capacity analysis, which jointly considers model performance on each client and its capacity to handle the dataset, measured by loss and a newly introduced feature dispersion score. This allows mislabeled clients to be identified by their significantly deviated performance relative to capacity while preserving rare data clients. Building on this, we introduce FedPCA, an FL method that robustly achieves fairness. FedPCA first identifies mislabeled clients via a Gaussian Mixture Model on loss-dispersion pairs, then applies fairness and robustness strategies in global aggregation and local training by adjusting client weights and selectively using reliable data. Extensive experiments on three datasets demonstrate FedPCA's effectiveness in tackling this complex challenge. Code will be publicly available upon acceptance.

Nannan Wu, Zengqiang Yan, Nong Sang, Li Yu, Chang Wen Chen · 3/14/2025
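
Mislabeled clients are identified by fitting a Gaussian Mixture Model on per-client (loss, feature-dispersion) pairs. A minimal sketch of that screening step is below; treating the higher-mean-loss component as the suspect group is an assumption about how the two components are interpreted, and the toy data is synthetic.

```python
# GMM screening sketch: separate clients into two groups on (loss, dispersion) pairs.
import numpy as np
from sklearn.mixture import GaussianMixture

def flag_mislabeled(losses: np.ndarray, dispersions: np.ndarray) -> np.ndarray:
    X = np.column_stack([losses, dispersions])           # (num_clients, 2)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    suspect = int(np.argmax(gmm.means_[:, 0]))           # component with the larger mean loss
    return gmm.predict(X) == suspect                     # boolean mask over clients

rng = np.random.default_rng(0)
clean = rng.normal([0.4, 1.0], 0.05, size=(18, 2))       # rare/common but correctly labeled clients
noisy = rng.normal([1.2, 1.1], 0.05, size=(2, 2))        # clients with mislabeled data
clients = np.vstack([clean, noisy])
mask = flag_mislabeled(clients[:, 0], clients[:, 1])
```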

arXiv:2503.09613v1 Announce Type: new Abstract: AI's rapid integration into the workplace demands new approaches to workforce education and training, as well as broader AI literacy across disciplines. Coordinated action from government, industry, and educational institutions is necessary to ensure workers can adapt to accelerating technological change.

Lisa Amini (IBM Research), Henry F. Korth (Lehigh University), Nita Patel (Otis), Evan Peck (University of Colorado Boulder), Ben Zorn (Microsoft) · 3/14/2025

arXiv:2503.09628v1 Announce Type: new Abstract: Autonomous Underwater Vehicles (AUVs) play an essential role in modern ocean exploration, and their speed control systems are fundamental to their efficient operation. Like many other robotic systems, AUVs exhibit multivariable nonlinear dynamics and face various constraints, including state limitations, input constraints, and constraints on input increments, making controller design challenging and requiring significant effort and time. This paper addresses these challenges by employing data-driven Koopman operator theory combined with Model Predictive Control (MPC), which takes the aforementioned constraints into account. The proposed approach not only ensures the performance of the AUV under state and input limitations but also considers the variation in incremental input to prevent rapid and potentially damaging changes to the vehicle's operation. Additionally, we develop a platform based on ROS2 and Gazebo to validate the effectiveness of the proposed algorithms, providing new control strategies for underwater vehicles operating in complex and dynamic underwater environments.

Zhiliang Liu, Xin Zhao, Peng Cai, Bing Cong · 3/14/2025
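
The controller is built on a data-driven Koopman model used inside a constrained MPC. The sketch below shows only the identification step in the style of EDMD with control: lift the AUV state with a feature dictionary and fit a linear predictor by least squares. The polynomial dictionary and the dimensions are assumptions; the fitted (A, B) would then feed a linear MPC with state, input, and input-increment constraints.

```python
# EDMD-with-control sketch: fit z+ ~= A z + B u on lifted state snapshots.
import numpy as np

def lift(x: np.ndarray) -> np.ndarray:
    """Simple polynomial dictionary of observables (illustrative choice)."""
    return np.concatenate([x, x**2, [1.0]])

def edmd_with_control(X: np.ndarray, U: np.ndarray, X_next: np.ndarray):
    """X, X_next: (N, n) state snapshots; U: (N, m) inputs. Returns (A, B) on the lifted space."""
    Z = np.array([lift(x) for x in X])            # (N, p)
    Z_next = np.array([lift(x) for x in X_next])
    G = np.hstack([Z, U])                         # regressors (N, p + m)
    K, *_ = np.linalg.lstsq(G, Z_next, rcond=None)
    A, B = K[: Z.shape[1]].T, K[Z.shape[1]:].T    # so that z+ ~= A z + B u
    return A, B

rng = np.random.default_rng(0)
X, U, X_next = rng.normal(size=(200, 3)), rng.normal(size=(200, 2)), rng.normal(size=(200, 3))
A, B = edmd_with_control(X, U, X_next)
```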

arXiv:2503.09621v1 Announce Type: new Abstract: Decentralized safe control plays an important role in multi-agent systems given its scalability and robustness without reliance on a central authority. However, without an explicit global coordinator, decentralized control methods are often prone to deadlock -- a state where the system reaches equilibrium, causing the robots to stall. In this paper, we propose a generalized decentralized framework that unifies the Control Lyapunov Function (CLF) and Control Barrier Function (CBF) to facilitate efficient task execution and ensure deadlock-free trajectories for multi-agent systems. As the agents approach a deadlock-related undesirable equilibrium, the framework detects the equilibrium and drives the agents away before they converge to it. This is achieved by a secondary deadlock resolution design with an auxiliary CBF that prevents the multi-agent system from converging to the undesirable equilibrium. To prevent the deadlock resolution from dominating the original task-related controllers, a deadlock indicator function using CBF-inspired risk measurement is proposed and encoded in the unified framework so that the agents can adaptively determine when to activate the deadlock resolution. This allows the agents to follow their original control tasks and seamlessly unlock or deactivate deadlock resolution as necessary, effectively improving task efficiency. We demonstrate the effectiveness of the proposed method through theoretical analysis, numerical simulations, and real-world experiments.

Yanze Zhang, Yiwei Lyu, Siwon Jo, Yupeng Yang, Wenhao Luo · 3/14/2025
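
A single-agent flavor of the unified CLF-CBF design can be written as a quadratic program: track a nominal input, soften the CLF (task) constraint with a slack, and keep the CBF (safety) constraint hard. The sketch below uses single-integrator dynamics and omits the paper's deadlock indicator and auxiliary CBF; all gains and the cvxpy formulation are illustrative.

```python
# Single-agent CLF-CBF QP sketch on single-integrator dynamics (x_dot = u).
import cvxpy as cp
import numpy as np

def clf_cbf_qp(x, x_goal, x_obs, u_nom, gamma=1.0, alpha=1.0, r_safe=0.5, rho=100.0):
    u = cp.Variable(2)
    delta = cp.Variable(nonneg=True)                    # slack on the task constraint
    V = float(np.sum((x - x_goal) ** 2))                # CLF: squared distance to the goal
    h = float(np.sum((x - x_obs) ** 2) - r_safe**2)     # CBF: squared clearance margin
    constraints = [
        2 * (x - x_goal) @ u <= -gamma * V + delta,     # CLF decrease (soft)
        2 * (x - x_obs) @ u >= -alpha * h,              # CBF forward invariance (hard)
    ]
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom) + rho * cp.square(delta)), constraints).solve()
    return u.value

u = clf_cbf_qp(np.zeros(2), np.array([2.0, 0.0]), np.array([1.0, 0.4]), np.array([1.0, 0.0]))
```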

arXiv:2502.11809v2 Announce Type: replace Abstract: Deep neural networks (DNNs) often exhibit biases toward certain categories during object recognition, even under balanced training data conditions. The intrinsic mechanisms underlying these biases remain unclear. Inspired by the human visual system, which decouples object manifolds through hierarchical processing to achieve object recognition, we propose a geometric analysis framework linking the geometric complexity of class-specific perceptual manifolds in DNNs to model bias. Our findings reveal that differences in geometric complexity can lead to varying recognition capabilities across categories, introducing biases. To support this analysis, we present the Perceptual-Manifold-Geometry library, designed for calculating the geometric properties of perceptual manifolds.

Yanbiao Ma, Bowei Liu, Boyuan Gao, Wei Dai, Jiayi Chen, Shuo Li · 3/14/2025
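
The abstract links bias to the geometric complexity of class-specific perceptual manifolds without naming a specific measure here. As a generic stand-in (not necessarily one of the measures in the Perceptual-Manifold-Geometry library), the sketch below computes the participation ratio of a class's feature covariance spectrum, a common proxy for the effective dimensionality of a feature manifold.

```python
# Participation ratio of class features: one generic proxy for manifold complexity.
import numpy as np

def participation_ratio(class_features: np.ndarray) -> float:
    """class_features: (N, D) penultimate-layer features for a single class."""
    eigvals = np.clip(np.linalg.eigvalsh(np.cov(class_features, rowvar=False)), 0.0, None)
    return float(eigvals.sum() ** 2 / (np.square(eigvals).sum() + 1e-12))

pr = participation_ratio(np.random.randn(500, 64))   # placeholder features for one class
```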

arXiv:2503.09614v1 Announce Type: new Abstract: To sustain innovation and safeguard national security, the U.S. must strengthen domestic pathways to computing PhDs by engaging talented undergraduates early, before they are committed to industry, with research experiences, mentorship, and financial support for graduate studies.

Susanne Hambrusch (Purdue University), Lori Pollock (University of Delaware), Mary Hall (University of Utah), Nancy M. Amato (University of Illinois Urbana-Champaign) · 3/14/2025

arXiv:2503.09636v1 Announce Type: new Abstract: Neuromorphic vision, inspired by biological neural systems, has recently gained significant attention for its potential in enhancing robotic autonomy. This paper presents a systematic exploration of a proposed Neuromorphic Navigation framework that uses event-based neuromorphic vision to enable efficient, real-time navigation in robotic systems. We discuss the core concepts of neuromorphic vision and navigation, highlighting their impact on improving robotic perception and decision-making. The proposed reconfigurable Neuromorphic Navigation framework adapts to the specific needs of both ground robots (Turtlebot) and aerial robots (Bebop2 quadrotor), addressing the task-specific design requirements (algorithms) for optimal performance across the autonomous navigation stack -- Perception, Planning, and Control. We demonstrate the versatility and the effectiveness of the framework through two case studies: a Turtlebot performing local replanning for real-time navigation and a Bebop2 quadrotor navigating through moving gates. Our work provides a scalable approach to task-specific, real-time robot autonomy leveraging neuromorphic systems, paving the way for energy-efficient autonomous navigation.

Sourav Sanyal, Amogh Joshi, Adarsh Kosta, Kaushik Roy · 3/14/2025