Latest Posts

Google.org has announced which organizations will receive the final $10 million in funding from its $75 million AI Opportunity Fund to provide American nonprofits with A…

{"$":{"xmlns:author":"http://www.w3.org/2005/Atom"},"name":["Maggie Johnson"],"title":["VP and Head of Google.org"],"department":[""],"company":[""]}3/31/2025

Learn how to write effective prompts and troubleshoot results in this installment of our GitHub for Beginners series. The post GitHub for Beginners: How to get LLMs to do what you want appeared first on The GitHub Blog.

Kedasha Kerr · 3/31/2025

arXiv:2503.21798v1 Announce Type: new Abstract: Transforming a dynamic hypothesis into a causal loop diagram (CLD) is crucial for System Dynamics Modelling. Extracting key variables and causal relationships from text to build a CLD is often challenging and time-consuming for novice modelers, limiting SD tool adoption. This paper introduces and tests a method for automating the translation of dynamic hypotheses into CLDs using large language models (LLMs) with curated prompting techniques. We first describe how LLMs work and how they can make the inferences needed to build CLDs using a standard digraph structure. Next, we develop a set of simple dynamic hypotheses and corresponding CLDs from leading SD textbooks. We then compare four different combinations of prompting techniques, evaluating their performance against CLDs labeled by expert modelers. Results show that, for simple model structures and with curated prompting techniques, LLMs can generate CLDs of a quality similar to expert-built ones, accelerating CLD creation.

Ning-Yuan Georgia Liu, David R. Keith · 3/31/2025
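
A minimal sketch of the parsing step such a pipeline needs: the LLM is prompted to emit one causal link per line as "source -> target : polarity", which is then parsed into a digraph. The edge format and the example response are assumptions for illustration, not the paper's actual prompt schema.

```python
# Parse LLM output of the assumed form "A -> B : +" into a signed CLD digraph.
import networkx as nx

def parse_cld(llm_output: str) -> nx.DiGraph:
    """Parse 'A -> B : +' lines into a signed causal loop diagram."""
    g = nx.DiGraph()
    for line in llm_output.strip().splitlines():
        left, _, polarity = line.rpartition(":")
        source, _, target = left.partition("->")
        g.add_edge(source.strip(), target.strip(), polarity=polarity.strip())
    return g

# Hypothetical LLM response for a classic SD example (population loops).
response = """
births -> population : +
population -> births : +
population -> deaths : +
deaths -> population : -
"""
cld = parse_cld(response)
print(cld.edges(data=True))
# Reinforcing/balancing loops can then be found via nx.simple_cycles(cld).
```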

arXiv:2503.21794v1 Announce Type: new Abstract: The paper explores an approach to constructing energy landscapes of a formal neuron and multilayer artificial neural networks (ANNs). Their analysis makes it possible to determine the conceptual limitations of both classification ANNs (e.g., MLP or CNN) and generative ANN models. The study of informational and thermodynamic entropy in formal neuron and ANN models leads to the conclusion that informational entropy is energetic in nature. Applying the concept of Gibbs free energy allows the output information of ANNs to be represented as the structured part of enthalpy. Modeling ANNs as energy systems makes it possible to interpret the structure of their internal energy as an internal model of the external world, which self-organizes based on the interaction of the system's internal energy components. The self-organization and evolution of this model are controlled through an energy function (analogous to a Lyapunov function) based on reduction operators. This makes it possible to introduce a new approach to constructing self-organizing and evolutionary ANNs with direct learning that does not require additional external algorithms. The presented research makes it possible to formulate a formal definition of information in terms of the interaction between the internal and external energy of the system.

Yurii Parzhyn · 3/31/2025
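
For readers unfamiliar with the thermodynamic framing, the identity the abstract leans on is the standard Gibbs relation; the ANN reading below is a paraphrase of the paper's claim, not a derivation:

```latex
% Standard Gibbs free-energy relation invoked by the abstract:
G = H - TS
% G: Gibbs free energy, H: enthalpy, T: temperature, S: entropy.
% Rearranged as H = G + TS, enthalpy splits into a structured,
% work-capable part (G) and a thermally bound part (TS); the paper
% identifies the ANN's output information with the structured part.
```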

arXiv:2503.22401v1 Announce Type: new Abstract: Large Language Models (LLMs) have demonstrated remarkable in-context learning capabilities, enabling flexible utilization of limited historical information to play pivotal roles in reasoning, problem-solving, and complex pattern recognition tasks. Inspired by the successful applications of LLMs in multiple domains, this paper proposes a generative design method that combines the in-context learning capabilities of LLMs with the iterative search mechanisms of metaheuristic algorithms to solve reliability-based design optimization (RBDO) problems. In detail, reliability analysis is performed by engaging the LLMs and Kriging surrogate modeling to overcome the computational burden. By dynamically providing critical information about design points to the LLMs through prompt engineering, the method enables rapid generation of high-quality design alternatives that satisfy reliability constraints while optimizing performance. Using the Deepseek-V3 model, three case studies demonstrate the performance of the proposed approach. Experimental results indicate that the proposed LLM-RBDO method successfully identifies feasible solutions that meet reliability constraints while achieving a convergence rate comparable to that of traditional genetic algorithms.

Zhonglin Jiang, Qian Tang, Zequn Wang · 3/31/2025
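
A minimal sketch of the Kriging-based reliability step described above: fit a Gaussian-process surrogate to an expensive limit-state function g(x) (failure when g(x) < 0) and estimate the failure probability by Monte Carlo on the cheap surrogate. The toy g and all settings are assumptions; the paper additionally loops this through an LLM-driven design search, which is omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def limit_state(x):            # toy stand-in for an expensive simulation
    return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

X_train = rng.uniform(-2, 2, size=(40, 2))       # small design of experiments
y_train = limit_state(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

X_mc = rng.normal(0.0, 1.0, size=(100_000, 2))   # Monte Carlo on the surrogate
pf = float(np.mean(gp.predict(X_mc) < 0.0))      # P[g(x) < 0]
print(f"estimated failure probability: {pf:.4f}")
```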

arXiv:2503.22272v1 Announce Type: new Abstract: In this work, we propose a factor graph optimization (FGO) framework to simultaneously solve the calibration problem for Ultra-WideBand (UWB) anchors and the robot localization problem. Calibrating UWB anchors manually can be time-consuming and even impossible in emergencies or in situations without special calibration tools, so automatic estimation of the anchor positions becomes a necessity. The proposed method enables the creation of a soft sensor that provides the position information of the anchors in a UWB network. This soft sensor requires only UWB and LiDAR measurements taken from a moving robot. The proposed FGO framework is suitable for calibrating an extendable, large UWB network. Moreover, the anchor calibration problem and the robot localization problem can be solved simultaneously, which saves time during UWB network deployment. The proposed framework also helps to avoid artificial errors in the UWB-anchor position estimation and improves the accuracy and robustness of the robot-pose estimate. Experimental results for robot localization using LiDAR and a UWB network in a 3D environment are discussed, demonstrating the performance of the proposed method. More specifically, the calibration problem with four anchors and the robot localization problem can be solved simultaneously and automatically within 30 seconds by the proposed framework. The supplementary video and codes can be accessed via https://github.com/LiuxhRobotAI/Simultaneous_calibration_localization.

Xinghua Liu, Ming Cao · 3/31/2025
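
A batch nonlinear least-squares sketch of the joint problem described above: estimate four unknown anchor positions and a robot trajectory from UWB ranges, with the trajectory constrained by odometry/LiDAR-like relative-pose factors. This is a 2D stand-in for the paper's factor graph; the geometry, noise levels, and residual weights are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
anchors_true = np.array([[0., 0.], [10., 0.], [10., 8.], [0., 8.]])
t = np.linspace(1, 9, 20)
poses_true = np.stack([t, 4 + 3 * np.sin(t / 3)], axis=1)   # curved path

ranges = np.linalg.norm(poses_true[:, None] - anchors_true[None], axis=2)
ranges += 0.05 * rng.standard_normal(ranges.shape)          # UWB range noise
odom = np.diff(poses_true, axis=0) + 0.01 * rng.standard_normal((19, 2))

def residuals(theta):
    anchors = theta[:8].reshape(4, 2)
    poses = theta[8:].reshape(20, 2)
    r_uwb = (np.linalg.norm(poses[:, None] - anchors[None], axis=2) - ranges).ravel()
    r_odom = (np.diff(poses, axis=0) - odom).ravel()
    r_prior = poses[0] - poses_true[0]      # pin the first pose (gauge freedom)
    return np.concatenate([r_uwb, 10.0 * r_odom, 100.0 * r_prior])

x0 = np.concatenate([anchors_true.ravel() + rng.normal(0, 1.0, 8),
                     poses_true.ravel() + rng.normal(0, 0.5, 40)])
sol = least_squares(residuals, x0)
print("max anchor error:", np.abs(sol.x[:8].reshape(4, 2) - anchors_true).max())
```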

arXiv:2503.21795v1 Announce Type: new Abstract: Efficient spatial navigation is a hallmark of the mammalian brain, inspiring the development of neuromorphic systems that mimic biological principles. Despite progress, implementing key operations like back-tracing and handling ambiguity in bio-inspired spiking neural networks remains an open challenge. This work proposes a mechanism for activity back-tracing in arbitrary, uni-directional spiking neuron graphs. We extend the existing replay mechanism of the spiking hierarchical temporal memory (S-HTM) by our spike timing-dependent threshold adaptation (STDTA), which enables us to perform path planning in networks of spiking neurons. We further present an ambiguity dependent threshold adaptation (ADTA) for identifying places in an environment with less ambiguity, enhancing the localization estimate of an agent. Combined, these methods enable efficient identification of the shortest path to an unambiguous target. Our experiments show that a network trained on sequences reliably computes shortest paths with fewer replays than the steps required to reach the target. We further show that we can identify places with reduced ambiguity in multiple, similar environments. These contributions advance the practical application of biologically inspired sequential learning algorithms like the S-HTM towards neuromorphic localization and navigation.

Robin Dietrich, Tobias Fischer, Nicolai Waniek, Nico Reeb, Michael Milford, Alois Knoll, Adam D. Hines · 3/31/2025
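
A graph-level sketch of the replay-and-backtrace idea described above: propagate a spike wavefront with unit edge delays from the start node, record each node's first spike time, then back-trace from the target along strictly decreasing spike times. At this level of abstraction the mechanism reduces to BFS with parent recovery; the paper implements it with spiking neurons and threshold adaptation (STDTA), which this sketch does not model.

```python
from collections import deque

def spike_wave_path(graph, start, target):
    first_spike = {start: 0}
    queue = deque([start])
    while queue:                      # forward pass: spike wavefront
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in first_spike:
                first_spike[nxt] = first_spike[node] + 1
                queue.append(nxt)
    if target not in first_spike:
        return None
    path = [target]                   # backward pass: trace decreasing times
    while path[-1] != start:
        node = path[-1]
        preds = [p for p, succs in graph.items()
                 if node in succs and first_spike.get(p, -1) == first_spike[node] - 1]
        path.append(preds[0])
    return path[::-1]

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"]}
print(spike_wave_path(graph, "A", "F"))   # a shortest path, e.g. A-B-D-F
```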

arXiv:2409.03542v2 Announce Type: replace Abstract: Generative classifiers are constructed on the basis of a joint probability distribution and are typically learned using closed-form procedures that rely on data statistics and maximize scores related to data fitting. However, these scores are not directly linked to supervised classification metrics such as the error, i.e., the expected 0-1 loss. To address this limitation, we propose a learning procedure called risk-based calibration (RC) that iteratively refines the generative classifier by adjusting its joint probability distribution according to the 0-1 loss in training samples. This is achieved by reinforcing data statistics associated with the true classes while weakening those of incorrect classes. As a result, the classifier progressively assigns higher probability to the correct labels, improving its training error. Results on 20 heterogeneous datasets using both naïve Bayes and quadratic discriminant analysis show that RC significantly outperforms closed-form learning procedures in terms of both training error and generalization error. In this way, RC bridges the gap between traditional generative approaches and learning procedures guided by performance measures, ensuring a closer alignment with supervised classification objectives.

Aritz Pérez, Carlos Echegoyen, Guzmán Santafé · 3/31/2025
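
A minimal sketch of the risk-based calibration (RC) loop described above, for a discrete naïve Bayes classifier: start from the usual count statistics, then for each misclassified training point reinforce the counts of the true class and weaken those of the predicted (wrong) class. The step size, smoothing, and synthetic data are assumptions; the abstract states the general procedure, not these constants.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, v = 500, 8, 3, 4                 # samples, features, classes, values
y = rng.integers(k, size=n)
X = (rng.integers(v, size=(n, d)) + y[:, None]) % v   # class-dependent features

counts = np.ones((k, d, v))               # Laplace-smoothed sufficient statistics
prior = np.ones(k)
for i in range(n):
    prior[y[i]] += 1
    counts[y[i], np.arange(d), X[i]] += 1

def predict(X):
    logp = np.log(prior / prior.sum())[None, :].repeat(len(X), 0)
    probs = counts / counts.sum(axis=2, keepdims=True)
    for j in range(X.shape[1]):
        logp += np.log(probs[:, j, X[:, j]]).T
    return logp.argmax(axis=1)

lr = 0.1
for epoch in range(20):                    # RC iterations
    pred = predict(X)
    for i in np.flatnonzero(pred != y):
        counts[y[i], np.arange(d), X[i]] += lr      # reinforce true class
        counts[pred[i], np.arange(d), X[i]] -= lr   # weaken wrong class
    counts = np.maximum(counts, 1e-3)               # keep statistics valid
    print(epoch, "train error:", float(np.mean(pred != y)))
```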

arXiv:2503.21793v1 Announce Type: new Abstract: Neuromorphic computing based on spiking neural networks (SNNs) is emerging as a promising alternative to traditional artificial neural networks (ANNs), offering unique advantages in terms of low power consumption. However, the security aspect of SNNs is under-explored compared to their ANN counterparts. As the increasing reliance on AI systems comes with unique security risks and challenges, understanding the vulnerabilities and threat landscape is essential as neuromorphic computing matures. In this effort, we propose a novel input-triggered Hardware Trojan (HT) attack for SNNs. The HT mechanism is confined to the area of a single neuron. The trigger is an input message crafted in the spiking domain such that a selected neuron produces a malicious spike train that does not occur in normal settings. This spike train triggers a malicious modification in the neuron that forces it to saturate, firing permanently and failing to recover to its resting state even when the input activity stops. The excessive spikes pollute the network and produce misleading decisions. We propose a methodology to select an appropriate neuron and to generate the input pattern that triggers the HT payload. The attack is illustrated by simulation on three popular benchmarks in the neuromorphic community. We also propose a hardware implementation for an analog spiking neuron and a digital SNN accelerator, demonstrating that the HT has a negligible area and power footprint and can thereby easily evade detection.

Spyridon Raptis, Paul Kling, Ioannis Kaskampas, Ihsen Alouani, Haralampos-G. Stratigopoulos · 3/31/2025
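
A behavioral sketch of the trojan payload described above: a leaky integrate-and-fire neuron whose threshold is maliciously dropped once a specific trigger spike pattern is observed, after which it fires on every step and never returns to rest. The constants and the trigger pattern are illustrative assumptions, not the paper's circuit.

```python
TRIGGER = (1, 1, 0, 1)                  # hypothetical trigger spike pattern

def run_neuron(inputs, v_th=1.0, leak=0.9, w=0.8):
    v, recent, spikes = 0.0, [], []
    compromised = False
    for s in inputs:
        recent = (recent + [s])[-len(TRIGGER):]
        if tuple(recent) == TRIGGER:    # payload: permanently break threshold
            compromised = True
            v_th = -0.1                 # below resting potential => always fires
        v = leak * v + w * s            # leaky integration
        fired = v >= v_th
        spikes.append(int(fired))
        if fired:
            v = 0.0                     # reset (ineffective once v_th < 0)
    return spikes, compromised

benign = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
trigger = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(run_neuron(benign))               # stops firing when input stops
print(run_neuron(trigger))              # saturates: fires on every step
```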

arXiv:2503.22410v1 Announce Type: cross Abstract: This paper considers distributed online nonconvex optimization with time-varying inequality constraints over a network of agents. For a time-varying graph, we propose a distributed online primal-dual algorithm with compressed communication to efficiently utilize communication resources. We show that the proposed algorithm establishes an $\mathcal{O}(T^{\max\{1-\theta_1,\,\theta_1\}})$ network regret bound and an $\mathcal{O}(T^{1-\theta_1/2})$ network cumulative constraint violation bound, where $T$ is the number of iterations and $\theta_1 \in (0,1)$ is a user-defined trade-off parameter. When Slater's condition holds (i.e., there is a point that strictly satisfies the inequality constraints at all iterations), the network cumulative constraint violation bound is reduced to $\mathcal{O}(T^{1-\theta_1})$. These bounds are comparable to the state-of-the-art results established by existing distributed online algorithms with perfect communication for distributed online convex optimization with (time-varying) inequality constraints. Finally, a simulation example is presented to validate the theoretical results.

Kunpeng Zhang, Lei Xu, Xinlei Yi, Ming Cao, Karl H. Johansson, Tianyou Chai, Tao Yang · 3/31/2025
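
The abstract does not pin down the compressor; a common choice in compressed distributed optimization is top-k sparsification, sketched here as an assumption. Each agent would transmit only the k largest-magnitude entries of its update (values plus indices) instead of the dense vector.

```python
import numpy as np

def top_k_compress(x: np.ndarray, k: int):
    """Keep the k largest-magnitude entries; return (indices, values)."""
    idx = np.argpartition(np.abs(x), -k)[-k:]
    return idx, x[idx]

def decompress(idx, vals, dim):
    out = np.zeros(dim)
    out[idx] = vals
    return out

rng = np.random.default_rng(0)
update = rng.standard_normal(1000)
idx, vals = top_k_compress(update, k=50)          # 20x fewer entries sent
recovered = decompress(idx, vals, update.size)
rel_err = np.linalg.norm(update - recovered) / np.linalg.norm(update)
print(f"relative compression error: {rel_err:.3f}")
```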

arXiv:2503.22074v1 Announce Type: new Abstract: Large language models (LLMs) hold great promise for specialized scientific domains such as materials science, yet adapting them efficiently and accurately to domain-specific knowledge remains challenging due to limited data and high knowledge density. We propose a two-stage framework that combines structured model compression with a scientific fine-tuning regimen to address this challenge. In the compression stage, we decompose the LLM's weight matrices into local low-rank "rank blocks" and arrange these blocks in a Penrose-like non-periodic tiling pattern. Each block is then compacted via spectral transformations (e.g., discrete cosine or Fourier transforms), and a Kullback-Leibler (KL) divergence-based alignment loss preserves the distributional similarity between the compressed model's representations and those of the original full model. In the adaptation stage, the compressed model is further tuned using a human-like scientific reading protocol: it processes technical materials science documents section by section, engaging in a structured question-and-answer routine for each section. This section-wise Q&A fine-tuning strategy extracts explicit reasoning traces and gradually injects domain knowledge, while minimizing catastrophic forgetting of the model's general language capabilities. By balancing efficient compression with targeted adaptation, our two-stage approach enables precise specialization of LLMs to high-value domains under data-scarce conditions. We present this principled yet exploratory pipeline and outline its potential for advancing materials science knowledge integration, laying the groundwork for comprehensive empirical evaluation in future work.

Chuan-Wei Kuo, Siyu Chen, Chenqi Yan, Yu Yang Fredrik Liu · 3/31/2025
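
A sketch of the compression stage's ingredients as described: split a weight matrix into blocks, keep a low-rank factorization of each, compact it with a DCT, and measure a KL alignment loss between original and compressed model outputs. The block size, rank, thresholds, and tiny "model" are assumptions, and the paper's Penrose-like non-periodic tiling is replaced by a plain grid here.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.special import softmax, rel_entr

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))                 # a toy weight matrix

def compress_block(block, rank=4):
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    low_rank = u[:, :rank] * s[:rank] @ vt[:rank]        # local "rank block"
    coeffs = dctn(low_rank, norm="ortho")                # spectral compaction
    coeffs[np.abs(coeffs) < 0.1] = 0.0                   # drop small coefficients
    return idctn(coeffs, norm="ortho")

B = 16                                                    # plain-grid tiling
W_hat = np.block([[compress_block(W[i:i+B, j:j+B])
                   for j in range(0, 64, B)] for i in range(0, 64, B)])

x = rng.standard_normal((8, 64))                          # probe inputs
p, q = softmax(x @ W.T, axis=1), softmax(x @ W_hat.T, axis=1)
kl = rel_entr(p, q).sum(axis=1).mean()                    # alignment loss
print(f"KL(original || compressed) = {kl:.4f}")
```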

arXiv:2409.07067v3 Announce Type: replace Abstract: Spacecraft image denoising is a crucial fundamental technology closely related to aerospace research. However, existing deep learning-based image denoising methods are primarily designed for natural images and fail to adequately consider the characteristics of spacecraft images (e.g., low-light conditions, repetitive periodic structures), resulting in suboptimal performance on the spacecraft image denoising task. To address these problems, we propose a Structure modeling Activation Free Fourier Network (SAFFN), an efficient spacecraft image denoising method comprising a Structure Modeling Block (SMB) and an Activation Free Fourier Block (AFFB). The SMB effectively extracts edge information and models structure for better identification of spacecraft components in dark regions of noisy spacecraft images. The AFFB uses an improved Fast Fourier block to extract repetitive periodic features and long-range information from noisy spacecraft images. Extensive experimental results demonstrate that SAFFN performs competitively against state-of-the-art methods on noisy spacecraft image datasets. The code is available at: https://github.com/shenduke/SAFFN.

Jingfan Yang, Hu Gao, Ying Zhang, Bowen Ma, Depeng Dang · 3/31/2025
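
A sketch of the "Fast Fourier block" ingredient named above: filter a feature map in the frequency domain with a learnable complex-valued mask, one standard way such blocks capture repetitive periodic structure and long-range context. The shapes and single-layer design are assumptions, not SAFFN's actual architecture (see the linked repository for that).

```python
import torch
import torch.nn as nn

class FourierBlock(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # one complex weight per channel and rFFT frequency bin
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, dtype=torch.cfloat) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        freq = torch.fft.rfft2(x, norm="ortho")           # to frequency domain
        freq = freq * self.weight                         # learned global filter
        return torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")

block = FourierBlock(channels=8, height=32, width=32)
y = block(torch.randn(2, 8, 32, 32))
print(y.shape)  # torch.Size([2, 8, 32, 32])
```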

arXiv:2412.02545v3 Announce Type: replace Abstract: Shadows introduce challenges such as reduced brightness, texture deterioration, and color distortion in images, complicating a holistic solution. This study presents ShadowHack, a divide-and-conquer strategy that tackles these complexities by decomposing the original task into luminance recovery and color remedy. To brighten shadow regions and repair the corrupted textures in the luminance space, we customize LRNet, a U-shaped network with a rectified attention module, to enhance information interaction and recalibrate contaminated attention maps. With luminance recovered, CRNet then leverages cross-attention mechanisms to revive vibrant colors, producing visually compelling results. Extensive experiments on multiple datasets are conducted to demonstrate the superiority of ShadowHack over existing state-of-the-art solutions both quantitatively and qualitatively, highlighting the effectiveness of our design. Our code will be made publicly available.

Jin Hu, Mingjia Li, Xiaojie Guo · 3/31/2025

arXiv:2503.22345v1 Announce Type: new Abstract: We present a work in progress that explores using a Large Language Model (LLM) as a design material for an interactive museum installation. LLMs offer the possibility of creating chatbots that can facilitate dynamic and human-like conversation, engaging in a form of role play to bring historical persons to life for visitors. However, LLMs are prone to producing misinformation, which runs counter to museums' core mission to educate the public. We use Research-through-Design to explore some approaches to navigating this dilemma through rapid prototyping and evaluation and propose some directions for further research. We suggest that designers may shape interactions with the chatbot to emphasize personal narratives and role play rather than historical facts or to intentionally highlight the unreliability of the chatbot outputs to provoke critical reflection.

Maria Padilla Engstrøm, Anders Sundnes Løvlie · 3/31/2025

arXiv:2503.21796v1 Announce Type: new Abstract: Self-supervised learning has become an increasingly important paradigm in the domain of machine intelligence. Furthermore, evidence for self-supervised adaptation, such as contrastive formulations, has emerged in recent computational neuroscience and brain-inspired research. Nevertheless, current work on self-supervised learning relies on biologically implausible credit assignment, in the form of backpropagation of errors, and feedforward inference, typically a forward-locked pass. Predictive coding, in its mechanistic form, offers a biologically plausible means to sidestep these backprop-specific limitations. However, unsupervised predictive coding rests on learning a generative model of raw pixel input (akin to "generative AI" approaches), which entails predicting a potentially high-dimensional input; on the other hand, supervised predictive coding, which learns a mapping from inputs to target labels, requires human annotation and thus incurs the drawbacks of supervised learning. In this work, we present a scheme for self-supervised learning within a neurobiologically plausible framework that appeals to the free energy principle, constructing a new form of predictive coding that we call meta-representational predictive coding (MPC). MPC sidesteps the need for learning a generative model of sensory input (e.g., pixel-level features) by learning to predict representations of sensory input across parallel streams, resulting in an encoder-only learning and inference scheme. This formulation rests on active inference (in the form of sensory glimpsing) to drive the learning of representations, i.e., the representational dynamics are driven by sequences of decisions made by the model to sample informative portions of its sensorium.

Alexander Ororbia, Karl Friston, Rajesh P. N. Rao · 3/31/2025
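
A sketch of the core MPC idea as stated above: two parallel encoder streams view different glimpses of the same input, and one learns to predict the other's representation, so no pixel-level generative model is needed. The architecture sizes, the cosine objective, and the use of backprop for the illustration are assumptions; the paper's point is precisely to avoid backprop-style credit assignment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc_a = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
enc_b = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
pred_a2b = nn.Linear(16, 16)          # predicts stream B's code from stream A's
opt = torch.optim.Adam([*enc_a.parameters(), *enc_b.parameters(),
                        *pred_a2b.parameters()], lr=1e-3)

x = torch.randn(128, 128)             # a batch of "sensory" vectors
glimpse_a, glimpse_b = x[:, :64], x[:, 64:]    # two glimpses of one input

for step in range(100):
    za, zb = enc_a(glimpse_a), enc_b(glimpse_b)
    # predict B's representation from A's; stop gradients into the target
    loss = 1 - F.cosine_similarity(pred_a2b(za), zb.detach(), dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```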

arXiv:2503.22035v1 Announce Type: new Abstract: AI is transforming industries, raising concerns about job displacement and decision-making reliability. AI, as a universal approximation function, excels in data-driven tasks but struggles with small datasets, subjective probabilities, and contexts requiring human judgment, relationships, and ethics. The EPOCH framework highlights five irreplaceable human capabilities: Empathy, Presence, Opinion, Creativity, and Hope. These attributes are vital in financial services for trust, inclusion, innovation, and consumer experience. Although AI improves efficiency in risk management and compliance, it will not eliminate jobs but redefine them, much as ATMs reshaped bank tellers' roles. The challenge is ensuring professionals adapt, leveraging AI's strengths while preserving essential human capabilities.

Isabella Loaiza, Roberto Rigobon · 3/31/2025

arXiv:2503.22179v1 Announce Type: new Abstract: Face swapping aims to seamlessly transfer a source facial identity onto a target while preserving target attributes such as pose and expression. Diffusion models, known for their superior generative capabilities, have recently shown promise in advancing face-swapping quality. This paper addresses two key challenges in diffusion-based face swapping: the prioritized preservation of identity over target attributes and the inherent conflict between identity and attribute conditioning. To tackle these issues, we introduce an identity-constrained attribute-tuning framework for face swapping that first ensures identity preservation and then fine-tunes for attribute alignment, achieved through a decoupled condition injection. We further enhance fidelity by incorporating identity and adversarial losses in a post-training refinement stage. Our proposed identity-constrained diffusion-based face-swapping model outperforms existing methods in both qualitative and quantitative evaluations, demonstrating superior identity similarity and attribute consistency, achieving a new state-of-the-art performance in high-fidelity face swapping.

Dailan He, Xiahong Wang, Shulun Wang, Guanglu Song, Bingqi Ma, Hao Shao, Yu Liu, Hongsheng Li · 3/31/2025

arXiv:2503.21797v1 Announce Type: new Abstract: Cooperative co-evolution, through the decomposition of the problem space, is a primary approach for solving large-scale global optimization problems. Typically, when the subspaces are disjoint, such algorithms demonstrate significantly greater effectiveness and efficiency than non-decomposition algorithms. However, the presence of overlapping variables complicates the decomposition process and adversely affects the performance of cooperative co-evolution. In this study, we propose a novel two-phase cooperative co-evolution framework to address large-scale global optimization problems with complex overlapping. An effective method for decomposing overlapping problems, grounded in their mathematical properties, is embedded within the framework. Additionally, a customizable benchmark for overlapping problems is introduced to extend existing benchmarks and facilitate experimentation. Extensive experiments demonstrate that the algorithm instantiated within our framework significantly outperforms existing algorithms. The results reveal the characteristics of overlapping problems and highlight the differing strengths of cooperative co-evolution and non-decomposition algorithms. Our work is open-source and accessible at: https://github.com/GMC-DRL/HCC.

Wenjie Qiu, Hongshu Guo, Zeyuan Ma, Yue-Jiao Gong · 3/31/2025
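
A small sketch of the decomposition setting described above: given variable groups discovered by interaction analysis, identify which variables overlap (appear in more than one group) and which are exclusive to a single subcomponent. How overlapping variables are then treated is the paper's contribution; this only illustrates the bookkeeping, with hypothetical groups.

```python
from collections import Counter

groups = [{0, 1, 2, 3}, {3, 4, 5}, {5, 6, 7, 8}]   # hypothetical subcomponents
membership = Counter(v for g in groups for v in g)
shared = {v for v, c in membership.items() if c > 1}
print("shared variables:", sorted(shared))          # [3, 5]
for i, g in enumerate(groups):
    print(f"group {i}: exclusive={sorted(g - shared)}, shared={sorted(g & shared)}")
```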

arXiv:2503.22076v1 Announce Type: new Abstract: While transformers have proven enormously successful in a range of tasks, their fundamental properties as models of computation are not well understood. This paper contributes to the study of the expressive capacity of transformers, focusing on their ability to perform the fundamental computational task of evaluating an arbitrary function from $[n]$ to $[n]$ at a given argument. We prove that concise 1-layer transformers (i.e., with a polylog bound on the product of the number of heads, the embedding dimension, and precision) are capable of doing this task under some representations of the input, but not when the function's inputs and values are only encoded in different input positions. Concise 2-layer transformers can perform the task even with the more difficult input representation. Experimentally, we find a rough alignment between what we have proven can be computed by concise transformers and what can be practically learned.

Lena Strobl, Dana Angluin, Robert Frank · 3/31/2025
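
The task studied above, made concrete: given the full table of a function f: [n] -> [n] plus a query x, the model must output f(x). This sketch generates such sequences in one plausible encoding (interleaved argument/value pairs followed by the query); the paper compares several input representations, and this particular one is an assumption.

```python
import random

def make_example(n: int, seed: int):
    rng = random.Random(seed)
    table = [rng.randrange(n) for _ in range(n)]   # f(i) = table[i]
    x = rng.randrange(n)
    tokens = []
    for i, fx in enumerate(table):                 # interleaved (i, f(i)) pairs
        tokens += [i, fx]
    tokens += [x]                                  # the query argument
    return tokens, table[x]                        # input sequence, target f(x)

seq, target = make_example(n=8, seed=0)
print(seq, "->", target)
```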

arXiv:2503.21800v1 Announce Type: new Abstract: Population-based cancer registries (PBCRs) face a significant bottleneck in manually extracting data from unstructured pathology reports, a process crucial for tasks like tumor group assignment, which can consume 900 person-hours for approximately 100,000 reports. To address this, we introduce ELM (Ensemble of Language Models), a novel ensemble-based approach leveraging both small language models (SLMs) and large language models (LLMs). ELM utilizes six fine-tuned SLMs, where three SLMs use the top part of the pathology report and three SLMs use the bottom part. This is done to maximize report coverage. ELM requires five-out-of-six agreement for a tumor group classification. Disagreements are arbitrated by an LLM with a carefully curated prompt. Our evaluation across nineteen tumor groups demonstrates ELM achieves an average precision and recall of 0.94, outperforming single-model and ensemble-without-LLM approaches. Deployed at the British Columbia Cancer Registry, ELM demonstrates how LLMs can be successfully applied in a PBCR setting to achieve state-of-the-art results and significantly enhance operational efficiencies, saving hundreds of person-hours annually.

Lovedeep Gondara, Jonathan Simkin, Shebnum Devji, Gregory Arbour, Raymond Ng · 3/31/2025
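
A control-flow sketch of the ELM decision rule described above: six SLM votes (three on the top of the report, three on the bottom), accept on five-out-of-six agreement, otherwise defer to an LLM arbiter. The classifier and arbiter stubs are placeholders; ELM's actual models, prompt, and tumor-group label set are not reproduced here.

```python
from collections import Counter

def elm_classify(report: str, slms, llm_arbiter, quorum: int = 5):
    top, bottom = report[: len(report) // 2], report[len(report) // 2 :]
    votes = [m(top) for m in slms[:3]] + [m(bottom) for m in slms[3:]]
    label, count = Counter(votes).most_common(1)[0]
    if count >= quorum:                 # 5-of-6 agreement: accept
        return label
    return llm_arbiter(report, votes)   # disagreement: LLM arbitration

# Hypothetical stand-ins so the sketch runs end to end.
slms = [lambda text, lbl=lbl: lbl for lbl in
        ["lung", "lung", "lung", "lung", "lung", "breast"]]
arbiter = lambda report, votes: "needs-review"
print(elm_classify("...pathology report text...", slms, arbiter))  # "lung"
```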