math.IT
arXiv:2503.22071v1 Announce Type: cross Abstract: We propose a model for quantum computing with long chains of trapped ions and we design quantum error correction schemes for this model. The main components of a quantum error correction scheme are the quantum code and a quantum circuit called the syndrome extraction circuit, which is executed to perform error correction with this code. In this work, we design syndrome extraction circuits tailored to our ion chain model and a syndrome extraction tuning protocol to optimize these circuits, and we construct new quantum codes that outperform the state-of-the-art for chains of about $50$ qubits. To establish a baseline under the ion chain model, we simulate the performance of surface codes and bivariate bicycle (BB) codes equipped with our optimized syndrome extraction circuits. Then, we propose a new variant of BB codes defined by weight-five measurements, which we refer to as BB5 codes, and we identify BB5 codes that achieve a better minimum distance than any BB codes with the same number of logical qubits and data qubits, such as $[[30, 4, 5]]$ and $[[48, 4, 7]]$ BB5 codes. For a physical error rate of $10^{-3}$, the $[[48, 4, 7]]$ BB5 code achieves a logical error rate per logical qubit of $5 \cdot 10^{-5}$, which is four times smaller than the best BB code in our baseline family. It also achieves the same logical error rate per logical qubit as the distance-7 surface code but using four times fewer physical qubits per logical qubit.
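For readers who want to see what the baseline bivariate bicycle construction looks like in practice, the following Python sketch builds the check matrices $H_X = [A\,|\,B]$ and $H_Z = [B^\top\,|\,A^\top]$ of a standard weight-six BB code from two commuting polynomials in cyclic shift matrices and counts the logical qubits. The monomial choice below is illustrative, and the sketch does not implement the paper's BB5 variant, its syndrome extraction circuits, or the ion chain model.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = (M % 2).copy()
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def bb_code(l, m, a_terms, b_terms):
    """Weight-six bivariate bicycle CSS construction: H_X = [A | B], H_Z = [B^T | A^T],
    where A and B are sums of monomials x^i y^j in commuting cyclic shifts x, y."""
    x = np.kron(np.roll(np.eye(l, dtype=int), 1, axis=1), np.eye(m, dtype=int))
    y = np.kron(np.eye(l, dtype=int), np.roll(np.eye(m, dtype=int), 1, axis=1))
    mono = lambda i, j: np.linalg.matrix_power(x, i) @ np.linalg.matrix_power(y, j)
    A = sum(mono(i, j) for i, j in a_terms) % 2
    B = sum(mono(i, j) for i, j in b_terms) % 2
    H_X = np.hstack([A, B])
    H_Z = np.hstack([B.T, A.T])
    assert not ((H_X @ H_Z.T) % 2).any()          # CSS commutation condition
    n = 2 * l * m                                  # number of data qubits
    k = n - gf2_rank(H_X) - gf2_rank(H_Z)          # number of logical qubits
    return n, k

# Illustrative (hypothetical) monomial choice, not a code from the paper:
print(bb_code(6, 6, a_terms=[(3, 0), (0, 1), (0, 2)], b_terms=[(0, 3), (1, 0), (2, 0)]))
```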
arXiv:2503.22651v1 Announce Type: cross Abstract: We study the tradeoffs between the locality and parameters of subsystem codes. We prove lower bounds on both the number and lengths of interactions in any $D$-dimensional embedding of a subsystem code. Specifically, we show that any embedding of a subsystem code with parameters $[[n,k,d]]$ into $\mathbb{R}^D$ must have at least $M^*$ interactions of length at least $\ell^*$, where \[ M^* = \Omega(\max(k,d)), \quad\text{and}\quad \ell^* = \Omega\bigg(\max\bigg(\frac{d}{n^\frac{D-1}{D}}, \bigg(\frac{kd^\frac{1}{D-1}}{n}\bigg)^\frac{D-1}{D}\bigg)\bigg). \] We also give tradeoffs between the locality and parameters of commuting projector codes in $D$ dimensions, generalizing a result of Dai and Li. We provide explicit constructions of embedded codes showing that our bounds are optimal in both the interaction count and the interaction length.
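As a concrete instance (a direct specialization of the bound above, not an additional result), setting $D = 2$ gives \[ M^* = \Omega(\max(k,d)), \qquad \ell^* = \Omega\bigg(\max\bigg(\frac{d}{\sqrt{n}},\, \sqrt{\frac{kd}{n}}\bigg)\bigg), \] so any planar embedding must contain at least $\Omega(\max(k,d))$ interactions whose length grows at least as fast as $d/\sqrt{n}$.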
arXiv:2502.10209v2 Announce Type: replace Abstract: This paper presents a comprehensive framework for holographic multiantenna communication, a paradigm that integrates both wide apertures and closely spaced antennas relative to the wavelength. The presented framework is physically grounded, enabling information-theoretic analyses that inherently incorporate correlation and mutual coupling among the antennas. This establishes the combined effects of correlation and coupling on the information-theoretic performance limits across SNR levels. Additionally, it reveals that, by suitably selecting the individual antenna patterns, mutual coupling can be harnessed to either reinforce or counter spatial correlations as appropriate for specific SNRs, thereby improving the performance.
arXiv:2503.22015v1 Announce Type: cross Abstract: Learning-based denoising algorithms achieve state-of-the-art performance across various denoising tasks. However, training such models relies on access to large training datasets consisting of clean and noisy image pairs. On the other hand, in many imaging applications, such as microscopy, collecting ground truth images is often infeasible. To address this challenge, researchers have recently developed algorithms that can be trained without requiring access to ground truth data. However, training such models remains computationally challenging and still requires access to large sets of noisy training samples. In this work, inspired by compression-based denoising and recent advances in neural compression, we propose a new compression-based denoising algorithm, which we name DeCompress, that i) does not require access to ground truth images, ii) does not require access to a large training dataset - only a single noisy image is sufficient, iii) is robust to overfitting, and iv) achieves superior performance compared with zero-shot or unsupervised learning-based denoisers.
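To make the compression-based denoising principle concrete, here is a deliberately simple sketch that is not the DeCompress algorithm (whose compressor and selection rule are not specified in the abstract): it uses SVD truncation as a stand-in "compressor" and a discrepancy-style stopping rule, and it assumes the noise level sigma is known.

```python
import numpy as np

def svd_compress(img, rank):
    """Toy 'compressor': keep only the top-`rank` singular components."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def compression_denoise(noisy, sigma, max_rank=40):
    """Increase the compression fidelity until the residual drops to the noise power
    (a discrepancy-style rule; assumes the noise level sigma is known)."""
    rec = noisy
    for r in range(1, max_rank + 1):
        rec = svd_compress(noisy, r)
        if np.mean((noisy - rec) ** 2) <= sigma**2:
            break
    return rec

# Hypothetical usage on a synthetic low-rank "clean" image
rng = np.random.default_rng(0)
t = np.linspace(0, 3, 128)
clean = np.outer(np.sin(t), np.cos(t))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = compression_denoise(noisy, sigma=0.05)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))   # True
```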
arXiv:2503.21794v1 Announce Type: new Abstract: The paper explores an approach to constructing energy landscapes of a formal neuron and of multilayer artificial neural networks (ANNs). Analyzing these landscapes makes it possible to determine the conceptual limitations of both classification ANNs (e.g., MLP or CNN) and generative ANN models. The study of informational and thermodynamic entropy in formal neuron and ANN models leads to the conclusion that informational entropy is energetic in nature. Applying the concept of Gibbs free energy allows the output information of ANNs to be represented as the structured part of enthalpy. Modeling ANNs as energy systems makes it possible to interpret the structure of their internal energy as an internal model of the external world, which self-organizes through the interaction of the system's internal energy components. The self-organization and evolution of this model are controlled through an energy function (analogous to a Lyapunov function) based on reduction operators. This enables a new approach to constructing self-organizing and evolutionary ANNs with direct learning, which does not require additional external algorithms. The presented research makes it possible to formulate a formal definition of information in terms of the interaction between the internal and external energy of the system.
arXiv:2503.22486v1 Announce Type: new Abstract: This work investigates the potential of exploiting movable antennas (MAs) to enhance the performance of a multi-user downlink integrated sensing and communication (ISAC) system. Specifically, we formulate an optimization problem to maximize the transmit beampattern gain for sensing while simultaneously meeting each user's communication requirement by jointly optimizing antenna positions and beamforming design. The problem formulated is highly non-convex and involves multivariate-coupled constraints. To address these challenges, we introduce a series of auxiliary random variables and transform the original problem into an augmented Lagrangian problem. A double-loop algorithm based on a penalty dual decomposition framework is then developed to solve the problem. Numerical results validate the effectiveness of the proposed design, demonstrating its superiority over MA designs based on successive convex approximation optimization and other baseline approaches in ISAC systems. The results also highlight the advantages of MAs in achieving better sensing performance and improved beam control, especially for sparse arrays with large apertures.
arXiv:2503.22016v1 Announce Type: cross Abstract: We show how to construct simulation secure one-time memories, and thus one-time programs, without computational assumptions in the presence of constraints on quantum hardware. Specifically, we build one-time memories from random linear codes and quantum random access codes (QRACs) when constrained to non-adaptive, constant-depth, and $D$-dimensional geometrically-local quantum circuits for some constant $D$. We place no restrictions on the adversary's classical computational power, the number of qubits it can use, or the coherence time of its qubits. Notably, our construction can still be secure even in the presence of fault tolerant quantum computation as long as the input qubits are encoded in a non-fault tolerant manner (e.g. encoded as high energy states in non-ideal hardware). Unfortunately though, our construction requires decoding random linear codes and thus does not run in polynomial time. We leave open the question of whether one can construct a polynomial time information theoretically secure one-time memory from geometrically local quantum circuits. Of potentially independent interest, we develop a progress bound for information leakage via collision entropy (R\'enyi entropy of order $2$) along with a few key technical lemmas for a "mutual information" for collision entropies. We also develop new bounds on how much information a specific $2 \mapsto 1$ QRAC can leak about its input, which may be of independent interest as well.
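The $2 \mapsto 1$ QRAC used in the construction is a standard primitive; as background (not the paper's leakage analysis), the short numpy sketch below checks that the textbook encoding recovers either input bit with probability $\cos^2(\pi/8) \approx 0.85$.

```python
import numpy as np

# Standard 2->1 quantum random access code: two classical bits (b1, b2) are encoded
# into one qubit; the receiver later recovers either bit with probability cos^2(pi/8).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)

def encode(b1, b2):
    """Pure qubit state whose Bloch vector points along ((-1)^b1, 0, (-1)^b2)/sqrt(2)."""
    bloch = (((-1) ** b1) * X + ((-1) ** b2) * Z) / np.sqrt(2)
    return (Id + bloch) / 2

def recover_prob(rho, basis_op, bit):
    """Probability of the correct outcome when measuring X (for b1) or Z (for b2)."""
    proj = (Id + ((-1) ** bit) * basis_op) / 2
    return np.real(np.trace(proj @ rho))

probs = [recover_prob(encode(b1, b2), op, bit)
         for b1 in (0, 1) for b2 in (0, 1)
         for op, bit in ((X, b1), (Z, b2))]
print(min(probs), np.cos(np.pi / 8) ** 2)   # both ~0.8536
```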
arXiv:2411.04728v3 Announce Type: replace Abstract: Inspired by biological processes, neuromorphic computing leverages spiking neural networks (SNNs) to perform inference tasks, offering significant efficiency gains for workloads involving sequential data. Recent advances in hardware and software have shown that embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption. To scale neuromorphic computing to larger workloads, split computing - where an SNN is partitioned across two devices - is a promising solution. In such architectures, the device hosting the initial layers must transmit information about the spikes generated by its output neurons to the second device. This establishes a trade-off between the benefits of multi-level spikes, which carry additional payload information, and the communication resources required for transmitting extra bits between devices. This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs. We propose digital and analog modulation schemes for an orthogonal frequency division multiplexing (OFDM) radio interface to enable efficient communication. Simulation and experimental results using software-defined radios reveal performance improvements achieved by multi-level SNN models and provide insights into the optimal payload size as a function of the connection quality between the transmitter and receiver.
arXiv:2406.10910v2 Announce Type: replace Abstract: This paper concerns the coordinated multi-cell beamforming design for integrated sensing and communications (ISAC). In particular, we assume that each base station (BS) has massive antennas. The optimization objective is to maximize a weighted sum of the data rates (for communications) and the Fisher information (for sensing). We first show that the conventional beamforming method for multiple-input multiple-output (MIMO) transmission, i.e., the weighted minimum mean square error (WMMSE) algorithm, works for the ISAC problem from a fractional programming (FP) perspective. However, the WMMSE algorithm frequently requires computing the $N\times N$ matrix inverse, where $N$ is the number of transmit or receive antennas, so the algorithm becomes quite costly when antennas are massively deployed. To address this issue, we develop a nonhomogeneous bound and use it in conjunction with the FP technique to solve the ISAC beamforming problem without the need to invert any large matrices. It is further shown that the resulting new FP algorithm has an intimate connection with gradient projection, based on which we can accelerate the convergence via Nesterov's gradient extrapolation.
arXiv:2503.10574v1 Announce Type: new Abstract: We study a family of processes generated according to sequential probability assignments induced by the LZ78 universal compressor. We characterize entropic and distributional properties such as their entropy and relative entropy rates, the finite-state compressibility and log loss of their realizations, and the empirical distributions that they induce. Though not quite stationary, these sources are "almost stationary and ergodic"; similar to stationary and ergodic processes, they satisfy a Shannon-McMillan-Breiman-type property: the normalized log probability of their realizations converges almost surely to their entropy rate. Further, they are locally "almost i.i.d." in the sense that the finite-dimensional empirical distributions of their realizations converge almost surely to a deterministic i.i.d. law. However, unlike stationary ergodic sources, the finite-state compressibility of their realizations is almost surely strictly larger than their entropy rate by a "Jensen gap." We present simulations demonstrating the theoretical results. Among their potential uses, these sources allow one to gauge the performance of sequential probability models on non-Markovian non-stationary data.
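As background on the quantities compared in this abstract (this is not the paper's SPA-generated source construction), the sketch below performs LZ78 incremental parsing and reports the normalized codelength, a standard proxy for finite-state compressibility, next to the entropy rate of an i.i.d. source; for stationary ergodic sources the two coincide asymptotically, whereas the abstract shows a strict "Jensen gap" for LZ78-SPA-generated sources.

```python
import numpy as np

def lz78_codelength(seq, alphabet_size=2):
    """LZ78 incremental parsing of `seq`; returns (#phrases, approximate codelength in bits)."""
    phrases, current, count = set(), (), 0
    for s in seq:
        current = current + (int(s),)
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ()
    if current:
        count += 1
    # each parsed phrase is encoded by a pointer to a previous phrase plus one fresh symbol
    bits = count * (np.log2(max(count, 2)) + np.log2(alphabet_size))
    return count, bits

# Illustrative comparison on an i.i.d. Bernoulli(0.2) sequence (hypothetical parameters)
rng = np.random.default_rng(0)
p, n = 0.2, 200_000
seq = (rng.random(n) < p).astype(int)
_, bits = lz78_codelength(seq)
entropy_rate = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
print(bits / n, entropy_rate)   # normalized LZ78 codelength vs. entropy rate
```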
arXiv:2503.10274v1 Announce Type: cross Abstract: This paper is devoted to combining the chirp basis function transformation and the symplectic coordinates transformation to yield a novel Wigner distribution (WD) associated with the linear canonical transform (LCT), named the symplectic WD in the LCT domain (SWDL). It incorporates the merits of the symplectic WD (SWD) and the WD in the LCT domain (WDL), achieving a stronger capability in extracting the frequency-rate feature of linear frequency-modulated (LFM) signals while maintaining the same level of computational complexity. Some essential properties of the SWDL are derived, including marginal distributions, energy conservation, unique reconstruction, the Moyal formula, complex conjugate symmetry, time reversal symmetry, the scaling property, the time translation property, the frequency modulation property, and the combined time translation and frequency modulation property. Heisenberg's uncertainty principles for the SWDL are formulated, giving rise to three kinds of lower bounds attained respectively by the Gaussian-enveloped complex exponential signal, the Gaussian signal and the Gaussian-enveloped chirp signal. The optimal symplectic matrices corresponding to the highest time-frequency resolution are obtained by solving the lower-bound minimization problem. The time-frequency resolution of the SWDL is compared with those of the SWD and WDL to demonstrate its superiority in the time-frequency energy concentration of LFM signals. A synthesis example is also carried out to verify the feasibility and reliability of the theoretical analysis.
arXiv:2503.09729v1 Announce Type: cross Abstract: One of the most promising technologies for next-generation wireless networks is integrated sensing and communication (ISAC). It is considered a key enabler for applications that require both enhanced communication and accurate sensing capabilities. Examples of such applications include smart environments, augmented and virtual reality, and the internet of things, where the capabilities of intelligent sensing and broadband communications are vital. Therefore, ISAC has attracted the research interest of both academia and industry, and many investigations have been carried out over the past decade. The articles in the literature include system models, performance evaluation, and optimization studies of several alternative ISAC designs. Stochastic geometry is the study and analysis of random spatial patterns, and as such, stochastic geometry tools have been considered for the performance evaluation of wireless networks with different types of nodes. In this paper, we aim to provide a comprehensive survey of current research progress in the performance evaluation of ISAC systems using stochastic geometry tools. The survey covers terrestrial, aerial, and vehicular networks, where the random spatial locations of the corresponding network elements and propagation scatterers and/or blockages are treated with various point processes. The paper starts with a short tutorial on ISAC technology, stochastic geometry tools, and the metrics used in the performance evaluation of communication and sensing. Then, the technical components of the system models utilized in the surveyed papers are discussed. Subsequently, we present the key results of the literature in all types of networks using three levels of integration: sensing-assisted communication, communication-assisted sensing, and joint sensing and communication. Finally, future research challenges and promising directions are discussed.
arXiv:2503.07804v2 Announce Type: replace Abstract: We undertake a Shannon theoretic study of the problem of communicating classical information over a $3$-user quantum interference channel (QIC) and focus on characterizing inner bounds. In our previous work, we demonstrated that coding strategies based on coset codes can yield strictly larger inner bounds. Adopting the powerful technique of \textit{tilting}, \textit{smoothing} and \textit{augmentation} recently discovered by Sen, and combining it with our coset code strategy, we derive a new inner bound to the classical-quantum capacity region of a $3$-user QIC. The derived inner bound subsumes all currently known bounds.
arXiv:2501.02894v3 Announce Type: replace-cross Abstract: This work derives an upper bound on the maximum cardinality of a family of graphs on a fixed number of vertices in which the intersection of every two graphs in the family contains a subgraph that is isomorphic to a specified graph H. Such families are referred to as H-intersecting graph families. The bound is derived using the combinatorial version of Shearer's lemma, and it forms a nontrivial extension of the bound derived by Chung, Graham, Frankl, and Shearer (1986), where H is specialized to a triangle. The derived bound is expressed in terms of the chromatic number of H, while a relaxed version, formulated using the Lov\'{a}sz $\vartheta$-function of the complement of H, offers reduced computational complexity. Additionally, a probabilistic version of Shearer's lemma, combined with properties of the Shannon entropy, is employed to establish bounds related to the enumeration of graph homomorphisms, providing further insights into the interplay between combinatorial structures and information-theoretic principles.
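For context, the combinatorial version of Shearer's lemma invoked here is the following standard statement: if $A_1, \ldots, A_m \subseteq [n]$ are sets such that every element of $[n]$ belongs to at least $t$ of them, and $\mathcal{F}$ is a family of subsets of $[n]$ with traces $\mathcal{F}_j = \{S \cap A_j : S \in \mathcal{F}\}$, then \[ |\mathcal{F}|^{t} \le \prod_{j=1}^{m} |\mathcal{F}_j|. \] In the H-intersecting setting, the ground set is the edge set of the complete graph on the fixed vertex set, and members of $\mathcal{F}$ are identified with their edge sets.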
arXiv:2503.10234v1 Announce Type: new Abstract: In this paper, we establish the list-decoding capacity theorem for sum-rank metric codes. This theorem implies the list-decodability theorem for random general sum-rank metric codes: any random general sum-rank metric code with a rate not exceeding the list-decoding capacity is $\left(\rho,O\left(1/\epsilon\right)\right)$-list-decodable with high probability, where $\rho\in\left(0,1\right)$ represents the error fraction and $\epsilon>0$ is referred to as the capacity gap. For random $\mathbb{F}_q$-linear sum-rank metric codes, using the same proof approach, we demonstrate that any random $\mathbb{F}_q$-linear sum-rank metric code with a rate not exceeding the list-decoding capacity is $\left(\rho,\exp\left(O\left(1/\epsilon\right)\right)\right)$-list-decodable with high probability, where the list size is exponential at this stage due to the high correlation among codewords in linear codes. To achieve an exponential improvement on the list size, we prove a limited correlation property between sum-rank metric balls and $\mathbb{F}_q$-subspaces. Ultimately, we establish the list-decodability theorem for random $\mathbb{F}_q$-linear sum-rank metric codes: any random $\mathbb{F}_q$-linear sum-rank metric code with a rate not exceeding the list-decoding capacity is $\left(\rho, O\left(1/\epsilon\right)\right)$-list-decodable with high probability. Our proof of the list-decodability theorem for random $\mathbb{F}_q$-linear sum-rank metric codes is inspired by and aligns with the arguments in \cite{Gur2010,Din2014,Gur2017}, where the authors proved list-decodability theorems for random $\mathbb{F}_q$-linear Hamming metric codes and random $\mathbb{F}_q$-linear rank metric codes.
arXiv:2203.00110v3 Announce Type: replace Abstract: We consider the scenario of communicating over a $3$-user classical-quantum broadcast channel. We undertake an information theoretic study and focus on the problem of characterizing an inner bound to its capacity region. We design a new coding scheme based on \textit{partitioned coset codes} - an ensemble of codes possessing algebraic properties. Analyzing its information-theoretic performance, we characterize a new inner bound. We identify examples for which the derived inner bound is strictly larger than that achievable using IID random codes. Proceeding further, we incorporate Sen's technique of tilting, smoothing and augmentation to perform simultaneous decoding via a simultaneous decoding POVM, and thereby characterize a further enlarged achievable rate region for communicating classical bits over the $3$-user classical-quantum broadcast channel. Finally, we characterize a new inner bound to the classical-quantum capacity region of the $3$-user classical-quantum broadcast channel that subsumes all previously known inner bounds by combining the conventional unstructured IID codes with structured coset code strategies.
arXiv:2302.01856v3 Announce Type: replace Abstract: Quantifying the complexity of large graphs requires measures that extend beyond predefined structural features and scale efficiently with graph size. This work adopts a generative perspective, modeling large networks as exchangeable graphs to quantify the information content of their generating mechanisms via graphon entropy. As a graph property, graphon entropy is invariant under isomorphisms, making it an effective measure of complexity; however, it is not directly computable. To address this, we introduce a suite of graphon entropy estimators, including a nonparametric estimator for broad applicability and specialized versions for structured graphons arising from well-studied random graph models such as Erd\H{o}s-R\'enyi, Chung-Lu, and stochastic block models. We establish their large-sample properties, deriving convergence rates and Central Limit Theorems. Simulations illustrate how the nonparametric graphon entropy estimator captures structural variations in graphs, while real-world applications demonstrate its role in characterizing evolving network dynamics.
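As a minimal concrete instance (assuming the common definition of graphon entropy as $\int_0^1\!\int_0^1 h(W(x,y))\,dx\,dy$ with $h$ the binary entropy, and specializing to the Erd\H{o}s-R\'enyi graphon $W \equiv p$), the plug-in sketch below estimates $p$ by the edge density and returns $h(\hat{p})$; the paper's nonparametric estimator and its convergence analysis are more general.

```python
import numpy as np

def binary_entropy(p):
    """h(p) in nats, with clipping to avoid log(0)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def er_graphon_entropy_estimate(adj):
    """Plug-in estimate under the Erdos-Renyi specialization W == p:
    estimate p by the empirical edge density, then return h(p_hat)."""
    n = adj.shape[0]
    p_hat = adj[np.triu_indices(n, k=1)].mean()
    return binary_entropy(p_hat)

# Hypothetical usage on a simulated Erdos-Renyi graph
rng = np.random.default_rng(1)
n, p = 500, 0.2
upper = np.triu(rng.random((n, n)) < p, k=1)
adj = (upper | upper.T).astype(int)
print(er_graphon_entropy_estimate(adj), binary_entropy(p))   # estimate vs. true h(p)
```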
arXiv:2503.09825v1 Announce Type: new Abstract: This paper presents a comprehensive analysis of the information-energy capacity region for simultaneous lightwave information and power transfer (SLIPT) systems over lognormal fading channels. Unlike conventional studies that primarily focus on additive white Gaussian noise channels, we study the complex impact of lognormal fading, which is prevalent in optical wireless communication systems such as underwater and atmospheric channels. By applying Smith's framework to these channels, we demonstrate that the optimal input distribution is discrete, characterized by a finite number of mass points. We further investigate the properties of these mass points, especially at the transition points, to reveal critical insights into the rate-power trade-off inherent in SLIPT systems. Additionally, we introduce a novel cooperative information-energy capacity learning framework, leveraging generative adversarial networks, to effectively estimate and optimize the information-energy capacity region under practical constraints. Numerical results validate our theoretical findings, illustrating the significant influence of channel fading on system performance. The insights and methodologies presented in this work provide a solid foundation for the design and optimization of future SLIPT systems operating in challenging environments.
arXiv:2503.09991v1 Announce Type: new Abstract: A finite-field multiple-access (FFMA) system separates users within a finite field by utilizing different element-pairs (EPs) as virtual resources. The Cartesian product of distinct EPs forms an EP code, which serves as the input to a finite-field multiplexing module (FF-MUX), allowing the FFMA technique to interchange the order of channel coding and multiplexing. This flexibility enables the FFMA system to support a large number of users with short packet traffic, addressing the finite block length (FBL) problem in multiuser reliable transmission. Designing EP codes is a central challenge in FFMA systems. In this paper, we construct EP codes based on a bit(s)-to-codeword transformation approach and define the corresponding EP code as a codeword-wise EP (CWEP) code. We then investigate the encoding process of EP codes, and propose unique sum-pattern mapping (USPM) structural property constraints to design uniquely decodable CWEP codes. Next, we present the \(\kappa\)-fold ternary orthogonal matrix \({\bf T}_{\rm o}(2^{\kappa}, 2^{\kappa})\) over GF\((3^m)\), where \(m = 2^{\kappa}\), and the ternary non-orthogonal matrix \({\bf T}_{\rm no}(3,2)\) over GF\((3^2)\), for constructing specific CWEP codes. Based on the proposed CWEP codes, we introduce three FFMA modes: channel codeword multiple access (FF-CCMA), code division multiple access (FF-CDMA), and non-orthogonal multiple access (FF-NOMA). Simulation results demonstrate that all three modes effectively support massive user transmissions with strong error performance.
arXiv:2503.09977v1 Announce Type: new Abstract: Fractional programming (FP) is a branch of mathematical optimization that deals with the optimization of ratios. It is an invaluable tool for signal processing and machine learning, because many key metrics in these fields are fractionally structured, e.g., the signal-to-interference-plus-noise ratio (SINR) in wireless communications, the Cram\'er-Rao bound (CRB) in radar sensing, the normalized cut in graph clustering, and the margin in support vector machines (SVMs). This article provides a comprehensive review of both the theory and applications of a recently developed FP technique known as the quadratic transform, which can be applied to a wide variety of FP problems, including both the minimization and the maximization of the sum of functions of ratios as well as matrix ratio problems.
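As a concrete illustration of the quadratic transform on a toy sum-of-ratios problem (a minimal sketch with made-up coefficients, not an application taken from the article), the Python snippet below alternates the closed-form $y$-update $y_i = \sqrt{A_i(x)}/B_i(x)$ with a concave $x$-update; the objective is non-decreasing across iterations.

```python
import numpy as np

# Toy sum-of-ratios problem: maximize f(x) = sum_i a_i*x / (b_i*x**2 + c_i) over x > 0
# (hypothetical coefficients chosen only for illustration).
a = np.array([1.0, 2.0, 0.5])
b = np.array([0.3, 0.1, 0.6])
c = np.array([1.0, 2.0, 0.8])

def objective(x):
    return np.sum(a * x / (b * x**2 + c))

x = 1.0                                    # arbitrary feasible starting point
for it in range(20):
    # y-update of the quadratic transform: y_i = sqrt(A_i(x)) / B_i(x)
    y = np.sqrt(a * x) / (b * x**2 + c)
    # x-update: maximize sum_i 2*y_i*sqrt(a_i*x) - y_i**2*(b_i*x**2 + c_i),
    # which is concave in x > 0; setting the derivative to zero gives a closed form.
    s1, s2 = np.sum(y * np.sqrt(a)), np.sum(y**2 * b)
    x = (s1 / (2.0 * s2)) ** (2.0 / 3.0)
    print(it, round(x, 4), round(objective(x), 6))   # objective value is non-decreasing
```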