cs.IT

arXiv:2501.07318v1 Announce Type: new Abstract: In this paper, we propose an integrated sensing and communication (ISAC) system aided by a movable-antenna (MA) array, which can improve communication and sensing performance over a conventional fixed-position antenna (FPA) array via flexible antenna movement. First, we consider downlink multiuser communication, where each user is randomly distributed within a given three-dimensional zone with local movement. To reduce the overhead of frequent antenna movement, the antenna position vector (APV) is designed based on users' statistical channel state information (CSI), so that the antennas only need to be moved on a large timescale. Then, for target sensing, the Cramér-Rao bounds (CRBs) of the estimation mean square error for different spatial angles of arrival (AoAs) are derived as functions of the MAs' positions. Based on the above, we formulate an optimization problem to maximize the expected minimum achievable rate among all communication users, subject to constraints on the maximum acceptable CRB thresholds for target sensing. An alternating optimization algorithm is proposed to iteratively optimize one of the horizontal and vertical APVs of the MA array with the other being fixed. Numerical results demonstrate that our proposed MA arrays can significantly enlarge the trade-off region between communication and sensing performance compared to conventional FPA arrays with different inter-antenna spacings. It is also revealed that the steering vectors of the designed MA arrays exhibit low correlation in the angular domain, thus effectively reducing channel correlation among communication users to enhance their achievable rates, while alleviating ambiguity in target angle estimation to achieve improved sensing accuracy.

Wenyan Ma, Lipeng Zhu, Rui Zhang (1/14/2025)
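
A minimal numerical illustration of the angular-correlation effect mentioned at the end of the abstract above: the snippet compares the steering-vector correlation of a half-wavelength fixed-position array with that of a perturbed placement standing in for an optimized movable-antenna position vector. The perturbation range, array size, and angles are arbitrary choices, not the paper's design.

```python
import numpy as np

def steering_vector(positions, theta):
    """Far-field steering vector for a linear array with antennas at `positions`
    (in wavelengths) and angle of arrival `theta` (radians)."""
    return np.exp(1j * 2 * np.pi * positions * np.sin(theta))

rng = np.random.default_rng(0)
n = 8
fpa = 0.5 * np.arange(n)                    # fixed-position array, half-wavelength spacing
ma = fpa + rng.uniform(-0.2, 0.2, size=n)   # stand-in for an optimized antenna position vector

theta_a, theta_b = np.deg2rad(20), np.deg2rad(35)
for name, pos in [("FPA", fpa), ("MA", ma)]:
    a, b = steering_vector(pos, theta_a), steering_vector(pos, theta_b)
    print(f"{name}: |steering-vector correlation| = {abs(np.vdot(a, b)) / n:.3f}")
```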

arXiv:2501.07363v1 Announce Type: new Abstract: We introduce several families of entanglement-assisted (EA) Calderbank-Shor-Steane (CSS) codes derived from two distinct classes of low-density parity-check (LDPC) codes. We derive two families of EA quantum QC-LDPC codes, namely, the spatially coupled (SC) and the non-spatially coupled cases. These two families are constructed by tiling permutation matrices of prime and composite orders. We establish several code properties along with conditions for guaranteed girth for the proposed code families. The Tanner graphs of the proposed EA quantum QC-LDPC and EA quantum QC-SC-LDPC codes have girths greater than four, which is required for good error correction performance. Some of the proposed families of codes require only \textit{minimal} Bell pairs to be shared across the quantum transceiver. Furthermore, we construct two families of EA quantum QC-LDPC codes based on a single classical code, with Tanner graphs having girths greater than six, further improving the error correction performance. We evaluate the performance of these codes using both depolarizing and Markovian noise models to assess the random and burst error performance. Using a modified version of the sum-product algorithm over a quaternary alphabet, we show how correlated Pauli errors can be handled within the decoding setup. Simulation results show that nearly an order of magnitude improvement in the error correction performance can be achieved with the quaternary decoder compared to the binary decoder over the depolarizing and Markovian error channels, thereby generalizing the approach of EA quantum QC-LDPC code designs to work with both random and burst quantum error models, which is useful in practice.

Pavan Kumar, Abhi Kumar Sharma, Shayan Srinivasa Garani (1/14/2025)
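
For readers unfamiliar with quasi-cyclic constructions, the sketch below builds a parity-check matrix by tiling circulant permutation matrices according to an exponent matrix. It is a generic QC-LDPC construction, not the paper's specific EA or spatially coupled designs; the exponent matrix and lifting size are arbitrary.

```python
import numpy as np

def cpm(p, shift):
    """p x p circulant permutation matrix: the identity with columns cyclically shifted."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_parity_check(exponent_matrix, p):
    """Tile circulant permutation matrices according to an exponent matrix."""
    return np.block([[cpm(p, e) for e in row] for row in exponent_matrix])

H = qc_parity_check([[0, 1, 2], [0, 2, 4]], p=5)
print(H.shape, H.sum(axis=0))   # (10, 15); every column has weight 2
```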

arXiv:2501.07279v1 Announce Type: new Abstract: Binary linear block codes (BLBCs) are essential to modern communication, but their diverse structures often require multiple decoders, increasing complexity. This work introduces enhanced polar decoding ($\mathsf{PD}^+$), a universal soft decoding algorithm that transforms any BLBC into a polar-like code compatible with efficient polar code decoders such as successive cancellation list (SCL) decoding. Key innovations in $\mathsf{PD}^+$ include pruning polar kernels, shortening codes, and leveraging a simulated annealing algorithm to optimize transformations. These enable $\mathsf{PD}^+$ to achieve competitive or superior performance to state-of-the-art algorithms like OSD and GRAND across various codes, including extended BCH, extended Golay, and binary quadratic residue codes, with significantly lower complexity. Moreover, $\mathsf{PD}^+$ is designed to be forward-compatible with advancements in polar code decoding techniques and AI-driven search methods, making it a robust and versatile solution for universal BLBC decoding in both present and future systems.

Chien-Ying Lin, Yu-Chih Huang, Shin-Lin Shieh, Po-Ning Chen (1/14/2025)
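
The abstract above mentions a simulated annealing search over code transformations; below is a generic simulated-annealing loop of that kind, with a placeholder state, neighbor move, and cost function rather than the paper's actual pruning/shortening search space and decoding-performance cost.

```python
import math, random

def simulated_annealing(initial, neighbor, cost, steps=10000, t0=1.0, alpha=0.999):
    """Generic simulated-annealing search: accept worse candidates with a
    temperature-dependent probability and keep the best state seen."""
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t0
    for _ in range(steps):
        cand = neighbor(state)
        cc = cost(cand)
        if cc <= c or random.random() < math.exp((c - cc) / max(t, 1e-12)):
            state, c = cand, cc
            if c < best_c:
                best, best_c = state, c
        t *= alpha
    return best, best_c

# Toy usage with a placeholder cost: minimize x^2 over unit steps.
print(simulated_annealing(10.0, lambda x: x + random.choice([-1, 1]), lambda x: x * x))
```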

arXiv:2501.06760v1 Announce Type: new Abstract: Recent advancements in smart radio environment technologies aim to enhance wireless network performance through the use of low-cost electromagnetic (EM) devices. Among these, reconfigurable intelligent surfaces (RIS) have garnered attention for their ability to modify incident waves via programmable scattering elements. An RIS is a nearly passive device, in which the tradeoff between performance, power consumption, and optimization overhead depends on how often the RIS needs to be reconfigured. This paper focuses on the metaprism (MTP), a static frequency-selective metasurface that relaxes the reconfiguration requirements of RISs and allows for the creation of different beams at various frequencies. In particular, we address the design of an ideal MTP based on its frequency-dependent reflection coefficients, defining the general properties necessary to achieve the desired beam steering function in the angle-frequency domain. We also discuss the limitations of previous studies that employed oversimplified models, which may compromise performance. Key contributions include a detailed exploration of the equivalence of the MTP to an ideal S-parameter multiport model and an analysis of its implementation using Foster's circuits. Additionally, we introduce a realistic multiport network model that incorporates aspects overlooked by ideal scattering models, along with an ad hoc optimization strategy for this model. The performance of the proposed optimization approach and circuit implementation is validated through simulations using a commercial full-wave EM simulator, confirming the effectiveness of the proposed method.

Silvia Palmucci, Andrea Abrardo, Davide Dardari, Alberto Toccafondi, Marco Di Renzo (1/14/2025)

arXiv:2501.06923v1 Announce Type: new Abstract: In online betting, the bookmaker can update the payoffs it offers on a particular event many times before the event takes place, and the updated payoffs may depend on the bets accumulated thus far. We study the problem of bookmaking with the goal of maximizing the return in the worst-case, with respect to the gamblers' behavior and the event's outcome. We formalize this problem as the \emph{Optimal Online Bookmaking game}, and provide the exact solution for the binary case. To this end, we develop the optimal bookmaking strategy, which relies on a new technique called bi-balancing trees, that assures that the house loss is the same for all \emph{decisive} betting sequences, where the gambler bets all its money on a single outcome in each round.

Alankrita Bhatt, Or Ordentlich, Oron Sabag (1/14/2025)

arXiv:2501.07041v1 Announce Type: new Abstract: In this paper, we investigate receiver design for high frequency (HF) skywave massive multiple-input multiple-output (MIMO) communications. We first establish a modified beam based channel model (BBCM) by performing uniform sampling of the directional cosine with a deterministic sampling interval, where the beam matrix is constructed using a phase-shifted discrete Fourier transform (DFT) matrix. Based on the modified BBCM, we propose a beam structured turbo receiver (BSTR) involving low-dimensional beam domain signal detection for grouped user terminals (UTs), which is proved to be asymptotically optimal in terms of minimizing the mean-squared error (MSE). Moreover, we extend it to a windowed BSTR by introducing a windowing approach for interference suppression and complexity reduction, and propose a well-designed energy-focusing window. We also present an efficient implementation of the windowed BSTR by exploiting the structural properties of the beam matrix and the beam domain channel sparsity. Simulation results validate the superior performance of the proposed receivers at remarkably low complexity.

Linfeng Song, Ding Shi, Xiqi Gao, Geoffrey Ye Li, Xiang-Gen Xia (1/14/2025)
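
As a rough illustration of the beam matrix construction referenced above, the sketch below builds a phase-shifted DFT-type matrix for a half-wavelength uniform linear array by sampling the directional cosine uniformly. The exact phase shift and sampling grid used in the paper may differ.

```python
import numpy as np

def beam_matrix(n_antennas, oversample=1):
    """Beam matrix whose columns are steering vectors of a half-wavelength ULA
    sampled uniformly in the directional cosine; the half-sample offset makes it
    a phase-shifted DFT-type matrix."""
    n, m = n_antennas, oversample * n_antennas
    u = -1 + (2 * np.arange(m) + 1) / m          # uniform directional-cosine grid
    k = np.arange(n)[:, None]                    # antenna index
    return np.exp(1j * np.pi * k * u[None, :]) / np.sqrt(n)

B = beam_matrix(8)
print(np.allclose(B.conj().T @ B, np.eye(8)))    # True: orthonormal beams when not oversampled
```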

arXiv:2501.06801v1 Announce Type: new Abstract: DNA data storage is now being considered as a new archival storage method for its durability and high information density, but it still faces challenges such as high costs and low throughput. By reducing the sequencing sample size needed for decoding digital data, minimizing DNA coverage depth helps lower both costs and system latency. Previous studies have mainly focused on minimizing coverage depth in uniform distribution channels under theoretical assumptions. In contrast, our work uses real DNA storage experimental data to extend this problem to log-normal distribution channels, a model derived from our PCR and sequencing data analysis. In this framework, we investigate both noiseless and noisy channels. We first demonstrate a clear negative correlation between linear coding redundancy and the expected minimum sequencing coverage depth. Moreover, we observe that the probability of successfully decoding all data in a single sequencing run first increases and then decreases as coding redundancy rises, when the sample size is optimized for complete decoding. Then we extend the lower bounds on DNA coverage depth from uniform to log-normal noisy channels. The findings of this study provide valuable insights for the efficient execution of DNA storage experiments.

Ruiying Cao, Xin Chen (1/14/2025)
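
A toy Monte-Carlo illustration of the redundancy-versus-coverage-depth trade-off discussed in the abstract above, under the simplifying assumptions of a noiseless channel, log-normal strand abundances, and an idealized code for which any k of the n encoded strands suffice for decoding. All parameters are arbitrary, not taken from the paper's experiments.

```python
import numpy as np

def reads_needed(k, n, sigma=1.0, trials=100, rng=None):
    """Monte-Carlo estimate of the number of sequencing reads required until k
    distinct strands (out of n encoded strands, i.e. redundancy n - k) have been
    observed, with strand abundances drawn from a log-normal distribution."""
    rng = rng or np.random.default_rng(0)
    totals = []
    for _ in range(trials):
        p = rng.lognormal(sigma=sigma, size=n)
        p /= p.sum()
        seen = np.zeros(n, dtype=bool)
        distinct = reads = 0
        while distinct < k:
            for strand in rng.choice(n, size=256, p=p):   # draw reads in batches
                reads += 1
                if not seen[strand]:
                    seen[strand] = True
                    distinct += 1
                    if distinct == k:
                        break
        totals.append(reads)
    return float(np.mean(totals))

# More linear-code redundancy (n - k) lowers the expected coverage depth.
for n in (64, 72, 80):
    print(f"n = {n}: ~{reads_needed(k=64, n=n):.0f} reads")
```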

arXiv:2501.07220v1 Announce Type: new Abstract: Integrated sensing and communication (ISAC) and ubiquitous connectivity are two usage scenarios of sixth generation (6G) networks. In this context, low earth orbit (LEO) satellite constellations, as an important component of 6G networks, are expected to provide ISAC services across the globe. In this paper, we propose a novel dual-function LEO satellite constellation framework that simultaneously realizes information communication for multiple user equipments (UEs) and location sensing for a target of interest with the same hardware and spectrum. In order to improve both the information transmission rate and the location sensing accuracy within limited wireless resources in a dynamic environment, we design a multiple-satellite cooperative information communication and location sensing algorithm by jointly optimizing the communication beamforming and sensing waveform according to the characteristics of the LEO satellite constellation. Finally, extensive simulation results are presented to demonstrate the competitive performance of the proposed algorithms.

Qi Wang, Xiaoming Chen, Qiao Qi, Mili Li, Wolfgang Gerstacker (1/14/2025)

arXiv:2501.06700v1 Announce Type: new Abstract: In this paper, we address a crucial but often overlooked issue in applying reinforcement learning (RL) to radio resource management (RRM) in wireless communications: the mismatch between the discounted reward RL formulation and the undiscounted goal of wireless network optimization. To the best of our knowledge, we are the first to systematically investigate this discrepancy, starting with a discussion of the problem formulation followed by simulations that quantify the extent of the gap. To bridge this gap, we introduce the use of average reward RL, a method that aligns more closely with the long-term objectives of RRM. We propose a new method, the Average Reward Off-policy Soft Actor-Critic (ARO SAC), which adapts the well-known Soft Actor-Critic algorithm to the average reward framework. This new method achieves a significant performance improvement: our simulation results demonstrate a 15% gain in system performance over the traditional discounted reward RL approach, underscoring the potential of average reward RL in enhancing the efficiency and effectiveness of wireless network optimization.

Kun Yang, Jing Yang, Cong Shen (1/14/2025)
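
The core difference between the two formulations shows up in the critic targets. The snippet below contrasts the standard discounted TD target with the average-reward (differential) target, where a learned estimate of the long-run average reward replaces discounting. This is a conceptual sketch, not the paper's ARO SAC implementation.

```python
def discounted_td_target(r, q_next, gamma=0.99):
    # Discounted-reward target: r + gamma * Q(s', a')
    return r + gamma * q_next

def average_reward_td_target(r, q_next, rho):
    # Average-reward (differential) target: r - rho + Q(s', a'),
    # where rho is a running estimate of the long-run average reward.
    return r - rho + q_next

# rho itself is typically updated from the TD error, e.g.
# rho <- rho + eta * (r - rho + q_next - q)
```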

arXiv:2501.06726v1 Announce Type: new Abstract: Sensing and edge artificial intelligence (AI) are envisioned as two essential and interconnected functions in sixth-generation (6G) mobile networks. On the one hand, sensing-empowered applications rely on powerful AI models to extract features and understand semantics from ubiquitous wireless sensors. On the other hand, the massive amount of sensory data serves as the fuel to continuously refine edge AI models. This deep integration of sensing and edge AI has given rise to a new task-oriented paradigm known as integrated sensing and edge AI (ISEA), which features a holistic design approach to communication, AI computation, and sensing for optimal sensing-task performance. In this article, we present a comprehensive survey of ISEA. We first provide technical preliminaries for sensing, edge AI, and new communication paradigms in ISEA. Then, we study several use cases of ISEA to demonstrate its practical relevance and introduce current standardization and industrial progress. Next, the design principles, metrics, tradeoffs, and architectures of ISEA are established, followed by a thorough overview of ISEA techniques, including digital air interface, over-the-air computation, and advanced signal processing. The interplay of ISEA with various 6G advancements, e.g., new physical-layer and networking techniques, is also presented. Finally, we present future research opportunities in ISEA, including the integration of foundation models, convergence of ISEA and integrated sensing and communications (ISAC), and ultra-low-latency ISEA.

Zhiyan Liu, Xu Chen, Hai Wu, Zhanwei Wang, Xianhao Chen, Dusit Niyato, Kaibin Huang (1/14/2025)

arXiv:2501.06545v1 Announce Type: new Abstract: Low harvested energy poses a significant challenge to sustaining continuous communication in energy harvesting (EH)-powered wireless sensor networks. This is mainly due to intermittent and limited power availability from radio frequency signals. In this paper, we introduce a novel energy-aware resource allocation problem aimed at enabling the asynchronous accumulate-then-transmit protocol, offering an alternative to the extensively studied harvest-then-transmit approach. Specifically, we jointly optimize power allocation and time fraction dedicated to EH to maximize the average long-term system throughput, accounting for both data and energy queue lengths. By leveraging inner approximation and network utility maximization techniques, we develop a simple yet efficient iterative algorithm that guarantees at least a local optimum and achieves long-term utility improvement. Numerical results highlight the proposed approach's effectiveness in terms of both queue length and sustained system throughput.

Ngoc M. Ngo, Trung T. Nguyen, Phuc H. Nguyen, Van-Dinh Nguyen (1/14/2025)

arXiv:2501.06910v1 Announce Type: new Abstract: Data compression plays a key role in reducing storage and I/O costs. Traditional lossy methods primarily target data on rectilinear grids and cannot leverage the spatial coherence in unstructured mesh data, leading to suboptimal compression ratios. We present a multi-component, error-bounded compression framework designed to enhance the compression of floating-point unstructured mesh data, which is common in scientific applications. Our approach involves interpolating mesh data onto a rectilinear grid and then separately compressing the grid interpolation and the interpolation residuals. This method is general, independent of mesh types and topologies, and can be seamlessly integrated with existing lossy compressors for improved performance. We evaluated our framework across twelve variables from two synthetic datasets and two real-world simulation datasets. The results indicate that the multi-component framework consistently outperforms state-of-the-art lossy compressors on unstructured data, achieving, on average, a $2.3-3.5\times$ improvement in compression ratios, with error bounds ranging from $10^{-6}$ to $10^{-2}$. We further investigate the impact of hyperparameters, such as grid spacing and error allocation, to deliver optimal compression ratios in diverse datasets.

Qian Gong, Zhe Wang, Viktor Reshniak, Xin Liang, Jieyang Chen, Qing Liu, Tushar M. Athawale, Yi Ju, Anand Rangarajan, Sanjay Ranka, Norbert Podhorszki, Rick Archibald, Scott Klasky (1/14/2025)
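
A small sketch of the two-component idea described in the abstract above: interpolate unstructured samples onto a rectilinear grid, then store the grid field and per-point residuals, each passed through an error-bounded compressor. A uniform scalar quantizer stands in for the compressor here, and the SciPy interpolation routines and all parameters are illustrative choices, not the paper's pipeline.

```python
import numpy as np
from scipy.interpolate import griddata, RegularGridInterpolator

def quantize(x, eb):
    """Uniform scalar quantizer with absolute error <= eb (stand-in for an
    error-bounded lossy compressor)."""
    return np.round(x / (2 * eb)) * (2 * eb)

def two_component_compress(points, values, grid_x, grid_y, eb):
    """Interpolate scattered samples onto a rectilinear grid, then store the
    quantized grid field plus per-point residuals against the decoded grid."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    grid_field = quantize(griddata(points, values, (gx, gy),
                                   method="linear", fill_value=0.0), eb)
    predict = RegularGridInterpolator((grid_x, grid_y), grid_field,
                                      bounds_error=False, fill_value=0.0)
    residual = quantize(values - predict(points), eb)
    return grid_field, residual   # decoder: predict(points) + residual

rng = np.random.default_rng(0)
pts, ax = rng.random((500, 2)), np.linspace(0.0, 1.0, 33)
vals = np.sin(6 * pts[:, 0]) * np.cos(6 * pts[:, 1])
grid_field, residual = two_component_compress(pts, vals, ax, ax, eb=1e-3)
recon = RegularGridInterpolator((ax, ax), grid_field, bounds_error=False,
                                fill_value=0.0)(pts) + residual
print(float(np.abs(recon - vals).max()))   # bounded by eb = 1e-3
```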

arXiv:2501.06974v1 Announce Type: new Abstract: Fluid antenna multiple access (FAMA), enabled by the fluid antenna system (FAS), offers a new and straightforward solution to massive connectivity. Previous results on FAMA were primarily based on narrowband channels. This paper studies the adoption of FAMA within the fifth-generation (5G) orthogonal frequency division multiplexing (OFDM) framework, referred to as OFDM-FAMA, and evaluates its performance in broadband multipath channels. We first design the OFDM-FAMA system, taking into account 5G channel coding and OFDM modulation. Then the system's achievable rate is analyzed, and an algorithm to approximate the FAS configuration at each user is proposed based on this rate. Extensive link-level simulation results reveal that OFDM-FAMA can significantly improve the multiplexing gain over the OFDM system with fixed-position antenna (FPA) users, especially when robust channel coding is applied and the number of radio-frequency (RF) chains at each user is small.

Hanjiang Hong, Kai-Kit Wong, Hao Xu, Yin Xu, Hyundong Shin, Ross Murch, Dazhi He, Wenjun Zhang (1/14/2025)

arXiv:2501.06970v1 Announce Type: new Abstract: With the rapid expansion of space activities and the escalating accumulation of space debris, Space Domain Awareness (SDA) has become essential for sustaining safe space operations. This paper proposes a decentralized solution using satellite swarms and blockchain, where satellites (nodes) take on the roles of verifiers and approvers to validate and store debris-tracking data securely. Our simulations show that the network achieves optimal performance with around 30 nodes, balancing throughput with a response time that settles at 4.37 seconds. These results suggest that large-scale networks can be effectively managed by decoupling them into smaller, autonomous swarms, each optimized for specific tasks. Furthermore, we compare the performance of the decentralized swarm architecture with that of a fully shared role model and show significant improvements in scalability and response times when roles are decoupled.

Nesrine Benchoubane, Nida Fidan, Gunes Karabulut Kurt, Enver Ozdemir (1/14/2025)

arXiv:2501.06363v1 Announce Type: new Abstract: In this paper, we investigate the rate-distortion-perception function (RDPF) of a source modeled by a Gaussian Process (GP) on a measure space $\Omega$ under mean squared error (MSE) distortion and squared Wasserstein-2 perception metrics. First, we show that the optimal reconstruction process is itself a GP, characterized by a covariance operator sharing the same set of eigenvectors of the source covariance operator. Similarly to the classical rate-distortion function, this allows us to formulate the RDPF problem in terms of the Karhunen-Lo\`eve transform coefficients of the involved GPs. Leveraging the similarities with the finite-dimensional Gaussian RDPF, we formulate an analytical tight upper bound for the RDPF for GPs, which recovers the optimal solution in the "perfect realism" regime. Lastly, in the case where the source is a stationary GP and $\Omega$ is the interval $[0, T]$ equipped with the Lebesgue measure, we derive an upper bound on the rate and the distortion for a fixed perceptual level and $T \to \infty$ as a function of the spectral density of the source process.

Giuseppe Serra, Photios A. Stavrou, Marios Kountouris (1/14/2025)

arXiv:2501.06653v1 Announce Type: new Abstract: Snapshot compressive imaging (SCI) refers to the recovery of three-dimensional data cubes, such as videos or hyperspectral images, from their two-dimensional projections, which are generated by a special encoding of the data with a mask. SCI systems commonly use binary-valued masks that follow certain physical constraints. Optimizing these masks subject to these constraints is expected to improve system performance. However, prior theoretical work on SCI systems focuses solely on independently and identically distributed (i.i.d.) Gaussian masks, which do not permit such optimization. On the other hand, existing practical mask optimizations rely on computationally intensive joint optimizations that provide limited insight into the role of masks and are expected to be sub-optimal due to the non-convexity and complexity of the optimization. In this paper, we analytically characterize the performance of SCI systems employing binary masks and leverage our analysis to optimize hardware parameters. Our findings provide a comprehensive and fundamental understanding of the role of binary masks, with both independent and dependent elements, and their optimization. We also present simulation results that confirm our theoretical findings and further illuminate different aspects of mask design.

Mengyu Zhao, Shirin Jalali (1/14/2025)
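
For context, the standard SCI forward model referenced above sums mask-modulated frames into a single two-dimensional snapshot. The sketch below uses i.i.d. Bernoulli(0.5) binary masks, i.e. an unoptimized baseline rather than the structured masks analyzed in the paper.

```python
import numpy as np

def sci_measurement(frames, masks):
    """Snapshot compressive imaging forward model: each frame of the data cube is
    multiplied elementwise by a binary mask and the results are summed into one
    2-D snapshot."""
    assert frames.shape == masks.shape          # (T, H, W)
    return np.sum(masks * frames, axis=0)

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64
frames = rng.random((T, H, W))
masks = rng.integers(0, 2, size=(T, H, W)).astype(float)   # i.i.d. binary masks
y = sci_measurement(frames, masks)
print(y.shape)   # (64, 64)
```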

arXiv:2501.06641v1 Announce Type: new Abstract: In 1969, J. Verhoeff provided the first examples of a decimal error detecting code using a single check digit to provide protection against all single, transposition and adjacent twin errors. The three codes he presented are 3-digit codes with 2 information digits. Existence of a 4-digit code would imply the existence of 10 such disjoint 3-digit codes. Apparently, not even a pair of such disjoint 3-digit codes is known. The code developed herein has the property that knowledge of any two digits is sufficient to determine the entire codeword, even if their positions are unknown. This fulfills Verhoeff's desire to eliminate "cyclic errors". Phonetic errors, where digit pairs of the forms X0 and 1X are interchanged, are also eliminated.

Larry A. Dunning (1/14/2025)

arXiv:2501.07154v1 Announce Type: new Abstract: Data from Internet of Things (IoT) sensors has emerged as a key contributor to decision-making processes in various domains. However, the quality of the data is crucial to the effectiveness of applications built on it, and assessment of the data quality is heavily context-dependent. Further, preserving the privacy of the data during quality assessment is critical in domains where sensitive data is prevalent. This paper proposes a novel framework for automated, objective, and privacy-preserving data quality assessment of time-series data from IoT sensors deployed in smart cities. We leverage custom, autonomously computable metrics that parameterise the temporal performance and adherence to a declarative schema document to achieve objectivity. Additionally, we utilise a trusted execution environment to create a "data-blind" model that ensures individual privacy, eliminates assessee bias, and enhances adaptability across data types. This paper describes this data quality assessment methodology for IoT sensors, emphasising its relevance within the smart-city context while addressing the growing need for privacy in the face of extensive data collection practices.

Novoneel Chakraborty, Abhay Sharma, Jyotirmoy Dutta, Hari Dilip Kumar (1/14/2025)
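
A minimal sketch of autonomously computable quality metrics checked against a declarative schema, in the spirit of the framework described in the abstract above. The schema fields and metric definitions here are hypothetical placeholders, and the privacy-preserving trusted-execution aspects are not modeled.

```python
import numpy as np

# Hypothetical declarative schema for one sensor stream (field names are placeholders).
SCHEMA = {"expected_interval_s": 60, "interval_tolerance_s": 5, "value_range": (0.0, 50.0)}

def quality_metrics(timestamps, values, schema=SCHEMA):
    """Timeliness: fraction of inter-arrival gaps within tolerance of the declared
    interval. Adherence: fraction of readings inside the declared value range."""
    gaps = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    timeliness = float(np.mean(
        np.abs(gaps - schema["expected_interval_s"]) <= schema["interval_tolerance_s"]))
    lo, hi = schema["value_range"]
    vals = np.asarray(values, dtype=float)
    adherence = float(np.mean((vals >= lo) & (vals <= hi)))
    return {"timeliness": timeliness, "adherence": adherence}

print(quality_metrics([0, 61, 118, 240], [21.0, 22.5, 19.8, 55.0]))
```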

arXiv:2501.06316v1 Announce Type: new Abstract: The accurate estimation of human activity in cities is one of the first steps towards understanding the structure of the urban environment. Human activities are highly granular and dynamic in spatial and temporal dimensions. Estimating confidence is crucial for decision-making in numerous applications such as urban management, retail, transport planning and emergency management. Detecting general trends in the flow of people between spatial locations is neither obvious nor easy due to the high cost of capturing these movements without compromising the privacy of those involved. This research intends to address this problem by examining the movement of people in a SmartStreetSensors network at a fine spatial and temporal resolution using a Transfer Entropy approach.

Roberto Murcio, Balamurugan Soundararaj (1/14/2025)
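
A plug-in estimator of transfer entropy between two discretized count series, of the kind that could be applied to flows between sensor locations as described above. The binning, one-step history length, and estimator choice are illustrative, not necessarily those used in the study.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in estimate (in bits) of transfer entropy T_{X->Y} with one-step
    history: sum over (y', y, x) of p(y', y, x) * log2[ p(y'|y, x) / p(y'|y) ]."""
    edges = lambda s: np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
    xd, yd = np.digitize(x, edges(x)), np.digitize(y, edges(y))
    yp, yc, xc = yd[1:], yd[:-1], xd[:-1]          # y_{t+1}, y_t, x_t
    n = len(yp)
    p_yyx, p_yx = Counter(zip(yp, yc, xc)), Counter(zip(yc, xc))
    p_yy, p_y = Counter(zip(yp, yc)), Counter(yc)
    te = 0.0
    for (a, b, c), m in p_yyx.items():
        te += (m / n) * np.log2(m * p_y[b] / (p_yx[(b, c)] * p_yy[(a, b)]))
    return te

# Toy check: y follows x with a one-step lag, so T_{X->Y} should exceed T_{Y->X}.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```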

arXiv:2501.07561v1 Announce Type: new Abstract: We propose a two-stage concatenated coding scheme for reliable and information-theoretically secure communication over intersymbol interference wiretap channels. Motivated by the theoretical coding strategies that achieve the secrecy capacity, our scheme integrates low-density parity-check (LDPC) codes in the outer stage, forming a nested structure of wiretap codes, with trellis codes in the inner stage to improve achievable secure rates. The trellis code is specifically designed to transform the uniformly distributed codewords produced by the LDPC code stage into a Markov process, achieving tight lower bounds on the secrecy capacity. We further estimate the information leakage rate of the proposed coding scheme using an upper bound. To meet the weak secrecy criterion, we optimize degree distributions of the irregular LDPC codes at the outer stage, essentially driving the estimated upper bound on the information leakage rate to zero.

Aria Nouri, Reza Asvadi, Jun Chen (1/14/2025)