cs.NI

arXiv:2501.06880v1 Announce Type: new Abstract: Online cloud gaming demands real-time, high-quality video transmission across variable wide-area networks (WANs). Neural-enhanced video transmission algorithms that employ super-resolution (SR) for quality enhancement have proven effective in challenging WAN environments. However, these SR-based methods require intensive fine-tuning over the whole video, making them infeasible for diverse online cloud gaming. To address this, we introduce River, a cloud gaming delivery framework built on the observation that video segment features in cloud gaming are typically repetitive and redundant. This creates a significant opportunity to reuse fine-tuned SR models, replacing minutes of fine-tuning latency with milliseconds of query latency. To realize this idea, we design a practical system that addresses several challenges, including model organization, online model scheduling, and transfer strategy. River first builds a content-aware encoder that fine-tunes SR models for diverse video segments and stores them in a lookup table. When delivering cloud gaming video streams online, River checks the video features and retrieves the most relevant SR models to enhance frame quality. Meanwhile, if no existing SR model performs well enough for some video segments, River further fine-tunes new models and updates the lookup table. Finally, to avoid the overhead of streaming model weights to clients, River uses a prefetching strategy that predicts the models most likely to be retrieved. Our evaluation on real video game streaming demonstrates that River can reduce redundant training overhead by 44% and improve the peak signal-to-noise ratio (PSNR) by 1.81 dB compared to state-of-the-art solutions. Practical deployment shows that River meets real-time requirements, achieving approximately 720p at 20 fps on mobile devices.

Shan Jiang, Zhenhua Han, Haisheng Tan, Xinyang Jiang, Yifan Yang, Xiaoxi Zhang, Hongqiu Ni, Yuqing Yang, Xiang-Yang Li (1/14/2025)
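
Below is a minimal sketch of the model-reuse idea described in the River abstract above: fine-tuned SR models are cached in a lookup table keyed by segment feature vectors, the closest match is reused when it is similar enough, and a new model is fine-tuned only otherwise. The class name, the similarity threshold, and the `fine_tune_fn` callable are illustrative assumptions, not River's actual interface.

```python
import numpy as np

class SRModelLookup:
    """Lookup table of fine-tuned SR models keyed by segment feature vectors."""

    def __init__(self, similarity_threshold=0.9):
        self.keys = []      # feature vectors of already-seen segments
        self.models = []    # the corresponding fine-tuned SR models
        self.threshold = similarity_threshold

    def query(self, features):
        """Return (model, similarity) of the closest stored segment, or (None, 0.0)."""
        if not self.keys:
            return None, 0.0
        keys = np.stack(self.keys)
        sims = keys @ features / (
            np.linalg.norm(keys, axis=1) * np.linalg.norm(features) + 1e-9)
        best = int(np.argmax(sims))
        return self.models[best], float(sims[best])

    def serve_segment(self, features, fine_tune_fn):
        """Reuse a cached model when similar enough, otherwise fine-tune a new one."""
        model, sim = self.query(features)
        if model is not None and sim >= self.threshold:
            return model                   # millisecond-scale lookup path
        model = fine_tune_fn(features)     # minute-scale fine-tuning path
        self.keys.append(features)
        self.models.append(model)
        return model
```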

arXiv:2501.06194v1 Announce Type: new Abstract: The use of soft computational techniques in power systems under the umbrella of machine learning is increasing and has been well received. In this paper, we first present a deep learning approach to find the optimal configuration for HetNet systems, using a very large number of radial configurations of a test system for training. We also study joint carrier/power allocation in multilayer hierarchical networks, ensuring the quality of experience for all subscribers while achieving optimal power efficiency. The proposed method uses an adaptive load-equilibrium model that aims to achieve "almost optimal" equity among all servers with respect to the key performance indicator. Unlike current model-based energy efficiency methods, we propose a joint resource allocation, energy efficiency, and flow control algorithm to solve common nonconvex and hierarchical optimization problems. By considering the allocation of continuous resources based on SLAs, we extend the proposed algorithm to a joint flow/power control and operational power optimization algorithm that achieves optimal energy efficiency while guaranteeing users' throughput limits. Simulation results show that the proposed controlled power/flow optimization approach can significantly increase energy efficiency compared to conventional designs by exploiting the network topology adjustment capability.

Davoud Yousefi, Hassan Yari, Farzad Osouli, Mohammad Ebrahimi, Somayeh Esmalifalak, Morteza Johari, Abbas Azarnezhad, Fatemeh Sadeghi, Rogayeh Mirzapour (1/14/2025)

arXiv:2501.06700v1 Announce Type: new Abstract: In this paper, we address a crucial but often overlooked issue in applying reinforcement learning (RL) to radio resource management (RRM) in wireless communications: the mismatch between the discounted-reward RL formulation and the undiscounted goal of wireless network optimization. To the best of our knowledge, we are the first to systematically investigate this discrepancy, starting with a discussion of the problem formulation followed by simulations that quantify the extent of the gap. To bridge this gap, we introduce the use of average-reward RL, a method that aligns more closely with the long-term objectives of RRM. We propose a new method, the Average Reward Off-policy Soft Actor-Critic (ARO SAC), which adapts the well-known Soft Actor-Critic algorithm to the average-reward framework. This new method achieves significant performance improvement: our simulation results demonstrate a 15% gain in system performance over the traditional discounted-reward RL approach, underscoring the potential of average-reward RL in enhancing the efficiency and effectiveness of wireless network optimization.

Kun Yang, Jing Yang, Cong Shen (1/14/2025)
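
To make the discounted-versus-average-reward distinction concrete, the sketch below contrasts the two temporal-difference targets: the average-reward form subtracts a running estimate of the long-run reward rate instead of discounting. This is a generic illustration of average-reward RL, not the authors' ARO SAC implementation, and the step sizes are placeholders.

```python
def discounted_td_target(r, q_next, gamma=0.99):
    """Standard discounted target: r + gamma * Q(s', a')."""
    return r + gamma * q_next

def average_reward_td_target(r, q_next, rho):
    """Differential (average-reward) target: r - rho + Q(s', a')."""
    return r - rho + q_next

def update_rho(rho, td_error, eta=1e-3):
    """Track the long-run average reward with a small step on the TD error."""
    return rho + eta * td_error

# toy usage
rho, q, q_next, r = 0.0, 1.0, 1.2, 0.5
target = average_reward_td_target(r, q_next, rho)
rho = update_rho(rho, td_error=target - q)
```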

arXiv:2501.06604v1 Announce Type: new Abstract: The increasing demand for high-speed and reliable wireless networks has driven advancements in technologies such as millimeter-wave and 5G radios, which require efficient planning and timely deployment of wireless access points. A critical tool in this process is the radio map, a graphical representation of radio-frequency signal strength that plays a vital role in optimizing overall network performance. However, existing methods for estimating radio maps face challenges, as they require extensive real-world data collection or computationally intensive ray-tracing analyses, which are costly and time-consuming. Inspired by the success of generative AI techniques in large language models and image generation, we explore their potential applications in wireless networks. In this work, we propose RM-Gen, a novel generative framework leveraging conditional denoising diffusion probabilistic models to synthesize radio maps using minimal and readily collected data. We then introduce an environment-aware method for selecting critical data pieces, enhancing the generative model's applicability and usability. Comprehensive evaluations demonstrate that RM-Gen achieves over 95% accuracy in generating radio maps for networks operating in the 60 GHz and sub-6 GHz frequency bands, outperforming the baseline GAN and pix2pix models. This approach offers a cost-effective, adaptable solution for various downstream network optimization tasks.

Xuanhao Luo, Zhizhen Li, Zhiyuan Peng, Mingzhe Chen, Yuchen Liu (1/14/2025)
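
The sketch below illustrates the forward (noising) process of a denoising diffusion model, the machinery RM-Gen conditions on sparse measurements; a denoiser trained to predict the added noise would then generate radio maps by reversing this process. The schedule parameters and array shapes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule; returns the cumulative product alpha_bar_t."""
    alphas = 1.0 - np.linspace(beta_start, beta_end, T)
    return np.cumprod(alphas)

def q_sample(x0, t, alpha_bar, rng):
    """Draw x_t ~ q(x_t | x_0) for the forward diffusion process."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps        # eps is the regression target for the denoiser

rng = np.random.default_rng(0)
alpha_bar = make_schedule()
radio_map = rng.standard_normal((64, 64))          # placeholder for a clean map
x_t, eps = q_sample(radio_map, t=500, alpha_bar=alpha_bar, rng=rng)
# A conditional denoiser would take (x_t, t, sparse measurements) and learn to predict eps.
```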

arXiv:2501.06242v1 Announce Type: new Abstract: 5G technology enhances industries with high-speed, reliable, low-latency communication, revolutionizing mobile broadband and supporting massive IoT connectivity. With the increasing complexity of applications on User Equipment (UE), offloading resource-intensive tasks to robust servers is essential for improving latency and speed. The 3GPP's Multi-access Edge Computing (MEC) framework addresses this challenge by processing tasks closer to the user, highlighting the need for an intelligent controller to optimize task offloading and resource allocation. This paper introduces a novel methodology to efficiently allocate both communication and computational resources among individual UEs. Our approach integrates two critical 5G service imperatives, Ultra-Reliable Low Latency Communication (URLLC) and Massive Machine Type Communication (mMTC), embedding them into the decision-making framework. Central to this approach is the use of Proximal Policy Optimization, providing a robust and efficient solution to the challenges posed by the evolving landscape of 5G technology. The proposed model is evaluated in a simulated 5G MEC environment. According to the reported simulation results, the model reduces processing time by 4% for URLLC users under strict latency constraints and decreases power consumption by 26% for mMTC users compared to existing baseline models. These improvements showcase the model's adaptability and superior performance in meeting diverse QoS requirements in 5G networks.

Alireza Ebrahimi, Fatemeh Afghah (1/14/2025)
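
For reference, the clipped surrogate loss at the core of Proximal Policy Optimization, which the paper applies to joint offloading and resource-allocation decisions, can be written as below. This is the standard PPO objective, not the paper's full model; the state and action design in the paper is richer.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective (negated so it can be minimized)."""
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))

# toy batch: log-probs under the new and old policies plus advantage estimates
loss = ppo_clip_loss(np.array([-1.0, -0.5]), np.array([-1.1, -0.7]), np.array([0.3, -0.2]))
```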

arXiv:2501.06243v1 Announce Type: new Abstract: Autonomous agents represent an inevitable evolution of the internet. Current agent frameworks do not embed a standard protocol for agent-to-agent interaction, leaving existing agents isolated from their peers. As intellectual property is the native asset ingested by and produced by agents, a true agent economy requires equipping agents with a universal framework for engaging in binding contracts with each other, including the exchange of valuable training data, personality, and other forms of intellectual property. A purely agent-to-agent transaction layer would transcend the need for human intermediation in multi-agent interactions. The Agent Transaction Control Protocol for Intellectual Property (ATCP/IP) introduces a trustless framework for exchanging IP between agents via programmable contracts, enabling agents to initiate, trade, borrow, and sell agent-to-agent contracts on the Story blockchain network. These contracts not only provide auditable on-chain execution but also contain a legal wrapper that allows agents to express and enforce their actions in the off-chain legal setting, creating legal personhood for agents. Via ATCP/IP, agents can autonomously sell their training data to other agents, license confidential or proprietary information, and collaborate on content based on their unique skills, all of which constitutes an emergent knowledge economy.

Andrea Muttoni, Jason Zhao (1/14/2025)

arXiv:2501.06191v1 Announce Type: new Abstract: Deep Learning (DL) modeling has been a recent topic of interest. With the accelerating need to embed Deep Learning Networks (DLNs) in Internet of Things (IoT) applications, many DL optimization techniques have been developed to enable applying DL to IoT devices. However, despite the plethora of DL optimization techniques, there is always a trade-off between accuracy, latency, and cost. Moreover, there are no specific criteria for selecting the best optimization model for a specific scenario. Therefore, this research aims to provide a DL optimization model that eases the selection and reuse of DLNs on IoT devices. In addition, the research presents an initial design for a DL optimization model management framework. This framework would help organizations choose the optimal DL optimization model that maximizes performance without sacrificing quality. The research adds to IS design science knowledge as well as to industry by providing insights that help IT managers apply DLNs to IoT devices such as machines and robots.

Samaa Elnagar, Kweku-Muata Osei-Bryson (1/14/2025)

arXiv:2501.06688v1 Announce Type: new Abstract: Consider a network where a wireless base station (BS) connects multiple source-destination pairs. Packets from each source are generated according to a renewal process and are enqueued in a single-packet queue that stores only the freshest packet. The BS decides, at each time slot, which sources to schedule. Selected sources transmit their packet to the BS via unreliable links. Successfully received packets are forwarded to corresponding destinations. The connection between the BS and destinations is assumed unreliable and delayed. Information freshness is captured by the Age of Information (AoI) metric. The objective of the scheduling decisions is to leverage the delayed and unreliable AoI knowledge to keep the information fresh. In this paper, we derive a lower bound on the AoI achievable by any scheduling policy. Then, we develop an optimal randomized policy for arbitrary packet generation processes. Next, we develop minimum mean square error estimators of the AoI and system times, and a Max-Weight Policy that leverages these estimators. We evaluate the AoI of the Optimal Randomized Policy and the Max-Weight Policy both analytically and through simulations. The numerical results suggest that the Max-Weight Policy with estimation outperforms the Optimal Randomized Policy even when the BS has no AoI knowledge.

Zhuoyi Zhao, Igor Kadota (1/14/2025)
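
A generic illustration of a Max-Weight scheduler driven by estimated AoI: each slot, the BS picks the source whose successful transmission is expected to reduce AoI the most, weighted by the link success probability. The exact weight in the paper is built from its MMSE estimators of AoI and system times, so the formula below is an assumption for illustration only.

```python
import numpy as np

def max_weight_schedule(aoi_est, sys_time_est, succ_prob):
    """Pick the source with the largest reliability-weighted expected AoI drop."""
    expected_drop = np.maximum(aoi_est - sys_time_est, 0.0)
    return int(np.argmax(succ_prob * expected_drop))

aoi_est = np.array([12.0, 5.0, 30.0])       # estimated AoI at the destinations
sys_time_est = np.array([2.0, 1.0, 4.0])    # estimated age of the queued packets
succ_prob = np.array([0.9, 0.6, 0.5])       # per-source link success probabilities
print(max_weight_schedule(aoi_est, sys_time_est, succ_prob))   # -> 2
```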

arXiv:2501.06236v1 Announce Type: new Abstract: Modeling radio propagation is essential for wireless network design and performance optimization. Traditional methods rely on physics models of radio propagation, which can be inaccurate or inflexible. In this work, we propose using graph neural networks to learn radio propagation behaviors directly from real-world network data. Our approach converts the radio propagation environment into a graph representation, with nodes corresponding to locations and edges representing spatial and ray-tracing relationships between locations. The graph is generated by converting images of the environment into a graph structure, with specific relationships between nodes. The model is trained on this graph representation, using sensor measurements as target data. We demonstrate that the graph neural network, which learns to predict radio propagation directly from data, achieves competitive performance compared to traditional heuristic models. This data-driven approach outperforms classic numerical solvers in terms of both speed and accuracy. To the best of our knowledge, we are the first to apply graph neural networks to real-world radio propagation data to generate coverage maps, enabling generative models of signal propagation with point measurements only.

Adrien Bufort, Laurent Lebocq, Stefan Cathabard (1/14/2025)
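
The following sketch shows one way to convert an environment image into the kind of graph the abstract describes, with nodes for grid locations and edges for spatial adjacency; ray-tracing edges from the transmitter would be added analogously. The grid encoding and feature choice are assumptions, not the authors' exact construction.

```python
import numpy as np

def grid_to_graph(occupancy):
    """occupancy: 2D array with 1 = obstacle, 0 = free space."""
    h, w = occupancy.shape
    edges = []
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):          # right and down neighbours
                nr, nc = r + dr, c + dc
                if nr < h and nc < w:
                    edges.append((r * w + c, nr * w + nc))
    node_features = occupancy.reshape(-1, 1).astype(float)   # per-node obstacle flag
    edge_index = np.array(edges).T                            # shape (2, num_edges)
    return node_features, edge_index

occ = np.zeros((4, 4))
occ[1, 1] = 1                                                 # one obstacle cell
x, edge_index = grid_to_graph(occ)
```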

arXiv:2501.06410v1 Announce Type: new Abstract: The low-altitude economy (LAE), driven by unmanned aerial vehicles (UAVs) and other aircraft, has revolutionized fields such as transportation, agriculture, and environmental monitoring. In the upcoming sixth-generation (6G) era, UAV-assisted mobile edge computing (MEC) is particularly crucial in challenging environments such as mountainous or disaster-stricken areas. The computation task offloading problem is one of the key issues in UAV-assisted MEC, primarily addressing the trade-off between minimizing the task delay and the energy consumption of the UAV. In this paper, we consider a UAV-assisted MEC system where the UAV carries the edge servers to facilitate task offloading for ground devices (GDs), and formulate a calculation delay and energy consumption multi-objective optimization problem (CDECMOP) to simultaneously improve the performance and reduce the cost of the system. Then, by modeling the formulated problem as a multi-objective Markov decision process (MOMDP), we propose a multi-objective deep reinforcement learning (DRL) algorithm within an evolutionary framework to dynamically adjust the weights and obtain non-dominated policies. Moreover, to ensure stable convergence and improve performance, we incorporate a target distribution learning (TDL) algorithm. Simulation results demonstrate that the proposed algorithm can better balance multiple optimization objectives and obtain superior non-dominated solutions compared to other methods.

Geng Sun, Weilong Ma, Jiahui Li, Zemin Sun, Jiacheng Wang, Dusit Niyato, Shiwen Mao (1/14/2025)
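
As a toy illustration of the delay/energy trade-off above, a weight vector can scalarize the two objectives into a single reward; the paper's evolutionary framework varies such weights dynamically to obtain non-dominated policies, whereas the values below are placeholders.

```python
def scalarized_reward(delay, energy, w_delay, w_energy):
    """Both objectives are costs, so the reward is their negative weighted sum."""
    return -(w_delay * delay + w_energy * energy)

# sweeping weight settings traces different points of the delay/energy trade-off
for w_delay, w_energy in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    r = scalarized_reward(delay=0.8, energy=1.5, w_delay=w_delay, w_energy=w_energy)
    print(w_delay, w_energy, r)
```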

arXiv:2501.06428v1 Announce Type: new Abstract: This research investigates how CDNs (Content Delivery Networks) can improve the digital experience, as consumers increasingly expect fast, efficient, and effortless access to online resources. CDNs play a crucial role in reducing latency, enhancing scalability, and optimizing delivery mechanisms, which is evident across various platforms and regions. The study focuses on key CDN concerns, such as foundational and modern CDN architectures, edge computing, hybrid CDNs, and multi-CDN strategies. It also explores performance-enhancing topics, including caching, load balancing, and the novel features of HTTP/3 and QUIC. Current trends, such as integrating CDNs with 5G networks, serverless architectures, and AI-driven traffic management, are examined to demonstrate how CDN technology is likely to evolve. The study also addresses challenges related to security, cost, and global regulations. Practical examples from the e-commerce, streaming, and gaming industries highlight how enhanced CDNs are transforming these sectors. The conclusions emphasize the need to evolve CDN strategies to meet growing user expectations and adapt to the rapidly changing digital landscape. Additionally, the research identifies future research opportunities, particularly in exploring the impact of QC, the enhancement of AI services, and the sustainability of CDN solutions. Overall, the study situates architectural design, performance strategies, and emerging trends to address gaps and create a more efficient and secure approach for improving digital experiences.

Anuj Tyagi (1/14/2025)

arXiv:2501.06446v1 Announce Type: new Abstract: A large number of heterogeneous wireless networks share the unlicensed spectrum designated as the ISM (Industrial, Scientific, and Medical) radio band. These networks do not adhere to a common medium access rule and differ considerably in their specifications. As a result, when concurrently active, they cause cross-technology interference (CTI) on each other. The effect of this interference is not reciprocal: networks using high transmission power and advanced transmission schemes often cause disproportionate disruptions to those with modest communication and computation resources. CTI corrupts packets, incurs packet retransmission costs, introduces end-to-end latency and jitter, and makes networks unpredictable. The purpose of this paper is to closely examine its impact on low-power networks based on the IEEE 802.15.4 standard. It discusses the latest developments in CTI detection, coexistence, and avoidance mechanisms, as well as messaging schemes that attempt to enable heterogeneous networks to communicate directly with one another to coordinate packet transmission and channel assignment.

Zegeye Mekasha Kidane, Waltenegus Dargie (1/14/2025)

arXiv:2501.06464v1 Announce Type: new Abstract: The expansion of the Internet of Things (IoT) has led to significant challenges in wireless data harvesting, dissemination, and energy management due to the massive volumes of data generated by IoT devices. These challenges are exacerbated by data redundancy arising from spatial and temporal correlations. To address these issues, this paper proposes a novel data-driven collaborative beamforming (CB)-based communication framework for IoT networks. Specifically, the framework integrates CB with an overlap-based multi-hop routing protocol (OMRP) to enhance data transmission efficiency while mitigating energy consumption and addressing hot spot issues in remotely deployed IoT networks. Building on OMRP's aggregation of data to a specific node, we formulate a node selection problem for the CB stage, with the objective of optimizing uplink transmission energy consumption. Given the complexity of the problem, we introduce a softmax-based proximal policy optimization with long short-term memory (SoftPPO-LSTM) algorithm to intelligently select CB nodes and improve transmission efficiency. Simulation results validate the effectiveness of the proposed OMRP and SoftPPO-LSTM methods, demonstrating significant improvements over existing routing protocols and node selection strategies. The results also reveal that combining OMRP with the SoftPPO-LSTM method effectively mitigates hot spot problems and offers superior performance compared to traditional strategies.

Yangning Li, Hui Kang, Jiahui Li, Geng Sun, Zemin Sun, Jiacheng Wang, Changyuan Zhao, Dusit Niyato (1/14/2025)
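
A small sketch of the softmax-based node selection idea: a policy scores candidate IoT nodes, a softmax turns the scores into selection probabilities, and the top-k nodes form the collaborative-beamforming array. The scoring network (an LSTM-based PPO policy in the paper) is abstracted away here, and the scores below are made up.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def select_cb_nodes(scores, k):
    """Return the indices of the k nodes with the highest selection probability."""
    probs = softmax(scores)
    return np.argsort(probs)[-k:][::-1]

scores = np.array([0.2, 1.5, -0.3, 0.9, 2.1])   # policy scores for candidate nodes
print(select_cb_nodes(scores, k=3))              # e.g. [4 1 3]
```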

arXiv:2501.06526v1 Announce Type: new Abstract: Unmanned aerial vehicle (UAV)-based integrated sensing and communication (ISAC) systems are poised to revolutionize next-generation wireless networks by enabling simultaneous sensing and communication (S&C). This survey comprehensively reviews UAV-ISAC systems, highlighting foundational concepts, key advancements, and future research directions. We explore recent advancements in UAV-based ISAC systems from various perspectives and objectives, including advanced channel estimation (CE), beam tracking, and system throughput optimization under joint S&C constraints. Additionally, we examine weighted sum rate (WSR) and sensing trade-offs, delay and age of information (AoI) minimization, energy efficiency (EE), and security enhancement. These applications highlight the potential of UAV-based ISAC systems to improve spectrum utilization, enhance communication reliability, reduce latency, and optimize energy consumption across diverse domains, including smart cities, disaster relief, and defense operations. The survey also features summary tables for comparative analysis of existing methodologies, emphasizing performance, limitations, and effectiveness in addressing various challenges. By synthesizing recent advancements and identifying open research challenges, this survey aims to be a valuable resource for developing efficient, adaptive, and secure UAV-based ISAC systems.

Manzoor Ahmed, Ali Arshad Nasir, Mudassir Masood, Kamran Ali Memon, Khurram Karim Qureshi, Feroz Khan, Wali Ullah Khan, Fang Xu, Zhu Han (1/14/2025)

arXiv:2501.06244v1 Announce Type: new Abstract: With the growing demand for Earth observation, it is important to provide reliable real-time remote sensing inference services that meet low-latency requirements. The Space Computing Power Network (Space-CPN) offers a promising solution by providing onboard computing and extensive coverage capabilities for real-time inference. This paper presents a deployment framework for remote sensing artificial intelligence applications on Low Earth Orbit satellite constellations, designed to achieve real-time inference performance. The framework employs a microservice architecture, decomposing monolithic inference tasks into reusable, independent modules to address high latency and resource heterogeneity. This distributed approach enables optimized microservice deployment, minimizing resource utilization while meeting quality-of-service and functional requirements. We introduce Robust Optimization into the deployment problem to address data uncertainty. Additionally, we model the Robust Optimization problem as a Partially Observable Markov Decision Process and propose a robust reinforcement learning algorithm to handle the semi-infinite Quality of Service constraints. Our approach yields sub-optimal solutions that minimize accuracy loss while maintaining acceptable computational costs. Simulation results demonstrate the effectiveness of our framework.

Zhiyong Yu, Yuning Jiang, Xin Liu, Yuanming Shi, Chunxiao Jiang, Linling Kuang (1/14/2025)

arXiv:2501.06205v1 Announce Type: new Abstract: The evolution of Artificial Intelligence (AI) and its subset Deep Learning (DL) has profoundly impacted numerous domains, including autonomous driving. The integration of autonomous driving in military settings reduces human casualties and enables precise and safe execution of missions in hazardous environments, while allowing for reliable logistics support without the risks associated with fatigue-related errors. However, relying solely on autonomous driving requires an advanced decision-making model that is adaptable and optimal in any situation. Considering the presence of numerous interconnected autonomous vehicles in mission-critical scenarios, Ultra-Reliable Low Latency Communication (URLLC) is vital for ensuring seamless coordination, real-time data exchange, and instantaneous response to dynamic driving environments. The advent of 6G strengthens the Internet of Automated Defense Vehicles (IoADV) concept within the realm of Internet of Military Defense Things (IoMDT) by enabling robust connectivity, crucial for real-time data exchange, advanced navigation, and enhanced safety features through IoADV interactions. Another critical advancement in this space is the use of pre-trained Generative Large Language Models (LLMs) for decision-making and communication optimization in autonomous driving. Hence, this work presents opportunities and challenges with a vision of realizing the full potential of these technologies in critical defense applications, especially through the advancement of IoADV and its role in enhancing autonomous military operations.

Murat Arda Onsu, Poonam Lohan, Burak Kantarci (1/14/2025)

arXiv:2501.06309v1 Announce Type: new Abstract: In our earlier work, Network-Centric Optimal Hybrid Mobility for IPv6 wireless sensor networks, we proposed controlling the mobility of sensor nodes from an external network. It was a major improvement on earlier works such as Cluster Sensor Proxy Mobile IPv6 (CSPMIPv6) and Network of Proxies (NoP). In this work, the network-centric optimal hybrid mobility scenario is used to detect and fill sensing holes that occur as a result of damaged or energy-depleted sensing nodes. Various sensor network self-healing, recovery, and deployment algorithms, such as the Enhanced Virtual Forces Algorithm with Boundary Forces (EVFA-B), the Coverage-Aware Sensor Automation protocol (CASA), the Sensor Self-Organizing Algorithm (SSOA), VorLag, and the use of anchor and relay nodes, are reviewed. With node density thresholds set for various scenarios, the recovery efficiency was measured using various parameters. Comparably, our method provides the most efficient node relocation and self-healing mechanism for sensor networks. Compared to the Sensor Self-Organizing Algorithm (SSOA), Hybrid Mobile IP showed superior coverage, a shorter recovery period, lower computational cost, and lower energy depletion. With processing and mobility costs shifted to the external network, Hybrid Mobile IP extends the life span of the network.

Kwadwo Asante, Yaw Marfo Missah, Frimpong Twum, Michael Asante (1/14/2025)

arXiv:2501.06637v1 Announce Type: new Abstract: Terahertz communications are envisioned as a key enabler for 6G networks. The abundant spectrum available at such ultra-high frequencies has the potential to increase network capacity to huge data rates. However, these frequencies are severely affected by blockages, to the point of disrupting ongoing communications. In this paper, we elaborate on the relevance of predicting visibility between users and access points (APs) to improve the performance of THz-based networks by minimizing blockages, that is, maximizing network availability, while keeping reconfiguration overhead low. We propose a novel approach to address this problem, combining a neural network (NN) that predicts future user-AP visibility probability with a probability threshold for AP reselection to avoid unnecessary reconfigurations. Our experimental results demonstrate that current state-of-the-art handover mechanisms based on received signal strength are not adequate for THz communications, since they are ill-suited to handle hard blockages. Our proposed NN-based solution significantly outperforms them, demonstrating the promise of our strategy as a research direction.

Pablo Fondo-Ferreiro, Cristina López-Bravo, Francisco Javier González-Castaño, Felipe Gil-Castiñeira, David Candal-Ventureira (1/14/2025)
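
The reselection rule described in the abstract can be summarized as: keep the current AP while its predicted visibility probability stays above a threshold, otherwise switch to the AP with the highest predicted probability. The sketch below assumes the NN predictor is available as a mapping from AP to probability; the threshold value and AP names are illustrative.

```python
def select_ap(current_ap, predicted_visibility, threshold=0.5):
    """predicted_visibility maps each AP id to its predicted visibility probability."""
    if predicted_visibility.get(current_ap, 0.0) >= threshold:
        return current_ap                     # avoid an unnecessary reconfiguration
    return max(predicted_visibility, key=predicted_visibility.get)

probs = {"ap1": 0.35, "ap2": 0.80, "ap3": 0.55}
print(select_ap("ap1", probs))                # -> "ap2"
```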

arXiv:2501.06334v1 Announce Type: new Abstract: Employing wireless systems with dual sensing and communication functionalities is becoming critical in the next generation of wireless networks. In this paper, we propose a robust design for over-the-air federated edge learning (OTA-FEEL) that leverages sensing capabilities at the parameter server (PS) to mitigate the impact of target echoes on the analog model aggregation. We first derive novel expressions for the Cramer-Rao bound of the target response and the mean squared error (MSE) of the estimated global model to measure radar sensing and model aggregation quality, respectively. Then, we develop a joint scheduling and beamforming framework that optimizes the OTA-FEEL performance while keeping the sensing and communication quality, determined respectively in terms of the Cramer-Rao bound and the achievable downlink rate, in a desired range. The resulting scheduling problem reduces to a combinatorial mixed-integer nonlinear programming problem (MINLP). We develop a low-complexity hierarchical method based on the matching pursuit algorithm widely used for sparse recovery in the compressed sensing literature. The proposed algorithm uses a step-wise strategy to omit the least effective devices in each iteration based on a metric that captures both the aggregation and sensing quality of the system. It further invokes an alternating optimization scheme to iteratively update the downlink beamforming and uplink post-processing by marginally optimizing each in every iteration. Convergence and complexity analyses of the proposed algorithm are presented. Numerical evaluations on the MNIST and CIFAR-10 datasets demonstrate the effectiveness of our proposed algorithm. The results show that by leveraging accurate sensing, the target echoes on the uplink signal can be effectively suppressed, ensuring that the quality of model aggregation remains intact despite the interference.

Saba Asaad, Ping Wang, Hina Tabassum (1/14/2025)
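
A rough sketch of the step-wise device-omission strategy described above: at each iteration the scheduler drops the device whose removal hurts a combined aggregation/sensing score the least, stopping when further removals only degrade it. The `score_fn` stands in for the paper's MSE and Cramer-Rao based metric and is purely illustrative.

```python
def greedy_device_selection(devices, score_fn, min_devices):
    """Iteratively drop the device whose removal hurts the combined score the least."""
    selected = set(devices)
    while len(selected) > min_devices:
        best_drop, best_score = None, float("-inf")
        for d in selected:
            s = score_fn(selected - {d})
            if s > best_score:
                best_drop, best_score = d, s
        if score_fn(selected) >= best_score:
            break                              # any removal would only hurt; stop
        selected.remove(best_drop)
    return selected

# toy score that prefers keeping exactly three devices
chosen = greedy_device_selection({0, 1, 2, 3, 4}, lambda s: -abs(len(s) - 3), min_devices=2)
print(chosen)
```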

arXiv:2501.06943v1 Announce Type: new Abstract: Open radio access networks (e.g., O-RAN) facilitate fine-grained control (e.g., the near-RT RIC) in next-generation networks, necessitating advanced AI/ML techniques for handling online resource orchestration in real time. However, existing approaches can hardly adapt to time-evolving network dynamics in network slicing, leading to significant online performance degradation. In this paper, we propose AdaSlicing, a new adaptive network slicing system that learns online how to orchestrate virtual resources while efficiently adapting to continual network dynamics. The AdaSlicing system includes a new soft-isolated RAN virtualization framework and a novel AdaOrch algorithm. We design the AdaOrch algorithm by integrating AI/ML techniques (i.e., Bayesian learning agents) and optimization methods (i.e., an ADMM coordinator). We design the soft-isolated RAN virtualization to improve the virtual resource utilization of slices while assuring isolation among virtual resources at runtime. We implement AdaSlicing on an O-RAN compliant network testbed using OpenAirInterface RAN, Open5GS Core, and the FlexRIC near-RT RIC, with an Ettus USRP B210 SDR. With extensive network experiments, we demonstrate that AdaSlicing substantially outperforms state-of-the-art works, with a 64.2% cost reduction and a 45.5% normalized performance improvement, verifying its high adaptability, scalability, and assurance.

Ming Zhao, Yuru Zhang, Qiang Liu, Ahan Kak, Nakjung Choi (1/14/2025)