cs.NI
arXiv:2501.01398v1 Announce Type: new Abstract: Augmented reality applications are bitrate-intensive, delay-sensitive, and computationally demanding. To support them, mobile edge computing systems need to carefully manage both their networking and computing resources. To this end, we present a proof-of-concept resource management scheme that adapts the bandwidth at the base station and the GPU frequency at the edge to efficiently fulfill round-trip delay constraints. Resource adaptation is performed using a Multi-Armed Bandit algorithm that accounts for the monotonic relationship between allocated resources and performance. We evaluate our scheme by experimentation on an OpenAirInterface 5G testbed where the considered application is OpenRTiST. The results indicate that our resource management scheme can substantially reduce both bandwidth usage and power consumption while delivering high quality of service. Overall, this work demonstrates that intelligent resource control can potentially establish systems that are not only more efficient but also more sustainable.
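To make the bandit-based adaptation concrete, here is a minimal, hypothetical sketch of UCB1 selection over joint (bandwidth, GPU frequency) arms. The arm values, the toy delay model in measure_rtt, the reward shaping, and the 50 ms budget are illustrative assumptions, not details from the paper (which additionally exploits the monotonic resource-performance relationship).

```python
# Minimal UCB1-style bandit over hypothetical (bandwidth, GPU-frequency) arms.
# Arm values, the delay model, and the reward shaping are illustrative only.
import math
import random

ARMS = [(bw, f) for bw in (20, 40, 60, 80)      # MHz (assumed)
                for f in (600, 900, 1200)]      # GPU MHz (assumed)

counts = [0] * len(ARMS)
rewards = [0.0] * len(ARMS)

def measure_rtt(bw, freq):
    """Placeholder for a round-trip-delay measurement on the testbed."""
    return 80.0 / bw + 30000.0 / freq + random.gauss(0, 1)  # ms, toy model

def reward(bw, freq, rtt, budget=50.0):
    # Prefer arms that meet the delay budget while using few resources.
    if rtt > budget:
        return 0.0
    return 1.0 - 0.5 * (bw / 80 + freq / 1200) / 2

for t in range(1, 501):
    # UCB1 arm selection: play each arm once, then exploit + explore.
    if t <= len(ARMS):
        a = t - 1
    else:
        a = max(range(len(ARMS)),
                key=lambda i: rewards[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]))
    bw, freq = ARMS[a]
    r = reward(bw, freq, measure_rtt(bw, freq))
    counts[a] += 1
    rewards[a] += r

best = max(range(len(ARMS)), key=lambda i: rewards[i] / max(counts[i], 1))
print("selected arm (bandwidth MHz, GPU MHz):", ARMS[best])
```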
arXiv:2412.18990v2 Announce Type: replace Abstract: This study focuses on a method for detecting and classifying distributed denial of service (DDoS) attacks, such as SYN Flooding, ACK Flooding, HTTP Flooding, and UDP Flooding, using neural networks. Machine learning, particularly neural networks, is highly effective in detecting malicious traffic. A dataset containing normal traffic and various DDoS attacks was used to train a neural network model with a 24-106-5 architecture. The model achieved high Accuracy (99.35%), Precision (99.32%), Recall (99.54%), and F-score (0.99) in the classification task. All major attack types were correctly identified. The model was further tested in the lab using virtual infrastructures to generate normal and DDoS traffic. The results showed that the model can accurately classify attacks under near-real-world conditions, achieving 95.05% accuracy and balanced F-scores across all attack types. This confirms that neural networks are an effective tool for detecting DDoS attacks in modern information security systems.
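The reported 24-106-5 architecture maps directly to a fully connected network with 24 input features, 106 hidden units, and 5 output classes. The sketch below shows one way to express it in PyTorch; the activation function, optimizer, and learning rate are assumptions, since the abstract does not specify them.

```python
# Sketch of the reported 24-106-5 architecture (24 traffic features in,
# 5 classes out: normal + four flooding attacks). Hyperparameters are assumed.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(24, 106),   # input layer -> hidden layer
    nn.ReLU(),
    nn.Linear(106, 5),    # hidden layer -> class logits
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training step on random data in place of the real traffic dataset.
x = torch.randn(32, 24)
y = torch.randint(0, 5, (32,))
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"toy batch loss: {loss.item():.4f}")
```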
arXiv:2501.01271v1 Announce Type: new Abstract: This paper investigates the inherent trade-off between energy efficiency (EE) and spectral efficiency (SE) in distributed massive-MIMO (D-mMIMO) systems. Optimizing the EE and SE together is crucial as increasing spectral efficiency often leads to higher energy consumption. Joint power allocation and AP-UE association are pivotal in this trade-off analysis because they directly influence both EE and SE. We address the gap in existing literature where the EE-SE trade-off has been analyzed but not optimized in the context of D-mMIMO systems. The focus of this study is to maximize the EE with constraints on uplink sum SE through judicious power allocation and AP-UE association, essential for enhancing network throughput. Numerical simulations are performed to validate the proposed model, exploring the impacts of AP-UE association and power allocation on the EE-SE trade-off in uplink D-mMIMO scenarios.
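For readers unfamiliar with why the two metrics pull in opposite directions, the textbook relation below (not the paper's exact formulation) makes the coupling explicit.

```latex
% Textbook EE-SE relation: with bandwidth B [Hz], uplink sum spectral
% efficiency SE [bit/s/Hz], and total consumed power P_total [W],
\mathrm{EE} \;=\; \frac{B \cdot \mathrm{SE}}{P_{\mathrm{total}}} \quad [\text{bit/J}],
% so raising SE via higher transmit power also raises P_total and can lower EE,
% which is the trade-off the paper optimizes under a sum-SE constraint.
```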
arXiv:2501.01293v1 Announce Type: new Abstract: Recently, the increasing deployment of LEO satellite systems has enabled various space analytics (e.g., crop and climate monitoring), which heavily rely on advancements in deep learning (DL). However, the intermittent connectivity between LEO satellites and the ground station (GS) significantly hinders the timely transmission of raw data to the GS for centralized learning, while scaled-up DL models hamper distributed learning on resource-constrained LEO satellites. Though split learning (SL) can be a potential solution to these problems by partitioning a model and offloading the primary training workload to the GS, the labor-intensive labeling process remains an obstacle, with intermittent connectivity and data heterogeneity being other challenges. In this paper, we propose LEO-Split, a semi-supervised (SS) SL design tailored for satellite networks to combat these challenges. Leveraging SS learning to handle (labeled) data scarcity, we construct an auxiliary model to tackle training failures during satellite-GS non-contact time. Moreover, we propose a pseudo-labeling algorithm to rectify data imbalances across satellites. Lastly, an adaptive activation interpolation scheme is devised to prevent the overfitting of server-side sub-model training at the GS. Extensive experiments with real-world LEO satellite traces (e.g., Starlink) demonstrate that our LEO-Split framework achieves superior performance compared to state-of-the-art benchmarks.
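As a rough illustration of the pseudo-labeling idea, the sketch below applies a confidence threshold and a per-class cap so that over-represented classes cannot dominate; the threshold, cap, and selection rule are assumptions, and LEO-Split's actual rectification algorithm may differ.

```python
# Hedged sketch of confidence-thresholded pseudo-labeling with per-class
# capping to counter data imbalance; the real LEO-Split rule may differ.
import torch
import torch.nn.functional as F

def pseudo_label(logits, threshold=0.9, per_class_cap=256):
    """Return indices and pseudo-labels for confident, class-balanced samples."""
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold
    selected = []
    for c in labels.unique():
        idx = torch.nonzero(keep & (labels == c), as_tuple=False).flatten()
        # Cap each class so over-represented classes do not dominate training.
        selected.append(idx[:per_class_cap])
    idx = torch.cat(selected)
    return idx, labels[idx]

# Usage with dummy unlabeled-batch logits (1024 samples, 10 classes).
logits = torch.randn(1024, 10)
idx, y_hat = pseudo_label(logits)
print(f"pseudo-labeled {idx.numel()} of 1024 samples")
```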
arXiv:2407.11155v2 Announce Type: replace Abstract: Multi-Access Edge Computing (MEC) is widely recognized as an essential enabler for applications that necessitate minimal latency. However, the dropped task ratio metric has not been studied thoroughly in the literature. Neglecting this metric can potentially reduce the system's capability to effectively manage tasks, leading to an increase in the number of eliminated or unprocessed tasks. This paper presents a 5G-MEC task offloading scenario with a focus on minimizing the dropped task ratio, computational latency, and communication latency. We employ Mixed Integer Linear Programming (MILP), Particle Swarm Optimization (PSO), and Genetic Algorithm (GA) to optimize the latency and the dropped task ratio. We analyze how the number of tasks and User Equipment (UE) impacts the dropped task ratio and the latency. The tasks generated by UEs are classified into two categories: urgent tasks and non-urgent tasks. The UEs with urgent tasks are prioritized in processing to ensure a zero dropped task ratio. Our proposed method outperforms the baseline methods, First Come First Serve (FCFS) and Shortest Task First (STF), in the context of 5G-MEC task offloading. Under the MILP-based approach, the latency is reduced by approximately 55% compared to GA and 35% compared to PSO. The dropped task ratio under the MILP-based approach is reduced by approximately 70% compared to GA and by 40% compared to PSO.
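A toy version of the MILP formulation helps fix ideas: binary variables assign each task to a server or mark it dropped, and the objective trades processing latency against a drop penalty. The sketch below uses PuLP with made-up processing times, capacities, and penalty; it is not the paper's exact model.

```python
# Toy MILP for assigning tasks to edge servers, minimizing total latency
# while allowing drops at a penalty; a sketch only, not the paper's model.
import pulp

tasks = range(6)
servers = range(2)
proc = {(t, s): 5 + 2 * t + 3 * s for t in tasks for s in servers}  # ms, assumed
capacity = {s: 2 for s in servers}        # max tasks per server (assumed)
drop_penalty = 100                        # ms-equivalent cost per dropped task

prob = pulp.LpProblem("task_offloading", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, servers), cat="Binary")  # assignment
d = pulp.LpVariable.dicts("d", tasks, cat="Binary")             # dropped flag

prob += pulp.lpSum(proc[t, s] * x[t][s] for t in tasks for s in servers) \
        + drop_penalty * pulp.lpSum(d[t] for t in tasks)

for t in tasks:
    # Each task is either assigned to exactly one server or dropped.
    prob += pulp.lpSum(x[t][s] for s in servers) + d[t] == 1
for s in servers:
    prob += pulp.lpSum(x[t][s] for t in tasks) <= capacity[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("dropped tasks:", [t for t in tasks if d[t].value() == 1])
```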
arXiv:2412.04074v3 Announce Type: replace Abstract: This paper studies an integrated sensing and communications (ISAC) system for low-altitude economy (LAE), where a ground base station (GBS) provides communication and navigation services for authorized unmanned aerial vehicles (UAVs), while sensing the low-altitude airspace to monitor the unauthorized mobile target. The expected communication sum-rate over a given flight period is maximized by jointly optimizing the beamforming at the GBS and UAVs' trajectories, subject to the constraints on the average signal-to-noise ratio requirement for sensing, the flight mission and collision avoidance of UAVs, as well as the maximum transmit power at the GBS. Typically, this is a sequential decision-making problem with the given flight mission. Thus, we transform it to a specific Markov decision process (MDP) model called episode task. Based on this modeling, we propose a novel LAE-oriented ISAC scheme, referred to as Deep LAE-ISAC (DeepLSC), by leveraging the deep reinforcement learning (DRL) technique. In DeepLSC, a reward function and a new action selection policy termed constrained noise-exploration policy are judiciously designed to fulfill various constraints. To enable efficient learning in episode tasks, we develop a hierarchical experience replay mechanism, where the gist is to employ all experiences generated within each episode to jointly train the neural network. Besides, to enhance the convergence speed of DeepLSC, a symmetric experience augmentation mechanism, which simultaneously permutes the indexes of all variables to enrich available experience sets, is proposed. Simulation results demonstrate that compared with benchmarks, DeepLSC yields a higher sum-rate while meeting the preset constraints, achieves faster convergence, and is more robust against different settings.
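The symmetric experience augmentation can be pictured as applying the same index permutation to every per-UAV quantity in a stored transition, as in the hedged sketch below; the per-UAV array layout of states and actions is an assumption.

```python
# Hedged sketch of symmetric experience augmentation: permute the UAV indexes
# consistently across state, action, and next-state to generate additional,
# equally valid experiences. The per-UAV feature layout is an assumption.
import itertools
import numpy as np

def augment(state, action, reward, next_state):
    """Yield every index-permuted copy of one (s, a, r, s') experience."""
    n_uav = state.shape[0]                      # state: (n_uav, feat_dim)
    for perm in itertools.permutations(range(n_uav)):
        p = np.array(perm)
        yield state[p], action[p], reward, next_state[p]

# Usage: 3 UAVs, 4 state features and 2 action components per UAV.
s = np.random.rand(3, 4)
a = np.random.rand(3, 2)
s_next = np.random.rand(3, 4)
augmented = list(augment(s, a, 1.0, s_next))
print(len(augmented))                           # 6 permuted experiences
```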
arXiv:2501.01200v1 Announce Type: new Abstract: This paper provides an in-depth review and discussion of the state of the art in redundancy mitigation for the vehicular Collective Perception Service (CPS). We focus on the evolutionary differences between the redundancy mitigation rules proposed in 2019 in ETSI TR 103 562 versus the 2023 technical specification ETSI TS 103 324, which uses a Value of Information (VoI) based mitigation approach. We also critically analyse the academic literature that has sought to quantify the communication challenges posed by the CPS and present a unique taxonomy of the redundancy mitigation approaches proposed using three distinct classifications: object inclusion filtering, data format optimisation, and frequency management. Finally, this paper identifies open research challenges that must be adequately investigated to satisfactorily deploy CPS redundancy mitigation measures. Our critical and comprehensive evaluation serves as a point of reference for those undertaking research in this area.
arXiv:2501.00186v1 Announce Type: new Abstract: Rapid progress in the development of information technology has led to a significant increase in the number and complexity of cyber threats. Traditional methods of cybersecurity training based on theoretical knowledge do not provide a sufficient level of practical skills to effectively counter real threats. The article explores the possibilities of integrating simulation environments into the cybersecurity training process as an effective approach to improving the quality of training. It presents the architecture of a simulation environment based on a cluster of KVM hypervisors, which allows creating scalable and flexible platforms at minimal cost, and describes the implementation of various scenarios using open-source software tools such as pfSense, OPNsense, Security Onion, Kali Linux, Parrot Security OS, Ubuntu Linux, Oracle Linux, FreeBSD, and others, which create realistic conditions for practical training.
arXiv:2501.00354v1 Announce Type: new Abstract: Large constellations of Earth Observation Low Earth Orbit satellites collect enormous amounts of image data every day. This data needs to be transferred to data centers for processing via ground stations. Ground Station as a Service (GSaaS) emerges as a new cloud service to offer satellite operators easy access to a network of ground stations on a pay-per-use basis. However, renting ground station and data center resources still incurs considerable costs, especially for large satellite constellations. The current practice of sticking to a single GSaaS provider also suffers from high data latency and low robustness to weather variability due to limited ground station availability. To address these limitations, we propose SkyGS, a system that schedules both communication and computation by federating GSaaS and cloud computing services across multiple cloud providers. We formulate the resulting problem as a system cost minimization problem with a long-term data latency threshold constraint. In SkyGS, we apply Lyapunov optimization to decompose the long-term optimization problem into a series of real-time optimization problems that do not require prior knowledge. As the decomposed problem is still of exponential complexity, we transform it into a bipartite graph-matching problem and employ the Hungarian algorithm to solve it. We analyze the performance theoretically and evaluate SkyGS using realistic simulations based on real-world satellite, ground station, and data center data. The comprehensive experiments demonstrate that SkyGS can achieve cost savings of up to 63% and reduce average data latency by up to 95%.
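Once the Lyapunov decomposition produces a one-shot cost for each (satellite pass, ground station) pair, the per-slot matching reduces to a classic assignment problem. Below is a minimal sketch with made-up costs using SciPy's Hungarian-algorithm solver.

```python
# Minimal sketch of the per-slot matching step: given a one-shot cost for
# every (satellite pass, ground station) pair, solve the assignment with the
# Hungarian algorithm. Cost values are illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: combined rental + latency-backlog cost of serving pass i
# with ground station j (made-up numbers).
cost = np.array([
    [4.0, 9.0, 2.5],
    [7.0, 3.0, 6.0],
    [5.5, 8.0, 1.0],
])

rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"pass {i} -> ground station {j}, cost {cost[i, j]}")
print("total cost:", cost[rows, cols].sum())
```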
arXiv:2501.00372v1 Announce Type: new Abstract: The increasing complexity of 6G systems demands innovative tools for network management, simulation, and optimization. This work introduces the integration of ns-3 with Sionna RT, establishing the foundation for the first open source full-stack Digital Network Twin (DNT) capable of supporting multi-RAT. By incorporating a deterministic ray tracer for precise and site-specific channel modeling, this framework addresses limitations of traditional stochastic models and enables realistic, dynamic, and multilayered wireless network simulations. Tested in a challenging vehicular urban scenario, the proposed solution demonstrates significant improvements in accurately modeling wireless channels and their cascading effects on higher network layers. With up to 65% observed differences in application-layer performance compared to stochastic models, this work highlights the transformative potential of ray-traced simulations for 6G research, training, and network management.
arXiv:2501.00549v1 Announce Type: new Abstract: In this paper, we address the problem of timely delivery of status update packets in a real-time communication system, where a transmitter sends status updates generated by a source to a receiver over an unreliable channel. The timestamps of transmitted and received packets are measured using separate clocks located at the transmitter and receiver, respectively. To account for possible clock drift between these two clocks, we consider both deterministic and probabilistic drift scenarios. We analyze the system's performance regarding the Age of Information (AoI) and derive closed-form expressions for the distribution and the average AoI under both clock drift models. Additionally, we explore the impact of key system parameters on the average AoI through analytical and numerical results.
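For context, the standard sawtooth definition of AoI that the analysis builds on is shown below; the paper's clock-drift-specific closed forms are not reproduced here.

```latex
% Standard sawtooth definition of AoI (not the paper's clock-drift-specific
% closed form): with \Delta(t) the age at the receiver and T the horizon,
\Delta(t) = t - u(t), \qquad
\bar{\Delta} = \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \Delta(t)\, \mathrm{d}t,
% where u(t) is the generation timestamp of the most recently received update.
% Clock drift matters here because u(t) is stamped by the transmitter clock
% while t is read from the receiver clock.
```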
arXiv:2501.00859v1 Announce Type: new Abstract: Multirotor Aerial Vehicles (MRAVs), when integrated into wireless communication systems and equipped with a Reflective Intelligent Surface (RIS), enhance coverage and enable connectivity in obstructed areas. However, due to limited degrees of freedom (DoF), traditional under-actuated MRAVs (u-MRAVs) with RIS are unable to independently control both the RIS orientation and their location, which significantly limits network performance. A new design, the omnidirectional MRAV (o-MRAV), is introduced to address this issue. In this paper, an o-MRAV is deployed to assist a terrestrial base station in providing connectivity to obstructed users. Our objective is to maximize the minimum data rate among users by optimizing the o-MRAV's orientation, location, and RIS phase shift. To solve this challenging problem, we first smooth the objective function and then apply the Parallel Successive Convex Approximation (PSCA) technique to find efficient solutions. Our simulation results show significant improvements of 28% and 14% in terms of minimum and average data rates, respectively, for the o-MRAV compared to traditional u-MRAVs.
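One common way to smooth a non-differentiable max-min rate objective before applying successive convex approximation is the log-sum-exp soft-min shown below; the notation (R_k for the rate of user k as a function of the o-MRAV location, orientation, and RIS phases) and the choice of smoothing are assumptions, since the abstract does not state which smoothing the authors use.

```latex
% Hedged example of smoothing a max-min rate objective: replace
%   \min_{k} R_k(\mathbf{q}, \boldsymbol{\theta}, \boldsymbol{\phi})
% with the soft-min
\min_{k} R_k \;\approx\; -\frac{1}{\mu} \ln \sum_{k=1}^{K} e^{-\mu R_k},
% which lower-bounds the true minimum within \ln(K)/\mu and becomes exact as
% \mu \to \infty, yielding a smooth objective amenable to PSCA iterations.
```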
arXiv:2501.00883v1 Announce Type: new Abstract: The dispersed node locations and complex topologies of edge networks, combined with intricate dynamic microservice dependencies, render traditional centralized microservice architectures (MSAs) unsuitable. In this paper, we propose a decentralized microservice architecture (DMSA), which delegates scheduling functions from the control plane to edge nodes. DMSA redesigns and implements three core modules of microservice discovery, monitoring, and scheduling for edge networks to achieve precise awareness of instance deployments, low monitoring overhead and measurement errors, and accurate dynamic scheduling, respectively. Particularly, DMSA has customized a microservice scheduling scheme that leverages multi-port listening and zero-copy forwarding to guarantee high data forwarding efficiency. Moreover, a dynamic weighted multi-level load balancing algorithm is proposed to adjust scheduling dynamically with consideration of reliability, priority, and response delay. Finally, we have implemented a physical verification platform for DMSA. Extensive empirical results demonstrate that compared to state-of-the-art and traditional scheduling schemes, DMSA effectively counteracts link failures and network fluctuations, improving the service response delay and execution success rate by approximately $60\% \sim 75\%$ and $10\%\sim15\%$, respectively.
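A hedged sketch of what a weighted selection score combining reliability, priority, and response delay could look like is given below; the weights, normalization, and the static (non-adaptive) weighting are illustrative simplifications of DMSA's dynamic multi-level algorithm.

```python
# Hedged sketch of a weighted instance-selection score combining reliability,
# priority, and response delay; DMSA's real algorithm also adapts the weights
# dynamically, which is omitted here.
from dataclasses import dataclass

@dataclass
class Instance:
    node: str
    reliability: float   # 0..1, fraction of recent successful calls
    priority: int        # higher value = preferred tier (0..10 assumed)
    delay_ms: float      # recent average response delay

def score(inst, w_rel=0.5, w_pri=0.3, w_delay=0.2, max_delay=200.0):
    # Normalize delay so that lower delay yields a higher score.
    delay_term = max(0.0, 1.0 - inst.delay_ms / max_delay)
    return w_rel * inst.reliability + w_pri * inst.priority / 10 + w_delay * delay_term

def pick(instances):
    """Select the microservice instance with the best weighted score."""
    return max(instances, key=score)

candidates = [
    Instance("edge-1", reliability=0.99, priority=8, delay_ms=35.0),
    Instance("edge-2", reliability=0.90, priority=9, delay_ms=12.0),
    Instance("edge-3", reliability=0.97, priority=5, delay_ms=60.0),
]
print("forward request to:", pick(candidates).node)
```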
arXiv:2501.00950v1 Announce Type: new Abstract: The future mobile network has the complex mission of distributing available radio resources among various applications with different requirements. Radio access network slicing enables the creation of different logical networks by isolating and using dedicated resources for each group of applications. In this scenario, the radio resource scheduling (RRS) is responsible for distributing the radio resources available among the slices to fulfill their service-level agreement (SLA) requirements, prioritizing critical slices while minimizing the number of intent violations. Moreover, ensuring that the RRS can deal with a high diversity of network scenarios is essential. Several recent papers present advances in machine learning-based RRS. However, the scenarios and slice variety are restricted, which inhibits solid conclusions about the generalization capabilities of the models after deployment in real networks. This paper proposes an intent-based RRS using multi-agent reinforcement learning in a radio access network (RAN) slicing context. The proposed method protects high-priority slices when the available radio resources cannot fulfill all the slices. It uses transfer learning to reduce the number of training steps required. The proposed method and baselines are evaluated in different network scenarios that comprise combinations of different slice types, channel trajectories, numbers of active slices and user equipment (UEs), and UE characteristics. The proposed method outperformed the baselines in protecting higher-priority slices, obtaining an improvement of 40%, and achieved an improvement of 20% over the baselines when considering all slices. The results show that by using transfer learning, the required number of training steps could be reduced by a factor of eight without hurting performance.
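The transfer-learning step amounts to warm-starting a new agent's policy network from weights trained on other scenarios and then fine-tuning, as in the hypothetical sketch below; the network shape, checkpoint name, and optimizer settings are assumptions, not details from the paper.

```python
# Illustrative transfer-learning step for a per-slice agent: warm-start a new
# agent's policy network from one trained on other scenarios, then fine-tune.
# Network shape and file name are assumptions.
import torch
import torch.nn as nn

def make_policy(obs_dim=10, n_actions=11):
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

source_policy = make_policy()
torch.save(source_policy.state_dict(), "pretrained_slice_agent.pt")  # hypothetical

target_policy = make_policy()
target_policy.load_state_dict(torch.load("pretrained_slice_agent.pt"))
optimizer = torch.optim.Adam(target_policy.parameters(), lr=3e-4)
# ...fine-tune target_policy on the new scenario with far fewer training steps.
```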
arXiv:2501.01027v1 Announce Type: new Abstract: Remote patient monitoring is crucial in modern healthcare, but current systems struggle with real-time analysis and prediction of vital signs. This paper presents a novel architecture combining deep learning with 5G network capabilities to enable real-time vital sign monitoring and prediction. The proposed system utilizes a hybrid CNN-LSTM model optimized for edge deployment, paired with 5G Ultra-Reliable Low-Latency Communication (URLLC) for efficient data transmission. The architecture achieves end-to-end latency of 14.4ms while maintaining 96.5% prediction accuracy across multiple vital signs. Our system shows significant improvements over existing solutions, reducing latency by 47% and increasing prediction accuracy by 4.2% compared to current state-of-the-art systems. Performance evaluations conducted over three months with data from 1000 patients validate the system's reliability and scalability in clinical settings. The results demonstrate that integrating deep learning with 5G technology can effectively address the challenges of real-time patient monitoring, leading to early detection of deteriorating conditions and improved clinical outcomes. This research establishes a framework for reliable, real-time vital sign monitoring and prediction in digital healthcare.
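A minimal sketch of a hybrid CNN-LSTM predictor of the kind described is shown below; the layer sizes, the five-signal input, the 60-sample window, and the 10-step horizon are assumptions rather than the paper's actual configuration.

```python
# Hedged sketch of a hybrid CNN-LSTM for multi-step vital-sign prediction;
# layer sizes and the 5-signal / 60-sample window are assumptions.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_signals=5, hidden=64, horizon=10):
        super().__init__()
        # 1-D convolutions extract short-term waveform features per window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_signals, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # The LSTM models longer-term temporal dependencies across the window.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_signals * horizon)
        self.horizon, self.n_signals = horizon, n_signals

    def forward(self, x):                       # x: (batch, signals, time)
        feats = self.cnn(x)                     # (batch, 32, time)
        out, _ = self.lstm(feats.transpose(1, 2))
        pred = self.head(out[:, -1])            # use last hidden state
        return pred.view(-1, self.n_signals, self.horizon)

model = CNNLSTM()
print(model(torch.randn(8, 5, 60)).shape)       # torch.Size([8, 5, 10])
```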
arXiv:2501.01038v1 Announce Type: new Abstract: Integrated sensing and communication (ISAC) has emerged as a pivotal technology for enabling vehicle-to-everything (V2X) connectivity, mobility, and security. However, designing efficient beamforming schemes to achieve accurate sensing and enhance communication performance in the dynamic and uncertain environments of V2X networks presents significant challenges. While AI technologies offer promising solutions, the energy-intensive nature of neural networks (NNs) imposes substantial burdens on communication infrastructures. This work proposes an energy-efficient and intelligent ISAC system for V2X networks. Specifically, we first leverage a Markov Decision Process framework to model the dynamic and uncertain nature of V2X networks. This framework allows the roadside unit (RSU) to develop beamforming schemes relying solely on its current sensing state information, eliminating the need for numerous pilot signals and extensive channel state information acquisition. To endow the system with intelligence and enhance its performance, we then introduce an advanced deep reinforcement learning (DRL) algorithm based on the Actor-Critic framework with a policy-clipping technique, enabling the joint optimization of beamforming and power allocation strategies to guarantee both communication rate and sensing accuracy. Furthermore, to alleviate the energy demands of NNs, we integrate Spiking Neural Networks (SNNs) into the DRL algorithm. By leveraging discrete spikes and their temporal characteristics for information transmission, SNNs not only significantly reduce the energy consumption of deploying AI models in ISAC-assisted V2X networks but also further enhance algorithm performance. Extensive simulation results validate the effectiveness of the proposed scheme with lower energy consumption, superior communication performance, and improved sensing accuracy.
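The policy-clipping technique referenced here is, in its standard form, the PPO clipped surrogate objective shown below; the paper's exact variant may differ.

```latex
% Standard PPO clipped surrogate objective (the paper's variant may differ):
% with probability ratio
% r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)
% and advantage estimate \hat{A}_t,
L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[
  \min\!\big(r_t(\theta)\,\hat{A}_t,\;
  \mathrm{clip}\big(r_t(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_t\big)\right],
% which prevents destructively large policy updates while jointly learning
% beamforming and power-allocation actions.
```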
arXiv:2501.01141v1 Announce Type: new Abstract: This paper investigates adaptive transmission strategies in embodied AI-enhanced vehicular networks by integrating large language models (LLMs) for semantic information extraction and deep reinforcement learning (DRL) for decision-making. The proposed framework aims to optimize both data transmission efficiency and decision accuracy by formulating an optimization problem that incorporates the Weber-Fechner law, serving as a metric for balancing bandwidth utilization and quality of experience (QoE). Specifically, we employ the large language and vision assistant (LLAVA) model to extract critical semantic information from raw image data captured by embodied AI agents (i.e., vehicles), reducing transmission data size by more than 90\% while retaining essential content for vehicular communication and decision-making. In the dynamic vehicular environment, we employ a generalized advantage estimation-based proximal policy optimization (GAE-PPO) method to stabilize decision-making under uncertainty. Simulation results show that attention maps from LLAVA highlight the model's focus on relevant image regions, enhancing semantic representation accuracy. Additionally, our proposed transmission strategy improves QoE by up to 36\% compared to DDPG and accelerates convergence by reducing required steps by up to 47\% compared to pure PPO. Further analysis indicates that adapting semantic symbol length provides an effective trade-off between transmission quality and bandwidth, achieving up to a 61.4\% improvement in QoE when scaling from 4 to 8 vehicles.
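The generalized advantage estimation used in GAE-PPO has the standard form below, where lambda trades bias against variance in the advantage estimates driving the policy update.

```latex
% Generalized advantage estimation in its standard form: with TD residual
% \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t),
\hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} = \sum_{l=0}^{\infty} (\gamma\lambda)^{l}\, \delta_{t+l},
% where \lambda \in [0, 1] interpolates between the low-variance one-step
% estimate (\lambda = 0) and the low-bias Monte Carlo return (\lambda = 1).
```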
arXiv:2501.01170v1 Announce Type: new Abstract: In this study, we have experimentally modelled the movement of a bee colony in a hive during the winter season and developed a monitoring system that allows tracking the movement of the bee colony and honey consumption. The monitoring system consists of four load cells connected to the RP2040 controller based on the Raspberry Pi Pico board, from which data is transmitted via the MQTT protocol over a Wi-Fi network to the Raspberry Pi 5 microcomputer. The processed data from the Raspberry Pi 5 is recorded in a MySQL database. The algorithm for finding the location of the bee colony in the hive works correctly: the trajectory reconstructed from the sensor data matches the physical movement in the experiment, which imitates the movement of a bee colony under real conditions. The proposed monitoring system provides continuous observation of the bee colony without adversely affecting its natural activities and can be integrated with various wireless data networks. This is a promising tool for improving the efficiency of beekeeping and maintaining the health of bee colonies.
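A sketch of the Raspberry Pi 5 side of such a pipeline, subscribing to load-cell readings over MQTT and persisting them to MySQL, is shown below; the topic name, JSON payload format, credentials, and table schema are assumptions, not details from the paper.

```python
# Sketch of the Raspberry Pi 5 side of the pipeline: subscribe to load-cell
# readings over MQTT and persist them to MySQL. Topic name, payload format,
# credentials, and table schema are assumptions. (paho-mqtt 1.x client API)
import json
import mysql.connector
import paho.mqtt.client as mqtt

db = mysql.connector.connect(host="localhost", user="hive", password="secret",
                             database="beehive")
cursor = db.cursor()

def on_message(client, userdata, msg):
    # Expected payload: {"ts": "...", "cells": [w1, w2, w3, w4]} in grams.
    reading = json.loads(msg.payload)
    cursor.execute(
        "INSERT INTO weights (ts, cell1, cell2, cell3, cell4) "
        "VALUES (%s, %s, %s, %s, %s)",
        (reading["ts"], *reading["cells"]),
    )
    db.commit()

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883, 60)     # MQTT broker on the Pi 5 (assumed)
client.subscribe("hive/loadcells")        # topic published by the RP2040 (assumed)
client.loop_forever()
```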
arXiv:2501.01187v1 Announce Type: new Abstract: Privacy-preserving machine learning (PPML) enables clients to collaboratively train deep learning models without sharing private datasets, but faces privacy leakage risks due to gradient leakage attacks. Prevailing methods leverage secure aggregation strategies to enhance PPML, where clients leverage masks and secret sharing to further protect gradient data while tolerating participant dropouts. These methods, however, require frequent inter-client communication to negotiate keys and perform secret sharing, leading to substantial communication overhead. To tackle this issue, we propose NET-SA, an efficient secure aggregation architecture for PPML based on in-network computing. NET-SA employs seed-homomorphic pseudorandom generators for local gradient masking and utilizes programmable switches for seed aggregation. Accurate and secure gradient aggregation is then performed on the central server based on masked gradients and aggregated seeds. This design reduces communication overhead by eliminating the communication-intensive phases of seed agreement and secret sharing, and improves dropout tolerance by overcoming the threshold limit of secret sharing. Extensive experiments on server clusters and an Intel Tofino programmable switch demonstrate that NET-SA achieves up to 77x and 12x improvements in runtime and a 2x decrease in total client communication cost compared with state-of-the-art methods.
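The seed-homomorphic masking idea can be illustrated with a toy linear PRG for which G(s1 + s2) = G(s1) + G(s2) (mod p), so the server can strip the combined mask using only the switch-aggregated seed. The construction below is purely illustrative and not cryptographically secure, and NET-SA's actual PRG differs.

```python
# Toy illustration of secure aggregation with a seed-homomorphic PRG:
# G(s1 + s2) = G(s1) + G(s2) (mod p), so the server can remove the combined
# mask using only the aggregated seed. This linear PRG is NOT secure and is
# for illustration only; NET-SA's real construction differs.
import random

P = 2**61 - 1            # prime modulus (assumed)
DIM = 4                  # gradient dimension for the toy example

# Public per-coordinate coefficients shared by all parties.
COEFF = [random.randrange(P) for _ in range(DIM)]

def prg(seed):
    """Seed-homomorphic PRG: linear in the seed modulo P."""
    return [(seed * c) % P for c in COEFF]

def mask(grad, seed):
    return [(g + m) % P for g, m in zip(grad, prg(seed))]

# Three clients with small quantized gradients and random local seeds.
grads = [[1, 2, 3, 4], [10, 0, 5, 7], [2, 2, 2, 2]]
seeds = [random.randrange(P) for _ in grads]
masked = [mask(g, s) for g, s in zip(grads, seeds)]

# Programmable switch: aggregate seeds; server: aggregate masked gradients.
agg_seed = sum(seeds) % P
agg_masked = [sum(col) % P for col in zip(*masked)]
recovered = [(m - k) % P for m, k in zip(agg_masked, prg(agg_seed))]
print(recovered)          # [13, 4, 10, 13] == coordinate-wise sum of gradients
```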
arXiv:2412.20151v2 Announce Type: replace Abstract: As an emerging computing paradigm, mobile edge computing (MEC) provides processing capabilities at the network edge, aiming to reduce latency and improve user experience. Meanwhile, the advancement of containerization technology facilitates the deployment of microservice-based applications via edge node collaboration, ensuring highly efficient service delivery. However, existing research overlooks the resource contention among microservices in MEC. This neglect potentially results in inadequate resources for microservices constituting latency-sensitive applications, leading to increased response time and ultimately compromising quality of service (QoS). To solve this problem, we propose the Contention-Aware Multi-Application Microservice Deployment (CAMD) algorithm for collaborative MEC, balancing rapid response for applications with low-latency requirements and overall processing efficiency. The CAMD algorithm decomposes the overall deployment problem into manageable sub-problems, each focusing on a single microservice, then employs a heuristic approach to optimize these sub-problems, and ultimately arrives at an optimized deployment scheme through an iterative process. Finally, the superiority of the proposed algorithm is evidenced through intensive experiments and comparison with baseline algorithms.