cs.NI (136 posts)
arXiv:2501.11574v1 Announce Type: new Abstract: Co-existence of 5G New Radio (5G-NR) with IoT devices is considered a promising technique to enhance the spectral usage and efficiency of future cellular networks. In this paper, a unified framework has been proposed for allocating in-band resource blocks (RBs) within a multi-cell network to 5G-NR users in co-existence with NB-IoT and LTE-M devices. First, a benchmark (upper-bound) scheduler has been designed for joint sub-carrier (SC) and modulation and coding scheme (MCS) allocation that maximizes instantaneous throughput and fairness among users/devices, while considering synchronous RB allocation in the neighboring cells. A series of numerical simulations with realistic inter-cell interference (ICI) in an urban scenario has been used to compute benchmark upper-bound solutions for characterizing performance in terms of throughput, fairness, and delay. Next, an edge-learning-based multi-agent deep reinforcement learning (DRL) framework has been developed for different DRL algorithms, specifically a policy-based gradient network (PGN), a deep Q-learning based network (DQN), and an actor-critic based deep deterministic policy gradient network (DDPGN). The proposed DRL framework relies on interference allocation, where the actions are based on ICI rather than on power, which bypasses the need for raw data sharing and/or inter-agent communication. The numerical results reveal that the interference-allocation-based DRL schedulers can significantly outperform their counterparts whose actions are based on power allocation. Further, the performance of the proposed policy-based edge learning algorithms is close to that of the centralized ones.
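As an illustration of the interference-allocation idea described above, the sketch below has each cell's agent choose a discrete ICI budget per resource block instead of a transmit power, so no raw data or inter-agent messages are needed. It uses a tabular Q-learning stand-in rather than the paper's PGN/DQN/DDPGN agents, and the action set, state discretization, and reward are assumptions.

```python
import numpy as np

# Hypothetical action set: each agent caps the ICI it may cause per resource block.
ICI_LEVELS_DBM = [-110, -100, -90, -80]
N_RB = 6  # resource blocks scheduled per cell in one slot (assumed)

class InterferenceAgent:
    """Tabular Q-learning stand-in for the paper's PGN/DQN/DDPGN agents."""
    def __init__(self, n_states=8, n_actions=len(ICI_LEVELS_DBM),
                 lr=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, state):
        if np.random.rand() < self.eps:
            return np.random.randint(self.q.shape[1])   # explore
        return int(np.argmax(self.q[state]))            # exploit

    def learn(self, s, a, reward, s_next):
        target = reward + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])

def quantize_sinr(sinr_db, n_states=8):
    # Coarse state index from a measured SINR (assumed discretization).
    return int(np.clip((sinr_db + 10) // 5, 0, n_states - 1))

# One scheduling round in one cell: pick an ICI cap per RB, observe a
# throughput-plus-fairness reward, and update -- no raw data leaves the cell.
agent = InterferenceAgent()
for rb in range(N_RB):
    s = quantize_sinr(np.random.uniform(-10, 30))
    a = agent.act(s)
    reward = np.random.rand()  # placeholder for the throughput/fairness score
    agent.learn(s, a, reward, quantize_sinr(np.random.uniform(-10, 30)))
```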
arXiv:2501.12304v1 Announce Type: new Abstract: The increasing number of wireless communication technologies and standards brings immense opportunities and challenges for providing seamless connectivity in Hybrid Vehicular Networks (HVNs). HVNs could not only enhance existing applications but could also spur an array of new services. However, due to the sheer number of use cases and applications with diverse and stringent QoS requirements, it is critical to efficiently decide which radio access technology (RAT) to select. In this paper, a QoS-aware RAT selection algorithm is proposed for HVNs. The proposed algorithm switches between an IEEE 802.11p-based ad hoc network and an LTE cellular network by considering the network load and the application's QoS requirements. The simulation-based studies show that the proposed RAT selection mechanism results in a lower number of Vertical Handovers (VHOs) and significant performance improvements in terms of packet delivery ratio, latency, and application-level throughput.
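A toy decision rule in the spirit of the abstract's load- and QoS-aware switching between IEEE 802.11p and LTE; the thresholds, field names, and metrics below are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class AppQoS:
    max_latency_ms: float        # application latency bound
    min_throughput_kbps: float   # application throughput need

def select_rat(channel_busy_ratio: float, lte_cell_load: float, qos: AppQoS) -> str:
    """Toy QoS-aware RAT selection between IEEE 802.11p and LTE.

    channel_busy_ratio: fraction of time the 802.11p channel is sensed busy (0..1)
    lte_cell_load:      fraction of LTE resources in use (0..1)
    Thresholds below are illustrative assumptions, not values from the paper.
    """
    # Latency-critical traffic prefers 802.11p unless the channel is congested.
    if qos.max_latency_ms <= 100:
        return "802.11p" if channel_busy_ratio < 0.6 else "LTE"
    # Throughput-oriented traffic prefers LTE unless the cell is heavily loaded.
    if qos.min_throughput_kbps >= 500:
        return "LTE" if lte_cell_load < 0.8 else "802.11p"
    # Default: pick the less loaded network to limit vertical handovers.
    return "802.11p" if channel_busy_ratio <= lte_cell_load else "LTE"

print(select_rat(0.3, 0.7, AppQoS(max_latency_ms=50, min_throughput_kbps=64)))
```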
arXiv:2501.11198v1 Announce Type: new Abstract: Internet of Things (IoT) devices have become increasingly ubiquitous, with applications not only in urban areas but in remote areas as well. These devices support industries such as agriculture, forestry, and resource extraction. Because these devices are located in remote areas, satellites are frequently used to collect and deliver IoT device data to customers. As these devices become increasingly advanced and numerous, the amount of data produced has rapidly increased, potentially straining radio frequency (RF) downlink capacity. Free-space optical communications, with their wide available bandwidths and high data rates, are a potential solution, but these communication systems are highly vulnerable to weather-related disruptions. As a result, certain communication opportunities are inefficient in terms of the amount of data received versus the power expended. In this paper, we propose a deep reinforcement learning (DRL) method using Deep Q-Networks that takes advantage of weather condition forecasts to improve energy efficiency while delivering the same number of packets as schemes that do not factor weather into routing decisions. We compare this method with approaches that use simple cloud cover thresholds to improve energy efficiency. In testing, the DRL approach provides improved median energy efficiency without a significant reduction in median delivery ratio. Simple cloud cover thresholds were also found to be effective, but the thresholds with the highest energy efficiency had reduced median delivery ratios.
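The cloud-cover-threshold baseline mentioned above can be sketched as a one-line rule; the threshold value and the queue-overflow escape hatch below are assumptions, and the paper's DQN would instead learn the decision from a state combining forecast, queue, and link information.

```python
def should_transmit(cloud_cover_forecast: float, queue_len: int,
                    cloud_threshold: float = 0.4, queue_limit: int = 100) -> bool:
    """Baseline rule sketched from the abstract (threshold values assumed):
    skip a free-space-optical pass when heavy cloud cover is forecast, unless
    the packet queue is about to overflow."""
    if queue_len >= queue_limit:
        return True                      # transmit anyway to avoid drops
    return cloud_cover_forecast <= cloud_threshold

# Energy efficiency would then be tracked as packets delivered per joule spent.
passes = [(0.1, 20), (0.9, 30), (0.8, 120)]   # (forecast cloud cover, queue length)
print([should_transmit(c, q) for c, q in passes])   # -> [True, False, True]
```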
arXiv:2501.11484v1 Announce Type: new Abstract: The Internet of Things (IoT) establishes connectivity between billions of heterogeneous devices that provide a variety of essential everyday services. The IoT faces several challenges, including energy efficiency and scalability, that call for enabling technologies such as network softwarization. This technology is an appropriate solution for IoT, leveraging Software Defined Networking (SDN) and Network Function Virtualization (NFV) as its two main techniques, especially when combined with Machine Learning (ML). Although many efforts have been made to optimize routing in softwarized IoT, existing solutions do not take advantage of distributed intelligence. In this paper, we propose to optimize routing in softwarized IoT networks using Federated Deep Reinforcement Learning (FDRL), where distributed network softwarization and intelligence (i.e., FDRL) join forces to improve routing in constrained IoT networks. Our proposal combines two novelties, a distributed controller design and intelligent routing, to meet IoT requirements, mainly performance and energy efficiency. The simulation results confirm the effectiveness of our proposal compared to conventional counterparts.
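The FDRL ingredient can be illustrated with plain federated averaging of locally trained routing-policy parameters across distributed SDN controllers; this is generic FedAvg under assumed names and sizes, not the authors' exact aggregation rule.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging of policy parameters from distributed controllers.

    client_weights: list of per-controller parameter lists (numpy arrays)
    client_sizes:   number of local experiences per controller (weighting factor)
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Two controllers with a tiny 2-layer policy each (toy weights for illustration).
ctrl_a = [np.ones((4, 4)), np.ones(4)]
ctrl_b = [np.zeros((4, 4)), np.zeros(4)]
global_policy = fedavg([ctrl_a, ctrl_b], client_sizes=[300, 100])
print(global_policy[1])   # -> [0.75 0.75 0.75 0.75]
```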
arXiv:2501.11994v1 Announce Type: new Abstract: Single-Carrier Frequency-Division Multiple Access (SC-FDMA) is a transmission technique used in the uplink of Long Term Evolution (LTE) and 5G systems, as it exhibits reduced transmitted-signal envelope fluctuations in comparison to the Orthogonal Frequency-Division Multiplexing (OFDM) technique used in the downlink. This allows for higher energy efficiency of User Equipments (UEs) while maintaining sufficient signal quality, measured by Error Vector Magnitude (EVM), at the transmitter. This paper proposes to model the influence of a nonlinear Power Amplifier (PA) while optimizing the transmit power in order to maximize the Signal to Noise and Distortion power Ratio (SNDR) at the receiver, removing the transmitter-based EVM constraint. An analytic SNDR model for the OFDM system and a semi-analytical model for the SC-FDMA system are provided. Numerical investigations show that the proposed transmit power optimization improves signal quality at the receiver for both OFDM and SC-FDMA systems, with SC-FDMA still outperforming OFDM in this respect. Such power-amplifier-aware transmitter optimization should be considered to boost the performance and sustainability of next-generation wireless systems, including Internet of Things (IoT) ones.
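A numeric sketch of the power-sweep idea: the receiver-side SNDR is computed with the standard Bussgang decomposition of an ideal soft-limiter PA driven by a Gaussian (OFDM-like) signal, and the transmit power maximizing it is selected. This textbook approximation and all parameter values are assumptions; the paper's analytic and semi-analytical models may differ.

```python
import math
import numpy as np

def sndr_soft_limiter(p_tx, a_sat, path_gain, noise_power):
    """Received SNDR of a Gaussian (OFDM-like) signal through an ideal soft-limiter
    PA, via the standard Bussgang decomposition (a textbook approximation)."""
    gamma = a_sat / math.sqrt(p_tx)                       # clipping ratio
    alpha = (1 - math.exp(-gamma**2)
             + (math.sqrt(math.pi) / 2) * gamma * math.erfc(gamma))
    p_out = p_tx * (1 - math.exp(-gamma**2))              # total PA output power
    p_dist = max(p_out - alpha**2 * p_tx, 0.0)            # nonlinear distortion power
    return (alpha**2 * p_tx * path_gain) / (p_dist * path_gain + noise_power)

# Sweep the transmit power and pick the SNDR-maximizing operating point:
# too little power is noise-limited, too much is distortion-limited.
powers = np.linspace(0.05, 4.0, 200)                      # illustrative grid
sndrs = [sndr_soft_limiter(p, a_sat=1.0, path_gain=1e-3, noise_power=1e-6)
         for p in powers]
print(f"SNDR-optimal transmit power ~ {powers[int(np.argmax(sndrs))]:.2f} (a.u.)")
```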
arXiv:2412.10874v2 Announce Type: replace Abstract: With the increasing complexity of Wi-Fi networks and the iterative evolution of 802.11 protocols, the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol faces significant challenges in achieving fair channel access and efficient resource allocation between legacy and modern Wi-Fi devices. To address these challenges, we propose an AI-driven Station (AI-STA) equipped with a Deep Q-Network (DQN) module that dynamically adjusts its receive sensitivity threshold and transmit power. The AI-STA algorithm aims to maximize fairness in resource allocation while ensuring that diverse Quality of Service (QoS) requirements are met. The performance of the AI-STA is evaluated through discrete-event simulations in a Wi-Fi network, demonstrating that it outperforms traditional stations in fairness and QoS metrics. Although the AI-STA does not exhibit exceptionally superior performance, it holds significant potential for meeting QoS and fairness requirements with the inclusion of additional MAC parameters. The proposed AI-driven Sensitivity and Power algorithm offers a robust framework for optimizing sensitivity and power control in AI-STA devices within legacy Wi-Fi networks.
arXiv:2501.10627v1 Announce Type: new Abstract: The flexibility and complexity of IPv6 extension headers allow attackers to create covert channels or bypass security mechanisms, leading to potential data breaches or system compromises. Machine learning has matured into the primary detection technology used to mitigate covert communication threats. However, the complexity of detecting covert communication, evolving injection techniques, and the scarcity of data make building machine learning models challenging. In previous related research, machine learning has shown good performance in detecting covert communications, but oversimplified attack-scenario assumptions cannot represent the complexity of modern covert techniques and make it artificially easy for machine learning models to detect covert communications. To bridge this gap, in this study we analyzed the packet structure and network traffic behavior of IPv6, used encryption algorithms, and injected covert communications without changing network packet behavior, to get closer to real attack scenarios. Beyond analyzing and injecting covert communications, this study trains a comprehensive set of machine learning models to detect such threats, ranging from traditional tree-based methods such as random forests and gradient boosting to neural network architectures such as CNNs and LSTMs, achieving detection accuracy of over 90\%. This study details the methods used for dataset augmentation and the comparative performance of the applied models, reinforcing insights into the adaptability and resilience of machine learning for IPv6 covert communication detection. In addition, we propose a Generative-AI-assisted interpretation concept based on prompt engineering as a preliminary study of the role of Generative AI agents in covert communication detection.
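A minimal sketch of the tree-based detection branch, assuming hypothetical flow-level features and synthetic data; it only shows the shape of a random-forest pipeline, not the study's dataset or feature engineering.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Assumed flow-level features: extension-header length, option count,
# inter-packet gap variance, payload entropy.
X_benign = rng.normal([8, 1, 0.2, 4.0], [2, 0.5, 0.1, 0.5], size=(500, 4))
X_covert = rng.normal([24, 3, 0.6, 7.5], [4, 1.0, 0.2, 0.4], size=(500, 4))
X = np.vstack([X_benign, X_covert])
y = np.array([0] * 500 + [1] * 500)   # 0 = benign, 1 = covert channel

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"toy accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```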
arXiv:2501.11107v1 Announce Type: new Abstract: Chaos Engineering (CE) is an engineering technique aimed at improving the resiliency of distributed systems. It involves artificially injecting specific failures into a distributed system and observing its behavior in response. Based on the observation, the system can be proactively improved to handle those failures. Recent CE tools realize the automated execution of predefined CE experiments. However, defining these experiments and reconfiguring the system after the experiments still remain manual. To reduce the cost of these manual operations, we propose \textsc{ChaosEater}, a \textit{system} for automating the entire set of CE operations with Large Language Models (LLMs). It pre-defines the general flow according to the systematic CE cycle and assigns the subdivided operations within the flow to LLMs. We assume systems based on Infrastructure as Code (IaC), wherein the system configurations and artificial failures are managed through code. Hence, the LLMs' operations in our \textit{system} correspond to software engineering tasks, including requirement definition, code generation and debugging, and testing. We validate our \textit{system} through case studies on both small and large systems. The results demonstrate that our \textit{system} significantly reduces both time and monetary costs while completing reasonable single CE cycles.
arXiv:2501.11333v1 Announce Type: new Abstract: In this paper, task offloading from vehicles with random velocities is optimized via a novel dynamic improvement framework. In particular, in a vehicular network with multiple vehicles and base stations (BSs), computing tasks of vehicles are offloaded via BSs to an edge server. Due to the random velocities, the exact trajectories of the vehicles cannot be predicted in advance. Hence, instead of deterministic optimization, the cell association, uplink time, and throughput allocation of multiple vehicles over a period of task offloading are formulated as a finite-horizon Markov decision process. In the proposed solution framework, we first obtain a reference scheduling scheme of cell association, uplink time, and throughput allocation via deterministic optimization at the very beginning. The reference scheduling scheme is then used to approximate the value functions of the Bellman equations, and the actual scheduling action is determined in each time slot according to the current system state and the approximate value functions. Thus, the intensive computation required for value iteration in the conventional solution is eliminated. Moreover, a non-trivial upper bound on the average cost is provided for the proposed solution framework. In the simulation, the random trajectories of vehicles are generated from a high-fidelity traffic simulator. It is shown that the performance gain of the proposed scheduling framework over the baselines is significant.
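The approximate-dynamic-programming step can be sketched as a one-step lookahead that adds the immediate reward to a value taken from a table precomputed from the reference schedule; the toy states, actions, and rewards below are placeholders, not the paper's formulation.

```python
def one_step_lookahead(state, t, actions, reward_fn, transition_fn, v_ref):
    """Pick the action maximizing r(s, a) + V_ref[t+1][s'] over a finite horizon,
    where V_ref approximates the value function from a reference schedule."""
    best_action, best_value = None, float("-inf")
    for a in actions:
        s_next = transition_fn(state, a, t)
        value = reward_fn(state, a, t) + v_ref[t + 1].get(s_next, 0.0)
        if value > best_value:
            best_action, best_value = a, value
    return best_action

# Tiny example: the immediate reward favors cell0, but the reference values say
# the state reached via cell1 is better positioned for the next slot.
actions = ["cell0", "cell1"]
v_ref = {1: {"near_cell0": 1.0, "near_cell1": 5.0}}
reward = lambda s, a, t: 2.0 if a == "cell0" else 1.0
transition = lambda s, a, t: "near_cell0" if a == "cell0" else "near_cell1"
print(one_step_lookahead("start", 0, actions, reward, transition, v_ref))  # -> cell1
```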
arXiv:2501.11410v1 Announce Type: new Abstract: This paper proposes a novel split learning architecture designed to exploit the cyclical movement of Low Earth Orbit (LEO) satellites in non-terrestrial networks (NTNs). Although existing research focuses on offloading tasks to the NTN infrastructure, these approaches overlook the dynamic movement patterns of LEO satellites that can be used to efficiently distribute the learning task. In this work, we analyze how LEO satellites, from the perspective of ground terminals, can participate in time-window-based model training. By splitting the model between a LEO satellite and a ground terminal, the computational burden on the satellite segment is reduced, while each LEO satellite offloads the partially trained model to the next satellite in the constellation. This cyclical training process allows larger and more energy-intensive models to be deployed and trained across multiple LEO satellites, despite their limited energy resources. We formulate an optimization problem that manages radio and processing resources, ensuring that all data is processed during each satellite pass while minimizing energy consumption. Our results demonstrate that this approach offers a more scalable and energy-efficient way to train complex models, enhancing the capabilities of LEO satellite constellations in the context of Artificial Intelligence-driven applications.
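A minimal split-learning step between a satellite-side front model and a ground-side back model, written with PyTorch under assumed layer sizes and cut point: only cut-layer activations and gradients cross the link, and the satellite's front half would then be handed to the next satellite in the constellation.

```python
import torch
import torch.nn as nn

sat_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())   # runs on the satellite
ground_part = nn.Sequential(nn.Linear(64, 10))            # runs on the ground terminal
opt_sat = torch.optim.SGD(sat_part.parameters(), lr=0.01)
opt_ground = torch.optim.SGD(ground_part.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))   # toy batch

# Satellite forward pass up to the cut layer; only activations cross the link.
smashed = sat_part(x)
smashed_remote = smashed.detach().requires_grad_()         # "transmitted" tensor

# Ground terminal completes the forward pass and backpropagates to the cut layer.
loss = loss_fn(ground_part(smashed_remote), y)
opt_ground.zero_grad()
loss.backward()
opt_ground.step()

# The cut-layer gradient is sent back; the satellite finishes backpropagation.
opt_sat.zero_grad()
smashed.backward(smashed_remote.grad)
opt_sat.step()

# At the end of its pass, the satellite would hand sat_part.state_dict()
# to the next satellite to continue the cyclical training.
```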
arXiv:2501.11605v1 Announce Type: new Abstract: Microblogging is a crucial mode of online communication. However, launching a new microblogging platform remains challenging, largely due to network effects. This has resulted in entrenched (and undesirable) dominance by established players, such as X/Twitter. To overcome these network effects, Bluesky, an emerging microblogging platform, introduced starter packs -- curated lists of accounts that users can follow with a single click. We ask whether starter packs have the potential to tackle the critical problem of social bootstrapping in new online social networks. This paper is the first to address this question: we assess whether starter packs have indeed been helpful in supporting Bluesky's growth. Our dataset includes $25.05 \times 10^6$ users and $335.42 \times 10^3$ starter packs with $1.73 \times 10^6$ members, covering the entire lifecycle of Bluesky. We study the usage of these starter packs, their ability to drive network and activity growth, and their potential downsides. We also quantify the benefits of starter packs for members and creators in terms of user visibility and activity, while identifying potential challenges. By evaluating starter packs' effectiveness and limitations, we contribute to the broader discourse on platform growth strategies and competitive innovation in the social media landscape.
arXiv:2501.11984v1 Announce Type: new Abstract: Long-range frequency-hopping spread spectrum (LR-FHSS) promises to enhance network capacity by integrating frequency hopping into existing Long Range Wide Area Networks (LoRaWANs). Due to its simplicity and scalability, LR-FHSS has generated significant interest as a potential candidate for direct-to-satellite IoT (D2S-IoT) applications. This paper explores methods to improve the reliability of data transfer on the uplink (i.e., from terrestrial IoT nodes to satellite) of LR-FHSS D2S-IoT networks. Because D2S-IoT networks are expected to support large numbers of potentially uncoordinated IoT devices per satellite, acknowledgment-cum-retransmission-aided reliability mechanisms are not suitable due to their lack of scalability. We therefore leverage message replication, wherein every application-layer message is transmitted multiple times to improve the probability of reception without the use of receiver acknowledgments. We propose two message-replication schemes. One scheme is based on conventional replication, where multiple replicas of a message are transmitted, each as a separate link-layer frame. In the other scheme, multiple copies of a message are included in the payload of a single link-layer frame. We show that both techniques improve LR-FHSS reliability; which method is more suitable depends on the network's traffic characteristics, and we provide guidelines for choosing the optimal method.
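A toy comparison of the two replication schemes under an independent fragment-erasure model with a coding-style recovery threshold; real LR-FHSS header replication, collisions, and traffic load, which drive the paper's conclusion that the better scheme depends on traffic characteristics, are not modeled here, and the fragment counts and threshold are assumptions.

```python
from math import ceil, comb

def frame_success(p_frag_loss, n_frag, recovery_fraction=1/3):
    """Probability that enough fragments survive for the frame to be decoded
    (binomial model with an assumed coding-style recovery threshold)."""
    need = ceil(recovery_fraction * n_frag)
    return sum(comb(n_frag, k) * (1 - p_frag_loss)**k * p_frag_loss**(n_frag - k)
               for k in range(need, n_frag + 1))

def separate_frames(p_frag_loss, replicas, frag_per_msg=12):
    """Scheme 1: each replica is its own link-layer frame; success if any decodes."""
    ps = frame_success(p_frag_loss, frag_per_msg)
    return 1 - (1 - ps)**replicas

def packed_frame(p_frag_loss, replicas, frag_per_msg=12):
    """Scheme 2: all replicas share one longer frame; success if it decodes."""
    return frame_success(p_frag_loss, frag_per_msg * replicas)

for p in (0.4, 0.6):   # fragment loss probabilities
    print(p, round(separate_frames(p, 3), 3), round(packed_frame(p, 3), 3))
```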
arXiv:2501.12033v1 Announce Type: new Abstract: Today, the rapid growth of applications reliant on datacenters calls for new advancements to meet increasing traffic and computational demands. Traffic traces from datacenters are essential for the further development and optimization of future datacenters. However, traces are rarely released to the public. Researchers often use simplified mathematical models that lack the depth needed to recreate intricate traffic patterns and, thus, miss optimization opportunities found in realistic traffic. In this preliminary work, we introduce DTG-GPT, a packet-level Datacenter Traffic Generator (DTG) based on the generative pre-trained transformer (GPT) architecture used by many state-of-the-art large language models. We train our model on a small set of available traffic traces from different domains and offer a simple methodology to evaluate the fidelity of the generated traces with respect to their original counterparts. We show that DTG-GPT can synthesize novel traces that mimic the spatiotemporal patterns found in real traffic traces. We further demonstrate that DTG-GPT can generate traces for networks of different scales while maintaining fidelity. Our findings indicate that, in the future, models similar to DTG-GPT could allow datacenter operators to release traffic information to the research community via trained GPT models.
arXiv:2501.12037v1 Announce Type: new Abstract: Reconfigurable intelligent surfaces (RISs) are a promising technology for enhancing cellular network performance and yielding additional value to network operators. This paper proposes a techno-economic analysis of RIS-assisted cellular networks to guide operators in deciding between deploying additional RISs or base stations (BSs). We assume a relative cost model that considers the total cost of ownership (TCO) of deploying additional nodes, either BSs or RISs, and a return on investment (RoI) that is proportional to the system's spectral efficiency. The latter is evaluated based on a stochastic geometry model that gives an integral formula for the ergodic rate in cellular networks equipped with RISs. The marginal RoI for any investment strategy is determined by the partial derivative of this integral expression with respect to the node densities. We investigate two case studies: throughput enhancement and coverage hole mitigation. These examples demonstrate how operators can determine the optimal investment strategy in scenarios defined by the current densities of BSs and RISs and their relative costs. Numerical results illustrate the evolution of ergodic rates under the proposed investment strategy, demonstrating the investment decision-making process while considering technological and economic factors. This work quantitatively demonstrates that strategically investing in RISs can offer better system-level benefits than solely investing in BS densification.
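The investment rule can be sketched as comparing finite-difference partial derivatives of the RoI with respect to BS and RIS densities, each normalized by its unit cost; since the paper's ergodic-rate integral is not reproduced here, a toy diminishing-returns spectral-efficiency function stands in for it.

```python
import math

def spectral_efficiency(lambda_bs, lambda_ris):
    # Toy diminishing-returns stand-in for the paper's ergodic-rate integral.
    return math.log(1 + 2.0 * lambda_bs + 0.6 * lambda_ris)

def marginal_roi_per_cost(cost_bs, cost_ris, lambda_bs, lambda_ris, h=1e-5):
    """Finite-difference partials of RoI (proportional to spectral efficiency)
    with respect to each node density, normalized by the unit deployment cost."""
    base = spectral_efficiency(lambda_bs, lambda_ris)
    d_bs = (spectral_efficiency(lambda_bs + h, lambda_ris) - base) / h
    d_ris = (spectral_efficiency(lambda_bs, lambda_ris + h) - base) / h
    return d_bs / cost_bs, d_ris / cost_ris

per_bs, per_ris = marginal_roi_per_cost(cost_bs=10.0, cost_ris=1.0,
                                        lambda_bs=5.0, lambda_ris=2.0)
print("next unit of budget goes to:", "RIS" if per_ris > per_bs else "BS")
```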
arXiv:2501.10396v1 Announce Type: new Abstract: We present a survey on methods and applications of digital twins (DT) for urban traffic management. While the majority of studies on DTs focus on their "eyes," that is, emerging sensing and perception capabilities such as object detection and tracking, what really distinguishes a DT from a traditional simulator lies in its "brain," the prediction and decision-making capabilities of extracting patterns and making informed decisions from what has been seen and perceived. In order to add value to urban transportation management, DTs need to be powered by artificial intelligence and complemented with low-latency, high-bandwidth sensing and networking technologies. We first review the DT pipeline leveraging cyber-physical systems and propose our DT architecture deployed on a real-world testbed in New York City. This survey can serve as a pointer to help researchers and practitioners identify challenges and opportunities for the development of DTs; a bridge to initiate conversations across disciplines; and a road map for exploiting the potential of DTs for diverse urban transportation applications.
arXiv:2501.10403v1 Announce Type: new Abstract: Cyber polygons, used to train cybersecurity professionals, test new security technologies, and simulate attacks, play an important role in ensuring cybersecurity. The creation of such training grounds is based on the use of hypervisors, which allow efficient management of virtual machines, isolate the operating systems and resources of a physical computer from the virtual machines, and ensure a high level of security and stability. The paper analyses various aspects of using hypervisors in cyber polygons, including the types of hypervisors, their main functions, and the specifics of their use in modelling cyber threats. The article shows the ability of hypervisors to increase the efficiency of hardware resources and to create complex virtual environments for detailed modelling of network structures and the simulation of real situations in cyberspace.
arXiv:2501.10712v1 Announce Type: new Abstract: This paper defines a new model which incorporates three key ingredients of a large class of wireless communication systems: (1) spatial interactions through interference, (2) dynamics of the queueing type, with users joining and leaving, and (3) carrier sensing and collision avoidance as used in, e.g., WiFi. In systems using (3), rather than directly accessing the shared resources upon arrival, a customer is considerate and waits to access them until nearby users in service have left. This new model can be seen as a missing piece of a larger puzzle that contains such dynamics as spatial birth-and-death processes, the Poisson-Hail model, and wireless dynamics as key other pieces. It is shown that, under natural assumptions, this model can be represented as a Markov process on the space of counting measures. The main results are then two-fold. The first is on the shape of the stability region and, more precisely, on the characterization of the critical value of the arrival rate that separates stability from instability. The second is of a more qualitative or perhaps even ethical nature. There is evidence that, for natural values of the system parameters, the implementation of sensing and collision avoidance stabilizes a system that would be unstable if immediate access to the shared resources were granted. In other words, for these parameters, renouncing greedy access makes sharing sustainable, whereas indulging in greedy access kills the system.
arXiv:2501.07676v2 Announce Type: replace Abstract: Practitioners use Infrastructure as Code (IaC) scripts to efficiently configure IT infrastructures through machine-readable definition files. However, during the development of these scripts, some code patterns or deployment choices may lead to sustainability issues, such as inefficient resource utilization or redundant provisioning. We call these patterns sustainability smells. These inefficiencies pose significant environmental and financial challenges, given the growing scale of cloud computing. This research focuses on Terraform, a widely adopted IaC tool. Our study involves defining seven sustainability smells and validating them through a survey with 19 IaC practitioners. We utilized a dataset of 28,327 Terraform scripts from 395 open-source repositories. We performed a detailed qualitative analysis of a randomly sampled 1,860 Terraform scripts from the original dataset to identify code patterns that correspond to the sustainability smells, and used the remaining 26,467 Terraform scripts to study the prevalence of the defined sustainability smells. Our results indicate varying prevalence rates of these smells across the dataset. The most prevalent smell is Monolithic Infrastructure, which appears in 9.67\% of the scripts. Additionally, our findings highlight the complexity of conducting root cause analysis for sustainability issues, as these smells often arise from a confluence of script structures, configuration choices, and deployment contexts.
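A toy scanner for one candidate smell, Monolithic Infrastructure, flagging Terraform files that declare many resources in a single script; the threshold and the regex-based notion of a resource block are assumptions for illustration, not the paper's validated definitions.

```python
import re
import sys
from pathlib import Path

# Matches Terraform resource block headers, e.g. resource "aws_instance" "web" {
RESOURCE_BLOCK = re.compile(r'^\s*resource\s+"[^"]+"\s+"[^"]+"\s*\{', re.MULTILINE)

def count_resources(tf_text: str) -> int:
    return len(RESOURCE_BLOCK.findall(tf_text))

def scan(paths, threshold: int = 20):
    """Flag files whose resource count exceeds an assumed 'monolithic' threshold."""
    for path in paths:
        n = count_resources(Path(path).read_text(encoding="utf-8", errors="ignore"))
        if n >= threshold:
            print(f"{path}: possible Monolithic Infrastructure smell ({n} resources)")

if __name__ == "__main__":
    scan(sys.argv[1:])   # usage: python scan.py main.tf modules/*.tf
```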
arXiv:2501.11247v1 Announce Type: new Abstract: Accurate and reliable link quality prediction (LQP) is crucial for optimizing network performance, ensuring communication stability, and enhancing user experience in wireless communications. However, LQP faces significant challenges due to the dynamic and lossy nature of wireless links, which are influenced by interference, multipath effects, fading, and blockage. In this paper, we propose GAT-LLM, a novel multivariate wireless link quality prediction model that combines Large Language Models (LLMs) with Graph Attention Networks (GAT) to enable accurate and reliable multivariate LQP for wireless communications. By framing LQP as a time series prediction task and appropriately preprocessing the input data, we leverage LLMs to improve the accuracy of link quality prediction. To address the limitations of LLMs in multivariate prediction, which stem from their typically handling one-dimensional data, we integrate GAT to model interdependencies among multiple variables across different protocol layers, enhancing the model's ability to handle complex dependencies. Experimental results demonstrate that GAT-LLM significantly improves the accuracy and robustness of link quality prediction, particularly in multi-step prediction scenarios.
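A minimal single-head graph-attention pass over a handful of per-layer link variables, showing how cross-variable dependencies could be aggregated before being handed to an LLM; the adjacency, dimensions, and random weights are illustrative, and this is a generic GAT building block rather than the GAT-LLM architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# 4 variables as graph nodes (e.g., RSSI, SNR, PHY rate, retransmissions),
# each with an 8-step history as its feature vector (all assumed).
X = rng.normal(size=(4, 8))
A = np.array([[1, 1, 1, 0],          # assumed dependency graph between variables
              [1, 1, 1, 1],
              [1, 1, 1, 1],
              [0, 1, 1, 1]])

W = rng.normal(size=(8, 8)) * 0.1    # shared linear transform
a_src = rng.normal(size=(8,)) * 0.1  # attention parameters (source / destination)
a_dst = rng.normal(size=(8,)) * 0.1

H = X @ W                                           # transformed node features
scores = H @ a_src[:, None] + (H @ a_dst)[None, :]  # pairwise attention logits
scores = np.where(scores > 0, scores, 0.2 * scores) # LeakyReLU
scores = np.where(A > 0, scores, -1e9)              # mask non-edges
alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
alpha = alpha / alpha.sum(axis=1, keepdims=True)    # row-wise softmax over neighbors
H_out = np.maximum(alpha @ H, 0)                    # aggregate neighbors + ReLU

# H_out (4 x 8) could then be serialized as the multivariate context the LLM
# consumes for multi-step link-quality prediction.
print(H_out.shape)
```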
arXiv:2501.12317v1 Announce Type: new Abstract: Vehicular communication networks represent both an opportunity and a challenge for providing smart mobility services by using a hybrid solution that relies on cellular connectivity and short-range communications. The evaluation of this kind of network in the present literature is overwhelmingly carried out with simulations. However, the degree of realism of the results obtained is limited, because simulations often oversimplify real-world interactions. In this article, we define an outdoor testbed to evaluate the performance of short-range vehicular communications by using real-world personal portable devices (smartphones, tablets, and laptops), two different PHY standards (IEEE 802.11g and IEEE 802.11a), and vehicles. Our test results on the 2.4 GHz band show that smartphones can be used to communicate between vehicles within a range of up to 75 m, while tablets can attain up to 125 m under mobility conditions. Moreover, we observe that vehicles equipped with laptops exchange multimedia information with nodes located more than 150 m away. The communications on the 5 GHz band achieved an effective transmission range of up to 100 m. This, together with the optimization of the protocols used, could take our commodity lightweight devices to a new realm of use in the next generation of ad hoc mobility communications for moving through the city.