cs.ET

31 posts

arXiv:2501.00280v1 Announce Type: cross Abstract: In this paper, we investigate the optimization of global quantum communication through satellite constellations. We address the challenge of quantum key distribution (QKD) across vast distances and the limitations posed by terrestrial fiber-optic networks. Our research focuses on the configuration of satellite constellations to improve QKD between ground stations and the application of innovative orbital mechanics to reduce latency in quantum information transfer. We introduce a novel approach using quantum relay satellites in Molniya orbits, enhancing communication efficiency and coverage. The use of these high eccentricity orbits allows us to extend the operational presence of satellites over targeted hemispheres, thus maximizing the quantum network's reach. Our findings provide a strategic framework for deploying quantum satellites and relay systems to achieve a robust and efficient global quantum communication network.

Yichen Gao, Guanqun Song, Ting Zhu (1/3/2025)

arXiv:2501.01058v1 Announce Type: cross Abstract: The MaxCut problem is a fundamental problem in combinatorial optimization, with significant implications across diverse domains such as logistics, network design, and statistical physics. The proposed method introduces a Quantum Genetic Algorithm (QGA) that combines a Grover-based evolutionary framework with divide-and-conquer principles, balancing theoretical rigor with practical scalability. By partitioning graphs into manageable subgraphs, optimizing each independently, and applying graph contraction to merge the solutions, the method exploits the inherent binary symmetry of MaxCut to ensure computational efficiency and robust approximation performance. Theoretical analysis establishes a foundation for the efficiency of the algorithm, while empirical evaluations provide quantitative evidence of its effectiveness. On complete graphs, the proposed method consistently achieves the true optimal MaxCut values, outperforming the Semidefinite Programming (SDP) approach, which provides up to 99.7% of the optimal solution for larger graphs. On Erdős-Rényi random graphs, the QGA demonstrates competitive performance, achieving median solutions within 92-96% of the SDP results. These results showcase the potential of the QGA framework to deliver competitive solutions, even under heuristic constraints, while demonstrating its promise for scalability as quantum hardware evolves.

Paulo A. Viana, Fernando M. de Paula Neto (1/3/2025)
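
To make the divide-and-conquer step above concrete, here is a minimal classical sketch in Python: brute force stands in for the Grover-based search on each subgraph, the toy edge list and two-way partition are illustrative assumptions, and the merge simply re-evaluates the combined labeling rather than performing the paper's graph contraction.

from itertools import product

# Toy weighted edge list; the graph and partition are illustrative assumptions.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]

def cut_value(assignment, edges):
    """Weight of edges crossing the cut defined by a 0/1 assignment."""
    return sum(w for u, v, w in edges if assignment[u] != assignment[v])

def brute_force_maxcut(nodes, edges):
    """Exhaustive search standing in for the Grover-based subroutine.
    MaxCut's binary symmetry lets us fix the first node to side 0,
    halving the search space."""
    best, best_assign = -1.0, None
    first = nodes[0]
    for bits in product([0, 1], repeat=len(nodes) - 1):
        assign = {first: 0}
        assign.update(dict(zip(nodes[1:], bits)))
        val = cut_value(assign, edges)
        if val > best:
            best, best_assign = val, assign
    return best, best_assign

# Divide: optimize two halves independently, then evaluate the merged labeling
# on the full edge set (a simplified stand-in for the paper's graph contraction).
left, right = [0, 1], [2, 3]
_, a_left = brute_force_maxcut(left, [e for e in edges if e[0] in left and e[1] in left])
_, a_right = brute_force_maxcut(right, [e for e in edges if e[0] in right and e[1] in right])
merged = {**a_left, **a_right}
print(cut_value(merged, edges), brute_force_maxcut([0, 1, 2, 3], edges)[0])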

arXiv:2412.20447v2 Announce Type: replace Abstract: The Bitcoin Network is a sophisticated accounting system that allows its underlying cryptocurrency to be trusted even in the absence of a reliable financial authority. Given its undeniable success, the technology, generally referred to as blockchain, has also been proposed as a means to improve legacy accounting systems. Accounting for real-world data, however, requires the intervention of a third party known as an Oracle, which, lacking the characteristics of a blockchain, could potentially reduce the expected integration benefit. Through a systematic review of the literature, this study investigates whether papers concerning blockchain integration in accounting consider and address the limitations posed by oracles. A broad overview of the limitations that emerged in the literature is provided and distinguished according to the specific accounting integration. The results support the view that although research on the subject comprises numerous articles, studies that actually consider oracle limitations are lacking. Interestingly, despite the scarce production of papers addressing oracles in various accounting sectors, ESG reporting already shows interesting workarounds for oracle limitations, with permissioned chains envisioned as valid support for the safe storage of sustainability data.

Giulio Caldarelli (1/3/2025)

arXiv:2408.08941v2 Announce Type: replace-cross Abstract: Optimizing quantum circuits is critical for enhancing computational speed and mitigating errors caused by quantum noise. Effective optimization must be achieved without compromising the correctness of the computations. This survey explores recent advancements in quantum circuit optimization, encompassing both hardware-independent and hardware-dependent techniques. It reviews state-of-the-art approaches, including analytical algorithms, heuristic strategies, machine learning-based methods, and hybrid quantum-classical frameworks. The paper highlights the strengths and limitations of each method, along with the challenges they pose. Furthermore, it identifies potential research opportunities in this evolving field, offering insights into the future directions of quantum circuit optimization.

Krishnageetha Karuppasamy, Varun Puram, Stevens Johnson, Johnson P Thomas (1/3/2025)

arXiv:2501.00049v1 Announce Type: new Abstract: A chatbot is an intelligent software application that automates conversations and engages users in natural language through messaging platforms. Leveraging artificial intelligence (AI), chatbots serve various functions, including customer service, information gathering, and casual conversation. Existing virtual assistant chatbots, such as ChatGPT and Gemini, demonstrate the potential of AI in Natural Language Processing (NLP). However, many current solutions rely on predefined APIs, which can result in vendor lock-in and high costs. To address these challenges, this work proposes a chatbot developed using a Sequence-to-Sequence (Seq2Seq) model with an encoder-decoder architecture that incorporates attention mechanisms and Long Short-Term Memory (LSTM) cells. By avoiding predefined APIs, this approach ensures flexibility and cost-effectiveness. The chatbot is trained, validated, and tested on a dataset specifically curated for the tourism sector in Draa-Tafilalet, Morocco. Key evaluation findings indicate that the proposed Seq2Seq model-based chatbot achieved high accuracies: approximately 99.58% in training, 98.03% in validation, and 94.12% in testing. These results demonstrate the chatbot's effectiveness in providing relevant and coherent responses within the tourism domain, highlighting the potential of specialized AI applications to enhance user experience and satisfaction in niche markets.

Lamya Benaddi, Charaf Ouaddi, Adnane Souha, Abdeslam Jakimi, Mohamed Rahouti, Mohammed Aledhari, Diogo Oliveira, Brahim Ouchao (1/3/2025)
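
The encoder-decoder architecture described above can be sketched in a few lines of Keras; the vocabulary size, sequence length, and layer widths below are illustrative assumptions rather than the paper's configuration.

import tensorflow as tf
from tensorflow.keras import layers

# Illustrative sizes, not the trained chatbot's hyperparameters.
VOCAB, EMB, UNITS, MAX_LEN = 5000, 128, 256, 20

# Encoder: embedding followed by an LSTM that also exposes its final states.
enc_in = layers.Input(shape=(MAX_LEN,), dtype="int32")
enc_emb = layers.Embedding(VOCAB, EMB)(enc_in)
enc_out, enc_h, enc_c = layers.LSTM(UNITS, return_sequences=True, return_state=True)(enc_emb)

# Decoder: LSTM initialized with the encoder states.
dec_in = layers.Input(shape=(MAX_LEN,), dtype="int32")
dec_emb = layers.Embedding(VOCAB, EMB)(dec_in)
dec_out = layers.LSTM(UNITS, return_sequences=True)(dec_emb, initial_state=[enc_h, enc_c])

# Dot-product attention over the encoder states, concatenated with decoder output.
context = layers.Attention()([dec_out, enc_out])
concat = layers.Concatenate()([dec_out, context])
logits = layers.Dense(VOCAB, activation="softmax")(concat)

model = tf.keras.Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()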

arXiv:2501.00790v1 Announce Type: new Abstract: The rapid proliferation of Industrial Internet of Things (IIoT) systems necessitates advanced, interpretable, and scalable intrusion detection systems (IDS) to combat emerging cyber threats. Traditional IDS face challenges such as high computational demands, limited explainability, and inflexibility against evolving attack patterns. To address these limitations, this study introduces the Lightweight Explainable Network Security framework (LENS-XAI), which combines robust intrusion detection with enhanced interpretability and scalability. LENS-XAI integrates knowledge distillation, variational autoencoder models, and attribution-based explainability techniques to achieve high detection accuracy and transparency in decision-making. By leveraging a training set comprising 10% of the available data, the framework optimizes computational efficiency without sacrificing performance. Experimental evaluation on four benchmark datasets: Edge-IIoTset, UKM-IDS20, CTU-13, and NSL-KDD, demonstrates the framework's superior performance, achieving detection accuracies of 95.34%, 99.92%, 98.42%, and 99.34%, respectively. Additionally, the framework excels in reducing false positives and adapting to complex attack scenarios, outperforming existing state-of-the-art methods. Key strengths of LENS-XAI include its lightweight design, suitable for resource-constrained environments, and its scalability across diverse IIoT and cybersecurity contexts. Moreover, the explainability module enhances trust and transparency, critical for practical deployment in dynamic and sensitive applications. This research contributes significantly to advancing IDS by addressing computational efficiency, feature interpretability, and real-world applicability. Future work could focus on extending the framework to ensemble AI systems for distributed environments, further enhancing its robustness and adaptability.

Muhammet Anil Yagiz, Polat Goktas (1/3/2025)
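
Of the three ingredients listed above, knowledge distillation is the simplest to illustrate; the following PyTorch sketch shows a standard soft-target distillation loss, with the temperature and mixing weight as assumed values rather than the LENS-XAI settings.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target distillation combined with the usual hard-label loss.
    Temperature T and mixing weight alpha are illustrative assumptions."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random tensors standing in for teacher/student outputs.
student = torch.randn(8, 2)   # lightweight student, 2 classes (attack / benign)
teacher = torch.randn(8, 2)   # larger teacher model's logits
labels = torch.randint(0, 2, (8,))
print(distillation_loss(student, teacher, labels))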

arXiv:2501.00977v1 Announce Type: new Abstract: The increasing demand for SSDs coupled with scaling difficulties has left manufacturers scrambling for newer SSD interfaces which promise better performance and durability. While these interfaces reduce the rigidity of traditional abstractions, they require application- or system-level changes that can impact the stability, security, and portability of systems. To make matters worse, such changes are rendered futile with the introduction of next-generation interfaces. Further, there is little guidance on data placement, and hardware specifics are often abstracted from the application layer. It is no surprise, therefore, that such interfaces have seen limited adoption, leaving behind a graveyard of experimental interfaces ranging from open-channel SSDs to zoned namespaces. In this paper, we show how shim layers can shield systems from changing hardware interfaces while benefiting from them. We present Reshim, an all-userspace shim layer that performs affinity- and lifetime-based data placement with no change to the operating system or the application. We demonstrate Reshim's ease of adoption with host-device coordination for three widely-used data-intensive systems: RocksDB, MongoDB, and CacheLib. With Reshim, these systems see 2-6 times higher write throughput, up to 6 times lower latency, and reduced write amplification compared to filesystems like F2FS. Reshim performs on par with application-specific backends like ZenFS while offering more generality, lower latency, and richer data placement. With Reshim we demonstrate the value of isolating the complexity of the placement logic, allowing easy deployment of dynamic placement rules across several applications and storage interfaces.

Devashish R. Purandare, Peter Alvaro, Avani Wildani, Darrell D. E. Long, Ethan L. Miller (1/3/2025)
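
A toy sketch of affinity- and lifetime-based placement, in the spirit of the shim described above; the zone layout, hint names, and thresholds are assumptions for illustration, not Reshim's interface.

# Group writes that belong together and are expected to die together into the
# same zone, falling back to the emptiest zone. All fields are hypothetical.
def choose_zone(stream_id, expected_lifetime_s, zones):
    candidates = [z for z in zones
                  if abs(z["lifetime_s"] - expected_lifetime_s) < 0.5 * expected_lifetime_s
                  and z["free_bytes"] > 0]
    if candidates:
        # Prefer a zone already holding data from the same stream (affinity).
        same_stream = [z for z in candidates if stream_id in z["streams"]]
        pool = same_stream or candidates
        return max(pool, key=lambda z: z["free_bytes"])
    return max(zones, key=lambda z: z["free_bytes"])

zones = [
    {"id": 0, "lifetime_s": 60, "free_bytes": 1 << 30, "streams": {"wal"}},
    {"id": 1, "lifetime_s": 86400, "free_bytes": 2 << 30, "streams": {"sst-l2"}},
]
print(choose_zone("wal", 45, zones)["id"])        # short-lived log-like data -> zone 0
print(choose_zone("sst-l2", 90000, zones)["id"])  # long-lived data -> zone 1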

arXiv:2501.01052v1 Announce Type: new Abstract: Compute-in-memory (CiM) emerges as a promising solution to solve hardware challenges in artificial intelligence (AI) and the Internet of Things (IoT), particularly addressing the "memory wall" issue. By utilizing nonvolatile memory (NVM) devices in a crossbar structure, CiM efficiently accelerates multiply-accumulate (MAC) computations, the crucial operations in neural networks and other AI models. Among various NVM devices, the Ferroelectric FET (FeFET) is particularly appealing for ultra-low-power CiM arrays due to its CMOS compatibility, voltage-driven write/read mechanisms, and high ION/IOFF ratio. Moreover, subthreshold-operated FeFETs, which operate at scaled voltages in the subthreshold region, can further minimize the power consumption of the CiM array. However, subthreshold FeFETs are susceptible to temperature drift, resulting in computation accuracy degradation. Existing solutions exhibit weak temperature resilience at larger array sizes and only support 1-bit operations. In this paper, we propose TReCiM, an ultra-low-power temperature-resilient multibit 2FeFET-1T CiM design that reliably performs MAC operations in the subthreshold-FeFET region with temperature ranging from 0 to 85 degrees Celsius at scale. We benchmark our design using the NeuroSim framework in the context of the VGG-8 neural network architecture running the CIFAR-10 dataset. Benchmarking results suggest that when considering temperature drift impact, our proposed TReCiM array achieves 91.31% accuracy, a 1.86% accuracy improvement compared to the existing 1-bit 2T-1FeFET CiM array. Furthermore, our proposed design achieves 48.03 TOPS/W energy efficiency at the system level, comparable to existing designs with smaller technology feature sizes.

Yifei Zhou, Thomas Kämpfe, Kai Ni, Hussam Amrouch, Cheng Zhuo, Xunzhao Yin (1/3/2025)
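
As a rough illustration of why temperature drift matters for analog MAC, the numpy sketch below applies a linear, temperature-dependent drift to the stored conductances before the multiply-accumulate; the drift coefficient and array size are assumptions, not measured FeFET behavior.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, size=(64, 64))   # stored conductance states (illustrative)
inputs = rng.integers(0, 2, size=64)             # binary input vector on the word lines

def mac_with_drift(weights, inputs, temp_c, drift_per_degc=0.002):
    """Column-wise multiply-accumulate where each cell current drifts
    linearly with temperature away from a 25 C reference point (assumed model)."""
    drift = 1.0 + drift_per_degc * (temp_c - 25.0)
    return (weights * drift).T @ inputs

ideal = mac_with_drift(weights, inputs, 25.0)
hot = mac_with_drift(weights, inputs, 85.0)
print(np.max(np.abs(hot - ideal) / np.abs(ideal)))  # relative MAC error introduced by drift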

arXiv:2501.01087v1 Announce Type: new Abstract: Time Series Forecasting (TSF) is an important application across many fields. There is a debate about whether Transformers, despite being good at understanding long sequences, struggle with preserving temporal relationships in time series data. Recent research suggests that simpler linear models might outperform, or at least provide competitive performance compared to, complex Transformer-based models for TSF tasks. In this paper, we propose a novel data-efficient architecture, GLinear, for multivariate TSF that exploits periodic patterns to provide better accuracy. It also provides better prediction accuracy while using a smaller amount of historical data than other state-of-the-art linear predictors. Four different datasets (ETTh1, Electricity, Traffic, and Weather) are used to evaluate the performance of the proposed predictor. A performance comparison with state-of-the-art linear architectures (such as NLinear, DLinear, and RLinear) and a transformer-based time series predictor (Autoformer) shows that GLinear, despite being parametrically efficient, significantly outperforms the existing architectures in most cases of multivariate TSF. We hope that the proposed GLinear opens new fronts for the research and development of simpler, data- and computationally efficient architectures for time-series analysis. The source code is publicly available on GitHub.

Syed Tahir Hussain Rizvi, Neel Kanwal, Muddasar Naeem, Alfredo Cuzzocrea, Antonio Coronato (1/3/2025)
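
Since GLinear itself is not reproduced here, the sketch below shows the general shape of the linear baselines it is compared against: an NLinear-style per-channel projection from the lookback window to the forecast horizon in PyTorch. The window lengths and channel count are illustrative.

import torch
import torch.nn as nn

class LinearForecaster(nn.Module):
    """A generic NLinear/DLinear-style baseline: one shared linear map from the
    lookback window to the horizon, applied independently to each channel.
    This is not the GLinear architecture itself, only the family it is compared to."""
    def __init__(self, lookback, horizon):
        super().__init__()
        self.proj = nn.Linear(lookback, horizon)

    def forward(self, x):                 # x: (batch, lookback, channels)
        last = x[:, -1:, :]               # NLinear trick: subtract the last observed value
        out = self.proj((x - last).transpose(1, 2))  # (batch, channels, horizon)
        return out.transpose(1, 2) + last            # add it back: (batch, horizon, channels)

model = LinearForecaster(lookback=96, horizon=24)
y = model(torch.randn(32, 96, 7))          # e.g. 7 channels, as in ETTh1
print(y.shape)                             # torch.Size([32, 24, 7])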

arXiv:2501.01154v1 Announce Type: new Abstract: In probability theory, the partition function is a normalizing factor used to reduce a probability function to a density function with total probability of one. Among the statistical models used to represent joint distributions, Markov random fields (MRF) can efficiently represent statistical dependencies between variables. As the number of terms in the partition function scales exponentially with the number of variables, it cannot be computed exactly in a reasonable time for large instances. In this paper, we aim to take advantage of the exponential scalability of quantum computing to speed up the estimation of the partition function of an MRF representing the dependencies between operating variables of an airborne radar. For that purpose, we implement a quantum algorithm for partition function estimation in the one clean qubit model. After proposing suitable formulations, we discuss the performance and scalability of our approach in comparison to the theoretical performance of the algorithm.

Timothe Presles, Cyrille Enderli, Gilles Burel, El Houssain Baghious (1/3/2025)
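
For concreteness, the quantity being estimated can be computed exactly by brute force on a toy pairwise MRF, which also shows why the cost grows as 2^n; the couplings and problem size below are illustrative assumptions.

import numpy as np
from itertools import product

# Brute-force partition function of a tiny pairwise (Ising-like) MRF:
# Z = sum over all configurations x of exp(sum_{(i,j)} J_ij * x_i * x_j)
J = {(0, 1): 0.5, (1, 2): -0.3, (2, 3): 0.8, (0, 3): 0.2}   # assumed couplings
n = 4

def energy(x):
    return sum(Jij * x[i] * x[j] for (i, j), Jij in J.items())

Z = sum(np.exp(energy(x)) for x in product([-1, 1], repeat=n))
print(Z)  # exact value; the number of terms grows as 2^n, hence the quantum speedup target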

arXiv:2501.01189v1 Announce Type: new Abstract: Recent advancements in connected autonomous vehicle (CAV) technology have sparked growing research interest in lane-free traffic (LFT). LFT envisions a scenario where all vehicles are CAVs, coordinating their movements without lanes to achieve smoother traffic flow and higher road capacity. This potentially reduces congestion without building new infrastructure. However, the transition phase will likely involve non-connected actors such as human-driven vehicles (HDVs) or independent AVs sharing the roads. This raises the question of how LFT performance is impacted when not all vehicles are CAVs, as these non-connected vehicles may prioritize their own benefits over system-wide improvements. This paper addresses this question through microscopic simulation on a ring road, where CAVs follow the potential lines (PL) controller for LFT, while HDVs adhere to a strip-based car-following model. The PL controller is also modified to enforce safe velocities and prevent collisions. The results reveal that even a small percentage of HDVs can significantly disrupt LFT flow: 5% HDVs reduce LFT's maximum road capacity by 16%, and 20% HDVs nearly halve it. The study also develops an adaptive potential (APL) controller that forms APL corridors with modified PLs in the surroundings of HDVs. APL shows a peak traffic flow improvement of 23.6% over the PL controller. The study indicates that a penetration rate of approximately 60% CAVs in LFT is required before significant benefits of LFT appear compared to a scenario with all HDVs. These findings open a new research direction on minimizing the adverse effects of non-connected vehicles on LFT.

Arslan Ali Syed, Majid Rostami-Shahrbabaki, Klaus Bogenberger (1/3/2025)

arXiv:2501.01285v1 Announce Type: new Abstract: Augmented Reality (AR) functionalities may be effectively leveraged in collaborative service scenarios (e.g., remote maintenance, on-site building, street gaming, etc.). Standard development cycles for collaborative AR require coding for each specific visualization platform and implementing the necessary control mechanisms over the shared assets. This paper describes SARA, an architecture to support cross-platform collaborative Augmented Reality applications based on microservices. The architecture is designed around the concept of collaboration models (for example, turn-, layer-, ownership-, hierarchy-based, and unconstrained), which regulate the interaction and permissions of each user over the AR assets. Thanks to the reusability of its components, SARA enables developers to focus on the application logic while avoiding the implementation of the communication protocol, data model handling, and orchestration between the different, possibly heterogeneous, devices involved in the collaboration (i.e., mobile or wearable AR devices using different operating systems). To describe how to build an application based on SARA, a prototype for HoloLens and iOS devices has been implemented. The prototype is a collaborative voxel-based game in which several players work together in real time on a piece of land, adding or eliminating cubes to create buildings and landscapes. Turn-based and unconstrained collaboration models are applied to regulate the interaction. The development workflow for this case study shows how the architecture serves as a framework to support the deployment of collaborative AR services, enabling the reuse of collaboration model components and agnostically handling client technologies.

Diego Vaquero-Melchor, Ana M. Bernardos, Luca Bergesio (1/3/2025)
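
A minimal sketch of how a collaboration model can gate edits on shared assets, using the turn-based and unconstrained variants mentioned above; the class names and rules are illustrative assumptions, not the SARA API.

class TurnBasedModel:
    """Only the user holding the current turn may modify shared AR assets."""
    def __init__(self, users):
        self.users = list(users)
        self.turn = 0

    def can_edit(self, user, asset_id):
        return user == self.users[self.turn]

    def end_turn(self):
        self.turn = (self.turn + 1) % len(self.users)

class UnconstrainedModel:
    """Every collaborator may modify any asset at any time."""
    def can_edit(self, user, asset_id):
        return True

model = TurnBasedModel(["alice", "bob"])
print(model.can_edit("alice", "voxel-42"), model.can_edit("bob", "voxel-42"))  # True False
model.end_turn()
print(model.can_edit("bob", "voxel-42"))  # True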

arXiv:2501.00211v1 Announce Type: cross Abstract: Autonomous Vehicles (AVs) represent a transformative advancement in the transportation industry. These vehicles have sophisticated sensors, advanced algorithms, and powerful computing systems that allow them to navigate and operate without direct human intervention. However, AVs' systems still get overwhelmed when they encounter a complex dynamic change in the environment resulting from an accident or a roadblock for maintenance. The advanced features of Sixth Generation (6G) technology are set to offer strong support to AVs, enabling real-time data exchange and management of complex driving maneuvers. This paper proposes a Multi-Agent Reinforcement Learning (MARL) framework to improve AVs' decision-making in dynamic and complex Intelligent Transportation Systems (ITS) utilizing 6G-V2X communication. The primary objective is to enable AVs to avoid roadblocks efficiently by changing lanes while maintaining optimal traffic flow and maximizing the mean harmonic speed. To ensure realistic operations, key constraints such as minimum vehicle speed, roadblock count, and lane change frequency are integrated. We train and test the proposed MARL model with two traffic simulation scenarios using the SUMO and TraCI interface. Through extensive simulations, we demonstrate that the proposed model adapts to various traffic conditions and achieves efficient and robust traffic flow management. The trained model effectively navigates dynamic roadblocks, promoting improved traffic efficiency in AV operations with more than 70% efficiency over other benchmark solutions.

Noor Aboueleneen, Yahuza Bello, Abdullatif Albaseer, Ahmed Refaey Hussein, Mohamed Abdallah, Ekram Hossain (1/3/2025)
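
One way to picture the stated objective is as a per-step reward that favors the mean harmonic speed while penalizing lane changes and vehicles below the minimum speed; the weights and the minimum-speed value below are assumptions, not the paper's exact reward terms.

import statistics

def step_reward(speeds_mps, lane_changes, min_speed=5.0, w_speed=1.0, w_lc=0.1, w_slow=1.0):
    """Illustrative reward: mean harmonic speed minus penalties for lane changes
    and for vehicles violating an assumed minimum-speed constraint."""
    harmonic = statistics.harmonic_mean([max(v, 1e-3) for v in speeds_mps])
    slow_penalty = sum(1 for v in speeds_mps if v < min_speed)
    return w_speed * harmonic - w_lc * lane_changes - w_slow * slow_penalty

print(step_reward([12.0, 15.0, 9.0], lane_changes=2))   # free-flowing traffic
print(step_reward([3.0, 4.0, 2.0], lane_changes=5))     # congested traffic near a roadblock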

arXiv:2412.18519v1 Announce Type: cross Abstract: As quantum hardware continues to scale, managing the heterogeneity of resources and applications -- spanning diverse quantum and classical hardware and software frameworks -- becomes increasingly critical. Pilot-Quantum addresses these challenges as a middleware designed to provide unified application-level management of resources and workloads across hybrid quantum-classical environments. It is built on a rigorous analysis of existing quantum middleware systems and application execution patterns. It implements the Pilot Abstraction conceptual model, originally developed for HPC, to manage resources, workloads, and tasks. It is designed for quantum applications that rely on task parallelism, including: (i) hybrid algorithms, such as variational approaches, and (ii) circuit cutting systems, used to partition and execute large quantum circuits. Pilot-Quantum facilitates seamless integration of quantum processing units (QPUs), classical CPUs, and GPUs, while supporting high-level programming frameworks like Qiskit and PennyLane. This enables users to design and execute hybrid workflows across diverse computing resources efficiently. The capabilities of Pilot-Quantum are demonstrated through mini-applications -- simplified yet representative kernels focusing on critical performance bottlenecks. We present several mini-apps, including circuit execution across hardware and simulator platforms (e.g., IBM's Eagle QPU), distributed state vector simulation, circuit cutting, and quantum machine learning workflows, demonstrating significant scale (e.g., a 41-qubit simulation on 256 GPUs) and speedups (e.g., 15x for QML, 3.5x for circuit cutting).

Pradeep Mantha, Florian J. Kiwit, Nishant Saurabh, Shantenu Jha, Andre Luckow (12/25/2024)
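
The task-parallel pattern that such middleware manages can be approximated locally with Qiskit's Aer simulator and a process pool; this is a generic sketch of the execution pattern only and does not use the Pilot-Quantum API.

from concurrent.futures import ProcessPoolExecutor
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def run_circuit(n_qubits):
    """Build and simulate a small GHZ circuit; each call is an independent task."""
    qc = QuantumCircuit(n_qubits)
    qc.h(0)
    for q in range(n_qubits - 1):
        qc.cx(q, q + 1)
    qc.measure_all()
    backend = AerSimulator()
    return backend.run(transpile(qc, backend), shots=1024).result().get_counts()

if __name__ == "__main__":
    # Run several independent circuit tasks in parallel on classical workers.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for counts in pool.map(run_circuit, [3, 4, 5, 6]):
            print(counts)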

arXiv:2404.19165v2 Announce Type: replace Abstract: Spiking neural networks (SNNs) inherently rely on the timing of signals for representing and processing information. Incorporating trainable transmission delays, alongside synaptic weights, is crucial for shaping these temporal dynamics. While recent methods have shown the benefits of training delays and weights in terms of accuracy and memory efficiency, they rely on discrete time, approximate gradients, and full access to internal variables like membrane potentials. This limits their precision, efficiency, and suitability for neuromorphic hardware due to increased memory requirements and I/O bandwidth demands. To address these challenges, we propose DelGrad, an analytical, event-based method to compute exact loss gradients for both synaptic weights and delays. The inclusion of delays in the training process emerges naturally within our proposed formalism, enriching the model's search space with a temporal dimension. Moreover, DelGrad, grounded purely in spike timing, eliminates the need to track additional variables such as membrane potentials. To showcase this key advantage, we demonstrate the functionality and benefits of DelGrad on the BrainScaleS-2 neuromorphic platform, by training SNNs in a chip-in-the-loop fashion. For the first time, we experimentally demonstrate the memory efficiency and accuracy benefits of adding delays to SNNs on noisy mixed-signal hardware. Additionally, these experiments also reveal the potential of delays for stabilizing networks against noise. DelGrad opens a new way for training SNNs with delays on neuromorphic hardware, resulting in fewer required parameters, higher accuracy, and easier hardware training.

Julian G\"oltz, Jimmy Weber, Laura Kriener, Sebastian Billaudelle, Peter Lake, Johannes Schemmel, Melika Payvand, Mihai A. Petrovici12/25/2024

arXiv:2409.17945v2 Announce Type: replace Abstract: Modular autonomous vehicles (MAVs) represent a groundbreaking concept that integrates modularity into the ongoing development of autonomous vehicles. This innovative design introduces unique features to traffic flow, allowing multiple modules to seamlessly join together and operate collectively. To understand the traffic flow characteristics involving these vehicles and their collective operations, this study established a modeling framework specifically designed to simulate their behavior within traffic flow. The mixed traffic flow, incorporating arbitrarily formed trains of various modular sizes, is modeled and studied. Simulations are conducted under varying levels of traffic demand and penetration rates to examine the traffic flow dynamics in the presence of these vehicles and their operations. The microscopic trajectories, MAV train compositions, and macroscopic fundamental diagrams of the mixed traffic flow are analyzed. The simulation findings indicate that integrating MAVs and their collective operations can substantially enhance capacity, with the extent of improvement depending on the penetration rate in mixed traffic flow. Notably, the capacity nearly doubles when the penetration rate exceeds 75%. Furthermore, their presence significantly influences and regulates the free-flow speed of the mixed traffic. Particularly, when variations in operational speed limits exist between the MAVs and the background traffic, the mixed traffic adjusts to the operating velocity of these vehicles. This study provides insights into potential future traffic flow systems incorporating emerging MAV technologies.

Lanhang Ye, Toshiyuki Yamamoto (12/25/2024)

arXiv:2312.13185v2 Announce Type: replace-cross Abstract: Measurement-based quantum computation (MBQC) is a paradigm for quantum computation where computation is driven by local measurements on a suitably entangled resource state. In this work we show that MBQC is related to a model of quantum computation based on Clifford quantum cellular automata (CQCA). Specifically, we show that certain MBQCs can be directly constructed from CQCAs, which yields a simple and intuitive circuit-model representation of MBQC in terms of quantum computation based on CQCA. We apply this description to construct various MBQC-based Ansätze for parameterized quantum circuits, demonstrating that the different Ansätze may lead to significantly different performances on different learning tasks. In this way, MBQC yields a family of hardware-efficient Ansätze that may be adapted to specific problem settings and is particularly well suited for architectures with translationally invariant gates such as neutral atoms.

Hendrik Poulsen Nautrup, Hans J. Briegel (12/25/2024)

arXiv:2405.00755v2 Announce Type: replace Abstract: Quantum machine learning is a new research field combining quantum information science and machine learning. Quantum computing technologies appear to be particularly well suited for addressing problems in the health sector efficiently. They have the potential to handle large datasets more effectively than classical models and offer greater transparency and interpretability for clinicians. Alzheimer's disease is a neurodegenerative brain disorder that mostly affects elderly people, causing significant cognitive impairments. It is the most common cause of dementia, and it affects memory, thought, learning abilities, and movement control. This disease has no cure; consequently, an early diagnosis is fundamental for reducing its impact. As many studies have conjectured, the analysis of handwriting can be effective for diagnosis. The DARWIN (Diagnosis AlzheimeR WIth haNdwriting) dataset contains handwriting samples from people affected by Alzheimer's disease and a group of healthy people. Here we apply quantum AI to this use case. In particular, we use this dataset to test classical methods for classification and compare their performance with that obtained via quantum machine learning methods. We find that quantum methods generally perform better than classical methods. Our results pave the way for future new quantum machine learning applications in early-screening diagnostics in the healthcare domain.

Giacomo Cappiello, Filippo Caruso (12/24/2024)

arXiv:2412.16774v1 Announce Type: new Abstract: Researchers all over the world are employing a variety of analysis approaches in an attempt to provide a safer and faster solution for sharing resources via a Multi-access Edge Computing system. Multi-access Edge Computing (MEC) is a job-sharing method within the edge server network whose main aim is to maximize the pace of the computing process, resulting in a more powerful and enhanced user experience. Although there are many other options when it comes to determining the fastest method for computing processes, our paper introduces a more extensive change to the system model to ensure no data loss and/or task failure due to any disruption in the edge node cluster. RAFT, a powerful consensus algorithm, can be used to introduce an auction theory approach in our system, which enables the edge device to make the best possible decision regarding how to respond to a request from the client. Through the use of the RAFT consensus, blockchain may be used to improve the safety, security, and efficiency of applications by deploying it on trusted edge base stations. In addition to discussing the best distributed-system approach for our MEC system, a Deep Deterministic Policy Gradient (DDPG) algorithm is also presented in order to reduce overall system latency. Our proposal assumes the existence of a cluster of N edge nodes, each containing a series of tasks that require execution. A DDPG algorithm is implemented in this cluster so that an auction can be held among the edge nodes to decide which node is best suited to perform the task provided by the client.

Zain Khaliq, Ahmed Refaey Hussein (12/24/2024)
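
A toy version of the auction step: each edge node bids its estimated completion time for the client's task and the lowest bid wins. The bid model and the numbers are illustrative assumptions, not the paper's DDPG-driven policy.

def estimate_bid(node, task_cycles):
    """A node's bid is its queueing delay plus execution time plus network RTT."""
    queue_delay = node["queued_cycles"] / node["cpu_hz"]
    exec_delay = task_cycles / node["cpu_hz"]
    return queue_delay + exec_delay + node["network_rtt_s"]

def run_auction(nodes, task_cycles):
    """Sealed-bid auction: the node with the lowest estimated completion time wins."""
    bids = {n["id"]: estimate_bid(n, task_cycles) for n in nodes}
    winner = min(bids, key=bids.get)
    return winner, bids

nodes = [
    {"id": "edge-1", "cpu_hz": 2.0e9, "queued_cycles": 4.0e9, "network_rtt_s": 0.010},
    {"id": "edge-2", "cpu_hz": 3.0e9, "queued_cycles": 9.0e9, "network_rtt_s": 0.005},
    {"id": "edge-3", "cpu_hz": 1.5e9, "queued_cycles": 1.0e9, "network_rtt_s": 0.020},
]
print(run_auction(nodes, task_cycles=2.0e9))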

arXiv:2412.16847v1 Announce Type: new Abstract: Monitoring fatigue is essential for improving safety, particularly for people who work long shifts or in high-demand workplaces. The development of wearable technologies, such as fitness trackers and smartwatches, has made it possible to continuously analyze physiological signals in real time to determine a person's level of exhaustion. This has allowed for timely insights into preventing hazards associated with fatigue. This review focuses on the integration of wearable technology and artificial intelligence (AI) for fatigue detection, adhering to the PRISMA principles. As part of the systematic review process, studies that used signal processing methods to extract pertinent features from physiological data, such as ECG, EMG, and EEG, were analyzed. These features were then examined using machine learning and deep learning models to find patterns of fatigue and indicators of impending exhaustion. It was demonstrated that wearable technology and cutting-edge AI methods could accurately identify fatigue through multi-modal data analysis. By merging data from several sources, information fusion techniques enhanced the precision and dependability of fatigue evaluation. The assessment noted significant developments in AI-driven signal analysis, which should improve real-time fatigue monitoring while requiring minimal intervention. Wearable solutions powered by AI and multi-source data fusion present a strong option for real-time fatigue monitoring in the workplace and other critical environments. These developments open the door for further improvements in this field and offer useful tools for enhancing safety and reducing fatigue-related hazards.

Kourosh Kakhi, Senthil Kumar Jagatheesaperumal, Abbas Khosravi, Roohallah Alizadehsani, U Rajendra Acharya (12/24/2024)
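
A common shape for the pipelines surveyed above is windowed feature extraction followed by a classifier; the sketch below uses synthetic data standing in for ECG/EMG/EEG recordings, so the features, labels, and window length are purely illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(signal, win=250):
    """Split a 1-D signal into fixed windows and compute simple statistical features."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), np.abs(np.diff(w)).mean(), w.max() - w.min()])
    return np.array(feats)

# Synthetic recordings standing in for "alert" vs "fatigued" physiological signals.
alert = rng.normal(0.0, 1.0, size=10000)
fatigued = rng.normal(0.0, 1.6, size=10000)
X = np.vstack([window_features(alert), window_features(fatigued)])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data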