cs.CR

arXiv:2503.10185v1 Announce Type: new Abstract: Following the publication of Bitcoin's arguably most famous attack, selfish mining, various works have introduced mechanisms to enhance blockchain systems' game-theoretic resilience. Some reward mechanisms, like FruitChains, have been shown to be equilibria in theory. However, their guarantees assume unrealistic parameters, and their performance degrades significantly in practical deployment settings. In this work, we introduce a reward allocation mechanism, called Proportional Splitting (PRS), which outperforms the existing state of the art. We show that, for large enough parameters, PRS is an equilibrium, offering the same theoretical guarantees as the state of the art. In addition, for practical, realistically small, parameters, PRS outperforms all existing reward mechanisms across an array of metrics. We implement PRS on top of a variant of PoEM, a Proof-of-Work (PoW) protocol that enables a more accurate estimation of each party's mining power compared to, e.g., Bitcoin. We then evaluate PRS both theoretically and in practice. On the theoretical side, we show that our protocol combined with PRS is an equilibrium and guarantees fairness, similar to FruitChains. In practice, we compare PRS with an array of existing reward mechanisms and show that, assuming an accurate estimation of the mining power distribution, it outperforms them across various well-established metrics. Finally, we realize this assumption by approximating the power distribution via low-work objects called "workshares" and quantify the tradeoff between the approximation's accuracy and storage overhead.
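
As a rough illustration of the idea, the sketch below splits a block reward in proportion to a power distribution estimated by counting workshares per miner; the function names and the plain counting estimator are assumptions for exposition, not the paper's actual mechanism.

```python
# Minimal sketch, assuming power shares are approximated by workshare counts.
from collections import Counter

def estimate_power(workshare_miners):
    """Estimate each miner's power share from the miners of observed workshares."""
    counts = Counter(workshare_miners)
    total = sum(counts.values())
    return {miner: c / total for miner, c in counts.items()}

def split_reward(block_reward, power_shares):
    """Allocate the block reward proportionally to the estimated power shares."""
    return {miner: block_reward * share for miner, share in power_shares.items()}

# Example: 6 workshares observed -> A gets 2/3, B gets 1/3 of the reward.
shares = estimate_power(["A", "A", "B", "A", "B", "A"])
print(split_reward(6.0, shares))  # {'A': 4.0, 'B': 2.0}
```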

Lukas Aumayr, Zeta Avarikioti, Dimitris Karakostas, Karl Kreder, Shreekara Shastry (3/14/2025)

arXiv:2503.10207v1 Announce Type: new Abstract: Kyber, an IND-CCA2-secure lattice-based post-quantum key-encapsulation mechanism, is the winner of the first post-quantum cryptography standardization process of the US National Institute of Standards and Technology. In this work, we provide an efficient implementation of Kyber on the ESP32, a very popular microcontroller for Internet of Things applications. We hand-partition the Kyber algorithm to utilize the ESP32's dual-core architecture, which allows us to speed up its execution by 1.21x (keygen), 1.22x (encaps), and 1.20x (decaps). We also explore the possibility of further improvement by utilizing the ESP32's SHA and AES coprocessors and achieve a cumulative speed-up of 1.72x (keygen), 1.84x (encaps), and 1.69x (decaps).

Fabian Segatz, Muhammad Ihsan Al Hafiz (3/14/2025)

arXiv:2503.10058v1 Announce Type: new Abstract: Money laundering is a financial crime that obscures the origin of illicit funds, necessitating the development and enforcement of anti-money laundering (AML) policies by governments and organizations. The proliferation of mobile payment platforms and smart IoT devices has significantly complicated AML investigations. As payment networks become more interconnected, there is an increasing need for efficient real-time detection to process large volumes of transaction data on heterogeneous payment systems run by different operators, such as digital currencies, cryptocurrencies, and account-based payments. Most of these mobile payment networks are supported by connected devices, many of which are considered IoT devices in the FinTech space and constantly generate data. Furthermore, the growing complexity and unpredictability of transaction patterns across these networks contribute to a higher incidence of false positives. While machine learning solutions have the potential to enhance detection efficiency, their application in AML faces unique challenges, such as addressing privacy concerns tied to sensitive financial data and managing the real-world constraint of limited data availability due to data regulations. Existing surveys in the AML literature broadly review machine learning approaches for money laundering detection, but they often lack an in-depth exploration of advanced deep learning techniques, an emerging field with significant potential. To address this gap, this paper conducts a comprehensive review of deep learning solutions and the challenges associated with their use in AML. Additionally, we propose a novel framework that applies the least-privilege principle by integrating machine learning techniques, codifying AML red flags, and employing account profiling to provide context for predictions and enable effective fraud detection under limited data availability....

Jiani Fan, Lwin Khin Shar, Ruichen Zhang, Ziyao Liu, Wenzhuo Yang, Dusit Niyato, Bomin Mao, Kwok-Yan Lam (3/14/2025)

arXiv:2503.09964v1 Announce Type: new Abstract: Large Multimodal Models (LMMs) are increasingly vulnerable to AI-generated extremist content, including photorealistic images and text, which can be used to bypass safety mechanisms and generate harmful outputs. However, existing datasets for evaluating LMM robustness offer limited exploration of extremist content, often lacking AI-generated images, diverse image generation models, and comprehensive coverage of historical events, which hinders a complete assessment of model vulnerabilities. To fill this gap, we introduce ExtremeAIGC, a benchmark dataset and evaluation framework designed to assess LMM vulnerabilities against such content. ExtremeAIGC simulates real-world events and malicious use cases by curating diverse text- and image-based examples crafted using state-of-the-art image generation techniques. Our study reveals alarming weaknesses in LMMs, demonstrating that even cutting-edge safety measures fail to prevent the generation of extremist material. We systematically quantify the success rates of various attack strategies, exposing critical gaps in current defenses and emphasizing the need for more robust mitigation strategies.

Bhavik Chandna, Mariam Aboujenane, Usman Naseem (3/14/2025)

arXiv:2503.10171v1 Announce Type: new Abstract: Graph databases have garnered extensive attention and research due to their ability to manage relationships between entities efficiently. Today, many graph search services are outsourced to a third-party server for storage and computational support. Nevertheless, the outsourcing paradigm may invade the privacy of graphs. PeGraph is the latest scheme achieving encrypted search over social graphs to address this privacy leakage; it maintains two data structures, XSet and TSet, inspired by the OXT technique, to support encrypted conjunctive search. However, PeGraph still exhibits limitations inherent to the underlying OXT: it does not provide transparent search capabilities, suffers from expensive computation and result pattern leakage, and fails to support search over dynamic encrypted graph databases or result verification. In this paper, we propose SecGraph to address the first two limitations. SecGraph adopts a novel system architecture that leverages an SGX-enabled cloud server to provide users with secure and transparent search services, since secret-key protection and computational overhead are offloaded to the cloud server. In addition, we design an LDCF-encoded XSet based on the Logarithmic Dynamic Cuckoo Filter to facilitate efficient plaintext computation in trusted memory, effectively mitigating the risks of result pattern leakage and of performance degradation due to exceeding the limited trusted memory capacity. Finally, we design a new dynamic version of TSet, named Twin-TSet, to enable conjunctive search over dynamic encrypted graph databases. To support verifiable search, we further propose VSecGraph, which utilizes a procedure-oriented verification method to verify all data structures loaded into the trusted memory, thus bypassing the computational overhead associated with the client's local verification.
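
For readers unfamiliar with the underlying structure, here is a minimal textbook cuckoo filter in Python; the paper's Logarithmic Dynamic Cuckoo Filter (LDCF) builds a dynamically growing hierarchy on top of this basic insert/lookup scheme, which is not reproduced here.

```python
import random, hashlib

class CuckooFilter:
    """Plain cuckoo filter sketch (not the paper's LDCF variant)."""

    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        assert num_buckets & (num_buckets - 1) == 0  # power of two for the XOR trick
        self.m, self.b, self.max_kicks = num_buckets, bucket_size, max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _fp(self, item):
        return hashlib.sha256(item.encode()).digest()[0] or 1  # 1-byte fingerprint

    def _i1(self, item):
        return int.from_bytes(hashlib.md5(item.encode()).digest()[:4], "big") % self.m

    def _i2(self, i, fp):
        return i ^ ((fp * 0x5BD1E995) % self.m)  # partial-key cuckoo hashing

    def insert(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        for i in (i1, self._i2(i1, fp)):
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, self._i2(i1, fp)))
        for _ in range(self.max_kicks):          # evict residents until one fits
            j = random.randrange(len(self.buckets[i]))
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i = self._i2(i, fp)                  # alternate bucket of the evictee
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        return False                             # filter considered full

    def contains(self, item):
        fp, i1 = self._fp(item), self._i1(item)
        return fp in self.buckets[i1] or fp in self.buckets[self._i2(i1, fp)]
```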

Qiuhao Wang, Xu Yang, Yiwei Liu, Saiyu Qi, Hongguang Zhao, Ke Li, Yong Qi (3/14/2025)

arXiv:2404.18407v2 Announce Type: replace Abstract: Physical design watermarking on contemporary integrated circuit (IC) layouts encodes signatures without considering the dense connections and design constraints, which can lead to performance degradation in the watermarked products. This paper presents ICMarks, a quality-preserving and robust watermarking framework for modern IC physical design. ICMarks embeds unique watermark signatures during the physical design's placement stage, thereby authenticating IC layout ownership. ICMarks's novelty lies in (i) strategically identifying a region of cells to watermark with minimal impact on layout performance and (ii) a two-level watermarking framework for augmented robustness against potential removal and forging attacks. Extensive evaluations on benchmarks of different design objectives and sizes validate that ICMarks incurs no degradation in wirelength or timing metrics while successfully proving ownership. Furthermore, we demonstrate ICMarks is robust against two major watermarking attack categories, namely watermark removal and forging attacks; even if the adversaries have prior knowledge of the watermarking scheme, the signatures cannot be removed without significantly undermining the layout quality.

Ruisi Zhang, Rachel Selina Rajarathnam, David Z. Pan, Farinaz Koushanfar (3/14/2025)

arXiv:2503.09735v1 Announce Type: new Abstract: Adversarial examples are a major problem for machine learning models, leading to a continuous search for effective defenses. One promising direction is to leverage model explanations to better understand and defend against these attacks. We examined AmI, a method proposed in a NeurIPS 2018 spotlight paper that uses model explanations to detect adversarial examples. Our study shows that while AmI is a promising idea, its performance is too dependent on specific settings (e.g., hyperparameters) and on external factors such as the operating system and the deep learning framework used, and these drawbacks limit AmI's practical use. Our findings highlight the need for more robust defense mechanisms that are effective under various conditions. In addition, we advocate for a comprehensive evaluation framework for defense techniques.

Qian Ma, Ziping Ye (3/14/2025)

arXiv:2503.09727v1 Announce Type: new Abstract: Knowledge graph reasoning (KGR), which answers complex logical queries over large knowledge graphs (KGs), represents an important artificial intelligence task with a range of applications. Many KGs require extensive domain expertise and engineering effort to build and are hence considered proprietary within organizations and enterprises. Yet, spurred by their commercial and research potential, there is a growing trend to make KGR systems, (partially) built upon private KGs, publicly available through reasoning APIs. The inherent tension between maintaining the confidentiality of KGs and ensuring the accessibility of KGR systems motivates our study of KG extraction attacks: the adversary aims to "steal" the private segments of the backend KG, leveraging solely black-box access to the KGR API. Specifically, we present KGX, an attack that extracts confidential sub-KGs with high fidelity under limited query budgets. At a high level, KGX progressively and adaptively queries the KGR API and integrates the query responses to reconstruct the private sub-KG. This extraction remains viable even if query responses related to the private sub-KG are filtered. We validate the efficacy of KGX against both experimental and real-world KGR APIs. Interestingly, we find that typical countermeasures (e.g., injecting noise into query responses) are often ineffective against KGX. Our findings suggest the need for a more principled approach to developing and deploying KGR systems, as well as devising new defenses against KG extraction attacks.
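
A highly simplified sketch of the extraction loop described above might look as follows; the `kgr_api.answer(head, relation)` interface, the fixed relation list, and the breadth-first expansion are all illustrative assumptions, as the paper's adaptive query selection and response integration are considerably more sophisticated.

```python
# Hypothetical sketch of adaptive sub-KG extraction via a black-box KGR API.
def extract_subkg(kgr_api, seed_entities, relations, budget):
    triples, queried, frontier = set(), set(), list(seed_entities)
    n_queries = 0
    while frontier and n_queries < budget:
        head = frontier.pop()
        if head in queried:
            continue
        queried.add(head)
        for rel in relations:
            if n_queries >= budget:
                break
            n_queries += 1
            for tail in kgr_api.answer(head, rel):  # black-box reasoning query
                triples.add((head, rel, tail))
                if tail not in queried:
                    frontier.append(tail)           # adaptively expand the frontier
    return triples
```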

Zhaohan Xi (3/14/2025)

arXiv:2503.09939v1 Announce Type: new Abstract: In this digital era, ensuring the security of digital data during transmission and storage is crucial. Digital data, particularly image data, needs to be protected against unauthorized access. To address this, this paper presents a novel image encryption scheme based on a confusion-diffusion architecture. The diffusion module introduces a novel geometric block permutation technique, which effectively scrambles pixels based on geometric shape extraction. The image is divided into four blocks, and pixels are extracted from these blocks using L-shaped, U-shaped, square-shaped, and inverted U-shaped patterns, respectively. This robust extraction and permutation effectively disrupts correlation within the image. Furthermore, the confusion module utilises bit-XOR and dynamic substitution techniques. For the bit-XOR operation, a 2D Henon map is utilised to generate a chaotic seed matrix, which is bit-XORed with the scrambled image. The resultant image then undergoes dynamic substitution to complete the confusion phase. A statistical security analysis demonstrates the superior security of the proposed scheme, with high uncertainty and unpredictability, achieving an entropy of 7.9974 and a correlation coefficient of 0.0014. These results validate the proposed scheme's effectiveness in securing digital images.
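
As a sketch of the bit-XOR step, the classical 2D Henon map (x' = 1 - ax^2 + y, y' = bx, with a = 1.4, b = 0.3) can be iterated to fill a byte matrix that is XORed with the scrambled image; the quantization and the fixed initial conditions below are illustrative assumptions rather than the paper's exact key schedule.

```python
import numpy as np

def henon_seed_matrix(shape, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Fill a byte matrix with quantized 2D Henon map states (assumed scheme)."""
    x, y = x0, y0
    for _ in range(100):                      # discard transient iterations
        x, y = 1 - a * x * x + y, b * x
    out = np.empty(shape[0] * shape[1], dtype=np.uint8)
    for i in range(out.size):
        x, y = 1 - a * x * x + y, b * x       # Henon map: x' = 1 - ax^2 + y, y' = bx
        out[i] = int(abs(x) * 1e6) % 256      # quantize the chaotic state to a byte
    return out.reshape(shape)

def xor_confuse(scrambled_image, seed_matrix):
    """Bit-XOR confusion step; XOR is its own inverse, so decryption reuses it."""
    return np.bitwise_xor(scrambled_image, seed_matrix)
```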

Muhammad Ali, Jawad Ahmad, Muhammad Abdullah Hussain Khan, Safee Ullah, Mujeeb Ur Rehman, Syed Aziz Shah, Muhammad Shahbaz Khan (3/14/2025)

arXiv:2503.09953v1 Announce Type: new Abstract: In this digital age, ensuring the security of digital data, especially image data, is critically important. Image encryption plays an important role in securing the online transmission and storage of images against unauthorized access. In this regard, this paper presents a novel diffusion-confusion-based image encryption algorithm named X-CROSS. The diffusion phase involves a dual-layer block permutation: a bit-level permutation termed Inter-Bit Transference (IBT) using a Bit-Extraction key, and a pixel permutation with a unique X-cross permutation algorithm to effectively scramble the pixels within an image. The proposed algorithm utilizes a resilient 2D chaotic map with non-linear dynamical behavior, assisting in generating complex Extraction Keys. After the permutation phase, the confusion phase proceeds with a dynamic substitution technique on the permuted images, establishing the final encryption layer. This combination of novel permutation and confusion removes the image's inherent patterns and increases its resistance to cyber attacks. The close-to-ideal statistical security results for information entropy, correlation, homogeneity, contrast, and energy validate the proposed scheme's effectiveness in hiding the information within the image.

Hansa Ahsan, Safee Ullah, Jawad Ahmad, Aizaz Ahmad Khattak, Muhammad Ali, Muhammad Shahbaz Khan (3/14/2025)

arXiv:2503.10074v1 Announce Type: new Abstract: ISA extensions are increasingly adopted to boost the performance of specialized workloads without requiring an entire architectural redesign. However, these enhancements can inadvertently expose new attack surfaces in the microarchitecture. In this paper, we investigate Intel's recently introduced cldemote extension, which promotes efficient data sharing by transferring cache lines from upper-level caches to the Last Level Cache (LLC). Despite its performance benefits, we uncover critical properties (unprivileged access, inter-cache state transition, and fault suppression) that render cldemote exploitable for microarchitectural attacks. We propose two new attack primitives, Flush+Demote and Demote+Time, built on our analysis. Flush+Demote constructs a covert channel with a bandwidth of 2.84 Mbps and a bit error rate of 0.018%, while Demote+Time derandomizes the kernel base address in 2.49 ms on Linux. Furthermore, we show that leveraging cldemote accelerates eviction set construction in non-inclusive LLC designs by obviating the need for helper threads or extensive cache conflicts, reducing construction time by 36% while retaining comparable success rates. Finally, we examine how ISA extensions contribute to broader microarchitectural attacks, identifying five key exploitable characteristics and categorizing four distinct attack types. We also discuss potential countermeasures, highlighting the far-reaching security implications of emerging ISA extensions.

Taehun Kim, Hyerean Jang, Youngjoo Shin (3/14/2025)

arXiv:2503.10063v1 Announce Type: new Abstract: We consider the problem of securely and robustly embedding covert messages into an image-based diffusion model's output. The sender and receiver want to exchange the maximum amount of information possible per diffusion sampled image while remaining undetected. The adversary wants to detect that such communication is taking place by identifying those diffusion samples that contain covert messages. To maximize robustness to transformations of the diffusion sample, a strategy is for the sender and the receiver to embed the message in the initial latents. We first show that prior work that attempted this is easily broken because their embedding technique alters the latents' distribution. We then propose a straightforward method to embed covert messages in the initial latent without altering the distribution. We prove that our construction achieves indistinguishability to any probabilistic polynomial time adversary. Finally, we discuss and analyze empirically the tradeoffs between embedding capacity, message recovery rates, and robustness. We find that optimizing the inversion method for error correction is crucial for reliability.
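
One simple way to embed bits without changing the latent's distribution is sketched below, under the assumption that the message bits are first encrypted into a pseudorandom string: each bit chooses the sign of an independent half-normal magnitude, so the result is exactly standard normal whenever the bits are uniform. This illustrates the distribution-preserving principle only; the paper's construction and its inversion-based recovery are more involved.

```python
import numpy as np

def embed_bits_in_latent(cipher_bits, rng=np.random.default_rng()):
    """Map (encrypted, hence pseudorandom) bits to latent signs.

    If each bit is uniform, sign * |N(0,1)| is exactly N(0,1), so the
    embedded latent is distributed identically to an ordinary one.
    """
    magnitudes = np.abs(rng.standard_normal(len(cipher_bits)))
    signs = np.where(np.asarray(cipher_bits) == 1, 1.0, -1.0)
    return signs * magnitudes

def recover_bits(inverted_latent):
    """Recover bits from the (e.g., DDIM-inverted) latent estimate via signs."""
    return (np.asarray(inverted_latent) > 0).astype(int)
```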

Luke A. Bauer, Wenxuan Bao, Vincent Bindschaedler (3/14/2025)

arXiv:2503.10081v1 Announce Type: new Abstract: The outstanding capability of diffusion models in generating high-quality images poses significant threats when misused by adversaries. In particular, we consider malicious adversaries exploiting diffusion models for inpainting tasks, such as replacing a specific region with a celebrity. While existing methods for protecting images from manipulation in diffusion-based generative models have primarily focused on image-to-image and text-to-image tasks, the challenge of preventing unauthorized inpainting has rarely been addressed, often resulting in suboptimal protection performance. To mitigate inpainting abuses, we propose ADVPAINT, a novel defensive framework that generates adversarial perturbations that effectively disrupt the adversary's inpainting tasks. ADVPAINT targets the self- and cross-attention blocks in a target diffusion inpainting model to impair semantic understanding and prompt interactions during image generation. ADVPAINT also employs a two-stage perturbation strategy, dividing the perturbation region based on an enlarged bounding box around the object, enhancing robustness across diverse masks of varying shapes and sizes. Our experimental results demonstrate that ADVPAINT's perturbations are highly effective in disrupting the adversary's inpainting tasks, outperforming existing methods; ADVPAINT attains over a 100-point increase in FID and substantial decreases in precision.
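
A generic PGD-style sketch of attention-targeted protective perturbations appears below; `attention_maps` is a hypothetical hook returning the inpainting model's self- and cross-attention maps, and ADVPAINT's actual objective and two-stage, bounding-box-based perturbation strategy are not reproduced.

```python
import torch

def protect_image(model, x, prompt, eps=8 / 255, alpha=2 / 255, steps=40):
    """PGD sketch: push attention maps away from their clean reference."""
    x_adv = x.clone().detach()
    # attention_maps(...) is a hypothetical hook (assumption): it returns a
    # list of self-/cross-attention tensors from the inpainting model.
    ref = [a.detach() for a in attention_maps(model, x, prompt)]
    for _ in range(steps):
        x_adv.requires_grad_(True)
        cur = attention_maps(model, x_adv, prompt)
        loss = sum(torch.nn.functional.mse_loss(c, r) for c, r in zip(cur, ref))
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()  # ascend: maximize disruption
        x_adv = x + (x_adv - x).clamp(-eps, eps)        # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                       # keep a valid image
    return x_adv
```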

Joonsung Jeon, Woo Jae Kim, Suhyeon Ha, Sooel Son, Sung-eui Yoon (3/14/2025)

arXiv:2503.10411v1 Announce Type: new Abstract: Before a fair exchange takes place, there is typically an advertisement phase with the goal of increasing the appeal of possessing a digital asset while keeping it sufficiently hidden. In this work, we give a definition that explicitly combines a fair-exchange protocol with a prior advertisement phase. Then, we construct such a fair-exchange protocol, including the advertisement phase, using zk-SNARKs and relying on mainstream decentralized platforms (i.e., a blockchain with smart contracts like Ethereum and a decentralized storage system like IPFS). Experimental results confirm the practical relevance of our decentralized approach, paving the way toward building decentralized marketplaces where users can, even anonymously and without direct off-chain communications, effectively advertise and exchange their digital assets as part of a system of enhanced NFTs.

Pierpaolo Della Monica, Ivan Visconti, Andrea Vitaletti, Marco Zecchini (3/14/2025)

arXiv:2503.09669v1 Announce Type: new Abstract: Text-to-image diffusion models have achieved remarkable success in generating high-quality content from text prompts. However, their reliance on publicly available data and the growing trend of data sharing for fine-tuning make these models particularly vulnerable to data poisoning attacks. In this work, we introduce the Silent Branding Attack, a novel data poisoning method that manipulates text-to-image diffusion models to generate images containing specific brand logos or symbols without any text triggers. We find that when certain visual patterns appear repeatedly in the training data, the model learns to reproduce them naturally in its outputs, even without prompt mentions. Leveraging this, we develop an automated data poisoning algorithm that unobtrusively injects logos into original images, ensuring they blend naturally and remain undetected. Models trained on this poisoned dataset generate images containing logos without degrading image quality or text alignment. We experimentally validate our silent branding attack across two realistic settings on large-scale high-quality image datasets and style personalization datasets, achieving high success rates even without a specific text trigger. Human evaluation and quantitative metrics including logo detection show that our method can stealthily embed logos.

Sangwon Jang, June Suk Choi, Jaehyeong Jo, Kimin Lee, Sung Ju Hwang (3/14/2025)

arXiv:2503.09666v1 Announce Type: new Abstract: In a globalized and interconnected world, interoperability has become a key concept for advancing tactical scenarios. Federated Coalition Networks (FCN) enable cooperation between entities from multiple nations while allowing each to maintain control over their systems. However, this interoperability necessitates the sharing of increasing amounts of information between different tactical assets, raising the need for stronger security measures. Emerging technologies like blockchain are driving a revolution in secure communications, paving the way for new tactical scenarios. In this work, we propose a blockchain-based framework to enhance the resilience and security of the management of these networks. We offer a guide to FCN design, helping a broad audience understand military networks in international missions through a use case and key functions applied to a proposed architecture. We validate this framework by evaluating its effectiveness and performance in information encryption.

Jorge Álvaro González, Ana María Saiz García, Victor Monzon Baeza (3/14/2025)

arXiv:2503.09712v1 Announce Type: new Abstract: Time series classification (TSC) is a cornerstone of modern web applications, powering tasks such as financial data analysis, network traffic monitoring, and user behavior analysis. In recent years, deep neural networks (DNNs) have greatly enhanced the performance of TSC models in these critical domains. However, DNNs are vulnerable to backdoor attacks, where attackers can covertly implant triggers into models to induce malicious outcomes. Existing backdoor attacks targeting DNN-based TSC models remain elementary. In particular, early methods borrow trigger designs from computer vision, which are ineffective for time series data. More recent approaches utilize generative models for trigger generation, but at the cost of significant computational complexity. In this work, we analyze the limitations of existing attacks and introduce an enhanced method, FreqBack. Drawing inspiration from the fact that DNN models inherently capture frequency-domain features in time series data, we identify that improper perturbations in the frequency domain are the root cause of ineffective attacks. To address this, we propose to generate triggers both effectively and efficiently, guided by frequency analysis. FreqBack achieves strong performance across five models and eight datasets, attaining an impressive attack success rate of over 90% while maintaining less than a 3% drop in model accuracy on clean data.
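
A bare-bones version of a frequency-domain trigger might look like the following; which bins to perturb, and with what amplitude, is exactly what FreqBack derives from its frequency analysis of the victim model, so the fixed choices here are placeholders.

```python
import numpy as np

def frequency_trigger(x, bins=(3, 7, 11), amplitude=0.05):
    """Inject a trigger into a 1D time series via selected frequency bins."""
    spectrum = np.fft.rfft(x)
    for k in bins:
        spectrum[k] += amplitude * len(x)    # add energy at the chosen bins
    return np.fft.irfft(spectrum, n=len(x))  # back to the time domain
```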

Yuanmin Huang, Mi Zhang, Zhaoxiang Wang, Wenxuan Li, Min Yang (3/14/2025)

arXiv:2503.09726v1 Announce Type: new Abstract: Graph Neural Networks (GNNs) are widely used and deployed for graph-based prediction tasks. However, as good as GNNs are for learning graph data, they also come with the risk of privacy leakage. For instance, an attacker can run carefully crafted queries on a GNN and, from the responses, infer the existence of an edge between a pair of nodes. This attack, dubbed a "link-stealing" attack, can jeopardize user privacy by leaking potentially sensitive information. To protect against this attack, we propose an approach called Node Augmentation for Restricting Graphs from Insinuating their Structure (NARGIS) and study its feasibility. NARGIS focuses on reshaping the graph embedding space so that the posterior from the GNN model still provides utility for the prediction task but introduces ambiguity for link-stealing attackers. To this end, NARGIS applies spectral clustering on the given graph so that it can be augmented with new nodes that have learned features instead of fixed ones. It utilizes tri-level optimization to learn the parameters of the GNN model, a surrogate attacker model, and our defense model (i.e., learnable node features). We extensively evaluate NARGIS on three benchmark citation datasets over eight knowledge-availability settings for the attackers. We also evaluate model fidelity and defense performance against influence-based link inference attacks. Through our studies, we identify NARGIS's key strength: its superior fidelity-privacy trade-off in a significant number of cases. We have also identified the cases where the model needs improvement, and we propose ways to integrate different schemes to make the model more robust against link-stealing attacks.
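
The node-augmentation step could be sketched roughly as below, using off-the-shelf spectral clustering to decide where augmented nodes attach; the tri-level optimization that actually learns the new nodes' features against a surrogate attacker is omitted, and the random feature initialization is an assumption.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def augment_graph(adj, feats, n_clusters=8, rng=np.random.default_rng(0)):
    """Attach one new node (with initially random features) per spectral cluster."""
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(adj)
    n, dim = adj.shape[0], feats.shape[1]
    new_adj = np.zeros((n + n_clusters, n + n_clusters))
    new_adj[:n, :n] = adj
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        new_adj[n + c, members] = new_adj[members, n + c] = 1  # link augmented node
    # Random initialization stands in for the learned (defense-optimized) features.
    new_feats = np.vstack([feats, rng.standard_normal((n_clusters, dim))])
    return new_adj, new_feats
```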

Mir Imtiaz Mostafiz, Imtiaz Karim, Elisa Bertino (3/14/2025)

arXiv:2503.09823v1 Announce Type: new Abstract: This paper offers a new privacy approach for the growing ecosystem of services, ranging from open banking to healthcare, that depend on sensitive personal data sharing between individuals and third parties. While these services offer significant benefits, individuals want control over their data, transparency regarding how their data is used, and accountability from third parties for misuse. However, existing legal and technical mechanisms are inadequate for supporting these needs. A comprehensive approach to the modern privacy challenges of accountable third-party data sharing requires a closer alignment of technical system architecture and legal institutional design. In order to achieve this privacy alignment, we extend traditional security threat modeling and analysis to encompass a broader range of privacy notions than has typically been considered. In particular, we introduce the concept of covert-accountability, which addresses adversaries that may act dishonestly but face potential identification and legal consequences. As a concrete instance of this design approach, we present the OTrace protocol, designed to provide traceable, accountable consumer control in third-party data-sharing ecosystems. OTrace empowers consumers with the knowledge of where their data is, who has it, what it is being used for, and with whom it is being shared. By applying our alignment framework to OTrace, we demonstrate that OTrace's technical affordances can provide more confident, scalable regulatory oversight when combined with complementary legal mechanisms.

Kevin Liao, Shreya Thipireddy, Daniel Weitzner (3/14/2025)

arXiv:2503.10218v1 Announce Type: new Abstract: Modern Federated Learning (FL) has become increasingly essential for handling highly heterogeneous mobile devices. Current approaches adopt a partial-model aggregation paradigm that leads to sub-optimal model accuracy and higher training overhead. In this paper, we challenge the prevailing notion of partial-model aggregation and propose a novel "full-weight aggregation" method named Moss, which aggregates all weights within heterogeneous models to preserve comprehensive knowledge. Evaluation across various applications demonstrates that Moss significantly accelerates training, reduces on-device training time and energy consumption, enhances accuracy, and minimizes network bandwidth utilization when compared to state-of-the-art baselines.
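
Since the abstract does not spell out the mapping between heterogeneous architectures, the following is only a guess at what full-weight aggregation could look like: every client tensor is averaged into the overlapping region of the corresponding global tensor, so no client weight is excluded from aggregation.

```python
import torch

def full_weight_aggregate(global_state, client_states):
    """Hedged sketch: average each weight position over the clients that cover it.

    The leading-slice notion of overlap between differently sized tensors is an
    assumption; Moss's actual cross-architecture weight mapping may differ.
    """
    for name, g in global_state.items():
        acc = torch.zeros_like(g)
        cnt = torch.zeros_like(g)
        for cs in client_states:
            if name not in cs:
                continue
            w = cs[name]
            # Overlapping region: the element-wise minimum of the two shapes.
            idx = tuple(slice(0, min(a, b)) for a, b in zip(g.shape, w.shape))
            acc[idx] += w[idx]
            cnt[idx] += 1
        mask = cnt > 0
        g[mask] = acc[mask] / cnt[mask]  # average wherever any client contributed
    return global_state
```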

Yifeng Cai, Ziqi Zhang, Ding Li, Yao Guo, Xiangqun Chen (3/14/2025)