quant-ph
arXiv:2501.06420v1 Announce Type: new Abstract: Code clones, referring to identical or similar code fragments, have long posed challenges in classical programming, impacting software quality, maintainability, and scalability. However, their presence and characteristics in quantum programming remain unexplored. This paper presents an empirical study of code clones in quantum programs, specifically focusing on software developed using the Qiskit framework. We examine the existence, distribution, density, and size of code clones in quantum software, revealing a high density of Type-2 and Type-3 clones involving minor modifications. Our findings suggest that these clones are more frequent in quantum software than in classical software, likely due to the complexity of quantum algorithms and their integration with classical logic. This highlights the need for advanced clone detection and refactoring tools specifically designed for the quantum domain to improve software maintainability and scalability.
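To make the clone taxonomy concrete, here is a minimal illustrative sketch in Qiskit (hypothetical snippets, not drawn from the study's corpus): a Type-2 clone is syntactically identical up to renamed identifiers and literals, while a Type-3 clone additionally adds or removes statements.

```python
from qiskit import QuantumCircuit

def bell_pair():
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    return qc

# Type-2 clone: identical structure, identifiers renamed.
def epr_state():
    circ = QuantumCircuit(2)
    circ.h(0)
    circ.cx(0, 1)
    return circ

# Type-3 clone: same core fragment with statements added (measurements).
def bell_pair_measured():
    qc = QuantumCircuit(2, 2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure([0, 1], [0, 1])
    return qc
```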
arXiv:2407.15895v3 Announce Type: replace-cross Abstract: This paper explores the explicit design of quantum circuits for quantum simulation of partial differential equations (PDEs) with physical boundary conditions. These equations and/or their discretized forms usually do not evolve via unitary dynamics and are thus not suitable for quantum simulation. Boundary conditions (either time-dependent or time-independent) make the problem more difficult. To tackle this challenge, the Schrödingerisation method can be employed, which converts linear partial and ordinary differential equations with non-unitary dynamics into systems of Schrödinger-type equations via the so-called warped phase transformation, which maps the equation into one higher dimension. Despite advancements in Schrödingerisation techniques, the explicit implementation of quantum circuits for solving general PDEs, especially with physical boundary conditions, remains underdeveloped. We present two methods for handling the inhomogeneous terms arising from time-dependent physical boundary conditions. One approach utilizes Duhamel's principle to express the solution in integral form and employs linear combination of unitaries (LCU) for coherent state preparation. The other method applies an augmentation to transform the inhomogeneous problem into a homogeneous one. We then apply the quantum simulation technique from [CJL23] to transform the resulting non-autonomous system into an autonomous system in one higher dimension. We provide detailed implementations of these two methods and conduct a comprehensive complexity analysis in terms of queries to the time evolution input oracle.
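As background, a schematic of the warped phase transformation in one common convention from the Schrödingerisation literature (a sketch, not the paper's exact construction; signs depend on the Fourier convention):

```latex
% Given a linear system  du/dt = A u  with  A = H_1 + i H_2, where
% H_1 = (A + A^\dagger)/2 and H_2 = (A - A^\dagger)/(2i) are Hermitian,
% introduce the warped variable  w(t,p) = e^{-p} u(t)  for  p > 0, so that
%   \partial_t w = -H_1 \partial_p w + i H_2 w .
% A Fourier transform in p (p \to \xi) then yields, for each \xi,
%   \frac{d}{dt}\hat w = i\,(H_2 - \xi H_1)\,\hat w ,
% a Schrödinger-type equation with a Hermitian generator, hence unitary dynamics.
```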
arXiv:2501.06705v1 Announce Type: new Abstract: Recent advancements in quantum technologies, particularly in quantum sensing and simulation, have facilitated the generation and analysis of inherently quantum data. This progress underscores the necessity of developing efficient and scalable quantum data management strategies. This goal faces immense challenges due to the exponential dimensionality of quantum data and its unique quantum properties such as no-cloning and measurement stochasticity. Specifically, classical storage and manipulation of an arbitrary n-qubit quantum state requires exponential space and time. Hence, there is a critical need to revisit foundational data management concepts and algorithms for quantum data. In this paper, we propose succinct quantum data sketches to support basic database operations such as search and selection. We view our work as an initial step toward the development of a quantum data management model, opening up many possibilities for future research in this direction.
arXiv:2501.06319v1 Announce Type: cross Abstract: In this paper, we introduce a novel authentication scheme for satellite nodes based on quantum entanglement and measurement noise profiles. Our approach leverages the unique noise characteristics exhibited by each satellite's quantum optical communication system to create a distinctive "quantum noise fingerprint." This fingerprint is used for node authentication within a satellite constellation, offering a quantum-safe alternative to traditional cryptographic methods. The proposed scheme consists of a training phase, in which each satellite engages in a training exercise with its neighbors to compile noise profiles, and an online authentication phase, in which these profiles are used for real-time authentication. Our method addresses the inherent challenges of implementing cryptography-based schemes in space, such as key management and distribution, by exploiting the fundamental properties of quantum mechanics and the unavoidable imperfections in quantum systems. This approach enhances the security and reliability of satellite communication networks, providing a robust solution to the authentication challenges in satellite constellations. We validated and tested several hypotheses for this approach using IBM Quantum System One computers.
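A toy illustration of the profile-matching step (our hypothetical sketch; the actual scheme involves entangled photon states and per-neighbor profiles, and names like `noise_fingerprint` are invented here):

```python
import numpy as np

def noise_fingerprint(counts_list):
    # Training phase: average the outcome distributions observed over many
    # calibration rounds with a neighbor to form its noise profile.
    dists = [c / c.sum() for c in counts_list]
    return np.mean(dists, axis=0)

def authenticate(observed_counts, enrolled_fp, threshold=0.05):
    # Online phase: accept the node if the total-variation distance between
    # the fresh outcome distribution and the enrolled profile is small.
    p = observed_counts / observed_counts.sum()
    tvd = 0.5 * np.abs(p - enrolled_fp).sum()
    return tvd <= threshold
```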
arXiv:2310.12682v2 Announce Type: replace-cross Abstract: Quantum stabilizer codes often struggle with syndrome errors due to measurement imperfections. Typically, multiple rounds of syndrome extraction are employed to ensure reliable error information. In this paper, we consider phenomenological decoding problems, where data qubit errors may occur between extractions, and each measurement can be faulty. We introduce generalized quantum data-syndrome codes along with a generalized check matrix that integrates both quaternary and binary alphabets to represent diverse error sources. This results in a Tanner graph with mixed variable nodes, enabling the design of belief propagation (BP) decoding algorithms that effectively handle phenomenological errors. Importantly, our BP decoders are applicable to general sparse quantum codes. Through simulations, we achieve an error threshold of more than 3\% for quantum memory protected by rotated toric codes, using solely BP without post-processing. Our results indicate that $d$ rounds of syndrome extraction are sufficient for a toric code of distance $d$. We observe that at high error rates, fewer rounds of syndrome extraction tend to perform better, while more rounds improve performance at lower error rates. Additionally, we propose a method to construct effective redundant stabilizer checks for single-shot error correction. Our simulations show that BP decoding remains highly effective even with a high syndrome error rate.
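A simplified sketch of the generalized check-matrix idea (our notation; the paper's construction is more general):

```latex
% With r stabilizer generators on n qubits, a Pauli (quaternary) data error e,
% and a binary syndrome-error vector f, the observed syndrome is
%   s = \langle S, e \rangle \oplus f \in \mathbb{F}_2^r ,
% where \langle S, e \rangle collects the symplectic inner products of e with
% the rows of the quaternary check matrix S. The generalized check matrix
%   H = [\, S \mid I_r \,]
% therefore mixes quaternary columns (data qubits) with binary columns
% (syndrome bits), giving a Tanner graph with mixed variable nodes on which
% belief propagation can run.
```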
arXiv:2405.00304v3 Announce Type: replace-cross Abstract: Quantum computing (QC) shows potential for application in machine learning (ML). In particular, quantum kernel methods (QKMs) exhibit promising properties for use in supervised ML tasks. However, a major disadvantage of kernel methods is their unfavorable quadratic scaling with the number of training samples. Together with the limits imposed by currently available quantum hardware (NISQ devices), with their low qubit coherence times, small numbers of qubits, and high error rates, the use of QC in ML at an industrially relevant scale is currently impossible. As a small step toward improving the potential applications of QKMs, we introduce QUACK, a quantum kernel algorithm whose time complexity scales linearly with the number of samples during training and is independent of the number of samples at the inference stage. In the training process, only the kernel entries for the samples and the centers of the classes are calculated, i.e., the maximum shape of the kernel for n samples and c classes is (n, c). During training, the parameters of the quantum kernel and the positions of the centroids are optimized iteratively. At the inference stage, for every new sample the circuit is evaluated only once per centroid, i.e., c times. We show that the QUACK algorithm nevertheless provides satisfactory results and can perform at a level similar to classical kernel methods with quadratic scaling during training. In addition, our (simulated) algorithm is able to handle high-dimensional datasets such as MNIST with 784 features without any dimensionality reduction.
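A classical stand-in sketch of the QUACK training/inference pattern (hypothetical: `kernel_entry` replaces the parameterized quantum kernel circuit, and the centroid update rule is our simplification):

```python
import numpy as np

def kernel_entry(x, center, theta):
    # Stand-in for the parameterized quantum kernel; in QUACK this would be
    # a circuit evaluation (e.g., a fidelity overlap) on quantum hardware.
    return np.exp(-theta * np.linalg.norm(x - center) ** 2)

def train(X, y, n_classes, theta=1.0, n_iters=50, lr=0.1):
    # Only the (n, c) kernel between samples and class centroids is ever
    # formed, instead of the usual (n, n) Gram matrix.
    centers = np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])
    for _ in range(n_iters):
        K = np.array([[kernel_entry(x, m, theta) for m in centers] for x in X])
        # Pull each centroid toward samples of its class, kernel-weighted.
        for c in range(n_classes):
            w = K[y == c, c]
            target = (w[:, None] * X[y == c]).sum(0) / w.sum()
            centers[c] += lr * (target - centers[c])
    return centers, theta

def predict(x, centers, theta):
    # Inference costs only c kernel evaluations per new sample.
    return int(np.argmax([kernel_entry(x, m, theta) for m in centers]))
```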
arXiv:2501.07027v1 Announce Type: cross Abstract: This paper shows that the Knill-Laflamme condition, known as a necessary and sufficient condition for quantum error correction, can be applied to quantum errors in which the number of particles changes before and after the error. This shows that the correctability of single-deletion errors is equivalent to that of single-insertion errors. By applying the Knill-Laflamme condition, we strengthen the previously known correction conditions for single insertion and deletion errors to necessary and sufficient conditions. By giving an example that satisfies this condition, we construct a new single-qudit insertion/deletion code and describe its decoding algorithm.
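For reference, the standard Knill-Laflamme condition that the paper extends (the extension to particle-number-changing errors is the paper's contribution and is not reproduced here):

```latex
% A code with projector P onto the code space corrects an error set \{E_i\}
% if and only if there exists a Hermitian matrix (c_{ij}) such that
%   P E_i^\dagger E_j P = c_{ij} P   \quad \text{for all } i, j.
% The paper applies this criterion with the E_i mapping between Hilbert
% spaces of different particle numbers, covering insertions and deletions.
```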
arXiv:2501.06300v1 Announce Type: new Abstract: We present a tensorization algorithm for constructing tensor train representations of functions, drawing on sketching and cross interpolation ideas. The method only requires black-box access to the target function and a small set of sample points defining the domain of interest. Thus, it is particularly well-suited for machine learning models, where the domain of interest is naturally defined by the training dataset. We show that this approach can be used to enhance the privacy and interpretability of neural network models. Specifically, we apply our decomposition to (i) obfuscate neural networks whose parameters encode patterns tied to the training data distribution, and (ii) estimate topological phases of matter that are easily accessible from the tensor train representation. Additionally, we show that this tensorization can serve as an efficient initialization method for optimizing tensor trains in general settings, and that, for model compression, our algorithm achieves a superior trade-off between memory and time complexity compared to conventional tensorization methods of neural networks.
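To fix ideas, here is a minimal dense illustration of the tensor train format itself, via sequential truncated SVDs; the paper's algorithm instead builds the cores from black-box function samples via sketching and cross interpolation, without ever forming the full tensor.

```python
import numpy as np

def tensor_train(tensor, max_rank=8, tol=1e-10):
    # Decompose a d-way array into TT cores by sweeping truncated SVDs.
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k, n_k in enumerate(dims[:-1]):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, min(max_rank, int(np.sum(s > tol))))
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

# Example: tensorize f(x) = sin(x) on a 2**10-point grid as a 10-way tensor.
x = np.linspace(0, np.pi, 2 ** 10)
cores = tensor_train(np.sin(x).reshape((2,) * 10))
```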
arXiv:2501.06916v1 Announce Type: new Abstract: This study proposes an approach for removing mislabeled instances from contaminated training datasets by combining surrogate model-based black-box optimization (BBO) with postprocessing and quantum annealing. Mislabeled training instances, a common issue in real-world datasets, often degrade model generalization, necessitating robust and efficient noise-removal strategies. The proposed method evaluates filtered training subsets based on validation loss, iteratively refines loss estimates through surrogate model-based BBO with postprocessing, and leverages quantum annealing to efficiently sample diverse training subsets with low validation error. Experiments on a noisy majority bit task demonstrate the method's ability to prioritize the removal of high-risk mislabeled instances. Integrating D-Wave's clique sampler running on a physical quantum annealer achieves faster optimization and higher-quality training subsets compared to OpenJij's simulated quantum annealing sampler or Neal's simulated annealing sampler, offering a scalable framework for enhancing dataset quality. This work highlights the effectiveness of the proposed method for supervised learning tasks, with future directions including its application to unsupervised learning, real-world datasets, and large-scale implementations.
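A stand-in sketch of the subset-sampling step (our simplification: a Metropolis sampler over keep/drop masks replaces the quantum annealer, and Q denotes an already-fitted quadratic surrogate of the validation loss):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_subsets(Q, n_samples=50, n_sweeps=500, beta=2.0):
    # Draw low-energy binary masks x (1 = keep instance, 0 = drop) for the
    # surrogate QUBO energy E(x) = x^T Q x; the paper draws such samples
    # from D-Wave hardware or simulated annealers instead.
    n = Q.shape[0]
    samples = []
    for _ in range(n_samples):
        x = rng.integers(0, 2, n)
        for _ in range(n_sweeps):
            i = rng.integers(n)
            y = x.copy()
            y[i] ^= 1  # propose flipping one keep/drop decision
            dE = y @ Q @ y - x @ Q @ x
            if dE < 0 or rng.random() < np.exp(-beta * dE):
                x = y
        samples.append(x)
    return np.array(samples)
```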
arXiv:2501.06989v1 Announce Type: new Abstract: With its significant security potential, the quantum internet is poised to revolutionize technologies like cryptography and communications. Although it boasts enhanced security over traditional networks, the quantum internet still encounters unique security challenges essential for safeguarding its Confidentiality, Integrity, and Availability (CIA). This study explores these challenges by analyzing the vulnerabilities and the corresponding mitigation strategies across different layers of the quantum internet, including physical, link, network, and application layers. We assess the severity of potential attacks, evaluate the expected effectiveness of mitigation strategies, and identify vulnerabilities within diverse network configurations, integrating both classical and quantum approaches. Our research highlights the dynamic nature of these security issues and emphasizes the necessity for adaptive security measures. The findings underline the need for ongoing research into the security dimension of the quantum internet to ensure its robustness, encourage its adoption, and maximize its impact on society.
arXiv:2501.06443v1 Announce Type: new Abstract: Although classical computing has excelled in a wide range of applications, there remain problems that push the limits of its capabilities, especially in fields like cryptography, optimization, and materials science. Quantum computing introduces a new computational paradigm, based on the principles of superposition and entanglement, to explore solutions beyond the capabilities of classical computation. With the increasing interest in the field, there are challenges and opportunities for academics and practitioners in terms of software engineering practices, particularly in testing quantum programs. This paper presents an empirical study of testing patterns in quantum algorithms. We analyzed all the tests handling quantum aspects of the implementations in the Qiskit Algorithms library and identified seven distinct patterns that make use of (1) fixed seeds for algorithms based on random elements; (2) deterministic oracles; (3) precise and approximate assertions; (4) Data-Driven Testing (DDT); (5) functional testing; (6) testing of intermediate parts of the algorithms under test; and (7) equivalence checking for quantum circuits. Our results show a prevalence of classical testing techniques for testing the quantum-related elements of the library, while recent advances from the research community have yet to achieve wide adoption among practitioners.
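As a concrete illustration of patterns (1), (3), and (7), a hypothetical pytest-style example in the spirit of such tests (not taken from the Qiskit Algorithms suite):

```python
import pytest
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def test_bell_state_sampling_with_fixed_seed():
    # Pattern (1): fix the seed so shot sampling is reproducible.
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    sv = Statevector(qc)
    sv.seed(42)
    counts = sv.sample_counts(shots=1000)
    # Pattern (3): approximate assertion instead of exact float equality.
    assert counts.get("00", 0) / 1000 == pytest.approx(0.5, abs=0.05)
    assert counts.get("11", 0) / 1000 == pytest.approx(0.5, abs=0.05)

def test_double_hadamard_is_identity():
    # Pattern (7): equivalence checking for quantum circuits.
    a = QuantumCircuit(1)
    a.h(0)
    a.h(0)
    b = QuantumCircuit(1)  # empty circuit acts as the identity
    assert Statevector(a).equiv(Statevector(b))
```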
arXiv:2501.07292v1 Announce Type: cross Abstract: Quantum relative entropy, a quantum generalization of the well-known Kullback-Leibler divergence, serves as a fundamental measure of the distinguishability between quantum states and plays a pivotal role in quantum information science. Despite its importance, efficiently estimating quantum relative entropy between two quantum states on quantum computers remains a significant challenge. In this work, we propose the first quantum algorithm for estimating quantum relative entropy and Petz Rényi divergence from two unknown quantum states on quantum computers, addressing open problems highlighted in [Phys. Rev. A 109, 032431 (2024)] and [IEEE Trans. Inf. Theory 70, 5653-5680 (2024)]. This is achieved by combining quadrature approximations of relative entropies, the variational representation of quantum f-divergences, and a new technique for parameterizing Hermitian polynomial operators to estimate their traces with quantum states. Notably, the circuit size of our algorithm is at most 2n+1 with n being the number of qubits in the quantum states and it is directly applicable to distributed scenarios, where quantum states to be compared are hosted on cross-platform quantum computers. We validate our algorithm through numerical simulations, laying the groundwork for its future deployment on quantum hardware devices.
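For reference, the two quantities being estimated are (standard definitions):

```latex
% Quantum relative entropy:
%   D(\rho \| \sigma) = \mathrm{Tr}\!\left[\rho(\log\rho - \log\sigma)\right],
% and the Petz Rényi divergence of order \alpha \in (0,1) \cup (1,\infty):
%   D_\alpha(\rho \| \sigma)
%     = \frac{1}{\alpha-1}\,\log \mathrm{Tr}\!\left[\rho^{\alpha}\sigma^{1-\alpha}\right],
% which recovers D(\rho \| \sigma) in the limit \alpha \to 1.
```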
arXiv:2312.17019v3 Announce Type: replace-cross Abstract: In this work, we consider a fundamental task in quantum many-body physics: finding and learning ground states of quantum Hamiltonians and their properties. Recent works have studied the task of predicting the ground state expectation value of sums of geometrically local observables by learning from data. For short-range gapped Hamiltonians, a sample complexity that is logarithmic in the number of qubits and quasipolynomial in the error was obtained. Here we extend these results beyond the local requirements on both Hamiltonians and observables, motivated by the relevance of long-range interactions in molecular and atomic systems. For interactions decaying as a power law with exponent greater than twice the dimension of the system, we recover the same efficient logarithmic scaling with respect to the number of qubits, but the dependence on the error worsens to exponential. Further, we show that learning algorithms equivariant under the automorphism group of the interaction hypergraph achieve a sample complexity reduction, leading in particular to a constant number of samples for learning sums of local observables in systems with periodic boundary conditions. We demonstrate the efficient scaling in practice by learning from DMRG simulations of $1$D long-range and disordered systems with up to $128$ qubits. Finally, we provide an analysis of the concentration of expectation values of global observables stemming from the central limit theorem, resulting in increased prediction accuracy.
arXiv:2405.00082v3 Announce Type: replace-cross Abstract: We study the problem of Hamiltonian structure learning from real-time evolution: given the ability to apply $e^{-\mathrm{i} Ht}$ for an unknown local Hamiltonian $H = \sum_{a = 1}^m \lambda_a E_a$ on $n$ qubits, the goal is to recover $H$. This problem is already well-understood under the assumption that the interaction terms, $E_a$, are given, and only the interaction strengths, $\lambda_a$, are unknown. But how efficiently can we learn a local Hamiltonian without prior knowledge of its interaction structure? We present a new, general approach to Hamiltonian learning that not only solves the challenging structure learning variant, but also resolves other open questions in the area, all while achieving the gold standard of Heisenberg-limited scaling. In particular, our algorithm recovers the Hamiltonian to $\varepsilon$ error with total evolution time $O(\log (n)/\varepsilon)$, and has the following appealing properties: (1) it does not need to know the Hamiltonian terms; (2) it works beyond the short-range setting, extending to any Hamiltonian $H$ where the sum of terms interacting with a qubit has bounded norm; (3) it evolves according to $H$ in constant time $t$ increments, thus achieving constant time resolution. As an application, we can also learn Hamiltonians exhibiting power-law decay up to accuracy $\varepsilon$ with total evolution time beating the standard limit of $1/\varepsilon^2$.
arXiv:2501.03360v1 Announce Type: cross Abstract: A mangrove mapping (MM) algorithm is an essential classification tool for environmental monitoring. The recent literature shows that, compared with other index-based MM methods that treat pixels as spatially independent, convolutional neural networks (CNNs) are crucial for leveraging spatial continuity information, leading to improved classification performance. In this work, we go a step further and show that quantum features provide radically new information for a CNN to further upgrade the classification results. Put simply, a CNN computes affine-mapping features, while a quantum neural network (QNN) offers unitary-computing features, thereby offering a fresh perspective for the final decision-making (classification). To address the challenging MM problem, we design an entangled spatial-spectral quantum feature extraction module. Notably, to ensure that the quantum features contribute genuinely novel information (unaffected by traditional CNN features), we design a separate network track consisting solely of quantum neurons with built-in interpretability. The extracted pure quantum information is then fused with traditional feature information to jointly make the final decision. The proposed quantum-empowered deep network (QEDNet) is very lightweight, so the improvement comes from the cooperation between the CNN and QNN rather than from parameter augmentation. Extensive experiments are conducted to demonstrate the superiority of QEDNet.
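A toy numpy contrast between the two feature types (our illustration, not the QEDNet architecture; all names are invented):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d = 8
A = rng.standard_normal((d, d))
H = (A + A.T) / 2          # Hermitian generator of a toy "quantum neuron"
U = expm(1j * H)           # unitary feature map (norm-preserving)
W, b = rng.standard_normal((d, d)), rng.standard_normal(d)

x = rng.standard_normal(d)
psi = x / np.linalg.norm(x)          # amplitude-encode the pixel features
quantum_feat = np.abs(U @ psi) ** 2  # measurement probabilities
affine_feat = W @ x + b              # conventional CNN-style affine feature
fused = np.concatenate([affine_feat, quantum_feat])  # joint decision input
```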
arXiv:2501.03799v1 Announce Type: cross Abstract: Recently, a new definition for quantum $f$-divergences was introduced based on an integral representation. These divergences have shown remarkable properties, for example, when investigating contraction coefficients under noisy channels. At the same time, many properties well known for other definitions have remained elusive for the new quantum $f$-divergence because of its unusual representation. In this work, we investigate alternative ways of expressing these quantum $f$-divergences. We leverage these expressions to prove new properties of these $f$-divergences and demonstrate some applications. In particular, we give a new proof of the achievability of the quantum Chernoff bound by establishing a strengthening of an inequality by Audenaert et al. We also establish inequalities between some previously known Rényi divergences and the new Rényi divergence. We further investigate some monotonicity and convexity properties of the new $f$-divergences, and prove inequalities between these divergences for various functions.
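Schematically, the integral-representation definition builds the divergence from hockey-stick divergences (our paraphrase of the form in the literature; the exact weighting depends on the conventions of the original definition):

```latex
% With the quantum hockey-stick divergence
%   E_\gamma(\rho \| \sigma) = \mathrm{Tr}\,(\rho - \gamma\sigma)_+ ,
% the new divergences take the form of a weighted integral such as
%   D_f(\rho \| \sigma) = \int_1^\infty \Big[ f''(\gamma)\, E_\gamma(\rho \| \sigma)
%     + \gamma^{-3} f''(\gamma^{-1})\, E_\gamma(\sigma \| \rho) \Big]\, d\gamma ,
% so properties of E_\gamma (e.g., data processing) lift directly to D_f.
```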
arXiv:2411.14416v4 Announce Type: replace-cross Abstract: We study a longstanding question of Aaronson and Kuperberg on whether there exists a classical oracle separating $\mathsf{QMA}$ from $\mathsf{QCMA}$. Settling this question in either direction would yield insight into the power of quantum proofs over classical proofs. We show that such an oracle exists if a certain quantum pseudorandomness conjecture holds. Roughly speaking, the conjecture posits that quantum algorithms cannot, by making few queries, distinguish between the uniform distribution over permutations versus permutations drawn from so-called "dense" distributions. Our result can be viewed as establishing a "win-win" scenario: either there is a classical oracle separation of $\mathsf{QMA}$ from $\mathsf{QCMA}$, or there is quantum advantage in distinguishing pseudorandom distributions on permutations.
arXiv:2501.03518v1 Announce Type: cross Abstract: Quantum annealing (QA) has attracted research interest as a sampler and as a combinatorial optimization problem (COP) solver. A recently proposed sampling-based solver for QA significantly reduces the required number of qubits, making it capable of handling large COPs. Relatedly, a trainable sampling-based COP solver has been proposed that optimizes its internal parameters from a dataset using a deep learning technique called deep unfolding. Although learning the internal parameters accelerates convergence, the trainable solver is restricted to a classical sampler owing to the training cost. In this study, to utilize QA in the trainable solver, we propose classical-quantum transfer learning, where parameters are trained classically and the trained parameters are then used in the solver with QA. The results of numerical experiments demonstrate that the trainable quantum COP solver using classical-quantum transfer learning improves convergence speed and execution time over the original solver.
arXiv:2310.00518v2 Announce Type: replace-cross Abstract: Quantum state tomography (QST) is the process of reconstructing the complete state of a quantum system (mathematically described as a density matrix) through a series of different measurements. These measurements are performed on a number of identical copies of the quantum system, with outcomes gathered as frequencies. QST aims to recover the density matrix or the properties of the quantum state from the measured frequencies. Although an informationally complete set of measurements can specify the quantum state accurately in an ideal scenario with a large number of identical copies, both the measurements and the copies are restricted and imperfect in practical scenarios, making QST highly ill-posed. Conventional QST methods usually assume accurately measured frequencies or rely on manually designed regularizers to handle the ill-posed reconstruction problem, limiting their applicability in realistic scenarios. Recent advances in deep neural networks (DNNs) have led to the emergence of deep learning (DL) in QST. However, existing DL-based QST approaches often employ generic DNN models that are not optimized for the imperfect conditions of QST. In this paper, we propose a transformer-based autoencoder architecture tailored for QST with imperfect measurement data. Our method leverages a transformer-based encoder to extract an informative latent representation (ILR) from imperfect measurement data and employs a decoder to predict the quantum states based on the ILR. We anticipate that the high-dimensional ILR will capture more comprehensive information about the quantum states. To achieve this, we pre-train the encoder on a pretext task that involves reconstructing high-quality frequencies from measured frequencies. Extensive simulations and experiments demonstrate the remarkable ability of the informative latent representation to deal with imperfect measurement data in QST.
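A condensed PyTorch-style sketch of the encoder-decoder idea (our simplification: layer sizes, the Cholesky-factor parameterization, and all names are assumptions, and the pre-training pretext task is omitted):

```python
import torch
import torch.nn as nn

class QSTAutoencoder(nn.Module):
    # Encoder maps measured frequencies to a latent representation (ILR);
    # decoder predicts a valid density matrix from the ILR.
    def __init__(self, n_settings, n_outcomes, d_model=64, dim=4):
        super().__init__()
        self.embed = nn.Linear(n_outcomes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Predict real and imaginary parts of a lower-triangular factor L.
        self.decoder = nn.Linear(d_model * n_settings, 2 * dim * dim)
        self.dim = dim

    def forward(self, freqs):                      # (batch, settings, outcomes)
        ilr = self.encoder(self.embed(freqs))      # informative latent rep.
        out = self.decoder(ilr.flatten(1))
        re, im = out.chunk(2, dim=1)
        L = torch.complex(re, im).view(-1, self.dim, self.dim).tril()
        rho = L @ L.mH                             # PSD by construction
        trace = torch.diagonal(rho, dim1=-2, dim2=-1).sum(-1).real
        return rho / trace.view(-1, 1, 1)          # unit-trace density matrix
```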
arXiv:2405.02630v3 Announce Type: replace-cross Abstract: We present an efficient tensor-network-based approach for simulating large-scale quantum circuits, demonstrated using Quantum Support Vector Machines (QSVMs). Our method effectively reduces exponential runtime growth to near-quadratic scaling with respect to the number of qubits in practical scenarios. Traditional state-vector simulations become computationally infeasible beyond approximately 50 qubits; in contrast, our simulator successfully handles QSVMs with up to 784 qubits, completing simulations within seconds on a single high-performance GPU. Furthermore, by employing the Message Passing Interface (MPI) in multi-GPU environments, the approach shows strong linear scalability, reducing computation time as dataset size increases. We validate the framework on the MNIST and Fashion MNIST datasets, achieving successful multiclass classification and emphasizing the potential of QSVMs for high-dimensional data analysis. By integrating tensor-network techniques with high-performance computing resources, this work demonstrates both the feasibility and scalability of large-qubit quantum machine learning models, providing a valuable validation tool in the emerging Quantum-HPC ecosystem.