cs.CC
arXiv:2503.22613v1 Announce Type: new Abstract: A classical algorithm by Bellman and Ford from the 1950s computes shortest paths in weighted graphs on $n$ vertices and $m$ edges with possibly negative weights in $O(mn)$ time. Indeed, this algorithm is taught regularly in undergraduate Algorithms courses. In 2023, after nearly 70 years, Fineman \cite{fineman2024single} developed an $\tilde{O}(m n^{8/9})$ expected time algorithm for this problem. Huang, Jin and Quanrud improved on Fineman's startling breakthrough by providing an $\tilde{O}(m n^{4/5})$ time algorithm. This paper builds on ideas from those results to produce an $\tilde{O}(m\sqrt{n})$ expected time algorithm. The key to the improvement is the simple observation that distances can be updated in linear time with respect to the reduced costs induced by a price function. This observation alone almost immediately improves on the previous bounds. To obtain the final bound, this paper provides recursive versions of Fineman's structures.
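To make the reduced-cost observation concrete, here is a minimal Python sketch of the standard price-function machinery (as in Johnson's algorithm); the function names and edge-list representation are illustrative, not taken from the paper. Reduced costs telescope along any path, so distances computed under them translate back to true distances in linear time.

```python
def reduced_costs(edges, phi):
    # Reduced cost under a price function phi: w_phi(u, v) = w(u, v) + phi[u] - phi[v].
    # If phi is a valid potential, all reduced costs are nonnegative.
    return [(u, v, w + phi[u] - phi[v]) for (u, v, w) in edges]

def lift_distances(d_phi, phi, s):
    # Along any s-v path P, reduced costs telescope: w_phi(P) = w(P) + phi[s] - phi[v].
    # Hence dist_w(s, v) = dist_{w_phi}(s, v) - phi[s] + phi[v], an O(n) translation.
    return {v: d - phi[s] + phi[v] for v, d in d_phi.items()}
```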
arXiv:2503.21951v1 Announce Type: new Abstract: This work establishes conditional lower bounds for average-case {\em parity}-counting versions of the problems $k$-XOR, $k$-SUM, and $k$-OV. The main contribution is a set of self-reductions for the problems, providing the first specific distributions for which: $\mathsf{parity}\text{-}k\text{-}OV$ is $n^{\Omega(\sqrt{k})}$ average-case hard under the $k$-OV hypothesis (and hence under SETH), $\mathsf{parity}\text{-}k\text{-}SUM$ is $n^{\Omega(\sqrt{k})}$ average-case hard under the $k$-SUM hypothesis, and $\mathsf{parity}\text{-}k\text{-}XOR$ is $n^{\Omega(\sqrt{k})}$ average-case hard under the $k$-XOR hypothesis. Under the very believable hypothesis that at least one of the $k$-OV, $k$-SUM, $k$-XOR or $k$-Clique hypotheses is true, we show that parity-$k$-XOR, parity-$k$-SUM, and parity-$k$-OV all require at least $n^{\Omega(k^{1/3})}$ (and sometimes even more) time on average (for specific distributions). To achieve these results, we present a novel and improved framework for worst-case to average-case fine-grained reductions, building on the work of Dalirooyfard, Lincoln, and Vassilevska Williams, FOCS 2020.
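For concreteness, the parity-counting versions ask only for the number of solutions modulo 2. A brute-force Python reference for parity-$k$-SUM (illustrative, not the paper's construction) looks as follows; the paper's point is that even this single bit is hard to compute on average, for explicit input distributions.

```python
from itertools import combinations

def parity_k_sum(a, k, target=0):
    # Parity of the number of k-subsets of `a` summing to `target`.
    # Brute force in O(n^k) time; the conditional lower bounds say that
    # n^{Omega(sqrt(k))} time is needed even on average for specific distributions.
    return sum(1 for c in combinations(a, k) if sum(c) == target) % 2
```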
arXiv:2503.22633v1 Announce Type: new Abstract: Moment polytopes of tensors, the study of which is deeply rooted in invariant theory, representation theory and symplectic geometry, have found relevance in numerous places, from quantum information (entanglement polytopes) and algebraic complexity theory (GCT program and the complexity of matrix multiplication) to optimization (scaling algorithms). Towards an open problem in algebraic complexity theory, we prove separations between the moment polytopes of matrix multiplication tensors and unit tensors. As a consequence, we find that matrix multiplication moment polytopes are not maximal, i.e. are strictly contained in the corresponding Kronecker polytope. As another consequence, we obtain a no-go result for a natural operational characterization of moment polytope inclusion in terms of asymptotic restriction. We generalize the separation and non-maximality to moment polytopes of iterated matrix multiplication tensors. Our result implies that tensor networks where multipartite entanglement structures beyond two-party entanglement are allowed can go beyond projected entangled-pair states (PEPS) in terms of expressivity. Our proof characterizes membership of uniform points in moment polytopes of tensors, and establishes a connection to polynomial multiplication tensors via the minrank of matrix subspaces. As a result of independent interest, we extend these techniques to obtain a new proof of the optimal border subrank bound for matrix multiplication.
arXiv:2503.22650v1 Announce Type: cross Abstract: Free tensors are tensors which, after a change of bases, have free support: any two distinct elements of the support differ in at least two coordinates. They play a distinguished role in the theory of bilinear complexity, in particular in Strassen's duality theory for asymptotic rank. Within the context of quantum information theory, where tensors are interpreted as multiparticle quantum states, freeness corresponds to a type of multiparticle Schmidt decomposition. In particular, if a state is free in a given basis, the reduced density matrices are diagonal. Although generic tensors in $\mathbb{C}^n \otimes \mathbb{C}^n \otimes \mathbb{C}^n$ are non-free for $n \geq 4$ by parameter counting, no explicit non-free tensors were known until now. We solve this hay in a haystack problem by constructing explicit tensors that are non-free for every $n \geq 3$. In particular, this establishes that non-free tensors exist already in $\mathbb{C}^3 \otimes \mathbb{C}^3 \otimes \mathbb{C}^3$, where they are not generic. To establish non-freeness, we use results from geometric invariant theory and the theory of moment polytopes. In particular, we show that if a tensor $T$ is free, then there is a tensor $S$ in the GL-orbit closure of $T$, whose support is free and whose moment map image is the minimum-norm point of the moment polytope of $T$. This implies a reduction for checking non-freeness from arbitrary basis changes of $T$ to unitary basis changes of $S$. The unitary equivariance of the moment map can then be combined with the fact that tensors with free support have diagonal moment map image, in order to further restrict the set of relevant basis changes.
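Freeness of a fixed support is easy to test; the hard part is quantifying over all basis changes. A small Python sketch of the support condition (representation and names are illustrative):

```python
from itertools import combinations

def support_is_free(support):
    # support: collection of coordinate tuples of a tensor, e.g. (i, j, k) triples.
    # The support is free iff any two distinct elements differ in >= 2 coordinates.
    return all(sum(x != y for x, y in zip(p, q)) >= 2
               for p, q in combinations(list(support), 2))

# A tensor is free if SOME change of bases gives it a free support; ruling this
# out for every basis change is what requires the moment-polytope machinery above.
```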
arXiv:2503.16089v2 Announce Type: replace Abstract: We prove that an $\epsilon$-approximate fixpoint of a map $f:[0,1]^d\rightarrow [0,1]^d$ can be found with $\mathcal{O}(d^2(\log\frac{1}{\epsilon} + \log\frac{1}{1-\lambda}))$ queries to $f$ if $f$ is $\lambda$-contracting with respect to an $\ell_p$-metric for some $p\in [1,\infty)\cup\{\infty\}$. This generalizes a recent result of Chen, Li, and Yannakakis [STOC'24] from the $\ell_\infty$-case to all $\ell_p$-metrics. Previously, all query upper bounds for $p\in [1,\infty) \setminus \{2\}$ were either exponential in $d$, $\log\frac{1}{\epsilon}$, or $\log\frac{1}{1-\lambda}$. Chen, Li, and Yannakakis also show how to ensure that all queries to $f$ lie on a discrete grid of limited granularity in the $\ell_\infty$-case. We provide such a rounding for the $\ell_1$-case, placing an appropriately defined version of the $\ell_1$-case in $\textsf{FP}^{dt}$. To prove our results, we introduce the notion of $\ell_p$-halfspaces and generalize the classical centerpoint theorem from discrete geometry: for any $p \in [1, \infty) \cup \{\infty\}$ and any mass distribution (or point set), we prove that there exists a centerpoint $c$ such that every $\ell_p$-halfspace defined by $c$ and a normal vector contains at least a $\frac{1}{d+1}$-fraction of the mass (or points).
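For contrast with the $\mathcal{O}(d^2(\log\frac{1}{\epsilon} + \log\frac{1}{1-\lambda}))$ query bound, note what naive Banach iteration gives: roughly $\log(1/\epsilon)/(1-\lambda)$ queries, exponentially worse in the $\log\frac{1}{1-\lambda}$ parameter. A minimal sketch (names illustrative):

```python
import numpy as np

def fixpoint_by_iteration(f, x0, lam, eps, p=2):
    # Banach iteration for a lam-contraction f in the l_p metric.
    # Stopping rule: if ||f(x) - x||_p <= (1 - lam) * eps, then
    # ||x - x*|| <= ||x - f(x)|| + lam * ||x - x*|| implies ||f(x) - x*|| <= eps.
    # This needs about log(1/eps) / (1 - lam) queries -- far more than the
    # d^2 * (log(1/eps) + log(1/(1 - lam))) bound proved in the paper.
    x = np.asarray(x0, dtype=float)
    while True:
        fx = f(x)
        if np.linalg.norm(fx - x, ord=p) <= (1.0 - lam) * eps:
            return fx
        x = fx
```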
arXiv:2503.01005v2 Announce Type: replace-cross Abstract: Let $X$ be a $d$-partite $d$-dimensional simplicial complex with parts $T_1,\dots,T_d$ and let $\mu$ be a distribution on the facets of $X$. Informally, we say $(X,\mu)$ is a path complex if for any $i<j<k$ and $F \in T_i,G \in T_j, K\in T_k$, we have $\mathbb{P}_\mu[F,K | G]=\mathbb{P}_\mu[F|G]\cdot\mathbb{P}_\mu[K|G].$ We develop new machinery based on $\mathcal{C}$-Lorentzian polynomials to show that if all links of $X$ of co-dimension 2 have spectral expansion at most $1/2$, then $X$ is a $1/2$-local spectral expander. We then prove that one can derive fast-mixing results and log-concavity statements for top-link spectral expanders. We use our machinery to prove fast mixing results for sampling maximal flags of flats of distributive lattices (a.k.a. linear extensions of posets) subject to external fields, and to sample maximal flags of flats of "typical" modular lattices. We also use it to re-prove the Heron-Rota-Welsh conjecture and to prove a conjecture of Chan and Pak which gives a generalization of Stanley's log-concavity theorem. Lastly, we use it to prove near optimal trickle-down theorems for "sparse complexes" such as constructions by Lubotzky-Samuels-Vishne, Kaufman-Oppenheim, and O'Donnell-Pratt.
arXiv:2503.10393v1 Announce Type: new Abstract: Oredango, a pencil puzzle, was originally created by Kanaiboshi and published in the popular puzzle magazine Nikoli. In this paper, we show the NP- and ASP-completeness of Oredango by constructing a reduction from the 1-in-3SAT problem. Next, we formulate Oredango as a 0-1 integer-programming problem and present numerical results obtained by solving Oredango puzzles from Nikoli and PuzzleSquare JP using a 0-1 optimization solver.
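As a reference point for the reduction, 1-in-3SAT asks whether a formula with three literals per clause has an assignment making exactly one literal per clause true. A brute-force Python checker (illustrative only; the actual reduction encodes such clauses into puzzle gadgets):

```python
from itertools import product

def one_in_three_sat(clauses, n):
    # clauses: list of 3-tuples of nonzero ints; literal v > 0 means x_v,
    # v < 0 means NOT x_|v|. Satisfiable iff some assignment makes EXACTLY
    # one literal true in every clause. Brute force over all 2^n assignments.
    for bits in product((False, True), repeat=n):
        if all(sum((lit > 0) == bits[abs(lit) - 1] for lit in cl) == 1
               for cl in clauses):
            return True
    return False
```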
arXiv:2503.05934v2 Announce Type: replace Abstract: Recent work by Google DeepMind introduced assembly-optimized sorting networks that achieve faster performance for small fixed-size arrays (3-8). In this research, we investigate the integration of these networks as base cases in classical divide-and-conquer sorting algorithms, specifically Merge Sort and Quick Sort, to leverage these efficient sorting networks for small subarrays generated during the recursive process. We conducted benchmarks with 11 different optimization configurations and compared them to classical Merge Sort and Quick Sort. We tested the configurations with random, sorted and nearly sorted arrays. Our optimized Merge Sort, using a configuration of three sorting networks (sizes 6, 7, and 8), achieves at least 1.5x speedup for random and nearly sorted arrays, and at least 2x speedup for sorted arrays, in comparison to classical Merge Sort. This optimized Merge Sort surpasses both classical Quick Sort and similarly optimized Quick Sort variants when sorting random arrays of size 10,000 and larger. When comparing our optimized Quick Sort to classical Quick Sort, we observe a 1.5x speedup using the 3-to-5 configuration on sorted arrays of size 10,000. The 6-to-8 configuration maintains a consistent 1.5x improvement across sorted arrays from 25,000 to 1 million elements. Our findings demonstrate the potential of integrating AI-optimized sorting networks to enhance the performance of classical sorting algorithms.
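The integration pattern is straightforward: stop the recursion early and finish tiny subarrays with a fixed comparator network. The sketch below is a Python analogue only (the paper's networks are AlphaDev's assembly-level routines for sizes 3 to 8; the 3-input network here is a standard one, shown to illustrate the structure):

```python
# Fixed compare-exchange networks for tiny inputs (3-input network shown;
# the paper's configurations use optimized networks for sizes 3-8).
NETWORKS = {2: [(0, 1)], 3: [(0, 1), (1, 2), (0, 1)]}

def merge_sort(a):
    if len(a) <= 3:                      # base case: branch-light comparator network
        b = list(a)
        for i, j in NETWORKS.get(len(b), []):
            if b[i] > b[j]:
                b[i], b[j] = b[j], b[i]
        return b
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0                 # standard merge step
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```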
arXiv:2502.18382v2 Announce Type: replace Abstract: We extend the bounded degree graph model for property testing introduced by Goldreich and Ron (Algorithmica, 2002) to hypergraphs. In this framework, we analyse the query complexity of three fundamental hypergraph properties: colorability, $k$-partiteness, and independence number. We present a randomized algorithm for testing $k$-partiteness within families of $k$-uniform $n$-vertex hypergraphs of bounded treewidth whose query complexity does not depend on $n$. In addition, we prove optimal lower bounds of $\Omega(n)$ on the query complexity of testing algorithms for $k$-colorability, $k$-partiteness, and independence number in $k$-uniform $n$-vertex hypergraphs of bounded degree. For each of these properties, we consider the problem of explicitly constructing $k$-uniform hypergraphs of bounded degree that differ in $\Theta(n)$ hyperedges from any hypergraph satisfying the property, but where violations of the latter cannot be detected in any neighborhood of $o(n)$ vertices.
arXiv:2407.09301v3 Announce Type: replace-cross Abstract: We prove non-asymptotic total variation estimates for the kinetic Langevin algorithm in high dimension when the target measure satisfies a Poincar\'e inequality and has a gradient-Lipschitz potential. The main point is that the estimate improves significantly upon the corresponding bound for the non-kinetic version of the algorithm, due to Dalalyan. In particular, the dimension dependence drops from $O(n)$ to $O(\sqrt n)$.
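For orientation, the kinetic (underdamped) Langevin algorithm discretizes a second-order SDE in position and velocity, rather than the first-order overdamped dynamics behind the non-kinetic algorithm. A minimal sketch, assuming a standard Euler-type discretization (the scheme analyzed in the paper may differ, e.g., an exponential integrator):

```python
import numpy as np

def kinetic_langevin(grad_U, x0, steps, h, gamma=1.0, seed=0):
    # Euler-type discretization of the underdamped dynamics
    #   dX_t = V_t dt,  dV_t = -(gamma * V_t + grad U(X_t)) dt + sqrt(2 gamma) dB_t,
    # whose invariant law has X-marginal proportional to exp(-U(x)).
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    v = np.zeros_like(x)
    for _ in range(steps):
        noise = rng.standard_normal(x.shape)
        v = v - h * (gamma * v + grad_U(x)) + np.sqrt(2.0 * gamma * h) * noise
        x = x + h * v
    return x
```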
arXiv:2503.05312v1 Announce Type: new Abstract: A proper vertex coloring of a connected graph $G$ is called an odd coloring if, for every vertex $v$ in $G$, there exists a color that appears an odd number of times in the open neighborhood of $v$. The minimum number of colors required to obtain an odd coloring of $G$ is called the \emph{odd chromatic number} of $G$, denoted by $\chi_{o}(G)$. Determining $\chi_o(G)$ is known to be ${\sf NP}$-hard. Given a graph $G$ and an integer $k$, the \textsc{Odd Coloring} problem is to decide whether $\chi_o(G)$ is at most $k$. In this paper, we study the parameterized complexity of the problem, particularly with respect to structural graph parameters. We obtain the following results: \begin{itemize} \item We prove that the problem admits a polynomial kernel when parameterized by the distance to clique. \item We show that the problem cannot have a polynomial kernel when parameterized by the vertex cover number unless ${\sf NP} \subseteq {\sf Co {\text -} NP/poly}$. \item We show that the problem is fixed-parameter tractable when parameterized by distance to cluster, distance to co-cluster, or neighborhood diversity. \item We show that the problem is ${\sf W[1]}$-hard parameterized by clique-width. \end{itemize} Finally, we study the complexity of the problem on restricted graph classes. We show that it can be solved in polynomial time on cographs and split graphs but remains NP-complete on certain subclasses of bipartite graphs.
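The definition is easy to verify for a given coloring; the hardness lies in finding one. A small Python verifier, assuming an adjacency-set representation (names illustrative):

```python
from collections import Counter

def is_odd_coloring(adj, color):
    # adj: dict vertex -> set of neighbors; color: dict vertex -> color.
    for v, nbrs in adj.items():
        if any(color[u] == color[v] for u in nbrs):
            return False                 # not a proper coloring
        counts = Counter(color[u] for u in nbrs)
        if nbrs and not any(c % 2 == 1 for c in counts.values()):
            return False                 # no color occurs an odd number of times in N(v)
    return True
```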
arXiv:2503.05548v1 Announce Type: new Abstract: We study integer linear programs (ILP) of the form $\min\{c^\top x\ \vert\ Ax=b,l\le x\le u,x\in\mathbb Z^n\}$ and analyze their parameterized complexity with respect to their distance to the generalized matching problem--following the well-established approach of capturing the hardness of a problem by the distance to triviality. The generalized matching problem is an ILP where each column of the constraint matrix has $1$-norm of at most $2$. It captures several well-known polynomial time solvable problems such as matching and flow problems. We parameterize by the size of variable and constraint backdoors, which measure the least number of columns or rows that must be deleted to obtain a generalized matching ILP. We present the following results: (i) a fixed-parameter tractable (FPT) algorithm for ILPs parameterized by the size $p$ of a minimum variable backdoor to generalized matching; (ii) a randomized slice-wise polynomial (XP) time algorithm for ILPs parameterized by the size $h$ of a minimum constraint backdoor to generalized matching as long as $c$ and $A$ are encoded in unary; (iii) we complement (ii) by proving that solving an ILP is W[1]-hard when parameterized by $h$ even when $c,A,l,u$ have coefficients of constant size. To obtain (i), we prove a variant of lattice-convexity of the degree sequences of weighted $b$-matchings, which we study in the light of SBO jump M-convex functions. This allows us to model the matching part as a polyhedral constraint on the integer backdoor variables. The resulting ILP is solved in FPT time using an integer programming algorithm. For (ii), the randomized XP time algorithm is obtained by pseudo-polynomially reducing the problem to the exact matching problem. To prevent an exponential blowup in terms of the encoding length of $b$, we bound the Graver complexity of the constraint matrix and employ a Graver augmentation local search framework.
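One helpful observation about the variable backdoor: deleting a column never changes the 1-norm of any other column, so the minimum variable backdoor is exactly the set of columns with 1-norm above 2. The constraint (row) backdoor has no such locality, since deleting one row can lower many column norms at once, which is consistent with the W[1]-hardness in (iii). A sketch (representation illustrative):

```python
import numpy as np

def min_variable_backdoor(A):
    # Columns of A with 1-norm > 2 must be deleted to reach a generalized
    # matching ILP, and deleting columns leaves all other columns untouched,
    # so this set IS the minimum variable backdoor (its size is the parameter p).
    A = np.asarray(A)
    return [j for j in range(A.shape[1]) if np.abs(A[:, j]).sum() > 2]
```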
arXiv:2404.19005v2 Announce Type: replace-cross Abstract: Realizing computationally complex quantum circuits in the presence of noise and imperfections is a challenging task. While fault-tolerant quantum computing provides a route to reducing noise, it requires a large overhead for generic algorithms. Here, we develop and analyze a hardware-efficient, fault-tolerant approach to realizing complex sampling circuits. We co-design the circuits with the appropriate quantum error correcting codes for efficient implementation in a reconfigurable neutral atom array architecture, constituting what we call a fault-tolerant compilation of the sampling algorithm. Specifically, we consider a family of $[[2^D , D, 2]]$ quantum error detecting codes whose transversal and permutation gate set can realize arbitrary degree-$D$ instantaneous quantum polynomial (IQP) circuits. Using native operations of the code and the atom array hardware, we compile a fault-tolerant and fast-scrambling family of such IQP circuits in a hypercube geometry, realized recently in the experiments by Bluvstein et al. [Nature 626, 7997 (2024)]. We develop a theory of second-moment properties of degree-$D$ IQP circuits for analyzing hardness and verification of random sampling by mapping to a statistical mechanics model. We provide evidence that sampling from hypercube IQP circuits is classically hard to simulate and analyze the linear cross-entropy benchmark (XEB) in comparison to the average fidelity. To realize a fully scalable approach, we first show that Bell sampling from degree-$4$ IQP circuits is classically intractable and can be efficiently validated. We further devise new families of $[[O(d^D),D,d]]$ color codes of increasing distance $d$, permitting exponential error suppression for transversal IQP sampling. Our results highlight fault-tolerant compiling as a powerful tool in co-designing algorithms with specific error-correcting codes and realistic hardware.
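As background, a degree-$D$ IQP circuit sandwiches a diagonal phase unitary, whose phase is a degree-$D$ polynomial of the input bits, between two Hadamard layers. A brute-force statevector sketch (the phase angles and their degree-$D$ monomial support are free parameters here; the paper's specific hypercube construction is not reproduced):

```python
import numpy as np

def iqp_output_probs(n, phases):
    # phases: dict {tuple of qubit indices S: theta_S}, all |S| <= D, defining
    # f(z) = sum_S theta_S * prod_{j in S} z_j for bits z in {0,1}^n.
    # Circuit: H^(xn) . diag(e^{i f(z)}) . H^(xn) applied to |0...0>.
    dim = 1 << n
    bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1      # dim x n bit table
    f = np.zeros(dim)
    for S, theta in phases.items():
        f += theta * bits[:, list(S)].prod(axis=1)
    state = np.exp(1j * f) / np.sqrt(dim)    # first Hadamard layer + diagonal phases
    h = 1                                    # second layer: fast Walsh-Hadamard transform
    while h < dim:
        for i in range(0, dim, 2 * h):
            a, b = state[i:i + h].copy(), state[i + h:i + 2 * h].copy()
            state[i:i + h], state[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    state /= np.sqrt(dim)
    return np.abs(state) ** 2                # sampling distribution over bitstrings
```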
arXiv:2503.05062v1 Announce Type: new Abstract: Despite tremendous research on decoding Reed-Solomon (RS) and algebraic geometry (AG) codes under the random and adversarial substitution error models, few studies have explored these codes under the burst substitution error model. Burst errors are prevalent in many communication channels, such as wireless networks, magnetic recording systems, and flash memory. Compared to random and adversarial errors, burst errors often allow for the design of more efficient decoding algorithms. However, achieving both an optimal decoding radius and quasi-linear time complexity for burst error correction remains a significant challenge. The goal of this paper is to design (both list and probabilistic unique) decoding algorithms for RS and AG codes that achieve the Singleton bound for decoding radius while maintaining quasi-linear time complexity. Our idea is to build a one-to-one correspondence between AG codes (including RS codes) and interleaved RS codes with shorter code lengths (or even constant lengths). By decoding the interleaved RS codes with burst errors, we derive efficient decoding algorithms for RS and AG codes. For decoding interleaved RS codes with shorter code lengths, we can employ either naive methods or existing algorithms. This one-to-one correspondence is constructed using the generalized fast Fourier transform (G-FFT) proposed by Li and Xing (SODA 2024). The G-FFT generalizes the divide-and-conquer technique from polynomials to algebraic function fields. More precisely, assume that our AG code is defined over a function field $E$ which has a sequence of subfields $\mathbb{F}_q(x)=E_r\subseteq E_{r-1}\subseteq \cdots \subseteq E_1\subseteq E_0=E$ such that $E_{i-1}/E_i$ is a Galois extension for $1\le i\le r$. Then the AG code based on $E$ can be transformed into an interleaved RS code over the rational function field $\mathbb{F}_q(x)$.
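The classical intuition for why interleaving helps with bursts: if a length-$n$ word interleaves $\ell$ constituent words, a burst of length $b$ corrupts at most $\lceil b/\ell \rceil$ positions in each constituent word. A toy Python illustration (the paper's correspondence goes through the G-FFT and subfield towers, not plain index interleaving):

```python
import math

def deinterleave(word, ell):
    # Symbol at index i of the interleaved word belongs to constituent word i mod ell.
    return [word[j::ell] for j in range(ell)]

def burst_spread(ell, start, b):
    # Positions of each constituent word that a length-b burst can corrupt.
    hit = [0] * ell
    for i in range(start, start + b):
        hit[i % ell] += 1
    assert max(hit) <= math.ceil(b / ell)   # each word sees only a few errors
    return hit
```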
arXiv:2503.05572v1 Announce Type: cross Abstract: We study groups of reversible cellular automata, or CA groups, on groups. More generally, we consider automorphism groups of subshifts of finite type on groups. It is known that word problems of CA groups on virtually nilpotent groups are in co-NP, and can be co-NP-hard. We show that under the Gap Conjecture of Grigorchuk, their word problems are PSPACE-hard on all other groups. On free and surface groups, we show that they are indeed always in PSPACE. On a group with co-NEXPTIME word problem, CA groups themselves have co-NEXPTIME word problem, and on the lamplighter group (which itself has polynomial-time word problem) we show they can be co-NEXPTIME-hard. We also show two nonembeddability results: the group of cellular automata on a non-cyclic free group does not embed in the group of cellular automata on the integers (this solves a question of Barbieri, Carrasco-Vargas and Rivera-Burgos); and the group of cellular automata in dimension $D$ does not embed in a group of cellular automata in dimension $d$ if $D \geq 3d+2$ (this solves a question of Hochman).
arXiv:2503.04731v1 Announce Type: new Abstract: Answer Set Programming (ASP) is a prominent problem-modeling and solving framework, whose solutions are called answer sets. Epistemic logic programs (ELP) extend ASP to reason about all or some answer sets. Solutions to an ELP can be seen as consequences over multiple collections of answer sets, known as world views. While the complexity of propositional programs is well studied, the non-ground case remains open. This paper establishes the complexity of non-ground ELPs. We provide a comprehensive picture for well-known program fragments, which turns out to be complete for the class NEXPTIME with access to oracles up to $\Sigma^P_2$. In the quantitative setting, we establish complexity results for counting complexity beyond #EXP. To mitigate high complexity, we establish results in case of bounded predicate arity, reaching up to the fourth level of the polynomial hierarchy. Finally, we provide ETH-tight runtime results for the parameter treewidth, which has applications in quantitative reasoning, where we reason on (marginal) probabilities of epistemic literals.
arXiv:2502.21240v2 Announce Type: replace Abstract: We consider the problem of preprocessing an $n\times n$ matrix M, and supporting queries that, for any vector v, return the matrix-vector product Mv. This problem has been extensively studied in both theory and practice: on one side, practitioners have developed algorithms that are highly efficient in practice, whereas theoreticians have proven that the problem cannot be solved faster than naive multiplication in the worst-case. This lower bound holds even in the average-case, implying that existing average-case analyses cannot explain this gap between theory and practice. Therefore, we study the problem for structured matrices. We show that for $n\times n$ matrices of VC-dimension d, the matrix-vector multiplication problem can be solved with $\tilde{O}(n^2)$ preprocessing and $\tilde O(n^{2-1/d})$ query time. Given the low constant VC-dimensions observed in most real-world data, our results posit an explanation for why the problem can be solved so much faster in practice. Moreover, our bounds hold even if the matrix does not have a low VC-dimension, but is obtained by (possibly adversarially) corrupting at most a subquadratic number of entries of any unknown low VC-dimension matrix. Our results yield the first non-trivial upper bounds for many applications. In previous works, the online matrix-vector hypothesis (conjecturing that quadratic time is needed per query) was used to prove many conditional lower bounds, showing that it is impossible to compute and maintain high-accuracy estimates for shortest paths, Laplacian solvers, effective resistance, and triangle detection in graphs subject to node insertions and deletions in subquadratic time. Yet, via a reduction to our matrix-vector-multiplication result, we show that we can maintain the aforementioned problems efficiently if the input is structured, providing the first subquadratic upper bounds in the high-accuracy regime.
arXiv:2501.10633v1 Announce Type: new Abstract: We introduce the meta-problem Sidestep$(\Pi, \mathsf{dist}, d)$ for a problem $\Pi$, a metric $\mathsf{dist}$ over its inputs, and a map $d: \mathbb N \to \mathbb R_+ \cup \{\infty\}$. A solution to Sidestep$(\Pi, \mathsf{dist}, d)$ on an input $I$ of $\Pi$ is a pair $(J, \Pi(J))$ such that $\mathsf{dist}(I,J) \leqslant d(|I|)$ and $\Pi(J)$ is a correct answer to $\Pi$ on input $J$. This formalizes the notion of answering a related question (or sidestepping the question), for which we give some practical and theoretical motivations, and compare it to the neighboring concepts of smoothed analysis, planted problems, and edition problems. Informally, we call hardness radius the ``largest'' $d$ such that Sidestep$(\Pi, \mathsf{dist}, d)$ is NP-hard. This framework calls for establishing the hardness radius of problems $\Pi$ of interest for the relevant distances $\mathsf{dist}$. We exemplify it with graph problems and two distances $\mathsf{dist}_\Delta$ and $\mathsf{dist}_e$ (the edge edit distance) such that $\mathsf{dist}_\Delta(G,H)$ (resp. $\mathsf{dist}_e(G,H)$) is the maximum degree (resp. number of edges) of the symmetric difference of $G$ and $H$ if these graphs are on the same vertex set, and $+\infty$ otherwise. We show that the decision problems Independent Set, Clique, Vertex Cover, Coloring, Clique Cover have hardness radius $n^{\frac{1}{2}-o(1)}$ for $\mathsf{dist}_\Delta$, and $n^{\frac{4}{3}-o(1)}$ for $\mathsf{dist}_e$, that Hamiltonian Cycle has hardness radius 0 for $\mathsf{dist}_\Delta$, and somewhere between $n^{\frac{1}{2}-o(1)}$ and $n/3$ for $\mathsf{dist}_e$, and that Dominating Set has hardness radius $n^{1-o(1)}$ for $\mathsf{dist}_e$. We leave several open questions.
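The two distances are concrete enough to compute directly. A small Python sketch of $\mathsf{dist}_\Delta$ and $\mathsf{dist}_e$ under the stated convention (graphs on different vertex sets are at distance $+\infty$; representation illustrative):

```python
def _sym_diff(G, H):
    # G, H: (vertex_set, edge_set) with edges as frozensets {u, v}.
    (VG, EG), (VH, EH) = G, H
    return EG ^ EH if VG == VH else None

def dist_e(G, H):
    # Number of edges of the symmetric difference (the edge edit distance).
    D = _sym_diff(G, H)
    return float("inf") if D is None else len(D)

def dist_delta(G, H):
    # Maximum degree of the symmetric difference.
    D = _sym_diff(G, H)
    if D is None:
        return float("inf")
    deg = {}
    for e in D:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    return max(deg.values(), default=0)
```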
arXiv:2501.10688v1 Announce Type: new Abstract: Looped Transformers have shown exceptional capability in simulating traditional graph algorithms, but their application to more complex structures like hypergraphs remains underexplored. Hypergraphs generalize graphs by modeling higher-order relationships among multiple entities, enabling richer representations but introducing significant computational challenges. In this work, we extend the Loop Transformer architecture to simulate hypergraph algorithms efficiently, addressing the gap between neural networks and combinatorial optimization over hypergraphs. Specifically, we propose a novel degradation mechanism for reducing hypergraphs to graph representations, enabling the simulation of graph-based algorithms, such as Dijkstra's shortest path. Furthermore, we introduce a hyperedge-aware encoding scheme to simulate hypergraph-specific algorithms, exemplified by Helly's algorithm. The paper establishes theoretical guarantees for these simulations, demonstrating the feasibility of processing high-dimensional and combinatorial data using Loop Transformers. This work highlights the potential of Transformers as general-purpose algorithmic solvers for structured data.
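The abstract does not spell out the degradation mechanism. A common way to reduce a hypergraph to a graph so that graph algorithms like Dijkstra's apply is clique expansion, sketched below as an illustrative stand-in (not necessarily the paper's construction):

```python
import heapq
from itertools import combinations

def clique_expand(hyperedges, weight=lambda e: 1.0):
    # Replace each hyperedge by a clique on its vertices, keeping the lightest
    # weight when several hyperedges induce the same pair.
    adj = {}
    for e in hyperedges:
        for u, v in combinations(e, 2):
            w = min(weight(e), adj.get(u, {}).get(v, float("inf")))
            adj.setdefault(u, {})[v] = w
            adj.setdefault(v, {})[u] = w
    return adj

def dijkstra(adj, s):
    # Textbook Dijkstra on the expanded graph.
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist
```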
arXiv:2501.11683v1 Announce Type: new Abstract: Flesh and Blood (FAB) is a trading card game in which two players devise strategies to reduce their opponent's life points to zero. The game's mechanics present complex decision-making scenarios centered on resource management. As in other card games, strategic play gives rise to scenarios that can turn into NP-hard problems. This paper presents a model of an aggressive, single-turn strategy as a combinatorial optimization problem, termed the FAB problem. Using mathematical modeling, we demonstrate its equivalence to a 0-1 Knapsack problem, establishing the FAB problem as NP-hard. Additionally, an Integer Linear Programming (ILP) formulation is proposed to tackle real-world instances of the problem. By establishing the computational hardness of optimizing even relatively simple strategies, our work highlights the combinatorial complexity of the game.
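Given the stated equivalence to 0-1 Knapsack, the optimization core looks like the classic dynamic program below; the mapping of card plays to (value, cost) pairs is a hypothetical illustration, not the paper's exact formulation:

```python
def knapsack_01(values, costs, budget):
    # Classic 0-1 knapsack DP in O(n * budget) time and O(budget) space.
    # Hypothetical FAB reading: values ~ damage dealt by each card play,
    # costs ~ resources the play consumes, budget ~ resources this turn.
    dp = [0] * (budget + 1)
    for val, c in zip(values, costs):
        for b in range(budget, c - 1, -1):   # descending: each card used at most once
            dp[b] = max(dp[b], dp[b - c] + val)
    return dp[budget]
```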