cs.DS

arXiv:2503.09508v2 Announce Type: replace Abstract: The online randomized primal-dual method has widespread applications in online algorithm design and analysis. A key challenge is identifying an appropriate function space, $F$, in which we search for an optimal updating function $f \in F$ that yields the best possible lower bound on the competitiveness of a given algorithm. The choice of $F$ must balance two competing objectives: on one hand, it should impose sufficient simplifying conditions on $f$ to facilitate worst-case analysis and establish a valid lower bound; on the other hand, it should remain general enough to offer a broad selection of candidate functions. The tradeoff is that any additional constraints on $f$ that can facilitate competitive analysis may also lead to a suboptimal choice, weakening the resulting lower bound. To address this challenge, we propose an auxiliary-LP-based framework capable of effectively approximating the best possible competitiveness achievable when applying the randomized primal-dual method to different function spaces. Specifically, we examine the framework introduced by Huang and Zhang (STOC 2020), which analyzes Stochastic Balance for vertex-weighted online matching with stochastic rewards. Our approach yields both lower and upper bounds on the best possible competitiveness attainable using the randomized primal-dual method for different choices of $F$. Notably, we establish that Stochastic Balance achieves a competitiveness of at least $0.5796$ for the problem (under equal vanishing probabilities), improving upon the previous bound of $0.576$ by Huang and Zhang (STOC 2020). Meanwhile, our analysis yields an upper bound of $0.5810$ for a function space strictly larger than that considered in Huang and Zhang (STOC 2020).

Pan Xu, 3/14/2025
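
The auxiliary-LP idea can be illustrated on the classical randomized primal-dual analysis of online bipartite matching, where one searches a space of non-decreasing updating functions $g$ for the one maximizing the certified ratio. Below is a minimal sketch, assuming the textbook constraint $\int_0^{\hat y} g(t)\,dt + 1 - g(\hat y) \ge \Gamma$ rather than the stochastic-rewards constraints analyzed in the paper; the grid size and solver are illustrative choices.

```python
# Hedged sketch: discretize the function space and solve an auxiliary LP
# that maximizes the certified competitive ratio Gamma. The constraint is
# the classical online-matching one, not the paper's stochastic setting.
import numpy as np
from scipy.optimize import linprog

N = 200                                  # grid points for y in [0, 1]
dy = 1.0 / N
c = np.zeros(N + 1)
c[-1] = -1.0                             # maximize Gamma = minimize -Gamma

A_ub, b_ub = [], []
for i in range(N):                       # Gamma <= dy*sum_{j<i} g_j + 1 - g_i
    row = np.zeros(N + 1)
    row[:i] = -dy
    row[i] += 1.0
    row[-1] = 1.0
    A_ub.append(row); b_ub.append(1.0)
for i in range(N - 1):                   # monotonicity: g_i <= g_{i+1}
    row = np.zeros(N + 1)
    row[i], row[i + 1] = 1.0, -1.0
    A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (N + 1))
print(-res.fun)                          # approx 1 - 1/e ~ 0.632
```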

arXiv:2503.09908v1 Announce Type: new Abstract: We present a work-optimal algorithm for parallel fully batch-dynamic maximal matching against an oblivious adversary. In particular, it processes batches of updates (either insertions or deletions of edges) in constant expected amortized work per edge update, and in $O(\log^3 m)$ depth per batch whp, where $m$ is the maximum number of edges in the graph over time. This greatly improves on the recent result of Ghaffari and Trygub (2024), which requires $O(\log^9 m)$ amortized work per update and $O(\log^4 m)$ depth per batch, both whp. The algorithm can also be used for hyperedge maximal matching. For hypergraphs with rank $r$ (maximum cardinality of any edge), the algorithm supports batches of insertions and deletions with $O(r^3)$ expected amortized work per edge update, and $O(\log^3 m)$ depth per batch whp. This is within a factor of $O(r)$ of the work of the best sequential algorithm, by Assadi and Solomon (2021), which uses $O(r^2)$ work per update. Ghaffari and Trygub's parallel batch-dynamic algorithm on hypergraphs requires $O(r^8 \log^9 m)$ amortized work per edge update whp. We leverage ideas from the prior algorithms but introduce substantial new ones. Furthermore, our algorithm is relatively simple, perhaps even simpler than the sequential hyperedge algorithm. We also present the first work-efficient algorithm for maximal matching on hypergraphs. For a hypergraph with total cardinality $m'$ (i.e., the sum over the cardinality of each edge), the algorithm runs in $O(m')$ work in expectation and $O(\log^2 m)$ depth whp. The algorithm also has some properties that allow us to use it as a subroutine in the dynamic algorithm to select random edges in the graph to add to the matching.

Guy E. Blelloch, Andrew C. Brady, 3/14/2025
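
At its core, the dynamic algorithm maintains what a random-order greedy pass computes from scratch. The sketch below is that sequential baseline for graphs and hypergraphs (edges as vertex tuples); it is a minimal reference point, not the parallel batch-dynamic data structure itself.

```python
# Random-order greedy maximal matching; works unchanged for rank-r
# hyperedges, since an edge is added only if all its endpoints are free.
import random

def greedy_maximal_matching(edges, seed=0):
    rng = random.Random(seed)
    order = list(edges)
    rng.shuffle(order)                        # random priority order
    matched, matching = set(), []
    for e in order:
        if all(v not in matched for v in e):  # every endpoint still free
            matching.append(e)
            matched.update(e)
    return matching

print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3), (0, 2, 3)]))
```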

arXiv:2503.10158v1 Announce Type: cross Abstract: Integral linear systems $Ax=b$, in which the matrix $A$ and vector $b$ are integral and the solutions $x$ are also required to be integral, can be solved using the invariant factors of $A$ (by computing the Smith Canonical Form of $A$). This paper explores a new problem which arises in applications: obtaining conditions for solving the modular linear system $Ax=b \bmod n$, given $A,b$ over $\mathbb{Z}_n$, for $x$ in $\mathbb{Z}_n$, along with the constraint that the value of the linear function $\phi(x)=\langle w,x\rangle$ is coprime to $n$ for some solution $x$. In this paper we develop a decomposition of the system into coprime moduli $p^{r(p)}$ which are divisors of $n$ and show how such a decomposition simplifies the computation of the Smith form. This extends the well-known index calculus method of computing the discrete logarithm, in which the moduli over which the linear system is reduced were assumed to be prime (so that the reduced systems are solved over prime fields), to the case when the factors of the modulus are prime powers $p^{r(p)}$. It is shown how this problem can be addressed efficiently using the invariant factors and Smith form of the augmented matrix $[A, -p^{r(p)}I]$ and conditions modulo $p$ satisfied by $w$, where $p^{r(p)}$ varies over all divisors of $n$ with $p$ prime.

Virendra Sule, 3/14/2025
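
A toy version of the decomposition, assuming tiny moduli so that brute force can stand in for the Smith-form machinery: solve the system modulo each prime power dividing $n$, then recombine the coordinates with the Chinese Remainder Theorem. The matrix, right-hand side, and modulus below are made-up examples.

```python
# Toy sketch of the prime-power decomposition of A x = b (mod n):
# solve modulo each p^r dividing n, then lift with CRT. Brute force
# replaces the paper's Smith-form computation and only works for tiny q.
from itertools import product
from sympy import factorint
from sympy.ntheory.modular import crt

def solve_mod_q(A, b, q):
    for x in product(range(q), repeat=len(A[0])):
        if all(sum(a * xi for a, xi in zip(row, x)) % q == bi % q
               for row, bi in zip(A, b)):
            return x                      # first solution found

def solve_mod_n(A, b, n):
    moduli = [p**r for p, r in factorint(n).items()]
    sols = [solve_mod_q(A, b, q) for q in moduli]
    # any per-modulus choices lift to a solution mod n via the CRT isomorphism
    return [int(crt(moduli, [s[i] for s in sols])[0]) for i in range(len(sols[0]))]

A, b, n = [[2, 3], [1, 4]], [4, 2], 45    # 45 = 3^2 * 5
print(solve_mod_n(A, b, n))               # [20, 18]: 2*20 + 3*18 = 94 = 4 (mod 45)
```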

arXiv:2410.07388v2 Announce Type: replace Abstract: The Densest $k$-Subgraph (D$k$S) problem aims to find a subgraph comprising $k$ vertices with the maximum number of edges between them. A continuous reformulation of the binary quadratic D$k$S problem is considered, which incorporates a diagonal loading term. It is shown that this non-convex, continuous relaxation is tight for a range of diagonal loading parameters, and the impact of the diagonal loading parameter on the optimization landscape is studied. On the algorithmic side, two projection-free algorithms are proposed to tackle the relaxed problem, based on Frank-Wolfe and explicit constraint parametrization, respectively. Experiments suggest that both algorithms have merits relative to the state of the art, while the Frank-Wolfe-based algorithm stands out in terms of subgraph density, computational complexity, and ability to scale up to very large datasets.

Qiheng Lu, Nicholas D. Sidiropoulos, Aritra Konar, 3/14/2025
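
The Frank-Wolfe variant is easy to sketch: over the polytope $\{x \in [0,1]^n : \sum_i x_i = k\}$, the linear maximization oracle simply activates the $k$ largest gradient coordinates. Below is a minimal version; the loading parameter $\rho$, the iteration count, and the final rounding are illustrative assumptions, not the paper's tuned choices.

```python
# Frank-Wolfe on the diagonally loaded DkS relaxation:
#   maximize x^T (A + rho I) x  over  { x in [0,1]^n : sum(x) = k }.
import numpy as np

def dks_frank_wolfe(A, k, rho=1.0, iters=500):
    n = A.shape[0]
    x = np.full(n, k / n)                    # feasible uniform start
    for t in range(iters):
        grad = 2.0 * (A @ x + rho * x)
        s = np.zeros(n)
        s[np.argsort(grad)[-k:]] = 1.0       # LMO: top-k coordinates of grad
        x += (2.0 / (t + 2.0)) * (s - x)     # standard FW step size
    return np.argsort(x)[-k:]                # round: keep k largest entries

A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4)]:
    A[u, v] = A[v, u] = 1.0
print(sorted(dks_frank_wolfe(A, 3)))         # the triangle {0, 1, 2}
```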

arXiv:2503.10416v1 Announce Type: new Abstract: Runtime repeated recursion unfolding was recently introduced as a just-in-time program transformation strategy that can achieve super-linear speedup. So far, the method was restricted to single linear direct recursive rules in the programming language Constraint Handling Rules (CHR). In this companion paper, we generalize the technique to multiple recursion and to multiple recursive rules and provide an implementation of the generalized method in the logic programming language Prolog. The basic idea of the approach is as follows: when a recursive call is encountered at runtime, the recursive rule is unfolded with itself, and this process is repeated with each resulting unfolded rule as long as it is applicable to the current call. In this way, more and more recursive steps are combined into one recursive step. Then an interpreter applies these rules to the call, starting from the most unfolded rule. For recursions whose unfoldings simplify sufficiently, a super-linear speedup can be achieved, i.e., the time complexity is reduced. We implement an unfolder, a generalized meta-interpreter, and a novel round-robin rule processor for our generalization of runtime repeated recursion unfolding with just ten clauses in Prolog. We illustrate the feasibility of our technique with worst-case time complexity estimates and benchmarks for some basic classical algorithms that achieve a super-linear speedup.

Thom Fruehwirth, 3/14/2025
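
A rough Python analogue of why repeated unfolding can give super-linear speedup (the paper itself works with CHR rules in Prolog): when each recursive step is an affine map, the unfolded "double steps" simplify back into single affine maps, so $n$ recursive calls collapse into $O(\log n)$ composed steps. All names here are illustrative.

```python
# Illustrative analogue: unfold the linear recursion x -> a*x + b with
# itself; the composition of two affine maps is again affine, so each
# unfolding halves the remaining steps (binary powering of the step map).
def compose_affine(f, g):            # f after g: x -> a1*(a2*x + b2) + b1
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a1 * b2 + b1)

def run_unfolded(a, b, x0, n):
    step, result = (a, b), (1, 0)    # (1, 0) is the identity map
    while n:
        if n & 1:
            result = compose_affine(step, result)
        step = compose_affine(step, step)   # the "unfolding" of step with itself
        n >>= 1
    ra, rb = result
    return ra * x0 + rb

print(run_unfolded(3, 1, 2, 10))     # == applying x -> 3x+1 ten times to 2
```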

arXiv:2503.09802v1 Announce Type: new Abstract: We study the task of list-decodable linear regression using batches. A batch is called clean if it consists of i.i.d. samples from an unknown linear regression distribution. For a parameter $\alpha \in (0, 1/2)$, an unknown $\alpha$-fraction of the batches are clean and no assumptions are made on the remaining ones. The goal is to output a small list of vectors at least one of which is close to the true regressor vector in $\ell_2$-norm. [DJKS23] gave an efficient algorithm, under natural distributional assumptions, with the following guarantee. Assuming that the batch size $n$ satisfies $n \geq \tilde{\Omega}(\alpha^{-1})$ and the number of batches is $m = \mathrm{poly}(d, n, 1/\alpha)$, their algorithm runs in polynomial time and outputs a list of $O(1/\alpha^2)$ vectors at least one of which is $\tilde{O}(\alpha^{-1/2}/\sqrt{n})$ close to the target regressor. Here we design a new polynomial-time algorithm with significantly stronger guarantees under the assumption that the low-degree moments of the covariate distribution are Sum-of-Squares (SoS) certifiably bounded. Specifically, for any constant $\delta>0$, as long as the batch size is $n \geq \Omega_{\delta}(\alpha^{-\delta})$ and the degree-$\Theta(1/\delta)$ moments of the covariates are SoS certifiably bounded, our algorithm uses $m = \mathrm{poly}((dn)^{1/\delta}, 1/\alpha)$ batches, runs in polynomial time, and outputs an $O(1/\alpha)$-sized list of vectors one of which is $O(\alpha^{-\delta/2}/\sqrt{n})$ close to the target. That is, our algorithm achieves substantially smaller minimum batch size and final error, while attaining the optimal list size. Our approach uses higher-order moment information by carefully combining the SoS paradigm interleaved with an iterative method and a novel list pruning procedure. In the process, we give an SoS proof of the Marcinkiewicz-Zygmund inequality that may be of broader applicability.

Ilias Diakonikolas, Daniel M. Kane, Sushrut Karmalkar, Sihan Liu, Thanasis Pittas, 3/14/2025

arXiv:2503.10447v1 Announce Type: new Abstract: In the Feedback Arc Set in Tournaments (FAST) problem, we are given a tournament $D$ and a positive integer $k$, and the objective is to determine whether there exists an arc set $S \subseteq A(D)$ of size at most $k$ whose removal makes the digraph acyclic. This problem is well known to be equivalent to a natural tournament ranking problem, whose task is to rank players in a tournament such that the number of pairs in which the lower-ranked player defeats the higher-ranked player is no more than $k$. Using the PTAS for FAST [STOC 2007], Bessy et al. [JCSS 2011] present a $(2 + \varepsilon)k$-vertex kernel for this problem, for any fixed $\varepsilon > 0$. A generalization of FAST, called Subset-FAST, further includes an additional terminal subset $T \subseteq V(D)$ in the input. The goal of Subset-FAST is to determine whether there is an arc set $S \subseteq A(D)$ of size at most $k$ whose removal ensures that no directed cycle passes through any terminal in $T$. Prior to our work, no polynomial kernel for Subset-FAST was known. In our work, we show that Subset-FAST admits an $\mathcal{O}((\alpha k)^{2})$-vertex kernel, provided that Subset-FAST has an approximation algorithm with approximation ratio $\alpha$. Consequently, based on the known $\mathcal{O}(\log k \log \log k)$-approximation algorithm, we obtain an almost quadratic kernel for Subset-FAST.

Tian Bai, 3/14/2025

arXiv:2503.10541v1 Announce Type: new Abstract: In a digraph $D$, an arc $e=(x,y)$ in $D$ is called transitive if there is a path from $x$ to $y$ in $D - e$. A digraph is transitive-free if it does not contain any transitive arc. In the Transitive-free Vertex Deletion (TVD) problem, the goal is to find a set $S$ of at most $k$ vertices such that $D-S$ has no transitive arcs. In our work, we study a more general version of the TVD problem, denoted by $\ell$-Relaxed Transitive-free Vertex Deletion ($\ell$-RTVD), where we look for a set $S$ of at most $k$ vertices such that $D-S$ has no more than $\ell$ transitive arcs. We explore $\ell$-RTVD on various well-known classes of digraphs such as directed acyclic graphs (DAGs), planar DAGs, $\alpha$-bounded digraphs, tournaments, and their multiple generalizations such as in-tournaments, out-tournaments, local tournaments, and acyclic local tournaments, and obtain the following results. Although the problem admits polynomial-time algorithms in tournaments, $\alpha$-bounded digraphs, and acyclic local tournaments for fixed values of $\ell$, it remains NP-hard even in planar DAGs with maximum degree 6. In the parameterized realm, for $\ell$-RTVD on in-tournaments and out-tournaments of bounded independence number, we obtain polynomial kernels parameterized by $k+\ell$. However, the problem remains fixed-parameter intractable on DAGs when parameterized by $k$.

Ankit Abhinav, Satyabrata Jana, Abhishek Sahu, 3/14/2025
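
For concreteness, the sketch below checks the defining property directly with networkx: an arc $(x,y)$ is transitive exactly when $x$ still reaches $y$ after the arc itself is removed. This is only the definition rendered as executable code, not any of the paper's algorithms.

```python
# Count the transitive arcs of a digraph, the quantity that l-RTVD
# drives below a budget l after deleting at most k vertices.
import networkx as nx

def transitive_arcs(D: nx.DiGraph):
    out = []
    for x, y in D.edges():
        H = D.copy()
        H.remove_edge(x, y)
        if nx.has_path(H, x, y):     # (x, y) is transitive: x reaches y in D - e
            out.append((x, y))
    return out

D = nx.DiGraph([(1, 2), (2, 3), (1, 3)])
print(transitive_arcs(D))            # [(1, 3)]
```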

arXiv:2301.13735v2 Announce Type: replace Abstract: A class of graphs $\mathscr{C}$ is monadically stable if for any unary expansion $\widehat{\mathscr{C}}$ of $\mathscr{C}$, one cannot interpret, in first-order logic, arbitrarily long linear orders in graphs from $\widehat{\mathscr{C}}$. It is known that nowhere dense graph classes are monadically stable; these encompass most of the studied concepts of sparsity in graphs, including graph classes that exclude a fixed topological minor. On the other hand, monadic stability is a property expressed in purely model-theoretic terms and hence it is also suited for capturing structure in dense graphs. For several years, it has been suspected that one can create a structure theory for monadically stable graph classes that mirrors the theory of nowhere dense graph classes in the dense setting. In this work we provide a step in this direction by giving a characterization of monadic stability through the Flipper game: a game on a graph played by Flipper, who in each round can complement the edge relation between any pair of vertex subsets, and Connector, who in each round localizes the game to a ball of bounded radius. This is an analog of the Splitter game, which characterizes nowhere dense classes of graphs (Grohe, Kreutzer, and Siebertz, J.ACM'17). We give two different proofs of our main result. The first proof uses tools from model theory, and it exposes an additional property of monadically stable graph classes that is close in spirit to definability of types. Also, as a byproduct, we give an alternative proof of the recent result of Braunfeld and Laskowski (arXiv 2209.05120) that monadic stability for graph classes coincides with existential monadic stability. The second proof relies on the recently introduced notion of flip-wideness (Dreier, Mählmann, Siebertz, and Toruńczyk, ICALP 2023) and provides an efficient algorithm to compute Flipper's moves in a winning strategy.

Jakub Gajarský, Nikolas Mählmann, Rose McCarty, Pierre Ohlmann, Michał Pilipczuk, Wojciech Przybyszewski, Sebastian Siebertz, Marek Sokołowski, Szymon Toruńczyk, 3/14/2025

arXiv:2408.07831v2 Announce Type: replace Abstract: We introduce and study spatiotemporal online allocation with deadline constraints ($\mathsf{SOAD}$), a new online problem motivated by emerging challenges in sustainability and energy. In $\mathsf{SOAD}$, an online player completes a workload by allocating and scheduling it on the points of a metric space $(X, d)$ while subject to a deadline $T$. At each time step, a service cost function is revealed that represents the cost of servicing the workload at each point, and the player must irrevocably decide the current allocation of work to points. Whenever the player moves this allocation, they incur a movement cost defined by the distance metric $d(\cdot,\cdot)$ that captures, e.g., an overhead cost. $\mathsf{SOAD}$ formalizes the open problem of combining general metrics and deadline constraints in the online algorithms literature, unifying problems such as metrical task systems and online search. We propose a competitive algorithm for $\mathsf{SOAD}$ along with a matching lower bound establishing its optimality. Our main algorithm, ST-CLIP, is a learning-augmented algorithm that takes advantage of predictions (e.g., forecasts of relevant costs) and achieves an optimal consistency-robustness trade-off. We evaluate our proposed algorithms in a simulated case study of carbon-aware spatiotemporal workload management, an application in sustainable computing that schedules a delay-tolerant batch compute job on a distributed network of data centers. In these experiments, we show that ST-CLIP substantially improves on heuristic baseline methods.

Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, Prashant Shenoy, 3/14/2025

arXiv:2503.09810v1 Announce Type: new Abstract: Counting small subgraphs, referred to as motifs, in large graphs is a fundamental task in graph analysis, extensively studied across various contexts and computational models. In the sublinear-time regime, the relaxed problem of approximate counting has been explored within two prominent query frameworks: the standard model, which permits degree, neighbor, and pair queries, and the strictly more powerful augmented model, which additionally allows for uniform edge sampling. Currently, in the standard model, (optimal) results have been established only for approximately counting edges, stars, and cliques, all of which have a radius of one. This contrasts sharply with the state of affairs in the augmented model, where algorithmic results (some of which are optimal) are known for any input motif, leading to a disparity which we term the ``scope gap'' between the two models. In this work, we make significant progress in bridging this gap. Our approach draws inspiration from recent advancements in the augmented model and utilizes a framework centered on counting by uniform sampling, thus allowing us to establish new results in the standard model and simplify previous results. In particular, our first, and main, contribution is a new algorithm in the standard model for approximately counting any Hamiltonian motif in sublinear time. Our second contribution is a variant of our algorithm that enables nearly uniform sampling of these motifs, a capability previously limited in the standard model to edges and cliques. Our third contribution is to introduce even simpler algorithms for stars and cliques by exploiting their radius-one property. As a result, we simplify all previously known algorithms in the standard model for stars (Gonen, Ron, Shavitt (SODA 2010)), triangles (Eden, Levi, Ron, Seshadhri (FOCS 2015)) and cliques (Eden, Ron, Seshadhri (STOC 2018)).

Talya Eden, Reut Levi, Dana Ron, Ronitt Rubinfeld, 3/14/2025
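
The "counting by uniform sampling" principle is easiest to see for edges, the radius-one motif with the simplest estimator: sample uniform vertices, query degrees, and rescale. A naive sketch follows; the cited algorithms add the bucketing and smoothing needed to make such estimators reliable on skewed degree sequences.

```python
# Naive sketch in the standard query model: estimate m = |E| from uniform
# vertex samples and degree queries, using E[deg(v)] = 2m/n.
import random

def estimate_edges(n, degree_query, samples=10000, seed=0):
    rng = random.Random(seed)
    total = sum(degree_query(rng.randrange(n)) for _ in range(samples))
    return n * total / (2 * samples)

# usage with an adjacency-list graph standing in for the query oracle
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(estimate_edges(len(adj), lambda v: len(adj[v])))   # ~3 edges
```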

arXiv:2503.10161v1 Announce Type: new Abstract: A minimal perfect hash function (MPHF) maps a set of n keys to unique positions {1, ..., n}. Representing an MPHF requires at least 1.44 bits per key. ShockHash is a technique to construct an MPHF and requires just slightly more space. It gives each key two pseudo-random candidate positions. If each key can be mapped to one of its two candidate positions such that there is exactly one key mapped to each position, then an MPHF is found. If not, ShockHash repeats the process with a new set of random candidate positions. ShockHash has to store how many repetitions were required and, for each key, to which of the two candidate positions it is mapped. However, when a given set of candidate positions can be used as an MPHF, there is typically not just one but multiple ways of mapping the keys to one of their candidate positions such that the mapping results in an MPHF. This redundancy makes up the majority of the remaining space overhead in ShockHash. In this paper, we present MorphisHash, a technique that almost completely eliminates this redundancy. Our theoretical result is that MorphisHash saves $\Theta(\ln n)$ bits compared to ShockHash. This corresponds to a factor of 20 less space overhead in practice. The technique we use to accomplish this may be of more general interest for compressing data structures.

Stefan Hermann, 3/14/2025
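
A toy re-implementation of the ShockHash loop described above, with brute force over the choice bits standing in for the cuckoo-style matching test; note that explicitly storing one choice bit per key is exactly the redundancy MorphisHash compresses away. The hash function and key set are arbitrary choices.

```python
# Toy ShockHash-style MPHF: try seeds until every key can be matched to one
# of its two pseudo-random candidate positions bijectively, then store the
# seed plus one choice bit per key. (Illustrative, not the paper's code.)
import hashlib, itertools

def candidates(key, seed, n):
    h = hashlib.blake2b(f"{seed}:{key}".encode(), digest_size=8).digest()
    v = int.from_bytes(h, "big")
    return v % n, (v // n) % n

def build_mphf(keys):
    n = len(keys)
    for seed in itertools.count():
        # brute-force the choice bits; real ShockHash tests feasibility
        # with a matching/cuckoo procedure instead
        for bits in itertools.product((0, 1), repeat=n):
            pos = [candidates(k, seed, n)[b] for k, b in zip(keys, bits)]
            if len(set(pos)) == n:          # bijective onto {0, ..., n-1}
                return seed, dict(zip(keys, bits))

def query(key, seed, bits, n):
    return candidates(key, seed, n)[bits[key]]

keys = ["apple", "pear", "plum", "fig"]
seed, bits = build_mphf(keys)
print({k: query(k, seed, bits, len(keys)) for k in keys})
```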

arXiv:2503.09762v1 Announce Type: new Abstract: We study a centralized discrete-time dynamic two-way matching model with finitely many agent types. Agents arrive stochastically over time and join their type-dedicated queues waiting to be matched. We focus on state-independent greedy policies that achieve constant regret at all times by making matching decisions based solely on agent availability across types, rather than requiring complete queue-length information. Such policies are particularly appealing for life-saving applications such as kidney exchange, as they require less information and provide more transparency compared to state-dependent policies. First, for acyclic matching networks, we analyze a deterministic priority policy proposed by Kerimov et al. [2023] that follows a static priority order over matches. We derive the first explicit regret bound in terms of the general position gap (GPG) parameter $\epsilon$, which measures the distance of the fluid relaxation from degeneracy. Second, for general two-way matching networks, we design a randomized state-independent greedy policy that achieves constant regret with optimal scaling $O(\epsilon^{-1})$, matching the existing lower bound established by Kerimov et al. [2024].

S\"uleyman Kerimov, Mingwei Yang, Sophie H. Yu3/14/2025

arXiv:2503.05934v2 Announce Type: replace Abstract: Recent work by Google DeepMind introduced assembly-optimized sorting networks that achieve faster performance for small fixed-size arrays (sizes 3 to 8). In this research, we investigate the integration of these networks as base cases in classical divide-and-conquer sorting algorithms, specifically Merge Sort and Quick Sort, to leverage these efficient sorting networks for small subarrays generated during the recursive process. We conducted benchmarks with 11 different optimization configurations and compared them to classical Merge Sort and Quick Sort. We tested the configurations with random, sorted, and nearly sorted arrays. Our optimized Merge Sort, using a configuration of three sorting networks (sizes 6, 7, and 8), achieves at least 1.5x speedup for random and nearly sorted arrays, and at least 2x speedup for sorted arrays, in comparison to classical Merge Sort. This optimized Merge Sort surpasses both classical Quick Sort and similarly optimized Quick Sort variants when sorting random arrays of size 10,000 and larger. When comparing our optimized Quick Sort to classical Quick Sort, we observe a 1.5x speedup using the 3-to-5 configuration on sorted arrays of size 10,000. The 6-to-8 configuration maintains a consistent 1.5x improvement across sorted arrays from 25,000 to 1 million elements. Our findings demonstrate the potential of integrating AI-optimized sorting networks to enhance the performance of classical sorting algorithms.

Anas Gamal Aly, Anders E. Jensen, Hala ElAarag, 3/14/2025
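
The integration is straightforward to sketch: run ordinary merge sort, but hand subarrays at or below the network sizes to a fixed comparator sequence. The comparator lists below are standard optimal networks for sizes 2 to 4 (the paper plugs in DeepMind's assembly-level networks for sizes 3 to 8); the cutoff is an illustrative choice.

```python
# Merge sort with a sorting-network base case for tiny subarrays.
NETWORKS = {
    2: [(0, 1)],
    3: [(0, 1), (0, 2), (1, 2)],
    4: [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)],
}

def network_sort(a):
    for i, j in NETWORKS[len(a)]:          # fixed compare-exchange sequence
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]
    return a

def merge_sort(a):
    if len(a) <= 1:
        return a
    if len(a) in NETWORKS:                 # base case: sorting network
        return network_sort(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0                   # standard merge
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 3, 8, 1, 9, 2, 7, 4, 6]))
```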

arXiv:2503.05004v1 Announce Type: new Abstract: Global minimum cut is a fundamental combinatorial optimization problem with wide-ranging applications. Often in practice, these problems are solved repeatedly on families of similar or related instances. However, the de facto algorithmic approach is to solve each instance from scratch, discarding information from prior instances. In this paper, we consider how predictions informed by prior instances can be used to warm-start practical minimum cut algorithms. The paper considers the widely used Karger's algorithm and its counterpart, the Karger-Stein algorithm. Given good predictions, we show these algorithms become near-linear time and are robust to erroneous predictions. Both are randomized edge-contraction algorithms. Our natural idea is to probabilistically prioritize the contraction of edges that are unlikely to be in the minimum cut.

Benjamin Moseley, Helia Niaparast, Karan Singh, 3/10/2025
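
The idea in the last sentence can be sketched directly: run Karger's contraction, but draw each edge with probability proportional to a prediction that it avoids the minimum cut, so predicted cut edges tend to survive to the end. The prediction weights here are fabricated for illustration.

```python
# Karger's random contraction, warm-started by predictions: edges believed
# to be outside the minimum cut are contracted preferentially.
import random

def predicted_karger(edges, n, p_not_cut, seed=0):
    rng = random.Random(seed)
    parent = list(range(n))                # union-find over supernodes
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    components = n
    while components > 2:
        (u, v), = rng.choices(edges, weights=[p_not_cut[e] for e in edges], k=1)
        ru, rv = find(u), find(v)
        if ru != rv:                       # contract edge (u, v)
            parent[ru] = rv
            components -= 1
    return [e for e in edges if find(e[0]) != find(e[1])]   # candidate cut

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # min cut = {(2, 3)}
p_not_cut = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0, (2, 3): 0.05}
print(predicted_karger(edges, 4, p_not_cut))
```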

arXiv:2503.05695v1 Announce Type: new Abstract: In this work, we revisit well-studied problems of fair allocation of indivisible items among agents with general, non-monotone valuations. We explore the existence and efficient computation of allocations that satisfy either fairness or equity constraints. The fairness notions we consider ensure that each agent values her bundle at least as much as others', allowing for (any or some) item removal, while the equity guarantees roughly equal valuations among agents, with similar adjustments. For objective valuations where items are classified as either goods or chores, we present a pseudo-polynomial local-search algorithm computing an ``equitable-up-to-any-good-or-any-chore'' (EQX*) allocation, a weaker version of an ``equitable-up-to-any-item'' (EQX) allocation. Additionally, we provide a polynomial-time greedy algorithm that computes an ``equitable-up-to-one-item'' (EQ1) allocation, and a similar algorithm returning an EQX* allocation when the valuations are also additive. As a key technical contribution of this work, by leveraging fixed-point theorems (such as Sperner's Lemma and its variants), we establish the existence of ``equitable-up-to-one-good-and-one-chore'' (EQ1*) and ``envy-free-up-to-one-good-and-one-chore'' (EF1*) allocations for non-negative (and possibly non-objective and non-monotone) valuations. This holds even when items are arranged in a path and bundles must form connected sub-paths. Additionally, we present a polynomial-time dynamic-programming algorithm that computes an EQ1* allocation. Finally, we extend the EF1* and EQ1* results to non-positive valuations using a novel multi-coloring variant of Sperner's lemma, a combinatorial result of independent interest. For monotone non-increasing valuations and path-connected bundles, this implies the existence of EF1 and EQ1 allocations, with EQ1 allocations being efficiently computable.

Vittorio Bilò, Martin Loebl, Cosimo Vinci, 3/10/2025
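
For the special case of additive valuations over goods, a standard greedy already achieves EQ1 and makes the notion concrete: always hand the next good to a currently poorest agent, so any agent's deficit is bounded by the last good its rival received. The paper's results go far beyond this case (non-monotone, non-objective valuations, connectivity constraints); this is only the simplest instance.

```python
# Greedy EQ1 for additive valuations over goods: give each good to the
# agent with the currently smallest bundle value.
def eq1_greedy(values):                  # values[i][g]: agent i's value for good g
    n, m = len(values), len(values[0])
    bundles = [[] for _ in range(n)]
    totals = [0.0] * n
    for g in range(m):                   # goods in arbitrary order
        i = min(range(n), key=lambda i: totals[i])   # a poorest agent
        bundles[i].append(g)
        totals[i] += values[i][g]
    return bundles, totals

values = [[5, 1, 3, 2], [4, 2, 2, 2]]
print(eq1_greedy(values))
```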

arXiv:2503.05661v1 Announce Type: cross Abstract: Two graph parameters are said to be coarsely equivalent if they are within constant factors of each other for every graph $G$. Recently, several graph parameters were shown to be coarsely equivalent to tree-length. Recall that the length of a tree-decomposition ${\cal T}(G)$ of a graph $G$ is the largest diameter of a bag in ${\cal T}(G)$, and the tree-length $tl(G)$ of $G$ is the minimum of the length over all tree-decompositions of $G$. Similarly, the length of a path-decomposition ${\cal P}(G)$ of a graph $G$ is the largest diameter of a bag in ${\cal P}(G)$, and the path-length $pl(G)$ of $G$ is the minimum of the length over all path-decompositions of $G$. In this paper, we present several graph parameters that are coarsely equivalent to path-length. Among other results, we show that the path-length of a graph $G$ is small if and only if one of the following equivalent conditions is true: (a) $G$ can be embedded into an unweighted caterpillar tree (equivalently, into a graph of path-width one) with a small additive distortion; (b) there is a constant $r\ge 0$ such that, for every triple of vertices $u,v,w$ of $G$, the disk of radius $r$ centered at one of them intercepts all paths connecting the other two; (c) $G$ has a $k$-dominating shortest path with small $k\ge 0$; (d) $G$ has a $k'$-dominating pair with small $k'\ge 0$; (e) some power $G^\mu$ of $G$ is an AT-free (or even a cocomparability) graph for a small integer $\mu\ge 0$.

Feodor F. Dragan, Ekkehard Köhler, 3/10/2025

arXiv:2503.05596v1 Announce Type: new Abstract: String matching is a fundamental problem in computer science, with critical applications in text retrieval, bioinformatics, and data analysis. Among the numerous solutions that have emerged for this problem in recent decades, bit-parallelism has significantly enhanced their practical efficiency, leading to the development of several optimized approaches for both exact and approximate string matching. However, their potential in quantum computing remains largely unexplored. This paper presents a novel pathway that not only translates bit-parallel string matching algorithms into the quantum framework but also enhances their performance to achieve a quadratic speedup through Grover's search. By embedding quantum search within a bit-parallel model, we reduce the time complexity of string matching, establishing a structured pathway for transforming classical algorithms into quantum solutions with provable computational advantages. Beyond exact matching, this technique offers a foundation for tackling a wide range of non-standard string matching problems, opening new avenues for efficient text searching in the quantum era. To demonstrate the simplicity and adaptability of the technique presented in this paper, we apply this translation and adaptation process to two landmark bit-parallel algorithms: Shift-And for exact pattern matching and Shift-Add for approximate string matching with up to k errors.

Simone Faro, Arianna Pavone, Caterina Viola, 3/10/2025
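
For readers unfamiliar with the classical algorithm being lifted, here is bit-parallel Shift-And in full: the pattern is compiled into per-character bitmasks, and the nondeterministic automaton state advances with one shift and one AND per text character. This is the textbook classical algorithm, shown as context for the quantum construction, not the paper's quantum circuit.

```python
# Classical bit-parallel Shift-And exact matching.
def shift_and(text, pattern):
    m = len(pattern)
    mask = {}
    for i, c in enumerate(pattern):      # bit i of mask[c] set iff pattern[i] == c
        mask[c] = mask.get(c, 0) | (1 << i)
    state, accept, hits = 0, 1 << (m - 1), []
    for j, c in enumerate(text):
        state = ((state << 1) | 1) & mask.get(c, 0)
        if state & accept:
            hits.append(j - m + 1)       # occurrence ending at position j
    return hits

print(shift_and("abracadabra", "abra"))  # [0, 7]
```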

arXiv:1704.08000v3 Announce Type: replace Abstract: We say that an algorithm is stable if small changes in the input result in small changes in the output. This kind of algorithm stability is particularly relevant when analyzing and visualizing time-varying data. Stability in general plays an important role in a wide variety of areas, such as numerical analysis, machine learning, and topology, but is poorly understood in the context of (combinatorial) algorithms. In this paper we present a framework for analyzing the stability of algorithms. We focus in particular on the trade-off between the stability of an algorithm and the quality of the solution it computes. Our framework allows for three types of stability analysis with increasing degrees of complexity: event stability, topological stability, and Lipschitz stability. In addition, we need to refine the model of an algorithm based on how it interacts with the time-varying data, for which we consider several options. We demonstrate the use of our stability framework by applying it to kinetic Euclidean minimum spanning trees.

Wouter Meulemans, Bettina Speckmann, Kevin Verbeek, Jules Wulms, 3/10/2025

arXiv:2503.05312v1 Announce Type: new Abstract: A proper vertex coloring of a connected graph $G$ is called an odd coloring if, for every vertex $v$ in $G$, there exists a color that appears an odd number of times in the open neighborhood of $v$. The minimum number of colors required to obtain an odd coloring of $G$ is called the odd chromatic number of $G$, denoted by $\chi_{o}(G)$. Determining $\chi_o(G)$ is known to be NP-hard. Given a graph $G$ and an integer $k$, the Odd Coloring problem is to decide whether $\chi_o(G)$ is at most $k$. In this paper, we study the parameterized complexity of the problem, particularly with respect to structural graph parameters. We obtain the following results:
- We prove that the problem admits a polynomial kernel when parameterized by the distance to clique.
- We show that the problem cannot have a polynomial kernel when parameterized by the vertex cover number unless $\mathsf{NP} \subseteq \mathsf{coNP/poly}$.
- We show that the problem is fixed-parameter tractable when parameterized by distance to cluster, distance to co-cluster, or neighborhood diversity.
- We show that the problem is $\mathsf{W[1]}$-hard when parameterized by clique-width.
Finally, we study the complexity of the problem on restricted graph classes. We show that it can be solved in polynomial time on cographs and split graphs but remains NP-complete on certain subclasses of bipartite graphs.

Sriram Bhyravarapu, Swati Kumari, I. Vinod Reddy, 3/10/2025
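
The definition is easy to turn into a brute-force reference implementation, useful for sanity-checking small cases: try every coloring with $k$ colors, and accept if it is proper and every vertex sees some color an odd number of times in its open neighborhood. Exponential and purely illustrative, not one of the paper's algorithms.

```python
# Brute-force odd chromatic number: smallest k admitting a proper coloring
# where every vertex sees some color an odd number of times among neighbors.
from itertools import product
from collections import Counter

def is_odd_coloring(adj, col):
    for v, nbrs in adj.items():
        if any(col[v] == col[u] for u in nbrs):          # must be proper
            return False
        counts = Counter(col[u] for u in nbrs)
        if nbrs and not any(c % 2 for c in counts.values()):
            return False                                  # no odd color seen
    return True

def odd_chromatic_number(adj):
    vs = list(adj)
    for k in range(1, len(vs) + 1):
        for colors in product(range(k), repeat=len(vs)):
            if is_odd_coloring(adj, dict(zip(vs, colors))):
                return k

C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}        # 4-cycle
print(odd_chromatic_number(C4))                           # 4
```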