cs.CG
arXiv:2412.20317v2 Announce Type: replace Abstract: Graph drawing is a fundamental task in information visualization, with the Fruchterman--Reingold (FR) force model being one of the most popular choices. We can interpret this visualization task as a continuous optimization problem, which can be solved using the FR algorithm, the original algorithm for this force model, or the L-BFGS algorithm, a quasi-Newton method. However, both algorithms suffer from twist problems and are computationally expensive per iteration, which makes achieving high-quality visualizations for large-scale graphs challenging. In this research, we propose a new initial placement based on stochastic coordinate descent to accelerate the optimization process. We first reformulate the problem as a discrete optimization problem on a hexagonal lattice and then iteratively update a randomly selected vertex along the coordinate Newton direction. The FR or L-BFGS algorithm can then be used to obtain the final placement. We demonstrate the effectiveness of our proposed approach through experiments, highlighting the potential of coordinate descent methods for graph drawing tasks. Additionally, we suggest combining our method with other graph drawing techniques for further improvement. We also discuss the relationship between our proposed method and broader graph-related applications.
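For readers new to the FR model, the following minimal sketch (ours, not the authors' coordinate-descent variant) shows one iteration of the classic Fruchterman--Reingold force computation that this work accelerates; the force laws $f_a(d)=d^2/k$ and $f_r(d)=k^2/d$ are the standard ones, and all names are illustrative:

```python
# One Fruchterman-Reingold iteration: sketch of the classic force model
# this abstract builds on, not the paper's coordinate-descent method.
import math

def fr_step(pos, edges, k, temp):
    """pos: dict vertex -> [x, y]; edges: list of (u, v); k: ideal edge length."""
    disp = {v: [0.0, 0.0] for v in pos}
    verts = list(pos)
    # Repulsive forces between every pair of vertices: f_r(d) = k^2 / d.
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d
            disp[u][0] += dx / d * f; disp[u][1] += dy / d * f
            disp[v][0] -= dx / d * f; disp[v][1] -= dy / d * f
    # Attractive forces along edges: f_a(d) = d^2 / k.
    for u, v in edges:
        dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k
        disp[u][0] -= dx / d * f; disp[u][1] -= dy / d * f
        disp[v][0] += dx / d * f; disp[v][1] += dy / d * f
    # Move each vertex along its net force, capped by the temperature.
    for v in verts:
        dx, dy = disp[v]
        d = math.hypot(dx, dy) or 1e-9
        step = min(d, temp)
        pos[v][0] += dx / d * step
        pos[v][1] += dy / d * step
```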
arXiv:2501.00207v1 Announce Type: new Abstract: Given a set $P$ of $n$ points in the plane, its unit-disk graph $G(P)$ is a graph with $P$ as its vertex set such that two points of $P$ are connected by an edge if their (Euclidean) distance is at most $1$. We consider several classical problems on $G(P)$ in a special setting when points of $P$ are in convex position. These problems are all NP-hard in the general case. We present efficient algorithms for these problems under the convex position assumption. The considered problems include the following: finding a minimum weight dominating set in $G(P)$, the discrete $k$-center problem for $P$, finding a maximum weight independent set in $G(P)$, the dispersion problem for $P$, and several of their variations. For some of these problems, our algorithms improve the previously best results, while for others, our results provide the first known solutions.
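As context, the unit-disk graph itself is straightforward to build; a naive $O(n^2)$ construction (illustrative only; the paper's algorithms exploit convex position rather than explicit graph construction) looks like:

```python
# Naive O(n^2) construction of the unit-disk graph G(P): two points are
# adjacent iff their Euclidean distance is at most 1.
import math

def unit_disk_graph(points):
    n = len(points)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= 1.0:
                adj[i].append(j)
                adj[j].append(i)
    return adj
```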
arXiv:2501.00390v1 Announce Type: new Abstract: In their seminal work, Gauci et al. (2014) studied the fundamental task of aggregation, wherein multiple robots need to gather without an a priori agreed-upon meeting location, using minimal hardware. That paper considered differential-drive robots that are memoryless and unable to compute. Moreover, the robots cannot communicate with one another and are only equipped with a simple sensor that determines whether another robot is directly in front of them. Despite those severe limitations, Gauci et al. introduced a controller and proved mathematically that it aggregates a system of two robots from any initial state. Unfortunately, for larger systems, the same controller aggregates empirically in many cases but not all. Thus, the question of whether a controller exists that aggregates any number of robots has remained open. In this paper, we show that no such controller exists by investigating the geometric structure of controllers. In addition, we refute the two-robot aggregation proof of the paper above and present an alternative controller alongside a simple and rigorous aggregation proof.
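In this setting a controller is essentially a mapping from the one-bit sensor reading to a pair of wheel speeds; a hedged sketch of one differential-drive simulation step under that reading of the model (the controller tuple, parameter values, and all names are our own illustration, not taken from the paper) might look like:

```python
# One simulation step for a memoryless differential-drive robot whose
# controller maps the binary "robot in sight" sensor bit to wheel speeds.
# Hypothetical names and defaults; sketch only.
import math

def step(x, y, theta, sees_robot, controller, wheel_base=0.1, dt=0.01):
    """controller = ((vl0, vr0), (vl1, vr1)): wheel speeds for sensor = 0 / 1."""
    vl, vr = controller[1] if sees_robot else controller[0]
    v = (vl + vr) / 2.0             # forward speed of the robot's midpoint
    omega = (vr - vl) / wheel_base  # angular speed from the wheel differential
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```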
arXiv:2305.00425v2 Announce Type: replace Abstract: An orthogonal drawing is an embedding of a plane graph into a grid. In a seminal work of Tamassia (SIAM Journal on Computing 1987), a simple combinatorial characterization of angle assignments that can be realized as bend-free orthogonal drawings was established, thereby allowing an orthogonal drawing to be described combinatorially by listing the angles of all corners. The characterization reduces the need to consider certain geometric aspects, such as edge lengths and vertex coordinates, and simplifies the task of graph drawing algorithm design. Barth, Niedermann, Rutter, and Wolf (SoCG 2017) established an analogous combinatorial characterization for ortho-radial drawings, which are a generalization of orthogonal drawings to cylindrical grids. The proof of the characterization is existential and does not result in an efficient algorithm. Niedermann, Rutter, and Wolf (SoCG 2019) later addressed this issue by developing quadratic-time algorithms for both testing the realizability of a given angle assignment as an ortho-radial drawing without bends and constructing such a drawing. In this paper, we further improve the time complexity of these tasks to near-linear time. We establish a new characterization for ortho-radial drawings based on the concept of a good sequence. Using the new characterization, we design a simple greedy algorithm for constructing ortho-radial drawings.
arXiv:2405.00246v2 Announce Type: replace Abstract: We investigate approximation algorithms for several fundamental optimization problems on geometric packing. The geometric objects considered are very generic, namely $d$-dimensional convex fat objects. Our main contribution is a versatile framework for designing efficient approximation schemes for classic geometric packing problems. The framework effectively addresses problems such as the multiple knapsack, bin packing, multiple strip packing, and multiple minimum container problems. Furthermore, the framework handles additional problem features, including item multiplicity, item rotation, and additional constraints on the items commonly encountered in packing contexts. The core of our framework lies in formulating the problems as integer programs with a nearly decomposable structure. This approach enables us to obtain well-behaved fractional solutions, which can then be efficiently rounded. By modeling the problems in this manner, our framework offers significant flexibility, allowing it to address a wide range of problems and incorporate additional features. To the best of our knowledge, prior to this work, the known results on approximation algorithms for packing problems were either highly tailored to a single problem or restricted to one class of objects, mainly polygons and hypercubes. In this sense, our framework is the first result with a general-toolbox flavor in the context of approximation algorithms for geometric packing problems. Thus, we believe that our technique is of independent interest and may inspire further work on geometric packing.
arXiv:2501.00120v1 Announce Type: new Abstract: For a set $P$ of $n$ points in the plane and a value $r > 0$, the unit-disk range reporting problem is to construct a data structure so that given any query disk of radius $r$, all points of $P$ in the disk can be reported efficiently. We consider the dynamic version of the problem where point insertions and deletions of $P$ are allowed. The previous best method provides a data structure of $O(n\log n)$ space that supports $O(\log^{3+\epsilon}n)$ amortized insertion time, $O(\log^{5+\epsilon}n)$ amortized deletion time, and $O(\log^2 n/\log\log n+k)$ query time, where $\epsilon$ is an arbitrarily small positive constant and $k$ is the output size. In this paper, we improve the query time to $O(\log n+k)$ while keeping other complexities the same as before. A key ingredient of our approach is a shallow cutting algorithm for circular arcs, which may be interesting in its own right. A related problem that can also be solved by our techniques is the dynamic unit-disk range emptiness queries: Given a query unit disk, we wish to determine whether the disk contains a point of $P$. The best previous work can maintain $P$ in a data structure of $O(n)$ space that supports $O(\log^2 n)$ amortized insertion time, $O(\log^4n)$ amortized deletion time, and $O(\log^2 n)$ query time. Our new data structure also uses $O(n)$ space but can support each update in $O(\log^{1+\epsilon} n)$ amortized time and support each query in $O(\log n)$ time.
arXiv:2412.18452v1 Announce Type: cross Abstract: The Persistent Homology Transform (PHT) was introduced in the field of Topological Data Analysis about 10 years ago, and has since been proven to be a very powerful descriptor of Euclidean shapes. The PHT consists of scanning a shape from all possible directions $v\in S^{n-1}$ and then computing the persistent homology of sublevel set filtrations of the respective height functions $h_v$; this results in a sufficient and continuous descriptor of Euclidean shapes. We introduce a generalisation of the PHT in which we consider arbitrary parameter spaces and sublevel sets with respect to any function. In particular, we study transforms, defined on the Grassmannian $\mathbb{A}\mathbb{G}(m,n)$ of affine subspaces of $\mathbb{R}^n$, that allow scanning a shape by probing it with all possible affine $m$-dimensional subspaces $P\subset \mathbb{R}^n$, for fixed dimension $m$, and by computing persistent homology of sublevel set filtrations of the function $\mathrm{dist}(\cdot, P)$ encoding the distance from the flat $P$. We call such transforms "distance-from-flat" PHTs. We show that these transforms are injective and continuous and that they provide computational advantages over the classical PHT. In particular, we show that it is enough to compute homology only in degrees up to $m-1$ to obtain injectivity; for $m=1$ this provides a very powerful and computationally advantageous tool for examining shapes, which, in previous work by a subset of the authors, was shown to significantly outperform state-of-the-art neural networks for shape classification tasks.
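As a worked formula (our notation, not the paper's): for an affine $m$-flat given by a base point and an orthonormal basis, the filtration function $\mathrm{dist}(\cdot, P)$ is the norm of the orthogonal component:

```latex
% Hedged worked formula for the distance-from-flat filtration function,
% for an m-flat P with base point p_0 and orthonormal basis A in R^{n x m}.
\[
  P = \{\, p_0 + A t : t \in \mathbb{R}^m \,\}, \qquad
  \mathrm{dist}(x, P) = \bigl\| (I - A A^{\top})(x - p_0) \bigr\|_2 ,
\]
% i.e. the length of the component of x - p_0 orthogonal to the flat;
% for m = 1 this is the usual point-to-line distance.
```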
arXiv:2412.18515v1 Announce Type: cross Abstract: We introduce a new algorithm for finding robust circular coordinates on data that is expected to exhibit recurrence, such as that which appears in neuronal recordings of C. elegans. Techniques exist to create circular coordinates on a simplicial complex from a dimension 1 cohomology class, and these can be applied to the Rips complex of a dataset when it has a prominent class in its dimension 1 cohomology. However, this approach is known to be extremely sensitive to uneven sampling density. Our algorithm comes with a new method to correct for uneven sampling density, adapting our prior work on averaging coordinates in manifold learning. We use rejection sampling to correct for inhomogeneous sampling and then apply Procrustes matching to align and average the subsamples. In addition to providing a more robust coordinate than other approaches, this subsampling-and-averaging approach is also more efficient. We validate our technique on both synthetic data sets and neuronal activity recordings. Our results reveal a topological model of neuronal trajectories for C. elegans that is constructed from loops in which different regions of the brain state space can be mapped to specific and interpretable macroscopic behaviors in the worm.
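A hedged sketch of the density-correction idea, using a $k$-nearest-neighbor density estimate and an acceptance probability inversely proportional to density (the estimator and constants are our illustration, not the paper's exact procedure):

```python
# Density-equalizing rejection sampling: estimate local density from the
# k-th nearest-neighbor distance, then thin dense regions by accepting each
# point with probability inversely proportional to its estimated density.
import math, random

def kth_nn_distance(points, i, k):
    d = sorted(math.dist(points[i], p) for j, p in enumerate(points) if j != i)
    return d[k - 1]

def rejection_subsample(points, k=5, seed=0):
    rng = random.Random(seed)
    # In the plane, density scales like 1 / r_k^2 (up to constants).
    dens = [1.0 / max(kth_nn_distance(points, i, k), 1e-12) ** 2
            for i in range(len(points))]
    dmin = min(dens)
    # Accept with probability dmin / density, so the sparsest region keeps all.
    return [p for p, d in zip(points, dens) if rng.random() < dmin / d]
```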
arXiv:2308.00334v3 Announce Type: replace Abstract: We study the Art Gallery Problem under $k$-hop visibility in polyominoes. In this visibility model, two unit squares of a polyomino can see each other if and only if the shortest path between the respective vertices in the dual graph of the polyomino has length at most $k$. In this paper, we show that the VC dimension of this problem is $3$ in simple polyominoes, and $4$ in polyominoes with holes. Furthermore, we provide a reduction from Planar Monotone 3Sat, thereby showing that the problem is NP-complete even in thin polyominoes (i.e., polyominoes that do not contain a $2\times 2$ block of cells). Complementarily, we present a linear-time $4$-approximation algorithm for simple $2$-thin polyominoes (which do not contain a $3\times 3$ block of cells) for all $k\in \mathbb{N}$.
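The $k$-hop visibility predicate itself can be computed directly by breadth-first search in the dual grid graph, as in this sketch:

```python
# k-hop visibility from the abstract, computed directly: BFS in the dual
# graph of the polyomino (cells adjacent iff they share an edge) and test
# whether the shortest path between two cells has length at most k.
from collections import deque

def k_hop_visible(cells, a, b, k):
    """cells: set of (x, y) unit squares of the polyomino; a, b: cells."""
    if a == b:
        return True
    dist = {a: 0}
    queue = deque([a])
    while queue:
        x, y = queue.popleft()
        if dist[(x, y)] == k:
            continue  # no need to expand beyond k hops
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in cells and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                if nxt == b:
                    return True
                queue.append(nxt)
    return False
```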
arXiv:2412.18005v1 Announce Type: cross Abstract: One common function class in machine learning is the class of ReLU neural networks. ReLU neural networks induce a piecewise linear decomposition of their input space called the canonical polyhedral complex. It has previously been established that it is decidable whether a ReLU neural network is piecewise linear Morse. In order to expand computational tools for analyzing the topological properties of ReLU neural networks, and to harness the strengths of discrete Morse theory, we introduce a schematic for translating between a given piecewise linear Morse function (e.g. parameters of a ReLU neural network) on a canonical polyhedral complex and a compatible (``relatively perfect'') discrete Morse function on the same complex. Our approach is constructive, producing an algorithm that can be used to determine if a given vertex in a canonical polyhedral complex corresponds to a piecewise linear Morse critical point. Furthermore, we provide an algorithm for constructing a consistent discrete Morse pairing on cells in the canonical polyhedral complex which contain this vertex. We additionally provide some new realizability results with respect to sublevel set topology in the case of shallow ReLU neural networks.
arXiv:2404.03981v2 Announce Type: replace Abstract: We study the geometric knapsack problem in which we are given a set of $d$-dimensional objects (each with associated profits) and the goal is to find the maximum profit subset that can be packed non-overlappingly into a given $d$-dimensional (unit hypercube) knapsack. Even if $d=2$ and all input objects are disks, this problem is known to be \textsf{NP}-hard [Demaine, Fekete, Lang, 2010]. In this paper, we give polynomial time $(1+\varepsilon)$-approximation algorithms for the following types of input objects in any constant dimension $d$: - disks and hyperspheres, - a class of fat convex polygons that generalizes regular $k$-gons for $k\ge 5$ (formally, polygons with a constant number of edges, whose lengths are in a bounded range, and in which each angle is strictly larger than $\pi/2$), - arbitrary fat convex objects that are sufficiently small compared to the knapsack. We remark that in our \textsf{PTAS} for disks and hyperspheres, we output the computed set of objects, but for $O_\varepsilon(1)$ of them, we determine their coordinates only up to an exponentially small error. However, it is unclear whether there always exists a $(1+\varepsilon)$-approximate solution that uses only rational coordinates for the disks' centers. We leave this as an open problem that is related to well-studied geometric questions in the realm of circle packing.
arXiv:2412.16962v1 Announce Type: new Abstract: The 2x2 space-filling curve is a type of generalized space-filling curve whose basic unit is a "U-shape" traversing a 2x2 grid. In this work, we propose a universal framework for constructing general 2x2 curves where self-similarity is not strictly required. The construction is based on a novel set of grammars that define the expansion of curves from level 0 (a single point) to level 1 (units in U-shapes), which ultimately determines all $36 \times 2^k$ possible forms of curves on any level $k$ initialized from single points. We further developed an encoding system in which each unique form of the curve is associated with a specific combination of an initial seed and a sequence of codes that sufficiently describes both the global and local structures of the curve. We demonstrated that this encoding system is a powerful tool for studying 2x2 curves and we established comprehensive theoretical foundations from the following three key perspectives: 1) We provided a deterministic encoding for any unit on any level and position on the curve, enabling the study of curve generation across arbitrary parts of the curve and ranges of iterations; 2) We gave deterministic encodings for various curve transformations, including rotations, reflections and reversals; 3) We provided deterministic forms of families of curves exhibiting specific structures, including homogeneous curves, curves with identical shapes, with partially identical shapes and with completely distinct shapes. We also explored families of recursive curves, subunit identically shaped curves, symmetric curves and closed curves. Finally, we proposed a method to calculate the location of any point on the curve arithmetically, in time linear in the level of the curve.
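For orientation, the canonical self-similar member of this U-shape family is the Hilbert curve, and the classic index-to-coordinate conversion below runs in time linear in the level, the same complexity the abstract claims for arbitrary 2x2 curves; this is the standard textbook algorithm, not the authors' encoding system:

```python
# Standard Hilbert-curve index -> (x, y) conversion on an n x n grid with
# n = 2**k; runs in O(k) time, i.e. linear in the level of the curve.
def d2xy(n, d):
    """Map index d in [0, n*n) to grid coordinates (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx              # translate into the current quadrant
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```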
arXiv:2412.17138v1 Announce Type: new Abstract: In this paper, we contribute a proof that minimum radius balls over metric spaces with the Heine-Borel property are always LP-type. Additionally, we prove that weak metric spaces, those without symmetry, also have this property if we fix the direction in which we take their distances from the centers of the balls. We use this to prove that the minimum radius ball problem is LP-type in the Hilbert and Thompson metrics and the Funk weak metric. In doing so, we contribute a proof that the topology induced by the Thompson metric coincides with that of the Hilbert metric. We provide explicit primitives for computing the minimum radius ball in the Hilbert metric.
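For reference, the metrics involved have standard definitions on a bounded convex domain $\Omega$ (our notation, stated for context):

```latex
% Standard definitions. For x \neq y in a bounded convex domain \Omega, let
% the ray from x through y exit \Omega at b, and the opposite ray exit at a.
% The Funk weak metric and the Hilbert metric are then
\[
  F(x, y) = \log \frac{\lVert x - b \rVert}{\lVert y - b \rVert},
  \qquad
  d_H(x, y) = \tfrac{1}{2}\bigl( F(x, y) + F(y, x) \bigr)
            = \tfrac{1}{2}
              \log \frac{\lVert a - y \rVert \, \lVert b - x \rVert}
                        {\lVert a - x \rVert \, \lVert b - y \rVert}.
\]
% F is asymmetric (hence a "weak" metric), which is why the paper must fix
% the direction in which distances are taken from the centers of the balls.
```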
arXiv:2412.17382v1 Announce Type: cross Abstract: Recently, two extraordinary results on aperiodic monotiles have been obtained in two different settings. One is a family of aperiodic monotiles in the plane discovered by Smith, Myers, Kaplan and Goodman-Strauss in 2023, where rotation is allowed, breaking the 50-year-old record (aperiodic sets of two tiles found by Roger Penrose in the 1970s) on the minimum size of aperiodic sets in the plane. The other is the existence of an aperiodic monotile in the translational tiling of $\mathbb{Z}^n$ for some huge dimension $n$ proved by Greenfeld and Tao. This disproves the long-standing periodic tiling conjecture. However, it is known that there is no aperiodic monotile for translational tiling of the plane. The smallest size of known aperiodic sets for translational tilings of the plane is $8$, which was discovered more than $30$ years ago by Ammann. In this paper, we prove that translational tiling of the plane with a set of $7$ polyominoes is undecidable. As a consequence of the undecidability, we have constructed a family of aperiodic sets of size $7$ for the translational tiling of the plane. This breaks the 30-year-old record of Ammann.
arXiv:2301.04350v3 Announce Type: replace Abstract: Given a set of disks in the plane, the goal of the problem studied in this paper is to choose a subset of these disks such that none of its members contains the centre of any other. Each disk not in this subset must be merged with one of its nearby disks, increasing the latter's radius. This problem has applications in labelling rotating maps and in visualizing the distribution of entities in static maps. We prove that this problem is NP-hard. We also present an ILP formulation for this problem, and a polynomial-time algorithm for the special case in which the centres of all disks are on a line.
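The selection constraint can be checked directly; a minimal sketch of the feasibility predicate (disks as centre/radius triples; the merge and radius-growth step is not modeled):

```python
# Feasibility check for the constraint in the abstract: a chosen subset is
# valid iff no member disk contains the centre of any other member.
import math

def independent(disks):
    """disks: list of (cx, cy, r) triples."""
    for i, (xi, yi, ri) in enumerate(disks):
        for j, (xj, yj, _) in enumerate(disks):
            if i != j and math.dist((xi, yi), (xj, yj)) <= ri:
                return False  # disk i contains (or touches) centre of disk j
    return True
```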
arXiv:2307.15130v5 Announce Type: replace Abstract: Data consisting of a graph with a function mapping into $\mathbb{R}^d$ arise in many data applications, encompassing structures such as Reeb graphs, geometric graphs, and knot embeddings. As such, the ability to compare and cluster such objects is required in a data analysis pipeline, leading to a need for distances between them. In this work, we study the interleaving distance on discretizations of these objects, called mapper graphs when $d=1$, where functor representations of the data can be compared by finding pairs of natural transformations between them. However, in many cases, computation of the interleaving distance is NP-hard. For this reason, we take inspiration from recent work by Robinson to find quality measures for families of maps that do not rise to the level of a natural transformation, called assignments. We then endow the functor images with the extra structure of a metric space and define a loss function which measures how far an assignment is from making the required diagrams of an interleaving commute. Finally, we show that the loss function can be computed in polynomial time for a given assignment. We believe this idea is both powerful and translatable, with the potential to provide approximations and bounds on interleavings in a broad array of contexts.
arXiv:2412.15564v1 Announce Type: new Abstract: We introduce a novel offset meshing approach that can robustly handle 3D surface meshes with arbitrary geometry and topology configurations, while nicely capturing the sharp features of the original input for both inward and outward offsets. In contrast to existing approaches, which focus on constant-radius offsets, to the best of our knowledge we propose the first-ever solution for mitered offsets that can well preserve sharp features. Our method is designed around several core principles: 1) explicitly generating the offset vertices and triangles with feature-capturing energy and constraints; 2) prioritizing the generation of the offset geometry before establishing its connectivity; 3) employing exact algorithms in critical pipeline steps for robustness, while balancing the use of floating-point computations for efficiency; 4) applying various conservative speed-up strategies, including early rejection of computations that do not contribute to the final output. Our approach further uniquely supports variable offset distances on input surface elements, offering a wider range of practical applications compared to conventional methods. We have evaluated our method on a subset of Thingi10K, containing models with diverse topological and geometric complexities created by practitioners in various fields. Our results demonstrate the superiority of our approach over current state-of-the-art methods in terms of element count, feature preservation, and non-uniform offset distances of the resulting offset mesh surfaces, marking a significant advancement in the field.
arXiv:2412.15567v1 Announce Type: new Abstract: We show the following problems are in $\textsf{P}$: 1. The contiguous art gallery problem -- a variation of the art gallery problem where each guard can protect a contiguous interval along the boundary of a simple polygon. This was posed at the open problem session at CCCG '24 by Thomas C. Shermer. 2. The polygon separation problem for line segments -- For two sets of line segments $S_1$ and $S_2$, find a minimum-vertex convex polygon $P$ that completely contains $S_1$ and does not contain or cross any segment of $S_2$. 3. Minimizing the number of half-plane cuts to carve a 3D polytope. To accomplish this, we study the analytic arc cover problem -- an interval set cover problem over the unit circle with infinitely many implicitly-defined arcs, given by a function.
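All three results reduce to the analytic arc cover problem; for intuition, here is a hedged greedy sketch for its finite cousin, covering the circle with a minimum number of explicitly given arcs (the paper's version has infinitely many implicitly defined arcs, which this sketch does not attempt):

```python
# Greedy minimum circle cover by explicitly given arcs. Arcs are
# (start, length) pairs in radians with 0 < length < 2*pi, taken mod 2*pi.
# Naive O(n^2)-flavored sketch: seed with each arc, then repeatedly pick
# the arc covering the frontier that extends it farthest.
import math

TWO_PI = 2 * math.pi

def cover_circle(arcs):
    """Return the minimum number of arcs covering the circle, or None."""
    best = None
    for s0, l0 in arcs:             # seed: some arc must cover the point s0
        f = s0 + l0                 # frontier, unwrapped
        target = s0 + TWO_PI        # must reach all the way around
        count = 1
        while f < target:
            # Farthest extension among arcs covering the frontier angle.
            ext = max((l - ((f - s) % TWO_PI) for s, l in arcs
                       if (f - s) % TWO_PI <= l), default=0.0)
            if ext <= 0:
                count = None        # frontier cannot be advanced: no cover
                break
            f += ext
            count += 1
        if count is not None and (best is None or count < best):
            best = count
    return best
```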
arXiv:2404.06376v2 Announce Type: replace Abstract: Let $P$ be a $k$-colored set of $n$ points in the plane, $4 \leq k \leq n$. We study the problem of deciding if $P$ contains a subset of four points of different colors such that its Rectilinear Convex Hull has positive area. We show this problem to be equivalent to deciding if there exists a point $c$ in the plane such that each of the open quadrants defined by $c$ contains a point of $P$, each of them having a different color. We provide an $O(n \log n)$-time algorithm for this problem, where the hidden constant does not depend on $k$; then, we prove that this problem has time complexity $\Omega(n \log n)$ in the algebraic computation tree model. No general position assumptions for $P$ are required.
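For a single fixed candidate point $c$, the equivalent condition is easy to test naively; the sketch below (ours, and far from the paper's $O(n \log n)$ algorithm) classifies points into open quadrants and searches for a system of distinct color representatives:

```python
# Naive check, for one fixed candidate c, of the condition in the abstract:
# each open quadrant around c must contain a point of P, with the four
# chosen points having pairwise different colors. The distinct-color choice
# is a tiny matching, found here by brute-force backtracking.
def witness_at(c, points):
    """points: list of ((x, y), color). True iff c is a valid witness."""
    cx, cy = c
    quad_colors = [set(), set(), set(), set()]
    for (x, y), col in points:
        if x == cx or y == cy:
            continue                 # open quadrants: points on axes excluded
        q = (0 if x > cx else 1) + (0 if y > cy else 2)
        quad_colors[q].add(col)
    # Backtracking search for distinct color representatives per quadrant.
    def extend(i, used):
        if i == 4:
            return True
        return any(extend(i + 1, used | {col})
                   for col in quad_colors[i] - used)
    return extend(0, frozenset())
```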
arXiv:2410.11656v2 Announce Type: replace Abstract: We present a new software package, ``HexOpt,'' for improving the quality of all-hexahedral (all-hex) meshes by maximizing the minimum mixed scaled-Jacobian/Jacobian energy functional and projecting the surface points of the all-hex mesh onto the input triangular mesh. The proposed HexOpt method takes as input a surface triangular mesh and a volumetric all-hex mesh. A constrained optimization problem is formulated to improve mesh quality using a novel function that combines the Jacobian and scaled Jacobian metrics, rectified and scaled to quadratic measures, while preserving the surface geometry. This optimization problem is solved using the augmented Lagrangian (AL) method, where the Lagrangian terms enforce the constraint that surface points must remain on the triangular mesh: corner points stay exactly at the corners, edge points are confined to the edges, and face points are free to move across the surface. To take advantage of quasi-Newton methods while tackling this high-dimensional problem, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm is employed, with the step size for each iteration determined by Armijo line search. Coupled with smart Laplacian smoothing, HexOpt has demonstrated robustness and efficiency, successfully applying to 3D models and hex meshes generated by different methods without requiring any manual intervention or parameter adjustment.
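For context, the underlying per-element quality measure is the textbook scaled Jacobian, sketched below for a hexahedron under the usual VTK-style vertex ordering (this is the standard metric the rectified energy builds on, not the authors' functional):

```python
# Textbook scaled Jacobian of a hexahedron: at each corner, take the three
# incident edge vectors, form the 3x3 Jacobian, and normalize its determinant
# by the edge lengths; element quality is the minimum over the 8 corners.
import math

# Corner -> its three edge-connected neighbors, for the usual VTK-style hex
# ordering (vertices 0-3 on the bottom face CCW, 4-7 directly above them).
HEX_CORNERS = [(0, 1, 3, 4), (1, 2, 0, 5), (2, 3, 1, 6), (3, 0, 2, 7),
               (4, 7, 5, 0), (5, 4, 6, 1), (6, 5, 7, 2), (7, 6, 4, 3)]

def det3(a, b, c):
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def min_scaled_jacobian(verts):
    """verts: 8 corner coordinates in the ordering above; 1.0 for a cube."""
    worst = float("inf")
    for v, n1, n2, n3 in HEX_CORNERS:
        e = [[verts[n][i] - verts[v][i] for i in range(3)] for n in (n1, n2, n3)]
        norm = math.prod(math.hypot(*ev) for ev in e) or 1e-300
        worst = min(worst, det3(*e) / norm)
    return worst
```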