math.AT

5 posts

arXiv:2412.18005v1 Announce Type: cross Abstract: One common function class in machine learning is the class of ReLU neural networks. ReLU neural networks induce a piecewise linear decomposition of their input space called the canonical polyhedral complex. It has previously been established that it is decidable whether a ReLU neural network is piecewise linear Morse. In order to expand computational tools for analyzing the topological properties of ReLU neural networks, and to harness the strengths of discrete Morse theory, we introduce a schematic for translating between a given piecewise linear Morse function (e.g., the parameters of a ReLU neural network) on a canonical polyhedral complex and a compatible ("relatively perfect") discrete Morse function on the same complex. Our approach is constructive, producing an algorithm that can be used to determine if a given vertex in a canonical polyhedral complex corresponds to a piecewise linear Morse critical point. Furthermore, we provide an algorithm for constructing a consistent discrete Morse pairing on cells in the canonical polyhedral complex which contain this vertex. We additionally provide some new realizability results with respect to sublevel set topology in the case of shallow ReLU neural networks.

Robyn Brooks, Marissa Masden (12/25/2024)
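
The vertex-criticality question in the abstract above can be illustrated on a much simpler object than the canonical polyhedral complex: for a PL function on a triangulated surface, a vertex is regular exactly when both its lower link and upper link have one connected component. Below is a minimal sketch of the lower-link count only, on a toy triangulated surface; the function name `lower_link_components` and the octahedron example are illustrative and are not the paper's algorithm or data structure.

```python
import numpy as np

def lower_link_components(triangles, values, v):
    """Count connected components of the lower link of vertex v.

    triangles : list of 3-tuples of vertex indices (a triangulated surface)
    values    : PL function values at the vertices
    0 components -> local minimum; if the whole link is below v -> maximum;
    one lower and one upper component -> regular (on a closed surface).
    """
    link_edges = []
    lower_vertices = set()
    for tri in triangles:
        if v in tri:
            a, b = [u for u in tri if u != v]
            if values[a] < values[v]:
                lower_vertices.add(a)
            if values[b] < values[v]:
                lower_vertices.add(b)
            if values[a] < values[v] and values[b] < values[v]:
                link_edges.append((a, b))

    # Union-find over the lower-link vertices.
    parent = {u: u for u in lower_vertices}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for a, b in link_edges:
        parent[find(a)] = find(b)
    return len({find(u) for u in lower_vertices})

# Toy example: an octahedron with poles 0 and 5 and equator 1-4.
tris = [(0,1,2),(0,2,3),(0,3,4),(0,4,1),(5,2,1),(5,3,2),(5,4,3),(5,1,4)]
vals = np.array([0.0, 1.0, 1.2, 1.1, 0.9, 2.0])
print(lower_link_components(tris, vals, 5))  # 1: the whole link is below vertex 5, a PL maximum
print(lower_link_components(tris, vals, 0))  # 0: no lower link, a PL minimum
```
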

arXiv:2412.18452v1 Announce Type: cross Abstract: The Persistent Homology Transform (PHT) was introduced in the field of Topological Data Analysis about 10 years ago, and has since been proven to be a very powerful descriptor of Euclidean shapes. The PHT consists of scanning a shape from all possible directions $v\in S^{n-1}$ and then computing the persistent homology of sublevel set filtrations of the respective height functions $h_v$; this results in a sufficient and continuous descriptor of Euclidean shapes. We introduce a generalisation of the PHT in which we consider arbitrary parameter spaces and sublevel sets with respect to any function. In particular, we study transforms, defined on the Grassmannian $\mathbb{A}\mathbb{G}(m,n)$ of affine subspaces of $\mathbb{R}^n$, that allow us to scan a shape by probing it with all possible affine $m$-dimensional subspaces $P\subset \mathbb{R}^n$, for fixed dimension $m$, and by computing persistent homology of sublevel set filtrations of the function $\mathrm{dist}(\cdot, P)$ encoding the distance from the flat $P$. We call such transforms "distance-from-flat" PHTs. We show that these transforms are injective and continuous and that they provide computational advantages over the classical PHT. In particular, we show that it is enough to compute homology only in degrees up to $m-1$ to obtain injectivity; for $m=1$ this provides a very powerful and computationally advantageous tool for examining shapes, which, in previous work by a subset of the authors, has been shown to significantly outperform state-of-the-art neural networks for shape classification tasks.

Adam Onus, Nina Otter, Renata Turkes (12/25/2024)
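
The new filtration function in the abstract above is $\mathrm{dist}(\cdot, P)$ for an affine $m$-flat $P$. A small numpy sketch of that distance is given below, parametrising a flat by a base point and spanning directions; the names `dist_to_flat`, `base`, and `directions` are illustrative choices, and the persistence computation over the resulting sublevel sets would be handed to a TDA library rather than shown here.

```python
import numpy as np

def dist_to_flat(points, base, directions):
    """Distance from each point to the affine flat {base + span(directions)}.

    points     : (N, n) array of sample points of the shape
    base       : (n,) point on the flat
    directions : (m, n) spanning vectors of the flat (need not be orthonormal)
    """
    # Orthonormalise the spanning directions.
    Q, _ = np.linalg.qr(directions.T)      # (n, m) with orthonormal columns
    diff = points - base                   # (N, n)
    proj = diff @ Q @ Q.T                  # component of diff inside the flat
    return np.linalg.norm(diff - proj, axis=1)

# Example: distance in R^3 to the line (m = 1) through the origin with
# direction (1, 1, 1); sublevel sets of these values give the filtration.
pts = np.random.default_rng(0).normal(size=(100, 3))
f = dist_to_flat(pts, base=np.zeros(3), directions=np.array([[1.0, 1.0, 1.0]]))
print(f.min(), f.max())
```

Sweeping over all flats in $\mathbb{A}\mathbb{G}(m,n)$ and computing persistence of these sublevel sets, in degrees up to $m-1$ as the abstract notes, is what assembles the distance-from-flat PHT.
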

arXiv:2412.18515v1 Announce Type: cross Abstract: We introduce a new algorithm for finding robust circular coordinates on data that is expected to exhibit recurrence, such as that which appears in neuronal recordings of C. elegans. Techniques exist to create circular coordinates on a simplicial complex from a dimension 1 cohomology class, and these can be applied to the Rips complex of a dataset when it has a prominent class in its dimension 1 cohomology. However, this approach is known to be extremely sensitive to uneven sampling density. Our algorithm comes with a new method to correct for uneven sampling density, adapting our prior work on averaging coordinates in manifold learning. We use rejection sampling to correct for inhomogeneous sampling and then apply Procrustes matching to align and average the subsamples. In addition to providing more robust coordinates than other approaches, this subsampling and averaging procedure is also more efficient. We validate our technique on both synthetic data sets and neuronal activity recordings. Our results yield a topological model of neuronal trajectories for C. elegans, constructed from loops, in which different regions of the brain state space map to specific and interpretable macroscopic behaviors in the worm.

Andrew J. Blumberg, Mathieu Carri\`ere, Jun Hou Fung, Michael A. Mandell (12/25/2024)
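
The density-correction step mentioned in the abstract above can be sketched generically with a k-nearest-neighbour density estimate followed by rejection sampling; this is a standard version of the idea, not the authors' exact procedure, and the circular-coordinate and Procrustes-averaging steps are omitted. The function name `density_equalizing_subsample` and the parameter choices are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_equalizing_subsample(X, k=15, seed=0):
    """Rejection-sample X so the retained points are closer to uniform density.

    Local density at x is estimated as ~ k / r_k(x)^d, where r_k is the
    distance to the k-th nearest neighbour; points in dense regions are
    kept with correspondingly lower probability.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    tree = cKDTree(X)
    # k + 1 neighbours because each point's nearest neighbour is itself.
    r_k = tree.query(X, k=k + 1)[0][:, -1]
    density = k / np.maximum(r_k, 1e-12) ** d
    accept_prob = density.min() / density
    keep = rng.random(len(X)) < accept_prob
    return X[keep]

# Example: a noisy circle sampled much more heavily on one side.
rng = np.random.default_rng(1)
theta = np.concatenate([rng.uniform(0, np.pi, 2000), rng.uniform(np.pi, 2 * np.pi, 200)])
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(len(theta), 2))
print(len(X), "->", len(density_equalizing_subsample(X)))
```
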

arXiv:2412.16619v1 Announce Type: new Abstract: Gaussian Splatting (GS) has emerged as a crucial technique for representing discrete volumetric radiance fields. It leverages a unique parametrization to mitigate computational demands in scene optimization. This work introduces Topology-Aware 3D Gaussian Splatting (Topology-GS), which addresses two key limitations in current approaches: compromised pixel-level structural integrity due to incomplete initial geometric coverage, and inadequate feature-level integrity from insufficient topological constraints during optimization. To overcome these limitations, Topology-GS incorporates a novel interpolation strategy, Local Persistent Voronoi Interpolation (LPVI), and a topology-focused regularization term based on persistent barcodes, named PersLoss. LPVI utilizes persistent homology to guide adaptive interpolation, enhancing point coverage in low-curvature areas while preserving topological structure. PersLoss aligns the visual perceptual similarity of rendered images with ground truth by constraining distances between their topological features. Comprehensive experiments on three novel-view synthesis benchmarks demonstrate that Topology-GS outperforms existing methods in terms of PSNR, SSIM, and LPIPS metrics, while maintaining efficient memory usage. This study pioneers the integration of topology with 3D-GS, laying the groundwork for future research in this area.

Tianqi Shen, Shaohua Liu, Jiaqi Feng, Ziye Ma, Ning An (12/24/2024)
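
As a rough illustration of what a barcode-based regularizer like PersLoss measures, the sketch below compares 0-dimensional sublevel-set persistence of a rendered image with that of its ground truth and returns an L1 discrepancy between sorted bar lengths, using gudhi's cubical complexes. This is only a forward-pass illustration and not the paper's PersLoss: the actual loss would require a differentiable persistence layer, and the names `bar_lengths` and `topo_discrepancy` are illustrative.

```python
import numpy as np
import gudhi

def bar_lengths(img, dim=0):
    """Finite bar lengths of the sublevel-set filtration of a grayscale image."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=img)
    pairs = cc.persistence()
    lengths = [d - b for (k, (b, d)) in pairs if k == dim and np.isfinite(d)]
    return np.sort(np.array(lengths))[::-1]

def topo_discrepancy(rendered, target, n_bars=20):
    """L1 distance between the top-n_bars persistence lengths of two images."""
    a, b = bar_lengths(rendered), bar_lengths(target)
    a = np.pad(a, (0, max(0, n_bars - len(a))))[:n_bars]
    b = np.pad(b, (0, max(0, n_bars - len(b))))[:n_bars]
    return float(np.abs(a - b).sum())

# Example with random images standing in for a rendering and its ground truth.
rng = np.random.default_rng(0)
print(topo_discrepancy(rng.random((64, 64)), rng.random((64, 64))))
```
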

arXiv:2412.17468v1 Announce Type: new Abstract: While message passing graph neural networks produce informative node embeddings, they may struggle to capture the topological properties of graphs. To this end, node filtration has been widely used as an attempt to obtain the topological information of a graph using persistence diagrams. However, these attempts have faced the problem of losing node embedding information, which in turn prevents them from providing a more expressive graph representation. To tackle this issue, we shift our focus to edge filtration and introduce a novel edge filtration-based persistence diagram, named Topological Edge Diagram (TED), which is mathematically proven to preserve node embedding information as well as contain additional topological information. To implement TED, we propose a neural network based algorithm, named Line Graph Vietoris-Rips (LGVR) Persistence Diagram, that extracts edge information by transforming a graph into its line graph. Through LGVR, we propose two model frameworks that can be applied to any message passing GNN, and prove that they are strictly more powerful than Weisfeiler-Lehman type colorings. Finally, we empirically validate the superior performance of our models on several graph classification and regression benchmarks.

Jaesun Shin, Eunjoo Jeon, Taewon Cho, Namkyeong Cho, Youngjune Gwon (12/24/2024)
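
The line-graph construction that LGVR builds on is standard: each edge of the input graph becomes a node, and two such nodes are adjacent when the edges share an endpoint. Below is a minimal sketch of that transformation followed by a lower-star filtration and persistence computation, with a placeholder `edge_value` function standing in for learned edge embeddings; this is not the paper's TED/LGVR pipeline, and the function name is illustrative.

```python
import networkx as nx
import gudhi

def line_graph_lower_star_persistence(G, edge_value):
    """Persistence of a lower-star filtration on the line graph of G.

    Each node of the line graph is an edge of G; edge_value assigns it a
    filtration value (here a stand-in for a learned edge embedding score).
    """
    L = nx.line_graph(G)                       # nodes of L are edges of G
    idx = {e: i for i, e in enumerate(L.nodes())}
    st = gudhi.SimplexTree()
    for e in L.nodes():
        st.insert([idx[e]], filtration=edge_value(e))
    for e1, e2 in L.edges():
        # Lower-star rule: an edge appears when both endpoints are present.
        st.insert([idx[e1], idx[e2]],
                  filtration=max(edge_value(e1), edge_value(e2)))
    return st.persistence()

# Example: a 6-cycle with filtration given by the sum of each edge's endpoint labels.
G = nx.cycle_graph(6)
diagram = line_graph_lower_star_persistence(G, edge_value=lambda e: e[0] + e[1])
print(diagram)
```
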