cs.DB

arXiv:2501.07398v1 Announce Type: new Abstract: In recent years, the importance of well-documented metadata has been discussed increasingly in many research fields. Making all metadata generated during scientific research available in a findable, accessible, interoperable, and reusable (FAIR) manner remains a significant challenge for researchers across fields. Scientific communities are converging on achieving this by making all data available in a semantically annotated knowledge graph built on semantic web technologies. However, most current approaches do not gather metadata in a consistent, community-agreed, standardized way, and tools to support turning metadata into a knowledge graph are insufficient. We present an example solution in which the creation of a schema and ontology is placed at the beginning of the scientific process, which is then - using the electronic laboratory notebook framework Herbie - turned into a bespoke data collection platform that validates and semantically annotates the metadata immediately during an experiment. Using the example of synchrotron-radiation-based nano computed tomography measurements, we present a holistic approach that can capture the complex metadata of such research instruments in a flexible and straightforward manner. Different instrument setups of this beamline can be accommodated while keeping the experience user-friendly. We show how Herbie turns all semantic documents into an accessible user interface in which all entered data automatically fulfill the FAIR requirements, and we show how data can be extracted directly via competency questions without requiring familiarity with the fine-grained structure of the knowledge graph.
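
As a rough illustration of the competency-question idea, a SPARQL query over an RDF metadata graph can answer, say, "which scans used an exposure time above 0.5 s?" without the user touching the graph's fine-grained structure. The sketch below uses Python's rdflib with a made-up vocabulary (EX, TomographyScan, exposureTime are illustrative assumptions; Herbie's actual ontology is not spelled out in the abstract):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    # Hypothetical experiment vocabulary; the real ontology differs.
    EX = Namespace("https://example.org/tomo#")

    g = Graph()
    scan = EX["scan42"]
    g.add((scan, RDF.type, EX.TomographyScan))
    g.add((scan, EX.exposureTime, Literal(0.8, datatype=XSD.double)))
    g.add((scan, EX.sampleName, Literal("Mg alloy pin")))

    # Competency question: which scans used an exposure time above 0.5 s?
    q = """
    PREFIX ex: <https://example.org/tomo#>
    SELECT ?scan ?sample WHERE {
        ?scan a ex:TomographyScan ;
              ex:exposureTime ?t ;
              ex:sampleName ?sample .
        FILTER (?t > 0.5)
    }
    """
    for row in g.query(q):
        print(row.scan, row.sample)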

Fabian Kirchner (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany), D. C. Florian Wieland (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany), Sarah Irvine (Institute of Materials Physics, Helmholtz-Zentrum Hereon, Germany), Sven Schimek (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany), Jan Reimers (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany, Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, Forschungszentrum Jülich GmbH, Jülich, Germany), Rossella Aversa (Karlsruhe Institute of Technology, Scientific Computing Center), Alexey Boubnov (Karlsruhe Institute of Technology, Institute of Nanotechnology), Christian Lucas (Bruker Daltonics SPR, Hamburg, Germany), Silja Flenner (Institute of Materials Physics, Helmholtz-Zentrum Hereon, Germany), Imke Greving (Institute of Materials Physics, Helmholtz-Zentrum Hereon, Germany), André Lopes Marinho (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany), Tak Ming Wong (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany, Institute of Materials Physics, Helmholtz-Zentrum Hereon, Germany), Regine Willumeit-Römer (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany), Catriona Eschke (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany), Berit Zeller-Plumhoff (Institute of Metallic Biomaterials, Helmholtz-Zentrum Hereon, Germany, Data-driven Analysis and Design of Materials, Faculty of Mechanical Engineering and Marine Technologies, University of Rostock, Germany) (1/14/2025)

arXiv:2501.06705v1 Announce Type: new Abstract: Recent advancements in quantum technologies, particularly in quantum sensing and simulation, have facilitated the generation and analysis of inherently quantum data. This progress underscores the necessity of developing efficient and scalable quantum data management strategies. This goal faces immense challenges due to the exponential dimensionality of quantum data and its unique quantum properties such as no-cloning and measurement stochasticity. Specifically, classical storage and manipulation of an arbitrary n-qubit quantum state requires exponential space and time. Hence, there is a critical need to revisit foundational data management concepts and algorithms for quantum data. In this paper, we propose succinct quantum data sketches to support basic database operations such as search and selection. We view our work as an initial step towards the development of a quantum data management model, opening up many possibilities for future research in this direction.
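
A classical caricature of the dimensionality problem and of "search" over quantum data: an n-qubit state is a 2^n-dimensional complex vector, and similarity between states reduces to the fidelity |<psi|phi>|^2. A minimal numpy sketch (illustrative only; the paper's succinct sketches are precisely about avoiding this exponential representation):

    import numpy as np

    def random_state(n_qubits, rng):
        """A random n-qubit state as a normalized 2**n complex vector."""
        dim = 2 ** n_qubits
        psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        return psi / np.linalg.norm(psi)

    rng = np.random.default_rng(0)
    n = 10                                   # already 1024 amplitudes per state
    database = [random_state(n, rng) for _ in range(100)]
    query = database[37]

    # Fidelity-based "selection": find the stored state closest to the query.
    fidelities = [abs(np.vdot(query, phi)) ** 2 for phi in database]
    best = int(np.argmax(fidelities))
    print(best, fidelities[best])            # 37, approximately 1.0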

Qin Zhang, Mohsen Heidari (1/14/2025)

arXiv:2501.07106v1 Announce Type: new Abstract: Kernel density estimation (KDE) has become a popular method for visual analysis in various fields, such as financial risk forecasting, crime clustering, and traffic monitoring. KDE can identify high-density areas from discrete datasets. However, most existing works only consider planar distance and spatial data. In this paper, we introduce a new model, called TN-KDE, that applies KDE-based techniques to road networks with temporal data. Specifically, we introduce a novel solution, the Range Forest Solution (RFS), which can efficiently compute KDE values on spatiotemporal road networks. To support insertions, we present a dynamic version, called the Dynamic Range Forest Solution (DRFS). We also propose an optimization called Lixel Sharing (LS) that shares similar KDE values between two adjacent lixels. Furthermore, our solutions support many non-polynomial kernel functions while still reporting exact values. Experimental results show that our solutions are up to 6 times faster than the state-of-the-art method.
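
For intuition, network KDE evaluates, at each lixel (a small unit of road segment), a kernel of the network distance to every event within the bandwidth; the temporal extension multiplies in a kernel over time. A naive reference implementation of that definition (the paper's RFS index exists precisely to avoid this per-lixel scan; all names here are illustrative):

    import math

    def epanechnikov(d, bandwidth):
        """A common KDE kernel; zero outside the bandwidth."""
        u = d / bandwidth
        return 0.75 * (1.0 - u * u) if u < 1.0 else 0.0

    def kde_at_lixel(lixel, events, network_dist, h_space, h_time, t_query):
        """Naive spatiotemporal KDE, O(#events) per lixel.
        events: (location, timestamp) pairs; network_dist: shortest-path
        distance on the road network, assumed queryable."""
        total = 0.0
        for loc, ts in events:
            k_space = epanechnikov(network_dist(lixel, loc), h_space)
            k_time = epanechnikov(abs(t_query - ts), h_time)
            total += k_space * k_time
        return total

    # Toy usage with straight-line distance standing in for network distance.
    events = [((0.0, 0.0), 10.0), ((1.0, 0.0), 12.0)]
    print(kde_at_lixel((0.5, 0.0), events, math.dist,
                       h_space=2.0, h_time=5.0, t_query=11.0))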

Yu Shao, Peng Cheng, Xiang Lian, Lei Chen, Wangze Ni, Xuemin Lin, Chen Zhang, Liping Wang (1/14/2025)

arXiv:2411.14754v2 Announce Type: replace Abstract: Approximate Nearest Neighbor (ANN) search in high-dimensional Euclidean spaces is a fundamental problem with a wide range of applications. However, there is currently no ANN method that performs well in both indexing and query answering performance while providing rigorous theoretical guarantees on the quality of the answers. In this paper, we first design SC-score, a metric that we show follows the Pareto principle and can act as a proxy for the Euclidean distance between data points. Inspired by this, we propose a novel ANN search framework called Subspace Collision (SC), which can provide theoretical guarantees on the quality of its results. We further propose SuCo, which achieves efficient and accurate ANN search by designing a clustering-based lightweight index and query strategies for our subspace collision framework. Extensive experiments on real-world datasets demonstrate that SuCo outperforms state-of-the-art ANN methods that provide theoretical guarantees in both indexing and query answering performance, answering queries 1-2 orders of magnitude faster with only up to one-tenth of the index memory footprint. Moreover, SuCo achieves top performance (best for hard datasets) even when compared to methods that do not provide theoretical guarantees. This paper was published in SIGMOD 2025.
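
The collision idea can be sketched in a few lines: split each vector into subspaces, quantize each sub-vector (here with k-means), and rank candidates by how many subspaces agree with the query's codes. A toy version with scikit-learn (illustrative; SuCo's index, query strategies, and SC-score are considerably more involved):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    data = rng.normal(size=(10_000, 64)).astype(np.float32)
    n_sub, k = 8, 64                        # 8 subspaces of 8 dimensions each
    subs = np.split(data, n_sub, axis=1)

    # One k-means codebook per subspace; remember each point's cluster ids.
    models = [KMeans(n_clusters=k, n_init=4, random_state=0).fit(s) for s in subs]
    codes = np.stack([m.labels_ for m in models], axis=1)     # shape (N, n_sub)

    def collision_candidates(query, top=100):
        """Rank points by the number of subspace collisions with the query."""
        qcodes = np.array([m.predict(q[None].astype(np.float32))[0]
                           for m, q in zip(models, np.split(query, n_sub))])
        collisions = (codes == qcodes).sum(axis=1)
        return np.argsort(-collisions)[:top]  # then verify with exact distances

    print(collision_candidates(data[123])[:5])  # 123 ranks among the top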

Jiuqi Wei, Xiaodong Lee, Zhenyu Liao, Themis Palpanas, Botao Peng (1/14/2025)

arXiv:2407.14953v2 Announce Type: replace Abstract: Edge applications generate a large influx of sensor data on massive scales, and these massive data streams must be processed promptly to derive actionable intelligence. However, traditional data processing systems are not well-suited for these edge applications, as they often do not scale well with a large number of concurrent stream queries, do not support low-latency processing under limited edge computing resources, and do not adapt to the level of heterogeneity and dynamicity commonly present in edge computing environments. As such, we present AgileDart, an agile and scalable edge stream processing engine that enables fast processing of many concurrently running low-latency edge applications' queries at scale in dynamic, heterogeneous edge environments. The novelty of our work lies in two components: a dynamic dataflow abstraction that leverages distributed-hash-table-based peer-to-peer overlay networks to autonomously place, chain, and scale stream operators, reducing query latencies, adapting to workload variations, and recovering from failures; and a bandit-based path-planning model that re-plans data-shuffling paths to adapt to unreliable and heterogeneous edge networks. We show that AgileDart outperforms Storm and EdgeWise in query latency and significantly improves scalability and adaptability when processing many real-world edge stream applications' queries.
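
A hedged sketch of the DHT-flavored placement idea: hash both edge nodes and operators onto one ring and assign each operator to the first node clockwise, so nodes can join or leave with minimal re-placement. This is plain consistent hashing (illustrative; AgileDart's overlay, operator chaining, and scaling logic go well beyond it):

    import bisect
    import hashlib

    def ring_hash(key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** 32)

    class OperatorPlacer:
        """Consistent-hashing placement of stream operators onto edge nodes."""
        def __init__(self, nodes):
            self.ring = sorted((ring_hash(n), n) for n in nodes)
            self.keys = [h for h, _ in self.ring]

        def place(self, operator_id: str) -> str:
            i = bisect.bisect_right(self.keys, ring_hash(operator_id))
            return self.ring[i % len(self.ring)][1]

    placer = OperatorPlacer(["edge-a", "edge-b", "edge-c"])
    for op in ["source-1", "map-2", "window-3", "sink-4"]:
        print(op, "->", placer.place(op))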

Cheng-Wei Ching, Xin Chen, Chaeeun Kim, Tongze Wang, Dong Chen, Dilma Da Silva, Liting Hu (1/14/2025)

arXiv:2403.12605v4 Announce Type: replace Abstract: Microservice architectures have become a popular approach for designing scalable distributed applications. Despite their extensive use in industrial settings for over a decade, the data management challenges that arise in these applications remain poorly understood, and it has consequently been difficult to advance data system technologies that effectively support microservice applications. To fill this gap, we present Online Marketplace, a microservice benchmark that highlights core data management challenges that existing benchmarks fail to address. These challenges include transaction processing, query processing, event processing, constraint enforcement, and data replication. We have defined criteria for various data management issues to enable proper comparison across data systems and platforms. Through case studies with state-of-the-art data platforms, we discuss the issues encountered while implementing and meeting Online Marketplace's criteria. By capturing the overhead of meeting the key data management requirements that existing benchmarks overlook, we gain actionable insights into the experimental platforms. This highlights the significance of Online Marketplace in advancing future data systems to meet the needs of microservice practitioners.

Rodrigo Laigner, Zhexiang Zhang, Yijian Liu, Leonardo Freitas Gomes, Yongluan Zhou (1/14/2025)

arXiv:2412.06189v3 Announce Type: replace Abstract: One fundamental question in database theory is the following: given a Boolean conjunctive query Q, what is the best complexity for computing the answer to Q in terms of the input database size N? When restricted to the class of combinatorial algorithms, it is known that the best known complexity for any query Q is captured by the submodular width of Q. However, beyond combinatorial algorithms, certain queries are known to admit faster algorithms that often involve a clever combination of fast matrix multiplication and data partitioning. Nevertheless, there is no systematic way to derive and analyze the complexity of such algorithms for arbitrary queries Q. In this work, we introduce a general framework that captures the best complexity for answering any Boolean conjunctive query Q using matrix multiplication. Our framework unifies both combinatorial and non-combinatorial techniques under the umbrella of information theory. It generalizes the notion of submodular width to a stronger notion called the omega-submodular width that naturally incorporates the power of fast matrix multiplication. We describe a matching algorithm that computes the answer to any query Q in time corresponding to its omega-submodular width. We show that, to the best of our knowledge, our framework recovers the best known complexities for Boolean queries studied in the literature, and that it also discovers new algorithms for some classes of queries that improve upon the best known complexities.
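
The canonical example of beating combinatorial bounds with matrix multiplication is the Boolean triangle query Q() :- R(a,b), S(b,c), T(c,a). Encoding each relation as a Boolean matrix over an n-element domain, (RS)[a,c] > 0 says some b connects a to c, so the query is true iff some such (a,c) also lies in T. A numpy sketch (illustrative; the paper's framework generalizes this interplay to arbitrary queries):

    import numpy as np

    def boolean_triangle(R, S, T):
        """True iff there exist a, b, c with R[a,b], S[b,c], T[c,a].
        Fast matrix multiplication makes the product subcubic in n,
        which is how such queries can beat pure join plans."""
        witnesses = (R.astype(np.int64) @ S.astype(np.int64)) > 0   # (a, c)
        return bool(np.any(witnesses & T.T))    # T[c,a] -> transpose to (a, c)

    rng = np.random.default_rng(1)
    n = 200
    R, S, T = (rng.random((n, n)) < 0.02 for _ in range(3))
    print(boolean_triangle(R, S, T))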

Mahmoud Abo-Khamis, Xiao Hu, Dan Suciu (1/14/2025)

arXiv:2501.06570v1 Announce Type: new Abstract: There is a proliferation of applications requiring the management of large-scale, evolving graphs under workloads with intensive graph updates and lookups. Driven by this challenge, we introduce Poly-LSM, a high-performance key-value storage engine for graphs with the following novel techniques: (1) Poly-LSM embeds a new graph-oriented LSM-tree design that features a hybrid storage model for concisely and effectively storing graph data. (2) Poly-LSM utilizes an adaptive mechanism to handle edge insertions and deletions on graphs with optimized I/O efficiency. (3) Poly-LSM exploits the skewness of graph data to encode the key-value entries. Building upon this foundation, we further implement Aster, a robust and versatile graph database that supports the Gremlin query language, facilitating various graph applications. In our experiments, we compare Aster against several mainstream real-world graph databases. The results demonstrate that Aster outperforms all baseline graph databases, especially on large-scale graphs. Notably, on the billion-scale Twitter graph dataset, Aster achieves up to 17x throughput improvement compared to the best-performing baseline system.
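
One way to picture the hybrid storage model: keep a small adjacency list inlined under a single vertex key, and spill edges of high-degree vertices to one key per edge. A schematic in-memory sketch (INLINE_LIMIT and the layout are assumptions for illustration; the abstract does not specify Poly-LSM's actual on-disk format or adaptive policy):

    INLINE_LIMIT = 64   # assumed threshold; the real mechanism is adaptive

    class HybridEdgeStore:
        """Toy hybrid key-value layout for adjacency data."""
        def __init__(self):
            self.kv = {}   # stand-in for the LSM-tree key-value engine

        def add_edge(self, src, dst):
            inline = self.kv.setdefault(("adj", src), [])
            if len(inline) < INLINE_LIMIT:
                inline.append(dst)                   # low degree: one key per vertex
            else:
                self.kv[("edge", src, dst)] = True   # high degree: key per edge

        def neighbors(self, src):
            yield from self.kv.get(("adj", src), [])
            for key in self.kv:
                if len(key) == 3 and key[:2] == ("edge", src):
                    yield key[2]

    store = HybridEdgeStore()
    store.add_edge("v1", "v2")
    store.add_edge("v1", "v3")
    print(list(store.neighbors("v1")))   # ['v2', 'v3']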

Dingheng Mo, Junfeng Liu, Fan Wang, Siqiang Luo (1/14/2025)

arXiv:2501.07449v1 Announce Type: new Abstract: Database normalization theory is the basis for the logical design of relational databases. Normalization reduces data redundancy and consequently eliminates potential data anomalies, while increasing the computational cost of read operations. Despite decades' worth of applications of normalization theory, it remains largely unclear to what extent normalization affects database size and efficiency. In this study, we examine the effects of database normalization using the Internet Movie Database (IMDb) public dataset and PostgreSQL. The results indicate, rather intuitively, that (i) database size on disk is reduced through normalization from 1NF to 2NF by 10%, but not from 2NF to 4NF, (ii) the total number of tables and table rows increases monotonically from 1NF to 2NF to 4NF, and (iii) query complexity increases with further normalization. Surprisingly, however, the results also indicate that (iv) normalization from 1NF to 2NF increases throughput by a factor of 4, and consequently, (v) energy consumption per transaction is reduced by 74% by normalization from 1NF to 2NF. The results imply that the gains of normalization from 2NF to 4NF in terms of throughput and energy consumption are minimal, yet the storage space requirements increase by approximately 7%. While these results represent merely one specific case, they provide needed empirical evaluation of the practical effects and magnitude of database normalization.
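
To make the 1NF-to-2NF step concrete: a 1NF table that repeats title attributes once per genre carries a partial dependency on its key, which the decomposition removes at the price of a join on reads. A minimal sqlite3 sketch with a loosely IMDb-shaped, hypothetical schema:

    import sqlite3

    con = sqlite3.connect(":memory:")
    # 1NF: name and year repeat once per (title_id, genre) row.
    con.execute("CREATE TABLE title_1nf (title_id INT, name TEXT, year INT,"
                " genre TEXT, PRIMARY KEY (title_id, genre))")
    # 2NF: attributes depending only on title_id move to their own table.
    con.execute("CREATE TABLE title (title_id INT PRIMARY KEY, name TEXT,"
                " year INT)")
    con.execute("CREATE TABLE title_genre (title_id INT, genre TEXT,"
                " PRIMARY KEY (title_id, genre))")

    con.execute("INSERT INTO title VALUES (1, 'Heat', 1995)")
    con.executemany("INSERT INTO title_genre VALUES (?, ?)",
                    [(1, 'Crime'), (1, 'Thriller')])

    # Reads now pay a join: the query-complexity cost the study measures.
    rows = con.execute("""SELECT t.name, t.year, g.genre
                          FROM title t JOIN title_genre g USING (title_id)""")
    print(rows.fetchall())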

Toni Taipalus (1/14/2025)

arXiv:2501.06659v1 Announce Type: new Abstract: Many documents, which we call templatized documents, are programmatically generated by populating fields in a visual template. Effective data extraction from these documents is crucial to supporting downstream analytical tasks. Current data extraction tools often struggle with complex document layouts, incur high latency and/or cost on large datasets, and often require significant human effort when extracting tables or values for user-specified fields from documents. The key insight of our tool, TWIX, is to predict the underlying template used to create such documents, modeling the visual and structural commonalities across documents. Data extraction based on this predicted template provides a more principled, accurate, and efficient solution at a low cost. Comprehensive evaluations on 34 diverse real-world datasets show that uncovering the template is crucial for data extraction from templatized documents. TWIX achieves over 90% precision and recall on average, outperforming industry tools such as Textract and Azure Document Intelligence, as well as vision-based LLMs like GPT-4-Vision, by over 25% in precision and recall. TWIX scales easily to large datasets and is 734X faster and 5836X cheaper than vision-based LLMs for extracting data from a large document collection with 817 pages.
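
The intuition behind template prediction can be caricatured in a few lines: tokens that recur at the same position across documents are template text, and the remaining positions are fields to extract. A toy sketch (TWIX models visual and structural layout, not just flat token positions, so this is only the flavor of the idea):

    from collections import Counter

    def infer_template(docs, threshold=0.9):
        """Mark a token position as template text when (nearly) all
        documents agree on the token there; otherwise it is a field."""
        tokenized = [d.split() for d in docs]
        width = min(len(t) for t in tokenized)
        template = []
        for i in range(width):
            token, count = Counter(t[i] for t in tokenized).most_common(1)[0]
            template.append(token if count / len(docs) >= threshold
                            else "<FIELD>")
        return template

    docs = ["Invoice No: 1001 Total: 59.90",
            "Invoice No: 1002 Total: 12.00",
            "Invoice No: 1003 Total: 7.25"]
    print(infer_template(docs))
    # ['Invoice', 'No:', '<FIELD>', 'Total:', '<FIELD>']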

Yiming Lin, Mawil Hasan, Rohan Kosalge, Alvin Cheung, Aditya G. Parameswaran (1/14/2025)

arXiv:2501.07078v1 Announce Type: new Abstract: In the current development of large language models (LLMs), it is important to ensure the accuracy and reliability of the underlying data sources. LLMs are critical for various applications, but they often suffer from hallucinations and inaccuracies due to knowledge gaps in the training data. Knowledge graphs (KGs), as a powerful structural tool, can serve as a vital external information source to mitigate these issues. By providing a structured and comprehensive understanding of real-world data, KGs enhance the performance and reliability of LLMs. However, errors commonly arise when extracting triplets from unstructured data to construct KGs, and these errors can degrade performance in downstream tasks such as question answering and recommender systems. Therefore, anomaly detection in KGs is essential to identify and correct them. This paper presents an anomaly detection algorithm for knowledge graphs with dual-channel learning (ADKGD). ADKGD leverages a dual-channel learning approach to enhance representation learning from both the entity-view and triplet-view perspectives. Furthermore, using a cross-layer approach, our framework integrates internal information aggregation and context information aggregation. We introduce a Kullback-Leibler (KL) loss component to improve the accuracy of the scoring function between the dual channels. To evaluate ADKGD's performance, we conduct empirical studies on three real-world KGs: WN18RR, FB15K, and NELL-995. Experimental results demonstrate that ADKGD outperforms state-of-the-art anomaly detection algorithms. The source code and datasets are publicly available at https://github.com/csjywu1/ADKGD.
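
The KL component can be read as pushing the two channels toward agreeing on which triplets look anomalous. A hedged numpy sketch of a symmetric KL term between entity-view and triplet-view score distributions (the exact loss is defined in the paper, not the abstract; this only shows the mechanism):

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def kl(p, q, eps=1e-12):
        """KL divergence between two discrete distributions."""
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    # Anomaly scores for the same batch of triplets from the two channels.
    entity_view = np.array([0.2, 1.5, -0.3, 2.1])
    triplet_view = np.array([0.1, 1.2, -0.5, 2.4])

    p, q = softmax(entity_view), softmax(triplet_view)
    kl_loss = 0.5 * (kl(p, q) + kl(q, p))    # symmetric: channels must agree
    print(kl_loss)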

Jiayang Wu, Wensheng Gan, Jiahao Zhang, Philip S. Yu (1/14/2025)

arXiv:2501.06978v1 Announce Type: new Abstract: Two-phase locking (2PL) is a consolidated policy commonly adopted by database management systems to enforce serializability of a schedule. While the policy is well understood, in both its standard and strict versions, automatically deriving a suitable tabular/graphical analysis of schedules with respect to 2PL is far from trivial, and requires several technicalities that do not straightforwardly translate to visual cues. In this paper, we delve into the details of the development of a tool for 2PL analysis.
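
The core property such a tool must check is mechanical: in a 2PL schedule, every transaction acquires all its locks before releasing any, i.e., its growing phase strictly precedes its shrinking phase. A small checker over a simplified lock/unlock/read/write event log (a sketch of the analysis, not the paper's tool, which also handles lock modes, strictness, and visualization):

    def is_two_phase(schedule):
        """schedule: list of (txn, action, item) with action in
        {'lock', 'unlock', 'read', 'write'}. True iff every transaction
        locks items before accessing them and never locks after its
        first unlock."""
        shrinking = set()   # transactions that have released some lock
        held = {}           # txn -> set of currently locked items
        for txn, action, item in schedule:
            locks = held.setdefault(txn, set())
            if action == "lock":
                if txn in shrinking:
                    return False            # lock after unlock: not 2PL
                locks.add(item)
            elif action == "unlock":
                locks.discard(item)
                shrinking.add(txn)
            elif item not in locks:         # read/write without a lock
                return False
        return True

    s = [("T1", "lock", "x"), ("T1", "write", "x"), ("T1", "lock", "y"),
         ("T1", "unlock", "x"), ("T1", "read", "y"), ("T1", "unlock", "y")]
    print(is_two_phase(s))   # True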

Davide Martinenghi (1/14/2025)

arXiv:2406.00344v2 Announce Type: replace Abstract: Bipartite graphs are ubiquitous in many domains, e.g., e-commerce platforms, social networks, and academia, by modeling interactions between distinct entity sets. Within these graphs, the butterfly motif, a complete 2×2 biclique, represents the simplest yet significant subgraph structure, crucial for analyzing complex network patterns. Counting butterflies offers significant benefits across various applications, including community analysis and recommender systems. Additionally, the temporal dimension of bipartite graphs, where edges activate within specific time frames, introduces the concept of historical butterfly counting, i.e., counting butterflies within a given time interval. This temporal analysis sheds light on the dynamics and evolution of network interactions, offering new insights into their mechanisms. Despite its importance, no existing algorithm can efficiently solve the historical butterfly counting task. To address this, we design two novel indices whose memory footprints depend on the number of butterflies and the number of wedges, respectively. Combining these indices, we propose a graph-structure-aware indexing approach that significantly reduces memory usage while preserving exceptional query speed. We theoretically prove that our approach is particularly advantageous on power-law graphs, a common characteristic of real-world bipartite graphs, surpassing traditional complexity barriers for general graphs. Extensive experiments reveal that our query algorithms outperform existing methods by up to five orders of magnitude, effectively balancing speed with manageable memory requirements.
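
The standard static baseline counts butterflies through wedges: two left vertices sharing c common right neighbors contribute C(c, 2) butterflies. A compact sketch of that baseline (the paper's contribution, indexing such counts over historical time intervals, is deliberately ignored here):

    from collections import defaultdict
    from itertools import combinations
    from math import comb

    def count_butterflies(edges):
        """edges: iterable of (left, right) pairs of a bipartite graph.
        Counts 2x2 bicliques by aggregating wedges through right vertices."""
        adj_right = defaultdict(set)
        for u, v in edges:
            adj_right[v].add(u)

        common = defaultdict(int)    # (u1, u2) -> #shared right neighbors
        for lefts in adj_right.values():
            for u1, u2 in combinations(sorted(lefts), 2):
                common[(u1, u2)] += 1

        return sum(comb(c, 2) for c in common.values())

    edges = [("a", 1), ("a", 2), ("b", 1), ("b", 2), ("c", 2)]
    print(count_butterflies(edges))   # 1: {a, b} x {1, 2}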

Qiuyang Mang, Jingbang Chen, Hangrui Zhou, Yu Gao, Yingli Zhou, Qingyu Shi, Richard Peng, Yixiang Fang, Chenhao Ma (1/14/2025)

arXiv:2406.00251v2 Announce Type: replace Abstract: SQL has attained widespread adoption, but Business Intelligence tools still use their own higher-level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries. SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL. To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
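
The in-place expansion the abstract promises can be previewed by hand. As a hedged illustration, imagine a profit_margin measure attached to an orders table (the measure syntax in the comment is hypothetical; the expanded query below is plain SQL run via sqlite3):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (region TEXT, revenue REAL, cost REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [("EU", 100.0, 70.0), ("EU", 50.0, 30.0),
                     ("US", 80.0, 60.0)])

    # Imagined measure attached to the table (not executable SQL today):
    #   profit_margin AS MEASURE (SUM(revenue) - SUM(cost)) / SUM(revenue)
    # Invoking the measure grouped by region would expand, per evaluation
    # context, to the aggregate expression in place:
    rows = con.execute("""
        SELECT region,
               (SUM(revenue) - SUM(cost)) / SUM(revenue) AS profit_margin
        FROM orders GROUP BY region
    """)
    print(rows.fetchall())   # [('EU', 0.333...), ('US', 0.25)]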

Julian Hyde, John Fremlin (1/14/2025)

arXiv:2501.03850v1 Announce Type: new Abstract: While classical skyline queries identify interesting data within large datasets, flexible skylines introduce preferences through constraints on attribute weights, and further reduce the data returned. However, computing these queries can be time-consuming for large datasets. We propose and implement a parallel computation scheme consisting of a parallel phase followed by a sequential phase, and apply it to flexible skylines. We assess the additional effect of an initial filtering phase that reduces dataset size before parallel processing, as well as the elimination of the sequential part (the most time-consuming) altogether. All our experiments are executed in the PySpark framework on a number of datasets of varying size and dimensionality.
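
The parallel-then-sequential scheme mirrors the classic partitioned skyline: each partition computes a local skyline in parallel, and only the survivors enter the final sequential pass. A plain-Python sketch of that structure with ordinary dominance (illustrative; flexible skylines replace dominance with weight-constrained dominance, and the paper runs on PySpark rather than a process pool):

    from concurrent.futures import ProcessPoolExecutor

    def dominates(p, q):
        """p dominates q: <= in every dimension, < in at least one."""
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    def local_skyline(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    def parallel_skyline(points, n_parts=4):
        parts = [points[i::n_parts] for i in range(n_parts)]
        with ProcessPoolExecutor() as pool:            # parallel phase
            survivors = [p for part in pool.map(local_skyline, parts)
                         for p in part]
        return local_skyline(survivors)                # sequential phase

    if __name__ == "__main__":
        pts = [(1, 5), (2, 2), (3, 1), (4, 4), (2, 3)]
        print(parallel_skyline(pts))    # [(1, 5), (2, 2), (3, 1)]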

Emilio De Lorenzis, Davide Martinenghi (1/8/2025)

arXiv:2501.03892v1 Announce Type: new Abstract: Social scientists are increasingly interested in analyzing the semantic information (e.g., emotion) of unstructured data (e.g., Tweets), where the semantic information is not natively present. Performing this analysis in a cost-efficient manner requires using machine learning (ML) models to extract the semantic information and subsequently analyze the now structured data. However, this process remains challenging for domain experts. To demonstrate the challenges in social science analytics, we collect a dataset, QUIET-ML, of 120 real-world social science queries in natural language and their ground truth answers. Existing systems struggle with these queries since (1) they require selecting and applying ML models, and (2) more than a quarter of these queries are vague, making standard tools like natural-language-to-SQL systems unsuited. To address these issues, we develop LEAP, an end-to-end library that answers social science queries in natural language with ML. LEAP filters vague queries to ensure that the answers are deterministic and selects from internally supported and user-defined ML functions to extend the unstructured data to structured tables with the necessary annotations. LEAP further generates and executes code to respond to these natural language queries. LEAP achieves 100% pass@3 and 92% pass@1 on QUIET-ML, with a $1.06 average end-to-end cost, of which code generation costs $0.02.

Chuxuan Hu, Austin Peters, Daniel Kang (1/8/2025)

arXiv:2501.03639v1 Announce Type: new Abstract: The recent surge in generative artificial intelligence (GenAI) has the potential to bring transformative changes to a range of sectors, including software engineering and education. As GenAI tools such as OpenAI's ChatGPT are increasingly used in software engineering, it becomes imperative to understand their impact on the software product. This study employs web scraping and data mining on LeetCode to compare the software quality of Python programs written by LeetCode users with that of programs generated by GPT-4o. Specifically, it addresses the question of whether GPT-4o produces software of higher quality than humans do. The findings indicate that GPT-4o does not present a considerable impediment to code quality, understandability, or runtime when generating code on a limited scale. Indeed, the generated code even exhibits significantly lower values across all three metrics than the user-written code. However, contrary to expectations, the generated code showed no significant advantage in memory usage over the user code. Furthermore, GPT-4o struggled to generalise to problems that were not included in its training data. This contribution presents a first large-scale study comparing generated code with human-written code on the LeetCode platform across multiple measures, including code quality, code understandability, time behaviour, and resource utilisation. All data is publicly available for further research.

Manuel Merkel, Jens Dörpinghaus (1/8/2025)

arXiv:2501.03647v1 Announce Type: new Abstract: Many approaches have been proposed to pre-compute data cubes in order to efficiently answer OLAP queries in data warehouses. However, few have proposed solutions integrating all of the possible outcomes, and it is this idea that leads to the integration of hierarchical dimensions into these answers. To meet this need, we propose, in this paper, a complete redefinition of the framework and formal definitions of traditional database analysis through the prism of hierarchical dimensions. After characterizing the hierarchical data cube lattice, we introduce the hierarchical data cube and its most concise reduced representation, the closed hierarchical data cube. It offers a compact representation that optimizes storage space by removing redundancies among strongly correlated data. Such data are typical of data warehouses, and in particular of video games, our field of study and experimentation, where hierarchical dimension attributes are widely represented.
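
To picture the hierarchical data cube lattice: a classical cube has one cuboid per subset of the dimensions, and hierarchies multiply this by the levels chosen per dimension, giving prod(1 + |levels_d|) cuboids. A small itertools sketch of that enumeration (dimension names and levels are made up; the paper's closed hierarchical cube then prunes this lattice by keeping only closed cells):

    from itertools import combinations, product

    # Hypothetical dimensions with hierarchy levels, coarse to fine.
    hierarchies = {
        "time": ["year", "month", "day"],
        "location": ["country", "city"],
        "game": ["franchise", "title"],
    }

    def hierarchical_cuboids(hierarchies):
        """Yield every cuboid: a subset of dimensions plus one hierarchy
        level for each chosen dimension. The empty tuple is ALL."""
        dims = list(hierarchies)
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                for levels in product(*(hierarchies[d] for d in subset)):
                    yield tuple(zip(subset, levels))

    lattice = list(hierarchical_cuboids(hierarchies))
    print(len(lattice))    # (3+1) * (2+1) * (2+1) = 36 cuboids
    print(lattice[:4])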

Micka\"el Martin Nevot (AMU, LIS), S\'ebastien Nedjar (AMU, LIS), Lotfi Lakhal (AMU, LIS)1/8/2025

arXiv:2501.03413v1 Announce Type: new Abstract: Foundation models, particularly those that incorporate Transformer architectures, have demonstrated exceptional performance in domains such as natural language processing and image processing. Adapting these models to structured data, like tables, however, introduces significant challenges. These difficulties are even more pronounced when addressing multi-table data linked via foreign keys, which is prevalent in the enterprise realm and crucial for empowering business use cases. Despite its substantial impact, research focusing on such linked business tables within enterprise settings remains an important yet underexplored domain. To address this, we introduce a curated dataset sourced from an Enterprise Resource Planning (ERP) system, featuring extensive linked tables. This dataset is specifically designed to support research in table representation learning. By providing access to authentic enterprise data, our goal is to enhance the effectiveness and applicability of models for real-world business contexts.

Tassilo Klein, Clemens Biehl, Margarida Costa, Andre Sres, Jonas Kolk, Johannes Hoffart (1/8/2025)

arXiv:2404.14692v2 Announce Type: replace Abstract: Overlapping Community Search (OCS) identifies nodes that interact with multiple communities based on a specified query. Existing community search (CS) approaches fall into two categories: algorithm-based models and machine-learning-based (ML) models. Despite the long-standing focus on this topic within the database domain, current solutions face two major limitations: 1) both approaches fail to address personalized user requirements in OCS, consistently returning the same set of nodes for a given query regardless of user differences; 2) existing ML-based CS models suffer from severe training efficiency issues. In this paper, we formally redefine the problem of OCS. By analyzing the gaps in both types of approaches, we then propose a general solution for OCS named Sparse Subspace Filter (SSF), which can extend any ML-based CS model to enable personalized search in overlapping structures. To overcome the efficiency issue in current models, we introduce Simplified Multi-hop Attention Networks (SMN), a lightweight yet effective community search model with larger receptive fields. To the best of our knowledge, this is the first ML-based study of overlapping community search. Extensive experiments validate the superior performance of SMN within the SSF pipeline, achieving a 13.73% improvement in F1-Score and up to 3 orders of magnitude acceleration in model efficiency compared to state-of-the-art approaches.
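
Larger receptive fields via multi-hop propagation can be sketched as mixing successive applications of a normalized adjacency matrix, so each node aggregates its 1..k-hop neighborhoods in one pass. A minimal numpy sketch (illustrative; SMN's simplified attention weights and the SSF pipeline on top are not reproduced here):

    import numpy as np

    def multi_hop_embed(A, X, hops=3, mix=0.5):
        """A: (n, n) adjacency matrix, X: (n, d) node features.
        Aggregates features over 1..hops neighborhoods, attenuating
        farther hops geometrically."""
        deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
        P = A / deg                        # row-normalized propagation
        out, H = np.zeros_like(X), X
        for k in range(1, hops + 1):
            H = P @ H                      # one more hop of aggregation
            out += (mix ** k) * H
        return out

    A = np.array([[0, 1, 1, 0], [1, 0, 1, 0],
                  [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
    X = np.eye(4)                          # one-hot features for clarity
    print(multi_hop_embed(A, X).round(3))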

Qing Sima, Jianke Yu, Xiaoyang Wang, Wenjie Zhang, Ying Zhang, Xuemin Lin (1/8/2025)