cs.DB
29 posts
arXiv:2412.20177v2 Announce Type: replace Abstract: Discovering co-movement patterns from urban-scale video data sources has emerged as an attractive topic. This task aims to identify groups of objects that travel together along a common route, which offers effective support for government agencies in enhancing smart city management. However, previous work makes a strong assumption about the accuracy of the trajectories recovered from videos, and its co-movement pattern definition requires the group of objects to appear across consecutive cameras along the common route. In practice, this often leads to missing patterns when a vehicle is not correctly identified at a certain camera due to object occlusion or vehicle mis-matching. To address this challenge, we propose a relaxed definition of co-movement patterns from video data, which removes the consecutiveness requirement on the common route and tolerates a certain number of missed cameras for objects within the group. Moreover, a novel enumeration framework called MaxGrowth is developed to efficiently retrieve the relaxed patterns. Unlike previous filter-and-refine frameworks comprising both candidate enumeration and subsequent candidate verification procedures, MaxGrowth incurs no verification cost for candidate patterns. It treats a co-movement pattern as an equivalent sequence of clusters, enumerating candidates with increasing sequence length while avoiding the generation of any false positives. Additionally, we propose two effective pruning rules to efficiently filter non-maximal patterns. Extensive experiments are conducted to validate the efficiency of MaxGrowth and the quality of its generated co-movement patterns. MaxGrowth runs up to two orders of magnitude faster than the baseline algorithm. It also demonstrates high accuracy on a real video dataset when the trajectory recovery algorithm is not perfect.
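To make the relaxed definition concrete, the following minimal Python sketch (our illustration, not the paper's MaxGrowth framework) shows the membership test it implies: a group qualifies on a route even when the route's cameras are not consecutive, as long as each object misses at most a tolerated number of cameras. The names `route`, `detections`, and `max_missing` are assumptions for illustration.

```python
# Illustrative sketch (not the authors' MaxGrowth algorithm): check whether a
# candidate group of objects forms a relaxed co-movement pattern over a route,
# allowing each object to be absent from at most `max_missing` cameras.

def is_relaxed_pattern(group, route, detections, max_missing):
    """group: set of object ids; route: ordered list of camera ids;
    detections: dict camera_id -> set of object ids captured there."""
    for obj in group:
        missed = sum(1 for cam in route if obj not in detections.get(cam, set()))
        if missed > max_missing:
            return False
    return True

# Example: object "v2" is missed by camera "c2" (e.g., due to occlusion),
# but the group still qualifies when one miss is tolerated.
detections = {"c1": {"v1", "v2"}, "c2": {"v1"}, "c3": {"v1", "v2"}}
print(is_relaxed_pattern({"v1", "v2"}, ["c1", "c2", "c3"], detections, max_missing=1))  # True
```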
arXiv:2501.00021v1 Announce Type: new Abstract: Each application developer desires to provide its users with consistent results and an always-available system despite failures. Boldly, the CAP theorem disagrees. It states that it is hard to design a system that is both consistent and available under network partitions; one must select at most two of these three properties. One possible solution is to design coordination-free monotonic applications. However, a majority of real-world applications require coordination. We resolve this dilemma by conjecturing that partial progress is possible under network partitions. This partial progress ensures that the system appears responsive to a subset of clients and achieves non-zero throughput during failures. To this end, we present the design of our CASSANDRA consensus protocol that allows partitioned replicas to order client requests.
arXiv:2501.00252v1 Announce Type: new Abstract: Predicting missing facts for temporal knowledge graphs (TKGs) is a fundamental task, called temporal knowledge graph completion (TKGC). One key challenge in this task is the imbalance in data distribution, where facts are unevenly spread across entities and timestamps. This imbalance can lead to poor completion performance on long-tail entities and timestamps, and to unstable training due to the introduction of false negative samples. Unfortunately, few previous studies have investigated how to mitigate these effects. Moreover, for the first time, we find that existing methods suffer from model preferences, revealing that entities with specific properties (e.g., recently active ones) are favored by different models. Such preferences lead to error accumulation and further exacerbate the effects of imbalanced data distribution, but they have been overlooked by previous studies. To alleviate the impact of imbalanced data and model preferences, we introduce Booster, the first data augmentation strategy for TKGs. The unique requirements here lie in generating new samples that fit the complex semantic and temporal patterns within TKGs, and in identifying hard-to-learn samples specific to each model. Therefore, we propose a hierarchical scoring algorithm based on triadic closures within TKGs. By incorporating both global semantic patterns and local time-aware structures, the algorithm enables pattern-aware validation of new samples. Meanwhile, we propose a two-stage training approach to identify samples that deviate from the model's preferred patterns. With a well-designed frequency-based filtering strategy, this approach also helps to avoid the misleading effect of false negatives. Experiments show that Booster can seamlessly adapt to existing TKGC models and achieve up to an 8.7% performance improvement.
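As a rough illustration of scoring a candidate fact via triadic closures with a time-aware weight, here is a toy sketch under our own assumptions (the paper's hierarchical algorithm is considerably more elaborate; the value function and the decay parameter `tau` are ours).

```python
# Toy sketch (our assumptions, not Booster's exact algorithm): score a candidate
# fact (s, r, o, t) by the triadic closures it would complete, weighting
# closures whose timestamps are close to t more heavily.
from collections import defaultdict

def closure_score(facts, s, o, t, tau=2.0):
    """facts: iterable of (subject, relation, object, timestamp) tuples."""
    neighbors = defaultdict(set)                    # entity -> set of (neighbor, timestamp)
    for h, _, tl, ts in facts:
        neighbors[h].add((tl, ts))
        neighbors[tl].add((h, ts))
    common = {n for n, _ in neighbors[s]} & {n for n, _ in neighbors[o]}
    score = 0.0
    for n, ts in neighbors[s] | neighbors[o]:
        if n in common:                             # edge participating in a closure via n
            score += 1.0 / (1.0 + abs(t - ts) / tau)
    return score

facts = [("alice", "works_at", "acme", 1),
         ("bob", "works_at", "acme", 2),
         ("alice", "knows", "bob", 3)]
print(closure_score(facts, "alice", "bob", t=3))    # closure completed via "acme"
```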
arXiv:2501.00773v1 Announce Type: new Abstract: Graphs are essential data structures for modeling complex interactions in domains such as social networks, molecular structures, and biological systems. Graph-level tasks, which predict properties or classes for an entire graph, are critical for applications such as molecular property prediction and subgraph counting. Graph Neural Networks (GNNs) have shown promise in these tasks, but their evaluations are often limited to narrow datasets and tasks under inconsistent experimental setups, restricting their generalizability. To address these limitations, we propose a unified evaluation framework for graph-level GNNs. This framework provides a standardized setting to evaluate GNNs across diverse datasets, various graph tasks (e.g., graph classification and regression), and challenging scenarios, including noisy, imbalanced, and few-shot graphs. Additionally, we propose a novel GNN model with enhanced expressivity and generalization capabilities. Specifically, we enhance the expressivity of GNNs through a $k$-path rooted subgraph approach, enabling the model to effectively count subgraphs (e.g., paths and cycles). Moreover, we introduce a unified graph contrastive learning algorithm for graphs across diverse domains, which adaptively removes unimportant edges to augment graphs, thereby significantly improving generalization performance. Extensive experiments demonstrate that our model achieves superior performance over fourteen strong baselines across twenty-seven graph datasets, establishing it as a robust and generalizable model for graph-level tasks.
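For intuition about what a $k$-path rooted subgraph exposes to a model, here is a small sketch (plain DFS enumeration under our own assumptions, not the paper's GNN architecture) that lists the simple paths of length up to $k$ rooted at a node, from which path and cycle counts around that node can be read off.

```python
# Illustrative sketch (assumption: plain DFS, not the paper's model): enumerate
# all simple paths of length <= k starting at a root node of an undirected graph.
def k_paths(adj, root, k):
    """adj: dict node -> iterable of neighbours; returns simple paths from root."""
    paths = []
    def dfs(path):
        if len(path) - 1 == k:          # path already uses k edges
            return
        for nxt in adj[path[-1]]:
            if nxt not in path:         # keep the path simple
                paths.append(path + [nxt])
                dfs(path + [nxt])
    dfs([root])
    return paths

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(k_paths(adj, 0, 2))               # paths of length 1 and 2 rooted at node 0
```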
arXiv:2501.01062v1 Announce Type: new Abstract: Recently, consensus protocols based on Directed Acyclic Graphs (DAGs) have gained significant attention due to their potential to build robust blockchain systems, particularly in asynchronous networks. In this paper, we propose Fides, an asynchronous DAG-based BFT consensus protocol that leverages Trusted Execution Environments (TEEs) to tackle three major scalability and security challenges faced by existing protocols: (i) the need for a larger quorum size (i.e., at least 3x larger) to tolerate Byzantine replicas, (ii) high communication costs and reliance on expensive cryptographic primitives (i.e., a global common coin) to reach agreement in asynchronous networks, and (iii) poor censorship resilience undermining the liveness guarantee. Specifically, Fides adopts four trusted components within TEEs: Reliable Broadcast, Vertex Validation, Common Coin, and Transaction Disclosure. Incorporating these components enables Fides to achieve linear message complexity, guaranteed censorship resilience, a 2x larger quorum size, and lightweight common coin usage. Moreover, abstracting these essential components rather than porting the entire protocol into the TEE significantly reduces the Trusted Computing Base (TCB). Experimental evaluations of Fides in local and geo-distributed networks demonstrate its superior performance compared to established state-of-the-art protocols such as Tusk, RCC, HotStuff, and PBFT. The results indicate that Fides achieves a throughput of 400k transactions per second in a geo-distributed network and 810k transactions per second in a local network. Our analysis further explores the protocol's overhead, highlighting its suitability and effectiveness for practical deployment in real-world blockchain systems.
arXiv:2410.02234v2 Announce Type: replace Abstract: Ego-centric queries, focusing on a target vertex and its direct neighbors, are essential for various applications. Enabling such queries on graphs owned by mutually distrustful data providers, without breaching privacy, holds promise for more comprehensive results. In this paper, we propose GORAM, a graph-oriented data structure that enables efficient ego-centric queries on federated graphs with strong privacy guarantees. GORAM is built upon secure multi-party computation (MPC) and ensures that no single party can learn any sensitive information about the graph data or the querying keys during the process. However, achieving practical performance while guaranteeing privacy presents a challenge. To overcome this, GORAM partitions the federated graph and constructs an Oblivious RAM (ORAM)-inspired index atop these partitions. This design enables each ego-centric query to process only a single partition, which can be accessed quickly and securely. To evaluate the performance of GORAM, we developed a prototype querying engine on a real-world MPC framework. We conduct a comprehensive evaluation with five commonly used queries on both synthetic and real-world graphs. Our evaluation shows that all benchmark queries can be completed in 58.1 milliseconds to 35.7 seconds, even on graphs with up to 41.6 million vertices and 1.4 billion edges. To the best of our knowledge, this represents the first instance of processing billion-scale graphs with practical performance on MPC.
arXiv:2412.14832v2 Announce Type: replace Abstract: Federated heavy hitter analytics enables service providers to better understand the preferences of cross-party users by analyzing the most frequent items. As with federated learning, it faces challenges of privacy concerns, statistical heterogeneity, and expensive communication. Local differential privacy (LDP), as the de facto standard for privacy-preserving data collection, solves the privacy challenge by letting each user perturb her data locally and report the sanitized version. However, in federated settings, applying LDP complicates the other two challenges, due to the utility deterioration caused by the injected LDP noise and the increased communication/computation costs of the perturbation mechanism. To tackle these problems, we propose a novel target-aligning prefix tree mechanism satisfying $\epsilon$-LDP for federated heavy hitter analytics. In particular, we propose an adaptive extension strategy to address the inconsistency between covering necessary prefixes and estimating heavy hitters within a party, thereby enhancing utility. We also present a consensus-based pruning strategy that utilizes noisy prior knowledge from other parties to further align the inconsistency between finding heavy hitters in each party and providing reasonable frequency information to identify the global ones. To the best of our knowledge, our study is the first solution to federated heavy hitter analytics in a cross-party setting while satisfying the stringent $\epsilon$-LDP. Comprehensive experiments on both real-world and synthetic datasets confirm the effectiveness of our proposed mechanism.
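For readers unfamiliar with prefix-tree heavy-hitter estimation under LDP, a minimal single-party sketch follows (generalized randomized response applied level by level; the binary item encoding, threshold, and function names are our assumptions, and the paper's target-aligning, multi-party mechanism goes well beyond this).

```python
# Minimal sketch (our assumptions, not the paper's mechanism): level-by-level
# prefix-tree heavy-hitter estimation where each user reports her item's prefix
# under epsilon-LDP generalized randomized response (GRR).
import math, random
from collections import Counter

def grr_report(value, domain, eps):
    """Report `value` truthfully with probability p, else a uniform other value."""
    p = math.exp(eps) / (math.exp(eps) + len(domain) - 1)
    return value if random.random() < p else random.choice([v for v in domain if v != value])

def estimate_counts(reports, domain, eps, n):
    """Debias observed report counts into frequency estimates."""
    p = math.exp(eps) / (math.exp(eps) + len(domain) - 1)
    q = (1 - p) / (len(domain) - 1)
    obs = Counter(reports)
    return {v: (obs[v] - n * q) / (p - q) for v in domain}

def prefix_heavy_hitters(users, eps, depth, threshold):
    """users: binary-string items (assumed encoding); returns surviving prefixes."""
    candidates = [""]
    for level in range(1, depth + 1):
        domain = [c + b for c in candidates for b in "01"]   # extend surviving prefixes
        holders = [u for u in users if u[:level] in domain]
        reports = [grr_report(u[:level], domain, eps) for u in holders]
        est = estimate_counts(reports, domain, eps, len(reports))
        candidates = [v for v, c in est.items() if c >= threshold]
        if not candidates:
            break
    return candidates

random.seed(0)
users = ["0110"] * 400 + ["0111"] * 300 + ["1000"] * 50
print(prefix_heavy_hitters(users, eps=2.0, depth=4, threshold=100))  # typically keeps the frequent prefixes
```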
arXiv:2412.18012v1 Announce Type: new Abstract: Process mining has grown popular given its ability to provide managers with insights into the actual business process as executed by employees. Process mining depends on event logs found in process-aware information systems to model business processes. This has raised the need to develop event log standards, given that event logs are the entry point to any process mining project. One of the main challenges of event logs, and of process mining in general, as noted by the IEEE Task Force on Process Mining, concerns finding, merging, and cleaning event data. This has resulted in multiple event log standards with different features. This paper proposes a new unified standard for event logs that enriches the results of process mining without the need to tailor event logs for each process mining project.
arXiv:2412.18022v1 Announce Type: new Abstract: In the rapidly evolving AI era with large language models (LLMs) at the core, making LLMs more trustworthy and efficient, especially in output generation (inference), has gained significant attention. The goal is to reduce plausible but faulty LLM outputs (a.k.a. hallucinations) and to meet the sharply increased inference demands. This tutorial explores such efforts and makes them transparent to the database community. Understanding these efforts is essential for harnessing LLMs in database tasks and adapting database techniques to LLMs. Furthermore, we delve into the synergy between LLMs and databases, highlighting new opportunities and challenges at their intersection. This tutorial aims to share with database researchers and practitioners essential concepts and strategies around LLMs, reduce unfamiliarity with LLMs, and inspire work at the intersection of LLMs and databases.
arXiv:2412.18143v1 Announce Type: new Abstract: Graphs are the most suitable structures for modeling objects and interactions in applications where component inter-connectivity is a key feature. There has been increased interest in using graphs to represent domains such as social networks, website link structures, and biology. Graph stores recently rose to prominence alongside the NoSQL movement. In this work we focus on NoSQL graph databases, describing the peculiarities that set them apart from other data storage and management solutions, and how they differ among themselves. We also analyze in depth two graph database management systems, AllegroGraph and Neo4j, which use the two most popular graph models adopted by NoSQL stores in practice: the Resource Description Framework (RDF) and the labeled property graph (LPG), respectively.
arXiv:2412.18522v1 Announce Type: new Abstract: Association rules are an important technique for gaining insights over large relational datasets consisting of tuples of elements (i.e., attribute-value pairs). However, it is difficult to explain the relative importance of data elements with respect to the rules in which they appear. This paper develops a measure of an element's contribution to a set of association rules based on Shapley values, denoted SHARQ (ShApley Rules Quantification). As is the case with many Shapley-based computations, the cost of a naive calculation of the score is exponential in the number of elements. To that end, we present an efficient framework for computing the exact SHARQ value of a single element whose running time is practically linear in the number of rules. Going one step further, we develop an efficient multi-element SHARQ algorithm which amortizes the cost of the single-element SHARQ calculation over a set of elements. Based on the definition of SHARQ for elements, we describe two additional use cases for association rule explainability: rule importance and attribute importance. Extensive experiments over a novel benchmark dataset containing 45 instances of mined rule sets show the effectiveness of our approach.
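To fix ideas, the naive Shapley computation of an element's contribution to a rule set is sketched below. It is exponential in the number of elements, unlike the paper's practically linear algorithm, and the value function used here (total confidence of the rules fully contained in a coalition) is our own assumption for illustration.

```python
# Naive illustrative sketch (exponential; not the paper's efficient SHARQ algorithm):
# Shapley value of an element with respect to a set of association rules.
from itertools import combinations
from math import factorial

def value(subset, rules):
    """rules: list of (antecedent_set, consequent_set, confidence)."""
    return sum(conf for ant, cons, conf in rules if (ant | cons) <= subset)

def shapley(element, elements, rules):
    others = [e for e in elements if e != element]
    n = len(elements)
    total = 0.0
    for k in range(len(others) + 1):
        for coalition in combinations(others, k):
            s = set(coalition)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(s | {element}, rules) - value(s, rules))
    return total

rules = [({"age=30s"}, {"buys=bike"}, 0.8),
         ({"age=30s", "city=NY"}, {"buys=bike"}, 0.9)]
elements = {"age=30s", "city=NY", "buys=bike"}
print(shapley("age=30s", elements, rules))   # contribution of the element "age=30s"
```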
arXiv:2412.18535v1 Announce Type: new Abstract: Spatial-temporal data collected across different geographic locations often suffer from missing values, posing challenges to data analysis. Existing methods primarily leverage fixed spatial graphs to impute missing values, which implicitly assumes that the spatial relationship is roughly the same for all features across different locations. However, they may overlook the different spatial relationships of diverse features recorded by sensors in different locations. To address this, we introduce the multi-scale Graph Structure Learning framework for spatial-temporal Imputation (GSLI), which dynamically adapts to heterogeneous spatial correlations. Our framework encompasses node-scale graph structure learning to capture the distinct global spatial correlations of different features, and feature-scale graph structure learning to unveil common spatial correlations across features within all stations. Integrated with prominence modeling, our framework emphasizes nodes and features of greater significance in the imputation process. Furthermore, GSLI incorporates cross-feature and cross-temporal representation learning to capture spatial-temporal dependencies. Evaluated on six real incomplete spatial-temporal datasets, GSLI demonstrates clear improvements in data imputation.
arXiv:2412.17892v1 Announce Type: new Abstract: The integration of LLM-generated feedback into educational settings has shown promise in enhancing student learning outcomes. This paper presents a novel LLM-driven system that provides targeted feedback for conceptual designs in a Database Systems course. The system converts student-created entity-relationship diagrams (ERDs) into JSON format, allows the student to prune the diagram by isolating a relationship, extracts relevant requirements for the selected relationship, and utilizes a large language model (LLM) to generate detailed feedback. Additionally, the system creates a tailored set of questions and answers to further aid student understanding. Our pilot implementation in a Database Systems course demonstrates effective feedback generation that helped students improve their design skills.
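A minimal sketch of the kind of input such a system might assemble is shown below; the JSON layout, field names, and prompt wording are our assumptions, not the paper's implementation.

```python
# Illustrative sketch (assumed data layout, not the paper's system): represent a
# pruned ER-diagram fragment as JSON and assemble an LLM prompt that pairs it
# with the requirement text relevant to the selected relationship.
import json

erd_fragment = {
    "entities": [
        {"name": "Student", "attributes": ["student_id", "name"]},
        {"name": "Course", "attributes": ["course_id", "title"]},
    ],
    "relationship": {"name": "Enrolls", "cardinality": "M:N", "attributes": ["grade"]},
}

requirement = "A student may enroll in many courses; each enrollment records a grade."

prompt = (
    "You are a database instructor. Given the requirement and the student's "
    "ER-diagram fragment (JSON), point out design errors and ask two follow-up "
    "questions.\n\nRequirement: " + requirement + "\n\nERD fragment:\n"
    + json.dumps(erd_fragment, indent=2)
)
print(prompt)   # this text would be sent to the LLM that generates the feedback
```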
arXiv:2412.14519v2 Announce Type: replace Abstract: Within the dynamic world of Big Data, traditional systems typically operate in a passive mode, processing and responding to user queries by returning the requested data. However, this methodology falls short of meeting the evolving demands of users who not only wish to analyze data but also to receive proactive updates on topics of interest. To bridge this gap, Big Active Data (BAD) frameworks have been proposed to support extensive data subscriptions and analytics for millions of subscribers. As data volumes and the number of interested users continue to increase, optimizing BAD systems for scalability, performance, and efficiency becomes paramount. To this end, this paper introduces three main optimizations, namely strategic aggregation, intelligent modifications to the query plan, and early result filtering, all aimed at reinforcing a BAD platform's capability to actively manage and efficiently process soaring rates of incoming data and distribute notifications to larger numbers of subscribers.
arXiv:2409.01388v2 Announce Type: replace Abstract: Serverless query processing has become increasingly popular due to its auto-scaling, high elasticity, and pay-as-you-go pricing. It allows cloud data warehouse (or lakehouse) users to focus on data analysis without the burden of managing systems and resources. Accordingly, in serverless query services, users become more concerned with cost-efficiency under acceptable performance than with performance under fixed resources. This poses new challenges for serverless query engine design in providing flexible performance service-level agreements (SLAs) and cost-efficiency (i.e., prices). In this paper, we first define the problem of flexible performance SLAs and prices in serverless query processing and discuss its significance. Then, we envision the challenges and solutions for solving this problem and the opportunities it raises for other database research. Finally, we present PixelsDB, an open-source prototype with three service levels supported by dedicated architectural designs. Evaluations show that PixelsDB reduces resource costs by 65.5% for near-real-world workloads generated by the Cloud Analytics Benchmark (CAB) while not violating the pending-time guarantees.
arXiv:2412.14146v2 Announce Type: replace Abstract: This paper presents the Advanced Reasoning and Transformation Engine for Multi-Step Insight Synthesis in Data Analytics (ARTEMIS-DA), a novel framework designed to augment Large Language Models (LLMs) for solving complex, multi-step data analytics tasks. ARTEMIS-DA integrates three core components: the Planner, which dissects complex user queries into structured, sequential instructions encompassing data preprocessing, transformation, predictive modeling, and visualization; the Coder, which dynamically generates and executes Python code to implement these instructions; and the Grapher, which interprets generated visualizations to derive actionable insights. By orchestrating the collaboration between these components, ARTEMIS-DA effectively manages sophisticated analytical workflows involving advanced reasoning, multi-step transformations, and synthesis across diverse data modalities. The framework achieves state-of-the-art (SOTA) performance on benchmarks such as WikiTableQuestions and TabFact, demonstrating its ability to tackle intricate analytical tasks with precision and adaptability. By combining the reasoning capabilities of LLMs with automated code generation, execution, and visual analysis, ARTEMIS-DA offers a robust, scalable solution for multi-step insight synthesis, addressing a wide range of challenges in data analytics.
arXiv:2412.16266v1 Announce Type: new Abstract: Time series play a crucial role in many fields, including finance, healthcare, industry, and environmental monitoring. The storage and retrieval of time series can be challenging due to their unstoppable growth. In fact, these applications often sacrifice precious historical data to make room for new data. General-purpose compressors can mitigate this problem with their good compression ratios, but they lack efficient random access on compressed data, thus preventing real-time analyses. Ad-hoc streaming solutions, instead, typically optimise only for compression and decompression speed, while giving up compression effectiveness and random access functionality. Furthermore, all these methods lack awareness of certain special regularities of time series, whose trends over time can often be described by some linear and nonlinear functions. To address these issues, we introduce NeaTS, a randomly-accessible compression scheme that approximates the time series with a sequence of nonlinear functions of different kinds and shapes, carefully selected and placed by a partitioning algorithm to minimise the space. The approximation residuals are bounded, which allows storing them in little space and thus recovering the original data losslessly, or simply discarding them to obtain a lossy time series representation with maximum error guarantees. Our experiments show that NeaTS improves the compression ratio of the state-of-the-art lossy compressors that use linear or nonlinear functions (or both) by up to 14%. Compared to lossless compressors, NeaTS emerges as the only approach to date providing, simultaneously, compression ratios close to or better than the best existing compressors, a much faster decompression speed, and orders of magnitude more efficient random access, thus enabling the storage and real-time analysis of massive and ever-growing amounts of (historical) time series data.
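As a simplified illustration of bounded-error, function-based compression, the sketch below greedily fits linear pieces with a maximum residual bound; NeaTS selects among several nonlinear function families and optimises the partition, so this is only a sketch under our own assumptions (function names and the greedy strategy are ours).

```python
# Simplified sketch (linear pieces only, greedy partitioning; not NeaTS itself):
# cover a time series with segments whose maximum residual is bounded by `eps`,
# storing (start_index, slope, intercept) per segment. Keeping the small bounded
# residuals alongside would make the scheme lossless; dropping them gives a lossy
# representation with a maximum-error guarantee.
import numpy as np

def piecewise_fit(series, eps):
    segments, start = [], 0
    n = len(series)
    while start < n:
        end = start + 1
        best = (start, 0.0, float(series[start]))
        while end <= n:
            xs = np.arange(start, end)
            if end - start > 1:
                slope, intercept = np.polyfit(xs, series[start:end], 1)
            else:
                slope, intercept = 0.0, series[start]
            resid = np.abs(slope * xs + intercept - np.asarray(series[start:end]))
            if np.max(resid) > eps:      # extending the segment would violate the bound
                break
            best = (start, float(slope), float(intercept))
            end += 1
        segments.append(best)
        start = end - 1                  # first index not covered by the accepted segment
    return segments

series = [1.0, 2.1, 2.9, 4.2, 10.0, 10.1, 9.8]
print(piecewise_fit(series, eps=0.5))    # two segments: a rising trend, then a flat one
```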
arXiv:2412.16323v1 Announce Type: new Abstract: As database query processing techniques are being used to handle diverse workloads, a key emerging challenge is how to efficiently handle multi-way join queries containing multiple many-to-many joins. While uncommon in the traditional enterprise settings that have been the focus of much of the query optimization work to date, such queries are seen frequently in other contexts such as graph workloads. This has led to much work on developing join algorithms for handling cyclic queries, on compressed (factorized) representations for more efficient storage of intermediate results, and on the use of semi-joins or predicate transfer to avoid generating large redundant intermediate results. In this paper, we address a core query optimization problem in this context. Specifically, we introduce an improved cost model that more accurately captures the cost of a query plan in such scenarios, and we present several query optimization algorithms that incorporate these new cost functions. We present an extensive experimental evaluation that compares the factorized representation approach with a full semi-join reduction approach as well as with an approach that uses bitvectors to eliminate tuples early through sideways information passing. We also present new analyses of the robustness of these techniques to the choice of join order, potentially eliminating the need for more complex query optimization and selectivity estimation techniques.
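A back-of-envelope example of why a cost model must distinguish flat from factorized intermediate results for many-to-many joins follows; the two size functions are our simplification for illustration, not the paper's cost model.

```python
# Back-of-envelope sketch (assumed size functions, not the paper's cost model):
# compare the materialisation size of a two-way many-to-many join under a flat
# representation versus a factorized (grouped-by-join-key) representation.
from collections import Counter

def flat_size(R, S):
    """R, S: lists of (key, payload) tuples joined on key."""
    cr, cs = Counter(k for k, _ in R), Counter(k for k, _ in S)
    return sum(cr[k] * cs[k] for k in cr)          # every matching pair is materialised

def factorized_size(R, S):
    keys = {k for k, _ in R} & {k for k, _ in S}
    return sum(1 for k, _ in R if k in keys) + sum(1 for k, _ in S if k in keys)

R = [("b1", i) for i in range(1000)]               # 1000 rows sharing one join key
S = [("b1", i) for i in range(1000)]
print(flat_size(R, S), factorized_size(R, S))      # 1000000 vs 2000
```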
arXiv:2412.16766v1 Announce Type: new Abstract: Knowledge graph construction (KGC) from (semi-)structured data is challenging, and facilitating user involvement is an issue frequently brought up within this community. We cannot deny the progress made with respect to (declarative) knowledge generation languages and tools that help build such mappings. However, it is surprising that no two studies report on similar protocols. This heterogeneity does not allow for a comparison of KGC languages, techniques, and tools. This paper first analyses the various KGC studies that involve users in order to identify points of comparison. The gaps we identify include a lack of systematic consistency in task design, participant selection, and evaluation metrics. A systematic way of analyzing the data and reporting the findings is also lacking. We thus propose and introduce a user protocol for KGC designed to address this challenge. Where possible, we draw and take elements from the literature we deem fit for such a protocol. The protocol, as such, allows for the comparison of languages and techniques for the core functionality of the RDF Mapping Language (RML), which is covered by most other state-of-the-art techniques and tools. We also propose how the protocol can be amended to compare extensions (of RML). This protocol provides an important step towards a more comparable evaluation of KGC user studies.
arXiv:2412.16864v1 Announce Type: new Abstract: Row-level lineage explains which input rows produce an output row through a data processing pipeline, with many applications such as data debugging, auditing, and data integration. Prior work on lineage falls into two lines: eager lineage tracking and lazy lineage inference. Eager tracking integrates lineage tracing tightly into the operator implementation, enabling efficient customized tracking. However, this approach is intrusive, system-specific, and lacks adaptability. In contrast, lazy inference generates additional queries to compute lineage; it can be easily applied to any database, but the lineage query is usually slow. Furthermore, both approaches have limited coverage of the types of data processing pipelines supported due to operator-specific tracking or inference rules. In this work, we propose PredTrace, a lineage inference approach that achieves easy adaptation, low runtime overhead, efficient lineage querying, and high pipeline coverage. It achieves this by leveraging predicate pushdown: pushing a row-selection predicate that describes the target output down to the source tables and querying the lineage by running the pushed-down predicate. PredTrace may require saving intermediate results when running the pipeline in order to compute the precise lineage. When this is not viable, it can still infer lineage but may return a superset. Compared to prior work, PredTrace achieves higher coverage on TPC-H queries as well as on 70 sampled real-world data processing pipelines in which UDFs are widely used. It can infer lineage in seconds, outperforming prior lazy approaches by up to 10x.
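A toy sketch of lineage via predicate pushdown follows (the pipeline representation and pushdown rules are our assumptions, not PredTrace's implementation): the predicate describing the target output row is split across the source tables, and running each per-source predicate selects that source's lineage rows.

```python
# Minimal sketch (assumed plan representation, not PredTrace): push a
# row-selection predicate on the output down through scan/filter/project/join
# operators to obtain per-source predicates that select the lineage rows.
# The sketch ignores the filter's own predicate, since output rows already satisfy it.

def push_down(plan, predicate):
    """plan: nested dicts; predicate: dict column -> required value.
    Returns {source_table: predicate} restricted to the columns each source provides."""
    op = plan["op"]
    if op == "scan":
        cols = set(plan["columns"])
        return {plan["table"]: {c: v for c, v in predicate.items() if c in cols}}
    if op in ("filter", "project"):          # predicate passes through unchanged
        return push_down(plan["child"], predicate)
    if op == "join":                         # split the predicate across both inputs
        left = push_down(plan["left"], predicate)
        right = push_down(plan["right"], predicate)
        return {**left, **right}
    raise ValueError(f"unsupported operator: {op}")

plan = {"op": "join",
        "left": {"op": "scan", "table": "orders", "columns": ["o_id", "cust_id"]},
        "right": {"op": "filter", "child": {"op": "scan", "table": "customers",
                                            "columns": ["cust_id", "name"]}}}
print(push_down(plan, {"o_id": 42, "name": "Alice"}))
# {'orders': {'o_id': 42}, 'customers': {'name': 'Alice'}}
```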