cs.SE
arXiv:2501.00674v1 Announce Type: new Abstract: Software applications that run on a blockchain platform are known as DApps. DApps are built using smart contracts, which are immutable after deployment. Just like any real-world software system, DApps need to receive new features and bug fixes over time in order to remain useful and secure. However, Ethereum lacks native solutions for post-deployment smart contract maintenance, requiring developers to devise their own methods. A popular method is known as the upgradeability proxy contract (UPC), which involves implementing the proxy design pattern (as defined by the Gang of Four). In this method, client calls first hit a proxy contract, which then delegates them to a designated implementation contract. Most importantly, the proxy contract can be reconfigured at runtime to delegate calls to another implementation contract, effectively enabling application upgrades. For researchers, accurate detection of UPCs is essential for understanding how real-world DApps are maintained over time. For practitioners, accurate detection of UPCs is crucial for providing application behavior transparency and enabling auditing. In this paper, we introduce UPC Sentinel, a novel three-layer algorithm that utilizes both static and dynamic analysis of smart contract bytecode to accurately detect active UPCs. We evaluated UPC Sentinel using two distinct ground truth datasets. On the first dataset, our method demonstrated a near-perfect accuracy of 99%. The evaluation on the second dataset further established our method's efficacy, showing a perfect precision of 100% and a near-perfect recall of 99.3%, outperforming the state of the art. Finally, we discuss the potential value of UPC Sentinel in advancing future research efforts.
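To illustrate the delegation mechanism the abstract describes, here is a minimal Python analogue of an upgradeability proxy (the on-chain version is written in Solidity and relies on delegatecall; the class and method names below are illustrative assumptions, not part of UPC Sentinel):

```python
# Illustrative Python analogue of an upgradeability proxy: calls hit the
# proxy, which forwards them to whichever implementation is currently
# configured. Swapping the implementation "upgrades" behaviour without
# changing the reference that clients hold.

class TokenV1:
    def transfer(self, to, amount):
        return f"v1: sent {amount} to {to}"

class TokenV2:
    def transfer(self, to, amount):
        # upgraded logic, e.g., with a fee
        return f"v2: sent {amount * 0.99} to {to} (1% fee)"

class UpgradeabilityProxy:
    def __init__(self, implementation):
        self._impl = implementation          # analogous to the implementation slot

    def upgrade_to(self, new_implementation):
        self._impl = new_implementation      # admin-only in a real UPC

    def __getattr__(self, name):
        # delegate every unknown call to the current implementation,
        # mirroring the EVM's delegatecall-based forwarding
        return getattr(self._impl, name)

proxy = UpgradeabilityProxy(TokenV1())
print(proxy.transfer("alice", 100))   # routed to TokenV1
proxy.upgrade_to(TokenV2())
print(proxy.transfer("alice", 100))   # same entry point, new behaviour
```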
arXiv:2501.01224v1 Announce Type: new Abstract: Mission-critical systems, such as satellite systems, healthcare systems, and nuclear power plant control systems, undergo rigorous testing to ensure they meet specific operational requirements throughout their operation. This includes Operational Acceptance Testing (OAT), which aims to ensure that the system functions correctly under real-world operational conditions. In satellite development, In-Orbit Testing (IOT) is a crucial OAT activity performed regularly and as needed after deployment in orbit to check the satellite's performance and ensure that operational requirements are met. The scheduling of an IOT campaign, which executes multiple IOT procedures, is an important yet challenging problem, as it must account for various factors, including satellite visibility, antenna usage costs, testing time periods, and operational constraints. To address the IOT scheduling problem, we propose a multi-objective approach to generate near-optimal IOT schedules, accounting for operational costs, fragmentation (i.e., the splitting of tests), and resource efficiency, which align with practitioners' objectives for IOT scheduling. Our industrial case study with SES Techcom shows significant improvements: an average improvement of 49.4% in the cost objective, 60.4% in the fragmentation objective, and 30% in the resource usage objective, compared to our baselines. Additionally, our approach improves cost efficiency by 538% and resource usage efficiency by 39.42% compared to manually constructed schedules provided by practitioners, while requiring only 12.5% of the time needed for manual IOT scheduling.
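As a rough illustration of the three objectives mentioned above, the sketch below scores a toy IOT schedule for cost, fragmentation, and resource usage; the schedule encoding, procedure names, and values are assumptions for illustration and do not reflect the paper's actual formulation:

```python
# Hypothetical scoring of a candidate IOT schedule against the three
# objectives named in the abstract (cost, fragmentation, resource usage).

from dataclasses import dataclass

@dataclass
class TestSlot:
    procedure: str
    antenna: str
    start: float   # hours
    end: float     # hours
    cost_per_hour: float

def objectives(schedule):
    cost = sum((s.end - s.start) * s.cost_per_hour for s in schedule)
    # fragmentation: how many extra slots each procedure was split into
    procedures = {s.procedure for s in schedule}
    fragmentation = sum(
        sum(1 for s in schedule if s.procedure == p) - 1 for p in procedures
    )
    # resource usage: number of distinct antennas booked
    resource_usage = len({s.antenna for s in schedule})
    return cost, fragmentation, resource_usage

candidate = [
    TestSlot("payload_gain", "ant-1", 0.0, 1.5, 120.0),
    TestSlot("payload_gain", "ant-1", 4.0, 4.5, 120.0),  # split -> fragmentation
    TestSlot("transponder_sweep", "ant-2", 2.0, 3.0, 90.0),
]
print(objectives(candidate))  # (330.0, 1, 2)
```

A multi-objective search would then trade these three scores off against each other rather than collapsing them into a single number.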
arXiv:2501.00372v1 Announce Type: new Abstract: The increasing complexity of 6G systems demands innovative tools for network management, simulation, and optimization. This work introduces the integration of ns-3 with Sionna RT, establishing the foundation for the first open source full-stack Digital Network Twin (DNT) capable of supporting multi-RAT. By incorporating a deterministic ray tracer for precise and site-specific channel modeling, this framework addresses limitations of traditional stochastic models and enables realistic, dynamic, and multilayered wireless network simulations. Tested in a challenging vehicular urban scenario, the proposed solution demonstrates significant improvements in accurately modeling wireless channels and their cascading effects on higher network layers. With up to 65% observed differences in application-layer performance compared to stochastic models, this work highlights the transformative potential of ray-traced simulations for 6G research, training, and network management.
arXiv:2501.00655v1 Announce Type: new Abstract: Compilers are complex, and significant effort has been expended on testing them. Techniques such as random program generation and differential testing have proved highly effective and have uncovered thousands of bugs in production compilers. The majority of effort has been expended on validating that a compiler produces correct code for a given input, while less attention has been paid to ensuring that the compiler produces performant code. In this work, we adapt differential testing to the task of identifying missed optimization opportunities in compilers. We develop a novel testing approach which combines large language models (LLMs) with a series of differential testing strategies and use them to find missing code size optimizations in C/C++ compilers. The advantage of our approach is its simplicity. We offload the complex task of generating random code to an off-the-shelf LLM, and use heuristics and analyses to identify anomalous compiler behavior. Our approach requires fewer than 150 lines of code to implement. This simplicity makes it extensible. By simply changing the target compiler and initial LLM prompt, we port the approach from C/C++ to Rust and Swift, finding bugs in both. To date we have reported 24 confirmed bugs in production compilers, and conclude that LLM-assisted testing is a promising avenue for detecting optimization bugs in real-world compilers.
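A minimal sketch of the code-size differential idea follows: compile the same (LLM-generated) C source with two compilers at -Os and flag large size gaps as potential missed optimizations. The compiler choice, the object-file-size metric, and the 20% anomaly threshold are assumptions; the paper's heuristics and prompting strategy are more elaborate:

```python
# Compile one C file with two compilers at -Os and compare output sizes.
# Large divergence suggests one compiler may be missing a size optimization.

import os
import subprocess
import tempfile

C_SOURCE = """
int sum_upto(int n) {
    int s = 0;
    for (int i = 0; i <= n; ++i) s += i;   /* could fold to n*(n+1)/2 */
    return s;
}
"""

def object_size(compiler, src_path):
    obj = src_path + "." + compiler + ".o"
    subprocess.run([compiler, "-Os", "-c", src_path, "-o", obj], check=True)
    return os.path.getsize(obj)

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "candidate.c")
    with open(src, "w") as f:
        f.write(C_SOURCE)
    sizes = {cc: object_size(cc, src) for cc in ("gcc", "clang")}
    smallest = min(sizes.values())
    for cc, size in sizes.items():
        if size > 1.2 * smallest:   # anomaly threshold (assumed)
            print(f"{cc} output is {size} bytes vs best {smallest}: "
                  f"possible missed code-size optimization")
```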
arXiv:2501.01054v1 Announce Type: new Abstract: Current large language models (LLMs) often struggle to produce accurate responses on the first attempt for complex reasoning tasks like code generation. Prior research tackles this challenge by generating multiple candidate solutions and validating them with LLM-generated unit tests. The execution results of unit tests serve as reward signals to identify correct solutions. Because LLMs often make mistakes with confidence, these unit tests are not fully reliable, which diminishes the quality of the reward signals. Motivated by the observation that scaling the number of solutions improves LLM performance, we explore the impact of scaling unit tests to enhance reward signal quality. Our preliminary experiment reveals a positive correlation between the number of unit tests and reward signal quality, with greater benefits observed in more challenging problems. Based on these insights, we propose CodeRM-8B, a lightweight yet effective unit test generator that enables efficient and high-quality unit test scaling. Additionally, we implement a dynamic scaling mechanism that adapts the number of unit tests based on problem difficulty, further improving efficiency. Experimental results show that our approach significantly improves performance across various models on three benchmarks (e.g., with gains of 18.43% for Llama3-8B and 3.42% for GPT-4o-mini on HumanEval Plus).
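The sketch below illustrates how executing generated unit tests can act as a reward signal for ranking candidate solutions; the candidates and tests are hard-coded stand-ins for LLM outputs, and the real CodeRM-8B pipeline is considerably more involved:

```python
# Score each candidate solution by the fraction of generated unit tests it
# passes, then pick the highest-scoring one.

candidates = {
    "sol_a": "def add(a, b):\n    return a + b\n",
    "sol_b": "def add(a, b):\n    return a - b\n",   # buggy candidate
}

# Each generated unit test is a small assertion over the candidate's namespace.
unit_tests = [
    "assert add(1, 2) == 3",
    "assert add(0, 0) == 0",
    "assert add(-1, 1) == 0",
]

def reward(solution_src, tests):
    passed = 0
    for test in tests:
        ns = {}
        try:
            exec(solution_src, ns)   # define the candidate function
            exec(test, ns)           # run one generated unit test
            passed += 1
        except Exception:
            pass
    return passed / len(tests)

scores = {name: reward(src, unit_tests) for name, src in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)   # sol_a wins with all tests passing
```

Scaling the number of such tests makes the reward less sensitive to any single faulty test, which is the effect the abstract exploits.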
arXiv:2501.01131v1 Announce Type: new Abstract: Privacy regulations mandate that developers must provide authentic and comprehensive privacy notices, e.g., privacy policies or labels, to inform users of their apps' privacy practices. However, due to a lack of knowledge of privacy requirements, developers often struggle to create accurate privacy notices, especially for sophisticated mobile apps with complex features developed by large teams. To address these challenges, we introduce Privacy Bills of Materials (PriBOM), a systematic software engineering approach that leverages different development team roles to better capture and coordinate mobile app privacy information. PriBOM facilitates transparency-centric privacy documentation and specific privacy notice creation, enabling traceability and trackability of privacy practices. We present a pre-fill mechanism for PriBOM based on static analysis and privacy notice analysis techniques. We demonstrate the perceived usefulness of PriBOM through a human evaluation with 150 diverse participants. Our findings suggest that PriBOM could serve as a significant solution for providing privacy support in DevOps for mobile apps.
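As a hypothetical illustration of what a single PriBOM entry might record, the sketch below defines a small data structure linking a component to its data practices and the corresponding privacy-notice section; the field names are assumptions, since the actual PriBOM table design is defined in the paper:

```python
# Hypothetical PriBOM entry: one row tying a component to the personal data
# it handles, the purpose, the responsible role, and where it is disclosed.

from dataclasses import dataclass, field

@dataclass
class PriBOMEntry:
    component: str                                    # SDK, library, or app module
    data_types: list = field(default_factory=list)    # personal data collected
    purpose: str = ""                                 # why the data is processed
    responsible_role: str = ""                        # which team role owns this entry
    privacy_notice_section: str = ""                  # where the practice is disclosed

entry = PriBOMEntry(
    component="analytics-sdk",
    data_types=["device identifier", "coarse location"],
    purpose="usage analytics",
    responsible_role="mobile developer",
    privacy_notice_section="Data We Collect > Analytics",
)
print(entry)
```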
arXiv:2501.00298v1 Announce Type: new Abstract: Supervised machine learning techniques have shown promising results in code analysis and optimization problems. However, a learning-based solution can be brittle because minor changes in hardware or application workloads -- such as facing a new CPU architecture or code pattern -- may jeopardize decision accuracy, ultimately undermining model robustness. We introduce Prom, an open-source library to enhance the robustness and performance of predictive models against such changes during deployment. Prom achieves this by using statistical assessments to identify test samples prone to mispredictions and using feedback on these samples to improve a deployed model. We showcase Prom by applying it to 13 representative machine learning models across 5 code analysis and optimization tasks. Our extensive evaluation demonstrates that Prom can successfully identify an average of 96% (up to 100%) of mispredictions. By relabeling up to 5% of the Prom-identified samples through incremental learning, Prom can help a deployed model achieve a performance comparable to that attained during its model training phase.
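A minimal sketch of the deployment-time loop described above follows: flag test samples whose predictions look unreliable, obtain labels for a small fraction of them, and update the model incrementally. The confidence threshold and the SGD classifier are stand-ins and do not reflect Prom's actual statistical assessment:

```python
# Flag low-confidence predictions during deployment, relabel a small budget
# of them, and apply incremental learning to the deployed model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]
X_deploy, y_deploy = X[1000:], y[1000:]

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_train, y_train, classes=np.unique(y))

proba = model.predict_proba(X_deploy).max(axis=1)
suspect = proba < 0.6                       # low confidence -> likely misprediction
budget = suspect.sum() * 5 // 100 or 1      # relabel ~5% of the flagged samples
relabel_idx = np.where(suspect)[0][:budget]

if len(relabel_idx):
    # incremental learning on the newly labelled samples
    model.partial_fit(X_deploy[relabel_idx], y_deploy[relabel_idx])
print(f"flagged {suspect.sum()} samples, relabelled {len(relabel_idx)}")
```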
arXiv:2501.00363v1 Announce Type: new Abstract: Privacy computing is receiving increasing attention, but writing privacy computing code remains challenging for developers: library functions are limited, necessitating extensive implementation from scratch, and the data-oblivious requirement contradicts programmers' intuition and usual practices. Large language models (LLMs) have demonstrated surprising capabilities in coding tasks and achieved state-of-the-art performance across many benchmarks. However, even with extensive prompting, existing LLMs struggle with code translation tasks for privacy computing, such as translating Python to MP-SPDZ, due to the scarcity of MP-SPDZ data required for effective pre-training or fine-tuning. To address this limitation, this paper proposes SPDZCoder, a rule-based framework that teaches LLMs to synthesize privacy computing code without requiring experts to write large amounts of code, by leveraging the instruction-following and in-context learning abilities of LLMs. Specifically, SPDZCoder decouples the translation task into a refactoring stage and a generation stage, which mitigates differences in semantics and expressiveness at different levels. In addition, SPDZCoder can further improve its performance through a feedback stage. SPDZCoder does not require fine-tuning, since it adopts an in-context learning paradigm. To evaluate SPDZCoder, we manually created a benchmark dataset, named SPDZEval, containing six classes of tasks that are difficult to implement in MP-SPDZ. Experiments on SPDZEval show that SPDZCoder achieves state-of-the-art performance in pass@1 and pass@2 across six data splits. Specifically, SPDZCoder achieves an overall correctness of 85.94% and 92.01% in pass@1 and pass@2, respectively, surpassing the baselines (at most 30.35% and 49.84% in pass@1 and pass@2, respectively) by a large margin.
arXiv:2501.00532v1 Announce Type: new Abstract: The emergence of machine learning (ML) has led to a transformative shift in software techniques and guidelines for building software applications that support data analysis process activities such as data ingestion, modeling, and deployment. Specifically, this shift is impacting ML model selection, which is one of the key phases in this process. There have been several advances in model selection from the standpoint of core ML methods, including basic probability measures and resampling methods. However, from a software engineering perspective, this selection remains an ad hoc and informal process: it is not supported by a design approach or representation formalism that explicitly captures the selection process, and existing model selection procedures cannot be specified in such terms. The selection must adapt to a variety of contextual factors, such as data characteristics, the number of features, the prediction type, and their intricate dependencies. Further, current practice provides no explanation for why a model is selected and does not consider these contextual factors and their interdependencies. Although the current literature provides a wide variety of ML techniques and algorithms, there is a lack of design approaches to support algorithm selection. In this paper, we present a variability-aware ML algorithm selection approach that considers the commonalities and variations in the model selection process. The approach's applicability is illustrated by an experimental case study based on the Scikit-Learn heuristics, in which existing model selections presented in the literature are compared with selections suggested by the approach. The proposed approach can be seen as a step towards providing a more explicit, adaptive, transparent, interpretable, and automated basis for model selection.
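For a concrete flavor of rule-based selection in the spirit of the Scikit-Learn heuristics used in the case study, the toy selector below maps a few contextual factors to estimators; the factors and thresholds are simplified assumptions, intended only to show how selection rules can be made explicit rather than ad hoc:

```python
# Toy rule-based estimator selection driven by contextual factors
# (prediction type, label availability, dataset size).

from sklearn.linear_model import SGDClassifier, Lasso
from sklearn.svm import LinearSVC
from sklearn.cluster import KMeans

def select_estimator(n_samples, prediction_type, labeled=True):
    if prediction_type == "category":
        if not labeled:
            return KMeans()                      # no labels -> clustering
        if n_samples < 100_000:
            return LinearSVC()                   # small/medium labelled data
        return SGDClassifier()                   # large-scale classification
    if prediction_type == "quantity":
        return Lasso()                           # sparse linear regression
    raise ValueError(f"unsupported prediction type: {prediction_type}")

print(select_estimator(10_000, "category"))          # LinearSVC()
print(select_estimator(1_000_000, "category"))       # SGDClassifier()
print(select_estimator(5_000, "quantity"))           # Lasso()
```

A variability-aware approach would capture such rules, their variation points, and the dependencies among factors explicitly instead of leaving them implicit in developers' heads.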
arXiv:2501.00539v1 Announce Type: new Abstract: While Large Language Models (LLMs) perform exceptionally well at natural language tasks, they often struggle with precise formal reasoning and the rigorous specification of problems. We present MCP-Solver, a prototype implementation of the Model Context Protocol that demonstrates the potential for systematic integration between LLMs and constraint programming systems. Our implementation provides interfaces for the creation, editing, and validation of a constraint model. Through an item-based editing approach with integrated validation, the system ensures model consistency at every modification step and enables structured iterative refinement. The system handles concurrent solving sessions and maintains a persistent knowledge base of modeling insights. Initial experiments suggest that this integration can effectively combine LLMs' natural language understanding with constraint-solving capabilities. Our open-source implementation is a proof of concept for integrating formal reasoning systems with LLMs through standardized protocols. While further research is needed to establish comprehensive formal guarantees, this work takes a first step toward principled integration of natural language processing with constraint-based reasoning.
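The sketch below illustrates item-based model editing with per-edit validation, using the python-constraint package as a stand-in solver; the item representation and rollback policy are assumptions for illustration and do not mirror MCP-Solver's actual interfaces:

```python
# Each edit (add a variable or a constraint) is applied as one item and the
# whole model is re-validated; inconsistent edits are rolled back.

from constraint import Problem  # assumes the python-constraint package

class ConstraintModel:
    def __init__(self):
        self.variables = {}     # name -> domain
        self.constraints = []   # (callable, variable names)

    def edit(self, *, add_variable=None, add_constraint=None):
        """Apply one edit item, validate, and roll back if it breaks the model."""
        snapshot = (dict(self.variables), list(self.constraints))
        if add_variable:
            name, domain = add_variable
            self.variables[name] = domain
        if add_constraint:
            self.constraints.append(add_constraint)
        if self.validate():
            return True
        self.variables, self.constraints = snapshot   # reject inconsistent edit
        return False

    def validate(self):
        problem = Problem()
        for name, domain in self.variables.items():
            problem.addVariable(name, domain)
        for func, names in self.constraints:
            problem.addConstraint(func, names)
        return problem.getSolution() is not None

model = ConstraintModel()
model.edit(add_variable=("x", list(range(1, 10))))
model.edit(add_variable=("y", list(range(1, 10))))
print(model.edit(add_constraint=(lambda x, y: x + y == 7, ["x", "y"])))   # True
print(model.edit(add_constraint=(lambda x, y: x + y == 25, ["x", "y"])))  # False, rolled back
```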
arXiv:2501.00840v1 Announce Type: new Abstract: Modern configurable systems provide tremendous opportunities for engineering future intelligent software systems. A key difficulty thereof is how to effectively self-adapt the configuration of a running system such that its performance (e.g., runtime and throughput) can be optimized under time-varying workloads. This unfortunately remains unaddressed in existing approaches, as they either overlook the available past knowledge or rely on static exploitation of past knowledge without reasoning about the usefulness of that information when planning for self-adaptation. In this paper, we tackle this challenging problem by proposing DLiSA, a framework that self-adapts configurable systems. DLiSA comes with two properties: firstly, it supports lifelong planning, whereby the planning process runs continuously throughout the lifetime of the system, allowing dynamic exploitation of the accumulated knowledge for rapid adaptation. Secondly, the planning for a newly emerged workload is boosted via distilled knowledge seeding, in which the knowledge is dynamically purified such that only useful past configurations are seeded when necessary, mitigating misleading information. Extensive experiments suggest that the proposed DLiSA significantly outperforms state-of-the-art approaches, demonstrating a performance improvement of up to 229% and a resource acceleration of up to 2.22x in generating promising adaptation configurations. All data and sources can be found at our repository: https://github.com/ideas-labo/dlisa.
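The sketch below illustrates the knowledge-seeding idea in a much-simplified form: keep an archive of configurations measured under past workloads and, when a new workload arrives, seed the search only with configurations that performed well under similar workloads. The similarity measure and purification rule are assumptions, not DLiSA's actual mechanism:

```python
# Seed the configuration search for a new workload with the best past
# configurations from similar workloads, instead of starting from scratch.

archive = [
    # (workload descriptor, configuration, measured throughput)
    ({"req_rate": 100, "read_ratio": 0.9}, {"cache_mb": 512, "threads": 4}, 950),
    ({"req_rate": 100, "read_ratio": 0.9}, {"cache_mb": 128, "threads": 2}, 400),
    ({"req_rate": 800, "read_ratio": 0.2}, {"cache_mb": 64,  "threads": 16}, 700),
]

def similarity(w1, w2):
    return -sum(abs(w1[k] - w2[k]) for k in w1)      # higher is more similar

def seed_configurations(new_workload, k=2):
    # purify: keep only the best-performing configuration per similar workload
    ranked = sorted(archive,
                    key=lambda e: (similarity(e[0], new_workload), e[2]),
                    reverse=True)
    seeds, seen = [], set()
    for workload, config, _perf in ranked:
        key = tuple(sorted(workload.items()))
        if key not in seen:
            seeds.append(config)
            seen.add(key)
        if len(seeds) == k:
            break
    return seeds

new_workload = {"req_rate": 120, "read_ratio": 0.85}
print("seeding search with:", seed_configurations(new_workload))
# these seeds would initialise e.g. a genetic search instead of random configurations
```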
arXiv:2501.00965v1 Announce Type: new Abstract: The proxy pattern is a well-known design pattern with numerous use cases in several sectors of the software industry. As such, the use of the proxy pattern is also a common approach in the development of complex decentralized applications (DApps) on the Ethereum blockchain. Despite the importance of proxy contracts, little is known about (i) how their prevalence changed over time, (ii) the ways in which developers integrate proxies in the design of DApps, and (iii) what proxy types are being most commonly leveraged by developers. This study bridges these gaps through a comprehensive analysis of Ethereum smart contracts, utilizing a dataset of 50 million contracts and 1.6 billion transactions as of September 2022. Our findings reveal that 14.2% of all deployed smart contracts are proxy contracts. We show that proxy contracts are more actively used than non-proxy contracts. Also, the usage of proxy contracts in various contexts, transactions involving proxy contracts, and adoption of proxy contracts by users have shown an upward trend over time, peaking at the end of our study period. Proxies are either deployed through off-chain scripts or on-chain factory contracts, with the former and the latter employed in 39.1% and 60.9% of identified usage contexts, respectively. We found that while the majority (67.8%) of proxies act as an interceptor, 32.2% enable upgradeability. Proxy contracts are typically (79%) implemented based on known reference implementations, with 29.4% being of type ERC-1167, a class of proxies that aims to cheaply reuse and clone contract functionality. Our evaluation shows that our proposed behavioral proxy detection method has a precision and recall of 100% in detecting active proxies. Finally, we derive a set of practical recommendations for developers and introduce open research questions to guide future research on the topic.
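For one concrete proxy class mentioned above, ERC-1167 minimal proxies ("clones") can be recognized statically from their canonical 45-byte runtime bytecode, as the sketch below shows; the paper's behavioral detection method is transaction-based and far more general:

```python
# Static check for the canonical ERC-1167 minimal proxy runtime bytecode:
# a fixed 10-byte prefix, the 20-byte implementation address, and a fixed
# 15-byte suffix (45 bytes in total).

ERC1167_PREFIX = "363d3d373d3d3d363d73"                     # 10 bytes
ERC1167_SUFFIX = "5af43d82803e903d91602b57fd5bf3"           # 15 bytes

def erc1167_implementation(runtime_bytecode_hex: str):
    """Return the implementation address if the code is an ERC-1167 clone."""
    code = runtime_bytecode_hex.lower().removeprefix("0x")
    if (len(code) == 90                     # 45 bytes in hex
            and code.startswith(ERC1167_PREFIX)
            and code.endswith(ERC1167_SUFFIX)):
        return "0x" + code[len(ERC1167_PREFIX):len(ERC1167_PREFIX) + 40]
    return None

clone = ("0x363d3d373d3d3d363d73"
         "bebebebebebebebebebebebebebebebebebebebe"
         "5af43d82803e903d91602b57fd5bf3")
print(erc1167_implementation(clone))           # 0xbebe...bebe
print(erc1167_implementation("0x6080604052"))  # None: not a minimal proxy
```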
arXiv:2501.01068v1 Announce Type: new Abstract: Self-Admitted Technical Debt, or SATD, is a self-admission of technical debt present in a software system. To effectively manage SATD, developers need to estimate its priority and assess the effort required to fix the described technical debt. About a quarter of descriptions of SATD in software systems express some form of negativity or negative emotions when describing technical debt. In this paper, we report on an experiment conducted with 59 respondents to study whether negativity expressed in the description of SATD actually affects the prioritization of SATD. The respondents are a mix of professional developers and students, and in the experiment we asked participants to prioritize four vignettes: two expressing negativity and two expressing neutral sentiment. To ensure realism, vignettes were based on existing SATD. We find that negativity causes between one-third and one-half of developers to assign higher priority to SATD in which negativity is expressed. Developers affected by negativity when prioritizing SATD are twice as likely to increase their estimation of urgency and 1.5 times as likely to increase their estimation of importance and effort for SATD, compared to the likelihood of decreasing these prioritization scores. Our findings show how developers actively use negativity in SATD to determine how urgently a particular instance of technical debt should be addressed. However, our study also reveals a gap between the actions and beliefs of developers: even though 33% to 50% use negativity to prioritize SATD, 67% of developers believe that using negativity as a proxy for priority is unacceptable. Therefore, we would not recommend using negativity as a proxy for priority. However, we also recognize that developers might unavoidably express negativity when describing technical debt.
arXiv:2501.00106v1 Announce Type: new Abstract: Dataset license compliance is a critical yet complex aspect of developing commercial AI products, particularly with the increasing use of publicly available datasets. Ambiguities in dataset licenses pose significant legal risks, making it challenging even for software IP lawyers to accurately interpret rights and obligations. In this paper, we introduce LicenseGPT, a fine-tuned foundation model (FM) specifically designed for dataset license compliance analysis. We first evaluate existing legal FMs (i.e., FMs specialized in understanding and processing legal texts) and find that the best-performing model achieves a Prediction Agreement (PA) of only 43.75%. LicenseGPT, fine-tuned on a curated dataset of 500 licenses annotated by legal experts, significantly improves PA to 64.30%, outperforming both legal and general-purpose FMs. Through an A/B test and user study with software IP lawyers, we demonstrate that LicenseGPT reduces analysis time by 94.44%, from 108 seconds to 6 seconds per license, without compromising accuracy. Software IP lawyers perceive LicenseGPT as a valuable supplementary tool that enhances efficiency while acknowledging the need for human oversight in complex cases. Our work underscores the potential of specialized AI tools in legal practice and offers a publicly available resource for practitioners and researchers.
arXiv:2501.00125v1 Announce Type: new Abstract: When SE data is scarce, "active learners" use models learned from tiny samples of the data to find the next most informative example to label. In this way, effective models can be generated using very little data. For multi-objective software engineering (SE) tasks, active learning can benefit from an effective set of initial guesses (also known as "warm starts"). This paper explores the use of Large Language Models (LLMs) for creating warm-starts. Those results are compared against Gaussian Process Models and Tree of Parzen Estimators. For 49 SE tasks, LLM-generated warm starts significantly improved the performance of low- and medium-dimensional tasks. However, LLM effectiveness diminishes in high-dimensional problems, where Bayesian methods like Gaussian Process Models perform best.
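The sketch below shows the warm-start idea in miniature: a few LLM-suggested configurations are evaluated before the optimizer's own sampling takes over. The llm_suggest function is a placeholder for an actual LLM call, and the objective is a toy stand-in for an expensive SE task:

```python
# Evaluate LLM-proposed warm starts first, then let the optimiser continue
# from those labelled points instead of starting from random samples.

import random

def objective(config):
    # stand-in for an expensive SE objective (e.g., build time to minimise)
    return (config["threads"] - 8) ** 2 + (config["cache_mb"] - 256) ** 2 / 1000

def llm_suggest(task_description, n=3):
    # placeholder: a real system would prompt an LLM with the task description
    return [
        {"threads": 8, "cache_mb": 256},
        {"threads": 4, "cache_mb": 512},
        {"threads": 16, "cache_mb": 128},
    ][:n]

def random_config():
    return {"threads": random.choice([1, 2, 4, 8, 16, 32]),
            "cache_mb": random.choice([64, 128, 256, 512, 1024])}

warm_starts = llm_suggest("tune build server for fastest builds")
evaluated = [(objective(c), c) for c in warm_starts]

# A Bayesian optimiser (e.g., a Gaussian Process model) would continue from
# these labelled points; two random follow-up samples stand in for that here.
for _ in range(2):
    c = random_config()
    evaluated.append((objective(c), c))

best_score, best_config = min(evaluated, key=lambda t: t[0])
print(best_score, best_config)
```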
arXiv:2501.00169v1 Announce Type: new Abstract: Deep Learning experiments have critical requirements regarding the careful handling of their datasets as well as the efficient and correct usage of APIs that interact with hardware accelerators. On the one hand, software mistakes during data handling can contaminate experiments and lead to incorrect results. On the other hand, poorly coded APIs that interact with the hardware can lead to sub-optimal usage and untrustworthy conclusions. In this work we investigate the use of Linear Logic for the analysis of Deep Learning experiments. We show that primitives and operators of Linear Logic can be used to express: (i) an abstract representation of the control flow of an experiment, (ii) a set of available experimental resources, such as API calls to the underlying data-structures and hardware as well as (iii) reasoning rules about the correct consumption of resources during experiments. Our proposed model is not only lightweight but also easy to comprehend, having both a symbolic and a visual component. Finally, its artifacts are themselves proofs in Linear Logic that can be readily verified by off-the-shelf reasoners.
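As a hedged illustration (not taken from the paper), linear implication and multiplicative conjunction can express one-shot consumption of experimental resources, for example a dataset handle that must be split exactly once:

```latex
% Illustrative only: the linear implication ($\multimap$) and multiplicative
% conjunction ($\otimes$) model resources that are consumed exactly once.
\[
  \mathit{raw\_file} \multimap \mathit{dataset}, \qquad
  \mathit{dataset} \multimap (\mathit{train\_split} \otimes \mathit{test\_split})
\]
% Because $\mathit{dataset}$ is a linear resource, a derivation that consumes
% it twice (e.g. leaking test data into training) is simply not provable.
% Composition of steps is captured by the standard cut rule:
\[
  \frac{\Gamma \vdash \mathit{dataset} \qquad
        \Delta, \mathit{dataset} \vdash \mathit{model}}
       {\Gamma, \Delta \vdash \mathit{model}}\ (\text{cut})
\]
```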
arXiv:2501.00217v1 Announce Type: new Abstract: Delivering high-quality software is essential in software engineering and requires robust validation and verification processes during testing activities. Manual testing, while effective, can be time-consuming and costly, leading to an increased demand for automated methods. Recent advancements in Large Language Models (LLMs) have significantly influenced software engineering, particularly in areas like requirements analysis, test automation, and debugging. This paper explores an agent-oriented approach to automated software testing, using LLMs to reduce human intervention and enhance testing efficiency. The proposed framework integrates LLMs to generate unit tests, visualize call graphs, and automate test execution and reporting. Evaluations across multiple applications in Python and Java demonstrate the system's high test coverage and efficient operation. This research underscores the potential of LLM-powered agents to streamline software testing workflows while addressing challenges in scalability and accuracy.
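A minimal sketch of one agent step in such a workflow follows: obtain unit tests for a target module (the LLM call is a placeholder), execute them with pytest, and report the outcome; coverage measurement and call-graph visualization from the paper are omitted here:

```python
# One agent iteration: generate tests for a module, run them, report verdict.

import subprocess
import sys
from pathlib import Path

def llm_generate_tests(module_source: str) -> str:
    # placeholder for an LLM call; returns hard-coded tests for illustration
    return (
        "from calculator import add\n\n"
        "def test_add_positive():\n    assert add(2, 3) == 5\n\n"
        "def test_add_zero():\n    assert add(0, 0) == 0\n"
    )

workdir = Path("agent_workspace")
workdir.mkdir(exist_ok=True)
(workdir / "calculator.py").write_text("def add(a, b):\n    return a + b\n")
(workdir / "test_calculator.py").write_text(
    llm_generate_tests((workdir / "calculator.py").read_text())
)

result = subprocess.run(
    [sys.executable, "-m", "pytest", "-q", str(workdir)],
    capture_output=True, text=True,
)
print(result.stdout)
print("agent verdict:", "tests pass" if result.returncode == 0 else "needs repair")
```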
arXiv:2501.00276v1 Announce Type: new Abstract: This paper is a sequel to an evolving research project on a diagrammatic methodology called thinging machine (TM). Initially, it was proposed as a base for conceptual modelling (e.g., conceptual UML) in areas such as requirement engineering. Conceptual modelling involves a high-level representation of a real-world system that integrates various components to refine it into a more concrete (computer) executable form. The TM project has progressed into a more comprehensive approach by applying it in several research areas and expanding its theoretical and ontological foundation. Accordingly, the first part of the paper involves enhancing some TM aspects related to structuring events in existence, such as absent events. The second part of the paper focuses on how to classify events and the kinds of relationships that can be recognized among events. The notion of events has occupied a central role in modelling. It influences computer science and such diverse disciplines as linguistics, probability theory, artificial intelligence, physics, philosophy and history. In TM, an event is defined as the so-called thimac (thing/machine) with a time breath that infuses dynamism into the static description of the thimac called a region. A region is a diagrammatic specification based on five generic actions: create, process, release, transfer and receive. The results of this research provide (a) an enrichment of conceptual modelling, especially concerning varieties of existence, e.g., absent events of negative propositions, and (b) a proposal that instead of semantic categorizations of events, it is possible to develop a new type of classification based on graphs grounded on the TM model diagrams.
arXiv:2501.00279v1 Announce Type: new Abstract: BLAS is a fundamental building block of advanced linear algebra libraries and many modern scientific computing applications. GPUs are known for their strong arithmetic computing capabilities and are highly suited for BLAS operations. However, porting code to GPUs often requires significant effort, especially for large, complex codes or legacy codes, even for BLAS-heavy applications. While various tools exist to automatically offload BLAS to GPUs, they are often impractical due to the high costs associated with mandatory data transfers. The advent of unified memory architectures in recent GPU designs, such as the NVIDIA Grace-Hopper, allows cache-coherent memory access across all types of memory for both CPU and GPU, potentially eliminating the bottlenecks faced in conventional architectures. This breakthrough paves the way for innovative application developments and porting strategies. Building on our preliminary work demonstrating the potential of automatic *gemm offload, this paper extends the framework to all level-3 BLAS operations and introduces SCILIB-Accel, a novel tool for automatic BLAS offload. SCILIB-Accel leverages the memory coherency in Grace-Hopper and introduces a Device First-Use data movement policy inspired by the OpenMP First-Touch approach in multi-socket CPU programming, minimizing CPU-GPU data transfers for typical scientific computing codes. Additionally, utilizing dynamic binary instrumentation, the tool intercepts BLAS symbols directly from a CPU binary, requiring no code modifications or recompilation. SCILIB-Accel has been evaluated using multiple quantum physics codes on up to a few hundred GPU nodes, yielding promising speedups. Notably, for the LSMS method in the MuST suite, a 3x speedup was achieved on Grace-Hopper compared to Grace-Grace.
arXiv:2501.01236v1 Announce Type: new Abstract: In recent years, a plethora of database management systems have surfaced to meet the demands of various scenarios. Emerging database systems, such as time-series and streaming database systems, are tailored to specific use cases requiring enhanced functionality and performance. However, as they are typically less mature, they can contain bugs that either cause incorrect results or errors impacting reliability. To tackle this, we propose enhanced differential testing to uncover various bugs in emerging SQL-like database systems. The challenge lies in handling the differences among these emerging database systems. Our insight is that many emerging database systems are conceptually extensions of relational database systems, making it possible to reveal logic bugs by leveraging existing, known-reliable relational database systems. However, due to inevitable syntax or semantics gaps, it remains challenging to scale differential testing to various emerging database systems. We enhance differential testing for emerging database systems with three steps: (i) identifying shared clauses; (ii) extending shared clauses by mapping new features back to existing clauses of relational database systems; and (iii) generating differential inputs using the extended shared clauses. We implemented our approach in a tool called SQLxDiff and applied it to four popular emerging database systems. In total, we found 57 unknown bugs, of which 17 were logic bugs and 40 were internal errors. Overall, vendors fixed 50 bugs and confirmed 5. Our results demonstrate the practicality and effectiveness of SQLxDiff in detecting bugs in emerging database systems, which has the potential to improve the reliability of their applications.
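The sketch below shows a simplified differential check in this spirit: a query built only from shared (or mapped-back) clauses is run on both the emerging system and a known-reliable relational engine, and the result sets are compared. SQLite stands in for the reference engine, and emerging_result is a placeholder for the system under test:

```python
# Compare the result of a shared-clause query on a reference relational
# engine (SQLite) against the emerging database system under test.

import sqlite3

SHARED_SCHEMA = "CREATE TABLE readings (sensor TEXT, ts INTEGER, value REAL);"
SHARED_ROWS = [("a", 1, 1.5), ("a", 2, 2.5), ("b", 1, 9.0)]
# A time-series-specific clause would first be mapped back to shared clauses
# (here: GROUP BY + MAX) before the comparison is made.
SHARED_QUERY = "SELECT sensor, MAX(value) FROM readings GROUP BY sensor ORDER BY sensor;"

def reference_result():
    con = sqlite3.connect(":memory:")
    con.execute(SHARED_SCHEMA)
    con.executemany("INSERT INTO readings VALUES (?, ?, ?)", SHARED_ROWS)
    return sorted(con.execute(SHARED_QUERY).fetchall())

def emerging_result():
    # placeholder: send SHARED_QUERY to the emerging database system under test
    return sorted([("a", 2.5), ("b", 9.0)])

if reference_result() != emerging_result():
    print("potential logic bug: result sets diverge")
else:
    print("results agree on this differential input")
```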