econ.EM

9 posts

arXiv:2412.20173v2 Announce Type: replace-cross Abstract: This study proposes a debiasing method for smooth nonparametric estimators. While machine learning techniques such as random forests and neural networks have demonstrated strong predictive performance, their theoretical properties remain relatively underexplored. Specifically, many modern algorithms lack guarantees of pointwise asymptotic normality and uniform convergence, which are critical for statistical inference and for robustness under covariate shift, and which are well established for classical methods such as Nadaraya-Watson regression. To address this, we introduce a model-free debiasing method that guarantees these properties for smooth estimators derived from any nonparametric regression approach. By adding a correction term that estimates the conditional expected residual of the original estimator, or equivalently its estimation error, we obtain a debiased estimator with proven pointwise asymptotic normality and uniform convergence. These properties enable statistical inference and enhance robustness to covariate shift, making the method broadly applicable to a wide range of nonparametric regression problems.

Masahiro Kato (1/3/2025)
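
The correction described above lends itself to a short illustration. Below is a minimal sketch under assumptions the abstract leaves open (a Gaussian Nadaraya-Watson smoother for the conditional expected residual, a sample split between fitting the base learner and estimating its residuals, and synthetic data); it is not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def nadaraya_watson(x_train, r_train, x_eval, bandwidth=0.3):
    """Gaussian-kernel smoother estimating the conditional mean of r given x."""
    d2 = ((x_eval[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    return (w * r_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(1000, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(1000)

# Split: fit the base learner on one half, estimate its residuals on the other.
X_fit, y_fit, X_res, y_res = X[:500], y[:500], X[500:], y[500:]
base = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_fit, y_fit)
residuals = y_res - base.predict(X_res)

# Debiased prediction = base fit + estimated conditional expected residual.
X_eval = np.linspace(-2, 2, 9).reshape(-1, 1)
debiased = base.predict(X_eval) + nadaraya_watson(X_res, residuals, X_eval)
print(np.round(debiased, 3))
```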

arXiv:2408.12014v2 Announce Type: replace Abstract: In recent years, power grids have seen a surge in large cryptocurrency mining firms, with individual consumption levels reaching 700 MW. This study examines the behavior of these firms in Texas, focusing on how their consumption is influenced by cryptocurrency conversion rates, electricity prices, local weather, and other factors. We transform the skewed electricity consumption data of these firms, perform correlation analysis, and fit a seasonal autoregressive moving average model. Our findings reveal that, surprisingly, short-term mining electricity consumption is not directly correlated with cryptocurrency conversion rates. Instead, the primary drivers are temperature and electricity prices. These firms also curtail consumption during the summer months to avoid transmission and distribution network (T&D) charges, commonly referred to as Four Coincident Peak (4CP) charges. As the scale of these firms is likely to surge in future years, the developed electricity consumption model can be used to generate public, synthetic datasets for understanding their overall impact on the power grid. The model could also inform better pricing mechanisms that use the flexibility of these resources to improve power grid reliability.

Subir Majumder, Ignacio Aravena, Le Xie (1/3/2025)
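
As a rough illustration of the pipeline this abstract describes (transform the skewed consumption series, then fit a seasonal ARMA with exogenous drivers), here is a minimal sketch on synthetic hourly data; the model orders, 24-hour seasonal period, and variable names are assumptions, not the paper's specification.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n_hours = 24 * 60  # sixty days of hourly observations (synthetic)
t = np.arange(n_hours)

# Synthetic drivers: a daily temperature cycle and a price loosely tied to it.
temperature = 30 + 8 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, n_hours)
price = 40 + 0.8 * temperature + rng.normal(0, 5, n_hours)

# Skewed consumption series responding to price and temperature.
load_mw = np.exp(6 - 0.01 * price + 0.02 * temperature + rng.normal(0, 0.1, n_hours))

# Log-transform the skewed series, then fit a seasonal ARMA with a 24-hour
# period and exogenous weather/price regressors.
exog = np.column_stack([temperature, price])
model = SARIMAX(np.log(load_mw), exog=exog, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
result = model.fit(disp=False)
print(result.params.round(4))
```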

arXiv:2409.05798v4 Announce Type: replace Abstract: Interactive preference learning systems infer human preferences by presenting queries as pairs of options and collecting binary choices. Although binary choices are simple and widely used, they provide limited information about preference strength. To address this, we leverage human response times, which are inversely related to preference strength, as an additional signal. We propose a computationally efficient method that combines choices and response times to estimate human utility functions, grounded in the EZ diffusion model from psychology. Theoretical and empirical analyses show that for queries with strong preferences, response times complement choices by providing extra information about preference strength, leading to significantly improved utility estimation. We incorporate this estimator into preference-based linear bandits for fixed-budget best-arm identification. Simulations on three real-world datasets demonstrate that using response times significantly accelerates preference learning compared to choice-only approaches. Additional materials, such as code, slides, and talk video, are available at https://shenlirobot.github.io/pages/NeurIPS24.html

Shen Li, Yuyang Zhang, Zhaolin Ren, Claire Liang, Na Li, Julie A. Shah (1/3/2025)
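
The role response times play here can be seen in the standard EZ-diffusion moment equations, which map a choice probability and response-time variance to a drift rate that scales with preference strength. The sketch below uses that textbook EZ mapping with illustrative numbers; it is not the paper's estimator or bandit algorithm.

```python
import numpy as np

def ez_drift(choice_prob, rt_variance, s=0.1):
    """EZ-diffusion moment equations: map choice accuracy and response-time
    variance to a drift rate, which scales with the utility difference."""
    p = np.clip(choice_prob, 1e-4, 1 - 1e-4)  # edge correction for 0/1 proportions
    L = np.log(p / (1 - p))
    x = L * (L * p ** 2 - L * p + p - 0.5) / rt_variance
    return np.sign(p - 0.5) * s * x ** 0.25

# Two queries: one with a strong preference (consistent, low-variance responses),
# one with a weak preference (noisy, high-variance responses). Numbers are illustrative.
strong = ez_drift(choice_prob=0.95, rt_variance=0.05)
weak = ez_drift(choice_prob=0.60, rt_variance=0.40)
print(f"estimated drift (strong preference): {strong:.3f}")
print(f"estimated drift (weak preference):   {weak:.3f}")
```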

arXiv:2201.05139v2 Announce Type: replace-cross Abstract: A core challenge in causal inference is how to extrapolate long-term effects, of possibly continuous actions, from short-term experimental data. It arises in artificial intelligence: the long-term consequences of continuous actions may be of interest, yet only short-term rewards may be collected in exploration. For this estimand, called the long-term dose-response curve, we propose a simple nonparametric estimator based on kernel ridge regression. By embedding the distribution of the short-term experimental data with kernels, we derive interpretable weights for extrapolating long-term effects. Our method allows actions, short-term rewards, and long-term rewards to be continuous in general spaces. It also allows for nonlinearity and heterogeneity in the link between short-term effects and long-term effects. We prove uniform consistency, with nonasymptotic error bounds reflecting the effective dimension of the data. As an application, we estimate the long-term dose-response curve of Project STAR, a social program that randomly assigned students to various class sizes. We extend our results to long-term counterfactual distributions, proving weak convergence.

Rahul Singh, Hannah Zhou (1/3/2025)
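
The estimator itself is not spelled out in the abstract, but the general flavor of extrapolating long-term effects through short-term rewards with kernel ridge regression can be sketched as follows. This is a surrogate-style two-stage construction assumed here for illustration, with synthetic data and made-up regularization parameters, not the paper's weighting estimator.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)

# Short-term experiment: continuous dose D and short-term reward M.
n = 800
dose = rng.uniform(0, 1, n)
short = 2 * dose + rng.normal(0, 0.2, n)

# Auxiliary sample linking short-term rewards to long-term rewards Y.
m_aux = rng.uniform(-0.5, 2.5, 600)
y_aux = np.sin(m_aux) + 0.5 * m_aux + rng.normal(0, 0.2, 600)

# Stage 1: kernel ridge regression of long-term reward on short-term reward.
link = KernelRidge(kernel="rbf", alpha=1e-2, gamma=5.0).fit(m_aux[:, None], y_aux)

# Stage 2: kernel ridge regression of the predicted long-term reward on dose,
# tracing out a long-term dose-response curve.
long_pred = link.predict(short[:, None])
dose_response = KernelRidge(kernel="rbf", alpha=1e-2, gamma=5.0).fit(dose[:, None], long_pred)

grid = np.linspace(0, 1, 5)[:, None]
print(np.round(dose_response.predict(grid), 3))
```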

arXiv:2312.08174v5 Announce Type: replace-cross Abstract: Recent advances in causal inference have seen the development of methods that make use of the predictive power of machine learning algorithms. In this paper, we develop novel double machine learning (DML) procedures for panel data in which these algorithms are used to approximate high-dimensional and nonlinear nuisance functions of the covariates. Our new procedures are extensions of the well-known correlated random effects, within-group and first-difference estimators from linear to nonlinear panel models, specifically, Robinson's (1988) partially linear regression model with fixed effects and unspecified nonlinear confounding. Our simulation study assesses the performance of these procedures using different machine learning algorithms. We use our procedures to re-estimate the impact of the minimum wage on voting behaviour in the UK. From our results, we recommend the use of first-differencing, because it imposes the fewest constraints on the distribution of the fixed effects, together with an ensemble learning strategy to ensure optimal estimator accuracy.

Paul S. Clarke, Annalivia Polselli (1/3/2025)
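
A minimal sketch of the recommended first-difference approach in a partially linear panel model, under assumptions added for illustration (a synthetic data-generating process, gradient boosting as the ML learner, two-fold cross-fitting); it follows the generic Robinson-style partialling-out recipe rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n_units, n_periods, theta = 300, 4, 1.5

# Panel with unit fixed effects, nonlinear confounding, and treatment d.
alpha = rng.normal(0, 1, n_units)[:, None]
x = rng.normal(0, 1, (n_units, n_periods))
d = 0.8 * np.cos(x) + 0.5 * alpha + rng.normal(0, 1, x.shape)
y = theta * d + np.sin(2 * x) + alpha + rng.normal(0, 1, x.shape)

# First-difference out the fixed effects; stack (x_t, x_{t-1}) as controls.
dy, dd = np.diff(y, axis=1).ravel(), np.diff(d, axis=1).ravel()
W = np.column_stack([x[:, 1:].ravel(), x[:, :-1].ravel()])

# Cross-fitted partialling out: residualize dy and dd on the controls.
res_y, res_d = np.zeros_like(dy), np.zeros_like(dd)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(W):
    res_y[test] = dy[test] - GradientBoostingRegressor().fit(W[train], dy[train]).predict(W[test])
    res_d[test] = dd[test] - GradientBoostingRegressor().fit(W[train], dd[train]).predict(W[test])

theta_hat = res_d @ res_y / (res_d @ res_d)
print(f"first-difference DML estimate of theta: {theta_hat:.3f} (true {theta})")
```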

arXiv:2103.04021v3 Announce Type: replace-cross Abstract: In the standard data analysis framework, data is collected (once and for all), and then data analysis is carried out. However, with the advancement of digital technology, decision-makers constantly analyze past data and generate new data through their decisions. We model this as a Markov decision process and show that the dynamic interaction between data generation and data analysis leads to a new type of bias -- reinforcement bias -- that exacerbates the endogeneity problem in standard data analysis. We propose a class of instrumental variable (IV)-based reinforcement learning (RL) algorithms to correct for the bias and establish their theoretical properties by incorporating them into a stochastic approximation (SA) framework. Our analysis accommodates iterate-dependent Markovian structures and, therefore, can be used to study RL algorithms with policy improvement. We also provide formulas for inference on optimal policies of the IV-RL algorithms. These formulas highlight how intertemporal dependencies of the Markovian environment affect the inference.

Jin Li, Ye Luo, Zigan Wang, Xiaowei Zhang (12/25/2024)
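
The endogeneity correction at the heart of the IV-RL idea can be illustrated in the simplest possible setting: a linear stochastic-approximation update whose regressor is correlated with the error, fixed by placing an instrument inside the update. This toy sketch (synthetic data, an assumed step-size schedule) conveys only the scalar intuition, not the paper's algorithms for Markovian data or policy improvement.

```python
import numpy as np

rng = np.random.default_rng(4)
theta_star, n_steps = 2.0, 100_000
theta_iv, theta_ls = 0.0, 0.0

for t in range(1, n_steps + 1):
    z = rng.normal()                               # instrument: correlated with x, not with e
    e = rng.normal()                               # structural error
    x = 0.7 * z + 0.5 * e + 0.3 * rng.normal()     # endogenous regressor
    y = theta_star * x + e
    a_t = 1.0 / t ** 0.75                          # stochastic-approximation step size

    theta_iv += a_t * z * (y - x * theta_iv)       # IV-corrected update
    theta_ls += a_t * x * (y - x * theta_ls)       # naive update, biased by endogeneity

print(f"IV-based SA estimate: {theta_iv:.3f}")
print(f"naive SA estimate:    {theta_ls:.3f}  (true value {theta_star})")
```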

arXiv:2412.16452v1 Announce Type: cross Abstract: Statistical protocols are often used for decision-making involving multiple parties, each with their own incentives, private information, and ability to influence the distributional properties of the data. We study a game-theoretic version of hypothesis testing in which a statistician, also known as a principal, interacts with strategic agents that can generate data. The statistician seeks to design a testing protocol with controlled error, while the data-generating agents, guided by their utility and prior information, choose whether or not to opt in based on expected utility maximization. This strategic behavior affects the data observed by the statistician and, consequently, the associated testing error. We analyze this problem for general concave and monotonic utility functions and prove an upper bound on the Bayes false discovery rate (FDR). Underlying this bound is a form of prior elicitation: we show how an agent's choice to opt in implies a certain upper bound on their prior null probability. Our FDR bound is unimprovable in a strong sense, achieving equality at a single point for an individual agent and at any countable number of points for a population of agents. We also demonstrate that our testing protocols exhibit a desirable maximin property when the principal's utility is considered. To illustrate the qualitative predictions of our theory, we examine the effects of risk aversion, reward stochasticity, and signal-to-noise ratio, as well as the implications for the Food and Drug Administration's testing protocols.

Flora C. Shi, Stephen Bates, Martin J. Wainwright (12/24/2024)

arXiv:2412.17753v1 Announce Type: cross Abstract: This study investigates an asymptotically minimax optimal algorithm for the two-armed fixed-budget best-arm identification (BAI) problem. Given two treatment arms, the objective is to identify the arm with the highest expected outcome through an adaptive experiment. We focus on the Neyman allocation, which samples the treatment arms in proportion to their outcome standard deviations. Our primary contribution is to prove the minimax optimality of the Neyman allocation for the simple regret, defined as the difference between the expected outcomes of the true best arm and the estimated best arm. Specifically, we first derive a minimax lower bound on the expected simple regret, which characterizes the worst-case performance achievable under location-shift distributions, a class that includes Gaussian distributions. We then show that the simple regret of the Neyman allocation asymptotically matches this lower bound under the worst-case distribution, including the constant term and not just the rate in the sample size. Notably, our optimality result holds without imposing locality restrictions on the distribution, such as local asymptotic normality. Furthermore, we demonstrate that under Bernoulli distributions the Neyman allocation reduces to the uniform allocation, i.e., the standard randomized controlled trial.

Masahiro Kato (12/24/2024)
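
The Neyman allocation itself is simple to state in code. The sketch below treats the outcome standard deviations as known and the data as synthetic Gaussian draws; in the adaptive experiment studied in the paper, these quantities would instead be estimated during the run.

```python
import numpy as np

rng = np.random.default_rng(5)
budget = 1_000
mu = (0.0, 0.2)       # true means (unknown to the experimenter)
sigma = (1.0, 3.0)    # outcome standard deviations (assumed known here)

# Neyman allocation: sample each arm in proportion to its standard deviation.
n0 = int(budget * sigma[0] / (sigma[0] + sigma[1]))
n1 = budget - n0
draws0 = rng.normal(mu[0], sigma[0], n0)
draws1 = rng.normal(mu[1], sigma[1], n1)

# Recommend the arm with the larger sample mean after the budget is spent.
best_arm = int(draws1.mean() > draws0.mean())
print(f"allocation: arm0={n0}, arm1={n1}; estimated best arm = {best_arm}")
```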

arXiv:2405.19317v2 Announce Type: replace Abstract: This study investigates an asymptotically locally minimax optimal algorithm for fixed-budget best-arm identification (BAI). We propose the Generalized Neyman Allocation (GNA) algorithm and demonstrate that its worst-case upper bound on the probability of misidentifying the best arm matches the worst-case lower bound under the small-gap regime, where the gap between the expected outcomes of the best and suboptimal arms is small. Our lower and upper bounds are tight, matching exactly, including constant terms, within the small-gap regime. The GNA algorithm generalizes the Neyman allocation for two-armed bandits (Neyman, 1934; Kaufmann et al., 2016) and refines existing BAI algorithms, such as those proposed by Glynn & Juneja (2004). By restricting the class of distributions to the small-gap regime, we obtain an asymptotically minimax optimal algorithm and thereby address a longstanding open issue in BAI (Kaufmann, 2020) and treatment choice (Kasy & Sautmann, 2021).

Masahiro Kato (12/24/2024)