econ.TH
19 posts

arXiv:2503.01855v1 Announce Type: cross Abstract: The desirable gambles framework provides a foundational approach to imprecise probability theory but relies heavily on linear utility assumptions. This paper introduces {\em function-coherent gambles}, a generalization that accommodates non-linear utility while preserving essential rationality properties. We establish core axioms for function-coherence and prove a representation theorem that characterizes acceptable gambles through continuous linear functionals. The framework is then applied to analyze various forms of discounting in intertemporal choice, including hyperbolic, quasi-hyperbolic, scale-dependent, and state-dependent discounting. We demonstrate how these alternatives to constant-rate exponential discounting can be integrated within the function-coherent framework. This unified treatment provides theoretical foundations for modeling sophisticated patterns of time preference within the desirability paradigm, bridging a gap between normative theory and observed behavior in intertemporal decision-making under genuine uncertainty.
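[Note: as background for the discounting families named above, the standard textbook discount functions (stated here for reference; the paper's function-coherent treatment may parameterize them differently) are

$$D_{\exp}(t)=\delta^{t},\qquad D_{\mathrm{hyp}}(t)=\frac{1}{1+kt},\qquad D_{\beta\delta}(t)=\begin{cases}1 & t=0\\ \beta\,\delta^{t} & t\ge 1\end{cases}$$

with $0<\delta<1$, $k>0$, and $0<\beta<1$; the quasi-hyperbolic ($\beta$-$\delta$) form captures present bias by discounting every future period by an extra factor $\beta$.]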
arXiv:2503.05338v1 Announce Type: new Abstract: In the context of decentralized blockchains, accurately simulating the outcome of order flow auctions (OFAs) off-chain is challenging due to adversarial sequencing, encrypted bids, and frequent state changes. Existing approaches, such as deterministic sorting via consensus layer modifications (e.g., MEV taxes) (Robinson and White 2024) and BRAID (Resnick 2024) or atomic execution of aggregated bids (e.g., Atlas) (Watts et al. 2024), remain vulnerable in permissionless settings where limited throughput allows rational adversaries to submit "spoof" bids that block their competitors' access to execution. We propose a new failure cost penalty that applies only when a solution is executed but does not pay its bid or fulfill the order. Combined with an on-chain escrow system, this mechanism empowers applications to asynchronously issue their users a guaranteed minimum outcome before the execution results are finalized. It implies a direct link between blockchain throughput, censorship resistance, and the capital efficiency of auction participants (e.g., solvers), which intuitively extends to execution quality. At equilibrium, bids fully reflect the potential for price improvement between bid submission and execution, but only partially reflect the potential for price declines. This asymmetry - unbounded upside for winning bids, limited downside for failed bids, and no loss for losing bids - ultimately benefits users.
arXiv:2502.19553v2 Announce Type: replace-cross Abstract: We study the effect of providing information to agents who queue before a scarce good is distributed at a fixed time. When agents have quasi-linear utility in time spent waiting, they choose entry times as they would bids in a descending auction. An information designer can influence their behavior by providing updates about the length of the queue. Many natural information policies release "sudden bad news," which occurs when agents learn that the queue is longer than previously believed. We show that sudden bad news can cause assortative inefficiency by prompting a mass of agents to simultaneously attempt to join the queue. As a result, if the value distribution has an increasing (decreasing) hazard rate, information policies that release sudden bad news increase (decrease) total surplus, relative to releasing no information. When agents face entry costs to join the queue and the value distribution has a decreasing hazard rate, an information designer maximizes total surplus by announcing only when the queue is full.
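[Note: the hazard-rate condition above refers to the standard definition: for a value distribution with cdf $F$ and density $f$, the hazard rate is $h(v)=f(v)/(1-F(v))$, and the distribution has an increasing (decreasing) hazard rate if $h$ is nondecreasing (nonincreasing) on its support.]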
arXiv:2503.05015v1 Announce Type: cross Abstract: This study extends Blackwell's (1953) comparison of information to a sequential social learning model, where agents make decisions sequentially based on both private signals and the observed actions of others. In this context, we introduce a new binary relation over information structures: An information structure is more socially valuable than another if it yields higher expected payoffs for all agents, regardless of their preferences. First, we establish that this binary relation is strictly stronger than the Blackwell order. Then, we provide a necessary and sufficient condition for our binary relation and propose a simpler sufficient condition that is easier to verify.
arXiv:2501.11897v1 Announce Type: new Abstract: We formulate and study a general time-varying multi-agent system where players repeatedly compete under incomplete information. Our work is motivated by scenarios commonly observed in online advertising and retail marketplaces, where agents and platform designers optimize algorithmic decision-making in dynamic competitive settings. In these systems, no-regret algorithms that provide guarantees relative to \emph{static} benchmarks can perform poorly, and the distributions of play that emerge from their interaction no longer correspond to static solution concepts such as coarse correlated equilibria. Instead, we analyze the interaction of \textit{dynamic benchmark} consistent policies that have performance guarantees relative to \emph{dynamic} sequences of actions, and through a novel \textit{tracking error} notion we delineate when their empirical joint distribution of play can approximate an evolving sequence of static equilibria. In systems that change sufficiently slowly (sub-linearly in the horizon length), we show that the resulting distributions of play approximate the sequence of coarse correlated equilibria, and apply this result to establish improved welfare bounds for smooth games. In a similar vein, we formulate internal dynamic benchmark consistent policies and establish that they approximate sequences of correlated equilibria. Our findings therefore suggest that, in a broad range of multi-agent systems where non-stationarity is prevalent, algorithms designed to compete with dynamic benchmarks can improve both individual and welfare guarantees, and their emerging dynamics approximate a sequence of static equilibrium outcomes.
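[Note: one standard formulation of regret against a dynamic benchmark (not necessarily the exact definition used in this paper) compares the policy's play to the best comparator sequence with a limited switch budget $V_T$, and asks that this regret be sublinear:

$$\mathrm{DynReg}(T)=\max_{(a^{*}_{1},\dots,a^{*}_{T}):\;\sum_{t<T}\mathbf{1}\{a^{*}_{t+1}\neq a^{*}_{t}\}\le V_T}\;\sum_{t=1}^{T}\bigl(u_t(a^{*}_{t})-u_t(a_t)\bigr)=o(T).$$]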
arXiv:2410.05533v3 Announce Type: replace Abstract: Classical information design models (e.g., Bayesian persuasion and cheap talk) require players to have perfect knowledge of the prior distribution of the state of the world. Our paper studies repeated persuasion problems in which the information designer does not know the prior. The information designer learns to design signaling schemes from repeated interactions with the receiver. We design learning algorithms for the information designer to achieve no regret compared to using the optimal signaling scheme with known prior, under two models of the receiver's decision-making. (1) The first model assumes that the receiver knows the prior and can perform posterior update and best respond to signals. In this model, we design a learning algorithm for the information designer with $O(\log T)$ regret in the general case, and another algorithm with $\Theta(\log \log T)$ regret in the case where the receiver has only two actions. (2) The second model assumes that the receiver does not know the prior and employs a no-regret learning algorithm to take actions. We show that the information designer can achieve regret $O(\sqrt{\mathrm{rReg}(T) T})$, where $\mathrm{rReg}(T)=o(T)$ is an upper bound on the receiver's learning regret. Our work thus provides a learning foundation for the problem of information design with unknown prior.
arXiv:2404.17413v4 Announce Type: replace Abstract: The Plurality rule for linear orders selects the alternatives most frequently appearing in the first position of those orders, while the Anti-Plurality rule selects the alternatives least often occurring in the final position. We explore extensions of these rules to partial orders, offering axiomatic characterisations for these extensions.
Keywords: Plurality, Anti-Plurality, Strict Partial Orders, Voting.
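[Note: a minimal sketch of the two base rules on linear orders, as described in the first sentence of the abstract; the paper's extensions to partial orders are not reproduced here.

    from collections import Counter

    def plurality(linear_orders):
        """Winners: alternatives appearing most often in first position.
        Each order is a list of alternatives, most to least preferred."""
        firsts = Counter(order[0] for order in linear_orders)
        best = max(firsts.values())
        return {a for a, c in firsts.items() if c == best}

    def anti_plurality(linear_orders, alternatives):
        """Winners: alternatives appearing least often in last position."""
        lasts = Counter(order[-1] for order in linear_orders)
        counts = {a: lasts.get(a, 0) for a in alternatives}
        fewest = min(counts.values())
        return {a for a, c in counts.items() if c == fewest}

    # Example: three voters over alternatives a, b, c.
    orders = [["a", "b", "c"], ["a", "b", "c"], ["c", "b", "a"]]
    print(plurality(orders))                        # {'a'}
    print(anti_plurality(orders, {"a", "b", "c"}))  # {'b'}
]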
arXiv:2412.11122v2 Announce Type: replace Abstract: Collaborative machine learning (CML) provides a promising paradigm for democratizing advanced technologies by enabling cost-sharing among participants. However, the potential for rent-seeking behaviors among parties can undermine such collaborations. Contract theory presents a viable solution by rewarding participants with models of varying accuracy based on their contributions. However, unlike monetary compensation, using models as rewards introduces unique challenges, particularly due to the stochastic nature of these rewards when contribution costs are privately held information. This paper formalizes the optimal contracting problem within CML and proposes a transformation that simplifies the non-convex optimization problem into one that can be solved through convex optimization algorithms. We conduct a detailed analysis of the properties that an optimal contract must satisfy when models serve as the rewards, and we explore the potential benefits and welfare implications of these contract-driven CML schemes through numerical experiments.
arXiv:2305.09065v3 Announce Type: replace-cross Abstract: A seller wants to sell an item to $n$ buyers. Buyer valuations are drawn i.i.d. from a distribution unknown to the seller; the seller only knows that the support is included in $[a, b]$. To be robust, the seller chooses a DSIC mechanism that optimizes the worst-case performance relative to the ideal expected revenue the seller could have collected with knowledge of buyers' valuations. Our analysis unifies the regret and the ratio objectives. For these objectives, we derive an optimal mechanism and the corresponding performance in quasi-closed form, as a function of the support information $[a, b]$ and the number of buyers $n$. Our analysis reveals three regimes of support information and a new class of robust mechanisms. i.) When $a/b$ is below a threshold, the optimal mechanism is a second-price auction (SPA) with random reserve, a focal class in earlier literature. ii.) When $a/b$ is above another threshold, SPAs are strictly suboptimal, and an optimal mechanism belongs to a class of mechanisms we introduce, which we call pooling auctions (POOL); whenever the highest value is above a threshold, the mechanism still allocates to the highest bidder, but otherwise the mechanism allocates to a uniformly random buyer, i.e., pools low types. iii.) When $a/b$ is between two thresholds, a randomization between SPA and POOL is optimal. We also characterize optimal mechanisms within nested central subclasses of mechanisms: standard mechanisms that only allocate to the highest bidder, SPA with random reserve, and SPA with no reserve. We show strict separations in terms of performance across classes, implying that deviating from standard mechanisms is necessary for robustness.
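[Note: a minimal sketch of the pooling-auction allocation logic as described in the abstract; the optimal threshold, reserves, and payment rule are the paper's and are not reproduced here, so `threshold` below is a placeholder parameter.

    import random

    def pool_allocate(bids, threshold, rng=random):
        """Allocate to the highest bidder if the highest value clears the
        threshold; otherwise pool low types and pick a buyer uniformly at
        random. bids: dict mapping buyer -> reported value."""
        winner = max(bids, key=bids.get)
        if bids[winner] >= threshold:
            return winner
        return rng.choice(list(bids))

    # Example with three buyers and a hypothetical threshold of 5.
    print(pool_allocate({"b1": 7, "b2": 3, "b3": 4}, threshold=5))  # 'b1'
]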
arXiv:2501.00578v1 Announce Type: cross Abstract: I propose a model of aggregation of intervals relevant to the study of legal standards of tolerance. Seven axioms - responsiveness, anonymity, continuity, strategyproofness, and three variants of neutrality - are then used to prove several important results about a new class of aggregation methods called endpoint rules. The class of endpoint rules includes extreme tolerance (allowing anything permitted by anyone) and a form of majoritarianism (the median rule).
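[Note: one plausible reading of the two named endpoint rules, offered only as an illustration and not as the paper's definitions: extreme tolerance returns the smallest interval containing every reported interval, and the median rule takes the median of each endpoint separately.

    from statistics import median

    def extreme_tolerance(intervals):
        """Smallest interval containing every report (illustrative reading)."""
        return (min(lo for lo, hi in intervals), max(hi for lo, hi in intervals))

    def median_rule(intervals):
        """Interval whose endpoints are the medians of the reported endpoints."""
        return (median(lo for lo, hi in intervals), median(hi for lo, hi in intervals))

    reports = [(0, 4), (1, 6), (2, 3)]
    print(extreme_tolerance(reports))  # (0, 6)
    print(median_rule(reports))        # (1, 4)
]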
arXiv:2501.00745v1 Announce Type: new Abstract: The increasing integration of Large Language Model (LLM) based search engines has transformed the landscape of information retrieval. However, these systems are vulnerable to adversarial attacks, especially ranking manipulation attacks, where attackers craft webpage content to manipulate the LLM's ranking and promote specific content, gaining an unfair advantage over competitors. In this paper, we study the dynamics of ranking manipulation attacks. We frame this problem as an Infinitely Repeated Prisoners' Dilemma, where multiple players strategically decide whether to cooperate or attack. We analyze the conditions under which cooperation can be sustained, identifying key factors such as attack costs, discount rates, attack success rates, and trigger strategies that influence player behavior. We identify tipping points in the system dynamics, demonstrating that cooperation is more likely to be sustained when players are forward-looking. However, from a defense perspective, we find that simply reducing attack success probabilities can, paradoxically, incentivize attacks under certain conditions. Furthermore, defensive measures to cap the upper bound of attack success rates may prove futile in some scenarios. These insights highlight the complexity of securing LLM-based systems. Our work provides a theoretical foundation and practical insights for understanding and mitigating their vulnerabilities, while emphasizing the importance of adaptive security strategies and thoughtful ecosystem design.
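[Note: as a reminder of the standard machinery behind such tipping-point results (a textbook condition, not this paper's specific model): under a grim-trigger strategy in a repeated prisoners' dilemma with stage payoffs $R$ for mutual cooperation, $T$ for the temptation to defect, and $P$ for mutual punishment, $T>R>P$, cooperation is sustainable for a player with discount factor $\delta$ iff

$$\frac{R}{1-\delta}\;\ge\;T+\frac{\delta P}{1-\delta}\quad\Longleftrightarrow\quad\delta\;\ge\;\frac{T-R}{T-P},$$

which is why more forward-looking players (higher $\delta$) sustain cooperation more easily.]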
arXiv:2308.04399v3 Announce Type: replace Abstract: Recent advances in Machine Learning (ML) and Artificial Intelligence (AI) follow a familiar structure: A firm releases a large, pretrained model. It is designed to be adapted and tweaked by other entities to perform particular, domain-specific functions. The model is described as `general-purpose,' meaning it can be transferred to a wide range of downstream tasks, in a process known as adaptation or fine-tuning. Understanding this process - the strategies, incentives, and interactions involved in the development of AI tools - is crucial for making conclusions about societal implications and regulatory responses, and may provide insights beyond AI about general-purpose technologies. We propose a model of this adaptation process. A Generalist brings the technology to a certain level of performance, and one or more Domain specialist(s) adapt it for use in particular domain(s). Players incur costs when they invest in the technology, so they need to reach a bargaining agreement on how to share the resulting revenue before making their investment decisions. We find that for a broad class of cost and revenue functions, there exists a set of Pareto-optimal profit-sharing arrangements where the players jointly contribute to the technology. Our analysis, which utilizes methods based on bargaining solutions and sub-game perfect equilibria, provides insights into the strategic behaviors of firms in these types of interactions. For example, profit-sharing can arise even when one firm faces significantly higher costs than another. After demonstrating findings in the case of one domain specialist, we provide closed-form and numerical bargaining solutions in the generalized setting with $n$ domain specialists. We find that any potential domain specialist will either contribute, free-ride, or abstain in their uptake of the technology, and provide conditions yielding these different responses.
arXiv:2402.11437v5 Announce Type: replace Abstract: The classic paper of Shapley and Shubik \cite{Shapley1971assignment} characterized the core of the assignment game. We observe that a sub-coalition consisting of one player (or a set of players from the same side of the bipartition) can make zero profit, and therefore its profit under a core imputation can be an arbitrary amount. Hence an arbitrary core imputation makes {\em no fairness guarantee at the level of individual agents}. Can this deficiency be addressed by picking a ``good'' core imputation? To arrive at an appropriate solution concept, we give specific criteria for picking a special core imputation, and we undertake a detailed comparison of four solution concepts. Leximin and leximax core imputations come out as clear winners; we define these to be {\em equitable core imputations}. These imputations achieve ``fairness'' in different ways: whereas leximin tries to make poor agents more rich, leximax tries to make rich agents less rich. We give combinatorial strongly polynomial algorithms for computing these imputations via a novel adaptation of the classical primal-dual paradigm. The ``engine'' driving them involves insights into core imputations obtained via complementarity. It will not be surprising if our work leads to new uses of this powerful technique. Furthermore, we expect more work on computing the leximin and leximax core imputations of other natural games, in addition to the recent follow-up work \cite{Leximin-max}.
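[Note: a minimal sketch of the leximin comparison underlying these solution concepts (the paper's combinatorial primal-dual algorithms are not sketched here): one imputation leximin-dominates another if, after sorting payoffs in nondecreasing order, its payoff vector is lexicographically larger.

    def leximin_key(imputation):
        """Sort payoffs ascending; comparing these tuples lexicographically
        gives the leximin order."""
        return tuple(sorted(imputation.values()))

    def leximin_better(x, y):
        """True if imputation x leximin-dominates imputation y."""
        return leximin_key(x) > leximin_key(y)

    x = {"a": 2, "b": 3, "c": 5}
    y = {"a": 1, "b": 4, "c": 5}
    print(leximin_better(x, y))  # True: (2, 3, 5) > (1, 4, 5)
]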
arXiv:2412.00897v2 Announce Type: replace-cross Abstract: We propose a model of unawareness that remains close to the paradigm of Aumann's model for knowledge [R. J. Aumann, International Journal of Game Theory 28 (1999) 263-300]: just as Aumann uses a correspondence on a state space to define an agent's knowledge operator on events, we use a correspondence on a state space to define an agent's awareness operator on events. This is made possible by three ideas. First, like the model of [A. Heifetz, M. Meier, and B. Schipper, Journal of Economic Theory 130 (2006) 78-94], ours is based on a space of partial specifications of the world, partially ordered by a relation of further specification or refinement, and the idea that agents may be aware of some coarser-grained specifications while unaware of some finer-grained specifications; however, our model is based on a different implementation of this idea, related to forcing in set theory. Second, we depart from a tradition in the literature, initiated by [S. Modica and A. Rustichini, Theory and Decision 37 (1994) 107-124] and adopted by Heifetz et al. and [J. Li, Journal of Economic Theory 144 (2009) 977-993], of taking awareness to be definable in terms of knowledge. Third, we show that the negative conclusion of a well-known impossibility theorem concerning unawareness in [Dekel, Lipman, and Rustichini, Econometrica 66 (1998) 159-173] can be escaped by a slight weakening of a key axiom. Together these points demonstrate that a correspondence on a partial-state space is sufficient to model unawareness of events. Indeed, we prove a representation theorem showing that any abstract Boolean algebra equipped with awareness, knowledge, and belief operators satisfying some plausible axioms is representable as the algebra of events arising from a partial-state space with awareness, knowledge, and belief correspondences.
arXiv:2401.12366v2 Announce Type: replace-cross Abstract: We analyze how market segmentation affects consumer welfare when a monopolist can engage in both second-degree price discrimination (through product differentiation) and third-degree price discrimination (through market segmentation). We characterize the consumer-optimal market segmentation and show that it has several striking properties: (1) the market segmentation displays monotonicity: higher-value customers always receive a higher quality product than lower-value customers, both within and across segments; and (2) when aggregate demand elasticity exceeds a threshold determined by marginal costs, no segmentation maximizes consumer surplus. Our results demonstrate that strategic market segmentation can benefit consumers even when it enables price discrimination, but these benefits depend critically on demand elasticities and cost structures. The findings have implications for regulatory policy regarding price discrimination and market segmentation practices.
arXiv:2401.16412v3 Announce Type: replace Abstract: By classic results in social choice theory, any reasonable preferential voting method sometimes gives individuals an incentive to report an insincere preference. The extent to which different voting methods are more or less resistant to such strategic manipulation has become a key consideration for comparing voting methods. Here we measure resistance to manipulation by whether neural networks of various sizes can learn to profitably manipulate a given voting method in expectation, given different types of limited information about how other voters will vote. We trained over 100,000 neural networks of 26 sizes to manipulate against 8 different voting methods, under 6 types of limited information, in committee-sized elections with 5-21 voters and 3-6 candidates. We find that some voting methods, such as Borda, are highly manipulable by networks with limited information, while others, such as Instant Runoff, are not, despite being quite profitably manipulated by an ideal manipulator with full information. For the three probability models for elections that we use, the overall least manipulable of the 8 methods we study are Condorcet methods, namely Minimax and Split Cycle.
arXiv:2412.16384v1 Announce Type: new Abstract: A contract is an economic tool used by a principal to incentivize one or more agents to exert effort on her behalf, by defining payments based on observable performance measures. A key challenge addressed by contracts -- known in economics as moral hazard -- is that, absent a properly set up contract, agents might engage in actions that are not in the principal's best interest. Another common feature of contracts is limited liability, which means that payments can go only from the principal -- who has the deep pocket -- to the agents. With classic applications of contract theory moving online, growing in scale, and becoming more data-driven, tools from contract theory become increasingly important for incentive-aware algorithm design. At the same time, algorithm design offers a whole new toolbox for reasoning about contracts, ranging from additional tools for studying the tradeoff between simple and optimal contracts, through a language for discussing the computational complexity of contracts in combinatorial settings, to a formalism for analyzing data-driven contracts. This survey aims to provide a computer science-friendly introduction to the basic concepts of contract theory. We give an overview of the emerging field of "algorithmic contract theory" and highlight work that showcases the potential for interaction between the two areas. We also discuss avenues for future research.
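[Note: a minimal sketch of the textbook principal-agent setup such a survey builds on, written as an assumption about the setting rather than a summary of the survey's notation: the agent picks the action maximizing expected payment minus effort cost, and limited liability means all payments are nonnegative.

    def agent_best_response(costs, outcome_probs, payments):
        """Agent's best response to a limited-liability contract.
        costs[i]: effort cost of action i.
        outcome_probs[i][j]: probability action i yields outcome j.
        payments[j]: payment for outcome j (limited liability: >= 0)."""
        assert all(t >= 0 for t in payments), "limited liability violated"
        def utility(i):
            return sum(p * t for p, t in zip(outcome_probs[i], payments)) - costs[i]
        return max(range(len(costs)), key=utility)

    # Two actions (shirk, work), two outcomes (fail, success), a toy contract.
    costs = [0.0, 1.0]
    outcome_probs = [[0.9, 0.1], [0.2, 0.8]]
    payments = [0.0, 2.0]
    print(agent_best_response(costs, outcome_probs, payments))  # 1 (work)
]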
arXiv:2412.16132v1 Announce Type: cross Abstract: We study mechanism design when agents hold private information about both their preferences and a common payoff-relevant state. We show that standard message-driven mechanisms cannot implement socially efficient allocations when agents have multidimensional types, even under favorable conditions. To overcome this limitation, we propose data-driven mechanisms that leverage additional post-allocation information, modeled as an estimator of the payoff-relevant state. Our data-driven mechanisms extend the classic Vickrey-Clarke-Groves class. We show that they achieve exact implementation in posterior equilibrium when the state is either fully revealed or the utility is linear in an unbiased estimator. We also show that they achieve approximate implementation with a consistent estimator, converging to exact implementation as the estimator converges, and present bounds on the convergence rate. We demonstrate applications to digital advertising auctions and large language model (LLM)-based mechanisms, where user engagement naturally reveals relevant information.
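[Note: for reference, the classic VCG rule that these data-driven mechanisms extend (the classic formula, stated independently of the paper's estimator-based variant): choose the allocation maximizing reported welfare and charge each agent the externality she imposes on the others,

$$a^{*}\in\arg\max_{a}\sum_{i}v_i(a),\qquad p_i=\max_{a}\sum_{j\neq i}v_j(a)-\sum_{j\neq i}v_j(a^{*}).$$]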
arXiv:2412.15472v1 Announce Type: new Abstract: Allocating indivisible goods is a ubiquitous task in fair division. We study additive welfarist rules, an important class of rules which choose an allocation that maximizes the sum of some function of the agents' utilities. Prior work has shown that the maximum Nash welfare (MNW) rule is the unique additive welfarist rule that guarantees envy-freeness up to one good (EF1). We strengthen this result by showing that MNW remains the only additive welfarist rule that ensures EF1 for identical-good instances, two-value instances, as well as normalized instances with three or more agents. On the other hand, if the agents' utilities are integers, we demonstrate that several other rules offer the EF1 guarantee, and provide characterizations of these rules for various classes of instances.
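[Note: for reference, an additive welfarist rule with function $f$ selects an allocation $A=(A_1,\dots,A_n)$ maximizing $\sum_i f(u_i(A_i))$; MNW is the case $f=\log$, equivalently maximizing $\prod_i u_i(A_i)$. A minimal brute-force sketch for additive utilities on tiny instances (the zero-utility refinement and EF1 check are omitted):

    from itertools import product
    from math import prod

    def mnw_allocation(utilities):
        """Brute-force maximum Nash welfare. utilities[i][g]: agent i's value
        for good g. Returns one agent index per good maximizing the product
        of agents' bundle utilities."""
        n, m = len(utilities), len(utilities[0])
        def nash_welfare(assignment):
            bundles = [[g for g, a in enumerate(assignment) if a == i] for i in range(n)]
            return prod(sum(utilities[i][g] for g in b) for i, b in enumerate(bundles))
        return max(product(range(n), repeat=m), key=nash_welfare)

    # Two agents, three goods.
    utilities = [[4, 1, 2], [2, 3, 3]]
    print(mnw_allocation(utilities))  # (0, 1, 1): agent 0 gets good 0
]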