econ.TH
12 posts

arXiv:2412.11122v2 Announce Type: replace Abstract: Collaborative machine learning (CML) provides a promising paradigm for democratizing advanced technologies by enabling cost-sharing among participants. However, the potential for rent-seeking behavior among parties can undermine such collaborations. Contract theory presents a viable solution by rewarding participants with models of varying accuracy based on their contributions. Yet unlike monetary compensation, using models as rewards introduces unique challenges, particularly because these rewards are stochastic while contribution costs are privately held information. This paper formalizes the optimal contracting problem within CML and proposes a transformation that reduces the non-convex optimization problem to one solvable by convex optimization algorithms. We conduct a detailed analysis of the properties that an optimal contract must satisfy when models serve as the rewards, and we explore the potential benefits and welfare implications of these contract-driven CML schemes through numerical experiments.
arXiv:2305.09065v3 Announce Type: replace-cross Abstract: A seller wants to sell an item to $n$ buyers. Buyer valuations are drawn i.i.d. from a distribution unknown to the seller; the seller only knows that the support is included in $[a, b]$. To be robust, the seller chooses a DSIC mechanism that optimizes the worst-case performance relative to the ideal expected revenue the seller could have collected with knowledge of buyers' valuations. Our analysis unifies the regret and the ratio objectives. For these objectives, we derive an optimal mechanism and the corresponding performance in quasi-closed form, as a function of the support information $[a, b]$ and the number of buyers $n$. Our analysis reveals three regimes of support information and a new class of robust mechanisms. i.) When $a/b$ is below a threshold, the optimal mechanism is a second-price auction (SPA) with random reserve, a focal class in earlier literature. ii.) When $a/b$ is above another threshold, SPAs are strictly suboptimal, and an optimal mechanism belongs to a class of mechanisms we introduce, which we call pooling auctions (POOL); whenever the highest value is above a threshold, the mechanism still allocates to the highest bidder, but otherwise the mechanism allocates to a uniformly random buyer, i.e., pools low types. iii.) When $a/b$ is between two thresholds, a randomization between SPA and POOL is optimal. We also characterize optimal mechanisms within nested central subclasses of mechanisms: standard mechanisms that only allocate to the highest bidder, SPA with random reserve, and SPA with no reserve. We show strict separations in terms of performance across classes, implying that deviating from standard mechanisms is necessary for robustness.
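The pooling auction (POOL) introduced above has a simple allocation rule: allocate to the highest bidder when the highest value clears a threshold, and otherwise to a uniformly random buyer. A minimal sketch of that allocation rule is below; the threshold value and payment rule are omitted and purely illustrative, not the paper's optimal calibration.

```python
import random

def pool_allocate(bids, threshold, rng=random):
    """Allocation rule of a pooling auction (POOL) as described in the abstract:
    if the highest bid clears the threshold, the highest bidder wins;
    otherwise the item goes to a uniformly random buyer (low types are pooled).
    Payments are omitted; they would be chosen to keep the mechanism DSIC."""
    top = max(range(len(bids)), key=lambda i: bids[i])
    if bids[top] >= threshold:
        return top
    return rng.randrange(len(bids))

# Toy run with values in [a, b] = [1, 10] and an illustrative threshold of 6.
print(pool_allocate([2.0, 3.5, 9.1], threshold=6.0))  # highest bidder (index 2) wins
print(pool_allocate([2.0, 3.5, 4.0], threshold=6.0))  # uniformly random winner
```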
arXiv:2501.00745v1 Announce Type: new Abstract: The increasing integration of Large Language Model (LLM) based search engines has transformed the landscape of information retrieval. However, these systems are vulnerable to adversarial attacks, especially ranking manipulation attacks, where attackers craft webpage content to manipulate the LLM's ranking and promote specific content, gaining an unfair advantage over competitors. In this paper, we study the dynamics of ranking manipulation attacks. We frame this problem as an Infinitely Repeated Prisoners' Dilemma, where multiple players strategically decide whether to cooperate or attack. We analyze the conditions under which cooperation can be sustained, identifying key factors such as attack costs, discount rates, attack success rates, and trigger strategies that influence player behavior. We identify tipping points in the system dynamics, demonstrating that cooperation is more likely to be sustained when players are forward-looking. However, from a defense perspective, we find that simply reducing attack success probabilities can, paradoxically, incentivize attacks under certain conditions. Furthermore, defensive measures to cap the upper bound of attack success rates may prove futile in some scenarios. These insights highlight the complexity of securing LLM-based systems. Our work provides a theoretical foundation and practical insights for understanding and mitigating their vulnerabilities, while emphasizing the importance of adaptive security strategies and thoughtful ecosystem design.
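The abstract frames ranking manipulation as an infinitely repeated Prisoners' Dilemma sustained by trigger strategies. As background, a minimal sketch of the textbook grim-trigger condition: cooperation is sustainable when the discount factor delta satisfies delta >= (T - R)/(T - P), where T, R, P are the temptation, reward, and punishment payoffs. The mapping of attack costs and success rates into these payoffs here is an illustrative assumption, not the paper's calibration.

```python
def cooperation_sustainable(delta, T, R, P):
    """Textbook grim-trigger condition for an infinitely repeated Prisoners'
    Dilemma: a one-shot deviation gains (T - R) today but forfeits (R - P)
    in every future round, so cooperation is an equilibrium iff
    delta >= (T - R) / (T - P)."""
    return delta >= (T - R) / (T - P)

# Illustrative payoffs for the ranking-manipulation story (assumed, not from the paper):
# attacking a cooperator yields T, mutual cooperation R, mutual attack P.
attack_gain, coop_payoff, war_payoff = 5.0, 3.0, 1.0
for delta in (0.3, 0.5, 0.8):
    print(delta, cooperation_sustainable(delta, attack_gain, coop_payoff, war_payoff))
```

Forward-looking players (high delta) satisfy the inequality more easily, matching the abstract's observation that cooperation is more likely to be sustained when players are forward-looking.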
arXiv:2501.00578v1 Announce Type: cross Abstract: I propose a model of aggregation of intervals relevant to the study of legal standards of tolerance. Seven axioms (responsiveness, anonymity, continuity, strategyproofness, and three variants of neutrality) are then used to prove several important results about a new class of aggregation methods called endpoint rules. The class of endpoint rules includes extreme tolerance (allowing anything permitted by anyone) and a form of majoritarianism (the median rule).
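A minimal sketch of one plausible reading of the two rules named above, applied to individual tolerance intervals [l_i, u_i]: extreme tolerance takes the most permissive endpoints, and the median rule takes the median of each endpoint. This is an illustration only; the paper's formal definition of endpoint rules may differ.

```python
from statistics import median

def extreme_tolerance(intervals):
    """Allow anything permitted by anyone: take the most permissive endpoints."""
    return (min(l for l, _ in intervals), max(u for _, u in intervals))

def median_rule(intervals):
    """A majoritarian endpoint rule: take the median of each endpoint."""
    return (median(l for l, _ in intervals), median(u for _, u in intervals))

standards = [(0.0, 3.0), (1.0, 4.0), (2.0, 6.0)]  # individual tolerance intervals
print(extreme_tolerance(standards))  # (0.0, 6.0)
print(median_rule(standards))        # (1.0, 4.0)
```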
arXiv:2308.04399v3 Announce Type: replace Abstract: Recent advances in Machine Learning (ML) and Artificial Intelligence (AI) follow a familiar structure: A firm releases a large, pretrained model. It is designed to be adapted and tweaked by other entities to perform particular, domain-specific functions. The model is described as "general-purpose," meaning it can be transferred to a wide range of downstream tasks, in a process known as adaptation or fine-tuning. Understanding this process - the strategies, incentives, and interactions involved in the development of AI tools - is crucial for drawing conclusions about societal implications and regulatory responses, and may provide insights beyond AI about general-purpose technologies. We propose a model of this adaptation process. A Generalist brings the technology to a certain level of performance, and one or more Domain specialist(s) adapt it for use in particular domain(s). Players incur costs when they invest in the technology, so they need to reach a bargaining agreement on how to share the resulting revenue before making their investment decisions. We find that for a broad class of cost and revenue functions, there exists a set of Pareto-optimal profit-sharing arrangements where the players jointly contribute to the technology. Our analysis, which utilizes methods based on bargaining solutions and sub-game perfect equilibria, provides insights into the strategic behaviors of firms in these types of interactions. For example, profit-sharing can arise even when one firm faces significantly higher costs than another. After demonstrating findings in the case of one domain specialist, we provide closed-form and numerical bargaining solutions in the generalized setting with $n$ domain specialists. We find that any potential domain specialist will either contribute, free-ride, or abstain in their uptake of the technology, and provide conditions yielding these different responses.
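The analysis above relies on bargaining solutions to split revenue between the Generalist and the Domain specialist(s). As a reference point, here is a minimal sketch of the symmetric Nash bargaining solution for two players splitting a transferable surplus, with each firm's outside option as its disagreement payoff. The numbers and the transferable-surplus structure are illustrative assumptions, not the paper's cost and revenue functions.

```python
def nash_bargaining_split(surplus, d1, d2):
    """Symmetric Nash bargaining over a transferable surplus:
    maximize (u1 - d1) * (u2 - d2) subject to u1 + u2 = surplus.
    Each player gets their disagreement payoff plus half the gains from trade."""
    gains = surplus - d1 - d2
    if gains < 0:
        return d1, d2  # no agreement improves on the outside options
    return d1 + gains / 2, d2 + gains / 2

# Illustrative numbers: joint revenue 10, Generalist outside option 2, Specialist 1.
print(nash_bargaining_split(10.0, 2.0, 1.0))  # (5.5, 4.5)
```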
arXiv:2402.11437v5 Announce Type: replace Abstract: The classic paper of Shapley and Shubik (1971) characterized the core of the assignment game. We observe that a sub-coalition consisting of one player (or a set of players from the same side of the bipartition) can make zero profit, and therefore its profit under a core imputation can be an arbitrary amount. Hence an arbitrary core imputation makes no fairness guarantee at the level of individual agents. Can this deficiency be addressed by picking a "good" core imputation? To arrive at an appropriate solution concept, we give specific criteria for picking a special core imputation, and we undertake a detailed comparison of four solution concepts. Leximin and leximax core imputations come out as clear winners; we define these to be equitable core imputations. These imputations achieve "fairness" in different ways: whereas leximin tries to make poor agents richer, leximax tries to make rich agents less rich. We give combinatorial strongly polynomial algorithms for computing these imputations via a novel adaptation of the classical primal-dual paradigm. The "engine" driving them involves insights into core imputations obtained via complementarity. It will not be surprising if our work leads to new uses of this powerful technique. Furthermore, we expect more work on computing the leximin and leximax core imputations of other natural games, in addition to the recent follow-up work \cite{Leximin-max}.
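For readers unfamiliar with the assignment game, a brute-force sketch of the core conditions it refers to is below: an imputation (u, v) over the two sides is in the core exactly when it is nonnegative, distributes the value of an optimal matching, and leaves no buyer-seller pair able to deviate profitably. The instance and candidate imputations are illustrative; this is not the paper's leximin/leximax algorithm.

```python
from itertools import permutations

def optimal_matching_value(a):
    """Value of an optimal assignment in a square profit matrix (brute force)."""
    n = len(a)
    return max(sum(a[i][p[i]] for i in range(n)) for p in permutations(range(n)))

def in_core(a, u, v):
    """Shapley-Shubik characterization: (u, v) is a core imputation iff it is
    nonnegative, exactly distributes the optimal matching value, and no
    buyer-seller pair can profitably deviate: u[i] + v[j] >= a[i][j]."""
    efficient = abs(sum(u) + sum(v) - optimal_matching_value(a)) < 1e-9
    stable = all(u[i] + v[j] >= a[i][j] - 1e-9
                 for i in range(len(a)) for j in range(len(a)))
    return efficient and stable and min(u + v) >= -1e-9

a = [[5, 8], [7, 2]]               # pairwise values; optimal matching value is 15
print(in_core(a, [4, 3], [4, 4]))  # True: an interior core imputation
print(in_core(a, [0, 0], [7, 8]))  # True: the seller-optimal corner, buyers get zero
print(in_core(a, [8, 7], [0, 0]))  # True: the buyer-optimal corner, sellers get zero
```

The last two lines illustrate the abstract's observation: core imputations can leave one whole side with zero, so the core alone gives no individual fairness guarantee.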
arXiv:2412.00897v2 Announce Type: replace-cross Abstract: We propose a model of unawareness that remains close to the paradigm of Aumann's model for knowledge [R. J. Aumann, International Journal of Game Theory 28 (1999) 263-300]: just as Aumann uses a correspondence on a state space to define an agent's knowledge operator on events, we use a correspondence on a state space to define an agent's awareness operator on events. This is made possible by three ideas. First, like the model of [A. Heifetz, M. Meier, and B. Schipper, Journal of Economic Theory 130 (2006) 78-94], ours is based on a space of partial specifications of the world, partially ordered by a relation of further specification or refinement, and the idea that agents may be aware of some coarser-grained specifications while unaware of some finer-grained specifications; however, our model is based on a different implementation of this idea, related to forcing in set theory. Second, we depart from a tradition in the literature, initiated by [S. Modica and A. Rustichini, Theory and Decision 37 (1994) 107-124] and adopted by Heifetz et al. and [J. Li, Journal of Economic Theory 144 (2009) 977-993], of taking awareness to be definable in terms of knowledge. Third, we show that the negative conclusion of a well-known impossibility theorem concerning unawareness in [Dekel, Lipman, and Rustichini, Econometrica 66 (1998) 159-173] can be escaped by a slight weakening of a key axiom. Together these points demonstrate that a correspondence on a partial-state space is sufficient to model unawareness of events. Indeed, we prove a representation theorem showing that any abstract Boolean algebra equipped with awareness, knowledge, and belief operators satisfying some plausible axioms is representable as the algebra of events arising from a partial-state space with awareness, knowledge, and belief correspondences.
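The starting point above is Aumann's construction: a possibility correspondence P on a state space defines a knowledge operator by K(E) = {w : P(w) is a subset of E}, and the paper defines awareness analogously from a second correspondence. A minimal sketch of the standard knowledge operator follows; the toy state space is illustrative, and the paper's partial-state awareness construction is not reproduced here.

```python
def knowledge_operator(possibility, event, states):
    """Aumann-style knowledge operator: the agent knows event E at state w
    iff every state she considers possible at w lies in E, i.e. P(w) is a subset of E."""
    return {w for w in states if possibility[w] <= event}

states = {1, 2, 3, 4}
# Partitional possibility correspondence: the agent cannot tell state 1 from state 2.
P = {1: {1, 2}, 2: {1, 2}, 3: {3}, 4: {4}}

print(knowledge_operator(P, {1, 2, 3}, states))  # {1, 2, 3}: known everywhere except state 4
print(knowledge_operator(P, {1}, states))        # set(): {1} is never known, 1 and 2 are confounded
```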
arXiv:2412.16384v1 Announce Type: new Abstract: A contract is an economic tool used by a principal to incentivize one or more agents to exert effort on her behalf, by defining payments based on observable performance measures. A key challenge addressed by contracts -- known in economics as moral hazard -- is that, absent a properly set up contract, agents might engage in actions that are not in the principal's best interest. Another common feature of contracts is limited liability, which means that payments can go only from the principal -- who has deep pockets -- to the agents. With classic applications of contract theory moving online, growing in scale, and becoming more data-driven, tools from contract theory become increasingly important for incentive-aware algorithm design. At the same time, algorithm design offers a whole new toolbox for reasoning about contracts, ranging from additional tools for studying the tradeoff between simple and optimal contracts, through a language for discussing the computational complexity of contracts in combinatorial settings, to a formalism for analyzing data-driven contracts. This survey aims to provide a computer science-friendly introduction to the basic concepts of contract theory. We give an overview of the emerging field of "algorithmic contract theory" and highlight work that showcases the potential for interaction between the two areas. We also discuss avenues for future research.
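A minimal sketch of the basic principal-agent setup the survey builds on: a limited-liability contract pays the agent a nonnegative amount per observable outcome, the agent best-responds by picking the action maximizing expected payment minus cost, and the principal searches for payments that maximize her expected net value. All numbers are illustrative, and the grid search stands in for the linear-programming treatment used in the literature.

```python
from itertools import product

# Two actions with costs and outcome distributions over {failure, success};
# the principal values success at 10. All numbers are illustrative.
costs = [0.0, 1.0]
probs = [[0.9, 0.1], [0.4, 0.6]]   # probs[action][outcome]
reward = [0.0, 10.0]               # principal's value per outcome

def agent_best_action(t):
    """The agent best-responds to contract t (a payment per outcome)."""
    util = [sum(p * pay for p, pay in zip(probs[a], t)) - costs[a]
            for a in range(len(costs))]
    return max(range(len(costs)), key=lambda a: util[a])

best, best_contract = float("-inf"), None
for t in product([i * 0.25 for i in range(41)], repeat=2):  # grid over payments in [0, 10]
    a = agent_best_action(t)
    principal = sum(p * (r - pay) for p, r, pay in zip(probs[a], reward, t))
    if principal > best:
        best, best_contract = principal, t
print(best_contract, best)  # roughly: pay only for success, just enough to induce effort
```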
arXiv:2401.16412v3 Announce Type: replace Abstract: By classic results in social choice theory, any reasonable preferential voting method sometimes gives individuals an incentive to report an insincere preference. The extent to which different voting methods are more or less resistant to such strategic manipulation has become a key consideration for comparing voting methods. Here we measure resistance to manipulation by whether neural networks of various sizes can learn to profitably manipulate a given voting method in expectation, given different types of limited information about how other voters will vote. We trained over 100,000 neural networks of 26 sizes to manipulate against 8 different voting methods, under 6 types of limited information, in committee-sized elections with 5-21 voters and 3-6 candidates. We find that some voting methods, such as Borda, are highly manipulable by networks with limited information, while others, such as Instant Runoff, are not, despite being quite profitably manipulated by an ideal manipulator with full information. For the three probability models for elections that we use, the overall least manipulable of the 8 methods we study are Condorcet methods, namely Minimax and Split Cycle.
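To make the notion of manipulability concrete, here is a minimal sketch of the full-information, single-voter version for the Borda rule mentioned above: enumerate every insincere ranking a voter could submit and check whether any yields a winner the voter strictly prefers. This is the textbook benchmark, not the paper's neural-network learning setup; the profile and alphabetical tie-breaking are illustrative assumptions.

```python
from itertools import permutations

def borda_winner(profile, candidates):
    """Borda count: a candidate gets (m - 1 - position) points from each ballot."""
    m = len(candidates)
    score = {c: 0 for c in candidates}
    for ballot in profile:
        for pos, c in enumerate(ballot):
            score[c] += m - 1 - pos
    return max(sorted(candidates), key=lambda c: score[c])  # ties broken alphabetically

def can_manipulate(profile, voter, candidates):
    """Can `voter` obtain a strictly preferred Borda winner by misreporting?"""
    sincere = profile[voter]
    honest_winner = borda_winner(profile, candidates)
    for insincere in permutations(candidates):
        trial = list(profile)
        trial[voter] = insincere
        w = borda_winner(trial, candidates)
        if sincere.index(w) < sincere.index(honest_winner):
            return True, insincere, w
    return False, None, honest_winner

profile = [("a", "b", "c", "d"), ("b", "a", "c", "d"), ("b", "a", "c", "d")]
print(can_manipulate(profile, voter=0, candidates=("a", "b", "c", "d")))
# Voter 0 can demote the sincere winner b on their ballot and make a win instead.
```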
arXiv:2401.12366v2 Announce Type: replace-cross Abstract: We analyze how market segmentation affects consumer welfare when a monopolist can engage in both second-degree price discrimination (through product differentiation) and third-degree price discrimination (through market segmentation). We characterize the consumer-optimal market segmentation and show that it has several striking properties: (1) the segmentation displays monotonicity: higher-value customers always receive a higher-quality product than lower-value customers, both within and across segments; and (2) when aggregate demand elasticity exceeds a threshold determined by marginal costs, no segmentation maximizes consumer surplus. Our results demonstrate that strategic market segmentation can benefit consumers even when it enables price discrimination, but these benefits depend critically on demand elasticities and cost structures. The findings have implications for regulatory policy regarding price discrimination and market segmentation practices.
arXiv:2412.15472v1 Announce Type: new Abstract: Allocating indivisible goods is a ubiquitous task in fair division. We study additive welfarist rules, an important class of rules which choose an allocation that maximizes the sum of some function of the agents' utilities. Prior work has shown that the maximum Nash welfare (MNW) rule is the unique additive welfarist rule that guarantees envy-freeness up to one good (EF1). We strengthen this result by showing that MNW remains the only additive welfarist rule that ensures EF1 for identical-good instances, two-value instances, as well as normalized instances with three or more agents. On the other hand, if the agents' utilities are integers, we demonstrate that several other rules offer the EF1 guarantee, and provide characterizations of these rules for various classes of instances.
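A minimal sketch of the two objects compared above: an additive welfarist rule picks an allocation maximizing the sum of f applied to the agents' utilities, and maximum Nash welfare (MNW) is the case f = log, i.e., maximizing the product of utilities. The brute force over all allocations and the treatment of zero utilities are simplifications for a tiny additive instance; the instance itself is illustrative.

```python
from itertools import product
from math import log

def welfarist_allocation(valuations, f):
    """Additive welfarist rule: among all allocations of indivisible goods,
    pick one maximizing sum_i f(u_i), where u_i is agent i's additive utility."""
    n, m = len(valuations), len(valuations[0])
    best, best_alloc = float("-inf"), None
    for assignment in product(range(n), repeat=m):  # assignment[g] = owner of good g
        utils = [sum(valuations[i][g] for g in range(m) if assignment[g] == i)
                 for i in range(n)]
        value = sum(f(u) for u in utils)
        if value > best:
            best, best_alloc = value, assignment
    return best_alloc

vals = [[8, 8, 8], [1, 1, 1]]  # 2 agents, 3 goods, additive valuations
mnw = welfarist_allocation(vals, lambda u: log(u) if u > 0 else float("-inf"))
util = welfarist_allocation(vals, lambda u: u)  # utilitarian rule: f is the identity
print(mnw, util)
# MNW leaves agent 1 a good (and is EF1 here); the utilitarian rule gives agent 1 nothing.
```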
arXiv:2412.16132v1 Announce Type: cross Abstract: We study mechanism design when agents hold private information about both their preferences and a common payoff-relevant state. We show that standard message-driven mechanisms cannot implement socially efficient allocations when agents have multidimensional types, even under favorable conditions. To overcome this limitation, we propose data-driven mechanisms that leverage additional post-allocation information, modeled as an estimator of the payoff-relevant state. Our data-driven mechanisms extend the classic Vickrey-Clarke-Groves class. We show that they achieve exact implementation in posterior equilibrium when the state is either fully revealed or the utility is linear in an unbiased estimator. We also show that they achieve approximate implementation with a consistent estimator, converging to exact implementation as the estimator converges, and present bounds on the convergence rate. We demonstrate applications to digital advertising auctions and large language model (LLM)-based mechanisms, where user engagement naturally reveals relevant information.
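The data-driven mechanisms above extend the classic Vickrey-Clarke-Groves class. As a baseline only, here is a minimal sketch of classic VCG for a single item: the highest bidder wins and pays the externality she imposes on the others, i.e., the second-highest bid. The paper's extension to estimated payoff-relevant states is not reproduced here.

```python
def vcg_single_item(bids):
    """Classic VCG for one item: allocate to the highest bidder and charge the
    welfare the others lose from her presence (the second-highest bid)."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    others = [b for i, b in enumerate(bids) if i != winner]
    payment = max(others) if others else 0.0
    return winner, payment

print(vcg_single_item([3.0, 7.5, 5.0]))  # (1, 5.0): bidder 1 wins and pays 5.0
```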