econ.GN

10 posts

arXiv:2501.00382v1 Announce Type: cross Abstract: This paper advances empirical demand analysis by integrating multimodal product representations derived from artificial intelligence (AI). Using a detailed dataset of toy cars on Amazon.com, we combine text descriptions, images, and tabular covariates to represent each product using transformer-based embedding models. These embeddings capture nuanced attributes, such as quality, branding, and visual characteristics, that traditional methods often struggle to summarize. Moreover, we fine-tune these embeddings for causal inference tasks. We show that the resulting embeddings substantially improve the predictive accuracy of sales ranks and prices and that they lead to more credible causal estimates of price elasticity. Notably, we uncover strong heterogeneity in price elasticity driven by these product-specific features. Our findings illustrate that AI-driven representations can enrich and modernize empirical demand analysis. The insights generated may also prove valuable for applied causal inference more broadly.
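
The paper's actual pipeline (fine-tuned transformer embeddings inside a debiased ML procedure) is more involved, but a minimal sketch of the partialling-out idea is below, with random vectors standing in for the multimodal embeddings and scikit-learn models as the nuisance learners; all data and parameter values are simulated.

```python
# Sketch: partialling-out (Robinson-style) estimate of price elasticity with
# embedding controls. Shapes, models, and data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, d = 2000, 64                      # products, embedding dimension
X = rng.normal(size=(n, d))          # stand-in for text+image embeddings
log_price = 0.5 * X[:, 0] + rng.normal(scale=0.3, size=n)
log_sales = -1.2 * log_price + X[:, 1] + rng.normal(scale=0.5, size=n)

# Cross-fitted nuisance predictions: E[log_sales | X] and E[log_price | X]
m_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, log_sales, cv=5)
g_hat = cross_val_predict(RandomForestRegressor(n_estimators=100), X, log_price, cv=5)

# Residual-on-residual regression recovers the elasticity coefficient
v = log_price - g_hat
u = log_sales - m_hat
elasticity = (v @ u) / (v @ v)
print(f"estimated price elasticity: {elasticity:.2f}")   # should be near -1.2
```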

Philipp Bach, Victor Chernozhukov, Sven Klaassen, Martin Spindler, Jan Teichert-Kluge, Suhas Vijaykumar
1/3/2025

arXiv:2412.20447v2 Announce Type: replace Abstract: The Bitcoin Network is a sophisticated accounting system that allows its underlying cryptocurrency to be trusted even in the absence of a reliable financial authority. Given its undeniable success, the technology, generally referred to as blockchain, has also been proposed as a means to improve legacy accounting systems. Accounting for real-world data, however, requires the intervention of a third party known as an oracle, which, since it lacks the characteristics of a blockchain, could reduce the expected benefits of integration. Through a systematic review of the literature, this study investigates whether papers concerning blockchain integration in accounting consider and address the limitations posed by oracles. A broad overview of the limitations that emerge in the literature is provided, distinguished according to the specific accounting integration. The results support the view that although research on the subject includes numerous articles, studies that actually consider oracle limitations are lacking. Interestingly, despite the scarcity of papers addressing oracles across accounting sectors, ESG reporting already shows interesting workarounds for oracle limitations, with permissioned chains envisioned as valid support for the safe storage of sustainability data.

Giulio Caldarelli
1/3/2025

arXiv:2412.18032v1 Announce Type: cross Abstract: There is growing concern about our vulnerability to space weather hazards and the disruption critical infrastructure failures could cause to society and the economy. However, the socio-economic impacts of space weather hazards, such as from geomagnetic storms, remain under-researched. This study introduces a novel framework to estimate the economic impacts of electricity transmission infrastructure failure due to space weather. By integrating existing geophysical and geomagnetically induced current (GIC) estimation models with a newly developed geospatial model of the Continental United States power grid, GIC vulnerabilities are assessed for a range of space weather scenarios. The approach evaluates multiple power network architectures, incorporating input-output economic modeling to translate business and population disruptions into macroeconomic impacts from GIC-related thermal heating failures. The results indicate a daily GDP loss ranging from 6 billion USD to over 10 billion USD. Even under conservative GIC thresholds (75 A/ph) aligned with thermal withstand limits from the North American Electric Reliability Corporation (NERC), significant economic disruptions are evident. This study is limited by its restriction to thermal heating analysis, though GICs can also affect the grid through other pathways, such as voltage instability and harmonic distortions. Addressing these other failure mechanisms needs to be the focus of future research.
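
As a rough illustration of how input-output modeling translates sectoral disruptions into macroeconomic losses, here is a toy Leontief calculation with three aggregated sectors; the coefficient matrix, demand figures, and outage shares are invented for illustration and are not the study's calibrated US tables.

```python
# Toy Leontief input-output calculation: a power-outage shock to final demand
# propagates through intersectoral linkages into a daily output loss.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],    # illustrative technical coefficients
              [0.15, 0.10, 0.10],    # (3 aggregated sectors)
              [0.05, 0.25, 0.15]])
daily_final_demand = np.array([4.0, 6.0, 8.0])          # billion USD per day
leontief_inverse = np.linalg.inv(np.eye(3) - A)
baseline_output = leontief_inverse @ daily_final_demand

# Suppose the storm suppresses final demand in each sector by these shares
# (a crude stand-in for business and population disruption).
outage_share = np.array([0.30, 0.10, 0.20])
shocked_output = leontief_inverse @ (daily_final_demand * (1 - outage_share))

daily_loss = baseline_output.sum() - shocked_output.sum()
print(f"illustrative daily output loss: {daily_loss:.1f} billion USD")
```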

Edward J. Oughton, Dennies K. Bor, Michael Wiltberger, Robert Weigel, C. Trevor Gaunt, Ridvan Dogan, Liling Huang
12/25/2024

arXiv:2412.18337v1 Announce Type: cross Abstract: AI-generated content (AIGC), such as advertisement copy, product descriptions, and social media posts, is becoming ubiquitous in business practices. However, the value of AI-generated metadata, such as titles, on user-generated content (UGC) platforms remains unclear. To address this gap, we conducted a large-scale field experiment on a leading short-video platform in Asia, providing about 1 million users with access to AI-generated titles for their uploaded videos. Our findings show that the provision of AI-generated titles significantly boosted content consumption, increasing valid watches by 1.6% and watch duration by 0.9%. When producers adopted these titles, these increases jumped to 7.1% and 4.1%, respectively. This viewership boost was largely attributable to the generative AI (GAI) tool increasing the likelihood of a video having a title by 41.4%. The effect was more pronounced for groups more affected by metadata sparsity. Mechanism analysis revealed that AI-generated metadata improved user-video matching accuracy in the platform's recommender system. Interestingly, for a video whose producer would have posted a title anyway, adopting the AI-generated title decreased its viewership on average, implying that AI-generated titles may be of lower quality than human-generated ones. However, when producers chose to co-create with GAI and significantly revised the AI-generated titles, the videos outperformed their counterparts with either fully AI-generated or fully human-generated titles, showcasing the benefits of human-AI co-creation. This study highlights the value of AI-generated metadata and human-AI metadata co-creation in enhancing user-content matching and content consumption on UGC platforms.
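
The experiment's analysis is not published as code here, but the headline intent-to-treat comparison can be sketched as a simple regression on simulated data; the column names treated, adopted, and valid_watches are hypothetical placeholders, not the study's variables.

```python
# Minimal sketch of an intent-to-treat estimate for a title-provision experiment.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({"treated": rng.integers(0, 2, n)})    # offered AI titles or not
df["adopted"] = df["treated"] * rng.binomial(1, 0.4, n)  # adoption only possible if treated
base = rng.lognormal(mean=3.0, sigma=1.0, size=n)
df["valid_watches"] = base * (1 + 0.016 * df["treated"] + 0.05 * df["adopted"])

# Intent-to-treat effect of offering AI-generated titles (log outcome, so the
# coefficient is roughly a percentage effect); robust standard errors.
itt = smf.ols("np.log(valid_watches) ~ treated", data=df).fit(cov_type="HC1")
print(itt.params["treated"], itt.bse["treated"])
```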

Xinyi Zhang, Chenshuo Sun, Renyu Zhang, Khim-Yong Goh
12/25/2024

arXiv:2412.16166v1 Announce Type: cross Abstract: Because climate change has become one of the most pressing problems of recent years, many countries have adopted specialized research agendas on how to mitigate it. Within this discussion, the role of advanced technologies in achieving carbon neutrality has drawn particular attention. While several studies have investigated how AI and digital innovations could be used to reduce the environmental footprint, the actual influence of AI in reducing CO2 emissions (a proxy for carbon footprint) has yet to be investigated. This paper studies the role of advanced technologies in general, and Artificial Intelligence (AI) and ICT use in particular, in advancing carbon neutrality in the United States, between 2021. Second, this paper examines how Stock Market Growth, ICT use, Gross Domestic Product (GDP), and Population affect CO2 emissions using the STIRPAT model. After examining stationarity among the variables using a variety of unit root tests, this study concluded that there are no unit root problems across the variables, with a mixed order of integration. The ARDL bounds test for cointegration revealed that the variables in this study have a long-run relationship. Moreover, the short- and long-run estimates from the ARDL model indicated that economic growth, stock market capitalization, and population contributed significantly to carbon emissions in both the short run and the long run. Conversely, AI and ICT use significantly reduced carbon emissions over both horizons. The findings were confirmed to be robust using FMOLS, DOLS, and CCR estimations. Finally, diagnostic tests indicated the absence of serial correlation, heteroscedasticity, and specification errors; the model is thus robust.
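
A minimal sketch of the workflow described (unit root checks followed by an ARDL model in levels) is shown below using statsmodels on simulated series; the variables, lag orders, and coefficients are illustrative and do not reproduce the paper's estimates, and the bounds test, FMOLS, DOLS, and CCR steps are omitted.

```python
# Unit root tests, then an ARDL(1, 1) in levels, on simulated series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(2)
T = 60
gdp = np.cumsum(rng.normal(0.5, 1.0, T))          # I(1)-like regressors
ai_use = np.cumsum(rng.normal(0.2, 0.5, T))
co2 = 0.8 * gdp - 0.3 * ai_use + np.cumsum(rng.normal(0, 0.5, T))
data = pd.DataFrame({"co2": co2, "gdp": gdp, "ai_use": ai_use})

# Step 1: ADF unit root checks on levels and first differences
for col in data:
    print(col, "level p =", round(adfuller(data[col])[1], 3),
          "diff p =", round(adfuller(data[col].diff().dropna())[1], 3))

# Step 2: ARDL with one lag of the dependent variable and of each regressor
model = ARDL(data["co2"], lags=1, exog=data[["gdp", "ai_use"]], order=1)
print(model.fit().summary())
```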

Azizul Hakim Rafi, Abdullah Al Abrar Chowdhury, Adita Sultana, Abdulla All Noman
12/24/2024

arXiv:2412.16174v1 Announce Type: new Abstract: With consistent growth in the Indian economy, Initial Public Offerings (IPOs) have become a popular avenue for investment. With modern technology simplifying investments, more investors are interested in making data-driven decisions when subscribing to IPOs. In this paper, we describe a machine learning and natural language processing based approach for estimating whether an IPO will be successful. We extensively study the impact of various factors mentioned in the IPO filing prospectus, macroeconomic conditions, market conditions, Grey Market Price, etc. on the success of an IPO. We create two new datasets relating to the IPOs of Indian companies. Finally, we investigate how information from multiple modalities (text, images, numbers, and categorical features) can be used to estimate the direction and underpricing with respect to the opening, high, and closing prices of the stock on the IPO listing day.
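
A minimal baseline in the spirit of the described approach, combining prospectus text with numeric covariates in a single scikit-learn pipeline; the column names, toy rows, and target definition are invented for illustration.

```python
# Simple multimodal baseline: TF-IDF text features + scaled numeric features
# feeding a logistic regression for listing-day direction.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "prospectus_text": ["strong order book and brand", "high debt, weak margins",
                        "grey market premium robust", "oversubscribed retail book"],
    "grey_market_premium": [25.0, -5.0, 40.0, 15.0],
    "issue_size_cr": [500.0, 1200.0, 300.0, 800.0],
    "listed_above_issue": [1, 0, 1, 1],        # target: direction on listing day
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "prospectus_text"),
    ("num", StandardScaler(), ["grey_market_premium", "issue_size_cr"]),
])
clf = Pipeline([("features", features), ("model", LogisticRegression(max_iter=1000))])
clf.fit(df.drop(columns="listed_above_issue"), df["listed_above_issue"])
print(clf.predict(df.drop(columns="listed_above_issue")))
```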

Sohom Ghosh, Arnab Maji, N Harsha Vardhan, Sudip Kumar Naskar
12/24/2024

arXiv:2406.15593v2 Announce Type: replace Abstract: Social scientists and the general public often analyze contemporary events by drawing parallels with the past, a process complicated by the vast, noisy, and unstructured nature of historical texts. For example, hundreds of millions of page scans from historical newspapers have been noisily transcribed. Traditional sparse methods for searching for relevant material in these vast corpora, e.g., with keywords, can be brittle given complex vocabularies and OCR noise. This study introduces News Deja Vu, a novel semantic search tool that leverages transformer large language models and a bi-encoder approach to identify historical news articles that are most similar to modern news queries. News Deja Vu first recognizes and masks entities, in order to focus on broader parallels rather than the specific named entities being discussed. Then, a contrastively trained, lightweight bi-encoder retrieves the historical articles that are semantically most similar to a modern query, illustrating how phenomena that might seem unique to the present have varied historical precedents. Aimed at social scientists, the user-friendly News Deja Vu package is designed to be accessible to those who lack extensive familiarity with deep learning. It works with large text datasets, and we show how it can be deployed to a massive-scale corpus of historical, open-source news articles. While human expertise remains important for drawing deeper insights, News Deja Vu provides a powerful tool for exploring parallels in how people have perceived past and present.
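
A rough sketch of the mask-then-retrieve idea using an off-the-shelf bi-encoder (all-MiniLM-L6-v2 from sentence-transformers) rather than the package's contrastively trained model; the capitalized-word mask is a crude stand-in for proper NER-based entity masking.

```python
# Mask entities, embed with a bi-encoder, retrieve the closest historical articles.
import re
from sentence_transformers import SentenceTransformer, util

def mask_entities(text: str) -> str:
    # Naive stand-in for NER: mask capitalized tokens so retrieval focuses on
    # the event pattern rather than the specific names involved.
    return re.sub(r"\b[A-Z][a-z]+\b", "[MASK]", text)

historical = [
    "The Bank of England raised rates sharply as prices of bread soared.",
    "A new telegraph line linked the continents, shrinking the world of commerce.",
    "Crowds gathered as the railway company declared sudden bankruptcy.",
]
query = "Federal Reserve hikes interest rates amid surging food prices."

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = model.encode([mask_entities(t) for t in historical], convert_to_tensor=True)
query_emb = model.encode(mask_entities(query), convert_to_tensor=True)

hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), historical[hit["corpus_id"]])
```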

Brevin Franklin, Emily Silcock, Abhishek Arora, Tom Bryan, Melissa Dell
12/23/2024

arXiv:2404.17227v2 Announce Type: replace-cross Abstract: In the rapidly evolving cryptocurrency landscape, trust is a critical yet underexplored factor shaping market behaviors and driving user preferences between centralized exchanges (CEXs) and decentralized exchanges (DEXs). Despite its importance, trust remains challenging to measure, limiting the study of its effects on market dynamics. The collapse of FTX, a major CEX, provides a unique natural experiment to examine the measurable impacts of trust and its sudden erosion on the cryptocurrency ecosystem. This pivotal event raised questions about the resilience of centralized trust systems and accelerated shifts toward decentralized alternatives. This research investigates the impacts of the FTX collapse on user trust, focusing on token valuation, trading flows, and sentiment dynamics. Employing causal inference methods, including Regression Discontinuity Design (RDD) and Difference-in-Differences (DID), we reveal significant declines in WETH prices and NetFlow from CEXs to DEXs, signaling a measurable transfer of trust. Additionally, natural language processing methods, including topic modeling and sentiment analysis, uncover the complexities of user responses, highlighting shifts from functional discussions to emotional fragmentation in Binance's community, while Uniswap's sentiment exhibits a gradual upward trend. Despite data limitations and external influences, the findings underscore the intricate interplay between trust, sentiment, and market behavior in the cryptocurrency ecosystem. By bridging blockchain analytics, behavioral finance, and decentralized finance (DeFi), this study contributes to interdisciplinary research, offering a deeper understanding of distributed trust mechanisms and providing critical insights for future investigations into the socio-technical dimensions of trust in digital economies.
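
The difference-in-differences piece of the analysis can be sketched as a two-group, pre/post regression on simulated panel data; the unit names, collapse date, and effect sizes below are placeholders, not the study's estimates.

```python
# Difference-in-differences sketch: treated venue x post-collapse indicator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_days, units = 60, ["cex_flow", "dex_flow"]
rows = []
for unit in units:
    for day in range(n_days):
        post = int(day >= 30)                  # collapse event at day 30
        treated = int(unit == "cex_flow")
        y = 100 + 5 * treated - 2 * post - 20 * treated * post + rng.normal(0, 3)
        rows.append({"unit": unit, "day": day, "post": post, "treated": treated, "netflow": y})
panel = pd.DataFrame(rows)

did = smf.ols("netflow ~ treated * post", data=panel).fit(cov_type="HC1")
print(did.params["treated:post"])              # DID estimate (around -20 here)
```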

Xintong Wu, Wanlin Deng, Yutong Quan, Luyao Zhang
12/23/2024

arXiv:2412.15239v1 Announce Type: new Abstract: Understanding when and why consumers engage with stories is crucial for content creators and platforms. While existing theories suggest that audience beliefs about what is going to happen should play an important role in engagement decisions, empirical work has mostly focused on developing techniques to directly extract features from actual content, rather than capturing forward-looking beliefs, due to the lack of a principled way to model such beliefs in unstructured narrative data. To complement existing feature extraction techniques, this paper introduces a novel framework that leverages large language models to model audience forward-looking beliefs about how stories might unfold. Our method generates multiple potential continuations for each story and extracts features related to expectations, uncertainty, and surprise using established content analysis techniques. Applying our method to over 30,000 book chapters from Wattpad, we demonstrate that our framework complements existing feature engineering techniques by amplifying their marginal explanatory power on average by 31%. The results reveal that different types of engagement (continuing to read, commenting, and voting) are driven by distinct combinations of current and anticipated content features. Our framework provides a novel way to study and explore how audience forward-looking beliefs shape their engagement with narrative media, with implications for marketing strategy in content-focused industries.
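
A condensed sketch of the belief-feature idea: sample several continuations for a chapter, then score uncertainty as disagreement among them and surprise as the distance between the actual next chapter and the sampled ones. The generate_continuations stub is hypothetical (the paper uses a large language model here), and TF-IDF similarity stands in for richer content analysis.

```python
# Belief features from sampled continuations: uncertainty and surprise.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def generate_continuations(chapter: str, k: int = 4) -> list[str]:
    # Hypothetical placeholder for an LLM call that samples k plausible next chapters.
    return [f"{chapter} ... and then outcome {i} unfolds." for i in range(k)]

chapter = "The heroine discovers the letter hidden in the attic."
actual_next = "She burns the letter and tells no one, changing everything."
continuations = generate_continuations(chapter)

vec = TfidfVectorizer().fit(continuations + [actual_next])
C = vec.transform(continuations)
a = vec.transform([actual_next])

pairwise = cosine_similarity(C)                      # agreement among continuations
uncertainty = 1 - pairwise[np.triu_indices_from(pairwise, k=1)].mean()
surprise = 1 - cosine_similarity(a, C).mean()        # actual vs. expected continuations
print(round(uncertainty, 3), round(surprise, 3))
```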

Hortense Fong, George Gui
12/23/2024

arXiv:2412.15433v1 Announce Type: new Abstract: We present a quantitative model for tracking dangerous AI capabilities over time. Our goal is to help the policy and research community visualise how dangerous capability testing can give us an early warning about approaching AI risks. We first use the model to provide a novel introduction to dangerous capability testing and how this testing can directly inform policy. Decision makers in AI labs and government often set policy that is sensitive to the estimated danger of AI systems, and may wish to set policies that condition on the crossing of a set threshold for danger. The model helps us to reason about these policy choices. We then run simulations to illustrate how we might fail to test for dangerous capabilities. To summarise, failures in dangerous capability testing may manifest in two ways: higher bias in our estimates of AI danger, or larger lags in threshold monitoring. We highlight two drivers of these failure modes: uncertainty around dynamics in AI capabilities and competition between frontier AI labs. Effective AI policy demands that we address these failure modes and their drivers. Even if the optimal targeting of resources is challenging, we show how delays in testing can harm AI policy. We offer preliminary recommendations for building an effective testing ecosystem for dangerous capabilities and advise on a research agenda.
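
A toy simulation in the spirit of the model described: a latent danger level grows over time, periodic tests observe it with noise and a downward bias, and we measure how late the monitored estimate crosses a policy threshold relative to the true level; all parameters are illustrative.

```python
# Toy capability-monitoring simulation: biased, infrequent tests produce a
# lag between the true threshold crossing and the detected crossing.
import numpy as np

rng = np.random.default_rng(4)
T, threshold = 200, 10.0
true_danger = np.cumsum(rng.normal(0.08, 0.05, T))     # latent capability/danger path

test_every = 10                                        # tests run only periodically
bias, noise = -0.5, 0.4                                # tests understate the danger
estimate = np.full(T, -np.inf)
last = -np.inf
for t in range(T):
    if t % test_every == 0:
        last = true_danger[t] + bias + rng.normal(0, noise)
    estimate[t] = last                                 # decision makers see the latest test

true_cross = int(np.argmax(true_danger >= threshold))
est_cross = int(np.argmax(estimate >= threshold)) if (estimate >= threshold).any() else T
print("true crossing:", true_cross, "detected crossing:", est_cross,
      "monitoring lag:", est_cross - true_cross)
```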

Paolo Bova, Alessandro Di Stefano, The Anh Han
12/23/2024