cs.SI
arXiv:2501.01333v1 Announce Type: new Abstract: Recent advances in cover song identification have shown great success. However, models are usually tested on a fixed set of datasets which rely on the online cover song database SecondHandSongs. It is unclear how well models perform on cover songs from online video platforms, which may exhibit unexpected alterations. In this paper, we annotate a subset of songs from YouTube sampled by a multi-modal uncertainty sampling approach and evaluate state-of-the-art models. We find that existing models achieve significantly lower ranking performance on our dataset than on a community dataset. We additionally measure the performance on different types of versions (e.g., instrumental versions) and find several types that are particularly hard to rank. Lastly, we provide a taxonomy of alterations in cover versions on the web.
arXiv:2501.00855v1 Announce Type: new Abstract: Chatter on social media is 20% bots and 80% humans. Chatter by bots and humans is consistently different: bots tend to use linguistic cues that can be easily automated, while humans use cues that require dialogue understanding. Bots use words that match the identities they choose to present, while humans may send messages that are not related to the identities they present. Bots and humans differ in their communication structure: sampled bots have a star interaction structure, while sampled humans have a hierarchical structure. These conclusions are based on a large-scale analysis of social media tweets from ~200 million users across 7 events. Social media bots took the world by storm when social-cybersecurity researchers realized that social media users consisted not only of humans but also of artificial agents called bots. These bots wreak havoc online by spreading disinformation and manipulating narratives. Most research on bots is based on special-purpose definitions, mostly predicated on the event studied. This article begins by asking, "What is a bot?", and we study the underlying principles of how bots differ from humans. We develop a first-principles definition of a social media bot. With this definition as a premise, we systematically compare characteristics between bots and humans across global events, and reflect on how the software-programmed bot is an Artificial Intelligence algorithm, and its potential for evolution as technology advances. Based on our results, we provide recommendations for the use and regulation of bots. Finally, we discuss open challenges and future directions: Detect, to systematically identify these automated and potentially evolving bots; Differentiate, to evaluate the goodness of the bot in terms of its content postings and relationship interactions; Disrupt, to moderate the impact of malicious bots.
arXiv:2411.12329v2 Announce Type: replace Abstract: Graph clustering is an unsupervised machine learning method that partitions the nodes in a graph into different groups. Despite achieving significant progress in exploiting both attributed and structured data information, graph clustering methods often face practical challenges related to data isolation. Moreover, the absence of collaborative methods for graph clustering limits their effectiveness. In this paper, we propose a collaborative graph clustering framework for attributed graphs, supporting attributed graph clustering over vertically partitioned data with different participants holding distinct features of the same data. Our method leverages a novel technique that reduces the sample space, improving the efficiency of the attributed graph clustering method. Furthermore, we compare our method to its centralized counterpart under a proximity condition, demonstrating that the successful local results of each participant contribute to the overall success of the collaboration. We fully implement our approach and evaluate its utility and efficiency by conducting experiments on four public datasets. The results demonstrate that our method achieves comparable accuracy levels to centralized attributed graph clustering methods. Our collaborative graph clustering framework provides an efficient and effective solution for graph clustering challenges related to data isolation.
arXiv:2112.03543v3 Announce Type: replace Abstract: Communication noise is a common feature in several real-world scenarios where systems of agents need to communicate in order to pursue some collective task. In particular, many biologically inspired systems that try to achieve agreements on some opinion must implement resilient dynamics that are not strongly affected by noisy communications. In this work, we study the popular 3-Majority dynamics, an opinion dynamics which has been proved to be an efficient protocol for the majority consensus problem, in which we introduce a simple feature of uniform communication noise, following (d'Amore et al. 2020). We prove that in the fully connected communication network of n agents and in the binary opinion case, the process induced by the 3-Majority dynamics exhibits a phase transition. For a noise probability $p < 1/3$, the dynamics reaches an almost-consensus metastable phase with high probability, whereas for $p > 1/3$, no form of consensus is possible, and any information regarding the initial majority opinion is lost in logarithmic time with high probability. Although more communications per round are allowed, the 3-Majority dynamics surprisingly turns out to be less resilient to noise than the Undecided-State dynamics (d'Amore et al. 2020), whose noise threshold value is $p = 1/2$.
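The 3-Majority dynamics with uniform noise is simple enough to simulate directly. The following is a minimal sketch, assuming the standard formulation of the protocol: in each synchronous round every agent samples three agents uniformly at random and adopts the majority opinion among the three samples, and each sampled message is independently replaced by a uniform random opinion with probability p (parameter names here are illustrative, not from the paper):

```python
import random

def three_majority_step(opinions, p):
    """One synchronous round of 3-Majority with uniform communication noise.

    Each agent samples three agents uniformly at random and adopts the
    majority opinion among the samples; each sampled message is replaced
    by a uniform random binary opinion with probability p.
    """
    n = len(opinions)

    def noisy_sample():
        op = opinions[random.randrange(n)]
        return random.randint(0, 1) if random.random() < p else op

    new = []
    for _ in range(n):
        samples = [noisy_sample() for _ in range(3)]
        new.append(1 if sum(samples) >= 2 else 0)
    return new

def run(n=1000, p=0.1, rounds=50, majority_frac=0.7):
    """Run the dynamics from an initial 70/30 split; return the final
    fraction holding the initial majority opinion."""
    random.seed(0)
    k = int(n * majority_frac)
    opinions = [1] * k + [0] * (n - k)
    for _ in range(rounds):
        opinions = three_majority_step(opinions, p)
    return sum(opinions) / n
```

Running this below the threshold (e.g. p = 0.1) drives the population toward an almost-consensus on the initial majority, while above it (e.g. p = 0.45) the majority fraction drifts to roughly 1/2, illustrating the phase transition the abstract describes.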
arXiv:2501.01203v1 Announce Type: new Abstract: Academic journal recommendation requires effectively combining structural understanding of scholarly networks with interpretable recommendations. While graph neural networks (GNNs) and large language models (LLMs) excel in their respective domains, current approaches often fail to achieve true integration at the reasoning level. We propose HetGCoT-Rec, a framework that deeply integrates heterogeneous graph transformer with LLMs through chain-of-thought reasoning. Our framework features two key technical innovations: (1) a structure-aware mechanism that transforms heterogeneous graph neural network learned subgraph information into natural language contexts, utilizing predefined metapaths to capture academic relationships, and (2) a multi-step reasoning strategy that systematically embeds graph-derived contexts into the LLM's stage-wise reasoning process. Experiments on a dataset collected from OpenAlex demonstrate that our approach significantly outperforms baseline methods, achieving 96.48% Hit rate and 92.21% H@1 accuracy. Furthermore, we validate the framework's adaptability across different LLM architectures, showing consistent improvements in both recommendation accuracy and explanation quality. Our work demonstrates an effective approach for combining graph-structured reasoning with language models for interpretable academic venue recommendations.
arXiv:2501.01031v1 Announce Type: new Abstract: Cultural values alignment in Large Language Models (LLMs) is a critical challenge due to their tendency to embed Western-centric biases from training data, leading to misrepresentations and fairness issues in cross-cultural contexts. Recent approaches, such as role-assignment and few-shot learning, often struggle with reliable cultural alignment as they heavily rely on pre-trained knowledge, lack scalability, and fail to capture nuanced cultural values effectively. To address these issues, we propose ValuesRAG, a novel and effective framework that applies Retrieval-Augmented Generation (RAG) with in-context learning to integrate cultural and demographic knowledge dynamically during text generation. Leveraging the World Values Survey (WVS) dataset, ValuesRAG first generates summaries of values for each individual. Subsequently, we curate several representative regional datasets to serve as test sets, retrieve relevant value summaries based on demographic features, and apply a reranking step to select the top-k most relevant summaries. ValuesRAG consistently outperforms baseline methods, both in the main experiment and in the ablation study where only the values summary was provided, highlighting ValuesRAG's potential to foster culturally aligned AI systems and enhance the inclusivity of AI-driven applications.
arXiv:2310.10114v3 Announce Type: replace Abstract: In the node classification task, it is natural to presume that densely connected nodes tend to exhibit similar attributes. Given this, it is crucial to first define what constitutes a dense connection and to develop a reliable mathematical tool for assessing node cohesiveness. In this paper, we propose a probability-based objective function for semi-supervised node classification that takes advantage of higher-order networks' capabilities. The proposed function reflects the philosophy aligned with the intuition behind classifying within higher order networks, as it is designed to reduce the likelihood of nodes interconnected through higher-order networks bearing different labels. Additionally, we propose the Stochastic Block Tensor Model (SBTM) as a graph generation model designed specifically to address a significant limitation of the traditional stochastic block model, which does not adequately represent the distribution of higher-order structures in real networks. We evaluate the objective function using networks generated by the SBTM, which include both balanced and imbalanced scenarios. Furthermore, we present an approach that integrates the objective function with graph neural network (GNN)-based semi-supervised node classification methodologies, aiming for additional performance gains. Our results demonstrate that in challenging classification scenarios--characterized by a low probability of homo-connections, a high probability of hetero-connections, and limited prior node information--models based on the higher-order network outperform pairwise interaction-based models. Furthermore, experimental results suggest that integrating our proposed objective function with existing GNN-based node classification approaches enhances classification performance by efficiently learning higher-order structures distributed in the network.
arXiv:2406.09169v2 Announce Type: replace Abstract: Real-world networks are sparse. As we show in this article, even when a large number of interactions is observed, most node pairs remain disconnected. We demonstrate that classical multi-edge network models, such as the $G(N,p)$, configuration models, and stochastic block models, fail to accurately capture this phenomenon. To mitigate this issue, zero-inflation must be integrated into these traditional models. Through zero-inflation, we incorporate a mechanism that accounts for the excess number of zeroes (disconnected pairs) observed in empirical data. By performing an analysis on all the datasets from the Sociopatterns repository, we illustrate how zero-inflated models more accurately reflect the sparsity and heavy-tailed edge count distributions observed in empirical data. Our findings underscore that failing to account for these ubiquitous properties in real-world networks inadvertently leads to biased models that do not accurately represent complex systems and their dynamics.
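The zero-inflation mechanism can be illustrated with a zero-inflated Poisson model for pairwise edge counts. This is a generic sketch, not the paper's exact estimators: it assumes a simple mixture in which a node pair is structurally disconnected with probability pi and otherwise has a Poisson(lam) edge count, and fits it with a crude grid search (real analyses would use EM or a proper optimizer):

```python
import math
import random

def poisson_sample(lam):
    """Knuth's algorithm for sampling from Poisson(lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def zip_loglik(counts, pi, lam):
    """Log-likelihood under a zero-inflated Poisson: with probability pi a
    node pair is structurally disconnected (count 0); otherwise its edge
    count is Poisson(lam)."""
    ll = 0.0
    for k in counts:
        if k == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) - lam + k * math.log(lam) - math.lgamma(k + 1)
    return ll

def fit_zip(counts, grid=50):
    """Grid-search fit over pi, with lam moment-matched via
    E[count] = (1 - pi) * lam. Returns (pi_hat, lam_hat, loglik)."""
    mean = sum(counts) / len(counts)
    best = (0.0, mean, zip_loglik(counts, 1e-12, mean))  # plain Poisson baseline
    for i in range(1, grid):
        pi = i / grid
        lam = mean / (1 - pi)
        ll = zip_loglik(counts, pi, lam)
        if ll > best[2]:
            best = (pi, lam, ll)
    return best
```

On data with a large excess of zeroes, the zero-inflated fit dominates the plain Poisson fit by a wide likelihood margin, which is the model-selection signal the abstract describes for real multi-edge networks.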
arXiv:2411.13134v2 Announce Type: replace Abstract: In historical studies, the older the sources, the more common it is to have access to data that are only partial, and/or unreliable or imprecise. This can make it difficult, or even impossible, to perform certain tasks of interest, such as the segmentation of some urban space based on the location of its constituting elements. Indeed, traditional approaches to tackle this specific task require knowing the position of all these elements before clustering them. Yet, alternative information is sometimes available, which can be leveraged to address this challenge. For instance, in the Middle Ages, land registries typically do not provide exact addresses, but rather locate spatial objects relative to each other, e.g. x being to the North of y. Spatial graphs are particularly adapted to model such spatial relationships, called confronts, which is why we propose their use over standard tabular databases. However, historical data are rich and allow extracting confront networks in many ways, making the process non-trivial. In this article, we propose several extraction methods and compare them to identify the most appropriate. We postulate that the best candidate must constitute an optimal trade-off between covering as much of the original data as possible, and providing the best graph-based approximation of spatial distance. Leveraging a dataset that describes Avignon during its papal period, we show empirically that the best results require ignoring some of the information present in the original historical sources, and that including additional information from secondary sources significantly improves the confront network. We illustrate the relevance of our method by partitioning the best graph that we extracted, and discussing its community structure in terms of urban space organization, from a historical perspective. Our data and source code are both publicly available online.
arXiv:2501.00191v1 Announce Type: new Abstract: We study a networked economic system composed of $n$ producers supplying a single homogeneous good to a number of geographically separated markets and of a centralized authority, called the market maker. Producers compete à la Cournot, by choosing the quantities of good to supply to each market they have access to in order to maximize their profit. Every market is characterized by its inverse demand function returning the unit price of the considered good as a function of the total available quantity. Markets are interconnected by a dispatch network through which quantities of the considered good can flow within finite capacity constraints. Such flows are determined by the market maker, who aims at maximizing a designated welfare function. We model such competition as a strategic game with $n+1$ players: the producers and the market maker. For this game, we first establish the existence of Nash equilibria under standard concavity assumptions. We then identify sufficient conditions for the game to be potential with an essentially unique Nash equilibrium. Next, we present a general result that connects the optimal action of the market maker with the capacity constraints imposed on the network. For the commonly used Walrasian welfare, our finding establishes a connection between capacity bottlenecks in the market network and the emergence of price differences between markets separated by saturated lines. This phenomenon is frequently observed in real-world scenarios, for instance in power networks. Finally, we validate the model with data from the Italian day-ahead electricity market.
arXiv:2501.00204v1 Announce Type: new Abstract: Although social bots can be engineered for constructive applications, their potential for misuse in manipulative schemes and malware distribution cannot be overlooked. This dichotomy underscores the critical need to detect social bots on social media platforms. Advances in artificial intelligence have improved the abilities of social bots, allowing them to generate content that is almost indistinguishable from human-created content. These advancements require the development of more advanced detection techniques to accurately identify these automated entities. Given the heterogeneous information landscape on social media, spanning images, texts, and user statistical features, we propose MSM-BD, a Multimodal Social Media Bot Detection approach using heterogeneous information. MSM-BD incorporates specialized encoders for heterogeneous information and introduces a cross-modal fusion technology, Cross-Modal Residual Cross-Attention (CMRCA), to enhance detection accuracy. We validate the effectiveness of our model through extensive experiments using the TwiBot-22 dataset.
arXiv:2501.00260v1 Announce Type: new Abstract: Phishing is an online identity theft technique in which attackers steal users' personal information, leading to financial losses for individuals and organizations. With the increasing adoption of smartphones, which provide functionalities similar to desktop computers, attackers are targeting mobile users. Smishing, a phishing attack carried out through the Short Messaging Service (SMS), has become prevalent due to the widespread use of SMS-based services. It involves deceptive messages designed to extract sensitive information. Despite the growing number of smishing attacks, limited research focuses on detecting these threats. This work presents a smishing detection model using a content-based analysis approach. To address the challenge posed by slang, abbreviations, and short forms in text communication, the model normalizes these into standard forms. A machine learning classifier is employed to classify messages as smishing or ham. Experimental results demonstrate the model's effectiveness, achieving classification accuracies of 97.14% for smishing and 96.12% for ham messages, with an overall accuracy of 96.20%.
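The normalize-then-classify pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's system: the slang dictionary here is a tiny hypothetical stand-in (the paper's normalization resource is presumably far larger), and a multinomial Naive Bayes stands in for the unspecified machine learning classifier:

```python
import math
import re
from collections import Counter, defaultdict

# Hypothetical slang map for illustration; a real smishing detector would
# use a much larger normalization dictionary built from SMS corpora.
SLANG = {"u": "you", "acct": "account", "pls": "please", "msg": "message"}

def normalize(text):
    """Lowercase, tokenize, and expand slang/abbreviations to standard forms."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return [SLANG.get(t, t) for t in tokens]

class NaiveBayes:
    """Minimal multinomial Naive Bayes with add-one smoothing."""

    def fit(self, texts, labels):
        self.counts = defaultdict(Counter)  # per-class token counts
        self.totals = Counter()             # per-class total token counts
        self.priors = Counter(labels)
        self.vocab = set()
        for text, y in zip(texts, labels):
            toks = normalize(text)
            self.counts[y].update(toks)
            self.totals[y] += len(toks)
            self.vocab.update(toks)
        return self

    def predict(self, text):
        n = sum(self.priors.values())
        best, best_lp = None, -math.inf
        for y in self.priors:
            lp = math.log(self.priors[y] / n)
            for t in normalize(text):
                lp += math.log((self.counts[y][t] + 1) /
                               (self.totals[y] + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

Normalization matters because "verify your acct" and "verify your account" should contribute evidence to the same feature; without it, slang variants fragment the vocabulary and weaken the classifier.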
arXiv:2501.00417v1 Announce Type: new Abstract: PageRank, widely used for network analysis in various fields, is a form of Katz centrality based on the recursive definition of importance (RDI). However, PageRank has a free parameter known as the damping factor, whose recommended value is 0.85, although its validity has never been justified theoretically or rationally. To solve this problem, we propose PureRank, a new parameter-free RDI-based importance measure. The term ``pure'' in PureRank denotes the purity of its parameter-free nature, which ensures the uniqueness of importance scores and eliminates subjective and empirical adjustments. PureRank also offers computational advantages over PageRank, such as improved parallelizability and scalability, due to the use of strongly connected component decomposition in its definition. Furthermore, we introduce the concept of splitting for networks with both positively and negatively weighted links and extend PureRank to such general networks, providing a more versatile tool for network analysis.
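For reference, the free parameter that PureRank eliminates appears explicitly in the standard PageRank iteration. The sketch below is generic textbook PageRank, not the paper's PureRank algorithm; it only makes visible how every score depends on the choice of d:

```python
def pagerank(links, n, d=0.85, tol=1e-12):
    """Standard PageRank by power iteration.

    links: list of (src, dst) directed edges over nodes 0..n-1.
    d: the damping factor, conventionally 0.85, whose value lacks a firm
       theoretical justification; PureRank is designed to remove this
       free parameter entirely.
    """
    out = [0] * n
    for s, _ in links:
        out[s] += 1
    r = [1.0 / n] * n
    while True:
        r_new = [(1 - d) / n] * n
        for s, t in links:
            r_new[t] += d * r[s] / out[s]
        # mass from dangling nodes is redistributed uniformly
        dangling = sum(r[i] for i in range(n) if out[i] == 0)
        r_new = [x + d * dangling / n for x in r_new]
        if sum(abs(a - b) for a, b in zip(r_new, r)) < tol:
            return r_new
        r = r_new
```

Re-running with a different d (say 0.5 instead of 0.85) changes every score, which is precisely the subjective dependence a parameter-free measure avoids.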
arXiv:2501.00779v1 Announce Type: new Abstract: In social online platforms, identifying influential seed users to maximize influence spread is crucial, as it can greatly diminish the cost and effort required for information dissemination. While effective, traditional methods for Multiplex Influence Maximization (MIM) have reached their performance limits, prompting the emergence of learning-based approaches. These novel methods aim for better generalization and scalability to larger graphs but face significant challenges, such as (1) inability to handle unknown diffusion patterns and (2) reliance on high-quality training samples. To address these issues, we propose the Reinforced Expert Maximization framework (REM). REM leverages a Propagation Mixture of Experts technique to encode the dynamic propagation of large multiplex networks effectively in order to generate enhanced influence propagation. Notably, REM treats a generative model as a policy to autonomously generate different seed sets and learn how to improve them from a Reinforcement Learning perspective. Extensive experiments on several real-world datasets demonstrate that REM surpasses state-of-the-art methods in terms of influence spread, scalability, and inference time in influence maximization tasks.
arXiv:2412.18145v1 Announce Type: cross Abstract: The social characteristics of players in a social network are closely associated with their network positions and relational importance. Identifying those influential players in a network is of great importance as it helps to understand how ties are formed, how information is propagated, and, in turn, can guide the dissemination of new information. Motivated by a Sina Weibo social network analysis of the 2021 Henan Floods, where response variables for each Sina Weibo user are available, we propose a new notion of supervised centrality that emphasizes the task-specific nature of a player's centrality. To estimate the supervised centrality and identify important players, we develop a novel sparse network influence regression by introducing individual heterogeneity for each user. To overcome the computational difficulties in fitting the model for large social networks, we further develop a forward-addition algorithm and show that it can consistently identify a superset of the influential Sina Weibo users. We apply our method to analyze three responses in the Henan Floods data: the number of comments, reposts, and likes, and obtain meaningful results. A further simulation study corroborates the developed method.
arXiv:2412.18464v1 Announce Type: new Abstract: Social segregation in cities, spanning racial, residential, and income dimensions, is becoming more diverse and severe. As urban spaces and social relations grow more complex, residents in metropolitan areas experience varying levels of social segregation. If left unaddressed, this could lead to increased crime rates, heightened social tensions, and other serious issues. Effectively quantifying and analyzing the structures within urban spaces and resident interactions is crucial for addressing segregation. Previous studies have mainly focused on surface-level indicators of urban segregation, lacking comprehensive analyses of urban structure and mobility. This limitation fails to capture the full complexity of segregation. To address this gap, we propose a framework named Motif-Enhanced Graph Prototype Learning (MotifGPL), which consists of three key modules: prototype-based graph structure extraction, motif distribution discovery, and urban graph structure reconstruction. Specifically, we use graph structure prototype learning to extract key prototypes from both the urban spatial graph and the origin-destination graph, incorporating key urban attributes such as points of interest, street view images, and flow indices. To enhance interpretability, the motif distribution discovery module matches each prototype with similar motifs, representing simpler graph structures reflecting local patterns. Finally, we use the motif distribution results to guide the reconstruction of the two graphs. This model enables a detailed exploration of urban spatial structures and resident mobility patterns, helping identify and analyze motif patterns that influence urban segregation, guiding the reconstruction of urban graph structures. Experimental results demonstrate that MotifGPL effectively reveals the key motifs affecting urban social segregation and offers robust guidance for mitigating this issue.
arXiv:2412.18046v1 Announce Type: new Abstract: Emojis are widely used across social media platforms but are often lost in noisy or garbled text, posing challenges for data analysis and machine learning. Conventional preprocessing approaches recommend removing such text, risking the loss of emojis and their contextual meaning. This paper proposes a three-step reverse-engineering methodology to retrieve emojis from garbled text in social media posts. The methodology also identifies reasons for the generation of such text during social media data mining. To evaluate its effectiveness, the approach was applied to 509,248 Tweets about the Mpox outbreak, a dataset referenced in about 30 prior works that failed to retrieve emojis from garbled text. Our method retrieved 157,748 emojis from 76,914 Tweets. Improvements in text readability and coherence were demonstrated through metrics such as Flesch Reading Ease, Flesch-Kincaid Grade Level, Coleman-Liau Index, Automated Readability Index, Dale-Chall Readability Score, Text Standard, and Reading Time. Additionally, the frequency of individual emojis and their patterns of usage in these Tweets were analyzed, and the results are presented.
arXiv:2408.08685v3 Announce Type: replace Abstract: Graph neural networks (GNNs) are vulnerable to adversarial attacks, especially topology perturbations, and many methods that improve the robustness of GNNs have received considerable attention. Recently, we have witnessed the significant success of large language models (LLMs), leading many to explore the great potential of LLMs on GNNs. However, they mainly focus on improving the performance of GNNs by utilizing LLMs to enhance the node features. Therefore, we ask: Will the robustness of GNNs also be enhanced with the powerful understanding and inference capabilities of LLMs? Our empirical results show that although LLMs can improve the robustness of GNNs, there is still an average decrease of 23.1% in accuracy, implying that GNNs remain extremely vulnerable to topology attacks. Therefore, another question is how to extend the capabilities of LLMs on graph adversarial robustness. In this paper, we propose an LLM-based robust graph structure inference framework, LLM4RGNN, which distills the inference capabilities of GPT-4 into a local LLM for identifying malicious edges and an LM-based edge predictor for finding missing important edges, so as to recover a robust graph structure. Extensive experiments demonstrate that LLM4RGNN consistently improves the robustness across various GNNs. Even in some cases where the perturbation ratio increases to 40%, the accuracy of GNNs is still better than that on the clean graph. The source code can be found in https://github.com/zhongjian-zhang/LLM4RGNN.
arXiv:2412.18287v1 Announce Type: new Abstract: Credit card fraud incurs a considerable cost for both cardholders and issuing banks. Contemporary methods apply machine learning-based classifiers to detect fraudulent behavior from labeled transaction records. However, labeled data usually constitute only a small proportion of billions of real transactions due to expensive labeling costs, which means such methods fail to exploit the many natural features of unlabeled data. Therefore, we propose a semi-supervised graph neural network for fraud detection. Specifically, we leverage transaction records to construct a temporal transaction graph, which is composed of temporal transactions (nodes) and interactions (edges) among them. Then we pass messages among the nodes through a Gated Temporal Attention Network (GTAN) to learn the transaction representation. We further model the fraud patterns through risk propagation among transactions. The extensive experiments are conducted on a real-world transaction dataset and two publicly available fraud detection datasets. The result shows that our proposed method, namely GTAN, outperforms other state-of-the-art baselines on three fraud detection datasets. Semi-supervised experiments demonstrate the excellent fraud detection performance of our model with only a tiny proportion of labeled data.
arXiv:2312.03779v3 Announce Type: replace Abstract: The proliferation of interactive AI like ChatGPT has fueled intense public discourse surrounding AI-generated content (AIGC). While some fear job displacement, others anticipate productivity gains. Social media provides a rich source of data reflecting public opinion, attitudes, and behaviors. By examining the factors influencing collective sentiment toward AIGC on various platforms, we can refine products, marketing, and AI models themselves. Our research utilized a novel system for real-time tracking and detailed visualization of public mood related to AIGC. This system enabled analysis of the dynamics shaping opinions on nine AIGC products across China's three leading social media sites. Our findings reveal a negative correlation between user demographics (age and education) and positive sentiment towards AIGC on Douyin, contrasting with Weibo's susceptibility to the rapid spread of extreme viewpoints. This work uniquely connects group dynamics theory with social media sentiment, offering valuable guidance for managing online opinion and tailoring targeted campaigns.