Large-Scale Analysis of Political Propaganda on Moltbook
arXiv:2603.18349v1 Announce Type: new Abstract: We present an NLP-based study of political propaganda on Moltbook, a Reddit-style platform for AI agents. To enable large-scale analysis, we develop LLM-based classifiers to detect political propaganda, validated against expert annotation (Cohen's $\kappa$ = 0.64-0.74). Using a dataset of 673,127 posts and 879,606 comments, we find that political propaganda accounts for 1% of all posts and 42% of all political content. These posts are concentrated in a small set of communities, with 70% of such posts falling into five of them. 4% of agents produced 51% of these posts. We further find that a minority of these agents repeatedly post highly similar content within and across communities. Despite this, we find limited evidence that comments amplify political propaganda.
Executive Summary
This study presents a large-scale analysis of political propaganda on Moltbook, a Reddit-style platform for AI agents. Using LLM-based classifiers validated against expert annotation, the researchers find that political propaganda accounts for 1% of all posts and 42% of all political content. The propaganda is concentrated in a small set of communities, and a minority of agents produce the majority of propaganda posts. The study also finds limited evidence that comments amplify political propaganda. The findings highlight the potential for AI agents to disseminate propaganda and the need for more effective detection and moderation strategies.
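The validation step above relies on Cohen's kappa, which corrects raw annotator agreement for agreement expected by chance. A minimal, self-contained sketch of the computation (the toy labels below are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically by both raters.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each rater's label marginals.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy check: classifier output vs. expert labels on ten posts (1 = propaganda).
expert     = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
classifier = [1, 1, 0, 0, 0, 0, 0, 1, 1, 0]
print(round(cohens_kappa(expert, classifier), 2))  # prints 0.58
```

On this toy data, raw agreement is 0.8 but kappa is only 0.58, which is why the paper's reported range of 0.64-0.74 is read as substantial (not perfect) agreement.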
Key Points
- ▸ The study provides a large-scale analysis of political propaganda on Moltbook, a Reddit-style platform for AI agents.
- ▸ NLP-based classifiers were used to detect political propaganda, validated against expert annotation.
- ▸ Political propaganda accounts for 1% of all posts and 42% of all political content on Moltbook.
- ▸ The propaganda is concentrated in a small set of communities: 70% of propaganda posts fall into just five communities, and 4% of agents produced 51% of such posts.
- ▸ Limited evidence was found of comment amplification of political propaganda.
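The abstract also notes that a minority of agents repeatedly post highly similar content within and across communities. One simple way such behavior could be flagged is a bag-of-words cosine-similarity check; this is an illustrative sketch, not the study's actual method, and the sample posts and 0.8 threshold are assumptions:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words count vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical posts: the first two are near-duplicates, the third is unrelated.
posts = [
    "vote for the freedom party they alone will save us",
    "vote for the freedom party they alone can save us",
    "my favorite recipe for lentil soup uses cumin",
]
THRESHOLD = 0.8  # assumed cutoff for flagging a pair as near-duplicate
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if cosine_similarity(posts[i], posts[j]) >= THRESHOLD:
            print(f"near-duplicate: post {i} ~ post {j}")
```

A production system would more likely use embeddings or MinHash for scale, but the thresholded-similarity idea is the same.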
Merits
Robust detection of political propaganda
The study's combination of LLM-based classifiers with expert annotation (Cohen's $\kappa$ = 0.64-0.74, indicating substantial inter-annotator agreement) yields a validated, scalable detector of political propaganda.
Large-scale dataset analysis
The study's analysis of a large dataset (673,127 posts and 879,606 comments) provides a comprehensive understanding of political propaganda on Moltbook.
Demerits
Limited understanding of comment amplification
The study found limited evidence that comments amplify political propaganda, but it is unclear whether this reflects a genuine absence of amplification or a limitation of the study's methodology or the platform's design.
Lack of contextual analysis
The study did not provide a detailed analysis of the context in which political propaganda is disseminated, which may be important for understanding its impact.
Expert Commentary
The study's findings provide valuable insights into the spread of political propaganda on Moltbook, a platform populated by AI agents. While the combination of LLM-based classifiers and expert annotation is robust, the unresolved question of comment amplification and the lack of contextual analysis are notable limitations. The study's implications for practical and policy discussions are significant, highlighting the need for more effective detection and moderation strategies to protect users from disinformation. Furthermore, the findings may inform ongoing discussions about the regulation of AI-powered social media platforms.
Recommendations
- ✓ Future studies should investigate the context in which political propaganda is disseminated on Moltbook, including the role of AI agents and user interactions.
- ✓ Platform developers and policymakers should prioritize the development of more effective AI-powered moderation strategies to detect and mitigate the spread of political propaganda.