Master Thesis

First publication DOI: 10.48548/pubdata-3178

Influencing Belief in LLM-Based Agent Networks: An Empirically Validated Simulation of Bot-Driven Manipulation

Chronological data

Date of first publication: 2026-03-19
Date of publication in PubData: 2026-03-19
Date of thesis submission: 2026-01-03
Date of defense: 2026-01-21

Language of the resource

English

Related external resources

Supplemented by DOI: 10.5281/zenodo.18134121
Burmester, J. (2026). PersuaRealSim: Simulation Data and Trained Models for Bot-Driven Belief Manipulation (Version v1) [Data set]. Zenodo.

Abstract

Large language models enable generative social simulations in which agents interact through natural language rather than predefined behavioural rules, extending classical agent-based modelling toward language-mediated interaction. Prior work has implemented stylised opinion dynamics models and platform-like social simulations augmented with LLM agents. Yet belief updating in such systems is often driven by unconstrained prompting, heuristic rules, or agent self-reports, leaving it unclear whether observed dynamics reflect human persuasion or artefacts of modelling and prompting choices. This thesis asks whether belief change induced by language-capable social bots can be analysed under empirically grounded conditions, and how bot narrative styles shape population-level belief dynamics in a Twitter-like environment. To address this, a Twitter-like generative social simulation, PersuaRealSim, is developed in which human-like generative agents and specialised bot agents interact. Stance updating is externalised to a supervised persuasion judge implemented as a RankFormer model trained on 46,846 r/ChangeMyView threads containing human-verified belief change outcomes. The resulting persuasion scores are calibrated to plausible stance-shift magnitudes and injected into a continuous stance update mechanism to ground belief change at the message level. Across two misinformation-relevant domains, simulations with four distinct bot narrative styles reveal rapid early belief change followed by asymptotic convergence, consistent with assimilative influence. Narrative styles primarily affect the speed and efficiency of early shifts: scientific-authority bots induce the strongest and fastest shifts, while emotional framing underperforms control conditions, although statistical resolution is limited by the number of simulation runs feasible under computational constraints.
Beyond substantive findings, the thesis contributes methodologically by demonstrating how external, purpose-aligned validation can be integrated directly into the causal mechanism of a generative social simulation. Overall, the work shows that generative agents can support the principled study of belief change when empirical grounding is embedded into the simulation mechanism itself, and conclusions are drawn within clearly stated methodological bounds.

Keywords

Large Language Models; Generative Social Simulation; Social Bot; Opinion Dynamics; Information Disorder

Grantor

Leuphana University Lüneburg

Study programme

Management & Data Science

More information

DDC

006.31

Creation Context

Study