Journal Article · Parallel publication · Published version
DOI: 10.48548/pubdata-3148

Avoiding algorithm errors in textual analysis: A guide to selecting software, and a research agenda toward generative artificial intelligence

Chronological data

Date of first publication: 2025-06-28
Date of publication in PubData: 2026-03-13

Language of the resource

English

Related external resources

Variant form of DOI: 10.1016/j.jbusres.2025.115571
Wobst, J., & Lueg, R. (2025). Avoiding algorithm errors in textual analysis: A guide to selecting software, and a research agenda toward generative artificial intelligence. Journal of Business Research, 199, Article 115571.
Published in: Journal of Business Research (ISSN: 0148-2963)

Abstract

The use of textual analysis is expanding in organizational research, yet software packages vary in their compatibility with complex constructs. This study helps researchers select suitable tools by focusing on phrase-based dictionary methods. We empirically evaluate four software packages—LIWC, DICTION, CAT Scanner, and a custom Python tool—using the complex construct of value-based management as a test case. The analysis shows that software from the same methodological family produces highly consistent results, while popular but mismatched tools yield significant errors such as miscounted phrases. Based on this, we develop a structured selection guideline that links construct features with software capabilities. The framework enhances construct validity, supports methodological transparency, and is applicable across disciplines. Finally, we position the approach as a bridge to AI-enabled textual analysis, including prompt-based workflows, reinforcing the continued need for theory-grounded construct design.
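The abstract's central point is that word-level tools can miscount multi-word phrases when applied to phrase-based dictionary methods. The following minimal Python sketch illustrates that distinction; the construct phrases and text are invented for illustration and are not the article's actual dictionary.

```python
import re

# Hypothetical mini-dictionary for a complex construct; entries are
# illustrative only, not taken from the article's value-based-management dictionary.
PHRASES = ["value-based management", "shareholder value", "economic value added"]

def count_phrases(text: str, phrases: list[str]) -> dict[str, int]:
    """Count exact, case-insensitive occurrences of multi-word phrases.

    A word-level tool that splits on whitespace would count the token
    'value' three times in the sample below and never match the phrases
    themselves -- the kind of algorithm error the article warns about.
    """
    lowered = text.lower()
    counts: dict[str, int] = {}
    for phrase in phrases:
        # Word boundaries keep the phrase from matching inside longer words.
        pattern = r"\b" + re.escape(phrase.lower()) + r"\b"
        counts[phrase] = len(re.findall(pattern, lowered))
    return counts

sample = ("The firm adopted value-based management and tracked "
          "economic value added alongside shareholder value.")
print(count_phrases(sample, PHRASES))
```

Matching at the phrase level rather than the token level is what distinguishes the phrase-based dictionary family the article evaluates from single-word bag-of-words tools.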

Keywords

Generative AI; Large Language Model; Textual Analysis; Software Selection; Algorithm Error; Validity; Reliability; Value-based Management

More information

Creation Context

Research