Avoiding algorithm errors in textual analysis: A guide to selecting software, and a research agenda toward generative artificial intelligence
Chronological data
Date of first publication 2025-06-28
Date of publication in PubData 2026-03-13
Language of the resource
English
Abstract
The use of textual analysis is expanding in organizational research, yet software packages vary in their compatibility with complex constructs. This study helps researchers select suitable tools by focusing on phrase-based dictionary methods. We empirically evaluate four software packages—LIWC, DICTION, CAT Scanner, and a custom Python tool—using the complex construct of value-based management as a test case. The analysis shows that software from the same methodological family produces highly consistent results, whereas popular but mismatched tools yield substantial errors, such as miscounted phrases. Building on these findings, we develop a structured selection guideline that links construct features to software capabilities. The framework enhances construct validity, supports methodological transparency, and is applicable across disciplines. Finally, we position the approach as a bridge to AI-enabled textual analysis, including prompt-based workflows, and underscore the continued need for theory-grounded construct design.
Keywords
Generative AI; Large Language Model; Textual Analysis; Software Selection; Algorithm Error; Validity; Reliability; Value-based Management
