Research
The VCEE conducts cutting-edge research in experimental and behavioral economics. We explore a wide range of topics, from political and public economics to behavioral game theory and psychology. Using laboratory, online, and field experiments, we generate evidence-based insights that advance academic knowledge and contribute to meaningful societal impact.
Highlights

Cash for Votes
Anand Murugesan and Jean-Robert Tyran conducted a field experiment in India to investigate information-based interventions to combat the widespread practice of “cash-for-votes” (C4V). Their focus was on two key areas: voters’ misconceptions about political candidates and their beliefs about fellow voters’ willingness to oppose C4V. A baseline survey revealed that voters often hold inaccurate views, overestimating the number of wealthy and criminally charged candidates in politics. Moreover, they tend to underestimate the extent to which their peers are willing to reject C4V.

Misperceptions about the welfare state: Immigration and health behavior
Christian Koch and Jean-Robert Tyran
The welfare state in modern societies is large and complex, and its functioning is therefore difficult for citizens to grasp. Public support for the welfare state and public acceptance of its policies hinge on citizens’ perceptions – including their misperceptions – of the welfare state. The project studies the malleability of these perceptions by implementing a randomized information intervention, and it allows for a detailed analysis of heterogeneity in the population by combining survey data with register data. More specifically, we use an online survey of a representative sample of the adult Austrian voting population to elicit the effects of information interventions on the perception of factual characteristics, the perception of behavioral effects, and fairness perceptions, with special emphasis on immigration- and health-related aspects of the welfare state. The combination of register and survey data allows us to provide a detailed picture of who knows and thinks what about the welfare state in Austria, and whose perceptions can be swayed. We implement both positive and negative information interventions to evaluate whether one-sided information campaigns can strengthen or undermine support for the welfare state, and to study which groups in the voting population are most receptive to such information.

Using Large Language Models for Text Classification
Can Celebi and Stefan Penczynski
We examine the use of large language models (LLMs) for text classification. We investigate whether original experimental instructions can, with moderate changes, be repurposed as prompts to achieve classification results comparable to human-coded benchmarks. Additionally, we study the impact of two prompting techniques – providing n classified examples (n-shot prompting) and requiring a justification for the classification (zero-shot Chain-of-Thought) – on classification performance. Using GPT-3.5 and GPT-4, we further examine the extent to which larger model size improves classification accuracy. To assess these factors, we classify text from four economic experiments, covering tasks of varying complexity and prevalence in pre-training data, providing insights into how task characteristics influence classification performance. We find that LLMs can accurately and cost-effectively classify text across these tasks and replicate human annotations well. Performance improves with n-shot prompting, and we observe task-dependent gains from Chain-of-Thought prompting. Our findings offer guidance for integrating LLM-based text classification into social science research.
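The n-shot approach described above can be sketched as a prompt-assembly step: the original experimental instructions become the task description, followed by n labeled examples and the text to classify. This is a minimal illustration, not the authors' implementation; the category labels, example texts, and the commented API call (model name included) are assumptions for demonstration.

```python
# Sketch of n-shot prompt construction for LLM-based text classification.
# Labels and examples here are hypothetical placeholders.

def build_nshot_prompt(instructions, examples, text):
    """Assemble a classification prompt: repurposed task instructions,
    n labeled examples, then the unlabeled text to classify."""
    lines = [instructions, ""]
    for ex_text, ex_label in examples:
        lines.append(f"Text: {ex_text}")
        lines.append(f"Label: {ex_label}")
        lines.append("")
    lines.append(f"Text: {text}")
    lines.append("Label:")  # the model is expected to complete the label
    return "\n".join(lines)

# Hypothetical usage with an OpenAI-style chat API (requires an API key):
# from openai import OpenAI
# client = OpenAI()
# prompt = build_nshot_prompt(
#     "Classify each text as COOPERATE or DEFECT.",
#     [("Let's both choose A.", "COOPERATE")],
#     "I will pick B no matter what.",
# )
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
# label = reply.choices[0].message.content.strip()
```

For the zero-shot Chain-of-Thought variant, the examples list is left empty and the prompt instead asks the model to justify its choice before stating the label.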
