Niklas M. Witzig


I am a PhD student at the Johannes Gutenberg University in Mainz, Germany, at the chair of Daniel Schunk.

My research is primarily in Behavioral Economics, with a methodological focus on structural (Bayesian) modeling of behavior, for example of the cognitive processes underlying altruistic preferences or of (behavioral) time allocation to tasks. I use machine learning in my research, for instance to predict when human chess players make mistakes or to estimate heterogeneous treatment effects in experiments.

In a second line of research, I collaborate with computer scientists on interdisciplinary projects related to fair and explainable AI, as well as the use of AI methods for sustainability.


Research Projects

Cognitive Noise and Altruistic Preferences

[arxiv] [pdf]

I study altruistic choices through the lens of a cognitively noisy decision-maker. I introduce a theoretical framework that demonstrates how increased cognitive noise can directionally affect altruistic decisions and put its implications to the test: in a laboratory experiment, participants make a series of binary choices between taking and giving monetary payments. In the treatment, to-be-calculated sums replace plain monetary payments, increasing the cognitive difficulty of choosing. The treatment group exhibits lower sensitivity to changes in payments and decides significantly more often in favor of the other person, i.e., behaves more altruistically. I explore the origins of this effect with Bayesian hierarchical models and a number-comparison task that mirrors the mechanics of the altruism choices absent any altruistic preference. The treatment effect is similar in this task, suggesting that a biased perception of numerical magnitudes drives the treatment differences; the probabilistic models support this interpretation. A series of additional results shows a negative correlation between cognitive reflection and individual measures of cognitive noise, as well as associations between altruistic choice and number comparison. Overall, these results suggest that altruistic preferences -- and potentially social preferences more generally -- are affected by the cognitive difficulty of their implementation.
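
The intuition can be illustrated with a stylized random-utility sketch (my own toy parameterization, not the model estimated in the paper): when the perceived payoff difference between taking and giving gets noisier, choice probabilities flatten toward 50/50, and in menus where taking is materially favored this mechanically produces more giving.

```python
# Stylized simulation (illustrative parameters only, not the paper's model):
# the decision-maker gives when the noisily perceived value of giving exceeds
# the perceived value of taking.
import numpy as np
from scipy.stats import norm

def prob_give(x_self, x_other, noise_sd, alpha=0.4):
    """P(choose 'give') under normally distributed perception noise.
    alpha is an assumed altruism weight on the other person's payment."""
    diff = alpha * x_other - x_self        # true value of giving minus taking
    return norm.cdf(diff / noise_sd)       # probability the noisy perception favors giving

pairs = [(8, 10), (6, 10), (5, 10)]        # (payment from taking, payment to other from giving)
for noise in (1.0, 8.0):                   # low noise vs. high cognitive noise ("treatment")
    probs = [prob_give(s, o, noise) for s, o in pairs]
    print(f"noise={noise}: P(give) = {np.round(probs, 2)}, mean = {np.mean(probs):.2f}")
```

With low noise the choice probabilities react sharply to the payments; with high noise they are flatter and, because taking is the payoff-maximizing option in these toy menus, the overall share of giving rises.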

Trust in Fair Algorithms

with Mattia Cerrato, Marius Köppel and Alesia Vallenas
[pilot completed]

We study how humans incorporate advice from an algorithm in a setting where gender biases are known to play a large role: hiring decisions and labor market outcomes. In an online experiment, we investigate whether humans, when tasked with predicting, e.g., the employment status of another person, rely more strongly on predictions by *fair* algorithms, i.e., algorithms whose accuracy is the same regardless of gender. Crucially, the sensitive attribute of the person in question is withheld from both the human and the machine. We furthermore investigate the role of prior beliefs about group differences and capture the optimal Bayesian action, which allows us to test whether human behavior favors fair predictions beyond what is statistically optimal.
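
The fairness notion at play is accuracy parity. A minimal sketch of how one might check it on a set of predictions (purely illustrative; the toy data and function are mine, not the experiment's code):

```python
# Check accuracy parity: a predictor is "fair" in this sense if its accuracy
# is (approximately) the same for every gender group.
import numpy as np

def groupwise_accuracy(y_true, y_pred, group):
    """Return accuracy per group and the largest accuracy gap between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accs = {g: np.mean(y_pred[group == g] == y_true[group == g]) for g in np.unique(group)}
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# toy data: employment status (1 = employed), algorithmic predictions, gender
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
gender = ["f", "f", "f", "f", "m", "m", "m", "m"]

accs, gap = groupwise_accuracy(y_true, y_pred, gender)
print(accs, "accuracy gap:", round(float(gap), 2))   # a gap near 0 satisfies accuracy parity
```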

Strategic Risk-Taking and Time Pressure in Professional Chess

with Johannes Carow, under review
[draft]

We study the impact of time pressure on the strategic risk-taking of professional chess players. We propose a novel machine-learning-based measure for the degree of strategic risk of a single chess move and apply this measure to the 2013-2023 FIDE Chess World Cups, which allow for plausibly exogenous variation in thinking time. Our results indicate that having less thinking time consistently leads chess players to opt for more risk-averse moves. This effect is particularly pronounced in disadvantageous positions, i.e., positions in which a player is trailing. We additionally provide correlational evidence for strategic loss aversion, a tendency toward riskier moves after a mistake and in disadvantageous positions. Our results suggest that even high-proficiency decision-makers in high-stakes situations react to time pressure and, more broadly, to contextual factors.

Behavioral Time Choices in Speed-Accuracy Trade-offs

with Alexander Dzionara
[draft available upon request]

In many economic contexts, people need to solve trade-offs between doing an activity (e.g., solving a task) *faster* and doing it *better*. While time choices in speed-accuracy trade-offs have been extensively studied in cognitive science for motor-response and perception tasks, little evidence is available for more deliberate economic decision-making, where people's choices often fail to maximize payoffs. Conversely, the impact of behavioral biases -- key explanations for that failure -- on time choices has yet to be explored. We present a theoretical model linking time choices in speed-accuracy trade-offs to agents' abilities and their subjective beliefs and uncertainty attitudes. We test the predictions of the model in an experiment with two distinct (but otherwise identical) environments: prospective time choices before solving a task and simultaneous time choices while solving a task. Correlational analyses indicate that overconfidence (in one's ability) and uncertainty aversion affect time choices in the prospective but not in the simultaneous environment. Structural estimations, aimed at capturing the (behavioral) optimization process, support this interpretation. This suggests that long-known behavioral biases play a role outside their classical domains, yet may "play out" differently between planned and actual actions.
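
A stylized version of the underlying trade-off (my own illustrative setup, not the paper's model) makes the role of overconfidence concrete: the agent picks the time budget that maximizes their *believed* expected payoff, so inflated beliefs about own ability lead to spending too little time on the task.

```python
# Toy speed-accuracy trade-off (illustrative functional forms and parameters only):
# success probability rises concavely in time spent, time is costly, and the agent
# optimizes against a possibly overconfident belief about their ability.
import numpy as np

def p_correct(t, ability):
    """Probability of solving the task, increasing and concave in time spent."""
    return 1 - np.exp(-ability * t)

def chosen_time(believed_ability, reward=10.0, time_cost=1.0):
    """Time that maximizes the agent's believed expected payoff (grid search)."""
    grid = np.linspace(0.01, 10, 2000)
    payoff = reward * p_correct(grid, believed_ability) - time_cost * grid
    return grid[np.argmax(payoff)]

true_ability = 0.5
for label, believed in [("calibrated", 0.5), ("overconfident", 0.8)]:
    t = chosen_time(believed)
    realized = 10.0 * p_correct(t, true_ability) - 1.0 * t
    print(f"{label}: chooses t = {t:.2f}, realized expected payoff = {realized:.2f}")
```

In this toy setup the overconfident agent chooses a shorter time budget than the calibrated one and earns a lower expected payoff, which is the kind of prospective distortion the experiment is designed to detect.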