Intelligent Systems
Note: This research group has relocated.


2024


A mathematical principle for the gamification of behavior change

Lieder, F., Chen, P., Prentice, M., Amo, V., Tošić, M.

JMIR Serious Games, 12, JMIR Publications, March 2024 (article)

Abstract
Many people want to build good habits to become healthier, live longer, or become happier but struggle to change their behavior. Gamification can make behavior change easier by awarding points for the desired behavior and deducting points for its omission.

link (url) DOI [BibTex]

2023


Toward a normative theory of (self-)management by goal-setting

Singhi, N., Mohnert, F., Prystawski, B., Lieder, F.

Proceedings of the Annual Meeting of the Cognitive Science Society, Annual Meeting of the Cognitive Science Society, July 2023 (conference) Accepted

link (url) DOI [BibTex]



A Computational Process-Tracing Method for Measuring People’s Planning Strategies and How They Change Over Time

Jain, Y. R., Callaway, F., Griffiths, T. L., Dayan, P., He, R., Krueger, P. M., Lieder, F.

Behavior Research Methods, 55, pages: 2037-2079, June 2023 (article)

Abstract
One of the most unique and impressive feats of the human mind is its ability to discover and continuously refine its own cognitive strategies. Elucidating the underlying learning and adaptation mechanisms is very difficult because changes in cognitive strategies are not directly observable. One important domain in which strategies and mechanisms are studied is planning. To enable researchers to uncover how people learn how to plan, we offer a tutorial introduction to a recently developed process-tracing paradigm along with a new computational method for inferring people’s planning strategies and their changes over time from the resulting process-tracing data. Our method allows researchers to reveal experience-driven changes in people’s choice of individual planning operations, planning strategies, strategy types, and the relative contributions of different decision systems. We validate our method on simulated and empirical data. On simulated data, its inferences about the strategies and the relative influence of different decision systems are accurate. When evaluated on human data generated using our process-tracing paradigm, our computational method correctly detects the plasticity-enhancing effect of feedback and the effect of the structure of the environment on people’s planning strategies. Together, these methods can be used to investigate the mechanisms of cognitive plasticity and to elucidate how people acquire complex cognitive skills such as planning and problem-solving. Importantly, our methods can also be used to measure individual differences in cognitive plasticity and examine how different types of (pedagogical) interventions affect the acquisition of cognitive skills.

link (url) DOI Project Page [BibTex]



Learning from Consequences Shapes Reliance on Moral Rules vs. Cost-Benefit Reasoning

Maier, M., Cheung, V., Bartos, F., Lieder, F.

April 2023 (article) Submitted

Abstract
Many controversies arise from differences in how people resolve moral dilemmas by following deontological moral rules versus consequentialist cost-benefit reasoning (CBR). This article explores whether and, if so, how these seemingly intractable differences may arise from experience and whether they can be overcome through moral learning. We designed a new experimental paradigm to investigate moral learning from consequences of previous decisions. Our participants (N=387) faced a series of realistic moral dilemmas between two conflicting choices: one prescribed by a moral rule and the other favored by CBR. Critically, we let them observe the consequences of each of their decisions before making the next one. In one condition, CBR-based decisions consistently led to good outcomes, whereas rule-based decisions consistently led to bad outcomes. In the other condition, this contingency was reversed. We observed systematic, experience-dependent changes in people's moral rightness ratings and moral decisions over the course of just 13 decisions. Without being aware of it, participants adjusted how much moral weight they gave to CBR versus moral rules according to which approach produced better consequences in their respective experimental condition. These learning effects transferred to their subsequent responses to the Oxford Utilitarianism Scale, indicating genuine moral learning rather than task-specific effects. Our findings demonstrate the existence of rapid adaptive moral learning from the consequences of previous decisions. Individual differences in morality may thus be more malleable than previously thought.

DOI [BibTex]


Systematic metacognitive reflection helps people discover far-sighted decision strategies: a process-tracing experiment

Becker, F., Wirzberger, M., Pammer-Schindler, V., Srinivas, S., Lieder, F.

Judgment and Decision Making, March 2023 (article) Accepted

DOI [BibTex]


Formative assessment of the InsightApp: An ecological momentary intervention that helps people develop (meta-)cognitive skills to cope with stressful situations and difficult emotions

Amo, V., Prentice, M., Lieder, F.

JMIR Formative Research, March 2023 (article) Accepted

Abstract
Ecological momentary interventions (EMIs) open new and exciting possibilities for conducting research and delivering mental health interventions in real-life environments via smartphones. This makes designing psychotherapeutic EMIs a promising step towards cost-effective, scalable digital solutions for improving mental health and understanding the effects and mechanisms of psychotherapy.

link (url) DOI [BibTex]


Automatic discovery and description of human planning strategies

Skirzynski, J., Jain, Y. R., Lieder, F.

Behavior Research Methods, January 2023 (article) Accepted

Abstract
Scientific discovery concerns finding patterns in data and creating insightful hypotheses that explain these patterns. Traditionally, each step of this process required human ingenuity. But the galloping development of computer chips and advances in artificial intelligence (AI) make it increasingly more feasible to automate some parts of scientific discovery. Understanding human planning is one of the fields in which AI has not yet been utilized. State-of-the-art methods for discovering new planning strategies still rely on manual data analysis. Data about the process of human planning is often used to group similar behaviors together. Researchers then use this data to formulate verbal descriptions of the strategies which might underlie those groups of behaviors. In this work we leverage AI to automate these two steps of scientific discovery. We introduce a method for the automatic discovery and description of human planning strategies from process-tracing data collected with the Mouselab-MDP paradigm. Our algorithm, called Human-Interpret, uses imitation learning to describe data gathered in the experiment in terms of a procedural formula and then translates that formula to natural language using a pre-defined predicate dictionary. We test our method on a benchmark data set that researchers have previously scrutinized manually. We find that the descriptions of human planning strategies that we obtain automatically are about as understandable as human-generated descriptions. They also cover a substantial proportion of all types of human planning strategies that had been discovered manually. Our method saves scientists' time and effort as all the reasoning about human planning is done automatically. This might make it feasible to more rapidly scale up the search for yet undiscovered cognitive strategies that people use for planning and decision-making to many new decision environments, populations, tasks, and domains. Given these results, we believe that the presented work may accelerate scientific discovery in psychology, and due to its generality, extend to problems from other fields.
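
As a rough illustration of the final translation step, the following minimal Python sketch maps a learned procedural formula, represented as a conjunction of predicates, to a natural-language instruction via a pre-defined predicate dictionary. The predicate names, phrases, and formula are invented for illustration; this is not the Human-Interpret implementation.

# Hypothetical sketch: translating a learned formula into a strategy description.
predicate_dictionary = {
    "not_observed(node)": "a node you have not inspected yet",
    "is_max_depth(node)": "a node at the deepest level of the tree",
}
learned_formula = ["not_observed(node)", "is_max_depth(node)"]  # assumed output of imitation learning

description = ("Click on "
               + " that is also ".join(predicate_dictionary[p] for p in learned_formula)
               + "; repeat until no such nodes remain, then stop planning.")
print(description)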

link (url) DOI [BibTex]

2022


Can we improve self-regulation during computer-based work with optimal feedback?

Wirzberger, M., Lado, A., Prentice, M., Oreshnikov, I., Passy, J., Stock, A., Lieder, F.

Behaviour & Information Technology, November 2022 (article) Submitted

Abstract
Distractions are omnipresent and can derail our attention, which is a precious and very limited resource. To achieve their goals in the face of distractions, people need to regulate their attention, thoughts, and behavior; this is known as self-regulation. How can self-regulation be supported or strengthened in ways that are relevant for everyday work and learning activities? To address this question, we introduce and evaluate a desktop application that helps people stay focused on their work and train self-regulation at the same time. Our application lets the user set a goal for what they want to do during a defined period of focused work at their computer, then gives negative feedback when they get distracted, and positive feedback when they reorient their attention towards their goal. After this so-called focus session, the user receives overall feedback on how well they focused on their goal relative to previous sessions. While existing approaches to attention training often use artificial tasks, our approach transforms real-life challenges into opportunities for building strong attention control skills. Our results indicate that optimal attentional feedback can generate large increases in behavioral focus, task motivation, and self-control, helping users achieve their long-term goals.

link (url) [BibTex]


Life Improvement Science

Lieder, F., Prentice, M.

In Encyclopedia of Quality of Life and Well-Being Research, Springer, November 2022 (inbook)

DOI [BibTex]



Which research topics are most important for promoting flourishing?

Lieder, F.

In Global Conference on Human Flourishing, Templeton World Charity Foundation, November 2022 (inproceedings) Accepted

link (url) DOI [BibTex]



An interdisciplinary synthesis of research on understanding and promoting well-doing

Lieder, F., Prentice, M., Corwin-Renner, E.

Social and Personality Psychology Compass, 16(9), September 2022 (article)

Abstract
People’s intentional pursuit of prosocial goals and values (i.e., well-doing) is critical to the flourishing of humanity in the long run. Understanding and promoting well-doing is a shared goal across many fields inside and outside of social and personality psychology. Several of these fields are (partially) disconnected from each other and could benefit from more integration of existing knowledge, interdisciplinary collaboration, and cross-fertilization. To foster the transfer and integration of knowledge across these different fields, we provide a brief overview with pointers to some of the key articles in each field, highlight connections, and introduce an integrative model of the psychological mechanisms of well-doing. We identify some gaps in the current understanding of well-doing, such as the paucity of research on well-doing with large and long-lasting positive consequences. Building on this analysis, we identify opportunities for high-impact research on well-doing in social and personality psychology, such as understanding and promoting the effective pursuit of highly impactful altruistic goals.

link (url) DOI [BibTex]



A cautionary tale about AI-generated goal suggestions

Lieder, F., Chen, P., Stojcheski, J., Consul, S., Pammer-Schindler, V.

In MuC ’22: Proceedings of Mensch und Computer 2022, pages: 354-359, Mensch und Computer 2022 (MuC 2022), September 2022 (inproceedings)

Abstract
Setting the right goals and prioritizing them might be the most crucial and the most challenging type of decisions people make for themselves, their teams, and their organizations. In this article, we explore whether it might be possible to leverage artificial intelligence (AI) to help people set better goals and which potential problems might arise from such applications. We devised the first prototype of an AI-powered digital goal-setting assistant and a rigorous empirical paradigm for assessing the quality of AI-generated goal suggestions. Our empirical paradigm compares the AI-generated goal suggestions against randomly-generated goal suggestions and unassisted goal-setting on a battery of self-report measures of important goal characteristics, motivation, and usability in a large-scale repeated-measures online experiment. The results of an online experiment with 259 participants revealed that our intuitively compelling goal suggestion algorithm was actively harmful to the quality of people's goals and their motivation to pursue them. These surprising findings highlight three crucial problems to be tackled by future work on leveraging AI to help people set better goals: i) aligning the objective function of the AI algorithms with the design goals, ii) helping people quantify how valuable different goals are to them, and iii) preserving the user's sense of autonomy.

link (url) DOI [BibTex]



Leveraging AI for effective to-do list gamification

Consul, S., Stojcheski, J., Lieder, F.

In Mensch und Computer 2022 – Workshopband MuC 2022, Mensch und Computer 2022 (MuC 2022): 5th International Workshop "Gam-R – Gamification Reloaded", September 2022 (inproceedings)

link (url) DOI [BibTex]



Does deliberate prospection help students set better goals?

Jähnichen, S., Weber, F., Prentice, M., Lieder, F.

In 15th Biannual Meeting of the German Cognitive Science Society, pages: 188-189, 15th Biannual Meeting of the German Cognitive Science Society (KogWis 2022 – Understanding Minds), September 2022 (inproceedings)

link (url) [BibTex]



Boosting human decision-making with AI-generated decision aids

Becker, F., Skirzyński, J., van Opheusden, B., Lieder, F.

Computational Brain & Behavior, 5(4):467-490, July 2022 (article)

Abstract
Human decision-making is plagued by many systematic errors. Many of these errors can be avoided by providing decision aids that guide decision-makers to attend to the important information and integrate it according to a rational decision strategy. Designing such decision aids is a tedious manual process. Advances in cognitive science might make it possible to automate this process in the future. We recently introduced machine learning methods for discovering optimal strategies for human decision-making automatically and an automatic method for explaining those strategies to people. Decision aids constructed by this method were able to improve human decision-making. However, following the descriptions generated by this method is very tedious. We hypothesized that this problem can be overcome by conveying the automatically discovered decision strategy as a series of natural language instructions for how to reach a decision. Experiment 1 showed that people do indeed understand such procedural instructions more easily than the decision aids generated by our previous method. Encouraged by this finding, we developed an algorithm for translating the output of our previous method into procedural instructions. We applied the improved method to automatically generate decision aids for a naturalistic planning task (i.e., planning a road trip) and a naturalistic decision task (i.e., choosing a mortgage). Experiment 2 showed that these automatically generated decision-aids significantly improved people's performance in planning a road trip and choosing a mortgage. These findings suggest that AI-powered boosting has potential for improving human decision-making in the real world.

DOI [BibTex]



Leveraging machine learning to automatically derive robust decision strategies from imperfect models of the real world

Mehta, A., Jain, Y. R., Kemtur, A., Stojcheski, J., Consul, S., Tosic, M., Lieder, F.

Computational Brain & Behavior, 5, pages: 343-377, Springer Nature, June 2022 (article)

Abstract
Teaching people clever heuristics is a promising approach to improve decision-making under uncertainty. The theory of resource-rationality makes it possible to leverage machine learning to discover optimal heuristics automatically. One bottleneck of this approach is that the resulting decision strategies are only as good as the model of the decision-problem that the machine learning methods were applied to. This is problematic because even domain experts cannot give complete and fully accurate descriptions of the decisions they face. To address this problem, we develop strategy discovery methods that are robust to potential inaccuracies in the description of the scenarios in which people will use the discovered decision strategies. The basic idea is to derive the strategy that will perform best in expectation across all possible real-world problems that could have given rise to the likely erroneous description that a domain expert provided. To achieve this, our method uses a probabilistic model of how the description of a decision problem might be corrupted by biases in human judgment and memory. Our method uses this model to perform Bayesian inference on which real-world scenarios might have given rise to the provided descriptions. We applied our Bayesian approach to robust strategy discovery in two domains: planning and risky choice. In both applications, we find that our approach is more robust to errors in the description of the decision problem and that teaching the strategies it discovers significantly improves human decision-making in scenarios where approaches ignoring the risk that the description might be incorrect are ineffective or even harmful. The methods developed in this article are an important step towards leveraging machine learning to improve human decision-making in the real world because they tackle the problem that the real world is fundamentally uncertain.
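
The core idea of choosing the strategy that performs best in expectation over the scenarios that could have produced an erroneous description can be illustrated with a deliberately simplified Python sketch: a single uncertain payoff probability, a Gaussian model of how the expert's report is corrupted, and two toy strategies. All numbers and strategy names are assumptions made for illustration; the paper's method applies this logic to full planning and risky-choice problems.

# Minimal sketch (not the authors' code): robust strategy selection under a noisy description.
import numpy as np

candidate_p = np.linspace(0.05, 0.95, 19)            # possible true success probabilities
prior = np.ones_like(candidate_p) / len(candidate_p)

def likelihood_of_description(described_p, true_p, noise_sd=0.15):
    # Assumed model of how judgment and memory biases corrupt the true value.
    return np.exp(-0.5 * ((described_p - true_p) / noise_sd) ** 2)

described_p = 0.8                                     # the expert's possibly erroneous report
posterior = prior * likelihood_of_description(described_p, candidate_p)
posterior /= posterior.sum()

def expected_payoff(strategy, p, risky_reward=10.0, safe_reward=6.0):
    if strategy == "always_risky":
        return p * risky_reward
    return p * risky_reward if p > 0.7 else safe_reward   # "cautious" threshold rule

strategies = ["always_risky", "cautious"]
robust_scores = {s: sum(posterior[i] * expected_payoff(s, p) for i, p in enumerate(candidate_p))
                 for s in strategies}
print(max(robust_scores, key=robust_scores.get))      # strategy best in expectation over scenarios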

Leveraging Machine Learning to Automatically Derive Robust Decision Strategies from Imperfect Knowledge of the Real World link (url) DOI [BibTex]


Improving Human Decision-Making by Discovering Efficient Strategies for Hierarchical Planning

Consul, S., Heindrich, L., Stojcheski, J., Lieder, F.

Computational Brain & Behavior, 5, pages: 185-216, Springer, 2022 (article)

Abstract
To make good decisions in the real world people need efficient planning strategies because their computational resources are limited. Knowing which planning strategies would work best for people in different situations would be very useful for understanding and improving human decision-making. But our ability to compute those strategies used to be limited to very small and very simple planning tasks. To overcome this computational bottleneck, we introduce a cognitively-inspired reinforcement learning method that exploits the hierarchical structure of human behavior. The basic idea is to decompose sequential decision problems into two sub-problems: setting a goal and planning how to achieve it. This hierarchical decomposition enables us to discover optimal strategies for human planning in larger and more complex tasks than was previously possible. The discovered strategies outperform existing planning algorithms and achieve a super-human level of computational efficiency. We demonstrate that teaching people to use those strategies significantly improves their performance in sequential decision-making tasks that require planning up to eight steps ahead. By contrast, none of the previous approaches was able to improve human performance on these problems. These findings suggest that our cognitively-informed approach makes it possible to leverage reinforcement learning to improve human decision-making in complex sequential decision problems. Future work can leverage our method to develop decision support systems that improve human decision making in the real world.
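
The two-level decomposition can be illustrated with a toy Python sketch in which goal selection trades a goal's value against the cost of reaching it, and low-level planning is ordinary shortest-path search. The graph, goal values, and costs are invented; the paper's method instead discovers resource-rational planning strategies with hierarchical reinforcement learning.

# Toy sketch of the hierarchical decomposition: 1) set a goal, 2) plan how to achieve it.
import heapq

graph = {                                   # hypothetical environment: node -> [(neighbor, cost)]
    "start": [("a", 1), ("b", 2)],
    "a": [("goal1", 3)], "b": [("goal2", 1)],
    "goal1": [], "goal2": [],
}
goal_values = {"goal1": 10, "goal2": 6}     # assumed values of the candidate goals

def cost_to_reach(source, target):
    # Dijkstra's algorithm on the toy graph (the low-level planning sub-problem).
    frontier, seen = [(0, source)], set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == target:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in graph[node]:
            heapq.heappush(frontier, (cost + step_cost, nxt))
    return float("inf")

# High-level sub-problem: pick the goal whose value best justifies the cost of reaching it.
best_goal = max(goal_values, key=lambda g: goal_values[g] - cost_to_reach("start", g))
print(best_goal)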

link (url) DOI Project Page Project Page [BibTex]


Rational use of cognitive resources in human planning

Callaway, F., Opheusden, B. V., Gul, S., Das, P., Krueger, P. M., Griffiths, T. L., Lieder, F.

Nature Human Behaviour, 6, pages: 1112-1125, April 2022 (article)

Abstract
Making good decisions requires thinking ahead, but the huge number of actions and outcomes one could consider makes exhaustive planning infeasible for computationally constrained agents, such as humans. How people are nevertheless able to solve novel problems when their actions have long-reaching consequences is thus a long-standing question in cognitive science. To address this question, we propose a model of resource-constrained planning that allows us to derive optimal planning strategies. We find that previously proposed heuristics such as best-first search are near-optimal under some circumstances, but not others. In a mouse-tracking paradigm, we show that people adapt their planning strategies accordingly, planning in a manner that is broadly consistent with the optimal model but not with any single heuristic model. We also find systematic deviations from the optimal model that might result from additional cognitive constraints that are yet to be uncovered.
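
To make the best-first search heuristic mentioned above concrete, the toy sketch below expands a limited number of nodes in order of how promising the path to them currently looks. The tree, rewards, and budget are invented. Note that with this small budget the branch whose first step looks bad is never explored, even though it leads to the largest total reward; this is the kind of situation in which best-first search can stop being near-optimal.

# Toy best-first search under a limited planning budget (illustrative only).
import heapq

children = {                                 # hypothetical decision tree: node -> [(child, reward)]
    "root": [("a", 2), ("b", -1)],
    "a": [("a1", 5), ("a2", -3)],
    "b": [("b1", 10)],
    "a1": [], "a2": [], "b1": [],
}

def best_first_search(budget):
    frontier = [(0, "root", 0)]              # (negated value so heapq acts as a max-heap, node, value)
    best_node, best_value = "root", 0
    for _ in range(budget):
        if not frontier:
            break
        _, node, value = heapq.heappop(frontier)
        if value > best_value:
            best_node, best_value = node, value
        for child, reward in children[node]:
            heapq.heappush(frontier, (-(value + reward), child, value + reward))
    return best_node

print(best_first_search(budget=4))           # finds "a1" (value 7) and never considers "b1" (value 9)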

link (url) DOI [BibTex]



What to learn next? Aligning gamification rewards to long-term goals using reinforcement learning

Pauly, R., Heindrich, L., Amo, V., Lieder, F.

March 2022 (article) Accepted

Abstract
Nowadays, more people can access digital educational resources than ever before. However, access alone is often not sufficient for learners to fulfil their learning goals. To support motivation, learning environments are often gamified, meaning that they offer points for interacting with them. But gamification can add to learners’ tendencies to choose learning activities in a short-sighted manner. An example of such a short-sighted bias is preferring an easy task that offers a quick sense of accomplishment (and, in gamified environments, often a quick accumulation of points) over a harder task that offers real progress. The concept of optimal brain points demonstrates that methods from the field of reinforcement learning, specifically reward shaping, allow us to align short-term rewards for learning choices with their expected long-term benefit in a learning context. Building on that work, we here present a scalable approach to supporting self-directed learning in digital learning environments applicable to real-world educational games. It can motivate learners to choose the learning activities that are most beneficial for them in the long run. This is achieved by incentivizing each learning activity in a way that reflects how much progress can be made by completing it and how that progress relates to their learning goal. Specifically, the approach entails modelling how learners choose between learning activities as a Markov Decision Process and applying methods from reinforcement learning to compute which learning choices optimize the learner's progress based on their current knowledge. We specify how our developed method can be applied to the English-learning App “Dawn of Civilisation”. We further present the first evaluation of the approach in a controlled online experiment with a simplified learning task, which showed that the derived incentives can significantly improve both learners’ choice behaviour and their learning outcomes.
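
The point-assignment idea described above can be illustrated with potential-based reward shaping, the textbook mechanism behind optimal brain points. In the hypothetical sketch below, Phi is assumed to estimate how close a learner's knowledge state is to their learning goal, and all numbers are invented; a deployed system would estimate Phi from a model of the learner's current knowledge.

# Illustrative sketch: awarding points in proportion to expected long-term learning progress.
GAMMA = 0.99   # assumed discount factor

def shaped_points(phi_before, phi_after, gamma=GAMMA):
    # Potential-based shaping term F(s, s') = gamma * Phi(s') - Phi(s);
    # it rewards genuine progress without changing which study policy is optimal.
    return gamma * phi_after - phi_before

easy_task_points = shaped_points(phi_before=0.50, phi_after=0.52)   # quick but small progress
hard_task_points = shaped_points(phi_before=0.50, phi_after=0.65)   # slower but larger progress
print(easy_task_points, hard_task_points)    # the harder, more useful activity earns more points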

link (url) [BibTex]


Leveraging artificial intelligence to improve people’s planning strategies

Callaway, F., Jain, Y. R., Opheusden, B. V., Das, P., Iwama, G., Gul, S., Krueger, P. M., Becker, F., Griffiths, T. L., Lieder, F.

Proceedings of the National Academy of Sciences (PNAS), 119(12), March 2022 (article)

Abstract
Human decision making is plagued by systematic errors that can have devastating consequences. Previous research has found that such errors can be partly prevented by teaching people decision strategies that would allow them to make better choices in specific situations. Three bottlenecks of this approach are our limited knowledge of effective decision strategies, the limited transfer of learning beyond the trained task, and the challenge of efficiently teaching good decision strategies to a large number of people. We introduce a general approach to solving these problems that leverages artificial intelligence to discover and teach optimal decision strategies. As a proof of concept, we developed an intelligent tutor that teaches people the automatically discovered optimal heuristic for environments where immediate rewards do not predict long-term outcomes. We found that practice with our intelligent tutor was more effective than conventional approaches to improving human decision making. The benefits of training with our cognitive tutor transferred to a more challenging task and were retained over time. Our general approach to improving human decision making by developing intelligent tutors also proved successful for another environment with a very different reward structure. These findings suggest that leveraging artificial intelligence to discover and teach optimal cognitive strategies is a promising approach to improving human judgment and decision making.

link (url) DOI [BibTex]



Promoting value-congruent action by supporting effective metacognitive emotion-regulation strategies with a gamified app

Amo, V., Prentice, M., Lieder, F.

Society for Personality and Social Psychology (SPSP) Annual Convention 2022, San Francisco, USA, Society for Personality and Social Psychology (SPSP) Annual Convention 2022, February 2022 (conference)

Abstract
Negative emotions can make maladaptive behavior more likely, especially when people have poor emotion regulation and metacognitive skills (ERMSs). We developed an app to help non-clinical populations train and apply good ERMSs. The app teaches ERMSs with the help of gamified features such as customizable emotion avatars and points for practicing ERMSs. In an initial, brief pre/post test of the app, 60 participants used it to reflect on a difficult emotional challenge and (non-)beneficial ways of acting. Then, they completed a metacognitive skill-building module. After using the app, participants' scores showed significantly decreased/increased perceived likelihood of unwanted/beneficial actions, decreased emotional struggle and emotional intensity, and decreased/increased cognitive endorsement of self-limiting/self-efficacious beliefs (Paired Samples Wilcoxon Test average effect size = 0.71, range = [.26, .87], all p<0.008). These results provide an important proof-of-concept for the app. A subsequent study will test the app's effectiveness for at least two weeks using event-contingent reporting for participants' real-life regulatory challenges and ERMS training in context.
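
For readers unfamiliar with the reported statistics, the sketch below shows the kind of paired-samples Wilcoxon test, together with a matched-pairs rank-biserial correlation as an effect size, that such a pre/post comparison involves. The ratings are made up; this is not the study's data or analysis script.

# Hypothetical pre/post ratings for ten participants (not the study's data).
import numpy as np
from scipy.stats import wilcoxon, rankdata

pre  = np.array([6, 7, 5, 8, 6, 7, 5, 6, 7, 8])
post = np.array([4, 5, 5, 9, 4, 6, 3, 5, 5, 6])

stat, p = wilcoxon(pre, post)                # paired-samples Wilcoxon signed-rank test

diff = post - pre
nonzero = diff[diff != 0]                    # zero differences are dropped, as in the default test
ranks = rankdata(np.abs(nonzero))
r_rb = (ranks[nonzero > 0].sum() - ranks[nonzero < 0].sum()) / ranks.sum()   # rank-biserial effect size
print(p, r_rb)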

[BibTex]



Evaluating Life Reflection Techniques to Help People Select Virtuous Life Goals

Prentice, M., Gonzalez Cruz, H., Lieder, F.

Integrating Research on Character and Virtues: 10 Years of Impact, Oriel College, Oxford, Integrating Research on Character and Virtues: 10 Years of Impact, January 2022 (conference) Accepted

Abstract
The purpose of the present studies was to identify an effective tool for helping people to select virtuous life goals that promote their own well-being and contribute to the well-being of others (well-doing). Across two studies, we tested four candidate interventions against each other and a control condition. In the first study (N = 218), the intervention conditions were the eulogy and valued living questionnaire exercises from the Acceptance and Commitment Therapy literature. In the second study (N = 537), the intervention conditions were self-affirmation and value self-confrontation from the social psychology literature and the eulogy exercise. The eulogy exercise is a very brief reflection (3-5 minutes) on how one would like to be remembered by friends and family speaking at one’s funeral. The valued living questionnaire exercise involves rating 10 life domains for importance and behavioral consistency with that importance and reflecting on discrepancies. Self-affirmation involves writing about a time when one acted in line with one’s values. And value self-confrontation involves inducing a discrepancy between participants’ values and those of a socially desirable group. Participants were randomly assigned to one of these brief interventions or a control condition. They were then asked to select a life goal that they would like to start pursuing or make more progress on in the near future. In Study 1, selection was open-ended and participants indicated which of 5 life domains it best fit, including interpersonal goals. In Study 2, the goal was selected from a list of prosocial, personal growth, or materialistic life goals. Across both studies, we found that the eulogy exercise stood out as an effective intervention for helping people select life goals that are likely to promote well-being and well-doing, such as wanting to improve other people’s lives, and avoid life goals that are associated with vices, such as wanting to have many expensive possessions. These findings point to the usefulness of humanistic-existential approaches for promoting character development via life goals and provide an example of how philosophically-informed psychological interventions can be effective.

[BibTex]



Discovering Rational Heuristics for Risky Choice

Krueger, P., Callaway, F., Gul, S., Griffiths, T., Lieder, F.

PsyArXiv Preprints, January 2022 (article) Submitted

Abstract
For computationally limited agents such as humans, perfectly rational decision-making is almost always out of reach. Instead, people may rely on computationally frugal heuristics that usually yield good outcomes. Although previous research has identified many such heuristics, discovering good heuristics and predicting when they will be used remains challenging. Here, we present a machine learning method that identifies the best heuristics to use in any given situation. To demonstrate the generalizability and accuracy of our method, we compare the strategies it discovers against those used by people across a wide range of multi-alternative risky choice environments in a behavioral experiment that is an order of magnitude larger than any previous experiments of its type. Our method rediscovered known heuristics, identifying them as rational strategies for specific environments, and discovered novel heuristics that had been previously overlooked. Our results show that people adapt their decision strategies to the structure of the environment and generally make good use of their limited cognitive resources, although they tend to collect too little information and their strategy choices do not always fully exploit the structure of the environment.

Discovering Rational Heuristics for Risky Choice link (url) DOI [BibTex]


2021


Promoting metacognitive learning through systematic reflection

Becker, F., Lieder, F.

Workshop on Metacognition in the Age of AI, 35th Conference on Neural Information Processing Systems (NeurIPS 2021), December 2021 (conference)

Abstract
People are able to learn clever cognitive strategies through trial and error from small amounts of experience. This is facilitated by people's ability to reflect on their own thinking, which is known as metacognition. To examine the effects of deliberate systematic metacognitive reflection on how people learn how to plan, the experimental group was guided to systematically reflect on their decision-making process after every third decision. We found that participants assisted by reflection prompts learned good planning strategies more quickly. Moreover, we found that reflection led to immediate improvements in the participants' planning strategies. Our preliminary results suggest that deliberate metacognitive reflection can help people discover clever cognitive strategies from very small amounts of experience. Understanding the role of reflection in human learning is a promising approach for making reinforcement learning more sample efficient in both humans and machines.

link (url) DOI Project Page [BibTex]



Have I done enough planning or should I plan more?

He, R., Jain, Y. R., Lieder, F.

Workshop on Metacognition in the Age of AI (Long Paper), 35th Conference on Neural Information Processing Systems (NeurIPS 2021), December 2021 (conference) Accepted

Abstract
People’s decisions about how to allocate their limited computational resources are essential to human intelligence. An important component of this metacognitive ability is deciding whether to continue thinking about what to do and move on to the next decision. Here, we show that people acquire this ability through learning and reverse-engineer the underlying learning mechanisms. Using a process-tracing paradigm that externalises human planning, we find that people quickly adapt how much planning they perform to the cost and benefit of planning. To discover the underlying metacognitive learning mechanisms we augmented a set of reinforcement learning models with metacognitive features and performed Bayesian model selection. Our results suggest that the metacognitive ability to adjust the amount of planning might be learned through a policy-gradient mechanism that is guided by metacognitive pseudo-rewards that communicate the value of planning.
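
A minimal sketch of the kind of policy-gradient (REINFORCE-style) update guided by metacognitive pseudo-rewards is given below. The logistic policy over "plan one more step" versus "act now", the pseudo-reward values, and the learning rate are all assumptions made for illustration; the paper fits its models to human process-tracing data rather than simulating them this way.

# Toy policy-gradient learner for the meta-decision of whether to keep planning.
import numpy as np

rng = np.random.default_rng(0)
theta, alpha = 0.0, 0.1                       # policy parameter and learning rate

def p_plan(theta):
    return 1.0 / (1.0 + np.exp(-theta))       # probability of planning one more step

for episode in range(500):
    planned = rng.random() < p_plan(theta)
    # Assumed pseudo-reward: value of the extra planning step minus its cost.
    pseudo_reward = (0.8 - 0.3) if planned else 0.0
    grad_log_policy = (1 - p_plan(theta)) if planned else -p_plan(theta)
    theta += alpha * pseudo_reward * grad_log_policy    # REINFORCE update

print(p_plan(theta))    # planning probability rises when planning pays off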

Project Page [BibTex]



A Rational Reinterpretation of Dual Process Theories

Milli, S., Lieder, F., Griffiths, T. L.

Cognition, 217, December 2021 (article)

Abstract
Highly influential "dual-process" accounts of human cognition postulate the coexistence of a slow accurate system with a fast error-prone system. But why would there be just two systems rather than, say, one or 93? Here, we argue that a dual-process architecture might reflect a rational tradeoff between the cognitive flexibility afforded by multiple systems and the time and effort required to choose between them. We investigate what the optimal set and number of cognitive systems would be depending on the structure of the environment. We find that the optimal number of systems depends on the variability of the environment and the difficulty of deciding which system should be used when. Furthermore, we find that there is a plausible range of conditions under which it is optimal to be equipped with a fast system that performs no deliberation ("System 1") and a slow system that achieves a higher expected accuracy through deliberation ("System 2"). Our findings thereby suggest a rational reinterpretation of dual-process theories.

link (url) DOI [BibTex]



Resource-Rational Models of Human Goal Pursuit

Prystawski, B., Mohnert, F., Tošić, M., Lieder, F.

Topics in Cognitive Science, 14(3):528-549, Online, Wiley Online Library, August 2021 (article)

Abstract
Goal-directed behaviour is a deeply important part of human psychology. People constantly set goals for themselves and pursue them in many domains of life. In this paper, we develop computational models that characterize how humans pursue goals in a complex dynamic environment and test how well they describe human behaviour in an experiment. Our models are motivated by the principle of resource rationality and draw upon psychological insights about people's limited attention and planning capacities. We found that human goal pursuit is qualitatively different and substantially less efficient than optimal goal pursuit. Models of goal pursuit based on the principle of resource rationality captured human behavior better than both a model of optimal goal pursuit and heuristics that are not resource-rational. We conclude that human goal pursuit is jointly shaped by its function, the structure of the environment, and cognitive costs and constraints on human planning and attention. Our findings are an important step toward understanding human goal pursuit, as cognitive limitations play a crucial role in shaping people's goal-directed behaviour.

Resource-rational models of human goal pursuit link (url) DOI Project Page [BibTex]



Encouraging far-sightedness with automatically generated descriptions of optimal planning strategies: Potentials and Limitations

Becker, F., Skirzynski, J. M., van Opheusden, B., Lieder, F.

Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, Online, Annual Meeting of the Cognitive Science Society, July 2021 (conference)

Abstract
People often fall victim to decision-making biases, e.g. short-sightedness, that lead to unfavorable outcomes in their lives. It is possible to overcome these biases by teaching people better decision-making strategies. Finding effective interventions is an open problem, with a key challenge being the lack of transfer to the real world. Here, we tested a new approach to improving human decision-making that leverages Artificial Intelligence to discover procedural descriptions of effective planning strategies. Our benchmark problem was improving far-sightedness. We found that our intervention elicits transfer to a similar task in a different domain, but its effects on more naturalistic financial decisions were not statistically significant. Even though the tested intervention is on par with conventional approaches, which also struggle with far transfer, further improvements are required to help people make better decisions in real life. We conclude that future work should focus on training decision-making in more naturalistic scenarios.

link (url) [BibTex]



Promoting metacognitive learning through systematic reflection

Becker, F., Lieder, F.

The first Life Improvement Science Conference, June 2021 (poster)

Abstract
Human decision-making is sometimes systematically biased toward suboptimal decisions. For example, people often make short-sighted choices because they don't give enough weight to the long-term consequences of their actions. Previous studies showed that it is possible to overcome such biases by teaching people a more rational decision strategy through instruction, demonstrations, or practice with feedback. The benefits of these approaches tend to be limited to situations that are very similar to those used during the training. One way to overcome this limitation is to create general tools and strategies that people can use to improve their decision-making in any situation. Here we propose one such approach, namely directing people to systematically reflect on how they make their decisions. In systematic reflection, past experience is re-evaluated with the intention to learn. In this study, we investigate how reflection affects how people learn to plan and whether reflective learning can help people to discover more far-sighted planning strategies. In our experiment participants solve a series of 30 planning problems where the immediate rewards are smaller and therefore less important than long-term rewards. Building on Wolfbauer et al. (2020), the experimental group is guided by four reflection prompts asking the participant to describe their planning strategy, the strategy's performance, and their emotional response, insights, and intention to change their strategy. The control group practices planning without reflection prompts. Our pilot data suggest that systematic reflection helps people to more rapidly discover adaptive planning strategies. Our findings suggest that reflection is useful not only for helping people learn what to do in a specific situation but also for helping people learn how to think about what to do. In future work, we will compare the effects of different types of reflection on the subsequent changes in people's decision strategies. Developing apps that prompt people to reflect on their decisions may be a promising approach to accelerating cognitive growth and promoting lifelong learning.

[BibTex]



Leveraging AI to support the self-directed learning of disadvantaged youth in developing countries

Teo, J., Pauly, R., Heindrich, L., Amo, V., Lieder, F.

The first Life Improvement Science Conference, Tübingen, Germany, June 2021 (conference) Accepted

Abstract
Globally 258 million children and youth do not have access to school (Unesco, 2019), while 600 million receive ineffective education (Unesco, 2017). Solve Education! (SE!) is a non-profit organization committed to enabling these young people to empower themselves through education, and currently operates in over 7 countries. Their team includes educationists, technologists, and business executives, who work together with governments and local communities to reach young people with disadvantaged backgrounds. Solve Education!’s main mobile application, “The Dawn of Civilisation” (DoC), is an open platform that can deliver different learning content, with a focus on English literacy. It is designed to support lower-end devices, as well as offline learning. At the Rationality Enhancement Group, we are laying the scientific foundation for helping people do more good in better ways. We combine methods from computational cognitive science, psychology, human-computer interaction, and artificial intelligence for the development of practical tools, strategies, and interventions that support people in their personal growth. In our collaboration with SE!, we aim at learning from and contributing to real-world challenges by applying our research to enhance SE!’s learning platform. We are currently working on two projects. The first project’s goal is to develop a principled approach to incentivize efficient self-directed learning with digital educational resources and to evaluate its effectiveness regarding learners’ behaviors and success in cooperation with SE!. Specifically, SE!’s DoC serves as the digital educational resource and allows us to evaluate the approach with very high ecological validity. The planned intervention is based on the concept of optimal brain points developed by Xu, Wirzberger & Lieder (2019). The core idea is to incentivize effort and smart study choices rather than performance and to do so in a way that learners cannot exploit shortcuts to accumulate game points without also moving closer to their actual learning goals. If successful, SE! can build upon the intervention to further enhance the benefits their users draw from DoC. The second project is based on hierarchical goal setting and consists of a digital assistant that helps users set real-world goals and make progress towards them by reaching milestones with DoC. In this talk, in addition to introducing our work together with SE!, we will highlight the mutual benefits of the collaboration between scientists and socially impactful organizations.

[BibTex]



Toward a Formal Theory of Proactivity

Lieder, F., Iwama, G.

Cognitive, Affective, & Behavioral Neuroscience, 42, pages: 490-508, Springer, June 2021 (article)

Abstract
Beyond merely reacting to their environment and impulses, people have the remarkable capacity to proactively set and pursue their own goals. But the extent to which they leverage this capacity varies widely across people and situations. The goal of this article is to make the mechanisms and variability of proactivity more amenable to rigorous experiments and computational modeling. We proceed in three steps. First, we develop and validate a mathematically precise behavioral measure of proactivity and reactivity that can be applied across a wide range of experimental paradigms. Second, we propose a formal definition of proactivity and reactivity, and develop a computational model of proactivity in the AX Continuous Performance Task (AX-CPT). Third, we develop and test a computational-level theory of meta-control over proactivity in the AX-CPT that identifies three distinct meta-decision-making problems: intention setting, resolving response conflict between intentions and automaticity, and deciding whether to recall context and intentions into working memory. People's response frequencies in the AX-CPT were remarkably well captured by a mixture between the predictions of our models of proactive and reactive control. Empirical data from an experiment varying the incentives and contextual load of an AX-CPT confirmed the predictions of our meta-control model of individual differences in proactivity. Our results suggest that proactivity can be understood in terms of computational models of meta-control. Our model makes additional empirically testable predictions. Future work will extend our models from proactive control in the AX-CPT to proactive goal creation and goal pursuit in the real world.
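
The mixture claim can be illustrated with a small grid-search fit of a single mixture weight, P(response) = w * proactive + (1 - w) * reactive. All numbers below are invented placeholders rather than the paper's AX-CPT data or model predictions.

# Toy fit of the mixture weight between proactive and reactive control predictions.
import numpy as np

proactive = np.array([0.95, 0.02, 0.90, 0.05])   # hypothetical predicted response frequencies
reactive  = np.array([0.70, 0.20, 0.60, 0.25])
observed  = np.array([0.88, 0.07, 0.81, 0.11])

weights = np.linspace(0.0, 1.0, 101)
sse = [np.sum((w * proactive + (1 - w) * reactive - observed) ** 2) for w in weights]
w_hat = weights[int(np.argmin(sse))]
print(w_hat)                                      # estimated weight on proactive control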

Toward a formal theory of proactivity link (url) DOI Project Page [BibTex]



Improving Human Decision-Making by Discovering Efficient Strategies for Hierarchical Planning

Heindrich, L., Consul, S., Stojcheski, J., Lieder, F.

The first Life Improvement Science Conference, Tübingen, Germany, June 2021 (talk) Accepted

Abstract
The discovery of decision strategies is an essential part of creating effective cognitive tutors that teach planning and decision-making skills to humans. In the context of bounded rationality, this requires weighing the benefits of different planning operations compared to their computational costs. For small decision problems, it has already been shown that near-optimal decision strategies can be discovered automatically and that the discovered strategies can be taught to humans to increase their performance. Unfortunately, these near-optimal strategy discovery algorithms have not been able to scale well to larger problems due to their computational complexity. In this talk, we will present recent work at the Rationality Enhancement Group to overcome the computational bottleneck of existing strategy discovery algorithms. Our approach makes use of the hierarchical structure of human behavior by decomposing sequential decision problems into two sub-problems: setting a goal and planning how to achieve it. An additional metacontroller component is introduced to switch the current goal when it becomes beneficial. The hierarchical decomposition enables us to discover near-optimal strategies for human planning in larger and more complex tasks than previously possible. We then show in online experiments that teaching the discovered strategies to humans improves their performance in complex sequential decision-making tasks.

Project Page [BibTex]



Evaluating Life Reflection Techniques to Help People Set Better Value-Driven Life Goals

Prentice, M., González Cruz, H., Lieder, F.

13th Annual Conference of the Society for the Science of Motivation, Society for the Science of Motivation, May 2021 (conference)

Abstract
We tested two reflection techniques derived from Acceptance and Commitment Therapy for helping people set life goals that are self-determined, communal, and future-minded. Participants were assigned randomly to control, Eulogy, or the Valued Living Questionnaire (VLQ) conditions. Eulogy participants envisioned what they wanted people to say about them at their funeral. In VLQ, participants rated the importance of life domains and how consistent their behavior has recently been with the importance assigned to each domain. Participants then set a life goal, rated it for self-determination, and indicated its time horizon and life domain. Despite only requiring internal reflection, Eulogy was particularly effective for generating self-determined goals that were interpersonal and future-minded. The Eulogy exercise may be a useful and important building block for inspiring the setting and effective pursuit of goals that are simultaneously self-determined, communal, and future-minded. Future research will examine its efficacy in changing experienced well-being and enacted well-doing.

[BibTex]



Toward a Science of Effective Well-Doing

Lieder, F., Prentice, M., Corwin-Renner, E.

May 2021 (techreport)

Abstract
Well-doing, broadly construed, encompasses acting and thinking in ways that contribute to humanity’s flourishing in the long run. This often takes the form of setting a prosocial goal and pursuing it over an extended period of time. To set and pursue goals in a way that is extremely beneficial for humanity (effective well-doing), people often have to employ critical thinking and far-sighted, rational decision-making in the service of the greater good. To promote effective well-doing, we need to better understand its determinants and psychological mechanisms, as well as the barriers to effective well-doing and how they can be overcome. In this article, we introduce a taxonomy of different forms of well-doing and introduce a conceptual model of the cognitive mechanisms of effective well-doing. We view effective well-doing as the upper end of a moral continuum whose lower half comprises behaviors that are harmful to humanity (ill-doing), and we argue that the capacity for effective well-doing has to be developed through personal growth (e.g., learning how to pursue goals effectively). Research on these phenomena has so far been scattered across numerous disconnected literatures from multiple disciplines. To bring these communities together, we call for the establishment of a transdisciplinary research field focussed on understanding and promoting effective well-doing and personal growth as well as understanding and reducing ill-doing. We define this research field in terms of its goals and questions. We review what is already known about these questions in different disciplines and argue that laying the scientific foundation for promoting effective well-doing is one of the most valuable contributions that the behavioral sciences can make in the 21st century.

Preprint Project Page [BibTex]


'What Do You Want in Life and How Can You Get There?' An Evaluation of a Hierarchical Goal-Setting Chatbot

González Cruz, H., Prentice, M., Lieder, F.

13th Annual Meeting of the Society for the Science of Motivation, Society for the Science of Motivation, Virtual Congress, May 2021 (conference)

Abstract
The translation of abstract, long-term goals, such as “make a contribution to the field of motivation science,” into short-term, actionable intentions is inherently difficult. Hierarchical goal-setting, a goal-setting strategy in which people construct a hierarchy of increasingly more concrete and proximal subgoals is a promising way to support this process. We designed a goal-setting chatbot that helps people craft action hierarchies for achieving their life goals. We conducted a large online field experiment with two follow-up surveys at one week and one month after the intervention to evaluate the effects of a brief hierarchical planning session with our chatbot on goal pursuit. Although there were no main effects of hierarchical planning on goal-related outcomes, exploratory analyses indicated that hierarchical goal-setting enabled people to make more progress towards goals that appeared less actionable. This suggests that supporting hierarchical goal-setting with chatbots is a promising approach to helping people who don’t know how to pursue their goals.

[BibTex]



Automatic Discovery of Interpretable Planning Strategies

Skirzyński, J., Becker, F., Lieder, F.

Machine Learning, 110, pages: 2641-2683, 2021 (article)

Abstract
When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigate these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that the provision of decision rules as flowcharts significantly improved people’s planning strategies and decisions across three different classes of sequential decision problems. Furthermore, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making.

Automatic Discovery of Interpretable Planning Strategies The code for our algorithm and the experiments is available link (url) Project Page Project Page [BibTex]


Learning to Overexert Cognitive Control in a Stroop Task

Bustamante, L., Lieder, F., Musslick, S., Shenhav, A., Cohen, J.

Cognitive, Affective, & Behavioral Neuroscience, 21, pages: 453-471, January 2021, Laura Bustamante and Falk Lieder contributed equally to this publication. (article)

Abstract
How do people learn when to allocate how much cognitive control to which task? According to the Learned Value of Control (LVOC) model, people learn to predict the value of alternative control allocations from features of a given situation. This suggests that people may generalize the value of control learned in one situation to other situations with shared features, even when the demands for cognitive control are different. This makes the intriguing prediction that what a person learned in one setting could, under some circumstances, cause them to misestimate the need for, and potentially over-exert control in another setting, even if this harms their performance. To test this prediction, we had participants perform a novel variant of the Stroop task in which, on each trial, they could choose to either name the color (more control-demanding) or read the word (more automatic). However, only one of these tasks was rewarded on each trial; the rewarded task changed from trial to trial and could be predicted by one or more of the stimulus features (the color and/or the word). Participants first learned colors that predicted the rewarded task. Then they learned words that predicted the rewarded task. In the third part of the experiment, we tested how these learned feature associations transferred to novel stimuli with some overlapping features. The stimulus-task-reward associations were designed so that for certain combinations of stimuli the transfer of learned feature associations would incorrectly predict that the more highly rewarded task would be color naming, which would require the exertion of control, even though the actually rewarded task was word reading and therefore did not require the engagement of control. Our results demonstrated that participants over-exerted control for these stimuli, providing support for the feature-based learning mechanism described by the LVOC model.

Learning to Overexert Cognitive Control in a Stroop Task link (url) DOI [BibTex]


no image
Do Behavioral Observations Make People Catch the Goal? A Meta-Analysis on Goal Contagion

Brohmer, H., Eckerstorfer, L. V., van Aert, R. C., Corcoran, K.

International Review of Social Psychology, 34(1):3, Online, January 2021 (article)

Abstract
Goal contagion is a social-cognitive approach to understanding how other people's behavior influences one's goal pursuit: an observation of goal-directed behavior leads to an automatic inference and activation of the goal, which can then be adopted and pursued by the observer. We conducted a meta-analysis focusing on experimental studies with a goal condition, depicting goal-directed behavior, and a control condition. We searched four databases (PsychInfo, Web of Science, ScienceDirect, and JSTOR) and the citing literature on Google Scholar, and eventually included e = 48 effects from published studies, unpublished studies, and registered reports based on 4751 participants. The meta-analytic summary effect was small (g = 0.30, 95% CI [0.21, 0.40]; τ² = 0.05, 95% CI [0.03, 0.13]), implying that goal contagion might occur for some people, compared to when this goal is not perceived in behavior. However, the observed effect seemed to be inflated by the current publication system: as shown by several publication-bias tests, the effect could be about half that size (for example, under a selection model: g = 0.15, 95% CI [-0.02, 0.32]). Further, we could not detect any potential moderator (such as the presentation of the manipulation or the contrast of the control condition). We suggest that future research on goal contagion makes use of open science practices to advance research in this domain.
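
For readers unfamiliar with how such a random-effects summary is computed, the following is a minimal DerSimonian-Laird sketch on made-up study data; the published analysis used more elaborate models, including selection models for publication bias.

# Minimal DerSimonian-Laird random-effects meta-analysis on hypothetical study data.
import math

def random_effects(effects, variances):
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    g = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return g, (g - 1.96 * se, g + 1.96 * se), tau2

# Hypothetical per-study Hedges' g values and their sampling variances:
print(random_effects([0.1, 0.4, 0.3, 0.5], [0.02, 0.03, 0.05, 0.04]))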

link (url) DOI [BibTex]


no image
Development and Validation of a Goal Characteristics Questionnaire

Iwama, G., Weber, F., Prentice, M., Lieder, F.

Collabra Psychology, 2021 (article) Submitted

Abstract
How motivated a person is to pursue a goal may depend on many different properties of the goal, such as how specific it is, how important it is to the person, and how actionable it is. Rigorously measuring all of the relevant goal characteristics is still very difficult. Existing measures are scattered across multiple research fields. Some goal characteristics are not yet covered, while others have been measured under ambiguous terminology. Other conceptually related characteristics have yet to be adapted to goals. Last but not least, the validity of most measures of goal characteristics has yet to be assessed. The aims of this study are to: a) integrate, refine, and extend previous measures into a more comprehensive battery of self-report measures, the Goal Characteristics Questionnaire (GCQ; https://osf.io/3gxk5/?view_only=1ff0e62127c64b82862a0fe7d73c4faf), and b) investigate evidence for its validity. In two empirical studies, this paper provides evidence for the validity of the measures regarding their internal structure, measurement invariance, and convergence and divergence with other relevant goal-related measures, such as measures of motivation, affect, and the dimensions of Personal Project Analysis. The results show that our goal characteristic dimensions have incremental validity for explaining important outcomes, such as goal commitment and well-being. The paper concludes with practical recommendations for using the GCQ in research on goal setting and goal pursuit, and a discussion of directions for future studies.

link (url) DOI [BibTex]


Measuring and modelling how people learn how to plan and how people adapt their planning strategies to the structure of the environment

He, R., Jain, Y. R., Lieder, F.

International Conference on Cognitive Modeling, International Conference on Cognitive Modeling, 2021 (conference)

Abstract
Often we find ourselves in unknown situations where we have to make a decision based on reasoning from our experience. However, it is still unclear how people choose which pieces of information to take into account to achieve well-informed decisions. Answering this question requires an understanding of human metacognitive learning, that is, how people learn how to think. In this study, we focus on a special kind of metacognitive learning, namely how people learn how to plan and how their mechanisms of metacognitive learning adapt their planning strategies to the structure of the environment. We first measured people's adaptation to different environments via a process-tracing paradigm that externalises planning. Then we introduced and fitted novel metacognitive reinforcement learning algorithms to model the underlying learning mechanisms, which gave us insight into the learning behaviour. Model-based analysis suggested two sources of maladaptation: no learning and reluctance to explore new alternatives.

link (url) Project Page [BibTex]

2020


no image
Improving Human Decision-Making using Metalevel-RL and Bayesian Inference

Kemtur, A., Jain, Y. R., Mehta, A., Callaway, F., Consul, S., Stojcheski, J., Lieder, F.

NeurIPS Workshop on Challenges for Real-World RL, December 2020 (article) Accepted

Abstract
Teaching clever heuristics is a promising approach to improve decision-making. We can leverage machine learning to discover clever strategies automatically. Current methods require an accurate model of the decision problems people face in real life. But most models are misspecified because of limited information and cognitive biases. To address this problem, we develop strategy discovery methods that are robust to model misspecification. Robustness is achieved by modeling model misspecification using common cognitive biases and handling uncertainty about the real world according to Bayesian inference. We translate our methods into an intelligent tutor that automatically discovers and teaches robust planning strategies. Our robust cognitive tutor significantly improved human decision-making when the model was so biased that conventional cognitive tutors were no longer effective. These findings highlight that our robust strategy discovery methods are a significant step towards leveraging artificial intelligence to improve human decision-making in the real world.
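
The core idea of handling uncertainty about the real world with Bayesian inference can be illustrated with a small model-averaging sketch (a toy example under our own assumptions, not the authors' tutor): maintain a posterior over candidate, possibly biased, models of the environment and score each candidate strategy by its posterior-weighted expected return.

# Toy sketch of picking a planning strategy that is robust to model misspecification.
# Models, strategies, and all numbers are hypothetical, not the authors' implementation.

def posterior(prior, likelihoods):
    # Bayes' rule over a discrete set of candidate environment models.
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def robust_choice(expected_return, post):
    # expected_return[strategy] = list of returns, one per candidate model.
    def score(strategy):
        return sum(p * r for p, r in zip(post, expected_return[strategy]))
    return max(expected_return, key=score)

prior = [0.5, 0.5]                 # two candidate models of the decision environment
likelihoods = [0.2, 0.8]           # how well each model explains the available data
post = posterior(prior, likelihoods)
returns = {"strategy_A": [10.0, 2.0], "strategy_B": [6.0, 5.0]}
print(post, robust_choice(returns, post))   # strategy_B wins under this posterior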

Improving Human Decision-Making using Metalevel-RL and Bayesian Inference [BibTex]


A Gamified App that Helps People Overcome Self-Limiting Beliefs by Promoting Metacognition

Amo, V., Lieder, F.

SIG 8 Meets SIG 16, SIG 8 Meets SIG 16, September 2020 (conference) Accepted

Abstract
Previous research has shown that approaching learning with a growth mindset is key for maintaining motivation and overcoming setbacks. Mindsets are systems of beliefs that people hold to be true. They influence a person's attitudes, thoughts, and emotions when they learn something new or encounter challenges. In clinical psychology, metareasoning (reflecting on one's mental processes) and meta-awareness (recognizing thoughts as mental events instead of equating them to reality) have proven effective for overcoming maladaptive thinking styles. Hence, they are potentially an effective method for overcoming self-limiting beliefs in other domains as well. However, the potential of integrating assisted metacognition into mindset interventions has not been explored yet. Here, we propose that guiding and training people on how to leverage metareasoning and meta-awareness for overcoming self-limiting beliefs can significantly enhance the effectiveness of mindset interventions. To test this hypothesis, we develop a gamified mobile application that guides and trains people to use metacognitive strategies based on Cognitive Restructuring (CR) and Acceptance and Commitment Therapy (ACT) techniques. The application helps users to identify and overcome self-limiting beliefs by working with aversive emotions when they are triggered by fixed mindsets in real-life situations. Our app aims to help people sustain their motivation to learn when they face inner obstacles (e.g., anxiety, frustration, and demotivation). We expect the application to be an effective tool for helping people better understand and develop the metacognitive skills of emotion regulation and self-regulation that are needed to overcome self-limiting beliefs and develop growth mindsets.

A gamified app that helps people overcome self-limiting beliefs by promoting metacognition Project Page [BibTex]


Optimal To-Do List Gamification

Stojcheski, J., Felso, V., Lieder, F.

ArXiv Preprint, 2020 (techreport)

Abstract
What should I work on first? What can wait until later? Which projects should I prioritize and which tasks are not worth my time? These are challenging questions that many people face every day. People's intuitive strategy is to prioritize their immediate experience over the long-term consequences. This leads to procrastination and the neglect of important long-term projects in favor of seemingly urgent tasks that are less important. Optimal gamification strives to help people overcome these problems by incentivizing each task by a number of points that communicates how valuable it is in the long run. Unfortunately, computing the optimal number of points with standard dynamic programming methods quickly becomes intractable as the number of a person's projects and the number of tasks required by each project increase. Here, we introduce and evaluate a scalable method for identifying which tasks are most important in the long run and incentivizing each task according to its long-term value. Our method makes it possible to create to-do list gamification apps that can handle the size and complexity of people's to-do lists in the real world.
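
To make the idea of incentivizing each task by its long-term value concrete, here is a toy value-iteration sketch on a three-state to-do MDP (our own illustration; the report's scalable method is designed precisely to avoid this kind of exhaustive dynamic programming). Points are assigned as potential-based pseudo-rewards derived from the optimal value function, so tasks that advance important long-term projects score higher than tempting but unimportant ones.

# Toy sketch: derive task incentives from the optimal value function of a tiny to-do MDP.
# States, tasks, rewards, and the discount factor are hypothetical.

GAMMA = 0.95
# state -> {task: (next_state, immediate_reward)}
MDP = {
    "start":        {"write_outline": ("outline_done", 0.0),
                     "check_email":   ("start",        0.5)},
    "outline_done": {"write_report":  ("done",         20.0)},
    "done":         {},
}

def value_iteration(mdp, gamma=GAMMA, iters=200):
    v = {s: 0.0 for s in mdp}
    for _ in range(iters):
        v = {s: max((r + gamma * v[s2] for s2, r in mdp[s].values()), default=0.0)
             for s in mdp}
    return v

def points(mdp, v, gamma=GAMMA):
    # Potential-based pseudo-reward for each task: r + gamma * V*(s') - V*(s).
    return {(s, task): round(r + gamma * v[s2] - v[s], 2)
            for s in mdp for task, (s2, r) in mdp[s].items()}

v_star = value_iteration(MDP)
# The tempting-but-unimportant task gets negative points; tasks on the optimal path get 0.
print(points(MDP, v_star))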

link (url) DOI Project Page [BibTex]


no image
How to navigate everyday distractions: Leveraging optimal feedback to train attention control

Wirzberger, M., Lado, A., Eckerstorfer, L., Oreshnikov, I., Passy, J., Stock, A., Shenhav, A., Lieder, F.

Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, Cognitive Science Society, July 2020 (conference)

Abstract
To stay focused on their chosen tasks, people have to inhibit distractions. The underlying attention control skills can improve through reinforcement learning, which can be accelerated by giving feedback. We applied the theory of metacognitive reinforcement learning to develop a training app that gives people optimal feedback on their attention control while they are working or studying. In an eight-day field experiment with 99 participants, we investigated the effect of this training on people's productivity, sustained attention, and self-control. Compared to a control condition without feedback, we found that participants receiving optimal feedback learned to focus increasingly better (f = .08, p < .01) and achieved higher productivity scores (f = .19, p < .01) during the training. In addition, they evaluated their productivity more accurately (r = .12, p < .01). However, due to asymmetric attrition problems, these findings need to be taken with a grain of salt.

How to navigate everyday distractions: Leveraging optimal feedback to train attention control DOI Project Page [BibTex]


no image
Measuring the Costs of Planning

Felso, V., Jain, Y. R., Lieder, F.

Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, (Editors: S. Denison and M. Mack and Y. Zu and B. C. Armstrong), Cognitive Science Society, CogSci 2020, July 2020 (conference) Accepted

Abstract
Which information is worth considering depends on how much effort it would take to acquire and process it. From this perspective, people's tendency to neglect considering the long-term consequences of their actions (present bias) might reflect that looking further into the future becomes increasingly effortful. In this work, we introduce and validate the use of Bayesian Inverse Reinforcement Learning (BIRL) for measuring individual differences in the subjective costs of planning. We extend the resource-rational model of human planning introduced by Callaway, Lieder, et al. (2018) by parameterizing the cost of planning. Using BIRL, we show that an increased subjective cost of considering future outcomes may be associated with both present bias and acting without planning. Our results highlight testing the causal effects of the cost of planning on both present bias and mental effort avoidance as a promising direction for future work.
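
The flavor of this inference can be conveyed with a toy example (ours, not the authors' BIRL implementation, which inverts a resource-rational planning model rather than the simple Poisson stand-in used here): place a prior over candidate planning-cost values, score how well each value explains the number of planning operations a participant performed, and normalize to obtain a posterior.

# Toy Bayesian inference over a subjective planning-cost parameter.
# The behavioral model and all numbers are hypothetical stand-ins.
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def posterior_over_cost(observed_clicks, costs, prior=None):
    # Illustrative assumption: the expected number of planning operations per trial
    # falls off as 1 / cost, and observed clicks are Poisson-distributed around it.
    prior = prior or [1.0 / len(costs)] * len(costs)
    likelihoods = [math.prod(poisson_pmf(k, 1.0 / c) for k in observed_clicks) for c in costs]
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return {c: u / z for c, u in zip(costs, unnorm)}

clicks = [3, 2, 4, 3]   # hypothetical planning operations (clicks) on four trials
# The posterior concentrates on the cost whose implied click rate best matches the data.
print(posterior_over_cost(clicks, costs=[0.1, 0.3, 1.0]))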

Project Page [BibTex]


no image
Leveraging Machine Learning to Automatically Derive Robust Planning Strategies from Biased Models of the Environment

Kemtur, A., Jain, Y. R., Mehta, A., Callaway, F., Consul, S., Stojcheski, J., Lieder, F.

Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, Cognitive Science Society, CogSci 2020, July 2020, Anirudha Kemtur and Yash Raj Jain contributed equally to this publication. (conference)

Abstract
Teaching clever heuristics is a promising approach to improve decision-making. We can leverage machine learning to discover clever strategies automatically. Current methods require an accurate model of the decision problems people face in real life. But most models are misspecified because of limited information and cognitive biases. To address this problem, we develop strategy discovery methods that are robust to model misspecification. Robustness is achieved by modeling model misspecification and handling uncertainty about the real world according to Bayesian inference. We translate our methods into an intelligent tutor that automatically discovers and teaches robust planning strategies. Our robust cognitive tutor significantly improved human decision-making when the model was so biased that conventional cognitive tutors were no longer effective. These findings highlight that our robust strategy discovery methods are a significant step towards leveraging artificial intelligence to improve human decision-making in the real world.

Leveraging Machine Learning to Automatically Derive Robust Planning Strategies from Biased Models of the Environment Project Page [BibTex]


no image
Advancing Rational Analysis to the Algorithmic Level

Lieder, F., Griffiths, T. L.

Behavioral and Brain Sciences, 43, Cambridge University Press, March 2020 (article)

Abstract
The commentaries raised questions about normativity, human rationality, cognitive architectures, cognitive constraints, and the scope of resource-rational analysis (RRA). We respond to these questions and clarify that RRA is a methodological advance that extends the scope of rational modeling to understanding cognitive processes, why they differ between people, why they change over time, and how they could be improved.

Advancing rational analysis to the algorithmic level link (url) DOI Project Page [BibTex]