Intelligent Systems
Note: This research group has relocated.

Automatic Discovery of Interpretable Planning Strategies

2021

Article



When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigating these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging the aforementioned methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that providing decision rules as flowcharts significantly improved people's planning strategies and decisions across three different classes of sequential decision problems. Furthermore, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making.
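The core idea of the subset-selection step described above can be illustrated with a minimal sketch: given demonstrated decisions, search a small space of candidate rules for the one that accurately describes the largest subset of demonstrations. All names, the single-threshold rule space, and the data format here are illustrative assumptions, not the authors' actual implementation.

```python
def best_simple_rule(demos):
    """Toy sketch of finding a simple, high-coverage decision rule.

    demos: list of (feature_value, action) pairs with binary actions 0/1.
    Candidate rules have the form 'take action 1 iff feature >= threshold'.
    Returns (threshold, covered), where covered is the largest subset of
    demonstrations consistent with the chosen rule.
    """
    best_threshold, best_covered = None, []
    for t in sorted({f for f, _ in demos}):
        # a demonstration is "covered" if the rule reproduces its action
        covered = [(f, a) for f, a in demos if (f >= t) == (a == 1)]
        if len(covered) > len(best_covered):
            best_threshold, best_covered = t, covered
    return best_threshold, best_covered


# Example: six clean demonstrations plus one noisy one at feature 4
demos = [(1, 0), (2, 0), (3, 0), (4, 1), (6, 1), (7, 1), (8, 1)]
threshold, covered = best_simple_rule(demos)
```

In the full algorithm, the rule space consists of logical programs found by program induction rather than single thresholds, and the clustering step trades off the size of the covered subset against the performance of the resulting rule.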

Author(s): Julian Skirzyński and Frederic Becker and Falk Lieder
Journal: Machine Learning
Year: 2021

Department(s): Rationality Enhancement
Research Project(s): Interpretable Strategy Discovery
Metacognitive Learning
Bibtex Type: Article (article)
Paper Type: Journal

Language: English
State: Published

Links: Automatic Discovery of Interpretable Planning Strategies
The code for our algorithm and the experiments is available.

BibTex

@article{Skirzynski2021Automatic,
  title = {Automatic Discovery of Interpretable Planning Strategies},
  author = {Skirzyński, Julian and Becker, Frederic and Lieder, Falk},
  journal = {Machine Learning},
  year = {2021},
  doi = {}
}