
Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues

MCML Authors


Eyke Hüllermeier

Prof. Dr.

Principal Investigator

Abstract

The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research has focused on co-constructive explanation dialogues, where the explainer continuously monitors the explainee's understanding and adapts explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with LLMs, of which some have been instructed to explain a predefined topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results indicate that current LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.

misc FSH+25b


Preprint

Apr. 2025

Authors

L. Fichtel • M. Spliethöver • E. Hüllermeier • P. Jimenez • N. Klowait • S. Kopp • A.-C. N. Ngomo • A. Robrecht • I. Scharlau • L. Terfloth • A.-L. Vollmer • H. Wachsmuth

Links

arXiv

Research Area

A3 | Computational Models

BibTeXKey: FSH+25b
