Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors


This repository contains the dataset and code for the EMNLP 2024 paper "Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors".

Abstract

Large language models (LLMs) offer many opportunities to scale high-quality personalized tutoring. A promising approach is to build dialog tutoring models that scaffold students' problem-solving. However, even though existing models perform well in solving reasoning questions, they can struggle to precisely detect students' errors and tailor their feedback to these errors. Inspired by real-world teaching practice, where teachers identify student errors and customize their responses based on them, we focus on verifying student solutions and show how grounding to such verification improves the overall quality of tutor response generation. We collect a dataset of 1,002 stepwise math reasoning chains with the first error step annotated by teachers. We show empirically that finding the mistake in a student solution is challenging for current models. We propose and evaluate several verifiers for detecting these errors. Using both automatic and human evaluation, we show that the student solution verifiers steer the generation model towards highly targeted responses to student errors, which are more often correct with fewer hallucinations compared to existing baselines. The benchmark dataset and code will be released openly.


Getting Started

Install dependencies with:

pip install -r requirements.txt

Dataset

The dataset is available in the `dataset` folder. It extends MathDial with teacher annotations of the first erroneous step in each student solution.
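As a minimal sketch of working with the stepwise annotations, the snippet below parses a toy record and looks up the annotated first error step. The field names (`question`, `student_steps`, `first_error_step`) and the 1-indexed annotation are assumptions for illustration; check the actual files in `dataset/` for the real schema.

```python
import json

# Hypothetical record layout -- the actual field names and indexing in the
# released dataset may differ; inspect the files in `dataset/` to confirm.
sample = json.loads("""{
  "question": "Tom has 3 bags with 4 apples each. How many apples?",
  "student_steps": ["3 + 4 = 7", "So Tom has 7 apples."],
  "first_error_step": 1
}""")

def first_error(record):
    """Return (index, text) of the teacher-annotated first error step."""
    idx = record["first_error_step"] - 1  # assuming 1-indexed annotation
    return idx, record["student_steps"][idx]

idx, step = first_error(sample)
print(idx, step)
```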

Running Models & Evaluation

Verification

python verification/error_verification.py --setting overall_verification --model_name gpt3 --top_n_only 10
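The idea behind stepwise verification can be sketched as follows: check the student's steps one at a time and return the first one the model flags as wrong. This is a hedged illustration, not the repository's actual implementation; `query_model` is a stand-in for any LLM call (e.g. the GPT backend selected by `--model_name`), and the prompt wording is invented for the example.

```python
def verify_stepwise(question, reference_solution, student_steps, query_model):
    """Return the index of the first student step the model flags as
    erroneous, or None if every step passes.

    `query_model` is a placeholder: any callable that takes a prompt string
    and returns a yes/no answer from an LLM.
    """
    for i, step in enumerate(student_steps):
        prompt = (
            f"Problem: {question}\n"
            f"Reference solution: {reference_solution}\n"
            f"Student steps so far: {student_steps[:i + 1]}\n"
            f"Is the last step correct? Answer yes or no."
        )
        if query_model(prompt).strip().lower().startswith("no"):
            return i
    return None

# Toy stand-in model: flags any step that mentions the wrong total "7".
flagged = verify_stepwise(
    "Tom has 3 bags with 4 apples each. How many apples?",
    "3 * 4 = 12",
    ["3 + 4 = 7", "So Tom has 7 apples."],
    lambda p: "no" if "7" in p.split("Student steps so far:")[1] else "yes",
)
print(flagged)  # -> 0
```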

Verification-based Generation

python verification_based_response/main.py --model_name gpt3 --settings baseline --top_n_only 10
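Verification-based generation grounds the tutor's reply in the verifier's output: the detected error step is placed in the prompt so the response targets that specific mistake. The sketch below is a simplified illustration under that assumption; `query_model` and the prompt template are placeholders, not the repository's actual interface.

```python
def generate_grounded_response(question, student_steps, error_idx, query_model):
    """Build a tutor prompt grounded in the verifier's output (the detected
    first error step), so the generated response targets that mistake.

    `query_model` is a placeholder for any LLM call taking a prompt string.
    """
    error_step = student_steps[error_idx]
    prompt = (
        f"Problem: {question}\n"
        f"Student solution: {student_steps}\n"
        f'The first incorrect step is step {error_idx + 1}: "{error_step}"\n'
        f"Write a short tutor response that points the student to this "
        f"error without revealing the full answer."
    )
    return query_model(prompt)

# Toy stand-in model returning a canned tutor turn.
reply = generate_grounded_response(
    "Tom has 3 bags with 4 apples each. How many apples?",
    ["3 + 4 = 7", "So Tom has 7 apples."],
    0,
    lambda p: "Look again at your first step: should you add or multiply?",
)
print(reply)
```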

Citation

@inproceedings{daheim-etal-2024-stepwise,
    title = "Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors",
    author = "Daheim, Nico  and
      Macina, Jakub  and
      Kapur, Manu  and
      Gurevych, Iryna  and
      Sachan, Mrinmaya",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.478/",
    doi = "10.18653/v1/2024.emnlp-main.478",
    pages = "8386--8411",
}