Auto-Grader - Auto-Grading Free Text Answers


Teachers spend a great amount of time grading free-text answer questions. To address this challenge, an auto-grader system is proposed. The thesis shows that the auto-grader can be approached with simple, recurrent, and Transformer-based neural networks, with the Transformer-based models performing best. It further demonstrates that a geometric representation of question-answer pairs is a worthwhile strategy for an auto-grader. Finally, it indicates that while the auto-grader could help teachers save time on grading, it is not yet capable of fully replacing them for this task.
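The geometric approach mentioned above can be sketched in a few lines: represent the reference answer and a student answer as embedding vectors, then grade by thresholding their cosine similarity. The sketch below is a minimal, hypothetical illustration only; the vectors, `auto_grade` helper, and the threshold value are assumptions for demonstration, not the thesis's actual pipeline (which uses models such as multilingual BERT and LaBSE to produce the embeddings).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def auto_grade(reference_emb: np.ndarray, answer_emb: np.ndarray,
               threshold: float = 0.8) -> bool:
    """Mark an answer correct if it lies close enough to the reference
    answer in embedding space (threshold chosen on validation data)."""
    return cosine_similarity(reference_emb, answer_emb) >= threshold

# Toy 3-d vectors standing in for real sentence embeddings.
ref = np.array([0.9, 0.1, 0.3])    # reference answer embedding
good = np.array([0.85, 0.15, 0.35])  # paraphrased correct answer
bad = np.array([0.1, 0.9, 0.2])    # unrelated answer
print(auto_grade(ref, good))  # True  - nearly parallel to the reference
print(auto_grade(ref, bad))   # False - points in a different direction
```

In practice the threshold is not fixed by hand but calibrated on labeled data, which is why the thesis devotes a section (5.4.2) to threshold calculation.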

Author(s): Robin Richner
Series: BestMasters
Publisher: Springer Gabler
Year: 2022

Language: English
Pages: 105
City: Wiesbaden

Acknowledgements
Abstract
Contents
List of Figures
List of Tables
1 Introduction
1.1 Research Problem
1.2 Research Objective and Contribution
2 Research Design
3 Research Background
3.1 Learning Methods
3.2 Representation of Words
3.3 Artificial Neural Networks
3.3.1 Recurrent Networks
3.3.2 Convolutional Networks
3.4 Transfer Learning
3.5 Transformers
3.5.1 BERT
3.5.2 GPT-3
3.6 Automatic Grading
4 Data
4.1 Exploratory Data Analysis and Preprocessing
4.1.1 Question Data
4.1.2 Answer Data
4.2 Classification of Answers into Types
4.3 Preprocessed Data
5 Model Development
5.1 Data Augmentation and Processing
5.1.1 Data Split
5.1.2 From Letters to Numbers
5.1.3 Batch Generator
5.2 Simple Layer Model
5.3 Recurrent Models
5.3.1 GRU
5.3.2 LSTM
5.4 Pre-trained Models
5.4.1 Data Augmentation
5.4.2 Threshold Calculation
5.4.3 Multilingual BERT
5.4.4 LaBSE
6 Evaluation
6.1 First Iteration: Small Data Set
6.2 All Data and Hyperparameter Tuning
6.3 Misclassification Analysis
7 Discussion, Limitations and Further Research
7.1 Preprocessing
7.2 Data Augmentation and Output
7.3 Pre-training
7.4 Fine-tuning
7.5 Bias
8 Conclusion
Glossary
References