Improving faithfulness in abstractive text summarization with EDUs using BART
dc.contributor.author | Delpisheh, Narjes | |
dc.contributor.author | University of Lethbridge. Faculty of Arts and Science | |
dc.contributor.supervisor | Chali, Yllias | |
dc.date.accessioned | 2023-05-24T17:09:29Z | |
dc.date.available | 2023-05-24T17:09:29Z | |
dc.date.issued | 2023-04-27 | |
dc.degree.level | Masters | |
dc.description.abstract | Abstractive summarization aims to reproduce the essential information of a source document in a summary by using the summarizer's own words. Although this approach is more similar to how humans summarize, it is more challenging to automate, as it requires a complete understanding of natural language. However, the development of deep learning approaches, such as the sequence-to-sequence model with an attention-based mechanism, and the availability of pre-trained language models have led to improved performance in summarization systems. Nonetheless, abstractive summarization still suffers from issues such as hallucination and unfaithfulness. To address these issues, we propose an approach that uses a guidance signal built from important Elementary Discourse Units (EDUs). We compare our work with a previous guided summarization model and with two other summarization models that enhance summary faithfulness. Our approach was tested on the CNN/Daily Mail dataset, and the results showed improvements in both the faithfulness of the summary and its coverage of the source document. | |
dc.identifier.uri | https://hdl.handle.net/10133/6499 | |
dc.language.iso | en_US | |
dc.proquest.subject | 0984 | |
dc.proquest.yes | Yes | |
dc.publisher | Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science | |
dc.publisher.department | Department of Mathematics and Computer Science | |
dc.publisher.faculty | Arts and Science | |
dc.relation.ispartofseries | Thesis (University of Lethbridge. Faculty of Arts and Science) | |
dc.subject | Faithfulness | |
dc.subject | Abstractive summarization | |
dc.subject | Elementary discourse units | |
dc.subject | EDU | |
dc.subject | Hallucination | |
dc.subject | Text summarization | |
dc.subject | Natural language | |
dc.subject.lcsh | Natural language processing (Computer science) | |
dc.subject.lcsh | Human-computer interaction | |
dc.subject.lcsh | Semantic computing | |
dc.subject.lcsh | Dissertations, Academic | |
dc.title | Improving faithfulness in abstractive text summarization with EDUs using BART | |
dc.type | Thesis |