
Accountable and Explainable Methods for Complex Reasoning over Text [electronic resource] / by Pepa Atanasova.

By: Atanasova, Pepa [author].
Contributor(s): SpringerLink (Online service).
Material type: Book
Publisher: Cham : Springer Nature Switzerland : Imprint: Springer, 2024
Edition: 1st ed. 2024.
Description: XVIII, 199 p., 24 illus. in color; online resource.
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031515187
Subject(s): Natural language processing (Computer science) | Information storage and retrieval systems | Machine learning | Natural Language Processing (NLP) | Information Storage and Retrieval | Machine Learning
Additional physical formats: Printed edition (no title recorded); Printed edition (no title recorded)
DDC classification: 006.35
Online resources: available online
Contents:
1. Executive Summary
Part I: Accountability for Complex Reasoning Tasks over Text
2. Fact Checking with Insufficient Evidence
3. Generating Label Cohesive and Well-Formed Adversarial Claims
Part II: Explainability for Complex Reasoning Tasks over Text
4. Generating Fact Checking Explanations
5. Generating Fluent Fact Checking Explanations with Unsupervised Post-Editing
6. Multi-Hop Fact Checking of Political Claims
Part III: Diagnostic Explainability Methods
7. A Diagnostic Study of Explainability Techniques for Text Classification
8. Diagnostics-Guided Explanation Generation
9. Recent Developments on Accountability and Explainability for Complex Reasoning Tasks
In: Springer Nature eBook
Summary: This thesis presents research that expands the collective knowledge in the areas of accountability and transparency of machine learning (ML) models developed for complex reasoning tasks over text. In particular, the presented results facilitate the analysis of the reasons behind the outputs of ML models and assist in detecting and correcting potential harms. The thesis introduces two new methods for accountable ML models; advances the state of the art with methods that generate textual explanations, further improved to be fluent, easy to read, and to contain logically connected multi-chain arguments; and makes substantial contributions in the area of diagnostics for explainability approaches. All results are empirically tested on complex reasoning tasks over text, including fact checking, question answering, and natural language inference. This book is a revised version of the PhD dissertation written by the author to receive her PhD from the Faculty of Science, University of Copenhagen, Denmark. In 2023, it won the Informatics Europe Best Dissertation Award, granted to the most outstanding European PhD thesis in the field of computer science.