
Pretrained Transformers for Text Ranking [electronic resource] : BERT and Beyond / by Jimmy Lin, Rodrigo Nogueira, Andrew Yates.

By: Lin, Jimmy [author.].
Contributor(s): Nogueira, Rodrigo [author.] | Yates, Andrew [author.] | SpringerLink (Online service).
Material type: Book
Series: Synthesis Lectures on Human Language Technologies
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2022
Edition: 1st ed. 2022
Description: XVII, 307 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031021817
Subject(s): Artificial intelligence | Natural language processing (Computer science) | Computational linguistics
DDC classification: 006.3
Online resources: Available online via SpringerLink
Contents:
Preface -- Acknowledgments -- Introduction -- Setting the Stage -- Multi-Stage Architectures for Reranking -- Refining Query and Document Representations -- Learned Dense Representations for Ranking -- Future Directions and Conclusions -- Bibliography -- Authors' Biographies.
In: Springer Nature eBook
Summary: The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications. This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. This book provides a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems and researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures and dense retrieval techniques that perform ranking directly. Two themes pervade the book: techniques for handling long documents, beyond the typical sentence-by-sentence processing in NLP, and techniques for addressing the tradeoff between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, many open research questions remain, and thus in addition to laying out the foundations of pretrained transformers for text ranking, this book also attempts to prognosticate where the field is heading.
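The summary distinguishes the book's two technique families: reranking, where a transformer scores a query-passage pair jointly, and dense retrieval, where query and passage are embedded independently and compared by vector similarity. The following is a minimal sketch of that contrast, not taken from the book; it assumes the Hugging Face transformers library, and the two MS MARCO checkpoint names are illustrative assumptions rather than the authors' models.

```python
# Sketch (not from the book): cross-encoder reranking vs. dense (bi-encoder)
# retrieval. Checkpoint names are illustrative assumptions.
import torch
from transformers import AutoModel, AutoModelForSequenceClassification, AutoTokenizer

query = "what is text ranking?"
passage = "Text ranking produces an ordered list of texts in response to a query."

# Reranking: the query and passage are scored jointly by one transformer.
rerank_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(rerank_name)
reranker = AutoModelForSequenceClassification.from_pretrained(rerank_name)
inputs = tok(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = reranker(**inputs).logits.squeeze().item()  # higher = more relevant

# Dense retrieval: query and passage are encoded independently;
# relevance is a similarity (here, a dot product) between the two vectors.
enc_name = "sentence-transformers/msmarco-distilbert-base-v4"  # assumed checkpoint
enc_tok = AutoTokenizer.from_pretrained(enc_name)
encoder = AutoModel.from_pretrained(enc_name)

def embed(text: str) -> torch.Tensor:
    batch = enc_tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)  # mean-pool token vectors

sim = torch.dot(embed(query), embed(passage)).item()
print(f"cross-encoder score: {score:.3f}  dense similarity: {sim:.3f}")
```

The efficiency/effectiveness tradeoff the summary mentions is visible here: the cross-encoder must run once per query-passage pair at query time, while the bi-encoder's passage vectors can be precomputed and indexed, trading some result quality for much lower query latency.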


