Getting Started with Google BERT: Build and train state-of-the-art natural language processing models using BERT (Paperback)

Ravichandiran, Sudharsan

Description

Kickstart your NLP journey by exploring BERT and its variants such as ALBERT, RoBERTa, DistilBERT, VideoBERT, and more with Hugging Face's transformers library

Key Features

  • Explore the encoder and decoder of the transformer model
  • Become well-versed with BERT along with ALBERT, RoBERTa, and DistilBERT
  • Discover how to pre-train and fine-tune BERT models for several NLP tasks

Book Description

BERT (Bidirectional Encoder Representations from Transformers) has revolutionized the world of natural language processing (NLP) with its promising results. This book is an introductory guide that will help you get to grips with Google's BERT architecture. With a detailed explanation of the transformer architecture, this book will help you understand how the transformer's encoder and decoder work.

You'll explore the BERT architecture by learning how the BERT model is pre-trained, and then learn how to fine-tune pre-trained BERT for downstream NLP tasks such as sentiment analysis and text summarization with the Hugging Face transformers library. As you advance, you'll learn about different variants of BERT such as ALBERT, RoBERTa, and ELECTRA, and look at SpanBERT, which is used for NLP tasks like question answering. You'll also learn about simpler and faster BERT variants based on knowledge distillation, such as DistilBERT and TinyBERT. The book takes you through M-BERT, XLM, and XLM-R in detail and then introduces you to Sentence-BERT, which is used for obtaining sentence representations. Finally, you'll discover domain-specific BERT models such as BioBERT and ClinicalBERT, and explore an interesting variant called VideoBERT.
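
As a minimal, illustrative sketch (not taken from the book) of the fine-tuning workflow described above, the Hugging Face transformers pipeline API can run sentiment analysis with an already fine-tuned BERT-family checkpoint; the model name below is one publicly available example, assumed here purely for demonstration:

    # Sentiment analysis with a fine-tuned BERT-family model via the
    # Hugging Face transformers library (pip install transformers torch).
    from transformers import pipeline

    # The checkpoint name is an assumption for illustration; any
    # sentiment-classification checkpoint on the Hugging Face Hub works.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    print(classifier("This introduction to BERT is clear and practical."))
    # Example output: [{'label': 'POSITIVE', 'score': 0.999...}]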

By the end of this BERT book, you'll be well-versed with using BERT and its variants for performing practical NLP tasks.

What you will learn

  • Understand the transformer model from the ground up
  • Find out how BERT works and pre-train it using masked language model (MLM) and next sentence prediction (NSP) tasks
  • Get hands-on with BERT by learning to generate contextual word and sentence embeddings (see the sketch after this list)
  • Fine-tune BERT for downstream tasks
  • Get to grips with ALBERT, RoBERTa, ELECTRA, and SpanBERT models
  • Get the hang of the BERT models based on knowledge distillation
  • Understand cross-lingual models such as XLM and XLM-R
  • Explore Sentence-BERT, VideoBERT, and BART
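
As a concrete sketch of the hands-on embedding step mentioned above (again illustrative rather than from the book), pre-trained BERT produces one contextual vector per token, and the [CLS] vector is commonly taken as a rough sentence embedding; the "bert-base-uncased" checkpoint is assumed here:

    # Extracting contextual word and sentence embeddings from pre-trained BERT.
    # "bert-base-uncased" is one standard checkpoint, assumed for illustration.
    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("BERT produces contextual embeddings.",
                       return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # One contextual vector per token: shape (1, seq_len, 768).
    token_embeddings = outputs.last_hidden_state
    # The [CLS] vector (position 0) is a common, if crude, sentence
    # representation; Sentence-BERT (covered in the book) improves on it.
    sentence_embedding = token_embeddings[:, 0, :]
    print(token_embeddings.shape, sentence_embedding.shape)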

Who this book is for

This book is for NLP professionals and data scientists looking to simplify NLP tasks to enable efficient language understanding using BERT. A basic understanding of NLP concepts and deep learning is required to get the best out of this book.

About the Author

Sudharsan Ravichandiran is a data scientist and artificial intelligence enthusiast. He holds a Bachelor's degree in Information Technology from Anna University. His research focuses on practical implementations of deep learning and reinforcement learning, including natural language processing and computer vision. He is an open-source contributor and loves answering questions on Stack Overflow.

Table of Contents

  1. A Primer on the Transformer Model
  2. Understanding the BERT Model
  3. Getting Hands-On with BERT
  4. BERT Variants I - ALBERT, RoBERTa, ELECTRA, and SpanBERT
  5. BERT Variants II - Based on Knowledge Distillation
  6. Exploring BERTSUM for Text Summarization
  7. Applying BERT for Other Languages
  8. Exploring Sentence and Domain-Specific BERT
  9. Working with VideoBERT, BART, and more
