Foundation Models for Natural Language Processing: Pre-Trained Language Models Integrating Media (Paperback)
Gerhard Paaß and Sven Giesselbach
- Publisher: Springer
- Publication date: 2023-05-24
- List price: $2,080
- VIP price: 5% off, $1,976
- Language: English
- Pages: 436
- Binding: Quality Paper (trade paperback)
- ISBN-10: 3031231929
- ISBN-13: 9783031231926
Related categories:
Artificial Intelligence, Machine Learning, Text Mining
In stock (quantity = 1); ships immediately
Description
This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts.
In recent years, a revolutionary new paradigm has been developed for training NLP models. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. They are then fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of media and problem domains, from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models.
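The pre-train-then-fine-tune recipe described above can be illustrated with a deliberately tiny linear stand-in (all data, dimensions, and the least-squares "training" below are synthetic assumptions for the demonstration, not code from the book): an encoder is first fit on plentiful generic data, then frozen while a small task head is fit on only a handful of task examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy setup: a linear "encoder" W stands in for the
# pre-trained model body; W_true generates the synthetic data.
d_in, d_feat = 16, 4
W_true = rng.standard_normal((d_in, d_feat))

# "Pre-training": plenty of generic data, fit the encoder by least squares.
X_pre = rng.standard_normal((1000, d_in))
Y_pre = X_pre @ W_true + 0.01 * rng.standard_normal((1000, d_feat))
W, *_ = np.linalg.lstsq(X_pre, Y_pre, rcond=None)

# "Fine-tuning": the encoder is frozen; only a small task head is fit
# on a handful of labelled examples for a new downstream task.
head_true = rng.standard_normal((d_feat, 1))
X_task = rng.standard_normal((20, d_in))
y_task = (X_task @ W_true) @ head_true
feats = X_task @ W                      # features from the frozen encoder
head, *_ = np.linalg.lstsq(feats, y_task, rcond=None)

# The fine-tuned pipeline generalizes from only 20 task examples,
# because the encoder already captured the shared structure.
X_test = rng.standard_normal((5, d_in))
pred = (X_test @ W) @ head
err = np.abs(pred - (X_test @ W_true) @ head_true).max()
```

The point of the sketch is the division of labor: the expensive fit happens once on generic data, and each downstream task needs only a cheap fit of a small head.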
After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT, and the sequence-to-sequence Transformer) are described, along with the concepts of self-attention and context-sensitive embeddings. Different approaches to improving these models are then discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models in about twenty application areas follows, e.g., question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links to freely available program code are provided. A concluding chapter summarizes the economic opportunities, risk mitigation, and potential developments of AI.
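The self-attention mechanism mentioned above can be sketched in a few lines of NumPy. This is an illustrative single-head example under made-up dimensions and random weight matrices, not code from the book: each output row is a mixture of the value vectors of all tokens, weighted by query-key similarity, which is what makes the resulting embeddings context-sensitive.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # (seq, seq) pairwise relevance
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights              # context-sensitive embeddings

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # toy sizes for the demo
X = rng.standard_normal((seq_len, d_model))  # stand-in token embeddings
Wq = rng.standard_normal((d_model, d_model))
Wk = rng.standard_normal((d_model, d_model))
Wv = rng.standard_normal((d_model, d_model))

out, weights = self_attention(X, Wq, Wk, Wv)
```

In a real Transformer this computation is repeated across multiple heads and layers, with learned rather than random projection matrices.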
About the Authors
Dr. Gerhard Paaß is a Lead Scientist at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS). With a background in mathematics, he is a recognized expert in the field of Artificial Intelligence, particularly in Natural Language Processing. Dr. Paaß has previously worked at UC Berkeley in California and at the University of Technology in Brisbane. He has served as a reviewer and conference chair at various international conferences, including NeurIPS, CIKM, ECML/PKDD, ICDM, and KDD, where he is regularly a member of the program committee. Dr. Paaß has received a best-paper award for work on probabilistic logic and is the author of about 70 publications in international conferences and journals. Recently, he authored the book "Artificial Intelligence: What's Behind the Technology of the Future?" (in German). He is currently involved in the creation of a computer center for Foundation Models. Besides experimental research on Foundation Models, he lectures on Deep Learning and Natural Language Understanding at the University of Bonn and in industry.
Sven Giesselbach leads the Natural Language Understanding (NLU) team at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), where he specializes in Artificial Intelligence and Natural Language Processing. He and his team develop solutions for medical, legal, and general document understanding, which at their core build upon Foundation Models. Sven Giesselbach is also part of the Competence Center for Machine Learning Rhine-Ruhr (ML2R), where he works as a research scientist investigating Informed Machine Learning, a paradigm in which knowledge is injected into machine learning models, in conjunction with language modeling. He has published more than 10 papers on Natural Language Processing and Understanding, focusing on building application-ready NLU systems and integrating expert knowledge at various stages of the solution design. He led the development of the Natural Language Understanding Showroom, a platform for showcasing state-of-the-art NLU models. He regularly gives talks about NLU at summer schools, conferences, and AI meetups.