Large Language Models: A Deep Dive: Bridging Theory and Practice

Uday Kamath, Kevin Keenan, Sarah Sorenson, Garrett Somers

  • Publisher: Springer
  • Publication date: 2024-08-21
  • List price: $3,290
  • VIP price (5% off): $3,126
  • Language: English
  • Pages: 472
  • Binding: Hardcover
  • ISBN: 3031656466
  • ISBN-13: 9783031656460
  • Related categories: LangChain
  • Imported title purchased overseas (requires separate checkout)

Description

Large Language Models (LLMs) have emerged as a cornerstone technology, transforming how we interact with information and redefining the boundaries of artificial intelligence. LLMs offer an unprecedented ability to understand, generate, and interact with human language in an intuitive and insightful manner, leading to transformative applications across domains like content creation, chatbots, search engines, and research tools. While fascinating, the complex workings of LLMs--their intricate architecture, underlying algorithms, and ethical considerations--require thorough exploration, creating a need for a comprehensive book on this subject.

This book provides an authoritative exploration of the design, training, evolution, and application of LLMs. It begins with an overview of pre-trained language models and Transformer architectures, laying the groundwork for understanding prompt-based learning techniques. Next, it dives into methods for fine-tuning LLMs, integrating reinforcement learning for value alignment, and the convergence of LLMs with computer vision, robotics, and speech processing. The book strongly emphasizes practical applications, detailing real-world use cases such as conversational chatbots, retrieval-augmented generation (RAG), and code generation. These examples are carefully chosen to illustrate the diverse and impactful ways LLMs are being applied in various industries and scenarios.
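
The RAG and chatbot use cases mentioned above follow a common pattern: retrieve relevant documents for a query, then ground the model's answer in that retrieved context. The sketch below is a minimal, illustrative Python version of that loop, not code from the book's tutorials; the relevance score is a toy token-overlap measure, and call_llm is a hypothetical stand-in for whatever model endpoint a real system would use.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumptions: a tiny in-memory corpus, a toy token-overlap relevance score,
# and a hypothetical `call_llm` function in place of a real model endpoint.

def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query tokens that also appear in the doc."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Pack the retrieved context into the prompt so the answer is grounded in it."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

corpus = [
    "The Transformer architecture relies on self-attention.",
    "RAG pairs a retriever with a generator to ground answers in documents.",
    "Reinforcement learning from human feedback aligns models with preferences.",
]
query = "How does RAG ground its answers?"
prompt = build_prompt(query, retrieve(query, corpus))
# answer = call_llm(prompt)  # hypothetical LLM call; any chat or completion API fits here
print(prompt)
```

A production system would swap the token-overlap score for dense embeddings and a vector store, but the control flow stays the same.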

Readers will gain insights into operationalizing and deploying LLMs, from implementing modern tools and libraries to addressing challenges like bias and ethical implications. The book also introduces the cutting-edge realm of multimodal LLMs that can process audio, images, video, and robotic inputs. With hands-on tutorials for applying LLMs to natural language tasks, this thorough guide equips readers with both theoretical knowledge and practical skills for leveraging the full potential of large language models.
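
As a concrete, hedged example of the tooling side, the snippet below loads a small open model with the Hugging Face transformers library and generates a continuation; the choice of gpt2 is an assumption made so the example runs on modest hardware (including Google Colab), not a model the book prescribes.

```python
# Minimal sketch of operationalizing an open LLM with the `transformers` library.
# Assumption: "gpt2" as a small, freely available base model for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```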

This comprehensive resource is appropriate for a wide audience: students, researchers and academics in AI or NLP, practicing data scientists, and anyone looking to grasp the essence and intricacies of LLMs.

Key Features:

  • Over 100 techniques and state-of-the-art methods, including pre-training, prompt-based tuning, instruction tuning, parameter-efficient and compute-efficient fine-tuning (see the sketch after this list), end-user prompt engineering, and building and optimizing Retrieval-Augmented Generation systems, along with strategies for aligning LLMs with human values using reinforcement learning
  • Over 200 datasets compiled in one place, covering everything from pre-training to multimodal tuning, providing a robust foundation for diverse LLM applications
  • Over 50 strategies for addressing key ethical issues such as hallucination, toxicity, bias, fairness, and privacy, with comprehensive methods for measuring, evaluating, and mitigating these challenges to ensure responsible LLM deployment
  • Over 200 benchmarks covering LLM performance across various tasks, ethical considerations, and multimodal applications, plus more than 50 evaluation metrics spanning the LLM lifecycle
  • Nine detailed tutorials that guide readers through pre-training, fine-tuning, alignment tuning, bias mitigation, multimodal training, and deploying large language models using tools and libraries compatible with Google Colab, ensuring practical application of theoretical concepts
  • Over 100 practical tips for data scientists and practitioners, offering implementation details, tricks, and tools to successfully navigate the LLM lifecycle and accomplish tasks efficiently
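
As promised in the first feature above, here is a brief sketch of parameter-efficient fine-tuning: LoRA adapters attached to a small base model via the peft library. The base model, target module, and hyperparameters are illustrative assumptions, not values taken from the book's tutorials.

```python
# Parameter-efficient fine-tuning sketch: LoRA adapters on a small causal LM.
# Assumptions: "gpt2" as the base model and illustrative LoRA hyperparameters.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection layer
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```

Training then proceeds with any standard loop or trainer; only the adapter parameters are updated, which is what makes the approach compute- and memory-efficient.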

About the Authors

Uday Kamath has 25 years of experience in analytical development and a Ph.D. in scalable machine learning. His contributions span numerous journals, conferences, books, and patents. Notable books include Applied Causal Inference, Explainable Artificial Intelligence, Transformers for Machine Learning, Deep Learning for NLP and Speech Recognition, Mastering Java Machine Learning, and Machine Learning: End-to-End Guide for Java Developers. He currently serves as Chief Analytics Officer at Smarsh, where he spearheads data science and research in communication AI. He is also an active member of advisory boards for several organizations, including commercial companies such as Falkonry and academic institutions such as the Center for Human-Machine Partnership at GMU.

Kevin Keenan, Ph.D., has more than 15 years of experience applying statistics, data analytics, and machine learning to real-world data across academia, cybersecurity, and financial services. Within these domains, he has specialized in the rigorous application of the scientific method, especially in scrappy commercial environments where data quality and completeness are never ideal but from which immense value and insight can still be derived. With 8+ years of experience using NLP to surface human-mediated corporate, legal, and regulatory risk from communications and deep-packet network traffic data, Kevin has successfully delivered machine learning applied to unstructured data at huge scale. He is the author of four published scientific papers in the field of evolutionary genetics, with over 1,400 citations, and is the author and maintainer of the open-source "diveRsity" project for population genetics research in the R statistical programming language.

Sarah Sorenson has spent over 15 years working in the software industry. She is a polyglot programmer, having done full-stack development in Python, Java, C#, and JavaScript at various times. She has spent the past ten years building machine learning capabilities and putting them into operation, primarily in the financial services domain. She has extensive experience in the application of machine learning to fraud detection and, most recently, has specialized in the development and deployment of NLP models for regulatory compliance on large-scale communications data at some of the world's top banks.

Garrett Somers has been doing data-intensive research for over 10 years. Trained as an astrophysicist, he began his career studying X-ray emissions from distant black holes before authoring his dissertation on numerical models of the evolving structure, spin, and magnetic fields of stars. He is the first author of eight peer-reviewed astrophysics articles totaling over 400 citations and a contributing author on an additional twenty-seven (over 4,000 citations in total). In 2019, he began a career in data science, specializing in applications of natural language processing to behavioral analysis in large communication corpora.
