Hands-On Mathematics for Deep Learning

Jay Dawani

  • Publisher: Packt Publishing
  • Publication date: 2020-06-12
  • List price: $1,330
  • VIP price: $1,264 (95% of list price)
  • Language: English
  • Pages: 364
  • Binding: Paperback
  • ISBN: 1838647295
  • ISBN-13: 9781838647292
  • Related categories: DeepLearning
  • In stock, ships immediately (stock = 1)

Product Description

Key Features

  • Understand linear algebra, calculus, gradient algorithms, and other concepts essential for training deep neural networks
  • Learn the mathematical concepts needed to understand how deep learning models function
  • Use deep learning for solving problems related to vision, image, text, and sequence applications

Book Description

Most programmers and data scientists struggle with mathematics, having either overlooked or forgotten core mathematical concepts. This book uses Python libraries to help you understand the math required to build deep learning (DL) models.

You'll begin by learning about the core mathematical and modern computational techniques used to design and implement DL algorithms. The book covers essential topics such as linear algebra, eigenvalues and eigenvectors, singular value decomposition (SVD), and gradient algorithms to help you understand how deep neural networks are trained. Later chapters turn to important network architectures, such as linear neural networks and multilayer perceptrons, with an emphasis on how each model works. As you advance, you will delve into the math behind regularization, multi-layered DL, forward propagation, optimization, and backpropagation, and learn what it takes to build full-fledged DL models. Finally, you'll explore convolutional neural network (CNN), recurrent neural network (RNN), and generative adversarial network (GAN) models and their applications.
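
To make the kind of math the description refers to concrete, here is a minimal NumPy sketch (not taken from the book; the matrix sizes, loss function, and learning rate are arbitrary illustrative choices) showing SVD, the related eigendecomposition, and a single gradient-descent step on a least-squares loss:

    import numpy as np

    # A small random matrix standing in for a layer's weight matrix.
    A = np.random.randn(4, 3)

    # Singular value decomposition: A = U @ diag(S) @ Vt.
    U, S, Vt = np.linalg.svd(A, full_matrices=False)

    # Eigenvalues/eigenvectors of the symmetric Gram matrix A^T A;
    # its eigenvalues are the squares of A's singular values.
    eigvals, eigvecs = np.linalg.eigh(A.T @ A)

    # One gradient-descent step on the least-squares loss ||A x - b||^2,
    # whose gradient with respect to x is 2 A^T (A x - b).
    b = np.random.randn(4)
    x = np.zeros(3)
    grad = 2 * A.T @ (A @ x - b)
    x = x - 0.1 * grad  # learning rate 0.1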

By the end of this book, you'll have built a strong foundation in neural networks and DL mathematical concepts, which will help you to confidently research and build custom models in DL.

What you will learn

  • Understand the key mathematical concepts for building neural network models
  • Discover core multivariable calculus concepts
  • Improve the performance of deep learning models using optimization techniques
  • Cover optimization algorithms, from basic stochastic gradient descent (SGD) to the advanced Adam optimizer (see the sketch after this list)
  • Understand computational graphs and their importance in DL
  • Explore the backpropagation algorithm to reduce output error
  • Cover DL algorithms such as convolutional neural networks (CNNs), sequence models, and generative adversarial networks (GANs)
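
The optimizer bullets above can be made concrete with a short sketch. The following is a minimal NumPy illustration of the SGD and Adam update rules, not code from the book; the hyperparameter defaults shown are simply the commonly used ones:

    import numpy as np

    def sgd_step(w, grad, lr=0.01):
        # Plain stochastic gradient descent: move against the gradient.
        return w - lr * grad

    def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        # Adam keeps exponential moving averages of the gradient (m) and of
        # its square (v), corrects their start-up bias, and scales the step
        # per parameter by the square root of the second-moment estimate.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
        return w, m, v

    # Toy usage: minimise f(w) = ||w||^2, whose gradient is 2w.
    w = np.array([1.0, -2.0])
    m, v = np.zeros_like(w), np.zeros_like(w)
    for t in range(1, 201):
        w, m, v = adam_step(w, 2 * w, m, v, t)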

Who this book is for

This book is for data scientists, machine learning developers, aspiring deep learning developers, or anyone who wants to understand the foundation of deep learning by learning the math behind it. Working knowledge of the Python programming language and machine learning basics is required.

About the Author

Jay Dawani is a former professional swimmer turned mathematician and computer scientist. He is also a Forbes 30 Under 30 Fellow. At present, he is the Director of Artificial Intelligence at Geometric Energy Corporation (NATO CAGE) and the CEO of Lemurian Labs, a startup he founded that is developing the next generation of autonomy, intelligent process automation, and driver intelligence. Previously, he was also the technology and R&D advisor to Spacebit Capital. He has spent the last three years researching at the frontiers of AI, with a focus on reinforcement learning, open-ended learning, deep learning, quantum machine learning, human-machine interaction, multi-agent and complex systems, and artificial general intelligence.

Table of Contents

  1. Linear Algebra
  2. Vector Calculus
  3. Probability and Statistics
  4. Optimization
  5. Graph Theory
  6. Linear Neural Networks
  7. Feedforward Neural Networks
  8. Regularization
  9. Convolutional Neural Networks
  10. Recurrent Neural Networks
  11. Attention Mechanisms
  12. Generative Models
  13. Transfer and Meta Learning
  14. Geometric Deep Learning

目錄大綱(中文翻譯)

線性代數
向量微積分
機率與統計
最佳化
圖論
線性神經網路
前饋神經網路
正規化
卷積神經網路
循環神經網路
注意機制
生成模型
轉移與元學習
幾何深度學習