Hyperparameter Optimization in Machine Learning: Make Your Machine Learning and Deep Learning Models More Efficient
Agrawal, Tanay
- Publisher: Apress
- Publication date: 2020-11-29
- List price: $1,400
- Sale price: 5% off, $1,330
- VIP price: 10% off, $1,260
- Language: English
- Pages: 164
- Binding: Quality Paper (also called trade paper)
- ISBN: 1484265785
- ISBN-13: 9781484265789
Related categories: Machine Learning, Deep Learning
In stock, ships immediately (stock = 1)
Product Description
Dive into hyperparameter tuning of machine learning models, focusing on what hyperparameters are and how they work. This book discusses different techniques of hyperparameter tuning, from basic to advanced methods.
This is a step-by-step guide to hyperparameter optimization, starting with what hyperparameters are and how they affect different aspects of machine learning models. It then goes through some basic (brute-force) algorithms of hyperparameter optimization. Further, the author addresses the problems of time and memory constraints using distributed optimization methods. Next, you'll learn about Bayesian optimization for hyperparameter search, which learns from its previous history.
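The brute-force approach mentioned above can be sketched in a few lines: enumerate every combination in a small grid and keep the best. This is an illustrative sketch, not code from the book; the objective function below is a made-up stand-in for an expensive train-and-validate run, and the hyperparameter names are invented for the example.

```python
from itertools import product

# Toy stand-in for training a model and returning its validation error.
# The quadratic form (minimum at lr=0.1, num_trees=100) is an assumption
# chosen purely so the example has a known best point.
def validation_error(learning_rate, num_trees):
    return (learning_rate - 0.1) ** 2 + (num_trees - 100) ** 2 / 10000

# A small search space: grid search evaluates every combination.
grid = {
    "learning_rate": [0.01, 0.1, 0.5],
    "num_trees": [50, 100, 200],
}

best_params, best_score = None, float("inf")
for lr, nt in product(grid["learning_rate"], grid["num_trees"]):
    score = validation_error(lr, nt)
    if score < best_score:
        best_params, best_score = {"learning_rate": lr, "num_trees": nt}, score

print(best_params)  # → {'learning_rate': 0.1, 'num_trees': 100}
```

The cost is the product of all list lengths (here 3 × 3 = 9 evaluations), which is exactly why the book moves on to smarter and distributed methods.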
The book discusses different frameworks, such as Hyperopt and Optuna, which implement sequential model-based global optimization (SMBO) algorithms. During these discussions, you'll focus on different aspects of these libraries, such as the creation of search spaces and distributed optimization.
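The SMBO idea behind these libraries, using the history of evaluated points to decide what to try next, can be illustrated with a toy loop. This is a conceptual sketch only: the nearest-neighbour "surrogate" below stands in for the tree-structured Parzen estimator or Gaussian-process models that Hyperopt and Optuna actually use, and the objective is invented for illustration.

```python
import random

random.seed(0)

# True objective we pretend is expensive (e.g., a model's validation loss).
# Its minimum at x = 0.3 is an assumption made for this example.
def expensive_objective(x):
    return (x - 0.3) ** 2

# Crude surrogate: predict a candidate's loss from the closest previously
# evaluated point. Real SMBO surrogates are far more sophisticated.
def surrogate(history, x):
    return min(history, key=lambda p: abs(p[0] - x))[1]

# Warm-start the history with two evaluated points, then iterate:
# propose cheap random candidates, let the surrogate rank them, and
# spend the expensive evaluation only on the most promising one.
history = [(x, expensive_objective(x)) for x in (0.0, 1.0)]
for _ in range(30):
    candidates = [random.random() for _ in range(20)]
    x = min(candidates, key=lambda c: surrogate(history, c))
    history.append((x, expensive_objective(x)))

best_x, best_y = min(history, key=lambda p: p[1])
print(best_x)  # best hyperparameter value found, near the optimum
```

Each iteration refines the search around the best point seen so far, which is the "learns from its previous history" behaviour the description refers to; the frameworks discussed in the book add principled surrogates, search-space definitions, and distributed execution on top of this basic loop.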
Hyperparameter Optimization in Machine Learning creates an understanding of how these algorithms work and how you can use them in real-life data science problems. The final chapter summarizes the role of hyperparameter optimization in automated machine learning and ends with a tutorial on creating your own AutoML script.
Hyperparameter optimization is a tedious task, so sit back and let these algorithms do your work.
What You Will Learn
- Discover how changes in hyperparameters affect the model's performance
- Apply different hyperparameter tuning algorithms to data science problems
- Work with Bayesian optimization methods to create efficient machine learning and deep learning models
- Distribute hyperparameter optimization using a cluster of machines
- Approach automated machine learning using hyperparameter optimization
Who This Book Is For
Professionals and students working with machine learning.
About the Author
Tanay Agrawal is a deep learning engineer and researcher who graduated in 2019 with a Bachelor of Technology from SMVDU, J&K. He is currently working at Curl Hg on SARA, an OCR platform, and is also an advisor to Witooth Dental Services and Technologies. He started his career at MateLabs, working on Mateverse, an AutoML platform. He has worked extensively on hyperparameter optimization and has delivered talks on the topic at conferences including PyData Delhi and PyCon India.