Optimization for Machine Learning (Hardcover)
Tentative Chinese title: 機器學習的優化 (精裝版)
Suvrit Sra, Sebastian Nowozin, Stephen J. Wright
- Publisher: MIT Press
- Publication Date: 2011-09-30
- List Price: $2,260
- VIP Price: 5% off, $2,147
- Language: English
- Pages: 512
- Binding: Hardcover
- ISBN: 026201646X
- ISBN-13: 9780262016469
Related Categories:
Machine Learning
Description
The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields.

Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.