The Artificial Intelligence Infrastructure Workshop: Build your own highly scalable and robust data storage systems that can support a variety of cutting-edge AI applications

Chinmay Arankalle, Gareth Dwyer, Bas Geerdink

  • Publisher: Packt Publishing
  • Publication date: 2020-08-13
  • List price: $2,030
  • VIP price: $1,929 (5% off)
  • Language: English
  • Pages: 732
  • Binding: Quality Paper - also called trade paper
  • ISBN: 1800209843
  • ISBN-13: 9781800209848
  • Related categories: JVM Languages, Artificial Intelligence
  • Imported title, ordered from overseas (checked out separately)

Product Description

Key Features

  • Understand how artificial intelligence, machine learning, and deep learning are different from one another
  • Discover the data storage requirements of different AI apps using case studies
  • Explore popular data solutions such as Hadoop Distributed File System (HDFS) and Amazon Simple Storage Service (S3)

Book Description

Social networking sites see an average of 350 million uploads daily - a quantity impossible for humans to scan and analyze. Only AI can do this job at the required speed, and to leverage an AI application to its full potential, you need an efficient and scalable data storage pipeline. The Artificial Intelligence Infrastructure Workshop will teach you how to build and manage one.

The Artificial Intelligence Infrastructure Workshop begins by taking you through some real-world applications of AI. You'll explore the layers of a data lake and get to grips with security, scalability, and maintainability. With the help of hands-on exercises, you'll learn how to define the requirements for AI applications in your organization. This AI book will show you how to select a database for your system and run common queries on databases such as MySQL, MongoDB, and Cassandra. You'll also design your own AI trading system to get a feel for pipeline-based architecture. As you learn to implement a deep Q-learning algorithm to play the CartPole game, you'll gain hands-on experience with PyTorch. Finally, you'll explore ways to run machine learning models in production as part of an AI application.
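As a rough illustration of the CartPole exercise mentioned above (a minimal sketch, not code from the book), the snippet below defines a small PyTorch Q-network for the CartPole environment and performs one epsilon-greedy action and one Bellman update. The Gymnasium package, the layer sizes, and the hyperparameters are assumptions; a complete deep Q-learning implementation would add a replay buffer, a target network, and a full training loop.

```python
# Minimal sketch only: a tiny deep Q-network for CartPole-v1 and a single
# Bellman-target update step. Assumes the gymnasium and torch packages.
import random

import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
n_obs = env.observation_space.shape[0]   # 4 state variables
n_actions = env.action_space.n           # 2 actions: push cart left or right

# Q-network: maps a state to one estimated return per action.
q_net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1  # illustrative hyperparameters

state, _ = env.reset(seed=0)
state = torch.tensor(state, dtype=torch.float32)

# Epsilon-greedy action selection.
if random.random() < epsilon:
    action = env.action_space.sample()
else:
    action = int(q_net(state).argmax())

next_state, reward, terminated, truncated, _ = env.step(action)
next_state = torch.tensor(next_state, dtype=torch.float32)

# One gradient step toward the Bellman target r + gamma * max_a' Q(s', a').
with torch.no_grad():
    target = reward + gamma * q_net(next_state).max() * (not terminated)
loss = nn.functional.mse_loss(q_net(state)[action], target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```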

By the end of the book, you'll have learned how to build and deploy your own AI software at scale, using various tools, API frameworks, and serialization methods.

What you will learn

  • Get to grips with the fundamentals of artificial intelligence
  • Understand the importance of data storage and architecture in AI applications
  • Build data storage and workflow management systems with open source tools
  • Containerize your AI applications with tools such as Docker
  • Discover commonly used data storage solutions and best practices for AI on Amazon Web Services (AWS)
  • Use the AWS CLI and AWS SDK to perform common data tasks (a short illustrative sketch follows this list)
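As a rough illustration of the AWS items above (a minimal sketch, not code from the book), the snippet below uses boto3, the AWS SDK for Python, to upload a local data file to S3 and list what is stored under a prefix. The bucket name, file name, and prefix are hypothetical placeholders, and AWS credentials are assumed to be configured in the usual way.

```python
# Minimal sketch only: common S3 data tasks with boto3. The bucket, file, and
# prefix names below are placeholders, not values from the book.
import boto3

s3 = boto3.client("s3")

# Upload a local training-data file to a bucket.
# Roughly equivalent CLI: aws s3 cp train.csv s3://my-ai-datasets/raw/train.csv
s3.upload_file("train.csv", "my-ai-datasets", "raw/train.csv")

# List the objects stored under the same prefix.
response = s3.list_objects_v2(Bucket="my-ai-datasets", Prefix="raw/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```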

Who this book is for

If you are looking to develop the data storage skills needed for machine learning and AI, and want to learn AI best practices in data engineering, this workshop is for you. Experienced programmers can use this book to advance their career in AI. Familiarity with programming, along with knowledge of exploratory data analysis and reading and writing files using Python, will help you understand the key concepts covered.

About the Authors

Chinmay Arankalle has been working with data since day 1 of his career. In his 7 years in the field, he has designed and built production-grade data systems for telecommunication, pharmaceutical, and life science domains, where new and exciting challenges are always on the horizon. Chinmay started as a software engineer and, over time, has worked extensively on data cleaning, pre-processing, text mining, transforming, and modeling. Production-ready big data systems are his forte.

Gareth Dwyer hails from South Africa but now lives in Europe. He is a software engineer and author and is currently serving as the CTO at the largest coding education provider in Africa. Gareth is passionate about technology, education, and sharing knowledge through mentorship. He holds four university degrees in computer science and machine learning, with a specialization in natural language processing. He has worked with companies such as Amazon Web Services and has published many online tutorials as well as the book Flask by Example.

Bas Geerdink is a programmer, scientist, and IT manager. He works as a technology lead in the AI and big data domain. He has an academic background in artificial intelligence and informatics, and his research on reference architectures for big data solutions was published at the IEEE ICITST 2013 conference. Bas has a background in software development, design, and architecture, with a broad technical view ranging from C++ to Prolog to Scala. He occasionally teaches programming courses and is a regular speaker at conferences and informal meetings, where he presents a mix of market context, his own vision, business cases, architecture, and source code in an engaging way.

Kunal Gera has been involved in unlocking solutions with the help of data. He has successfully implemented various projects in the fields of predictive analytics and data analysis, using the analytical skills gained over the course of his professional experience and education.

Kevin Liao has rich experience in applying data science across industries, building data science solutions for applications ranging from startup fintech products to web-scale consumer-facing web and mobile pages. Kevin started his career as a statistician/quant at a fintech startup. As data scaled, he honed his data engineering skills and established best practices for web-scale data science solutions. Even after moving to a consumer-facing product company, Kevin has continued to build data science experience in online environments, which require extremely low-latency solutions.

Anand N.S. has more than two decades of technology experience, with a strong hands-on track record of applying artificial intelligence, machine learning, and data science to create measurable business outcomes. He has been granted several US patents in the areas of data science, machine learning, and artificial intelligence. Anand has a B.Tech in Electrical Engineering from IIT Madras and an MBA with a Gold Medal from IIM Kozhikode.

Table of Contents

  1. Data Storage Fundamentals
  2. Artificial Intelligence Storage Requirements
  3. Data Preparation
  4. Ethics of AI Data Storage
  5. Data Stores: SQL and NoSQL Databases
  6. Big Data File Formats
  7. Introduction to Analytics Engine (Spark) for Big Data
  8. Data System Design Examples
  9. Workflow Management for AI
  10. Introduction to Data Storage on Cloud Services (AWS)
  11. Building an Artificial Intelligence Algorithm
  12. Productionizing Your AI Applications
