Hands-On LLM Serving and Optimization: Hosting LLMs at Scale
Date: June 2nd, 2026
ISBN: 9798341621497
Language: English
Number of pages: 371
Format: EPUB
Large language models (LLMs) are the reasoning engines of modern AI. Today, a major inflection point has arrived: as the world races to deploy AI at scale, model inference has moved to the center of the stack. Welcome to the inference era.
Without proper optimization, however, LLMs can be expensive and slow to serve. Hands-On LLM Serving and Optimization is a comprehensive guide to the complexities of deploying and optimizing LLMs at scale.
In this hands-on, engineering-focused book, authors Chi Wang and Peiheng Hu combine practical examples, code, and strategies for building robust, performant, and cost-efficient AI token factories. Whether you’re building the LLM inference infrastructure or the applications that consume it, a deep understanding of LLM serving will make you a more effective, future-ready engineer as AI transforms how we work and build.
• Learn the foundations of model serving with core concepts, design paradigms, and industry best practices
• Understand the common challenges of hosting LLMs at scale
• Balance latency and throughput to meet the demands of AI applications and business requirements
• Host LLMs cost-effectively with practical, code-backed techniques
