Amazon EC2 P5e instances are generally available

State-of-the-art generative AI models and high performance computing (HPC) applications are driving the need for unprecedented levels of compute. Customers are pushing the boundaries of these technologies to bring higher fidelity products and experiences to market across industries.

The size of large language models (LLMs), as measured by the number of parameters, has grown exponentially in recent years, reflecting a significant trend in the field of AI. Model sizes have increased from billions of parameters to hundreds of billions of parameters within a span of 5 years. As LLMs have grown larger, their performance on a wide range of natural language processing tasks has also improved significantly, but the increased size of LLMs has led to significant computational and resource challenges. Training and deploying these models requires vast amounts of computing power, memory, and storage.

The size of an LLM has a significant impact on the choice of compute needed for inference. Larger LLMs require more GPU memory to store the model parameters and intermediate computations, as well as greater computational power to perform the matrix multiplications and other operations needed for inference. Large LLMs take longer to perform a single inference pass due to this increased computational complexity. This increased compute requirement can lead to higher inference latency, which is a critical factor for applications that require real-time or near real-time responses.
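To make the memory requirement concrete, a rough back-of-the-envelope estimate can be computed from the model weights plus the key-value (KV) cache that accumulates during generation. The sketch below is purely illustrative: the helper function is hypothetical, and the architecture values used (80 layers, 8 KV heads, head dimension 128) are assumptions based on publicly documented Llama 3.1 70B configurations.

```python
# Rough, illustrative estimate of GPU memory needed to serve an LLM.
# The helper is hypothetical; the default architecture values are assumptions
# based on the publicly documented Llama 3.1 70B configuration.

def estimate_serving_memory_gb(
    params_billion: float,
    bytes_per_param: int = 2,      # FP16/BF16 weights
    num_layers: int = 80,
    num_kv_heads: int = 8,
    head_dim: int = 128,
    seq_len: int = 8192,
    batch_size: int = 10,
    bytes_per_kv: int = 2,
) -> float:
    weights_gb = params_billion * 1e9 * bytes_per_param / 1e9
    # KV cache: 2 (K and V) * layers * kv_heads * head_dim * tokens * bytes
    kv_cache_gb = (2 * num_layers * num_kv_heads * head_dim
                   * seq_len * batch_size * bytes_per_kv) / 1e9
    return weights_gb + kv_cache_gb

# Llama 3.1 70B in BF16 with a batch of 10 sequences of 8K tokens:
print(f"~{estimate_serving_memory_gb(70):.0f} GB before activations and framework overhead")
```

For a 70-billion-parameter model in BF16, the weights alone are roughly 140 GB, which nearly fills the memory of even a single H200 before any KV cache is allocated, so sharding the model across multiple GPUs (or quantizing it) is typically required.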

HPC customers exhibit similar trends. With the fidelity of HPC customer data collection increasing and datasets reaching exabyte scale, customers are looking for ways to enable faster time to solution across increasingly complex applications.

To address customer needs for high performance and scalability in deep learning, generative AI, and HPC workloads, we are happy to announce the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P5e instances, powered by NVIDIA H200 Tensor Core GPUs. AWS is the first leading cloud provider to offer the H200 GPU in production. Additionally, we are announcing that P5en instances, a network optimized variant of P5e instances, are coming soon.

In this post, we discuss the core capabilities of these instances and the use cases they’re well-suited for, and walk you through an example of how to get started with these instances and carry out inference deployment of Meta Llama 3.1 70B and 405B models on them.

EC2 P5e instances overview

P5e instances are powered by NVIDIA H200 GPUs with 1.7 times more GPU memory capacity and 1.5 times faster GPU memory bandwidth as compared to NVIDIA H100 Tensor Core GPUs featured in P5 instances.

P5e instances incorporate 8 NVIDIA H200 GPUs with 1128 GB of high bandwidth GPU memory, 3rd Gen AMD EPYC processors, 2 TiB of system memory, and 30 TB of local NVMe storage. P5e instances also provide 3,200 Gbps of aggregate network bandwidth with support for GPUDirect RDMA, enabling lower latency and efficient scale-out performance by bypassing the CPU for internode communication.

The following table summarizes the details for the instance.

| Instance Size | vCPUs | Instance Memory (TiB) | GPU | GPU Memory | Network Bandwidth | GPUDirect RDMA | GPU Peer to Peer | Instance Storage (TB) | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| p5e.48xlarge | 192 | 2 | 8 x NVIDIA H200 | 1128 GB HBM3e | 3,200 Gbps EFA | Yes | 900 GB/s NVSwitch | 8 x 3.84 NVMe SSD | 80 |

EC2 P5en instances coming soon

One of the bottlenecks in GPU-accelerated computing may lie in the communication between CPUs and GPUs. The transfer of data between these two components can be time-consuming, especially for large datasets or workloads that require frequent data exchanges. This challenge can impact a wide range of GPU-accelerated applications such as deep learning, high performance computing, and real-time data processing. The need to move data between the CPU and GPU can introduce latency and reduce overall efficiency. Additionally, network latency can become an issue for ML workloads on distributed systems, because data needs to be transferred between multiple machines.

EC2 P5en instances, coming soon in 2024, can help solve these challenges. P5en instances pair the NVIDIA H200 GPUs with custom 4th Generation Intel Xeon Scalable processors, enabling PCIe Gen 5 between CPU and GPU. These instances will provide up to four times the bandwidth between CPU and GPU and lower network latency, thereby improving workload performance.

P5e use cases

P5e instances are ideal for training, fine-tuning, and running inference for increasingly complex LLMs and multimodal foundation models (FMs) behind the most demanding and compute-intensive generative AI applications, including question answering, code generation, video and image generation, speech recognition, and more.

Customers deploying LLMs for inference can benefit from using P5e instances, which offer several key advantages that make them an excellent choice for these workloads.

Firstly, the higher memory bandwidth of the H200 GPUs in the P5e instances allows the GPU to fetch and process data from memory more quickly. This translates to reduced inference latency, which is critical for real-time applications like conversational AI systems where users expect near-instant responses. The higher memory bandwidth also enables higher throughput, allowing the GPU to process more inferences per second. Customers deploying the 70-billion-parameter Meta Llama 3.1 model on P5e instances can expect up to 1.87 times higher throughput and up to 40% lower cost compared to comparable P5 instances (benchmark configuration: input sequence length 121, output sequence length 5000, batch size 10, vLLM framework).
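As an illustration of such a deployment, the following is a minimal sketch of offline batch inference for Meta Llama 3.1 70B with the open source vLLM library on a single p5e.48xlarge, sharding the model across all eight GPUs with tensor parallelism. The model ID, prompts, and sampling settings are placeholder assumptions and do not reproduce the benchmark configuration above.

```python
# Minimal vLLM sketch: offline batch inference for Llama 3.1 70B on one
# p5e.48xlarge, sharded across its 8 H200 GPUs with tensor parallelism.
# The model ID and sampling values are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",  # assumed Hugging Face model ID
    tensor_parallel_size=8,                          # one shard per H200 GPU
)

sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

prompts = [
    "Summarize the benefits of high GPU memory bandwidth for LLM inference.",
    "Explain tensor parallelism in one paragraph.",
]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```

Setting tensor_parallel_size=8 splits each weight matrix across the eight GPUs, keeping per-GPU memory pressure low and letting the NVSwitch fabric carry the resulting collective communication.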

Secondly, the massive scale of modern LLMs, with hundreds of billions of parameters, requires an immense amount of memory to store the model and intermediate computations during inference. On standard P5 instances, this would likely necessitate multiple instances to accommodate the memory requirements. However, the P5e instances’ 1.76 times higher GPU memory capacity enables you to scale up and fit the entire model on a single instance. This avoids the complexity and overhead associated with distributed inference systems, such as data synchronization, communication, and load balancing. Customers deploying the 405-billion-parameter Meta Llama 3.1 model on a single P5e instance can expect up to 1.72 times higher throughput and up to 69% lower cost compared to using two P5 instances (benchmark configuration: input sequence length 121, output sequence length 50, batch size 10, vLLM framework).
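A single-instance deployment of the larger model looks much the same as the 70B example above; the sketch below (again with an assumed model ID and assumed arguments) simply loads the 405B variant on one p5e.48xlarge. In BF16, the weights alone occupy roughly 810 GB of the 1128 GB of aggregate HBM, so in practice you may need to cap the context length or apply quantization to leave room for the KV cache.

```python
# Sketch: loading Llama 3.1 405B on a single p5e.48xlarge with vLLM.
# BF16 weights alone are roughly 810 GB, so the 1128 GB of aggregate HBM
# leaves limited headroom; the knobs below are illustrative assumptions.
from vllm import LLM

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct",  # assumed model ID
    tensor_parallel_size=8,          # shard weights across all 8 H200 GPUs
    gpu_memory_utilization=0.95,     # let vLLM use most of each GPU's HBM
    max_model_len=8192,              # cap context length to bound KV cache size
)
```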

Finally, the higher GPU memory of the P5e instances also enables the use of larger batch sizes during inference for better GPU utilization, resulting in faster inference times and higher overall throughput. This additional memory can be particularly beneficial for customers with high-volume inference requirements.

When optimizing inference throughput and cost, consider adjusting batch size, input/output sequence length, and quantization level, because these parameters can have a substantial impact. Experiment with different configurations to find the optimal balance between performance and cost for your specific use case.
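One way to run such an experiment is to benchmark a few candidate configurations one at a time. The sketch below uses vLLM's max_num_seqs and quantization arguments as the tuning knobs; the specific values, model ID, and prompts are assumptions to adapt to your workload.

```python
# Illustrative throughput check for one serving configuration. Run the script
# once per configuration (batch size, quantization) rather than looping
# in-process, since a vLLM engine holds GPU memory for the life of the process.
import sys
import time

from vllm import LLM, SamplingParams

max_num_seqs = int(sys.argv[1]) if len(sys.argv) > 1 else 16   # e.g. 8, 16, 32
quantization = sys.argv[2] if len(sys.argv) > 2 else None      # e.g. "fp8"

llm = LLM(
    model="meta-llama/Meta-Llama-3.1-70B-Instruct",  # assumed model ID
    tensor_parallel_size=8,
    max_num_seqs=max_num_seqs,   # upper bound on concurrently batched sequences
    quantization=quantization,
)

prompts = ["Write a haiku about GPUs."] * 64
params = SamplingParams(max_tokens=128)

start = time.time()
outputs = llm.generate(prompts, params)
tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"max_num_seqs={max_num_seqs} quantization={quantization}: "
      f"{tokens / (time.time() - start):.1f} output tokens/sec")
```

Running each configuration in a fresh process keeps the comparison clean, because GPU memory from a previous engine cannot skew the next measurement.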

In summary, the combination of higher memory bandwidth, increased GPU memory capacity, and support for larger batch sizes make the P5e instances an excellent choice for customers deploying LLM inference workloads. These instances can deliver significant performance improvements, cost savings, and operational simplicity compared to alternative options.

P5e instances are also well-suited for memory-intensive HPC applications like simulations, pharmaceutical discovery, seismic analysis, weather forecasting, and financial modeling. Customers using dynamic programming (DP) algorithms for applications like genome sequencing or accelerated data analytics can also see further benefit from P5e through support for the DPX instruction set.

Get started with P5e instances

To get started with P5e instances, you can use AWS Deep Learning AMIs (DLAMI), which provide ML practitioners and researchers with the infrastructure and tools to quickly build scalable, secure, distributed ML applications in preconfigured environments. You can also run containerized applications on P5e instances with AWS Deep Learning Containers, using libraries for Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS).
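As a starting point, the following is a minimal boto3 sketch for launching a p5e.48xlarge from a Deep Learning AMI. The AMI ID, key pair, subnet, security group, and Capacity Reservation ID are placeholders; P5e capacity is obtained through EC2 Capacity Blocks for ML, as described in the next section.

```python
# Minimal boto3 sketch: launch a p5e.48xlarge from a Deep Learning AMI into a
# Capacity Block reservation. Every ID below is a placeholder to replace.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio)

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder: a current DLAMI ID
    InstanceType="p5e.48xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                 # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",   # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    InstanceMarketOptions={"MarketType": "capacity-block"},
    CapacityReservationSpecification={
        "CapacityReservationTarget": {
            # placeholder: the reservation created by your Capacity Block purchase
            "CapacityReservationId": "cr-0123456789abcdef0",
        }
    },
)
print(response["Instances"][0]["InstanceId"])
```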

P5e instances now available

EC2 P5e instances are now available in the US East (Ohio) AWS Region in the p5e.48xlarge size through Amazon EC2 Capacity Blocks for ML. For more information, refer to Amazon EC2 P5 Instances.
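If you are automating the reservation, the sketch below shows one way to find and purchase a Capacity Block with boto3; the dates, duration, and choice of the first returned offering are illustrative assumptions.

```python
# Sketch: find and purchase an EC2 Capacity Block for ML with boto3. The
# resulting Capacity Reservation ID is what you target when launching the
# instance. Dates, duration, and offering selection here are illustrative.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio)

start = datetime.now(timezone.utc) + timedelta(days=1)
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5e.48xlarge",
    InstanceCount=1,
    CapacityDurationHours=24,              # e.g. one instance for one day
    StartDateRange=start,
    EndDateRange=start + timedelta(days=7),
)

offering = offerings["CapacityBlockOfferings"][0]   # naively take the first match
purchase = ec2.purchase_capacity_block(
    CapacityBlockOfferingId=offering["CapacityBlockOfferingId"],
    InstancePlatform="Linux/UNIX",
)
print(purchase["CapacityReservation"]["CapacityReservationId"])
```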


About the authors

Avi Kulkarni is a Senior Specialist focusing on worldwide business development and go-to-market for ML and HPC workloads across both commercial and public sector customers. Previously, he managed partnerships at AWS and led product management for automotive customers at Honeywell, covering electrified, autonomous, and traditional vehicles.

Karthik Venna is a Principal Product Manager at AWS. He leads development of EC2 instances for a wide variety of workloads including deep learning and generative AI.

Khaled Rawashdeh is a Senior Product Manager at AWS. He defines and creates Amazon EC2 accelerated computing instances for the most demanding AI and machine learning workloads. Before joining AWS, he worked for leading companies focused on creating data center software and systems for enterprise customers.

Aman Shanbhag is an Associate Specialist Solutions Architect on the ML Frameworks team at Amazon Web Services, where he helps customers and partners with deploying ML Training and Inference solutions at scale. Before joining AWS, Aman graduated from Rice University with degrees in Computer Science, Mathematics, and Entrepreneurship.

Pavel Belevich is a Senior Applied Scientist on the ML Frameworks team at Amazon Web Services. He applies his research in distributed training and inference of large models to real-life customer needs. Before joining AWS, Pavel worked on the PyTorch Distributed team on various distributed training techniques such as FSDP and pipeline parallelism.

Dr. Maxime Hugues is a Principal WW Specialist Solutions Architect for GenAI at AWS, which he joined in 2020. He holds an M.E. from the French National Engineer School “ISEN-Toulon”, an M.S. degree from the University of Science, and a Ph.D. in Computer Science (2011) from the University of Lille 1. His research focused mainly on programming paradigms, innovative hardware for extreme-scale computers, and HPC/machine learning performance. Prior to joining AWS, he worked as an HPC Research Scientist and tech lead at TotalEnergies.

Shruti Koparkar is a Senior Product Marketing Manager at AWS. She helps customers explore, evaluate, and adopt Amazon EC2 accelerated computing infrastructure for their machine learning needs.

