Habana Labs purpose-built AI Inference & Training Processor Architectures: Scaling AI Training Systems using Standard Ethernet with Gaudi Processor
IEEE Micro (IF 2.8), Pub Date: 2020-03-01, DOI: 10.1109/mm.2020.2975185
Eitan Medina, Eran Dagan

The growing computational requirements of AI applications are challenging today's general-purpose CPU and GPU architectures and driving the need for purpose-built, programmable AI solutions. Habana Labs designed its Goya processor to meet the high-throughput, low-latency demands of inference workloads, and its Gaudi processor to deliver the throughput along with the massive scale-up and scale-out capability needed to accelerate training workloads efficiently. To address the need for scaling training, Habana is the first AI chip developer to integrate standard Ethernet onto a training processor.
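To make the scale-out claim concrete, the sketch below shows the generic data-parallel training pattern that an Ethernet-connected training fabric serves: each node runs the same script, and gradient all-reduce traffic flows over the network during the backward pass. This is a minimal illustration using PyTorch's standard torch.distributed API, not the paper's own software stack; the backend name, the DIST_BACKEND environment variable, and the torchrun launcher are assumptions chosen for portability (the same script runs on plain CPUs with the "gloo" backend).

# Minimal data-parallel training sketch; gradient all-reduce runs over the
# cluster's Ethernet fabric via whatever collective backend is configured.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Rank and world size are supplied by the launcher (e.g. torchrun).
    # "gloo" is a portable default here; a vendor backend would be swapped in.
    dist.init_process_group(backend=os.environ.get("DIST_BACKEND", "gloo"))
    rank = dist.get_rank()

    model = nn.Linear(1024, 1024)          # stand-in for a real network
    ddp_model = DDP(model)                 # wraps the model for gradient sync
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for step in range(10):
        optimizer.zero_grad()
        x = torch.randn(32, 1024)          # synthetic per-node batch
        y = torch.randn(32, 1024)
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                    # all-reduce of gradients happens here
        optimizer.step()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched with, for example, "torchrun --nnodes=2 --nproc-per-node=1 train.py" on each machine, the only cluster-specific pieces are the rendezvous settings and the collective backend; the training loop itself is unchanged as the system scales out.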
