Lightning Talk: Large-Scale Distributed Training with Dynamo and PyTorch/XLA SPMD - Yeounoh Chung & Jiewen Tan, Google
In this talk we cover the PyTorch/XLA distributed API and its relation to TorchDynamo. Specifically, we discuss the new PyTorch/XLA SPMD API for automatic parallelization and our latest LLaMA2 training results. PyTorch/XLA SPMD makes it simple for PyTorch developers to distribute their ML workloads (e.g., training and inference with Dynamo) through an easy-to-use API, backed by XLA GSPMD, a high-performance automatic parallelization system. Under the hood, it transforms the user's single-device program into a partitioned one. We will share how we enabled advanced 2D sharding strategies for LLaMA2 using PyTorch/XLA SPMD.
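To make the description concrete, here is a minimal sketch of annotating a tensor for 2D (data x model) sharding with PyTorch/XLA SPMD. It follows the Mesh/mark_sharding API and the torch_xla.distributed.spmd module path of recent PyTorch/XLA releases; the exact module layout and the mesh shape chosen here are assumptions and may differ from what the talk demonstrates.

    import numpy as np
    import torch
    import torch_xla.core.xla_model as xm
    import torch_xla.runtime as xr
    import torch_xla.distributed.spmd as xs

    # Enable SPMD execution mode before creating any XLA tensors.
    xr.use_spmd()

    # Arrange all devices into a 2D logical mesh: one axis for data
    # parallelism, one for model (tensor) parallelism. The 2-way model
    # axis is an illustrative choice, not the talk's configuration.
    num_devices = xr.global_runtime_device_count()
    mesh_shape = (num_devices // 2, 2)
    device_ids = np.array(range(num_devices))
    mesh = xs.Mesh(device_ids, mesh_shape, ('data', 'model'))

    # Annotate a tensor: shard dim 0 across the 'data' axis and dim 1
    # across the 'model' axis. XLA GSPMD propagates the sharding and
    # partitions the rest of the single-device program automatically.
    w = torch.randn(8192, 1024).to(xm.xla_device())
    xs.mark_sharding(w, mesh, ('data', 'model'))

The model code itself remains a single-device program; only the sharding annotations tell GSPMD how to partition it, which is what lets this approach compose with Dynamo-traced training and inference.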
