Understanding the Meta Llama 3 Tokenizer | Llama for Developers

Science and technology

Download Meta Llama 3 ➡️ go. kbpn54
Aston Zhang, a research scientist working on Llama at Meta, discusses the new tokenizer in Meta Llama 3. The Llama 3 tokenizer is built on tiktoken instead of SentencePiece and grows the vocabulary from 32k to 128k tokens, which yields better performance on coding, reasoning, and more. The larger vocabulary encodes inputs more specifically and with more nuance, and the higher compression ratio reduces the number of tokens required to represent a given input. The extra memory and compute that the larger vocabulary demands are balanced out by grouped-query attention (GQA), so the model can process larger batches without increasing latency.
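To make the mechanics concrete, here is a minimal sketch of how a tiktoken-based tokenizer is assembled from a BPE rank file and used to encode text. The file path, split regex, and special-token ids below are illustrative assumptions, not the exact values Llama 3 ships.

import tiktoken
from tiktoken.load import load_tiktoken_bpe

# Load a base64 "token -> rank" BPE file (the path is a placeholder).
ranks = load_tiktoken_bpe("tokenizer.model")  # ~128k mergeable ranks

enc = tiktoken.Encoding(
    name="llama3_sketch",
    # Illustrative split pattern, not Llama 3's exact regex.
    pat_str=r"'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+",
    mergeable_ranks=ranks,
    # Llama 3 reserves special tokens like these; the ids here are illustrative.
    special_tokens={
        "<|begin_of_text|>": len(ranks),
        "<|end_of_text|>": len(ranks) + 1,
    },
)

tokens = enc.encode("A larger vocabulary packs the same text into fewer tokens.")
print(len(tokens), tokens)
print(enc.decode(tokens))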
Timestamps
00:00 Introduction
00:25 What's new in the Llama 3 tokenizer?
01:58 Vocabulary size and compression ratio
13:01 Performance, efficiency and improving costs
17:46 Recap and resources
Additional Resources
• Dive into Deep Learning ebook: go. ao405f
• Getting Started Guide: go. xucc2m
#llama3 #llm #opensource
- - -
Subscribe: kzread.info?sub_...
Learn more about our work: ai.meta.com
Follow us on social media
Follow us on Twitter: / aiatmeta
Follow us on LinkedIn: / aiatmeta
Follow us on Threads: threads.net/aiatmeta
Follow us on Facebook: / aiatmeta
Meta AI focuses on bringing the world together by advancing AI, powering meaningful and safe experiences, and conducting open research.

Comments: 8

  • @loabrasumente2283 · 13 days ago

    TLDR - from Llama 2 to Llama 3 they switched from SentencePiece to tiktoken - vocab size 32k -> 128k - ~15% fewer tokens for English, ~50% fewer for "some other languages"
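    The numbers above are easy to sanity-check with stock tiktoken encodings as stand-ins for the two vocabularies; the exact percentages will differ, since neither Llama tokenizer is bundled with tiktoken:

    import tiktoken

    # r50k_base (~50k vocab) and cl100k_base (~100k vocab) stand in for
    # Llama 2's 32k and Llama 3's 128k vocabularies, which tiktoken lacks.
    small = tiktoken.get_encoding("r50k_base")
    large = tiktoken.get_encoding("cl100k_base")

    text = "Tokenizer compression compounds quickly across long prompts."
    n_small = len(small.encode(text))
    n_large = len(large.encode(text))
    print(f"{n_small} vs {n_large} tokens ({1 - n_large / n_small:.0%} fewer)")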

  • @anirbansen7132 · 5 days ago

    Informative

  • @parvesh-rana · 16 days ago

    Aston, please explain the attention mechanism. I am stuck on the chapter "Attention and transformer" of your book d2l.

  • @stephennfernandes · 8 days ago

    Could someone from the Meta Llama 3 team please explain how to train my very own tiktoken tokenizer like you did for Llama 3? There are no open-source steps to recreate this.
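    tiktoken itself is an inference library with no production training API, so one workable substitute is training a byte-level BPE with the Hugging Face tokenizers package and converting the learned merges to tiktoken's rank format afterwards. A hypothetical sketch, where the corpus path and vocabulary size are placeholders rather than Meta's actual setup:

    from tokenizers import Tokenizer, models, pre_tokenizers, trainers

    # Byte-level BPE, the same family of algorithm tiktoken implements.
    tokenizer = Tokenizer(models.BPE())
    tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

    trainer = trainers.BpeTrainer(
        vocab_size=128_000,  # placeholder, matching Llama 3's vocab size
        special_tokens=["<|begin_of_text|>", "<|end_of_text|>"],
    )
    tokenizer.train(files=["corpus.txt"], trainer=trainer)  # corpus is a placeholder
    tokenizer.save("bpe_128k.json")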

  • @maksymkyiv1111 · 14 days ago

    ok.

  • @Windowsmakes · 2 days ago

    x

  • @user-wr4yl7tx3w · 14 days ago

    I don't think this format works unless the intent is to discuss at a high level.
