Domain adaptation and fine-tuning for domain-specific LLMs: Abi Aryan

Science & Technology

In this talk, we will cover the different model adaptation methods, from prompt engineering to retrieval-augmented generation (RAG) to fine-tuning, and how to choose among them depending on the dataset and problem. We will also go into detail on operational best practices for fine-tuning and how to evaluate fine-tuned models for specific business use cases. Finally, we will conclude with a comparative framework and a cost-benefit analysis of the tradeoffs of fine-tuning versus knowledge bases for improving the performance of large language models on a specific task.
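Among the fine-tuning methods the talk surveys, LoRA (which one commenter below asks about) adapts a model by freezing the base weight matrix and training only a low-rank delta. A minimal NumPy sketch of the idea; the shapes, rank, and scaling factor here are illustrative assumptions, not details from the talk:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Forward pass through a linear layer with a LoRA adapter.

    The base weight W (d_out x d_in) stays frozen; only the low-rank
    factors A (r x d_in) and B (d_out x r) are trained. The effective
    weight is W + (alpha / r) * B @ A, so the trainable parameter count
    scales with r * (d_in + d_out) instead of d_in * d_out.
    """
    base = x @ W.T                          # frozen base projection
    delta = (alpha / r) * (x @ A.T @ B.T)   # low-rank adapter update
    return base + delta

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 6, 4
W = rng.normal(size=(d_out, d_in))          # pretrained weight (frozen)
A = rng.normal(size=(r, d_in))              # adapter factor A (trained)
B = np.zeros((d_out, r))                    # B starts at zero, so training
                                            # begins from the base model
x = rng.normal(size=(3, d_in))

# With B = 0 the adapter contributes nothing: output equals the base layer.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

In practice libraries such as Hugging Face PEFT wire adapters like this into attention and MLP projections automatically; the sketch only shows why training starts from the unmodified base model and why the per-layer trainable footprint is small.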
Recorded live in San Francisco at the AI Engineer Summit 2023. See the full schedule of talks at ai.engineer/summit/schedule & join us at the AI Engineer World's Fair in 2024! Get your tickets today at ai.engineer/worlds-fair
About Abi Aryan
Hi, my name is Abi. I am a computer scientist working extensively in machine learning to make software systems smarter. Over the past seven years, my focus has been building machine learning systems for various applications, including recommender systems, automated data-labelling pipelines for both audio and video, audio-speech synthesis, forecasting, and time-series analysis. In the past, I attended Insight as a Data Science Fellow and was a Visiting Research Scholar at UCLA under Dr. Judea Pearl, where I worked on AutoML, multi-agent systems, and emotion recognition. I am currently authoring the book LLMOps: Managing Large Language Models in Production for O'Reilly, and an MLOps: Deploying ML Models in Production course for data scientists to learn the fundamentals of data engineering and how to deploy machine learning models in production.

Comments: 6

  • @SarbjeetJohal (6 months ago)

    Great talk Abi! To the point, knowledge packed!

  • @swyxTV (6 months ago)

    great talk Abi! should have plugged your book at the end :)

  • @jdray (6 months ago)

    While the engineering details are over my head, I certainly learned something. Thank you. I'm interested to learn how to train and implement a LoRA adapter, and this was helpful to lay some groundwork.

  • @kevon217 (6 months ago)

    Fabulous overview. Appreciate it!

  • @Aba97867 (6 months ago)

    Excelente, उत्कृष्ट (Utkrusht), and Excellent

  • @juneauroras2383 (5 months ago)

    well, I hope I understand🥲
