Melanie Mitchell: Straight Talk on A.I. Large Language Models

Science and technology

In this edition of the Ground Truths podcast, Melanie Mitchell, PhD, and Eric Topol, MD, discuss all things A.I. and large language models.
Listen to more episodes here: erictopol.substack.com/

Comments: 3

  • @jorgemonasterio8361 · 8 months ago

    Prof. Mitchell is amazing. Thank you.

  • @PeanutB · 9 months ago

    I feel like the data issue could be circumvented by adding modalities. Our ability to abstract and reason is tied to a multimodal understanding of language: how things behave and interact in the physical world from our visual and physical perspective, and what we learn through our other senses. At the very least, multimodal training is a road with much more data to use. I'm also curious whether altering tokenization would help, whether some aspects of reasoning would improve if the model had a better grasp of fine-grained relationships, like how individual letters and numbers are used, within the larger probabilistic structure of the dataset. Even so, it will still be held back in different ways compared to our own minds. I currently recommend Earl K. Miller's presentation on thought as an emergent property.
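The tokenization point above is easy to make concrete: BPE-style tokenizers hand the model opaque subword chunks rather than characters, which is one reason letter-level tasks (counting the r's in "strawberry") trip LLMs up. A minimal sketch, assuming the open-source tiktoken library; nothing here is from the episode itself:

```python
import tiktoken

# Tokenizer used by several recent OpenAI models; the exact splits
# depend on the vocabulary, so treat the output as illustrative.
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

# The model sees a few opaque subword IDs, not a sequence of letters,
# which is why questions about individual characters are surprisingly hard.
print(token_ids)
print(pieces)
```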

  • @HoriaCristescu · 6 months ago

    I don't agree that models trained on their own text necessarily start behaving poorly. The GIGO effect only appeared when that was done in a closed loop over many iterations. AI-generated text on the internet is filtered, edited, and draws reactions from the public, so it contains very valuable feedback for AI and is in-distribution for the kinds of mistakes LLMs make. Modern LLMs also use tools, so they get feedback from tool output as well, which might improve on a bare LLM.
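The closed-loop caveat matches the "model collapse" toy experiments: degradation shows up when a model is repeatedly refit to its own output with no fresh data. A minimal sketch of that dynamic (my construction, not from the episode), using a Gaussian in place of an LLM; mix_real stands in, crudely, for the filtered, human-curated web text the comment describes:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # small samples make the compounding estimation error visible

def run(mix_real: float, n_gens: int = 40) -> list[float]:
    """Each generation fits a Gaussian to the previous generation's output,
    then samples from that fit; mix_real is the fraction of fresh real data
    blended back in each round."""
    data = rng.normal(0.0, 1.0, size=N)  # generation 0: "human" data
    stds = []
    for _ in range(n_gens):
        mu, sigma = data.mean(), data.std()
        stds.append(round(float(sigma), 3))
        synthetic = rng.normal(mu, sigma, size=N)       # model's own output
        n_real = int(mix_real * N)
        fresh = rng.normal(0.0, 1.0, size=n_real)       # fresh real data
        data = np.concatenate([fresh, synthetic[: N - n_real]])
    return stds

print(run(mix_real=0.0)[::10])  # closed loop: the fitted std drifts away from 1.0
print(run(mix_real=0.5)[::10])  # anchored by fresh real data: stays near 1.0
```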
