Geoffrey Hinton: What is wrong with convolutional neural nets?

Entertainment

Geoffrey Hinton, Professor of Computer Science at the University of Toronto and a member of the Google Brain team, presents "What is wrong with convolutional neural nets?" at the Fields Institute. Special thanks to the Vector Institute for organizing the machine learning seminar series. This talk was presented on August 17, 2017.
Please see the 2017-2018 seminar page:
www.fields.utoronto.ca/activit...

Comments: 8

  • @ProfessionalTycoons
    5 years ago

    great lecture

  • @ProfessionalTycoons
    5 years ago

    @@deeplemming3746 just appreciate what you have kiddo

  • @nemesis9410
    4 years ago

    no you're great

  • @teckyify
    3 years ago

    why does he have slides when he doesn't use them? 🙄

  • @RickeyBowers
    1 year ago

    I'm fairly certain the audio does not match the video. The audio is from another lecture I've seen on YT.

  • @mritunjaymusale
    3 years ago

    The only bad thing about this video is how horribly it's recorded.

  • @primodernious
    3 years ago

    They will never get Skynet this way. Actually, I think the layer model is right and wrong at the same time: right in principle, but not in the way it works. The network is supposed to do all its thinking in the outer layers, use the input layer only to store the memory of all its thoughts, and treat the outer layers as a hierarchy. The network must have a way to store its memories permanently and to organize how the information is wired in the outer layers by doing the actual thinking there.

    My idea is that the outer layers do the raw thinking of how to extract data from the input layer. If the outer layers form a hierarchy, each layer after the first exerts control over a much smaller set of nodes in the input layer before passing data to the next layer. A hierarchy of nodes would then behave like generals above generals. You just limit how many nodes in the input layer can send a combined sum to a given node in the next layer, the same node each time but not the same arrangement of pieces. I mean that the outer layers must decide which parts of the data in the input layer to combine, and find the best fit between pieces of data. You split the input into small pieces and feed each piece into each perceptron one by one until every node in the input layer saturates to an optimum value, then let the rest of the network do the thinking on how to process the data further.

    What Google is doing does not work this way. They do something similar, but they mess it up by shifting weights in the outer layers, which corrupts the input. Instead of letting the network guess which parts of the input's pretrained data to merge, they pass partially processed input data to the next layers, modify it, and then backpropagate error-correction data into the input layer. That process leaves the network stuck mimicking the brain: it can only be used for a specific purpose and does not work the way our brain does.
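
The mechanism this last comment objects to is ordinary backpropagation: the error measured at the output is sent back through every layer, so even the input-side weights are adjusted by an output-derived signal. Below is a minimal NumPy sketch of that standard training loop, a hypothetical two-layer sigmoid network on toy data; all names, shapes, and hyperparameters are illustrative assumptions, not anything taken from the talk or the comment.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: 4 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(4, 8))   # input-side weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # output-side ("outer layer") weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: output 1 exactly when the first input unit is active.
X = rng.integers(0, 2, size=(64, 4)).astype(float)
y = X[:, :1]

lr = 0.5
for step in range(2000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: the output error travels back through W2 and on
    # into W1, so the input-side weights are corrected by a signal that
    # originated at the output layer -- the flow the comment criticizes.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_h) / len(X)

print("final mean abs error:", float(np.mean(np.abs(out - y))))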
