On the Importance of Deconstruction in Machine Learning Research

Film & Animation

This is a talk I gave in December 2020 at the NeurIPS Retrospective Workshop.
I explain why it is so important to carefully analyze your own research contributions, through the story of three recent publications from my research group at Cornell University. In all three cases, we first invented something far more complicated, only to realize that the gains could be attributed to something far simpler than, and different from, what we had initially believed.
slideslive.com/38938218/the-i...

Comments: 19

  • @chimu3056 · 3 years ago

    I come here after a long time, with my favourite words being 'Put away your Laptops, please' and 'raise your hand if you understand'. Super awesome content professor.

  • @vijaymaraviya9443 · 3 years ago

    Can we please force everyone who writes research papers, specifically in AI/ML, to watch this video? The world would be a far better place. Loved the insights.

  • @saitrinathdubba · 3 years ago

    I have learnt a lot from your lectures on Machine Learning!! Thanks for the very valuable insights into the brilliant work that your lab has produced. Please upload any new lecture series that you are offering. Thank you, professor!!

  • @KW-md1bq · 2 years ago

    Pretty tough to be a Ph.D student under Kilian, apparently Step 1 is making every other SOTA research team look incompetent by an order of magnitude, Step 2 is realizing your own solution is still stupidly complex prior to publication and Step 3 is increasing Tesla's valuation by billions of dollars without receiving any compensation in return.

  • @logicboard7746 · 3 years ago

    You are developing a community of learners around you and teaching them to think simple. Hopefully someday someone in this community will come up with something simple yet groundbreaking. Thank you, Professor Kilian

  • @rakeshkumarmallik1545 · 2 years ago

    I have become a big fan of yours, Prof. Kilian

  • @theruviyal8307 · 3 years ago

    Voila!! After a long time, those words "does it make sense" and "raise your hands if you are with me". Learnt a lot. Eager to see more of your lectures

  • @khaledmohamed-gj4cn · 3 years ago

    Thanks a lot, professor Kilian Weinberger, for your great lectures. I cannot put into words how much I appreciate them. I have learned a lot from your lectures. I am watching the last lecture of your course and I do not want it to end. I think I will binge-watch it again in my free time because it is so entertaining (needless to say informative and thought-provoking), and I would like to wish you a merry Christmas.

  • @deltasun · 3 years ago

    great talk and great message for the community. thank you

  • @doyourealise · 3 years ago

    Hello sir, I have learnt many things from your videos, and I would like to say Merry Christmas and have a great year.

  • @jordankuzmanovik5297 · 3 years ago

    Bring us new videos about deep learning.. Pleaseeeeee

  • @satviktripathi6601 · 3 years ago

    Amazing lecture ❤❤❤❤

  • @vivekdahiya1301 · 3 years ago

    Deep learning and natural language processing lectures, please!! I have learnt a lot from your lectures. Thank you!

  • @juliocardenas4485 · 3 years ago

    I wish I could take a sabbatical to work with Dr. Weinberger. I guess applying what I’m learning here is a great compromise. 👨🏾‍💻

  • @alleycatsphinx · 3 years ago

    There’s still a curious relationship between error-correcting codes and nearest neighbors - are you familiar with Hilbert curves? Thank you for a great and sensible talk.

  • @kilianweinberger698 · 3 years ago

    I am, but what is the relationship that you are alluding to?

  • @alleycatsphinx · 3 years ago

    @kilianweinberger698 I’ve been impressed with how elegantly nearest neighbors can be solved (approximately) very quickly in parallel if you use a “linear octree” type structure, and those are most effective when based on error-correcting codes (i.e., Gray code). What inspired your use of error-correcting codes?
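    The "linear octree" idea mentioned in this comment can be sketched briefly: encode each point as a Morton (Z-order) key by interleaving coordinate bits, sort the points along that curve, then answer a query by binary-searching its key and checking only a small window of curve-adjacent candidates. This is a minimal 2D illustration of the general idea, not the commenter's actual implementation; the function names, the 16-bit coordinate assumption, and the window size are all my own choices.

    ```python
    import bisect

    def interleave_bits(x: int, y: int) -> int:
        """Interleave the bits of two 16-bit non-negative ints into a Morton (Z-order) key."""
        key = 0
        for i in range(16):
            key |= ((x >> i) & 1) << (2 * i)       # x occupies the even bit positions
            key |= ((y >> i) & 1) << (2 * i + 1)   # y occupies the odd bit positions
        return key

    def build_index(points):
        """Sort points along the Z-order curve; nearby keys tend to be nearby in space."""
        keyed = sorted((interleave_bits(px, py), (px, py)) for px, py in points)
        keys = [k for k, _ in keyed]
        pts = [p for _, p in keyed]
        return keys, pts

    def approx_nearest(keys, pts, query, window=4):
        """Approximate nearest neighbor: binary-search the query's Morton key,
        then check a small window of curve-adjacent candidates exactly."""
        qkey = interleave_bits(*query)
        i = bisect.bisect_left(keys, qkey)
        lo, hi = max(0, i - window), min(len(pts), i + window)
        return min(pts[lo:hi],
                   key=lambda p: (p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2)

    points = [(3, 7), (10, 10), (200, 5), (50, 60), (9, 11)]
    keys, pts = build_index(points)
    print(approx_nearest(keys, pts, (11, 9)))  # → (10, 10)
    ```

    Because the sorted key array is a plain contiguous structure, both index construction and lookups parallelize easily, which is presumably the appeal the comment alludes to; a Hilbert or Gray-code ordering preserves locality better than raw Z-order at the cost of a more involved encoding.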

  • @Fatunecher · 3 years ago

    Thank you very much for sharing your lecture series and a great talk! It was like air to me at a time when I couldn't breathe. I've caught up a lot, and you inspired me in some sense as well. If you accept small $ support for your YouTube channel work, I'd love to give it. Thanks again!
