What is an Autoencoder? | Two Minute Papers #86

Science & Technology

Autoencoders are neural networks that learn to create sparse representations of the input data and can therefore be used for image compression. Denoising autoencoders, after learning these sparse representations, can be presented with noisy images and reconstruct clean versions of them. Even better is a variant called the variational autoencoder, which not only learns these sparse representations but can also generate entirely new images. We can, for instance, ask it to create new handwritten digits and actually expect the results to make sense!
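
For readers who would like to experiment, here is a minimal sketch of a basic autoencoder in the spirit of Francois Chollet's Keras blog post linked further down. The layer sizes, optimizer, and training settings are illustrative assumptions, not the exact code used in the video.

    # Minimal autoencoder sketch (assumed sizes): compress flattened 784-pixel
    # MNIST digits into a 32-dimensional code, then reconstruct them.
    from tensorflow import keras
    from tensorflow.keras import layers

    input_img = keras.Input(shape=(784,))                       # flattened 28x28 image
    encoded = layers.Dense(32, activation="relu")(input_img)    # compressed code
    decoded = layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction

    autoencoder = keras.Model(input_img, decoded)
    autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

    # The training target is the input itself:
    (x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
    x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
    autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                    validation_data=(x_test, x_test))

A denoising autoencoder follows the same recipe, except that the inputs are corrupted (for example, with added Gaussian noise) while the training target remains the clean image.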
_____________________________
The paper "Auto-Encoding Variational Bayes" is available here:
arxiv.org/pdf/1312.6114.pdf
Recommended for you:
Recurrent Neural Network Writes Sentences About Images - • Recurrent Neural Netwo...
Andrej Karpathy's convolutional neural network that you can train in your browser:
cs.stanford.edu/people/karpath...
Sentdex's YouTube channel is available here:
/ sentdex
Francois Chollet's blog post on autoencoders:
blog.keras.io/building-autoen...
More reading on autoencoders:
probablydance.com/2016/04/30/...
WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang.
/ twominutepapers
We also thank Experiment for sponsoring our series. - experiment.com/
Subscribe if you would like to see more of these! - kzread.info_c...
Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (creativecommons.org/licenses/...)
Artist: audionautix.com/
Thumbnail background image source (we edited the colors and made some further modifications): pixabay.com/hu/fizet-sz%C3%A1...
Splash screen/thumbnail design: Felícia Fehér - felicia.hu
Károly Zsolnai-Fehér's links:
Facebook → / twominutepapers
Twitter → / karoly_zsolnai
Web → cg.tuwien.ac.at/~zsolnai/

Comments: 73

  • @salman3112
    7 years ago

    Just discovered this channel. Would call it my best online discovery ever. Thanks a lot for this. :)

  • @TwoMinutePapers
    7 years ago

    Thanks so much for the kind words and happy to have you around! :)

  • @niaei
    2 years ago

    Came here from The Coding Train. And now you are sending me to sentdex. I knew about you all. Means I am on the right track

  • @feraudyh
    6 years ago

    I think you explain it much better than some of the others.

  • @TwoMinutePapers
    7 years ago

    The next episode is going to be about Two Minute Papers itself, and after that, we'll be back to the usual visual fireworks. :)

  • @TwoMinutePapers
    7 years ago

    I am very well aware of the existence of stacked autoencoders, and was looking for an entire separate episode for that (while mentioning that we were only scratching the surface here). It would be great presenting it together with PCA and some matrix factorization techniques like SVD that I really wanted to do for a while. Still trying to find a way to do it in a way that is visually and intellectually exciting. :) Thanks for the feedback!

  • @jfk_the_second
    2 years ago

    @@TwoMinutePapers You've evolved a lot since five years ago! ❤️

  • @varunmahanot5766
    6 years ago

    It's really nice of you to promote a good channel like sentdex.

  • @atrumluminarium
    6 years ago

    I think the main advantage of AE compression over the standard compression techniques is that it is possibly a bit more general as opposed to something like JPEG which is only limited to images

  • @jfk_the_second
    2 years ago

    Wow. It's fascinating to see what this channel was like when it was sprinting up. The style is largely the same, but less fine-tuned. Károly has learned a lot more about engaging speech, and the icon looks just a little different. Also, we have two favorite phrases that have basically become a culture: 1) "Hold on to your papers" (and variations stemming therefrom), and 2) "Just two more papers down the line" (and variations therefrom).

  • @CopperHermit
    2 years ago

    I'm glad I found this channel, thank you!

  • @TheAwesomeDudeGuy
    7 years ago

    I love what you are doing. Pleasure to watch your videos!

  • @ahmed.ea.abdalla
    7 years ago

    Thanks for pointing us to such a valuable channel :D

  • @TwoMinutePapers
    7 years ago

  • @ahmed.ea.abdalla
    7 years ago

    +Károly Zsolnai-Fehér (Two Minute Papers) and of course as usual, thanks for the awesomeness you give us ;)

  • @JS-lf4sm
    9 months ago

    I have to put my paws to the 'like' button immediately!

  • @haroldsu1696
    6 years ago

    thank you for the great lecture!

  • @Ludifant
    4 years ago

    A great application could be in denoising before vectorisation of mid-lines or in animation when you need to automatically morph complex shapes. It seems to do that with quite a lot of understanding of what lines are.

  • @sirajkhan4571
    6 years ago

    Nice video as usual! Thanks

  • @offchan
    7 years ago

    1:48 Shouldn't we call it a very dense representation instead of a sparse one? Here's how I think about it: the smaller number of neurons has to compress the data from a large representation into a very dense, small one. Compressing should mean that you are making things dense, shouldn't it? And usually, we refer to a sparse vector as a really large representation.

  • @TwoMinutePapers
    7 years ago

    It's good that you raised this point, thanks! It is dense in the sense that there is likely "a lot of stuff" each neuron would be firing for, but the mathematical description of that representation is sparse in the sense that the basis contains a tiny number of elements (the number of neurons, that is).
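
    To make the size of that compressed representation concrete, here is a tiny sketch, assuming the same illustrative 784-to-32 toy autoencoder sketched in the description above: each image is summarized by just 32 numbers at the bottleneck, compared to its 784 input pixels.

        # Illustrative only: extract the bottleneck codes of the toy model above.
        encoder = keras.Model(input_img, encoded)   # maps an image to its 32-dim code
        codes = encoder.predict(x_test[:5])         # codes for five test digits
        print(codes.shape)                          # (5, 32): 32 values per image,
                                                    # versus 784 input pixels each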

  • @bobsmithy3103
    7 years ago

    Thanks a ton for the link. It'll probably help with my schol dts

  • @hedgehog_fox
    6 years ago

    BEST CHANNEL EVER!

  • @ServetEdu
    7 years ago

    I love this channel, thank you! I am setting up a Patreon account asap :)

  • @TwoMinutePapers
    7 years ago

    Happy to hear that you are enjoying the series. Thank you so much for your generous support! :)

  • @summerxia7474
    2 years ago

    Very clear and to the point!!! Why can't my teacher just talk this way?

  • @muaazzakria287
    3 years ago

    Thanks for explaining

  • @adityaraut5966
    4 years ago

    This is amazing

  • @tyan4380
    2 years ago

    excellent explanation

  • @ndavid42
    7 years ago

    I'm glad to see this kind of like-to-dislike ratio on YouTube; it's well deserved! Keep up the good work! (One of my favorite channels, you pick really good topics!)

  • @TwoMinutePapers
    7 years ago

    I'm very glad you liked it, and welcome to the club! :)

  • @proloycodes
    2 years ago

    this aged like fine wine

  • @vijayvaswani3812
    2 years ago

    Amazing channel.

  • @Vextrove
    4 years ago

    The inner nodes represent abstract concepts!

  • @ellisiverdavid7978
    3 years ago

    Concise and truly informative lecture! I'm just wondering: after we obtain the most important features from the bottleneck of our trained neural network, is it possible to apply the denoising capability of the autoencoder to a live video feed that is closely correlated with the training images? Would this be better, or even recommended, compared to the traditional denoising filters in OpenCV for real-time video? I'd love to learn more from your expertise and advice as I explore this topic further. Thank you for the insightful explanation and demo, by the way! Subscribed! :)

  • @tariqulislam2512
    7 years ago

    Nice video as usual! :)

  • @TwoMinutePapers
    7 years ago

    Thanks for watching! :)

  • @AbgezocktXD
    4 years ago

    Damn that map at 3:05! Crazy stuff

  • @kvreddy1985
    5 years ago

    Thank you..

  • @jacobstegemann8192
    7 years ago

    nice music at the ending :D

  • @wentworthmiller1890
    3 years ago

    Though we weren't asked, I'm holding on to my papers! Might squeeze a bit too! :)

  • @bosepukur
    5 years ago

    thanks

  • @thomasblackmore1509
    3 years ago

    do you have a link to the video that explains how to build the 'tanks' game shown at 3:24?

  • @KaranDoshicool
    4 years ago

    Can you give a link to the research paper that uses an autoencoder to generate handwritten digits?

  • @rnbbexyjlobt
    7 years ago

    I love machine learning and simulations and I don't want the videos about them to stop; however, I think that this channel would attract a wider audience and lead the viewers to do more research on their own if two minute papers also reported on other topics like astrophysics, quantum physics, bioengineering, nanotech, and the plethora of others available. Either way, keep up the good work

  • @TwoMinutePapers
    7 years ago

    I completely agree. We have episodes on these topics every now and then, but widening further is definitely on our todo list. However, these topics are further away from my field of expertise and therefore require even more preparation, which is currently not possible alongside a full time job. If it becomes possible in the future to do Two Minute Papers as a full time thing, I can't wait to do more of those. :)

  • @robosergTV
    7 years ago

    Any chance you know the video of Sentdex's where you show the tank game?

  • @TwoMinutePapers
    7 years ago

    I have asked him about this through Twitter, let's see if we can find out! :)

  • @robosergTV
    7 years ago

    thanks!

  • @r3ijmsszf3bsrew7tw7o
    7 years ago

    But regular (non-variational) autoencoders are generative models too!

  • @TheDiscoMole
    7 years ago

    Wasted your chance to say 'bear necessities' at 1:41

  • @wasaamhazm
    6 years ago

    Why is softmax better than an SVM on top of an autoencoder? If you have a paper explaining that, please share it.

  • @viratponugoti7735
    1 year ago

    YOUR daily dose of research papers (get the reference?).

  • @kim15742
    7 years ago

    Hey, just wanted to ask what IDE/text editor you use for coding.

  • @kim15742
    7 years ago

    Also, what operating system?

  • @TwoMinutePapers
    7 years ago

    Generally, I have projects spanning all 3 major operating systems - whichever is fit for the job at hand. As an editor, I use vim 90% of the time.

  • @kim15742
    7 years ago

    Okay, thanks for the info. Which language do you actually use?

  • @PiyushPallav49
    6 years ago

    Kim, Sublime Text is also one of the more widely used text editors. It has a cool feature for editing multiple selections in a single go, which I find quite time-saving. You can have a look at it too :)

  • @jordia.2970
    2 years ago

    Now I get why they compare it to PCA!

  • @miltondossantos9876
    4 years ago

    Hi Dear, thanks for the video. How do you make that script at ~ 0:29 min?

  • @WMTeWu
    4 years ago

    SOURCE in the top-left corner

  • @code-grammardude5974
    2 years ago

    imagine using this to create datasets from very few samples

  • @SFtheWolf
    7 years ago

    I see nefarious applications for both captcha breaking and signature forgery.

  • @HY-dd6sc
    7 years ago

    Captcha breaking? How so?

  • @Kram1032
    7 years ago

    There are absolutely stunning results on writing in a given style of handwriting from just a single handwritten note as a primer example. The networks can also serve to "beautify" handwritten text simply by making it a bit less divergent. I suspect that with a correspondingly extended dataset you could train them to faithfully generate hand signatures and, on top of that, manage to write entire books in a given signature style. Wanna read a novel in Dr. Claw's font? :D

  • @ArvindDevaraj1
    5 years ago

    Am I the only one thinking about impostor syndrome when he says "dear fellow scholars"

  • @dibyakantaacharya4104
    3 years ago

    Can u send me the cat and dog detection source code

  • @AviPars
    7 years ago

    Can you collab with Siraj Raval or Udacity and their self-driving AI nanodegree? It will help you grow your channel.

  • @MartinDxt
    7 years ago

    1/7 like ratio :D

  • @underdoge6862
    6 months ago

    Damn, I just realized I'm a hardcore nerd

  • @sophiacristina
    5 years ago

    I'm getting so addicted with AIs... :/

  • @NorthIT
    1 year ago

    WHERE'S YOUR ACCENT
