Deep Learning Lecture 13: Alex Graves on Hallucination with RNNs
Slides available at: www.cs.ox.ac.uk/people/nando....
Course taught in 2015 at the University of Oxford by Nando de Freitas with great help from Brendan Shillingford. Guest lecture by Alex Graves of Google DeepMind, who gives a new meaning to "the dreams in which I'm dying are the best I ever had".
Comments: 13
Beautiful.
*Alex Graves on Hallucination with RNNs*: at 49:28, recurrent neural networks trained on black-and-white Atari video games (_Enduro_ here and _River Raid_ at 52:10) simulate their own inputs, generating the game being played. The quality of the imagined game reflects the fidelity of the RNN model. External joystick control can be preserved, though it's said the car/aircraft sometimes moves in the wrong direction. ('Hey Sheldon, what are you doing?' 'Playing Super Mario on a poorly trained neural network approximation.') (Was this shown in that DeepMind promotional video from a few weeks ago, or was that just the NN playing the real game? I should watch that.)
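A minimal NumPy sketch of the closed-loop "dream" rollout described above. The RNN here is untrained with toy random weights (all names, sizes, and the `step` function are illustrative assumptions, not the lecture's actual model); the point is the loop structure: after the first real frame, the network is fed its own predicted frames, while the joystick action stays under external control.

```python
import numpy as np

rng = np.random.default_rng(0)
H, F, A = 64, 16, 3           # hidden size, flattened frame size, action count (toy values)
Wh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden weights
Wf = rng.normal(0, 0.1, (H, F))   # frame-to-hidden weights
Wa = rng.normal(0, 0.1, (H, A))   # action-to-hidden weights
Wo = rng.normal(0, 0.1, (F, H))   # hidden-to-frame output weights

def step(h, frame, action_onehot):
    # One recurrent step: new hidden state, plus predicted next frame
    # (sigmoid outputs = black/white pixel probabilities).
    h = np.tanh(Wh @ h + Wf @ frame + Wa @ action_onehot)
    pred = 1.0 / (1.0 + np.exp(-(Wo @ h)))
    return h, pred

# "Dream" rollout: the model consumes its own predictions as input.
h = np.zeros(H)
frame = rng.random(F)             # stand-in for one initial real game frame
for t in range(10):
    action = np.eye(A)[t % A]     # external joystick input, still under our control
    h, frame = step(h, frame, action)
```

In the trained version, `step` would be the learned game model; if it is a good model of the game's dynamics, the dreamed frames stay plausible for many steps before drifting.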
Question: how does he go from spectrogram to speech here? Griffin-Lim?
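The video doesn't confirm which method was used, but Griffin-Lim (the commenter's guess) is the classic way to recover a waveform from a magnitude spectrogram: iterate between the time domain and the STFT domain, keeping the given magnitudes and updating only the phase. A minimal sketch with SciPy (parameter values are illustrative; it assumes `mag` came from `stft` with the same `nperseg`/`noverlap` on a signal whose length is a multiple of the hop):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=32, nperseg=256, noverlap=192):
    """Recover a waveform from an STFT magnitude `mag` of shape (freq_bins, frames)."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))   # start from random phase
    for _ in range(n_iter):
        # Back to time domain with current phase estimate...
        _, x = istft(mag * phase, nperseg=nperseg, noverlap=noverlap)
        # ...then re-analyze and keep only the new phase, discarding the magnitude.
        _, _, S = stft(x, nperseg=nperseg, noverlap=noverlap)
        phase = np.exp(1j * np.angle(S))
    _, x = istft(mag * phase, nperseg=nperseg, noverlap=noverlap)
    return x
```

Each iteration reduces the mismatch between the fixed target magnitudes and the magnitudes of the current waveform estimate; more iterations generally mean fewer phase artifacts.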
At 27:58, what does convolving the Gaussian with the input give, exactly?
@TragicGFuel
4 months ago
I have the same question. I wonder if you found the answer, 7 years later.
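For context on the question above: in Graves' handwriting-synthesis setup, a mixture of Gaussians over character positions produces soft attention weights, and "convolving" those weights with the one-hot character sequence gives a differentiable soft read of the text, i.e. a blend of the characters the network is currently writing. A small sketch (variable names follow the usual alpha/beta/kappa convention; the toy values are mine):

```python
import numpy as np

def soft_window(alpha, beta, kappa, chars):
    """Gaussian-mixture attention window.

    alpha, beta, kappa: (K,) mixture importance, width, and center parameters.
    chars: (U, V) one-hot character sequence of length U over a V-symbol alphabet.
    Returns a (V,) soft character vector: a weighted blend of the input characters.
    """
    u = np.arange(chars.shape[0])  # character positions 0..U-1
    # phi[u] = sum_k alpha_k * exp(-beta_k * (kappa_k - u)^2): weight on position u.
    phi = (alpha[:, None] * np.exp(-beta[:, None] * (kappa[:, None] - u) ** 2)).sum(axis=0)
    return phi @ chars             # the "convolution with the input"
```

So the answer to the question is: the convolution yields a soft, differentiable pointer into the text, letting the network attend mostly to the character it is drawing while gradients still flow to the window parameters.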
Is there a software to convert digital text to manuscript text?
WHEN IS SOMEONE GOING TO TRY THIS WITH MUSIC
@AjayTalati
9 years ago
I'm working on that - generative time series models :) You should check out recurrent variational autoencoders, and this video: youtube/cu1_uJ9qkHA, done a year ago by some master's students. So with DeepMind/Google's resources, it should be coming out of their pipeline pretty soon?
@LucasWalter
9 years ago
Ajay Talati Can you fix that link?
@dudufridak1145
6 years ago
kzread.info/dash/bejne/lallwdeDadTReKQ.html
21:41 shit