Sparse Sensor Placement Optimization for Classification
Science & Technology
This video discusses the important problem of how to select the fewest and most informative sensors for a classification problem. I will discuss the algorithm and give several examples.
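For readers who want to try the idea before the coding videos appear, here is a minimal sketch of the QR-pivot sensor-selection step described in Chapter 3: take the dominant SVD modes of the data and let QR with column pivoting pick the sensor locations. All sizes, variable names, and the synthetic data below are illustrative assumptions, not code from the video or book:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
n_samples, n_features, r = 200, 64, 10  # hypothetical sizes

# Synthetic low-rank data standing in for e.g. flattened images
X = rng.standard_normal((n_samples, r)) @ rng.standard_normal((r, n_features))

# Step 1: low-rank basis from the SVD of the mean-centered data
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Psi_r = Vt[:r]  # r dominant modes, shape (r, n_features)

# Step 2: QR with column pivoting on the modes; the first r pivot
# columns are the selected sensor (feature) locations
_, _, piv = qr(Psi_r, pivoting=True)
sensors = piv[:r]
print(sensors)
```

The pivoted QR greedily picks the columns of the mode matrix with the most remaining "energy", which is why it tends to land on informative measurement locations without solving a combinatorial search.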
Book Website: databookuw.com
Book PDF: databookuw.com/databook.pdf
These lectures follow Chapter 3 from:
"Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz
Amazon: www.amazon.com/Data-Driven-Science-Engineering-Learning-Dynamical/dp/1108422098/
Brunton Website: eigensteve.com
This video was produced at the University of Washington
Comments: 24
I've been following you since 2018. Truly eye-opening; it has taught me more than ten years of study would have.
Please make a MOOC on this!
Thank you. I have bought your book; it's really interesting and I learned a lot from it. I'm also interested in your work in PNAS on sparse sensing of insects' wings. Hope to see more follow-up papers on this topic.
This is pure gold 👏🏻👏🏻 Well done and thank you so much.
At last you uploaded a new one. Thanks, professor!
Finally, the good stuff!
Thank you very much. I've learned so much from you. Best wishes.
Thank you very much, sir, for the clear explanation!
It would be really cool if you could talk some more about the extension to neural networks. My first thought is that you would apply it to the latent-space representation? But that seems like it wouldn't save very much effort, since you would already have done the whole forward pass of all but one layer of the network. You could use it as a preprocessing step and train a classifier on the reduced images, but optimizing for linearly separable data seems at odds with the strength of neural networks, which is finding patterns that go beyond linear separability.
Awesome Thank you!
Thank you!
Very interesting! One thing I'm curious about: I wonder if the SVD step couldn't be replaced with something more tailored to the downstream classification task? I'm thinking this because the SVD mapping to the low-dimensional space is chosen for max-variance reasons. Maybe SVD is chosen because it's fast? Or some other reason I'm not seeing?
Fascinating
Whooaaa that moth thing is super cool! I wonder if this can be used to model the unpredictability of a housefly's seemingly random path.
Wow, it's great!
Steve, when we are instead retrieving features by passing source data through neural network layers (as opposed to linear decomposition via PCA/SVD), how are the network architectures and neuron-level activation functions reflected in the sparse optimization problem?
Thank you for the presentation. What if our data matrix is short and fat instead of tall and skinny? Does QR still apply in that case?
Thanks heaps! Can you post the links to your papers here??
Is sparsity robust to OOD (out-of-distribution) generalization?
Great lecture as usual; you are the best! I am waiting for how to code it in MATLAB and Python.
@loiseaujc
3 years ago
While waiting for Steve to put the coding videos online, here are two Medium posts I wrote on the subject, with code included. It is Julia code, but it should give a pretty good idea of how to do it: towardsdatascience.com/not-all-pixels-matter-for-classification-b8d8f0f198d3 and towardsdatascience.com/how-to-reconstruct-an-image-if-you-see-only-a-few-pixels-e3899d038bf9 :)
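In the same spirit as those posts, here is a tiny Python sketch (not the posts' Julia code) of the "not all pixels matter" idea: select a handful of pixels by QR pivoting on the dominant SVD modes, then classify with nearest centroids using only those pixels. The two-class data, sizes, and parameters below are synthetic and purely illustrative:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
n_per_class, n_pixels, r = 100, 64, 5

# Synthetic two-class "images": class 1 is brighter on 5 specific pixels
X0 = rng.standard_normal((n_per_class, n_pixels))
X1 = rng.standard_normal((n_per_class, n_pixels))
X1[:, :5] += 5.0
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]

# Dominant SVD modes of the centered data
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)

# QR column pivoting on the modes selects r informative pixels
_, _, piv = qr(Vt[:r], pivoting=True)
pixels = piv[:r]

# Nearest-centroid classification using only the selected pixels
Xs = X[:, pixels]
mu0, mu1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
pred = (np.linalg.norm(Xs - mu1, axis=1)
        < np.linalg.norm(Xs - mu0, axis=1)).astype(float)
accuracy = (pred == y).mean()
print(accuracy)
```

Because the class difference dominates the leading SVD mode here, the pivoting tends to pick the discriminative pixels, and a very simple classifier on those few measurements already separates the classes well.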
Why is w a discriminant vector?
The CIA might know what you are talking about.
Neurodivergent people produce better OOD generalization sensors.