Classifying Cat vs Dogs | Kaggle Top 1%, No Ensemble
❤️ Support the channel ❤️
/ @aladdinpersson
Paid Courses I recommend for learning (affiliate links, no extra cost for you):
⭐ Machine Learning Specialization bit.ly/3hjTBBt
⭐ Deep Learning Specialization bit.ly/3YcUkoI
📘 MLOps Specialization bit.ly/3wibaWy
📘 GAN Specialization bit.ly/3FmnZDl
📘 NLP Specialization bit.ly/3GXoQuP
✨ Free Resources that are great:
NLP: web.stanford.edu/class/cs224n/
CV: cs231n.stanford.edu/
Deployment: fullstackdeeplearning.com/
FastAI: www.fast.ai/
💻 My Deep Learning Setup and Recording Setup:
www.amazon.com/shop/aladdinpe...
GitHub Repository:
github.com/aladdinpersson/Mac...
✅ One-Time Donations:
Paypal: bit.ly/3buoRYH
▶️ You Can Connect with me on:
Twitter - / aladdinpersson
LinkedIn - / aladdin-persson-a95384153
Github - github.com/aladdinpersson
Timestamps:
0:00 - Introduction to Competition
4:08 - My Solution
6:10 - Training the small model
8:00 - Result 1
8:43 - Fine-tuning
9:14 - Result 2
10:19 - How to improve further?
Comments: 33
I'll do more Kaggles, so let me know which ones you'd be most interested in.
@nachiketkanore
3 years ago
Do more computer vision
@harshraj22_
3 years ago
Something involving transformers. Maybe use the transformer from torch.nn, since good tutorials using it are very rare.
@davidduran7541
3 years ago
Plant Pathology from Kaggle. I recently did it with ResNet and ViT. Perhaps a custom implementation of the latter?
@suryaj2810
3 years ago
Personally, I would like graph neural networks: not from scratch, but using the libraries already available, explaining the intuition behind the network elements in the research papers. My aim is to understand and implement the VectorNet paper from Google.
@AIPlayerrrr
3 years ago
Peking autonomous driving, please. You would implement CenterNet in that competition.
Thank you so much for your videos! I'm currently doing my Bachelor's thesis on Pix2Pix, and you are bringing me up to speed in the perfect way. You strike the perfect balance between attention to detail/explanations while still showing the bigger picture of the network architecture. Couldn't do it without your videos (at least not that fast :D)
Thank you! Your solution is so cool.
Thanks for sharing!!
So it's like you use a good feature representation from the pretrained net and use logistic regression as the final output layer of the model. Maybe you could try STN or IC-STN to better reduce scale and rotation variance, I think.
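The idea the comment above describes can be sketched in a few lines: treat the pretrained network's activations as fixed features and fit a logistic-regression head on top. The features below are random stand-ins (hypothetical); in the video they would come from a pretrained EfficientNet.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # 200 images, 16-dim extracted features
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(float)      # synthetic cat (0) / dog (1) labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the binary cross-entropy loss:
# the logistic-regression "final output layer".
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"train accuracy: {acc:.2f}")
```

With good pretrained features the classes are often close to linearly separable, which is why such a simple head can be competitive.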
From your experience, are correlated features a problem for neural nets (overfitting, ...), and should they be removed upfront or not?
I am not sure if you can just resize the picture into a bigger one, and the source of the photo is another concern. In the very old days, when we used PCA to do classification, it was very sensitive to the light source and the quality of the image. Even in modern times, photos coming from different cameras can cause some error.
Do a biomedical kaggle competition they seem very interesting
This is great!
When you fine-tuned EfficientNet, how did you unfreeze the layers? Did you use model.train()? I am getting confused about how the pretrained model's layers get requires_grad set to True.
@AladdinPersson
3 years ago
Yes, just model.train(), run it for an epoch or two, and then set it to model.eval() when extracting the features.
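To untangle the two mechanisms the question mixes together: model.train()/model.eval() toggle layer behavior (dropout, batchnorm), while requires_grad on parameters controls which weights actually receive gradients. A minimal sketch, using a toy two-layer stand-in for the pretrained backbone (hypothetical, not the EfficientNet from the video):

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained backbone.
backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
head = nn.Linear(4, 1)

# Feature-extraction phase: freeze the backbone so only the head trains.
for p in backbone.parameters():
    p.requires_grad = False

# Fine-tuning phase: unfreeze everything and switch to train mode.
for p in backbone.parameters():
    p.requires_grad = True
backbone.train()
head.train()

# When extracting features afterwards, switch back to eval mode
# (fixes batchnorm statistics, disables dropout) and skip autograd.
backbone.eval()
with torch.no_grad():
    feats = backbone(torch.randn(2, 8))
print(feats.shape)  # torch.Size([2, 4])
```

Only parameters with requires_grad=True are updated by the optimizer, regardless of train/eval mode.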
I'm getting an error: NotImplementedError: When subclassing the `Model` class, you should implement a `call` method. Need a solution for a UNET created using TensorFlow.
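That NotImplementedError usually means a subclassed Keras Model is missing its forward pass. A minimal sketch of the fix (a toy model, not the commenter's UNET): define the layers in __init__ and wire them together in call().

```python
import tensorflow as tf

class TinyNet(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(8, activation="relu")
        self.dense2 = tf.keras.layers.Dense(1, activation="sigmoid")

    def call(self, inputs):
        # This is the method the error says is missing: it defines
        # the forward pass for the subclassed Model.
        x = self.dense1(inputs)
        return self.dense2(x)

model = TinyNet()
out = model(tf.zeros((2, 4)))
print(out.shape)  # (2, 1)
```

For a UNET, call() would chain the encoder blocks, the decoder blocks, and the skip connections in the same way.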
Tried to run the code on Colab and I got this error: CUDA out of memory. Tried to allocate 124.00 MiB (GPU 0; 11.17 GiB total capacity; 10.51 GiB already allocated; 80.81 MiB free; 10.58 GiB reserved in total by PyTorch). If reserved memory is >> allocated memory, try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
The clipping thing, to ensure the model isn't too confident about its predictions: how do you do that clipping on the output of torch.nn.Linear()? Also, is it possible that your model is performing well but clamping is reducing the score, since it reduces the probability?
@AladdinPersson
3 years ago
In PyTorch you'll just do torch.clip instead of np.clip. It's definitely possible there are better values for clipping; I didn't test this extensively, but the ones I found seemed to work a lot better than not having them.
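A quick sketch of why clipping helps on a log-loss leaderboard: log loss punishes a confidently wrong prediction almost without bound, so clipping probabilities away from 0 and 1 caps the worst-case penalty. The bounds (0.005, 0.995) below are illustrative, not the exact values used in the video; in PyTorch the equivalent call is torch.clip (or torch.clamp) on the predicted probabilities.

```python
import numpy as np

def log_loss(y, p):
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0, 1.0])
p = np.array([0.99, 0.01, 0.98, 1e-9])   # last prediction: confidently wrong

raw = log_loss(y, np.clip(p, 1e-15, 1 - 1e-15))   # tiny clip only for numerics
clipped = log_loss(y, np.clip(p, 0.005, 0.995))   # competition-style clipping
print(raw, clipped)  # one bad prediction dominates the unclipped loss
```

The single mistaken prediction contributes -ln(1e-9) ≈ 20.7 to the unclipped loss but at most -ln(0.005) ≈ 5.3 after clipping, so the averaged loss drops sharply.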
I want to suggest VAEs as a topic
where can we get this code
@AladdinPersson
3 years ago
hey, you can check out the code here: github.com/aladdinpersson/Machine-Learning-Collection/tree/master/ML/Kaggles/Dog%20vs%20Cat%20Competition
Instead of just setting 1 and 0 as labels, I think something like a random number from the range (0, 0.2) for cats and (0.8, 1) for dogs is a decent idea to try.
@AladdinPersson
3 years ago
Yeah pseudo labels would definitely be a good idea!
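The soft-label idea from the comment above can be sketched in a couple of lines: instead of hard 0/1 targets, draw each label uniformly from (0, 0.2) for cats and (0.8, 1) for dogs. The ranges are the commenter's suggestion, not values tested in the video.

```python
import numpy as np

rng = np.random.default_rng(42)
hard = np.array([0, 1, 1, 0, 1])   # 0 = cat, 1 = dog
soft = np.where(hard == 0,
                rng.uniform(0.0, 0.2, size=hard.shape),
                rng.uniform(0.8, 1.0, size=hard.shape))
print(soft)  # every cat target stays below 0.2, every dog target above 0.8
```

Like the clipping trick, this keeps the model from being pushed toward extreme probabilities, while rounding the soft targets still recovers the original classes.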
How about just posting up the finished project ... your finished project ... so we can test it ourselves without having to go through this exercise ? Talk Talk Talk ... just provide the end solution so the world can test the results of the work discussed here. Is that a difficult thing to do ?
@AladdinPersson
A year ago
A lot of people like to understand the thought process and step through it. Code for all my videos is available on GitHub, here you go: github.com/aladdinpersson/Machine-Learning-Collection/tree/master/ML/Kaggles/Dog%20vs%20Cat%20Competition
Thank you, Aladdin Persson. It's helpful to watch your videos to learn how a deep learning workflow should be done. As I am not a CS major, I am glad that I could watch your videos and learn from you!