Can you provide the notebook link for video segmentation? Thanks
@ismailkattar8270 · 7 hours ago
Can I segment clothes with SAM 2? If not, do you know any pretrained model?
@Roboflow · 7 hours ago
SAM2 can segment anything, clothes included. But it would need a little bit of help to understand what clothes are. You’d need to prompt it the right way.
@elnaghy · 8 hours ago
Greetings from Egypt, thanks for your awesome presentation!
@Roboflow · 7 hours ago
My pleasure!
@tomiantoljak1573 · 11 hours ago
@Roboflow do you know if the model works in real time on camera, live on device? Without a video input file, no user input of any kind, just a plain camera running on for example iPhone, and model inferring the segmentation live as the person is moving? Thanks a lot!!
@Roboflow · 7 hours ago
It can and can’t, depending on what you want to do. Could you be a bit more specific? What outcome do you expect?
@marjanehtaghavi3045 · 11 hours ago
Thanks for the video. Does it work with CUDA 11.8?
@Roboflow · 7 hours ago
I’m not sure. I used 12.2.
@philipkopylov3058 · 11 hours ago
Fine-tuning soon? :)
@Roboflow · 7 hours ago
Hahaha. Not sure if I’m smart enough to fine-tune SAM ;)
@hegalzhang1457 · 13 hours ago
Great work! Do you have example code for fine-tuning an OCR task?
@Roboflow · 11 hours ago
Not yet. But I plan to play with fine-tuning OCR and VQA tasks.
@TUSHARGOPALKA-nj7jx · 20 hours ago
Does SAM2 allow for instance or panoptic segmentation?
@Roboflow · 11 hours ago
Great question. Unfortunately not. SAM2 only gives you masks without classes.
@TUSHARGOPALKA-nj7jx · 20 hours ago
Hope something similar to FastSAM and MobileSAM comes for SAM2 as well. Also, combining it with Grounding DINO to autodistill to a smaller model would really be something amazing for video segmentation.
@Roboflow · 11 hours ago
I have no doubt people are already working on projects like this!
@ojasvisingh786 · 20 hours ago
👏👏
@TheVarun6 · 22 hours ago
Hi, I'm looking to fine-tune Florence 2 for a segmentation task. Would appreciate your insights!
@abdshomad · 22 hours ago
I wanted to count bees and small chickens in their hive without labeling and training. It was not successful with SAM (1). Hope it can succeed using SAM 2.
@Roboflow · 11 hours ago
How did you try to do it last time?
@abdshomad · 10 hours ago
@@Roboflow Using CVAT annotation + the SAM online service. I haven't tried Roboflow's segmentation tools yet. For detection I used Detectron2, but the results are not yet accurate. Using napari and StarDist yields better results, but that annotation tooling doesn't yet give consistent results and the process needs to be coded.
@juanpita9387 · 1 day ago
The Google Colab code does not work :(
@Roboflow · 11 hours ago
What’s the problem?
@juanpita9387 · 11 hours ago
@@Roboflow In cell number 11:

<ipython-input-11-8dab19eccf85> in <cell line: 4>()
      2 generator = sv.get_video_frames_generator(SOURCE_VIDEO_PATH)
      3 # create instance of BoxAnnotator
----> 4 box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
      5 # acquire first video frame
      6 iterator = iter(generator)

TypeError: BoxAnnotator.__init__() got an unexpected keyword argument 'text_thickness'
@schneidershades · 1 day ago
Thank you
@OnuralpSEZER · 1 day ago
It was an awesome and fun session, thank you so much ! :)
@Roboflow · 11 hours ago
Always a pleasure to see you in the chat!
@hanma9249 · 1 day ago
GG
@roeyasher1396 · 1 day ago
Can I use this for real-time use? And if so, how?
@amir-ui3vh · 1 day ago
I love you man, you are great!
@hegalzhang1457 · 1 day ago
Hey guys, do you have an example of fine-tuning an OCR model with Florence-2?
@ObsidianMusic842004 · 1 day ago
Greetings. First of all, this is an excellent video, and I learned a lot from it. I just have one question: I'm confused about what a deque is and why we used it in our defaultdict.
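For context, here is a minimal sketch of the pattern the question refers to (the exact names and the maxlen value are assumptions, not the video's actual code): a defaultdict that creates a bounded deque per tracker ID.

```python
from collections import defaultdict, deque

# One bounded history of center points per tracker ID.
# defaultdict creates the deque on first access; deque(maxlen=30)
# silently drops the oldest point once 30 are stored, so memory
# stays constant no matter how long the video runs.
trails = defaultdict(lambda: deque(maxlen=30))

# Simulate 100 frames of a single tracked object (ID 1) moving.
for frame_idx in range(100):
    center = (frame_idx, frame_idx * 2)  # hypothetical (x, y) center
    trails[1].append(center)

# Only the most recent 30 points survive: frames 70..99.
```

A plain list would grow forever and need manual trimming; deque(maxlen=...) does that trimming automatically, which makes it a natural fit for drawing short motion trails behind tracked objects.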
@subhammishra6737 · 1 day ago
Hello sir, how can I create weight files and cfg files for a dataset using YOLOv9?
@vinni8619 · 4 days ago
Great video! How can I increase the size of the labels in the image?
@Roboflow · 3 days ago
Thanks! Change the text_thickness and text_scale values of LabelAnnotator.
@RipNonoRasta · 4 days ago
amazing work!
@Roboflow · 3 days ago
Thank you!
@hongbo-wei · 5 days ago
Amazing, Roboflow makes utilizing computer vision so easy! Great job! Much appreciated!
@boskobuha8523 · 6 days ago
If I have a remote camera on a hill and I am monitoring for wildfires, how do I get alerts from the camera, and is it possible to make this work? What equipment do I need?
@Roboflow · 5 days ago
It’s actually very easy, especially with tools like Telegram, where you can set up chatbots and send messages to yourself.
@boskobuha8523 · 5 days ago
@@Roboflow But how? Any example?
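For anyone else wondering, a minimal sketch of the Telegram approach: sendMessage is the real Telegram Bot API method, but the token (issued by @BotFather), the chat ID, and the function names here are placeholders you would fill in yourself.

```python
import urllib.parse
import urllib.request


def build_alert_url(bot_token: str, chat_id: str, text: str) -> str:
    # The Bot API exposes sendMessage as a plain HTTPS endpoint:
    # https://api.telegram.org/bot<TOKEN>/sendMessage?chat_id=...&text=...
    query = urllib.parse.urlencode({"chat_id": chat_id, "text": text})
    return f"https://api.telegram.org/bot{bot_token}/sendMessage?{query}"


def send_wildfire_alert(bot_token: str, chat_id: str, confidence: float) -> bool:
    # Call this when the detector's smoke/fire confidence passes a threshold.
    text = f"Wildfire alert! Detection confidence: {confidence:.0%}"
    with urllib.request.urlopen(build_alert_url(bot_token, chat_id, text)) as resp:
        return resp.status == 200
```

On the equipment side, anything that can run inference and reach the internet (even a small single-board computer behind a mobile modem) can fire these requests; the heavy lifting is the detection model, not the alerting.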
@UltimatedKevin · 6 days ago
Hello! I have a question: how does the model interpret the "out" variable in the candy example? Can it tell the difference between the object moving to the right or to the left? Is it based on how the bounding box approaches the line? And thank you so much for creating this content!
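A minimal sketch of one common way in/out direction is decided (this is a guess at the kind of logic involved; supervision's LineZone implementation may differ in its details): the sign of a 2D cross product tells which side of the counting line a box center is on, and a sign flip between consecutive frames means a crossing in one direction or the other.

```python
def side_of_line(line_start, line_end, point):
    # Sign of the 2D cross product: positive on one side of the
    # line, negative on the other, zero exactly on the line.
    (x1, y1), (x2, y2) = line_start, line_end
    px, py = point
    return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)


def crossing_direction(line_start, line_end, prev_center, curr_center):
    """Return 'in', 'out', or None for a box center moving between frames.

    Which sign counts as 'in' is an arbitrary convention here; it just
    has to be consistent with the line's orientation.
    """
    before = side_of_line(line_start, line_end, prev_center)
    after = side_of_line(line_start, line_end, curr_center)
    if before < 0 <= after:
        return "in"
    if before >= 0 > after:
        return "out"
    return None  # stayed on the same side: no crossing this frame
```

So it is not the model itself that knows left from right: the detector plus tracker only supply a consistent box per object per frame, and this per-frame geometry decides the direction of each crossing.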
@barderino5673 · 6 days ago
I would really like to see how you train on multiple datasets for different tasks like OD, OCR, REGION_PROPOSAL, and maybe something like OPEN_VOCABULARY on one set and MORE_DETAILED_CAPTION on another, to see whether the model can effectively transfer knowledge, for example by including in image captions things that are not in the caption dataset but are in the other, or by improving OCR in image descriptions.
@NaveenKumarLaskari · 7 days ago
How can I fine-tune the model for the OCR_WITH_REGION task?
@NaveenKumarLaskari · 7 days ago
Thanks for the video tutorial. Though multiple tasks can be achieved with this model, all the videos cover a single task. Can you explain how we can tune the model for two different tasks, for example OCR and OD?
@Roboflow · 7 days ago
The model is still capable of doing both detection and OCR. We just focused on OD fine-tuning in this video. Take a look here to learn more about other tasks: kzread.info/dash/bejne/mp6T28ScgsfRZbw.html&ab_channel=Roboflow
@inspirehub999 · 7 days ago
It's a great informative video 😊. But is gelan-c.pt part of the YOLOv9 family? I trained the model using your notebook. Later, to test it on my local machine, I downloaded the whole yolov9 folder containing best.pt and tried to use it in my code, but it shows an error. I used: from ultralytics import YOLO, then model = YOLO(path to my best.pt), and then ran prediction, but it fails to load the model 😢. Sir, could you please help me out? Hoping for a fast reply.
@qrubmeeaz · 8 days ago
11:00
@sonnyson0723 · 8 days ago
Thanks all for the instruction!
@nicolassuarez2933 · 8 days ago
Sorry, but if you do not explain how to fine-tune real custom data from scratch, the tutorial is almost useless...
@Roboflow · 8 days ago
I’m afraid I don’t understand what you mean. That’s pretty much the topic of the video. Maybe there is a part that you expected but was not there?
@ikramessafi9560 · 8 days ago
Hello, I am training my model on 2500 images, but the precision is just 80 and I sometimes get overfitting. I really need to improve the results for my project soon. Can you explain why?
@pushpendrakushwaha604 · 10 days ago
Hey! That's really great. I have a question: if I want to extract the segmented masks from the predictions, is there any way?
@alonsovalderrama6688 · 11 days ago
Excellent video!
@hiteshpradhan4246 · 11 days ago
Can I make an object detection model just from this video, or do I have to watch the whole tutorial? Fast replies would be appreciated!
@Boromosel · 13 days ago
This is super interesting. Would be great to know how these compare today (4 years later)
@arifahnurainia272 · 13 days ago
Thank you for the video tutorial, you are cool 👏👏👏 I hope there will be a version of this tutorial using Jupyter Notebook 😁
@edisukirman9877 · 14 days ago
Why can't I install "ultralystic"? The command says: ERROR: Could not find a version that satisfies the requirement ultralystic (from versions: none) / ERROR: No matching distribution found for ultralystic, even though I have upgraded pip to the latest version.
@Roboflow · 13 days ago
What’s your Python version? What’s your OS?
@ObsidianMusic842004 · 14 days ago
Greetings. I wanted to know if there is any way to fill the polygons with a specific color?
@LGN.420 · 14 days ago
Thank you, I just finished my aimbot model, goat
@adiasz · 15 days ago
Great content. Thanks!
@gabelito25 · 16 days ago
Why this warning? WARNING [07/15 17:04:11 d2.data.datasets.coco]: Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
@adrianalbertomarinbalseca7132 · 16 days ago
Oof, it looks cool, but those trainings do need internet :/ What if, in a given case, there were none?
@gonzaloillansalvador5858 · 16 days ago
What is the cost of using that model?
@Likith_Gannarapu · 18 days ago
While using the tracker, I noticed that the tracker IDs are not assigned sequentially. Specifically, after tracker ID #6, the next assigned tracker ID was #8. Tracker ID #7 was skipped. This issue can be observed starting at timestamp 22:45 in the video.
@ahmeddiaamaroufi867 · 19 days ago
Hey, I've gone through 10 different companies and I still love yours the most. I'm excited about your service and also run two YouTube channels with 560k and 280k subscribers. Could we work together to make a video about your service? I have some ideas for how I would do the video. Have you done any work with YouTubers in the past? I hope we can work together on this collaboration. If you have any questions, feel free to ask. Best regards, Diaa Maroufi.
@mootal2812 · 19 days ago
Nobody can see the screen! 😅
@josephyucra4503 · 19 days ago
Nice video. Will there be an update for Python 3.11.9? Because when I installed the requirements, it showed that only versions from 3.7 to 3.11 are accepted, and 3.11.9 is what I use in VS Code. Thanks, regards.