At most one key-value pair per id per node (not one key-value pair per node, as far as I understand) after using reduceByKey().
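That matches how reduceByKey's map-side combine works. A plain-Python sketch of the idea (the ids, values, and two-partition layout are made up for illustration; each inner list stands in for one partition on one node):

```python
# Made-up data: each inner list stands in for one partition on one node.
partitions = [
    [("id1", 1), ("id2", 1), ("id1", 1)],  # partition on node A
    [("id1", 1), ("id2", 1)],              # partition on node B
]

def local_combine(pairs):
    """Combine before the shuffle: at most one (key, value)
    pair per id survives per partition."""
    acc = {}
    for k, v in pairs:
        acc[k] = acc.get(k, 0) + v
    return acc

# Map-side combine on every partition, before any data moves.
combined = [local_combine(p) for p in partitions]

# The shuffle then only has to merge these small per-partition dicts.
result = local_combine([kv for d in combined for kv in d.items()])
```

After the local combine, node A ships {"id1": 2, "id2": 1} instead of three raw pairs, which is exactly the savings reduceByKey gets over groupByKey.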
@vikastangudu712 · 1 year ago
you are awesome.
@mateusznowakowski6805 · 1 year ago
Great video
@bigdataenthusiast · 1 year ago
simply great
@ddoshi39 · 1 year ago
Thank you so much
@damianoderin4874 · 2 years ago
Awesome course. Thanks a lot!
@rydmerlin · 2 years ago
How can I combine queries to multiple data sources and get one result?
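On combining sources: in Spark you would typically load each source into its own DataFrame with spark.read and then join them into one result. The join logic itself, sketched in plain Python (the ids, names, and amounts are invented for illustration):

```python
# Two hypothetical sources, already loaded as (id, ...) rows;
# in Spark each would come from its own spark.read call.
customers = [(1, "alice"), (2, "bob"), (3, "carol")]
orders = [(1, 250.0), (3, 99.0), (3, 10.0)]

# Inner join on id -- one combined result built from two sources.
joined = [
    (name, amount)
    for (cid, name) in customers
    for (oid, amount) in orders
    if cid == oid
]
```

Rows with no match on the other side (bob here) drop out, as in any inner join; a left join would keep them with a null on the order side.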
@rydmerlin · 2 years ago
Why does it flicker so much?
@balanceresume2802 · 2 years ago
🤩😍🥰 Heather Miller
@Manapoker1 · 2 years ago
Thanks for this video, it helps a lot! <3
@WaterWheel360 · 2 years ago
commenting for the KZread algorithm
@ashwinichandran8839 · 3 years ago
Wonderful explanation.... waiting for many videos from you on different technologies like HIVE and PySpark
@ManikantGoutamReal · 3 years ago
this is god-level video. thanks a lot.
@user-ep2vw2ss5y · 3 years ago
My only regret is that I can't follow the English.
@souravbanerjee5744 · 3 years ago
Can you share the link to the Scala course referred to often in this series?
@nageshbs8945 · 3 years ago
We can't say all databases are structured; many NoSQL databases do not support a schema.
@Mryajivramuk · 3 years ago
Very impressive mentor you are... please do a full series on Spark and Scala, and be a part of our journey.
@madhu1987ful · 3 years ago
Is coalesce a wide transformation? Can you please explain in detail? Thanks
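For what it's worth: coalesce is a narrow transformation. It merges whole existing partitions into fewer target partitions rather than reshuffling individual records (repartition is the wide one). A simplified plain-Python sketch of that merging — real coalesce also weighs data locality, and the i % n rule and the data here are illustrative only:

```python
def coalesce_like(parts, n):
    """coalesce-style merge: whole partition i is folded into target
    bucket i % n, so no record-level shuffle happens (simplified)."""
    buckets = [[] for _ in range(n)]
    for i, part in enumerate(parts):
        buckets[i % n].extend(part)
    return buckets

# Four source partitions collapse into two without moving single records.
merged = coalesce_like([[1, 2], [3], [4], [5]], 2)
```

Because whole partitions move as units, coalesce can only shrink the partition count; to increase it, or to rebalance skewed data evenly, you need the full shuffle of repartition.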
@andys7596 · 3 years ago
So many videos on other channels, but this one after so many years still has the best-value content. Thank you!
@LivenLove · 3 years ago
What are the deciding factors for the number of partitions?
@LivenLove · 3 years ago
The only channel where I don't increase the playback speed
@avsbharadwaj8190 · 3 years ago
Why is there no mapper-side optimisation for the groupByKey operation?
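One common explanation (offered here as such, since the question is left open in the thread): a map-side combine only pays off when it shrinks the data, and grouping locally doesn't — every value still has to cross the network. A small plain-Python illustration with made-up pairs:

```python
partition = [("a", 1), ("a", 2), ("b", 3)]

# reduceByKey-style local combine: one value per key remains.
reduced = {}
for k, v in partition:
    reduced[k] = reduced.get(k, 0) + v

# groupByKey-style local grouping: every value is still there,
# so shipping the groups saves nothing over shipping raw pairs.
grouped = {}
for k, v in partition:
    grouped.setdefault(k, []).append(v)

values_shipped_reduced = len(reduced)                             # one per key
values_shipped_grouped = sum(len(vs) for vs in grouped.values())  # all of them
```

Since the grouped form ships exactly as many values as the raw form, Spark skips the map-side pass for groupByKey and shuffles the raw pairs directly.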
@underlecht · 3 years ago
Hello, at 1:30, for the "fastest" calculation you apply shuffling in line 3, and only after that do you measure the duration. Why don't you include the shuffling in the duration? Data preparation also takes time. Unless you mean "shuffle once and for all", but in reality it is hard to imagine that you will be grouping by only one column in your calculations. Thanks.
@narendernegi7493 · 3 years ago
Amazing.
@gothamsudheer4751 · 3 years ago
Your teaching skills are excellent. You know how to teach. Thank you so much!
@oguzhan2393 · 3 years ago
Finally, I found good videos about Spark and Scala, and she uses crystal-clear English.
@yangmingwang160 · 3 years ago
You make the best video among the Spark tutorials on KZread, thank you!
@aspait · 4 years ago
We can pre-partition a pair RDD (with hash or range partitioning); how can I use that with a DataFrame?
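On the RDD side that pre-partitioning is HashPartitioner or RangePartitioner; for DataFrames the usual counterparts are df.repartition(n, col("id")) for hash partitioning and df.repartitionByRange for range partitioning. The hash variant's bucketing rule can be sketched in plain Python (the keys and values here are made up; CPython's hash, not Spark's, is used for the demonstration):

```python
def hash_partition(pairs, num_partitions):
    """Hash-partitioning rule: bucket = hash(key) % num_partitions --
    the same idea as the RDD HashPartitioner."""
    buckets = {i: [] for i in range(num_partitions)}
    for k, v in pairs:
        buckets[hash(k) % num_partitions].append((k, v))
    return buckets

# Small int keys hash to themselves in CPython, so the layout is predictable.
buckets = hash_partition([(0, "a"), (1, "b"), (2, "c"), (3, "d")], 2)
```

The useful property is that every record with the same key lands in the same bucket, so a later join or aggregation on that key needs no further shuffle.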
@aneksingh4496 · 4 years ago
Please keep posting new videos on Spark and Scala... Your videos are awesome 👍
@DatNguyen-ry1vr · 4 years ago
Gold!!
@aneksingh4496 · 4 years ago
Absolutely great... please add some more videos on Spark real-time use cases... thanks
@pratikkawalgikar4839 · 4 years ago
The concept is now clear for me after searching all over the net for the last 3 months. Thanks a lot. Your videos are very simple to understand. Please upload more on Spark, as I have finished watching all your videos and they are simply superb.
@skms31 · 4 years ago
❤️ From India
@WisdomWaves33492 · 4 years ago
How are millions of records analysed by Spark?
@mrkrish501 · 4 years ago
Excellent
@gauravlotekar660 · 4 years ago
Would you be able to share the PPT?
@lishi6858 · 4 years ago
The best Spark course!!!
@jacobkim9856 · 4 years ago
Best
@jayfix192 · 4 years ago
thank you
@careymain3036 · 4 years ago
How do you filter by date, e.g. between 1-1-2020 and 1-30-2020, from a parquet file when the date field is a string?
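One common approach in Spark itself is to convert the string column with to_date(col("d"), "M-d-yyyy") and then filter with between on the resulting date column. The underlying parse-then-compare logic, in plain Python (column contents invented for illustration):

```python
from datetime import datetime

# Hypothetical string dates as they might sit in the parquet column.
raw = ["1-1-2020", "1-15-2020", "1-30-2020", "2-5-2020", "12-1-2019"]

fmt = "%m-%d-%Y"
start = datetime.strptime("1-1-2020", fmt)
end = datetime.strptime("1-30-2020", fmt)

# Parse first, then compare as dates; comparing the raw strings
# lexicographically would mis-order dates across months and years
# (e.g. "12-1-2019" sorts after "1-30-2020" as a string).
in_range = [s for s in raw if start <= datetime.strptime(s, fmt) <= end]
```

The key point either way: cast to an actual date type before comparing, rather than filtering on the raw strings.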
@_sr · 4 years ago
The best explanation I have ever seen.
@kiraninam · 4 years ago
The teacher has remarkable concepts. Hi teacher, how can I join your course if you are offering one? I am looking for Spark training.
@kiraninam · 4 years ago
Very impressive concept-based knowledge. Great job.
@sunitareddy8717 · 4 years ago
Your explanation is amazing; I couldn't get this even after spending hours.
@JohnDoe-zc4mu · 4 years ago
Holy cow, you explained in 12 min something I had to spend an hour understanding from other videos.
@rizvihasan6459 · 4 years ago
This channel has one of the best tutorials I have seen on YouTube. Big thanks, I really appreciate it.
@hiteshbitscs · 4 years ago
Why is all the mediocre content in HD and one of the most important videos in 360p... sick
Really helpful !!!
Excellent one. Thanks.