Spark Interview Question | Bucketing | Spark SQL

#Apache #Spark #SparkSQL #Bucketing
Please join my channel as a member to get additional benefits like materials on Big Data and Data Science, live streams for members, and much more.
Click here to subscribe : / @techwithviresh
About us:
We are a technology consulting and training provider, specializing in areas like Machine Learning, AI, Spark, Big Data, NoSQL, graph databases, Cassandra, and the Hadoop ecosystem.
Mastering Spark : • Spark Scenario Based I...
Mastering Hive : • Mastering Hive Tutoria...
Spark Interview Questions : • Cache vs Persist | Spa...
Mastering Hadoop : • Hadoop Tutorial | Map ...
Visit us :
Email: techwithviresh@gmail.com
Facebook : / tech-greens
Twitter : @TechViresh
Thanks for watching
Please Subscribe!!! Like, share and comment!!!!

Comments: 29

  • @vishalaaa1 · 1 year ago

    nice

  • @cajaykiran · 2 years ago

    Thank you

  • @gauravbhartia7543 · 4 years ago

    Nicely Explained.

  • @TechWithViresh · 4 years ago

    Thanks:)

  • @dipanjansaha6824 · 4 years ago

1) When we write the files directly to ADLS, how does bucketing help? 2) Also, is it a correct understanding that bucketing is good when we use a DataFrame for reads only? As I understood it, if there's a use case where a write operation happens on every build, bucketing would not be the best approach.

  • @TechWithViresh · 4 years ago

Yes, bucketing is more effective for reusable tables involved in heavier joins.
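    A minimal Scala sketch of that pattern (not the video's exact notebook; the SparkSession, the toy orders/customers data, and the table names are assumptions): both sides are written bucketed on the join key, so on re-read the sort-merge join can skip the shuffle.

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder().appName("BucketedJoinSketch").getOrCreate()
        import spark.implicits._

        // Disable broadcast joins so the toy data still demonstrates a sort-merge join.
        spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")

        // Hypothetical toy data standing in for the video's tables.
        val orders    = Seq((1, "laptop"), (2, "phone"), (1, "mouse")).toDF("customer_id", "item")
        val customers = Seq((1, "Asha"), (2, "Ravi")).toDF("customer_id", "name")

        // bucketBy only works with saveAsTable; bucket and sort both sides on the join key.
        orders.write.bucketBy(4, "customer_id").sortBy("customer_id")
          .mode("overwrite").saveAsTable("orders_b")
        customers.write.bucketBy(4, "customer_id").sortBy("customer_id")
          .mode("overwrite").saveAsTable("customers_b")

        // Joining the re-read tables on the bucket column should show no
        // Exchange (shuffle) under the SortMergeJoin in the plan.
        spark.table("orders_b").join(spark.table("customers_b"), "customer_id").explain()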

  • @eknathsatish7502 · 3 years ago

    Excellent..

  • @TechWithViresh · 3 years ago

    Thanks :)

  • @bhushanmayank · 3 years ago

How does Spark know that the other table is bucketed identically on the join attribute?
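    (For context: when a bucketed table is saved, the bucket spec, i.e. the column and the bucket count, is stored as table metadata in the catalog, and the planner compares the specs of both join sides. Assuming the orders_b table from the sketch above exists, you can inspect it like this:)

        // The bucket spec is plain catalog metadata; the planner reads it for both join sides.
        spark.sql("DESCRIBE EXTENDED orders_b").show(100, truncate = false)
        // Expect rows like:
        //   Num Buckets     4
        //   Bucket Columns  [`customer_id`]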

  • @RAVIC3200 · 4 years ago

Again, great content. Viresh, can you make a video on the scenarios interviewers usually ask, like: 1) If you have a 1 TB file, how much time does it take to process (take any standard cluster configuration to explain), and if I reduce it to 500 GB, how much time will it take? 2) DAG-related scenario questions. 3) If a Spark job fails in the middle, will it start from the beginning if you re-trigger it? If not, why? 4) Checkpoint-related questions. Please try to cover such scenarios; if it's all in one video, that would be really helpful. Thanks again for such videos.

  • @TechWithViresh · 4 years ago

    Thanks, don’t forget to subscribe.

  • @RAVIC3200 · 4 years ago

@TechWithViresh I'm your permanent viewer 🙏🙏

  • @SpiritOfIndiaaa · 2 years ago

Can you please share the notebook URL? Thanks a lot, really great learnings.

  • @gunishjha4030 · 3 years ago

Great content! You used bucketBy in the Scala code to make the changes; can you tell how to handle the same in Spark SQL as well? Is there any function we can pass in Spark SQL for the same?

  • @gunishjha4030 · 3 years ago

Found it, thanks anyway: PARTITIONED BY (favorite_color) CLUSTERED BY (name) SORTED BY (favorite_numbers) INTO 42 BUCKETS;
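    For completeness, that clause sits inside a full CREATE TABLE. A sketch following the bucketing example in the Spark SQL docs (the users table and its columns are the docs' example, not the video's):

        // Spark SQL equivalent of bucketBy: CLUSTERED BY ... INTO n BUCKETS.
        spark.sql("""
          CREATE TABLE users_bucketed_and_partitioned (
            name STRING,
            favorite_color STRING,
            favorite_numbers ARRAY<INT>
          ) USING parquet
          PARTITIONED BY (favorite_color)
          CLUSTERED BY (name) SORTED BY (favorite_numbers) INTO 42 BUCKETS
        """)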

  • @mdfurqan · 1 year ago

@gunishjha4030 But are you able to insert data into the bucketed table using Spark SQL when the underlying storage is Hive?
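    (As far as I know, and worth verifying on your Spark version: Spark's bucketing layout is not compatible with Hive's, so Spark either refuses to write Hive-bucketed output or ignores the Hive bucket spec. A Spark-managed alternative, reusing the hypothetical session from the first sketch, is to let Spark own the table:)

        // Spark-managed bucketed table (Spark's bucketing, not Hive-compatible bucketing).
        val users = Seq(("Asha", "red"), ("Ravi", "blue")).toDF("name", "favorite_color")
        users.write.bucketBy(42, "name").sortBy("name")
          .mode("overwrite").saveAsTable("users_bucketed_spark")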

  • @aashishraina2831 · 3 years ago

    excellent

  • @TechWithViresh · 3 years ago

    Thanks :)

  • @mateen161 · 4 years ago

Nice explanation! Just wondering how the number of buckets should be decided. In this example you used 4 buckets; can't we use 6, 8, or 10? Is there a specific reason for using 4 buckets?

  • @TechWithViresh · 4 years ago

It can be any number, depending on your data and the bucket column.
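    To illustrate that flexibility with the toy tables from the first sketch: the count itself is free (6, 8, or 10 would all work), but for a shuffle-free join both sides should normally use the same number of buckets.

        // Any bucket count is legal; 8 here instead of the video's 4.
        orders.write.bucketBy(8, "customer_id").sortBy("customer_id")
          .mode("overwrite").saveAsTable("orders_b8")

        // Match the count on the other join side. (Some Spark 3.x versions can
        // also coalesce mismatched counts when
        // spark.sql.bucketing.coalesceBucketsInJoin.enabled is true.)
        customers.write.bucketBy(8, "customer_id").sortBy("customer_id")
          .mode("overwrite").saveAsTable("customers_b8")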

  • @sachink.gorade8209 · 4 years ago

Hello Viresh sir, nice explanation. Just one thing I did not understand: when do we create the 8 partitions for these two tables? I could not find any code for it in the video, so could you please explain?

  • @TechWithViresh · 4 years ago

8 is the default number of partitions (round robin) because the cluster used here has 8 nodes.
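    If you'd rather check than infer it, two quick probes against the same hypothetical session:

        // Default parallelism the scheduler falls back to (commonly the total
        // executor cores; 8 on the 8-node cluster used in the video).
        println(spark.sparkContext.defaultParallelism)

        // Actual partition count of a DataFrame:
        println(spark.table("orders_b").rdd.getNumPartitions)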

  • @cajaykiran · 2 years ago

Is there any way I can reach out to you to discuss something important?

  • @TechWithViresh · 2 years ago

    Send the details at techwithviresh@gmail.com.

  • @himanshusekharpaul476 · 4 years ago

Hey, nice explanation. But I have one doubt: in the video you used 4 buckets. What criteria should we keep in mind while deciding the number of buckets in a real project? Is there any formula or bucket-size constraint? Could you please help?

  • @TechWithViresh · 4 years ago

The idea behind both data-distribution techniques, partitioning and bucketing, is to distribute the data evenly and at an optimum size that a single task can process effectively.

  • @himanshusekharpaul476 · 4 years ago

OK. What is the optimum bucket size that can be processed by a single task?
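    The video doesn't pin down a number. A common rule of thumb (an assumption here, not an official Spark figure) is to target roughly a block-to-task-sized file per bucket, around 128-256 MB, and derive the count from the table size:

        // Back-of-envelope bucket count from a target bucket file size.
        val tableSizeBytes   = 100L * 1024 * 1024 * 1024   // e.g. a 100 GB table
        val targetBucketSize = 256L * 1024 * 1024          // ~256 MB per bucket file
        val numBuckets = math.ceil(tableSizeBytes.toDouble / targetBucketSize).toInt
        println(numBuckets)   // 400 for this hypothetical table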

  • @aashishraina2831 · 3 years ago

I think this video is repeated above; it can be deleted.