Partitioning vs Bucketing | Spark and Hive Interview Question

This video is part of the Spark learning series. Spark provides different methods to optimize query performance, so as part of this video we cover the following:
What is partitioning?
How does partitioning help to improve performance?
What is bucketing?
How does bucketing help to improve performance?
What is the difference between partitioning and bucketing?
How is Spark's performance impacted by Dynamic Partition Pruning?
Here are a few links useful for you:
Git Repo: github.com/harjeet88/
Spark Interview Questions: • Spark Interview Questions
Spark performance tuning:
If you are interested in joining our community, please join the following groups:
Telegram: t.me/bigdata_hkr
Whatsapp: chat.whatsapp.com/KKUmcOGNiix...
You can drop me an email for any queries at
aforalgo@gmail.com
#apachespark #sparktutorial #bigdata
#spark #hadoop #spark3

Comments: 89

  • @alibinmazi452
    3 years ago

    The small-file problem in Hadoop? As I understand it, having lots of small files in the cluster increases the burden on the NameNode, because the NameNode stores the metadata for every file. With lots of small files it has to keep noting the address of each one, and since it is the master, if it goes down the whole cluster is effectively down.

  • @DataSavvy
    3 years ago

    That is right... In addition, Spark will also need to create more executor tasks. This creates unnecessary overhead and slows down your data processing.

  • @saurabhgulati2505
    3 years ago

    Also, if these files are compressed, the executor cores will be kept busy decompressing them.

  • @tanmaydash803
    1 year ago

    NameNode?

  • @-leaflet
    11 months ago

    @tanmaydash803 Otherwise called the master.
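The NameNode overhead described in this thread can be put into rough numbers. A back-of-envelope sketch, using the commonly cited figure of roughly 150 bytes of NameNode heap per metadata object (an assumption for illustration, not a measured value):

```python
# Rough comparison of NameNode heap cost (assumed figures, not measured):
# each file and each block costs the NameNode roughly 150 bytes of heap.
BYTES_PER_OBJECT = 150  # commonly cited rule of thumb

def namenode_heap_bytes(num_files, blocks_per_file=1):
    # one metadata object per file plus one per block
    return num_files * (1 + blocks_per_file) * BYTES_PER_OBJECT

# 10 TB stored as 10,000 files of 1 GB (8 blocks each at 128 MB)...
big_files = namenode_heap_bytes(10_000, blocks_per_file=8)
# ...versus the same 10 TB stored as 10 million 1 MB files (1 block each)
small_files = namenode_heap_bytes(10_000_000, blocks_per_file=1)

print(f"large files : {big_files / 1024**2:.1f} MB of NameNode heap")
print(f"small files : {small_files / 1024**2:.1f} MB of NameNode heap")
```

Same data volume, but the small-file layout costs the NameNode a couple of orders of magnitude more heap, which is the "burden" the comments refer to.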

  • @cajaykiran
    2 years ago

    I must have watched this video at least 5 times between yesterday and today. Thank you very much.

  • @prosperakwo7563
    3 years ago

    Thanks for the great video, very clear explanation

  • @ShashankGupta347
    2 years ago

    Crisp & clear, thanks!

  • @FaizanAli-we5wc
    1 year ago

    You are too good, sir. Thank you so much for clearing up our concepts ❤

  • @saurabhgarud6690
    3 years ago

    Thanks for a very helpful video. My question: how can we optimize using bucketing? In bucketing, data is distributed among buckets by hash, not sorted by value, so if I use a WHERE condition on a bucketed table, how do I avoid scanning irrelevant buckets the way I do with partitioning? In short, does a WHERE condition get optimized on a bucketed table, and if not, what other optimizations apply to bucketing?
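A toy Python sketch of the bucket-pruning idea this question is asking about: when the filter is on the bucketing column, the engine can hash the literal and scan only the matching bucket file. (The hash below is a simplified stand-in, not Hive's or Spark's actual hash function.)

```python
# Toy model of bucket pruning, showing why a WHERE on the bucketing
# column can skip files. The "hash" is a stand-in for the real one.
NUM_BUCKETS = 4

def bucket_of(age):
    return age % NUM_BUCKETS  # stand-in for hash(age) % NUM_BUCKETS

# "Write": route each row to its bucket file
rows = [("amit", 20), ("bela", 21), ("chen", 22), ("dev", 23), ("esha", 20)]
buckets = {i: [] for i in range(NUM_BUCKETS)}
for name, age in rows:
    buckets[bucket_of(age)].append((name, age))

# "Read": WHERE age = 20 -> hash the literal, scan ONLY that one bucket
target = bucket_of(20)
result = [r for r in buckets[target] if r[1] == 20]
print(result)
print(f"scanned 1 of {NUM_BUCKETS} bucket files")
```

So a WHERE on the bucketing column can prune buckets much like partition pruning skips directories; filters on other columns still have to scan every bucket.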

  • @ksktest187
    3 years ago

    Great effort, keep it up!

  • @rakeshdey1702
    3 years ago

    This is a nice explanation, but you are comparing physical (on-disk) partitioning for Hive with memory-level partitioning for Spark to show the difference in the number of files generated.

  • @sanketkhandare6430
    2 years ago

    Excellent explanation, helped a lot.

  • @HemanthKumardigital
    2 years ago

    Thank you so much, sir ☺️

  • @ayushjain139
    3 years ago

    How can I find out whether my bucketing was really utilized by the query? Is it visible from the physical plan? Also, I believe that in the case of partitioning + bucketing, both the partition and bucket filters should be in my query?

  • @shikhargupta7552
    1 year ago

    Please keep making more such videos. It would also be great if you could make something on cloud-based big data technologies.

  • @DataSavvy
    6 months ago

    Thanks Shikhar, I will plan to create videos on cloud. Do you need videos on any specific cloud topic?

  • @vikramrajsahu1962
    3 years ago

    Can we increase the performance of a Hive query while fetching records, assuming the table is already partitioned?

  • @anurodhpatil4776
    1 year ago

    Excellent!

  • @vutv5742
    6 months ago

    Nice explanation ❤ Completed ❤

  • @DataSavvy
    6 months ago

    Thanks

  • @sumit_ks
    2 years ago

    Very well explained sir.

  • @DataSavvy
    2 years ago

    Thanks Sumit :)

  • @raviranjan217
    3 years ago

    The small-file problem is a headache for the NameNode, since it has to manage the metadata. Spark also needs a larger number of executor tasks, which is again an overhead.

  • @bhooshan25
    1 year ago

    very useful

  • @anandraj2558
    3 years ago

    Nice explanation. Can you please also cover Hive join examples: map-side join, all the other joins, and performance tuning?

  • @DataSavvy
    3 years ago

    Sure, I will create videos on that.

  • @kumarsatyachaitanyayedida4717
    2 years ago

    How can we decide whether a particular column should be used for partitioning or for bucketing?

  • @nobinstren3798
    3 years ago

    Thanks man, it helps.

  • @DataSavvy
    3 years ago

    Thanks Nobin. Pleasure... :)

  • @bhavaniv1721
    3 years ago

    Hi, are you running Spark and Scala training classes?

  • @uditmittal3816
    2 years ago

    Thanks for the video. But I have one query: how do we insert data into a bucketed Hive table using Spark? I tried this, but it didn't give the correct output.

  • @krunalgoswami4654
    2 years ago

    I like it

  • @subhajitroy5850
    3 years ago

    Really appreciate the effort, @Data Savvy. I have a question: can we understand data search/retrieval on a partitioned table by analogy with element retrieval in a binary tree, and on a partitioned + bucketed table as a search in a nested binary tree? I am referring to the binary tree from data structures. I recently followed a mock big data interview video on your channel and liked it a lot. If possible, please upload a few more such videos. Thanks :)

  • @DataSavvy
    3 years ago

    Hi Subhajit... Thanks. More mock interviews are planned over the next few weeks. Excuse me, but I did not get your question :(

  • @subhajitroy5850
    3 years ago

    @DataSavvy The way data is retrieved/searched in a partitioned Hive table: can we correlate it with element retrieval in a binary tree (the binary tree from data structures)? Not sure if this is a better version :)

  • @tanushreenagar3116
    1 year ago

    Best explanation

  • @DataSavvy
    1 year ago

    Thanks for liking

  • @jonathasrocha6480
    2 years ago

    Is bucketing used when the column has high cardinality?

  • @routhmahesh9525
    3 years ago

    How can we decide the number of buckets when, after partitioning, one file is 128 MB, a second is 400 MB, and a third is 200 MB? Kindly answer, thanks in advance.

  • @sashikiran9
    2 years ago

    Important point: Hive partitioning is not the same as Spark partitioning. 7:34-9:14

  • @vamshi878
    3 years ago

    @Data Savvy, I observed on my local system with multiple cores that partitionBy and bucketBy both perform no shuffle; there is no Exchange in the plan. Is that why both produce small files? Will a shuffle happen on a large cluster? I am just reading from a file and writing with partitionBy or bucketBy, no transformations. In this case, will there be no shuffle even at cluster level?

  • @khanmujahid4743
    3 years ago

    It uses the hash value of the search item and goes to the bucket which matches that hash value.

  • @sambitkumardash9585
    3 years ago

    Sir, could you please give a syntactic example comparing Hive partitioning/bucketing with Spark partitioning/bucketing? Also, I couldn't understand the last point of your summary. Could you please give some more clarity on it?

  • @DataSavvy
    3 years ago

    Let me look into that

  • @Apna_Banaras
    2 years ago

    The small-file problem in Hadoop? It generates lots of metadata, which then increases the burden on the NameNode.

  • @r.kishorekumar1388
    2 years ago

    When there are a lot of small files in Hadoop, NameNode performance can be impacted because it cannot process them fast enough. Hadoop is built for handling big data, so creating too many small files may end up hurting NameNode performance. I came across this problem in my project.

  • @bharathraj4545
    6 months ago

    Hi bro, I am new to big data. Can you guide me further?

  • @DataSavvy
    6 months ago

    Hi Bharath, happy to guide you. Drop me an email on aforalgo@gmail.com

  • @likithaguntha8105
    3 years ago

    Can we partition after bucketing?

  • @rajeshp3323
    3 years ago

    But what I heard is that in Spark, 1 partition = 1 block size; partitions are not created using a specific column name like in Hive. Now for bucketing in Spark: as you said, 1 bucket should be a minimum of the block size, so does that mean 1 bucket = 1 partition? Then what is the need for bucketing in Spark? I'm confused.

  • @alokdaipuriya4607
    3 years ago

    Hi Harjeet, thanks for such an informative video. One quick question: you chose the country column for partitioning, that's OK, and the age column for buckets. Why did you choose the age column for bucketing and not the name column? Can we choose either of name and age, or is there some technical reason behind the choice of bucketing column? If yes, please do comment.

  • @saketmulay8353
    1 year ago

    It depends on the filter you want to apply. If you want to filter on age but you bucket by name, the problem remains as it is and the bucketing won't make any sense.

  • @iketanbhalerao
    1 year ago

    Without partitioning, can we directly do bucketing in Spark?

  • @selvansenthil1
    11 months ago

    How can we make the bucket size 128 MB, when the partition size would be 128 MB and would be further divided into buckets?

  • @kaladharnaidusompalyam851
    3 years ago

    Hi Harjeet, I came across a question in my latest interview: what packages do we need when we want to implement Spark?

  • @DataSavvy
    3 years ago

    Hi... It depends on what dependencies you are using in your project... Check your sbt file.

  • @sagarbalai1122
    3 years ago

    If you already have a project, then check the sbt/pom file, but generally you need at least spark-core and spark-sql to start with basic operations.

  • @anikethdeshpande8336
    9 months ago

    Is bucketing not supported with the save() method? It works fine with saveAsTable(). I am getting this error: AnalysisException: 'save' does not support bucketBy and sortBy right now.

  • @NN-sw4io
    3 years ago

    Sir, what if we filter only by age? What happens with the partition and the bucket then?

  • @xxxxxxxxxxa232
    1 year ago

    Partitioning and bucketing are similar to GROUP BY ... and WHERE value in a range

  • @ramchundi2816
    3 years ago

    Thanks, Harjeet. It was a great explanation. Quick question for you: what will happen if we remove a partition key after loading the data (in managed and external tables)?

  • @nikhithapolanki
    3 years ago

    How can you remove a partition key once the table is created? If you drop and recreate the table without the partition, the data present in the table's physical location cannot be read by the table. It will give a parsing exception.

  • @rajlakshmipatil4415
    3 years ago

    Number of buckets in Spark = size of data / 128? Am I correct? In that case, as above, can't we specify the number of buckets in Spark? In which cases should we go for bucketing and in which for partitioning? Can you give some examples?

  • @DataSavvy
    3 years ago

    If you use partitioning and it creates small files, then you should consider using bucketing there...
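The heuristic the question above refers to (roughly one 128 MB file per bucket) can be sketched as follows. The 128 MB target is a rule of thumb tied to the HDFS block size, not a formula Spark enforces:

```python
import math

# Heuristic target file size per bucket (assumption: one HDFS block, 128 MB)
BLOCK = 128 * 1024**2

def suggested_buckets(data_size_bytes):
    # aim for roughly one block-sized file per bucket, minimum 1
    return max(1, math.ceil(data_size_bytes / BLOCK))

for gb in (0.4, 2, 50):
    size = int(gb * 1024**3)
    print(f"{gb:>5} GB -> {suggested_buckets(size)} buckets")
```

The point of the heuristic is exactly the reply above: if partitioning alone would leave you with many tiny files, picking a bucket count from the data size keeps each output file near block size.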

  • @rajlakshmipatil4415
    3 years ago

    @DataSavvy Thanks for answering.

  • @kaladharnaidusompalyam851
    3 years ago

    I'll tell you one thing here. Partitioning is done based on the column and bucketing is done based on the rows (i.e., both concepts split data into multiple pieces, but partitioning splits by column value and bucketing by rows/records). Suppose we have data 1-100: we can bucket it as 1-25 in one bucket, 25-50 in a second, and 50-75 and 75-100 respectively, based on rows. But a partition is based on a column. For example, if you have a column (population year-wise from 2010-2020), we split the data year-wise: 2010, 2011, 2012 ... 2020 into 10 partitions. If this is 100% correct, please comment, someone. Don't feel bad; if I'm wrong, I'll make it correct. Thank you.

  • @DataSavvy
    3 years ago

    Partitioning and bucketing are both done on a column; the only difference is how the records are grouped. I think your statement is right, but you are viewing these concepts in an overly complex way.
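The distinction in this exchange can be made concrete with a toy example (hypothetical data): partitioning derives directory names from the column's distinct values, while bucketing fixes the number of files up front and routes rows to them by hash of the column:

```python
# Toy illustration (hypothetical data) of the on-disk difference:
# partitioning -> one directory per distinct value of the partition column;
# bucketing    -> a FIXED number of files, rows routed by hash of the column.
rows = [
    {"name": "amit", "country": "india", "age": 20},
    {"name": "bela", "country": "india", "age": 35},
    {"name": "chen", "country": "china", "age": 20},
    {"name": "dana", "country": "usa",   "age": 41},
]

# Partition by country: directory names come from the column's VALUES
partitions = {}
for r in rows:
    partitions.setdefault(f"country={r['country']}", []).append(r)

# Bucket by age into 2 buckets: file count is fixed, values are hashed
NUM_BUCKETS = 2
bucket_files = {}
for r in rows:
    bucket_files.setdefault(f"bucket_{r['age'] % NUM_BUCKETS}", []).append(r)

print(sorted(partitions))    # grows with cardinality: 3 dirs for 3 countries
print(sorted(bucket_files))  # always at most NUM_BUCKETS files
```

So both operate on a column, as the reply says; the difference is that partition count follows the data's distinct values while bucket count is chosen in advance.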

  • @DataSavvy
    3 years ago

    Thanks Rajlakshmi :)

  • @kaladharnaidusompalyam851
    3 years ago

    What kinds of problems will we face when there are a lot of small files in Hadoop? My answer: Hadoop is meant for handling a small number of large files; it won't give efficient results for lots of small files, because there is a disk seek for each file to fetch a record. With lots of small files this adds up and hurts performance, and moreover the metadata also increases.

  • @DataSavvy
    3 years ago

    That's right :) There will be a few more issues; please see the pinned message.

  • @sandipsawant7525
    3 years ago

    Thanks for this video. One question: in which kinds of cases do we need to use only bucketing, and how does the query search happen then? Thanks again 🙏

  • @DataSavvy
    3 years ago

    When partitioning on a column would create small files, use bucketing without partitioning. Also, before doing a sort-merge join you can create bucketed tables and improve the performance of the join.
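Why pre-bucketing helps a sort-merge join, sketched in plain Python (a simplified stand-in hash; real Spark uses its own hash function and merges the sorted runs rather than nested loops): if both tables are bucketed on the join key with the same bucket count, matching keys are guaranteed to land in same-numbered buckets, so bucket i only ever joins with bucket i and no shuffle across buckets is needed.

```python
NUM_BUCKETS = 3

def bucketize(table, key_index):
    # route rows by a stand-in hash of the join key, then sort each
    # bucket on the key, as a bucketed+sorted table is stored on disk
    buckets = [[] for _ in range(NUM_BUCKETS)]
    for row in table:
        buckets[row[key_index] % NUM_BUCKETS].append(row)
    return [sorted(b) for b in buckets]

users  = [(1, "amit"), (2, "bela"), (3, "chen"), (4, "dana")]
orders = [(2, "book"), (4, "pen"), (2, "lamp")]

u_buckets = bucketize(users, 0)
o_buckets = bucketize(orders, 0)

# Same bucket count + same key on both sides => bucket i pairs only
# with bucket i, so the join needs no shuffle across buckets.
joined = []
for ub, ob in zip(u_buckets, o_buckets):
    for uid, name in ub:
        for oid, item in ob:
            if uid == oid:
                joined.append((uid, name, item))

print(sorted(joined))
```

In real Spark the expensive part of a sort-merge join is the shuffle-and-sort of both sides; writing both tables pre-bucketed and pre-sorted on the join key lets the planner skip that step, which also answers the "how and why" question asked further down this thread.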

  • @sandipsawant7525
    3 years ago

    @DataSavvy Thank you, sir, for the answer. If I use 4 buckets, when I run a SELECT query will it go to only one specific bucket, or will it search all buckets? In partitioning we have folders named by value; in the case of bucketing, how does the query know which bucket to search?

  • @AtifImamAatuif
    3 years ago

    @sandipsawant7525 It will use the hash value of the search item and go to the bucket which matches that hash value.

  • @sandipsawant7525
    3 years ago

    @AtifImamAatuif Thanks

  • @ayushjain139
    3 years ago

    @DataSavvy "Before doing a sort-merge join you can create bucketed tables and improve the performance of the join" - kindly explain how and why?

  • @dheemanjain8205
    6 months ago

    Partitioning is the same as GROUP BY and bucketing is the same as a range.

  • @DataSavvy
    6 months ago

    Hi, it's actually different...

  • @mohitmehta3788
    3 years ago

    If we want to query the table for country = India and age = 20, now that we have created the new bucketed table, do we query the bucketed table or the initial table? A little lost here.

  • @DataSavvy
    3 years ago

    You will query the bucketed table :)

  • @gyan_chakra
    2 years ago

    Sir better quality is not available for this video. Please fix it.

  • @DataSavvy
    2 years ago

    Hi Bhumitra... I am working on fixing this.

  • @ADY_SR
    8 months ago

    The volume would increase if we have small files. Volume can come as a lot of small files or a few large files; both are a no-no.

  • @sivakrishna3413
    3 years ago

    I want to learn Spark and PySpark. Are you providing any training?

  • @DataSavvy
    3 years ago

    Hi Siva... I am not currently running any online training... Let me look into this prospect.

  • @GreatIndia1729
    1 year ago

    If we have a large number of small files, then the number of I/O operations (like opening and closing files) will increase. This is a performance issue.