How to handle Data skewness in Apache Spark using Key Salting Technique
Science & Technology
Handling Data Skewness using the Key Salting Technique. One of the biggest problems in parallel computational systems is data skewness. Data skew in Spark happens when you join on a key that is not evenly distributed across the cluster, causing some partitions to become very large and preventing Spark from processing the data in parallel.
GitHub Link - github.com/gjeevanm/SparkData...
Content By - Jeevan Madhur [LinkedIn - / jeevan-madhur-225a3a86 ]
Editing By - Sivaraman Ravi [LinkedIn - / sivaraman-ravi-791838114 ]
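The salting idea from the video can be sketched in plain Python, without Spark, to show why it spreads a hot key across several partitions while keeping the join correct. This is a minimal illustration with made-up table data and helper names, not the code from the video (which is in Scala/Spark):

```python
import random

# Number of salt buckets; the video's Scala code uses a similar small constant.
SALT_BUCKETS = 3

def salt_left(rows, seed=42):
    """Append a random salt 0..SALT_BUCKETS-1 to each join key of the
    large (skewed) table, e.g. 'x' -> 'x_2'."""
    rng = random.Random(seed)
    return [(f"{key}_{rng.randrange(SALT_BUCKETS)}", value)
            for key, value in rows]

def explode_right(rows):
    """Replicate every row of the small table once per salt value,
    so each salted left key still finds exactly one match."""
    return [(f"{key}_{salt}", value)
            for key, value in rows
            for salt in range(SALT_BUCKETS)]

def join(left, right):
    """Plain hash join on the (salted) key, standing in for Spark's join."""
    index = {}
    for key, value in right:
        index.setdefault(key, []).append(value)
    return [(key, lv, rv)
            for key, lv in left
            for rv in index.get(key, [])]

# A skewed left table: key 'x' dominates, which would overload one partition.
left = [("x", i) for i in range(6)] + [("y", 100)]
right = [("x", "desc_x"), ("y", "desc_y")]

result = join(salt_left(left), explode_right(right))
# Every left row still joins exactly once, but the 'x' rows are now
# spread across keys x_0, x_1, x_2 instead of one hot key.
```

In Spark the same effect is achieved with `concat(col, lit("_"), floor(rand() * N))` on the large table and an `explode` of a salt-literal array on the small one.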
Comments: 26
This is really great, a crystal-clear explanation... thanks a lot for sharing and spreading knowledge!
Excellent video..thanks for the explanation and sharing the code
Well, I must say, thanks a lot... I have been searching for this kind of explanation.
Excellent. Thank you
Amazing video..!!
Hi Sir... Perfect, great explanation, thank you for your effort. I have a doubt: after the join, the salting step should be reversed, i.e. the key should be unsalted and only then should the group-by be applied, right?
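The step this comment asks about can be sketched in plain Python (hypothetical partial counts, not data from the video): after aggregating on the salted key, strip the salt suffix and combine the partial results to get the final per-key totals.

```python
from collections import defaultdict

# Partial counts produced by a group-by on the *salted* key (made-up values).
salted_counts = {
    "x_0": 3, "x_1": 2, "x_2": 4,
    "y_0": 1,
}

# Second stage: remove the salt suffix and re-aggregate.
final = defaultdict(int)
for salted_key, count in salted_counts.items():
    original_key, _, _salt = salted_key.rpartition("_")  # "x_2" -> "x"
    final[original_key] += count

# final now holds {"x": 9, "y": 1}: the same totals an unsalted
# group-by would have produced, but computed without a hot partition.
```

In Spark this would typically be a second `groupBy` after stripping the salt with a string function such as `substring_index`.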
Excellent Description
amazing sir! thanks a lot
beautifully explained, thank you very much :)
Hey great video, could you also link the associated resources you referred to while making this video?
Amazing video.... How can we use the salting technique in PySpark for data skew?
Thanks, but if we have multiple columns as the key, how do we handle it?
Good work. It would be better if you showed the output of the salted dataframes and explained the UDF in more detail.
Great explanation, thanks for sharing this. I think there is an off-by-one error: you are using (0 to 3), which yields (0, 1, 2, 3), but the random number range will only produce (0, 1, 2).
But won't the join output be incorrect? In the previous scenario it would have joined with all the matching ids, but with the new salting method it joins only on the newly salted key. That seems weird.
Amazing video... however, I don't know Scala. Can you please give an example of how to implement the salting technique with Spark SQL queries? That would be a great help.
@jeevanmadhur3732
3 years ago
Will update the SQL query
@ashwinc9867
3 years ago
@@jeevanmadhur3732 waiting for the query
@balajia8376
2 years ago
@@ashwinc9867 did you get it?
best
Can you please explain how to take the random number count?
@jeevanmadhur3732
4 years ago
Hi Aravind, if I understand your question correctly, you want to take the count of the first data frame, where we append a random number:

var df1 = leftTable
  .withColumn(leftCol,
    concat(leftTable.col(leftCol), lit("_"), lit(floor(rand(123456) * 10))))

We can simply do df1.select(col("id")).count(), which gives the count of the first data frame's column. For more details, you can refer to: github.com/gjeevanm/SparkDataSkewness/blob/master/src/main/scala/com/gjeevan/DataSkew/RemoveDataSkew.scala
I have two questions. First: I think the visual presentation of table 2 after salting is wrong. Why don't you have z_2 and z_3 there? Also, why do you sometimes use capital letters? That's confusing. Second question: I don't get the benefit of key salting in general. How is it different from broadcasting your second table? Since you explode it, you end up sending the whole table to every executor anyway. No one can give an answer to this question.
Hi, is something missing in the code? I used your code, but it throws an exception for the lines below:

// join after eliminating data skewness
df3.join(
  df4,
  df3.col("id") df4.col("id")
).show(100, false)
}
@jeevanmadhur3732
3 years ago
Hi, thanks for highlighting this. There was a small issue with the checked-in join code, which I have fixed now. Please pull the latest code and try it out.
@NishaKumari-op2ek
3 years ago
@@jeevanmadhur3732 Thank you Jeevan, your videos help us a lot :)