Spark Executor Core & Memory Explained

Science & Technology

#apachespark #bigdata
Big Data Integration Book - bit.ly/3ipIlBx
Spark Memory Calculation Tamil - • Spark [Executor & Driv...
Spark Memory Calculation English - • Spark [Executor & Driv...
Video Playlist
-----------------------
Big Data Full Course English - bit.ly/3hpCaN0
Big Data Full Course Tamil - bit.ly/3yF5uVD
Big Data Shorts in Tamil - bit.ly/3v2aL8p
Big Data Shorts in English - bit.ly/3tVEs9T
Hadoop in Tamil - bit.ly/32k6mBD
Hadoop in English - bit.ly/32jle3t
Spark in Tamil - bit.ly/2ZzWAJN
Spark in English - bit.ly/3mmc0eu
Hive in Tamil - bit.ly/2UQVUgv
Hive in English - bit.ly/372nCwj
NOSQL in English - bit.ly/2XtU07B
NOSQL in Tamil - bit.ly/2XVLLjP
Scala in Tamil : goo.gl/VfAp6d
Scala in English: goo.gl/7l2USl
Email: atozknowledge.com@gmail.com
LinkedIn : / sbgowtham
Instagram: / bigdata.in
KZread channel link
/ atozknowledgevideos
Website
codewithgowtham.blogspot.com
github.com/atozknowledge
Technology in Tamil & English

Comments: 32

  • @rajasaroj2297
    2 years ago

    Thanks so much; it was a really simple and excellent explanation of those configs.

  • @CrashLaker
    2 years ago

    Hi! Great content! I'm wondering how the YARN container vCPU and memory sizes work with executors.

  • @akberj4544
    2 years ago

    Very useful video, Anna. Thanks so much! Anna, I request you to please make one video on a real-time project as done in industry, and as a continuation, another video on what sort of questions we get about that same real-time project in interviews. Please, Anna, please make a video on this. Thanks in advance.

  • @RP-sx6kf
    5 months ago

    Thank you very much for the detailed explanation; it gave a very good understanding of how these properties help in running a Spark job. Really appreciate your help in educating the tech community 👏👏

  • @ranitdey5829
    2 years ago

    This is great. Thanks!

  • @mohans3143
    2 years ago

    I have a 250 GB file to process and I used dynamic allocation. When I try to run the job, it fails with the error "job aborted due to stage failure". How do I fix this issue?

  • @cognologix-data-ai-engg
    a year ago

    Superb explanation, bro. Thanks a lot!

  • @mahak8914
    a year ago

    Hi, is it possible to create multiple executors on my personal laptop, which has 6 cores and 16 GB RAM?

  • @praketasaxena
    a year ago

    Thank you, Sir! Namaskaaram!

  • @subhadeepkoley2592
    2 years ago

    Nice explanation 😊✌️

  • @thenaughtyanil
    a year ago

    Can we use a SparkSession on a worker node? I'm facing an issue with accessing the SparkSession on worker nodes. Please help.

  • @aarthiravindrakumar6153
    2 years ago

    Love your content 😊

  • @RohitSaini52945
    2 years ago

    You have great teaching skills. Kudos!

  • @w0nderw0manjy
    a year ago

    Good explanation 👌

  • @AkshayKangude-nw1xh
    a year ago

    What configuration will be required for 250 GB of data?

  • @shajeep9170
    2 years ago

    nice explanation

  • @praptijoshi9102
    4 months ago

    Just Amazing

  • @souhailaakrikez1740
    6 months ago

    Do executors themselves run in parallel in Spark, or is it just the tasks within them?

  • @sahilll7694
    2 years ago

    How does Spark get its metadata?

  • @prabas5646
    2 months ago

    Clearly explained

  • @nagamanickam6604
    3 months ago

    Thank you

  • @ardavanmoinzadeh801
    2 years ago

    Can you explain why Spark spills to disk and what causes this? I understand that in a wide transformation or a groupByKey statement, when the data is too big to fit in memory, Spark has no choice but to spill it to disk; my question is whether we can minimize this with any performance tuning like bucketing, map-side joins, etc.
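    A minimal Scala sketch of the map-side (broadcast) join idea raised in this question; broadcasting the small side avoids the shuffle that causes spilling in the first place. The table names and paths below are assumptions for illustration only.

        // Paste into spark-shell, where `spark` is already defined.
        import org.apache.spark.sql.functions.broadcast

        val orders    = spark.read.parquet("/data/orders")     // large fact table (hypothetical path)
        val countries = spark.read.parquet("/data/countries")  // small lookup table (hypothetical path)

        // broadcast() ships the small table to every executor, so the join is done
        // map-side and the large table is never shuffled -- removing the main source
        // of shuffle spill for this join.
        val joined = orders.join(broadcast(countries), Seq("country_code"))
        joined.write.parquet("/data/orders_enriched")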

  • @sandip9901
    a year ago

    We can increase the number of shuffle partitions, and we can also adopt the salting technique to increase the number of unique keys and raise the cardinality to avoid skew. If none of that works, we can increase the executor cores or memory.
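    A minimal Scala sketch of the salting technique described in this reply; the input path, column names, and bucket count are assumptions for illustration only.

        // Paste into spark-shell, where `spark` is already defined.
        import org.apache.spark.sql.functions._

        val events      = spark.read.parquet("/data/events")   // hypothetical skewed input
        val saltBuckets = 16                                    // tuning knob, not a fixed rule

        // Step 1: add a random salt so one hot user_id is spread over `saltBuckets` groups.
        val salted = events.withColumn("salt", (rand() * saltBuckets).cast("int"))

        // Step 2: partial aggregation on (user_id, salt) -- the skewed key now lands
        // on many shuffle partitions instead of a single overloaded reducer.
        val partial = salted.groupBy(col("user_id"), col("salt")).agg(count("*").as("cnt"))

        // Step 3: a final, much smaller aggregation on user_id alone merges the partials.
        val result = partial.groupBy(col("user_id")).agg(sum(col("cnt")).as("cnt"))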

  • @brightinstrument1621
    a year ago

    we can increase the partition size

  • @prabhudevbm2921
    6 months ago

    thanks

  • @Amarjeet-fb3lk
    2 months ago

    If the number of cores is 5 per executor, then at shuffle time Spark by default creates 200 partitions. How will those 200 partitions be created if the number of cores is less, given that 1 partition will be stored on 1 core? Suppose my config is 2 executors, each with 5 cores. Now how will it create 200 partitions if I do a group-by operation? There are only 10 cores, and 200 partitions need to be held on them, right? How is that possible?
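    A hedged sketch of what happens here: the 200 shuffle partitions are not all held on cores at once. Spark runs one task per partition, and with 2 x 5 = 10 task slots the 200 shuffle tasks simply execute in roughly 20 waves. The config values below mirror the example in the question.

        import org.apache.spark.sql.SparkSession

        object ShuffleWavesSketch {
          def main(args: Array[String]): Unit = {
            // 2 executors x 5 cores = 10 task slots; 200 shuffle partitions are
            // processed in ~200 / 10 = 20 waves, one partition per task at a time.
            val spark = SparkSession.builder()
              .appName("shuffle-waves-sketch")
              .config("spark.executor.instances", "2")        // values from the question
              .config("spark.executor.cores", "5")
              .config("spark.sql.shuffle.partitions", "200")  // default; often tuned to ~2-3x total cores
              .getOrCreate()
            spark.stop()
          }
        }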

  • @swapnilpatil6986
    3 months ago

    Can we say that cores are actually the available threads in Spark, since a core can run multiple tasks? So it's not always one core for one task; a core can multitask. Can you confirm this?
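    A hedged sketch of the relationship asked about here: an executor "core" in Spark is a concurrent task slot (a thread), and each task occupies spark.task.cpus slots (1 by default), so one slot runs one task at a time while the executor as a whole multitasks. The values below are assumed examples.

        import org.apache.spark.sql.SparkSession

        object TaskSlotsSketch {
          def main(args: Array[String]): Unit = {
            // Concurrent tasks per executor = spark.executor.cores / spark.task.cpus.
            // With these assumed values, each executor runs up to 4 tasks at once,
            // and each slot (thread) works on exactly one task / partition at a time.
            val spark = SparkSession.builder()
              .appName("task-slots-sketch")
              .config("spark.executor.cores", "4")  // assumed example value
              .config("spark.task.cpus", "1")       // default
              .getOrCreate()
            spark.stop()
          }
        }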

  • @ThePrasanna77
    2 years ago

    Can you please explain this video in Tamil? It will be very helpful for me. Thank you.

  • @ultimo8458
    8 months ago

    I have applied 4x memory for each core for a 5 GB file, but no luck. Can you please help me resolve this issue?
    Road map:
    1) Find the number of partitions --> 5 GB (5120 MB) / 128 MB = 40
    2) Find the CPU cores for maximum parallelism --> 40 cores, one per partition
    3) Find the maximum allowed CPU cores per executor --> 5 cores per executor on YARN
    4) Number of executors = total cores / executor cores --> 40 / 5 = 8 executors
    Amount of memory required, road map:
    1) Find the partition size --> 128 MB by default
    2) Assign a minimum of 4x memory for each core --> what do I apply here???????
    3) Multiply it by executor cores to get the executor memory --> ????
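    A worked Scala sketch of the remaining two memory steps, following the "4x the partition size per core" rule of thumb quoted in the roadmap above; the numbers are illustrative, and the overhead line assumes YARN's default of max(384 MB, 10% of executor memory).

        object ExecutorMemorySketch {
          def main(args: Array[String]): Unit = {
            val fileMb       = 5 * 1024                    // 5 GB file = 5120 MB
            val partitionMb  = 128                         // default partition size
            val partitions   = fileMb / partitionMb        // step 1: 5120 / 128 = 40
            val coresPerExec = 5                           // step 3: YARN rule of thumb
            val executors    = partitions / coresPerExec   // step 4: 40 / 5 = 8 executors

            val memPerCoreMb = 4 * partitionMb             // memory step 2: 4 x 128 = 512 MB per core
            val execMemMb    = memPerCoreMb * coresPerExec // memory step 3: 512 x 5 = 2560 MB (~2.5 GB)
            val overheadMb   = math.max(384, (execMemMb * 0.10).toInt) // YARN memoryOverhead default

            println(s"partitions=$partitions, executors=$executors, " +
              s"executor-memory=${execMemMb}m, overhead=${overheadMb}m")
            // Roughly: spark-submit --num-executors 8 --executor-cores 5 --executor-memory 2560m
          }
        }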

  • @NAJEEB111
    2 years ago

    Bro, please, we want projects on big data.

  • @dataengineeringvideos
    2 years ago

    Sure, I will make videos on real-time projects.

  • @bhargavhr8834
    2 years ago

    Gold
