Spark Architecture in 3 minutes | Spark components | How Spark works
Science and technology
Spark is one of the most prominent and widely used processing frameworks in the Big Data world. This video explains the core components and architecture of Spark with a real-world example in just 3 minutes.
Comments: 124
Excellent way of explaining things in a most simplified manner. Looking forward to more videos on Spark.
@BigDataThoughts
3 years ago
Thanks Satish
Finally it became clear to me, after reading here and there. Thank you.
Ma'am, you taught it amazingly 😍😍 a very short lecture, but perfect... keep it up
I haven't seen such a lucid way of explaining a concept this complex. Great work!
@BigDataThoughts
3 years ago
Thanks Bitthal
very simple and nice explanation. Thank you for posting this video
finally, a video that simplifies spark, amazing, keep the videos coming please!!
@BigDataThoughts
A year ago
Thanks
This is the only video on KZread which clarified my doubts. Thanks!!
@BigDataThoughts
2 years ago
Thanks showbhik
Great video, I have learned a lot. Thank you
Very nice video and it covers everything related to the Spark architecture in just 5 minutes. Keep sharing new videos.
@BigDataThoughts
2 years ago
Thanks sandeep
Simple example and easy way of explaining an important concept.. thanks!
@BigDataThoughts
3 years ago
Thanks seemanthini
Nice explanation.. Pls keep making videos 🎥 like this
It's the best video I've seen so far on spark architecture..awesome..keep going..
@BigDataThoughts
3 years ago
Thanks
Example was very good for beginners
Very Informative, with full of clarity, Thank you.
@BigDataThoughts
11 months ago
thanks
A lot of information presented in a simple way for everyone to understand. 👍
@BigDataThoughts
3 years ago
Thanks sriganesh
This made Spark architecture far easier to understand. Thank u ma'am, you are great
@BigDataThoughts
2 years ago
Thanks Ashish
Just wow, a very simple explanation of a complex cluster overview.. Thanks.
@BigDataThoughts
3 months ago
Thanks
Amazing content ! Really appreciate the understanding and approach to explain...looking fwd to more
@BigDataThoughts
A year ago
Thanks Aditya
Very clearly explained, really appreciate all your efforts
@BigDataThoughts
2 years ago
Thanks Dhruv
This video is better than 1 hour course on spark . Thanks
@BigDataThoughts
2 years ago
Thanks puneet
Simple and easy to understand. Thanks. I like the doodle way of explaining concepts :)
@BigDataThoughts
3 years ago
Thanks sheik
Great video
I wish I could give 1000 likes. You’re an excellent teacher!
@BigDataThoughts
3 months ago
Thanks
Beautifully explained short video 👏👏
@BigDataThoughts
3 years ago
thanks lakshmikanth
Your videos really helped me clear my interviews and get the job; now I need some support for my new job. Could you post videos on how PySpark class objects work in the backend, how Parquet/CSV reading and writing works in a distributed environment (e.g. which data is read by each executor), how to do pagination of some order data, etc.? If possible, provide your LinkedIn profile URL or suggest some way to connect.
just love the way u explained it ;) Really appreciated mam.
@BigDataThoughts
2 years ago
Thanks minakshee
Amazing!
Nice explanation. For 1 GB of input data in a batch processing job, how can we decide the cluster size, number of nodes, or number of executors? Could you please explain? Thanks, ma'am
How did you make such a good visual explanation? Which tool did you use to draw the sketches? Pls guide 🙏
Great, nice explanation
Thank you mam for excellent way of teaching Spark.
@BigDataThoughts
A year ago
Thanks
The best explanation ever, great work!
@BigDataThoughts
11 months ago
Thanks
The way of explanation is... just amazing
@BigDataThoughts
11 months ago
Thanks
Very helpful, it would be great if you could take an example and illustrate how the data chunking happens
Thank you so.................. much for this
Thank you ma'am..👍
Vividly explained. Thanks mam
@BigDataThoughts
3 years ago
Thanks saurabh
Saw this video.. content looks promising... great job
@BigDataThoughts
A year ago
Thanks
Thank you for such a goooood explanation :D
@BigDataThoughts
3 years ago
thanks sheereen
Great video! I have one question though. Is my understanding correct that each student who got a coin bag is like how data is partitioned, i.e. 1 student = 1 data partition?
@BigDataThoughts
3 years ago
1 data partition is operated on by 1 slot/task
Excellent example 👏
@BigDataThoughts
9 months ago
Thanks
VERY HELPFUL BEST EXPLANATION
@BigDataThoughts
A year ago
Thanks
Thank you for this video.
@BigDataThoughts
4 days ago
Thanks
Really informative, Shreya. One quick question: stages will run sequentially depending on the use case, and tasks will run in parallel?
@BigDataThoughts
3 years ago
Thanks Madhu. Stages may run sequentially or in parallel, depending on whether they have dependencies on each other. Typically a stage has multiple tasks running in parallel, each on a different set of data but performing the same set of operations that the stage contains.
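The reply above can be sketched in plain Python (a toy simulation, not Spark code — the partitions and stage functions are hypothetical): stage 1's tasks run in parallel on their own partitions, and stage 2 starts only after every stage-1 task has finished, because it depends on stage 1's output (a shuffle boundary in Spark).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data, mimicking the video's coin-counting style of example.
partitions = [["a", "b", "a"], ["b", "b"], ["a"]]

def stage1_task(part):
    # Stage 1: each task counts items in its own partition, in parallel.
    counts = {}
    for w in part:
        counts[w] = counts.get(w, 0) + 1
    return counts

with ThreadPoolExecutor() as pool:
    stage1_results = list(pool.map(stage1_task, partitions))

# Stage 2 depends on stage 1's output, so it can only start after
# all stage-1 tasks have completed.
total = {}
for counts in stage1_results:
    for w, n in counts.items():
        total[w] = total.get(w, 0) + n

print(total)  # {'a': 3, 'b': 3}
```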
Extremely good explanation
@BigDataThoughts
A year ago
Thanks mou
Great explanation, Ma'am. Please add more videos and arrange them in sequence under a playlist
@BigDataThoughts
3 years ago
Thanks Chetan. Yes, there are more videos coming up. Stay tuned.
This was marked to know that I'm here on 3/10/2023
Nice video. Really liked it. So you said one node can act as the driver. I want to know what the best practice is here. I usually submit jobs by SSHing to the master node (at least in GCP Dataproc) and then submitting the job. So should I consider my master node as the driver? Is it right to do it that way?
@BigDataThoughts
2 years ago
To give an example using YARN: when you submit a Spark job in cluster mode, the container where the Application Master runs acts as the master node (driver), and the containers where the executor processes run the tasks act as the worker nodes. When the job is submitted, spark-submit first calls the resource manager, which in turn starts the Application Master, and from there the driver takes over.
one of the best
@BigDataThoughts
A year ago
Thanks vikas
Very Nice Explanation
@BigDataThoughts
3 years ago
Thanks Naresh
Superb delivery
@BigDataThoughts
2 years ago
Thanks Hemanth
Mam, I have one question. If Spark has to write data to a SQL database, and our data is spread across multiple worker nodes, is it the driver that establishes a single connection with the SQL DB, or do the worker nodes establish multiple parallel connections?
@BigDataThoughts
A year ago
When Spark writes data to a SQL database through its JDBC data source, the worker nodes establish the connections: each task writing a partition opens its own connection to the database, so the partitions are written in parallel. The driver coordinates the write process, but the data does not funnel through a single driver connection.
@vedantshirodkar
A year ago
@@BigDataThoughts Thank You Mam for the elaborated explanation.
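As a pure-Python sketch (not actual Spark/JDBC code) of that distributed write path: with Spark's JDBC writer, each partition's task opens its own connection and writes its own rows. `FakeConnection` and the sample rows below are hypothetical stand-ins.

```python
# Hypothetical partitions of rows to write.
partitions = [[("id", 1), ("id", 2)], [("id", 3)], [("id", 4), ("id", 5)]]

class FakeConnection:
    opened = 0  # counts how many connections were opened in total

    def __init__(self):
        FakeConnection.opened += 1
        self.rows = []

    def insert(self, row):
        self.rows.append(row)

def write_partition(rows):
    conn = FakeConnection()  # one connection per partition's writing task
    for row in rows:
        conn.insert(row)
    return len(conn.rows)

written = sum(write_partition(p) for p in partitions)
print(FakeConnection.opened, written)  # 3 5
```

Three partitions mean three connections opened, one per writing task, with all five rows written.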
Excellent video :)!
@BigDataThoughts
10 months ago
Thanks
Extraordinary mam
@BigDataThoughts
A year ago
Thanks harika
Very nice explanation.
@BigDataThoughts
2 years ago
Thanks Vidya
Great work ma'am
@BigDataThoughts
3 years ago
Thanks Shivratan
Amazing explanation mam 😊😊👍
@BigDataThoughts
A year ago
Thanks ravi
One executor has one core and 2 partitions assigned, so they will execute one by one. My question is: if there are 10 tasks, will these tasks execute in parallel or sequentially at the partition level?
@BigDataThoughts
2 years ago
A task operates on one partition of data, and tasks do run in parallel. If you have multiple cores, you can specify how many cores an executor will use. The number of concurrent tasks an executor can run equals the cores assigned to it.
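A toy illustration of that last point (pure Python, not Spark — `EXECUTOR_CORES`, the task count, and the sleep are hypothetical stand-ins): a thread pool whose worker count plays the role of the executor's cores, so peak task concurrency never exceeds it.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

EXECUTOR_CORES = 2  # analogous to an executor's assigned cores
NUM_TASKS = 6       # more tasks than cores, so they run in waves

current = peak = 0
lock = threading.Lock()

def task(_):
    global current, peak
    with lock:
        current += 1
        peak = max(peak, current)  # track the highest concurrency seen
    time.sleep(0.01)               # simulate work on one partition
    with lock:
        current -= 1

# The pool's worker count caps concurrency, just as an executor can run
# at most as many concurrent tasks as it has cores.
with ThreadPoolExecutor(max_workers=EXECUTOR_CORES) as pool:
    list(pool.map(task, range(NUM_TASKS)))

print(peak)  # never exceeds EXECUTOR_CORES
```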
good explanation...
@BigDataThoughts
2 years ago
Thanks
❤
Very nice 👍
@BigDataThoughts
4 months ago
Thanks
@hlearningkids
4 months ago
@@BigDataThoughts Did you also explain BigQuery in this style? One improvement for this video could be a slower summary. Please don't be hurt by my comment - you did really well in the video. Excellent explanation.
Thanks a lot mam
@BigDataThoughts
A year ago
Thanks Sandy
@SandyRocker
A year ago
Subscribed ✅
Can you please turn on the subtitles? thank you
could you pls explain more on partitions
@BigDataThoughts
3 years ago
The dataset is divided into partitions, and each partition is the unit a task works on. That is the input to a task.
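A minimal pure-Python sketch of that idea (not Spark's actual partitioner — the chunking scheme and data are illustrative): split a dataset into roughly equal partitions, then run the same function on each partition, the way a stage's tasks do.

```python
import math

def partition(dataset, num_partitions):
    """Split a dataset into roughly equal chunks, like Spark partitions."""
    size = math.ceil(len(dataset) / num_partitions)
    return [dataset[i:i + size] for i in range(0, len(dataset), size)]

data = list(range(10))
parts = partition(data, 4)
print(parts)          # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]

# Each partition is the input to one task; every task runs the same
# function on its own chunk of the data.
results = [sum(p) for p in parts]
print(sum(results))   # 45
```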
Can you make it slower to follow? I felt it was too fast to get to know the terms.
Can you help in understanding RDDs?
@BigDataThoughts
2 years ago
RDDs are resilient distributed datasets, and they are the lowest-level abstraction in Spark. Check this video - kzread.info/dash/bejne/dqmLqaSGdpqngsY.html
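As a toy sketch of the two ideas behind that reply (pure Python, not Spark's API — `ToyRDD` is a hypothetical class): an RDD-like object wraps a partitioned dataset and records transformations lazily, doing the actual work only when an action such as `collect()` is called.

```python
class ToyRDD:
    """A toy RDD: partitioned data plus lazily recorded transformations."""

    def __init__(self, partitions, ops=None):
        self.partitions = partitions
        self.ops = ops or []          # pending transformations (lazy)

    def map(self, f):
        # Transformation: just records f; no computation happens yet.
        return ToyRDD(self.partitions, self.ops + [f])

    def collect(self):
        # Action: only now are the transformations actually applied.
        out = []
        for part in self.partitions:
            for x in part:
                for f in self.ops:
                    x = f(x)
                out.append(x)
        return out

rdd = ToyRDD([[1, 2], [3, 4]]).map(lambda x: x * 10)
print(rdd.collect())  # [10, 20, 30, 40]
```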
At the start of the video I was so happy seeing all the diagrams... later I got fully confused, felt it was complicated, and didn't understand it well 😢
Great video....!! Appreciate your efforts!🎉 One question: where does a cluster manager fit into this architecture? What role does it play in comparison with the driver?
@BigDataThoughts
8 months ago
The cluster manager's job is to provide resources for job execution, e.g. YARN, Mesos, etc. The driver is the one controlling the overall job execution and which executors take part in the job.
@vibhad-cv4sf
8 months ago
@@BigDataThoughts ohh okay!! Thank you!!
Good playlist for Spark kzread.info/head/PL1RS9FR9qIPEAtSWX3rKLVcRWoaBDqVBV
@BigDataThoughts
3 months ago
Thanks
18/april/2024