PySpark Tutorial 3 | Pandas vs PySpark || What is RDD in Spark || Features of RDD

#RanjanSharma
This is the third video, covering the differences between Pandas and PySpark and a complete understanding of RDD.
Covering the topics below:
What is PySpark?
Why PySpark when we already have Pandas, a powerful API, and the differences between them
What is RDD and how does it process data?
Important features of RDD
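One of the RDD features covered in the video is lazy evaluation: transformations only describe work, and nothing runs until an action is called. A minimal conceptual sketch of that idea in plain Python (not actual PySpark; the helper names `rdd_map`, `rdd_filter`, and `collect` are made up for illustration):

```python
# Conceptual sketch only (plain Python, not PySpark): RDD transformations
# like map/filter are lazy -- nothing executes until an action runs.
# Python generators give the same deferred behavior.

def rdd_map(func, data):
    # "Transformation": builds a generator, does no work yet.
    return (func(x) for x in data)

def rdd_filter(pred, data):
    # Another lazy "transformation": also just builds a generator.
    return (x for x in data if pred(x))

def collect(data):
    # "Action": only here does the whole pipeline actually execute.
    return list(data)

numbers = range(1, 6)                              # the "dataset": 1..5
squared = rdd_map(lambda x: x * x, numbers)        # lazy, nothing computed
evens = rdd_filter(lambda x: x % 2 == 0, squared)  # still lazy
result = collect(evens)                            # triggers computation
print(result)  # [4, 16]
```

In real PySpark the equivalent would be chained `rdd.map(...).filter(...)` calls followed by `rdd.collect()`, with the work distributed across the cluster instead of run in one generator pipeline.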
Stay tuned for the Part 4 video on installing Apache Spark and PySpark in a local environment.
BIG DATA IS A PROBLEM and HADOOP IS A SOLUTION
Hit the Like button if you really liked the video.
The PPT is uploaded to the Google Drive link and GitHub link below.
Python Playlist: kzread.info?list...
AI PlayList: kzread.info?list...
Join the WhatsApp group for AI: chat.whatsapp.com/IB6fQBEcZAd...
Telegram Group : www.t.me/@MachineLearningIndia
Subscribe to my channel: / ranjansharma
Google Drive: drive.google.com/drive/u/1/fo...
Github : github.com/iamranjan/youtube-...
*** Connect with me on below Channels ***
LinkedIn: / iamranjan
Medium : / iamranjansharma
Instagram : / iamranjan.sharma
Email : iamranjan.sh@gmail.com
Keep Practicing :-)
Happy Learning !!
#MachineLearning #Python #artificialIntelligence #dataScientist #DeepLearning #intelligence #BuisnessIntelligence #Ranjan #RanjanSharma
#Pyspark #SPark #ApachePyspark #apacheSpark #hadoop #bigData #MAPREDUCE #PysparkMachineLearning

Comments: 26

  • @fahdelalaoui3228 (2 years ago)

    That's what I call quality content. Very logically presented and instructed.

  • @HamdiBejjar (2 years ago)

    Excellent Content, Thank you Ranjan.. Subscribed :D

  • @neerajjain2138 (3 years ago)

    Very neat and clear explanation. Thank you so much!! **SUBSCRIBED** One more thing: how can someone dislike anyone's efforts to produce such helpful content? Please respect the hard work.

  • @RanjanSharma (3 years ago)

    Thanks, so nice of you :) Keep sharing and exploring, bro :)

  • @deepaktamhane8373 (3 years ago)

    Great, sir... happy the concepts were cleared up.

  • @RanjanSharma (3 years ago)

    Keep watching.. thanks, bro. Keep sharing and exploring :)

  • @sukhishdhawan (3 years ago)

    Excellent explanation, strong hold on the concepts.

  • @RanjanSharma (3 years ago)

    Glad you liked it! thank you :)

  • @sridharm8550 (a year ago)

    Nice explanation

  • @mohamedamineazizi3360 (3 years ago)

    Great explanation.

  • @RanjanSharma (3 years ago)

    Glad you think so! Buddy keep exploring and sharing with your friends :)

  • @JeFFiNat0R (3 years ago)

    Great, thank you for this explanation.

  • @RanjanSharma (3 years ago)

    Thanks :) Keep Exploring :)

  • @JeFFiNat0R (3 years ago)

    @RanjanSharma I just got a job offer for a data engineer working with Databricks Spark. Your video definitely helped me in the interview. Thank you again.

  • @RanjanSharma (3 years ago)

    @JeFFiNat0R Glad I could help you 😊

  • @dhanyadave6146 (2 years ago)

    Hi Ranjan, thank you for the great series and excellent explanations. I have two questions: 1) In the video at 5:05, you mention that PySpark requires a cluster to be created. However, we can create Spark Sessions locally as well if I am not mistaken. When we run spark locally, could you please explain how PySpark would outperform pandas? I am confused about this concept. You can process data using various cores locally, but your ram size will not change right? 2) In the previous video you mentioned that Apache Spark computing engine is much faster than Hadoop Map Reduce because Hadoop Map Reduce reads data from the hard disk memory during data processing steps, whereas Apache Spark loads the data on the node's RAM. Would there be a situation where this can be a problem? For example, if our dataset is 4TB and we have 4 nodes in our cluster and we assign 1TB to each node. How will an individual node load 1TB data into RAM? Would we have to create more nested clusters in this case?

  • @universal4334 (a year ago)

    I have the same doubt. How would Spark store TBs of data in RAM?

  • @guitarkahero4885 (3 years ago)

    Content-wise, great videos.. the way of explaining can be improved.

  • @RanjanSharma (3 years ago)

    Glad you think so! Thanks :) Keep exploring :)

  • @naveenchandra7388 (2 years ago)

    @9:19 RDD in-memory computation? Pandas does in-memory too, doesn't it? Does RDD also do in-memory? Maybe I lost the point somewhere; can you explain this subtle difference, please?

  • @TK-vt3ep (3 years ago)

    You explain things too fast. Could you please slow down a bit? BTW, good work.

  • @RanjanSharma (3 years ago)

    Thanks for your visit.. Keep exploring :) In my later videos, I have slowed the pace.

  • @AkashShahapure (a year ago)

    The audio is low compared to the previous 2 videos.

  • @loganboyd (4 years ago)

    Why are you still using RDDs and not the Spark SQL Dataframe API?

  • @RanjanSharma (4 years ago)

    This video was just an explanation of RDDs. In the next video, I will explain the Spark SQL DataFrame API.

  • @kritikalai8204 (2 years ago)

    **gj**