Handling corrupted records in Spark | PySpark | Databricks

In this video I have talked about reading files with bad records in Spark, and about the read modes Spark provides for handling them.
Directly connect with me on:- topmate.io/manish_kumar25
Data:-
id,name,age,salary,address,nominee
1,Manish,26,75000,bihar,nominee1
2,Nikita,23,100000,uttarpradesh,nominee2
3,Pritam,22,150000,Bangalore,India,nominee3
4,Prantosh,17,200000,Kolkata,India,nominee4
5,Vikash,31,300000,,nominee5
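The header above has six columns, but rows 3 and 4 contain seven comma-separated values (their address spills into two fields, e.g. "Bangalore,India"), which is what makes them bad records. A minimal plain-Python sketch (stdlib only, a simulation of the behavior, not actual Spark code) of how Spark's three CSV read modes treat this sample:

```python
import csv, io

DATA = """id,name,age,salary,address,nominee
1,Manish,26,75000,bihar,nominee1
2,Nikita,23,100000,uttarpradesh,nominee2
3,Pritam,22,150000,Bangalore,India,nominee3
4,Prantosh,17,200000,Kolkata,India,nominee4
5,Vikash,31,300000,,nominee5
"""

def read_with_mode(text, mode="PERMISSIVE"):
    """Mimic Spark's CSV read modes against a fixed six-column schema.

    PERMISSIVE    -> keep every row; malformed ones fill a _corrupt_record field
    DROPMALFORMED -> silently drop malformed rows
    FAILFAST      -> raise on the first malformed row
    """
    rows = list(csv.reader(io.StringIO(text)))
    header, body = rows[0], rows[1:]
    ncols = len(header)
    out = []
    for raw in body:
        if len(raw) == ncols:                       # well-formed row
            out.append(dict(zip(header, raw)) | {"_corrupt_record": None})
        else:                                       # malformed row
            if mode == "FAILFAST":
                raise ValueError(f"Malformed record: {raw}")
            if mode == "DROPMALFORMED":
                continue
            rec = dict(zip(header, raw[:ncols]))    # PERMISSIVE: keep what fits
            rec["_corrupt_record"] = ",".join(raw)  # and stash the raw line
            out.append(rec)
    return out

print(len(read_with_mode(DATA, "PERMISSIVE")))    # 5 rows kept
print(len(read_with_mode(DATA, "DROPMALFORMED"))) # 3 rows survive
print([r["id"] for r in read_with_mode(DATA) if r["_corrupt_record"]])  # ['3', '4']
```

In real PySpark the equivalent switch is `.option("mode", ...)` on `spark.read`, with malformed rows surfacing in a `_corrupt_record` column when your explicit schema declares one.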
For more queries, reach out to me on my social media handles below.
Follow me on LinkedIn:- / manish-kumar-373b86176
Follow Me On Instagram:- / competitive_gyan1
Follow me on Facebook:- / manish12340
My Second Channel -- / @competitivegyan1
Interview series Playlist:- • Interview Questions an...
My Gear:-
Rode Mic:-- amzn.to/3RekC7a
Boya M1 Mic-- amzn.to/3uW0nnn
Wireless Mic:-- amzn.to/3TqLRhE
Tripod1 -- amzn.to/4avjyF4
Tripod2:-- amzn.to/46Y3QPu
camera1:-- amzn.to/3GIQlsE
camera2:-- amzn.to/46X190P
Pentab (Medium size):-- amzn.to/3RgMszQ (Recommended)
Pentab (Small size):-- amzn.to/3RpmIS0
Mobile:-- amzn.to/47Y8oa4 (You definitely shouldn't buy this one)
Laptop -- amzn.to/3Ns5Okj
Mouse+keyboard combo -- amzn.to/3Ro6GYl
21 inch Monitor-- amzn.to/3TvCE7E
27 inch Monitor-- amzn.to/47QzXlA
iPad Pencil:-- amzn.to/4aiJxiG
iPad 9th Generation:-- amzn.to/470I11X
Boom Arm/Swing Arm:-- amzn.to/48eH2we
My PC Components:-
intel i7 Processor:-- amzn.to/47Svdfe
G.Skill RAM:-- amzn.to/47VFffI
Samsung SSD:-- amzn.to/3uVSE8W
WD blue HDD:-- amzn.to/47Y91QY
RTX 3060Ti Graphic card:- amzn.to/3tdLDjn
Gigabyte Motherboard:-- amzn.to/3RFUTGl
O11 Dynamic Cabinet:-- amzn.to/4avkgSK
Liquid cooler:-- amzn.to/472S8mS
Antec Prizm FAN:-- amzn.to/48ey4Pj

Comments: 146

  • @manish_kumar_1
    @manish_kumar_1 1 year ago

    Directly connect with me on:- topmate.io/manish_kumar25

  • @maheshsadhanagiri3636
    @maheshsadhanagiri3636 3 months ago

    You're simply super. I am an Azure Solution Architect, but now I would like to start my journey with Data Engineering. I am very lucky that there is such a valuable and appreciable learning opportunity on your channel. You are really good, explaining concepts in an understandable way with execution. I am even recommending your videos to my colleagues and friends.

  • @bluerays5384
    @bluerays5384 4 months ago

    Very well explained, sir, thank you. Keep educating and sharing your knowledge and experience ❤❤❤

  • @maboodahmad7289
    @maboodahmad7289 1 month ago

    Very well explained. The way you first ask the questions and then explain simply. Thank you so much.

  • @kunalk3830
    @kunalk3830 1 year ago

    Manish bhai, thank you so much for this. The doubts I had are getting cleared on their own. I really appreciate these videos, bhai.

  • @Thakur3427
    @Thakur3427 1 year ago

    I am preparing for data engineer interviews from your videos 😊 Thank you. You are doing a great job.

  • @rishav144
    @rishav144 1 year ago

    Thanks Manish, very well explained. This Spark series is top notch.

  • @sairamguptha9988
    @sairamguptha9988 1 year ago

    Thanks much Manish for doing this amazing job.

  • @deepakchaudhary4118
    @deepakchaudhary4118 1 year ago

    Very nice, Manish bhai. I am eagerly waiting for the upcoming video ❤

  • @itsme-fe1vh
    @itsme-fe1vh 4 months ago

    Manish bhai, please continue explaining concepts the same way you are now. It will be crystal clear for everyone and also helpful for interviews. Overall excellent; thanks for your work and effort in making this content.

  • @hubspotvalley580
    @hubspotvalley580 9 months ago

    Manish bhai, You are the best teacher.

  • @ankitdhurbey7093
    @ankitdhurbey7093 1 year ago

    Great interview questions ❤

  • @debojyotihazra9571
    @debojyotihazra9571 7 months ago

    Hello Manish, I have been following your course for the last couple of days and so far I have covered 17 sessions of Spark theory and 7 sessions of Spark practicals. Thank you for all your efforts. Before this I purchased multiple courses on Python and PySpark, but I lost interest in each of them as they were monotonous. I'm actively looking for a job change and interviews are in the pipeline, and I gained confidence in PySpark after watching your videos. Thank you ❤.

  • @priyamehandi99
    @priyamehandi99 6 months ago

    Nice explanation. All the Spark sessions are helpful for me. Thank you, Manish sir.

  • @gchanakya2979
    @gchanakya2979 1 year ago

    Consistently following your videos, these videos are helping me a ton.

  • @Watson22j
    @Watson22j 1 year ago

    Bhaiya, this approach of teaching through interview questions is really great! 🙏

  • @anirbanadhikary7997
    @anirbanadhikary7997 1 year ago

    The best thing about this series is the potential interview questions. I can bet you will not find this on any other channel in the entire KZread. Also, we can make a separate document consisting of just these questions, which will be greatly beneficial during interview preparation.

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    Glad you enjoy it!

  • @kushchakraborty8013
    @kushchakraborty8013 2 months ago

    "khud Jake type Karo Ctrl + Enter marneke liar nahi phada raha huun" Thank you sir !!! ❤

  • @user-rh1hr5cc1r
    @user-rh1hr5cc1r 2 months ago

    Thank you bro. Big Hug🙂So many things to learn

  • @deepakchaudhary4118
    @deepakchaudhary4118 1 year ago

    Thanks Manish please upload upcoming video ASAP

  • @ADESHKUMAR-yz2el
    @ADESHKUMAR-yz2el 2 months ago

    Loved it, bhaiya. Thanks!

  • @omkarm7865
    @omkarm7865 1 year ago

    addicted to this channel

  • @HanuamnthReddy
    @HanuamnthReddy 5 months ago

    Awesome, bhai

  • @AMITKUMAR-np7el
    @AMITKUMAR-np7el 9 months ago

    You are the uncrowned king of data engineering... please keep making videos like this.

  • @manish_kumar_1

    @manish_kumar_1

    8 months ago

    I know very few things, bhai. I probably don't even know 1% yet. There is a lot to learn and do in DE.

  • @adarsharora6097
    @adarsharora6097 6 months ago

    Thanks Manish! I am studying late night after finishing office. Will make transition to DE soon. Thanks for the video!

  • @pritambiswas1023

    @pritambiswas1023

    5 months ago

    same here, cheers to DE .🤞

  • @dishant_22
    @dishant_22 10 months ago

    Nice tutorial!! I was trying to implement the same, but found that the "badRecordsPath" option is a Databricks-specific feature, and I was executing locally on my machine.

  • @Rakesh-if2tx
    @Rakesh-if2tx 1 year ago

    Thank you bhai...

  • @omkarm7865
    @omkarm7865 1 year ago

    you are best

  • @user-jv3lv2dn8k
    @user-jv3lv2dn8k 7 months ago

    Hi Manish, if we have an input CSV file and have not defined a manual schema, then in that case, to show corrupted records, do we have to manually define a schema with the corrupted-record column, or how do we handle that?

  • @ComedyXRoad
    @ComedyXRoad 2 months ago

    At 12:00, instead of creating the whole schema, can we create the new column using the withColumn function? Or do we need to create an explicit schema to handle bad records? Could you answer?

  • @HanuamnthReddy
    @HanuamnthReddy 5 months ago

    Manish, please share the bad record handling doc. 😊

  • @shubhamagarwal5932
    @shubhamagarwal5932 2 months ago

    When I tried the same on Databricks, I'm getting the other 3 records instead of the bad records.

  • @user-rh1hr5cc1r
    @user-rh1hr5cc1r 2 months ago

    12:13 After printing corrupt records, in my case only nominee2 and nominee3 came under the column, while in the video the whole details of ids 3 and 4 came. Is there any catch here? I followed the same approach.

  • @sukritisachan5773
    @sukritisachan5773 2 months ago

    I had a question: let's say we have a CSV file where the data itself contains commas, e.g. the address column. Can we apply some text wrapping in PySpark? The data can be like "Six Street,Ontario", so how can we handle this, because this is not a corrupted record?
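    On the quoting question above: a CSV parser that honors RFC-4180-style quoting keeps "Six Street,Ontario" as a single field, so the record is not corrupt (Spark's CSV reader does this too, with `"` as the default value of its `quote` option). A quick stdlib illustration; the row values other than the address are made-up sample data:

    ```python
    import csv, io

    # A six-column record whose address contains a comma, protected by quotes.
    line = '7,Alice,30,90000,"Six Street,Ontario",nominee7\n'

    row = next(csv.reader(io.StringIO(line)))
    print(row[4])     # the quoted address stays one field: Six Street,Ontario
    print(len(row))   # 6 fields, so the record is well-formed
    ```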

  • @raviyadav-dt1tb
    @raviyadav-dt1tb 5 months ago

    Are corrupted data and complex data the same or different in Spark?

  • @mohitupadhayay1439
    @mohitupadhayay1439 1 month ago

    Can we do this when reading XML and JSON files?

  • @saifrahman1388
    @saifrahman1388 4 months ago

    Hi sir, your videos are really interesting and your way of teaching makes things easy. One doubt: I couldn't find the file link in the description. Could you please provide the link? May God fulfill all your dreams. Thank you ☺

  • @udittiwari8420
    @udittiwari8420 5 months ago

    Hello sir, are all these questions asked of freshers too?

  • @lakshyagupta5688
    @lakshyagupta5688 1 month ago

    At 15:32, why are only 3 records shown? As the mode is permissive, shouldn't the query fetch all the records?

  • @vishalupadhyay4356
    @vishalupadhyay4356 3 months ago

    Where did we get this CSV file (the employee data)?

  • @user-gt3pi6ir5u
    @user-gt3pi6ir5u 3 months ago

    Where can I get the CSV file used?

  • @lazycool0298
    @lazycool0298 2 months ago

    Hello Manish, until now we were working with the flights CSV; when should we bring in this employees CSV? Please guide.

  • @AAKASHSINGH-jt9hp
    @AAKASHSINGH-jt9hp 4 months ago

    Manish ji, your concept delivery is very good, but please provide a document for revision, because rewatching the videos again and again takes time.

  • @SantoshKumar-yr2md
    @SantoshKumar-yr2md 4 months ago

    Please also provide the dataset, sir, so we can get good hands-on practice.

  • @vaibhavshendre1
    @vaibhavshendre1 3 months ago

    How will Spark know that it's a bad record? Based on what conditions does it decide it's bad?
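    To the question above: Spark flags a record as malformed when it cannot be parsed against the expected schema, most commonly because the field count doesn't match the number of columns, or because a value fails to cast to the declared type. A conceptual sketch of those two checks in plain Python (this mirrors the idea, not Spark's actual internals; the `is_bad` helper and the third sample row are made up for illustration):

    ```python
    def is_bad(tokens, schema):
        """schema: list of (name, type) pairs; True if the record is malformed."""
        if len(tokens) != len(schema):       # wrong number of fields
            return True
        for value, (_, typ) in zip(tokens, schema):
            if value == "":                  # empty field -> null, still valid
                continue
            try:
                typ(value)                   # type-cast check, e.g. int("abc") fails
            except ValueError:
                return True
        return False

    schema = [("id", int), ("name", str), ("age", int),
              ("salary", int), ("address", str), ("nominee", str)]

    print(is_bad("1,Manish,26,75000,bihar,nominee1".split(","), schema))            # False
    print(is_bad("3,Pritam,22,150000,Bangalore,India,nominee3".split(","), schema)) # True: 7 fields
    print(is_bad("9,Bob,abc,100,delhi,nom9".split(","), schema))                    # True: age not an int
    ```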

  • @HeenaKhan-lk3dg
    @HeenaKhan-lk3dg 2 months ago

    Hi Manish, where can we find the employee file?

  • @SubhamKumar-or8vc
    @SubhamKumar-or8vc 9 months ago

    Hi Manish, when I run only this:

    df = spark.read.format("csv") \
        .option("inferschema", "true") \
        .option("header", "true") \
        .option("mode", "FAILFAST") \
        .load("/FileStore/tables/data.csv")

    and then run df.count(), it is showing 5 records in all 3 modes. But when I run df.show(), it gives output as per your explanation. What can be the possible reasons for this behavior of the count function?

  • @udittiwari8420

    @udittiwari8420

    5 months ago

    The count() function counts the number of rows in the DataFrame. It does not specifically check for corrupted or malformed rows when performing the count, so even if there are corrupted rows in the DataFrame, count() will still return the total number of rows.

  • @apurvsharma413
    @apurvsharma413 4 months ago

    Hi Manish, I have tried saving the corrupted records to a file, but i am unable to use %fs ls. It shows error - UsageError: Line magic function `%fs` not found. Can you help here?

  • @lazycool0298
    @lazycool0298 2 months ago

    How do we delete a table we created in Spark?

  • @yugantshekhar782
    @yugantshekhar782 1 month ago

    Hi Manish, nice solution, but what if I have 500+ columns in my table? How can I do that then?

  • @muskangupta735
    @muskangupta735 2 months ago

    In permissive mode, why didn't it create a new column?

  • @amanjha5422
    @amanjha5422 1 year ago

    Please upload your next videos, bhaiya 😊

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    Sure

  • @rakeshcheedara268
    @rakeshcheedara268 2 months ago

    Bhai, can you please provide English subtitles too, just to understand it better?

  • @abinashsatapathy1711
    @abinashsatapathy1711 11 months ago

    Manish ji, I have a doubt: how does the corrupt_record column take data from the beginning (i.e. the id column) of the row?

  • @kavitathorat4451

    @kavitathorat4451

    4 months ago

    column name should be "_corrupt_record"

  • @aakashkhandalkar9172
    @aakashkhandalkar9172 6 months ago

    I converted the text file to CSV, but when I printed that CSV in Databricks, a new 6th column was generated with null values, so basically I am not getting 3 different tables for the 3 different modes. What could the possible error be?

  • @udittiwari8420

    @udittiwari8420

    5 months ago

    Most probably your CSV was not created as per the need. Check it once; when opened in Notepad, the CSV should look like this:

    id,name,age,salary,address,nominee
    1,Manish,26,75000,bihar,nominee1
    2,Udit,25,100000,indore,nominee2
    3,jiya,15,1500000,lomri , India,nominee2
    4,swati,19,200000,kota,nominee4
    5,ravi,25,300000,indore ,India,nominee5
    6,tanu,25,120000,,nominee6

    There should not be any records inside " ". Try that.

  • @user-ed6wk8in5d
    @user-ed6wk8in5d 4 months ago

    Hi @manish, after creating the extra column to store corrupt data, instead of getting the whole row I'm getting the extra values present beyond the other columns, like 'nominee3'. While watching the video I was confused about how the whole row of corrupt data gets stored automatically.

  • @kavitathorat4451

    @kavitathorat4451

    4 months ago

    column name should be "_corrupt_record"

  • @Anonymous-qg5cw

    @Anonymous-qg5cw

    3 months ago

    @@kavitathorat4451 Thanks, got it. There is another option too, which I saw, where we can load the corrupt record into any column we want.

  • @mdreyaz5824
    @mdreyaz5824 9 months ago

    How do we handle corrupted data in a Parquet file?

  • @yogeshpatil186
    @yogeshpatil186 6 months ago

    Sir, I created the schema, but after that, in my case, the corrupted column only shows 'nominee'. Why does it show like this?

  • @tanmaytandel2425

    @tanmaytandel2425

    22 days ago

    Same for me. Did you get a solution?

  • @user-kv3yb1jn3f
    @user-kv3yb1jn3f 2 months ago

    How do you trace why a record is corrupt, or capture the error while parsing?

  • @manish_kumar_1

    @manish_kumar_1

    2 months ago

    You will have to check your source to see why corrupted data is being pushed.

  • @ayushtiwari104
    @ayushtiwari104 5 months ago

    Where is the Spark documentation source?

  • @anaggarwal2512
    @anaggarwal2512 9 months ago

    Why is there 1 job in PERMISSIVE mode but 3 jobs in DROPMALFORMED and FAILFAST?

  • @deeksha6514
    @deeksha6514 3 months ago

    badRecordsPath does not get created in local mode.

  • @kumarshivam8077
    @kumarshivam8077 5 months ago

    There is no CSV file in the description.

  • @user-dl3ck6ym4r
    @user-dl3ck6ym4r 4 months ago

    How can we reset our Databricks password? I am unable to reset it. Please suggest.

  • @ajaysinha5996
    @ajaysinha5996 11 months ago

    Hello Manish bhai, you haven't mentioned the CSV file in the description. Could you please provide it?

  • @manish_kumar_1

    @manish_kumar_1

    11 months ago

    Copy the data and save it as a CSV.

  • @sachindubey4315
    @sachindubey4315 11 months ago

    I am doing this in a Jupyter notebook. I have run the code, but the data is not getting stored in the new column; in the new column I am only getting null and nominee1 and nominee2. What error or gap could be the reason for this?

  • @manish_kumar_1

    @manish_kumar_1

    11 months ago

    Have you defined your own schema?

  • @sachindubey4315

    @sachindubey4315

    11 months ago

    @@manish_kumar_1 Yes, I had.

  • @shitalkurkure1402
    @shitalkurkure1402 1 year ago

    What if we have 100 columns and we want to print the bad records? In that case it is not possible to create the schema manually. Is there any other option to print bad records in that case?

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    No idea

  • @soumyaranjanrout2843

    @soumyaranjanrout2843

    7 months ago

    @shitalkurkure1402 Printing bad records is the same as storing them and then viewing them by converting to a dataframe. Suppose you have 100 columns and it's not possible to create the schema manually; then store the bad records at the required path and view them from there.

  • @shitalkurkure1402

    @shitalkurkure1402

    5 months ago

    @@soumyaranjanrout2843 Hey, thank you 😊 but for that also we need to write the schema manually first.

  • @amlansharma5429
    @amlansharma5429 1 year ago

    @Manish Kumar, in the options, are you using columnNameOfCorruptRecord? Because otherwise it's not displaying corrupt records. I don't know how my previous comment got deleted; I searched for this option on the internet and it worked for me.

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    No, I didn't use columnNameOfCorruptRecord. I created a manual schema and read the data with that schema.

  • @amlansharma5429

    @amlansharma5429

    1 year ago

    @@manish_kumar_1 After creating the manual schema, for me it appears as an extra column and shows 'India' in it for those rows.

  • @mukulraj1545

    @mukulraj1545

    7 months ago

    Same for me. Have you got any solution for this?

  • @ajourney179
    @ajourney179 1 year ago

    Hi Manish. From where are you learning Scala ?

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    Scala cookbook

  • @poojajoshi871
    @poojajoshi871 1 year ago

    IllegalArgumentException: If 'badRecordsPath' is specified, 'mode' is not allowed to set. mode: PermissiveMode — this is the error while storing the bad records. Please advise.

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    Remove the mode option from the code.

  • @killerricky54
    @killerricky54 10 months ago

    Hi Manish, I'm facing a problem reading the CSV file when I create it myself. After running the code, only the output shows, but the mode function is not working and the headers are not reflecting, probably due to an incorrect CSV file format. Can you share your CSV file so that I can download it? That would help a lot. Thanks.

  • @manish_kumar_1

    @manish_kumar_1

    10 months ago

    The data is already there in the description.

  • @chandanareddy8158

    @chandanareddy8158

    5 months ago

    Copy the data and save it from Notepad++ as file_name.csv.

  • @ShubhamSinghBelhari
    @ShubhamSinghBelhari 11 days ago

    Where is the CSV file?

  • @celebritysview007
    @celebritysview007 4 months ago

    Sir, what happens when our DF is emp_details, with 9 rows and 6 columns in total, after adding a _corrupt_record column to my_schema? How does it work (or not)? Please explain.

    Id,name,age,sal,add,nomine
    1,raju,17,15000,india,nom1
    2,mani,19,21000,usa,nom2
    3,Mona,21,31000,usa,nom3
    4,rani,32,4100,ind,nom4
    5,Mira,25,5000,mum,ind,nom5
    6,yoo,21,510,mum,mh,IND,nom6
    7,mahi,27,611,hyd,TS,Ind,nom7
    8,nani,31,711,hyd,TS,ind,nom1,nom2
    9,om,21,911,Pune,mh,ind,nom1,nom2,nom3

  • @amitkhandelwal8030
    @amitkhandelwal8030 2 months ago

    Hi, where is the CSV file link? I am not able to see it in the description.

  • @manish_kumar_1

    @manish_kumar_1

    2 months ago

    The data is there. Save it as a CSV file. KZread doesn't have an option to upload files.

  • @shivakrishna1743
    @shivakrishna1743 1 year ago

    Guys, where can I find the employee file? I have been following the series from the first video and I am not finding the data in the description.

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    Check now

  • @shivakrishna1743

    @shivakrishna1743

    1 year ago

    @@manish_kumar_1 Thanks!

  • @varsha2906ify
    @varsha2906ify 10 months ago

    Can you please tell me how to add comments?

  • @manish_kumar_1

    @manish_kumar_1

    10 months ago

    Use the # key for a single-line comment and Ctrl+/ for multi-line comments.

  • @yogesh9992008
    @yogesh9992008 1 year ago

    Couldn't find the CSV data in the description.

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    It's there

  • @kavitharaju7944
    @kavitharaju7944 2 months ago

    Can you release your videos in English?

  • @amanjha5422
    @amanjha5422 1 year ago

    Bhaiya, when will the next video come?

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    Day after tomorrow

  • @sanketraut8462
    @sanketraut8462 11 months ago

    I tried, but it does not show the seventh column.

  • @manish_kumar_1

    @manish_kumar_1

    11 months ago

    Send me the complete code

  • @sanketraut8462

    @sanketraut8462

    11 months ago

    @@manish_kumar_1 done, thank you

  • @mmohammedsadiq2483
    @mmohammedsadiq2483 9 months ago

    Where is the data (the CSV file)?

  • @manish_kumar_1

    @manish_kumar_1

    8 months ago

    In description

  • @gangaaramtech
    @gangaaramtech 3 months ago

    Hi @manish_kumar_1, I tried with the data you provided but I am not able to see the corrupted records based on the different modes, and when I created the employee schema and tried to see the corrupted records, it shows all the records under corrupted records.

  • @aakashrathore2287
    @aakashrathore2287 1 year ago

    Where is the CSV?

  • @manish_kumar_1

    @manish_kumar_1

    Жыл бұрын

    I didn't provide it. I will update it soon.

  • @dakshitamishra7501
    @dakshitamishra7501 1 year ago

    I am not able to print corrupted records; when I print the records it forms a new column with the nominee value. I am not able to understand what I am doing wrong.

    from pyspark.sql.types import StructType, StructField, IntegerType, StringType

    emp_schema = StructType([
        StructField("id", IntegerType(), True),
        StructField("name", StringType(), True),
        StructField("age", IntegerType(), True),
        StructField("salary", IntegerType(), True),
        StructField("address", StringType(), True),
        StructField("nominee", StringType(), True),
        StructField("corrrecod", StringType(), True)
    ])

    employee_df = spark.read.format("csv")\
        .option("header", "true")\
        .option("inferschema", "true")\
        .option("mode", "PERMISSIVE")\
        .schema(emp_schema)\
        .load("/FileStore/tables/EmployeeDetails.csv")

    employee_df.show(truncate=False)

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    emp_schema = StructType([
        StructField("id", IntegerType(), True),
        StructField("name", StringType(), True),
        StructField("age", IntegerType(), True),
        StructField("salary", IntegerType(), True),
        StructField("address", StringType(), True),
        StructField("nominee", StringType(), True),
        StructField("_corrupt_record", StringType(), True)
    ])

    Use this schema. Your schema was not correct.

  • @KotlaMuraliKrishna

    @KotlaMuraliKrishna

    10 months ago

    @@manish_kumar_1 Hi Manish, can you please explain the error in his schema? I was getting the same issue, but after copy-pasting your schema it worked for me; not sure why. Thanks in advance.

  • @KotlaMuraliKrishna

    @KotlaMuraliKrishna

    10 months ago

    @@manish_kumar_1 Got it, we should use _corrupt_record as the StructField name to get the complete record.

  • @ashishkumarak1783

    @ashishkumarak1783

    8 months ago

    @@KotlaMuraliKrishna @dakshitamishra7501 @Mdkaleem__ Manish sir used the column name _corrupt_record in the StructField. If you want any other column name, you have to add the columnNameOfCorruptRecord option with the column name you gave in the schema. Like this:

    emp_schema = StructType([
        StructField("id", IntegerType(), True),
        StructField("name", StringType(), True),
        StructField("age", IntegerType(), True),
        StructField("salary", IntegerType(), True),
        StructField("address", StringType(), True),
        StructField("nominee", StringType(), True),
        StructField("any_name_of_corrupt_record", StringType(), True)
    ])

    employee_df2 = spark.read.format("csv")\
        .option("header", "true")\
        .option("inferschema", "false")\
        .option("mode", "PERMISSIVE")\
        .schema(emp_schema)\
        .option("columnNameOfCorruptRecord", "any_name_of_corrupt_record")\
        .load("/FileStore/tables/employee_file.csv")

    employee_df2.show()

    You can go through this site: medium.com/@sasidharan-r/how-to-handle-corrupt-or-bad-record-in-apache-spark-custom-logic-pyspark-aws-430ddec9bb41

  • @swaroopsonu

    @swaroopsonu

    7 months ago

    @@KotlaMuraliKrishna Did you find any solution for this issue? I'm also getting the same output as yours.

  • @swetasoni2914
    @swetasoni2914 3 months ago

    Why am I getting a corrupt record here? Also, the corrupt record value is only the nominee.

    emp_df = spark.read.format("csv")\
        .option("header", "true")\
        .option("inferschema", "true")\
        .schema(emp_schema)\
        .option("badRecordsPath", "/FileStore/tables/bad_records")\
        .load("/FileStore/tables/employee_file.csv")

    emp_df.show(truncate=False)

    (1) Spark Jobs
    emp_df:pyspark.sql.dataframe.DataFrame = [id: integer, name: string ... 5 more fields]
    +---+--------+---+------+---------+-------+--------------+
    |id |name    |age|salary|address  |nominee|corrupt_record|
    +---+--------+---+------+---------+-------+--------------+
    |3  |Pritam  |22 |150000|Bangalore|India  |nominee3      |
    |4  |Prantosh|17 |200000|Kolkata  |India  |nominee4      |
    +---+--------+---+------+---------+-------+--------------+

  • @Anonymous-qg5cw

    @Anonymous-qg5cw

    3 months ago

    I have the same doubt: even if we add a new column, the extra record values are placed in it and the others have null.

  • @Anonymous-qg5cw

    @Anonymous-qg5cw

    3 months ago

    There are 2 options: 1) keep the column name '_corrupt_record', or 2) .load("/FileStore/tables/employee_file.csv", columnNameOfCorruptRecord='corrupt_record')

  • @sansha3881
    @sansha3881 1 year ago

    Unable to find the Spark file link, @manish kumar.

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    Yes, I had not provided it. I will add it soon.

  • @amlansharma5429
    @amlansharma5429 1 year ago

    Employee.csv:

    id,name,age,salary,address,nominee
    1,Manish,26,75000,bihar,nominee1
    2,Nikita,23,100000,uttarpradesh,nominee2
    3,Pritam,22,150000,Bangalore,India,nominee3
    4,Prantosh,17,200000,Kolkata,India,nominee4
    5,Vikash,31,300000,,nominee5

  • @manish_kumar_1

    @manish_kumar_1

    1 year ago

    Thanks Amlan

  • @amlansharma5429

    @amlansharma5429

    1 year ago

    @@manish_kumar_1 Oh sir, we would give our lives for you...

  • @Mdkaleem__
    @Mdkaleem__ 1 year ago

    Instead of the bad records it shows me the correct ones...

    bad_records_df = spark.read.format("json").load("/FileStore/tables/bad_records/20230610T072018/bad_records/")
    bad_records_df.show(truncate=False)

    |dbfs:/FileStore/tables/employee_df.csv|org.apache.spark.SparkRuntimeException: [MALFORMED_CSV_RECORD] Malformed CSV record: 1,Manish,26,75000,bihar,nominee1|1,Manish,26,75000,bihar,nominee1|
    |dbfs:/FileStore/tables/employee_df.csv|org.apache.spark.SparkRuntimeException: [MALFORMED_CSV_RECORD] Malformed CSV record: 2,Nikita,23,100000,uttarpradesh,nominee2|2,Nikita,23,100000,uttarpradesh,nominee2|
    |dbfs:/FileStore/tables/employee_df.csv|org.apache.spark.SparkRuntimeException: [MALFORMED_CSV_RECORD] Malformed CSV record: 5,Vikash,31,300000,,nominee5|5,Vikash,31,300000,,nominee5|

  • @Marcopronto

    @Marcopronto

    11 months ago

    While creating the schema, use "_corrupt_record" as the field name.
