When I try the pull cloudera command, it tells me that it's no longer supported. What's the solution to that?
@Abhish9221 · 1 month ago
Thank you, I am able to connect.
@dishasaxena7131 · 2 months ago
Thank you, this saved my life 2 hours before my practical exam.
@sa2144 · 3 months ago
Good video for starters.
@danistudycorner · 3 months ago
Thank you!!!
@VibavMahendran-hj4og · 3 months ago
Exception in thread "main" java.lang.ExceptionInInitializerError. Bro, I am getting this error while printing. How do I resolve this? I used the same code as you gave.
@berkayates6254 · 4 months ago
When I stopped and started the AWS Ubuntu machines, their IP addresses changed, so Hadoop does not know their new IP addresses. What is the solution for this situation?
@shot_freeze · 4 months ago
What type of job descriptions can we search for with this Hadoop-with-AWS skill set?
@ryan7ait · 5 months ago
How can I access the Cloudera Manager interface?
@ganeshjaggineni4097 · 5 months ago
NICE SUPER EXCELLENT MOTIVATED
@msftora3 · 6 months ago
Thank you very much, this solved my question on IntelliJ/Gradle/Scala.
@raviyadav-dt1tb · 6 months ago
Can anyone tell me how to solve this type of question in an interview? Where should one practice? Please tell me.
@HemanthKumar-cm9lv · 6 months ago
Why can't we do it in just 1 line? print(sorted(s) == sorted(t))
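For context, the one-liner above does work for a typical anagram check. A sketch comparing it with a collections.Counter version, which avoids the O(n log n) sort (the strings s and t below are sample inputs, since the actual exercise inputs are not shown in the thread):

```python
from collections import Counter

# Sample inputs; the real s and t from the exercise aren't shown here
s, t = "listen", "silent"

# The one-liner from the comment: O(n log n) due to the two sorts
print(sorted(s) == sorted(t))  # True

# Counter compares character frequencies instead, in O(n)
def is_anagram(a: str, b: str) -> bool:
    return Counter(a) == Counter(b)

print(is_anagram(s, t))          # True
print(is_anagram("rat", "car"))  # False
```

Both return the same result; the Counter version just scales better on long strings.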
@JayeshTank975 · 6 months ago
Thanks bro, very helpful for beginners.
@balajikagiti6984 · 7 months ago
Bro, can you make a video on how to write a Spark Scala DataFrame into a specific file instead of part files? Or if you did already, please provide me the link.
@mahmoudelsayed2325 · 3 months ago
yes please
@hey_ashy · 7 months ago
Great demo. Thank you.
@mohammedradman2756 · 8 months ago
Hi, after I added the variables to ~/.bashrc and ran source ~/.bashrc, I couldn't run any other command except cd; the rest of the commands can't be identified by Ubuntu. Any advice?
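This symptom almost always means a line in ~/.bashrc overwrote PATH instead of appending to it, so Ubuntu can no longer find /usr/bin and /bin. A hedged sketch of the recovery and the corrected export (the Hadoop install location is an assumption; substitute your own paths):

```shell
# Usual cause: an export in ~/.bashrc that CLOBBERS PATH, e.g.
#   export PATH=$HADOOP_HOME/bin        # wrong: drops /usr/bin, /bin, ...
# Temporary recovery in the broken session:
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin

# Correct form for ~/.bashrc: keep the existing PATH and append to it.
# (/opt/hadoop is an assumed install path.)
export HADOOP_HOME=/opt/hadoop
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"

# Sanity check: standard commands still resolve
command -v ls
```

After fixing the line in ~/.bashrc, open a fresh terminal (or run source ~/.bashrc again) and normal commands should work.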
@doseofdopamine2031 · 8 months ago
Is it possible to install Spark in there as well?
@arupanandaswain9581 · 8 months ago
Nice one buddy, it's full of information... (#mostInformativeVideo)
@nandhuvenkat1198 · 8 months ago
How to get the file onto another server instead of local?
@sushantsawant6577 · 9 months ago
I am getting this error:

Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.core.StreamReadConstraints
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 18 more

My code:

package org.example.scala

import org.apache.spark.sql.SparkSession

object SparkDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("Spark Demo")
      .getOrCreate()
    println("session behaviour")
    println("app name " + spark.sparkContext.appName)
    println("Deployment mode " + spark.sparkContext.deployMode)
    println("master " + spark.sparkContext.master)
    val df = spark.read.json("C:\\Users\\sushants asus book\\Downloads\\tweets.json")
    df.printSchema()
    df.show(false)
    spark.stop()
  }
}
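On this ClassNotFoundException: StreamReadConstraints was only introduced in jackson-core 2.15, so the error usually means an older jackson-core jar is on the classpath next to a newer jackson-databind pulled in by Spark. A hedged sbt sketch of the fix (2.15.2 is an assumed version; align it with the jackson-databind version your Spark build actually resolves):

```scala
// build.sbt fragment: pin jackson-core to a release that contains
// com.fasterxml.jackson.core.StreamReadConstraints (added in 2.15.x).
// The 2.15.2 version below is an assumption; match your jackson-databind.
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-core" % "2.15.2"
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-core" % "2.15.2"
```

Run the app again after reloading the build; if Gradle is used instead, the equivalent is forcing the same jackson-core version in the resolution strategy.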
@undefin-ed · 9 months ago
Bro, they will never ask a question like this; it's in a category even below easy.
@ajay_sn · 9 months ago
Hi, great video on Docker for Hue. Could you tell me how to install Ambari on the same setup on a Mac M1?
@sabesanj5509 · 9 months ago
# Create a list of numbers
numbers = [5, 2, 8, 1, 5, 4, 9, 3, 2, 6, 7, 1, 8, 3, 6, 4, 9, 7]

# Perform the bubble sort
n = len(numbers)
for i in range(n):
    for j in range(0, n - i - 1):
        if numbers[j] > numbers[j + 1]:
            # Swap the elements
            numbers[j], numbers[j + 1] = numbers[j + 1], numbers[j]

# Print the sorted list
print(numbers)
@vamsivegi · 9 months ago
Can you use Scala 3?
@aurovindsahoo · 10 months ago
Great post more
@amitkamble8493 · 1 year ago
Excellent logic bro
@reachrishav · 1 year ago
Hi Dinesh, upon refactoring I came up with this approach too:

SELECT a.value
FROM GENERATE_SERIES(1, (SELECT id FROM test)) a
CROSS APPLY (
    SELECT value
    FROM GENERATE_SERIES(1, a.value)
) b
@user-vi6su2qk6t · 1 year ago
Can you please create a video on shell scripting to get and put files from a Linux server to Windows and from a Linux server to a Linux server, with log files created, with a brief explanation, for automation?
@reachrishav · 1 year ago
Thank you for posting. Keep these videos coming. I tried a similar approach:

with cte as (
    select id, id as num from test
    union all
    select id - 1, id - 1 from cte where id > 1
),
cte2 as (
    select * from cte
    union all
    select id - 1, num from cte2 where id > 1
)
select num from cte2
order by 1
@umamaheswaranj9890 · 1 year ago
Thank you very much... I was able to successfully connect to Snowflake with the help from your demo.
@arvindynr · 1 year ago
Can you make a video on running Hortonworks on Docker on an M1 Mac?
@arvindynr · 1 year ago
Will this work on an M1 Mac?
@lightmyra2334 · 1 year ago
You saved me so much time! I've been trying other tutorials over and over, and this worked on first attempt! Thanks a bunch
@miggyjang4476 · 1 year ago
Java version 6 error:

com.jcraft.jsch.JSchException: Algorithm negotiation fail
    at com.jcraft.jsch.Session.receive_kexinit(Session.java:582)
@monikaveeramalla3381 · 5 months ago
How did you resolve the error? I am also facing the same issue, please help.
@prasadpatil5397 · 1 year ago
Very nice sir. Please add the whole Spark series. Waiting for the whole series.
@prasadpatil5397 · 1 year ago
Very very very nice explanation sir. Please please add the whole Spark series.
@AkhilGupta2007 · 1 year ago
Just curious, the list.sort() function can do this right away. Am I missing something here?
@rathikarajeshkanna3354 · 11 months ago
You're right. Maybe we can assume the interview question is to sort without using the sort function.
@praveenreddy9454 · 9 months ago
Without using any predefined functions or methods. And we also have to check the time complexity and space complexity of each data structure.
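A sketch of what that interview constraint might look like (this is an assumption about the intended question, i.e. sorting without any built-in sort, with the complexity noted):

```python
def insertion_sort(items):
    # O(n^2) time in the worst case, O(1) extra space beyond the copy,
    # and no built-in sort used.
    result = list(items)  # copy so the input list is left untouched
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements one slot right to make room for key
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

print(insertion_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

The bubble sort posted earlier in the thread satisfies the same constraint with the same O(n^2) time bound.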
@davidfong7624 · 1 year ago
Thanks so much. This was very helpful.
@professorpablo1465 · 1 year ago
Why do Hue and Hive sometimes start and work well with no errors, but sometimes both don't start? What should I do?
@PaOne24T · 1 year ago
Hi, how do we handle the public DNS that changes every time we stop and start the nodes?
@brunocarvalho3229 · 10 months ago
You need to allocate an Elastic IP to each one of your instances before doing the Hadoop setup
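The Elastic IP suggestion above can be sketched with the AWS CLI (the instance ID and allocation ID below are placeholders, and this assumes a configured awscli with a VPC account):

```shell
# Allocate a static Elastic IP (the output includes an AllocationId
# such as eipalloc-...)
aws ec2 allocate-address --domain vpc

# Bind it to a node so its public IP survives stop/start.
# i-0123456789abcdef0 and eipalloc-0abc123 are placeholder IDs.
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0abc123
```

Repeat for each NameNode/DataNode and put the stable IPs in the Hadoop config. Alternatively, configure Hadoop with the nodes' private IPs or private DNS names, which persist across stop/start inside a VPC.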
@max003003003 · 1 year ago
Thank you very, very much. I struggled with the Cloudera installation for a while; this was very useful for getting started with Cloudera.
@perfectperfecto · 1 year ago
Thank you, this will be a good starting foundation to help me learn more.
@ashutoshgadgil8407 · 1 year ago
It was previously GUI based; have they removed it now?
@shaileshsingh1445 · 1 year ago
Good video. The network port is already shown when you run docker ps, so inspect is not mandatory here!
@aneelswarna2208 · 1 year ago
Good explanation... keep going with more videos.
@franciscobelenguervicente4364 · 1 year ago
Hey! First of all, thanks for your content, it's very valuable :). I just completed the tutorial and successfully ran the cluster; however, I haven't had success launching the web UI. How do I get to the web UI from my local machine? Thanks.
@vorugantikamalika4455 · 1 year ago
How can we transfer files from one path to another path on the same Unix server using SFTP? Can we follow the same process?
@emadsamir6092 · 1 year ago
Did you get an answer to this question?
@vorugantikamalika4455 · 1 year ago
@@emadsamir6092 noo
@ggaur10 · 1 year ago
Hi Voruganti, I don't know about Linux, but for Windows you can do it. You have to give the local path as "/C:/<path to remote path>". Note the "/" before C:. One tip for understanding the structure of local and remote side paths for SFTP: use the popular software WinSCP and notice the remote and local paths there. Thanks, Gaurav
@hanumansays3725 · 1 year ago
How do I specify the localPath if we are developing on a Windows machine? Somehow C:\\Users\\file.txt isn't working.
@ggaur10 · 1 year ago
Hi Hanuman,
If the remote machine is Windows, for sftpPath use: "/C:/Users/file.txt".
For the local path, based on your OS, you can use a Windows-style or Linux-style path:
For Windows: "C:\\Users\\file.txt" or "C:/Users/file.txt"; both will work.
For Linux: "/home/users/user1/file1.txt" will work.
Let me know if it worked or not.
Thanks, Gaurav
@hanumansays3725 · 1 year ago
@@ggaur10 Thanks Gaurav, this worked on the Windows machine. Appreciate your response.
underrated but helpful... thanks