First, load the MySQL connector JAR from the local filesystem in a %dep paragraph:

%dep
z.load("/var/lib/ambari-server/resources/mysql-connector-java-5.1.17.jar")

Then, in a separate paragraph, read a table over JDBC:

val url = "jdbc:mysql://localhost:3306/hive"
val prop = new java.util.Properties
prop.setProperty("user", "root")
prop.setProperty("password", "****")
// read the Hive metastore's "version" table into a DataFrame
val people = sqlContext.read.jdbc(url, "version", prop)
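If the driver loaded correctly, the call returns a DataFrame right away; a minimal sanity check using standard DataFrame methods:

// confirm the connection worked: print the schema and the rows that came back
people.printSchema()
people.show()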
Alternatively, pull the driver straight from the Maven repository:

%dep
z.load("mysql:mysql-connector-java:5.1.35")

and then, in the next paragraph:

val driver = "com.mysql.jdbc.Driver"
val url = "jdbc:mysql://address=(protocol=tcp)(host=localhost)(port=3306)(user=...)(password=...)/dbname"
val jdbcDF = sqlContext.load("jdbc", Map(
  "url" -> url,
  "driver" -> driver,
  "dbtable" -> "table1"))
jdbcDF.registerTempTable("table1")
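Once registered, the temp table can be queried from Scala or from a %sql paragraph; a short sketch, assuming the placeholder table name table1 from above:

// query the registered temp table with Spark SQL
val firstRows = sqlContext.sql("SELECT * FROM table1 LIMIT 10")
firstRows.show()

The same query also works from a %sql paragraph: select * from table1 limit 10.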
For reference, the full %dep API:

%dep
z.reset() // clean up previously added artifacts and repositories

// add a Maven repository
z.addRepo("RepoName").url("RepoURL")

// add a Maven snapshot repository
z.addRepo("RepoName").url("RepoURL").snapshot()

// add an artifact from the filesystem
z.load("/path/to.jar")

// add an artifact from a Maven repository, with no dependencies
z.load("groupId:artifactId:version").excludeAll()

// add an artifact recursively
z.load("groupId:artifactId:version")

// add an artifact recursively, except for a comma-separated groupId:artifactId list
z.load("groupId:artifactId:version").exclude("groupId:artifactId,groupId:artifactId, ...")

// exclude with a pattern
z.load("groupId:artifactId:version").exclude("*")
z.load("groupId:artifactId:version").exclude("groupId:artifactId:*")
z.load("groupId:artifactId:version").exclude("groupId:*")

// local() skips adding the artifact to the Spark cluster (skips sc.addJar())
z.load("groupId:artifactId:version").local()

Note that the %dep interpreter must run before %spark, %pyspark, and %sql. Thanks to Ali and Neeraj from HWX for help in solving this issue.
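Putting the pieces together, a typical opening paragraph for this notebook could look like the sketch below; declaring Maven Central explicitly is optional and shown only for illustration:

%dep
z.reset()                                   // drop artifacts and repositories added earlier
z.addRepo("central").url("http://repo1.maven.org/maven2/")
z.load("mysql:mysql-connector-java:5.1.35") // driver plus its transitive dependencies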