This article collects typical usage examples of pyspark.sql.SQLContext.read in Python. If you have been wondering what SQLContext.read does, how to use it, or what it looks like in real code, the hand-picked examples below may help. Note that read is a property that returns a DataFrameReader rather than a callable method, so it is written sqlContext.read.format(...).load(...) instead of sqlContext.read(). You can also read more about the class it belongs to, pyspark.sql.SQLContext.
The two SQLContext.read code examples below are sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code examples.
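Before the database-backed examples, here is a minimal, self-contained sketch of the usual read pattern; the file name people.json is a placeholder for illustration and is not referenced by the examples below.

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "read-demo")
sqlContext = SQLContext(sc)

# read returns a DataFrameReader; choose a format and point it at a source.
# "people.json" is a placeholder path used only for this sketch.
df = sqlContext.read.format("json").load("people.json")
# Equivalent shorthand for the same read:
df = sqlContext.read.json("people.json")
df.printSchema()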
Example 1: loadMySQL
# Required module: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import read [as alias]
from pyspark import SparkContext
from pyspark.sql import SQLContext

def loadMySQL():
    sc = SparkContext("local", "Simple App")
    sqlContext = SQLContext(sc)
    # Push the query down to MySQL as a derived table (subquery alias)
    query = "(SELECT name_hash AS name, paper_hash AS paper FROM dblp.author LIMIT 1000) AS author"
    # read is a property returning a DataFrameReader; for JDBC sources the URL
    # goes in the "url" option and the table/subquery in "dbtable", not in path/schema
    jdbcDF = sqlContext.read.format("jdbc").options(
        url="jdbc:mysql://qcis4:3306/dblp?user=root&password=passwd",
        dbtable=query).load()
    jdbcDF.show()
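The same pushdown query can also be issued through DataFrameReader.jdbc; a minimal sketch, assuming the same MySQL host and credentials as in the example above:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext("local", "Simple App")
sqlContext = SQLContext(sc)

query = "(SELECT name_hash AS name, paper_hash AS paper FROM dblp.author LIMIT 1000) AS author"
# jdbc() takes the URL, a table name or subquery alias, and driver properties.
jdbcDF = sqlContext.read.jdbc(
    url="jdbc:mysql://qcis4:3306/dblp",
    table=query,
    properties={"user": "root", "password": "passwd"})
jdbcDF.show()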
Example 2: matEntry
# Required module: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import read [as alias]
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.mllib.linalg.distributed import CoordinateMatrix

# print(matEntry())
print("building matrix entries ...")
# matEntry(author, authors) returns the mapping function applied to each record;
# authorsRDD, author, authors and matEntry are defined elsewhere in the original module.
me = authorsRDD.map(matEntry(author, authors)).collect()  # .reduce(lambda x, y: x.append(y))
# me = matEntry()
# me = matEntryNoArgs()
print("collected matrix entries")
entries = sc.parallelize(me)
print("parallelized entries")
# Create a CoordinateMatrix from an RDD of MatrixEntry objects.
mat = CoordinateMatrix(entries)
print(mat)
# mat.saveAsTextFile("/home/xuepeng/uts/metapath.txt")
# Get its size.
print(mat.numRows())  # 3
print(mat.numCols())  # 2
if __name__ == "__main__":
    conf = SparkConf().setAppName("MetaPath")
    sc = SparkContext(conf=conf)
    sqlContext = SQLContext(sc)
    # read is a property, not a method; jdbc() takes the URL and a table name,
    # and the two hash columns are then selected from the resulting DataFrame.
    sqlContext.read.jdbc(
        url="jdbc:mysql://qcis4:3306/dblp?user=root&password=passwd",
        table="author_sample").select("name_hash", "paper_hash")
    process(sc, sqlContext)
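Because the matEntry helper itself is not shown in this excerpt, here is a minimal, self-contained sketch of the CoordinateMatrix construction it feeds; the MatrixEntry values are placeholders chosen only to reproduce the 3 x 2 size noted in the comments, not data from the dblp database.

from pyspark import SparkContext
from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry

sc = SparkContext("local", "coordinate-matrix-demo")

# Each MatrixEntry is (row index, column index, value); these values are
# placeholders for illustration, not entries produced by matEntry().
entries = sc.parallelize([
    MatrixEntry(0, 0, 1.0),
    MatrixEntry(1, 1, 2.0),
    MatrixEntry(2, 0, 3.5),
])
mat = CoordinateMatrix(entries)

print(mat.numRows())  # 3 (largest row index is 2)
print(mat.numCols())  # 2 (largest column index is 1)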