This article collects typical usage examples of the `SQLContext.tableNames` method from `pyspark.sql` in Python. If you are unsure what `SQLContext.tableNames` does, how to call it, or what it looks like in practice, the curated examples below may help. You can also read further about its containing class, `pyspark.sql.SQLContext`.
The following shows 2 code examples of `SQLContext.tableNames`, sorted by popularity by default.
Example 1:
# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import tableNames [as alias]
# Register the DataFrame as a SQLContext table.
sqlContext.registerDataFrameAsTable(df, "person")
# Run a SQL SELECT query against the registered table "person".
d = sqlContext.sql("select name from person")
# Create a new RDD by applying a function to each row.
names = d.map(lambda p: "name: " + p.name)
# Print each row of the new RDD.
for name in names.collect():
    print(name)
# Print the list of tables in the current database.
print(sqlContext.tableNames())
# Create a new DataFrame containing the union of the rows of both DataFrames.
ndf = df.unionAll(df1)
# Display the contents of the new DataFrame.
ndf.show()
# Write data to a MySQL table over JDBC. If overwrite is True, the table is truncated before the insert.
df1.insertIntoJDBC(
url="jdbc:mysql://localhost:3306/test?user=root&password=password", dbtable="person", overwrite=False
)
# Create a new table in the database. If you set allowExisting to True, it will drop any existing table with the given name.
# df1.createJDBCTable(url="jdbc:mysql://localhost:3306/test?user=root&password=password", dbtable="person", allowExisting=False)
Example 2: SQLContext
# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import tableNames [as alias]
## 2. Register DataFrame as a table ##
from pyspark.sql import SQLContext
sqlCtx = SQLContext(sc)
df = sqlCtx.read.json("census_2010.json")
df.registerTempTable('census2010')
tables = sqlCtx.tableNames()
print(tables)