This article collects typical usage examples of the Python method pyspark.sql.HiveContext.tableNames. If you are unsure how HiveContext.tableNames is used in practice, the sample code below may help. You can also read more about the class it belongs to, pyspark.sql.HiveContext.
One code example of the HiveContext.tableNames method is shown below.
Example 1: listing tables with HiveContext.tableNames
# Required import: from pyspark.sql import HiveContext
# Method used: HiveContext.tableNames
from pyspark import SparkContext
from pyspark.sql import HiveContext
from pyspark.sql.types import *
from udf.pyspark.udfs import *
if __name__ == "__main__":
    sc = SparkContext(appName="SparkSQL:[demo][pysparkdemo]")
    sqlContext = HiveContext(sc)

    # Build a DataFrame from a Parquet directory and expose it as a temp table
    df = sqlContext.read.parquet("/mvad/warehouse/session/dspan/date=2015-09-01/")
    df.registerTempTable("sessionlog")

    # tableNames() returns the names of all tables visible in the current
    # database, including the temp table registered above
    for table in sqlContext.tableNames():
        print(table)
    df.printSchema()

    # Register a Python UDF so it can be called from SQL
    sqlContext.udf.register("toNormalCookie", toNormalCookie)
    sql1 = """select toNormalCookie(cookie) as cookiestr, eventTime, eventType,
              geoInfo.country as country, geoInfo.province as province
              from sessionlog limit 10""".replace('\n', ' ')
    sample = sqlContext.sql(sql1)
    sample.show()

    sql2 = """select eventType, count(cookie) as count from sessionlog
              group by eventType""".replace('\n', ' ')
    result = sqlContext.sql(sql2)
    result.cache()
    result.show()  # show() prints 20 rows by default