This article collects typical usage examples of the Python method pyspark.sql.SQLContext.stop. If you are wondering what SQLContext.stop does or how to use it, the curated code example below may help. You can also explore further usage examples of the containing class, pyspark.sql.SQLContext.
One code example of SQLContext.stop is shown below.
Example 1: __init__
# Required imports for this example:
import json
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

class Consumer:
    """Simple Spark Kafka streaming consumer."""

    def __init__(self, casshost, interval, zookeeper, topic):
        self.conf = SparkConf().setAppName("KafkaSpark") \
                               .set("spark.cassandra.connection.host", casshost)
        self.sc = SparkContext(conf=self.conf)
        self.sqlContext = SQLContext(sparkContext=self.sc)
        self.ssc = StreamingContext(self.sc, batchDuration=interval)
        self.zookeeper = zookeeper
        self.topic = topic

    def check_and_write(self, rdd):
        # Convert the RDD of dicts to a DataFrame and append it to Cassandra.
        try:
            rdd.toDF().write \
               .format("org.apache.spark.sql.cassandra") \
               .options(table="test1", keyspace="mykeyspace") \
               .save(mode="append")
        except ValueError:
            # toDF() raises ValueError when the batch RDD is empty.
            print("No rdd found!")

    def consume(self):
        messages = KafkaUtils.createStream(self.ssc, self.zookeeper,
                                           "spark-streaming-consumer",
                                           {self.topic: 1})
        lines = messages.map(lambda x: x[1])
        # Parse each JSON payload once and keep only the fields we store.
        rows = lines.map(lambda x: json.loads(x)) \
                    .map(lambda d: {"data": d["data"], "time": d["time"]})
        rows.foreachRDD(self.check_and_write)
        self.ssc.start()
        self.ssc.awaitTermination()

    def stop(self):
        if self.sqlContext is not None:
            self.sqlContext.stop()
        if self.ssc is not None:
            self.ssc.stop()
        if self.sc is not None:
            self.sc.stop()