

Python SQLContext.load Method Code Examples

This article collects typical usage examples of the pyspark.SQLContext.load method in Python, answering the common questions: what exactly does SQLContext.load do, how is it called, and what does it look like in real code? The curated examples below should help, and you can also explore other usage examples for pyspark.SQLContext.


Four code examples of SQLContext.load are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system surface better Python code examples.
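For readers coming from newer Spark versions: SQLContext.load(path=None, source=None, schema=None, **options) is the generic data-source reader from Spark 1.3, deprecated since Spark 1.4 in favor of sqlContext.read.format(...).load(). A minimal sketch of both call styles (the connection URL and table name are placeholders, not taken from the examples below):

from pyspark import SparkContext, SQLContext

sc = SparkContext(appName="sqlcontext_load_demo")
sqlContext = SQLContext(sc)

url = 'jdbc:mysql://localhost:3306/mydb?user=me&password=secret'  # placeholder

# Spark 1.3 style, as used in the examples below (deprecated since 1.4):
df_old = sqlContext.load(source='jdbc', driver='com.mysql.jdbc.Driver',
                         url=url, dbtable='MyTable')

# Equivalent Spark 1.4+ reader API:
df_new = (sqlContext.read.format('jdbc')
          .option('driver', 'com.mysql.jdbc.Driver')
          .option('url', url)
          .option('dbtable', 'MyTable')
          .load())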

Example 1: int

# Required import: from pyspark import SQLContext [as alias]
# Or: from pyspark.SQLContext import load [as alias]
CLOUDSQL_PWD  = sys.argv[4]

BEST_RANK = int(sys.argv[5])
BEST_ITERATION = int(sys.argv[6])
BEST_REGULATION = float(sys.argv[7])

TABLE_ITEMS  = "Accommodation"
TABLE_RATINGS = "Rating"
TABLE_RECOMMENDATIONS = "Recommendation"

# Read the data from Cloud SQL and create the DataFrames
#[START read_from_sql]
jdbcDriver = 'com.mysql.jdbc.Driver'
jdbcUrl    = 'jdbc:mysql://%s:3306/%s?user=%s&password=%s' % (CLOUDSQL_INSTANCE_IP, CLOUDSQL_NAME, CLOUDSQL_USER, CLOUDSQL_PWD)
dfAccos = sqlContext.load(source='jdbc', driver=jdbcDriver, url=jdbcUrl, dbtable=TABLE_ITEMS)
dfRates = sqlContext.load(source='jdbc', driver=jdbcDriver, url=jdbcUrl, dbtable=TABLE_RATINGS)
#[END read_from_sql]

# Collect the ids of all accommodations our user has already rated
dfUserRatings  = dfRates.filter(dfRates.userId == USER_ID).map(lambda r: r.accoId).collect()
print(dfUserRatings)

# Keep only the accommodations that have not been rated by our user
rddPotential  = dfAccos.rdd.filter(lambda x: x[0] not in dfUserRatings)
pairsPotential = rddPotential.map(lambda x: (USER_ID, x[0]))

#[START split_sets]
rddTraining, rddValidating, rddTesting = dfRates.rdd.randomSplit([6,2,2])  # 60% / 20% / 20%
#[END split_sets]
Developer ID: JimTravis, Project: spark-recommendation-engine, Lines of code: 32, Source file: app_collaborative.py
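The excerpt stops before the prediction step. A hedged sketch of a typical continuation (not part of the excerpt), assuming the rating rows are (userId, accoId, rating) with numeric ids, and reusing the variables above:

from pyspark.mllib.recommendation import ALS

# Train with the best hyperparameters passed on the command line,
# then score every (user, accommodation) pair the user has not rated.
model = ALS.train(rddTraining, BEST_RANK, BEST_ITERATION, BEST_REGULATION)
predictions = model.predictAll(pairsPotential.map(lambda p: (int(p[0]), int(p[1]))))
top5 = predictions.sortBy(lambda p: p.rating, ascending=False).take(5)
print(top5)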

Example 2: SparkConf

# Required import: from pyspark import SQLContext [as alias]
# Or: from pyspark.SQLContext import load [as alias]
from pyspark import SparkConf, SparkContext, SQLContext  # implied by the snippet below
from pyspark.mllib.recommendation import ALS             # needed for ALS.train below
from pyspark.sql.types import StructType, StructField, StringType

conf = SparkConf().setAppName("app_collaborative")
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

jdbcDriver = 'com.mysql.jdbc.Driver'
jdbcUrl    = 'jdbc:mysql://173.194.227.120:3306/recoom?user=root'

USER_ID = 0

# Read the data from Cloud SQL and create the DataFrames
dfAccos = sqlContext.load(source='jdbc', driver=jdbcDriver, url=jdbcUrl, dbtable='AccommodationT')
dfRates = sqlContext.load(source='jdbc', driver=jdbcDriver, url=jdbcUrl, dbtable='RatingT')

# Collect the ids of all accommodations our user has already rated
dfUserRatings  = dfRates.filter(dfRates.userId == USER_ID).map(lambda r: r.accoId).collect()
print(dfUserRatings)

# Keep only the accommodations that have not been rated by our user
rddPotential  = dfAccos.rdd.filter(lambda x: x[0] not in dfUserRatings)
pairsPotential = rddPotential.map(lambda x: (USER_ID, x[0]))


rddTraining, rddValidating, rddTesting = dfRates.rdd.randomSplit([6,2,2])
model = ALS.train(rddTraining, 20, 20, 0.1)  # rank=20, iterations=20, lambda=0.1

"""
Developer ID: watthieu, Project: gcp-recommendation, Lines of code: 33, Source file: app_collaborative_t.py
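A hedged sketch of the step that usually follows: score the candidate pairs and write them back over the same JDBC connection. The RecommendationT table name is an assumption mirroring the ...T tables above, and df.write.jdbc requires Spark 1.4+:

from pyspark.sql.types import FloatType

# Cast predictions to plain Python types so they match the schema.
predictions = (model.predictAll(pairsPotential.map(lambda p: (int(p[0]), int(p[1]))))
               .map(lambda p: (str(p[0]), str(p[1]), float(p[2]))))
schema = StructType([StructField("userId", StringType(), True),
                     StructField("accoId", StringType(), True),
                     StructField("prediction", FloatType(), True)])
dfToSave = sqlContext.createDataFrame(predictions, schema)
dfToSave.write.jdbc(url=jdbcUrl, table='RecommendationT', mode='append')  # hypothetical table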

Example 3: int

# Required import: from pyspark import SQLContext [as alias]
# Or: from pyspark.SQLContext import load [as alias]
  # (The excerpt starts inside a model-evaluation helper: 'against' is an RDD of
  # (user, item, rating) rows, and 'againstNoRatings' is defined above the cut.)
  againstWiRatings = against.map(lambda x: ((int(x[0]),int(x[1])), int(x[2])) )

  # Make a prediction and map it for later comparison
  # The map has to be ((user,product), rating) not ((product,user), rating)
  predictions = model.predictAll(againstNoRatings).map(lambda p: ( (p[0],p[1]), p[2]) )

  # Returns the pairs (prediction, rating)
  predictionsAndRatings = predictions.join(againstWiRatings).values()

  # Return the RMSE (root of the mean squared error)
  return sqrt(predictionsAndRatings.map(lambda s: (s[0] - s[1]) ** 2).reduce(add) / float(sizeAgainst))
#[END how_far]

# Read the data from Cloud SQL and create the DataFrame
dfRates = sqlContext.load(source='jdbc', driver=jdbcDriver, url=jdbcUrl, dbtable='Rating')

rddUserRatings = dfRates.filter(dfRates.userId == 0).rdd
print(rddUserRatings.count())

# Split the data into three sets: training (60%), validating (20%), testing (20%)
rddRates = dfRates.rdd
rddTraining, rddValidating, rddTesting = rddRates.randomSplit([6,2,2])

# Add the user's own ratings to the training set.
# union() returns a new RDD, so the result must be reassigned.
rddTraining = rddTraining.union(rddUserRatings)
nbValidating = rddValidating.count()
nbTesting    = rddTesting.count()

print("Training: %d, validation: %d, test: %d" % (rddTraining.count(), nbValidating, rddTesting.count()))
Developer ID: JimTravis, Project: spark-recommendation-engine, Lines of code: 33, Source file: find_model_collaborative.py
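The RMSE helper excerpted above is typically driven by a grid search over the ALS hyperparameters. A minimal sketch, assuming the helper's full signature is howFarAreWe(model, against, sizeAgainst) (its def line falls outside the excerpt) and that the parameter grids are illustrative:

from itertools import product
from pyspark.mllib.recommendation import ALS

bestRmse, bestParams = float('inf'), None
for rank, numIter, reg in product([5, 10], [10, 20], [0.1, 1.0]):
    model = ALS.train(rddTraining, rank, numIter, reg)
    rmse = howFarAreWe(model, rddValidating, nbValidating)  # helper excerpted above
    if rmse < bestRmse:
        bestRmse, bestParams = rmse, (rank, numIter, reg)
print("Best (rank, iterations, lambda): %s -> RMSE %.4f" % (bestParams, bestRmse))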

Example 4: map

# Required import: from pyspark import SQLContext [as alias]
# Or: from pyspark.SQLContext import load [as alias]
# Imports implied by the excerpt below
import pyspark.sql as sql
from pyspark import SQLContext

sqc = SQLContext(sc)  # sc: an existing SparkContext

# The idea is to read the CSV directly into a Spark DataFrame

# Define the schema. Sample CSV header and data row:
# msisdn,SongUniqueCode,Duration,Circle,DATE,DNIS,MODE,businesscategory
# 9037991838,Hun-14-63767,202,Kolkata,10/1/2014,59090,,HindiTop20

mySchema = sql.types.StructType([
    sql.types.StructField("msisdn", sql.types.StringType(), False),
    sql.types.StructField("songid", sql.types.StringType(), False),
    sql.types.StructField("duration", sql.types.IntegerType(), True),
    sql.types.StructField("Circle", sql.types.StringType(), True),
    sql.types.StructField("date", sql.types.StringType(), True),
    sql.types.StructField("dnis", sql.types.StringType(), True),  # in the CSV header but missing from the original schema
    sql.types.StructField("mode", sql.types.StringType(), True),
    sql.types.StructField("businesscategory", sql.types.StringType(), True)
])

transdf = sqc.load(source="com.databricks.spark.csv", path="file:///home/loq/sunil/spark/content_data.csv", schema=mySchema)

transdf.take(2)

# Alternative: read the file as plain text and build Rows manually
'''
transrdd=sc.textFile("file:///home/loq/sunil/spark/content_data.csv").\
            map(lambda x: x.split(',')).\
            map(lambda y: sql.Row(msisdn=y[0],songid=y[1],duration=y[2],circle=y[3],businesscategory=y[7]))

print(transrdd.take(2))
'''
Developer ID: sunil3loq, Project: sparking, Lines of code: 32, Source file: experimentsone.py
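The com.databricks.spark.csv package call above predates the built-in CSV reader. Since Spark 2.0 the same load can be written without the external package; a minimal sketch, reusing the path and schema from the example:

transdf = (sqc.read
           .format('csv')  # built into Spark 2.0+
           .schema(mySchema)
           .load('file:///home/loq/sunil/spark/content_data.csv'))
transdf.show(2)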


Note: The pyspark.SQLContext.load method examples in this article were compiled by 純淨天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by various developers; copyright of the source code belongs to the original authors. Refer to each project's License before distributing or reusing the code; do not repost without permission.