

Python sql.Row Class Code Examples

This article collects typical usage examples of the Python class pyspark.sql.Row. If you are asking yourself what the Row class does, how to use it, or where to find examples of it in practice, the curated class examples below may help.


The following shows 4 code examples of the Row class, ordered by popularity.
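Before turning to the examples, here is a minimal sketch of basic Row behavior (the field names and values are illustrative only):

from pyspark.sql import Row

# A Row behaves like an immutable named tuple: fields can be read by
# attribute, by key, or by position, and asDict() converts it to a dict.
person = Row(name="Alice", age=11)
print(person.name)        # 'Alice'
print(person["age"])      # 11
print(person.asDict())    # {'name': 'Alice', 'age': 11}

# Row also acts as a class factory: declare the field names once, then
# instantiate rows positionally.
Person = Row("name", "age")
print(Person("Bob", 12))  # Row(name='Bob', age=12)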

Example 1: test_convert_row_to_dict

def test_convert_row_to_dict(self):
    # Nested Rows inside a list and a dict survive asDict() conversion.
    row = Row(l=[Row(a=1, b='s')], d={"key": Row(c=1.0, d="2")})
    self.assertEqual(1, row.asDict()['l'][0].a)
    # Round-trip through a DataFrame and SQL, then check the fields again.
    df = self.sc.parallelize([row]).toDF()
    df.registerTempTable("test")
    row = self.sqlCtx.sql("select l, d from test").head()
    self.assertEqual(1, row.asDict()["l"][0].a)
    self.assertEqual(1.0, row.asDict()['d']['key'].c)
Author: uncleGen, Project: ps-on-spark, Lines: 8, Source: tests.py

Example 2: test_convert_row_to_dict

def test_convert_row_to_dict(self):
    row = Row(l=[Row(a=1, b='s')], d={"key": Row(c=1.0, d="2")})
    self.assertEqual(1, row.asDict()['l'][0].a)
    df = self.sc.parallelize([row]).toDF()

    # Same check as Example 1, but using the Spark 2.x session API and
    # a temp view that is cleaned up automatically.
    with self.tempView("test"):
        df.createOrReplaceTempView("test")
        row = self.spark.sql("select l, d from test").head()
        self.assertEqual(1, row.asDict()["l"][0].a)
        self.assertEqual(1.0, row.asDict()['d']['key'].c)
Author: JingchengDu, Project: spark, Lines: 10, Source: test_types.py

Example 3: _create_row

def _create_row(fields, values):
    # Build a Row from the positional values, then attach the field
    # names so the result supports named attribute access.
    row = Row(*values)
    row.__fields__ = fields
    return row
Author: Bekbolatov, Project: spark, Lines: 4, Source: types.py
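This is the internal helper pyspark.sql.types uses to build a Row with explicit field names. A quick illustration of what it produces (the field names and values here are made up):

row = _create_row(["name", "age"], ["Alice", 11])
print(row.name)   # 'Alice' -- named access works via __fields__
print(row[1])     # 11 -- positional access still works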

Example 4: StructField

    StructField("pix6",DoubleType(),True),
    StructField("pix7",DoubleType(),True),
    StructField("pix8",DoubleType(),True),
    StructField("pix9",DoubleType(),True),
    StructField("pix10",DoubleType(),True),
    StructField("pix11",DoubleType(),True),
    StructField("pix12",DoubleType(),True),
    StructField("pix13",DoubleType(),True),
    StructField("pix14",DoubleType(),True),
    StructField("pix15",DoubleType(),True),
    StructField("pix16",DoubleType(),True),
    StructField("label",DoubleType(),True)
])
# Load the raw dataset as lists of 17 floats, using 4 partitions.
pen_raw = sc.textFile("first-edition/ch08/penbased.dat", 4) \
    .map(lambda x: x.split(", ")) \
    .map(lambda row: [float(x) for x in row])

def parseRow(row):
    # Name the 16 pixel values pix1..pix16 and the final value "label",
    # so the dict matches penschema.
    d = {("pix" + str(i)): row[i - 1] for i in range(1, 17)}
    d.update({"label": row[16]})
    return d

dfpen = sqlContext.createDataFrame(pen_raw.map(parseRow), penschema)
# Assemble the 16 pixel columns into a single "features" vector column.
va = VectorAssembler(outputCol="features", inputCols=dfpen.columns[0:-1])
penlpoints = va.transform(dfpen).select("features", "label")

# Split 80/20 into training and validation sets and cache both.
pensets = penlpoints.randomSplit([0.8, 0.2])
pentrain = pensets[0].cache()
penvalid = pensets[1].cache()

penlr = LogisticRegression(regParam=0.01)
Author: AkiraKane, Project: first-edition, Lines: 30, Source: ch08-listings.py
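The listing stops after constructing the estimator. A minimal sketch of how training and evaluation might continue, assuming Spark 2.x (where LogisticRegression supports multinomial labels); the evaluator setup and variable names below are not part of the original listing:

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Fit on the training split and score the held-out validation split.
penlrmodel = penlr.fit(pentrain)
predictions = penlrmodel.transform(penvalid)

evaluator = MulticlassClassificationEvaluator(metricName="accuracy")
print(evaluator.evaluate(predictions))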


Note: The pyspark.sql.Row class examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by their respective authors; copyright remains with the original authors. Refer to each project's license before distributing or reusing the code; do not reproduce without permission.