

Python SQLContext.table Method Code Examples

This article collects typical usage examples of the pyspark.sql.SQLContext.table method in Python. If you are wondering what SQLContext.table does, how to call it, or what real-world uses look like, the curated examples below should help. You can also explore further usage examples of the enclosing pyspark.sql.SQLContext class.


The following shows 7 code examples of the SQLContext.table method, sorted by popularity by default.
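Before the examples, here is a minimal hedged sketch of the typical pattern: register a DataFrame under a table name, then use SQLContext.table to get it back as a DataFrame. The app name, table name, and sample rows are illustrative assumptions, not taken from the examples that follow.

from pyspark import SparkContext
from pyspark.sql import SQLContext, Row

sc = SparkContext(appName="sqlcontext_table_demo")   # assumed app name
sqlContext = SQLContext(sc)

# build a tiny DataFrame and register it under a table name
people_df = sqlContext.createDataFrame(
    [Row(name="Ann", age=30), Row(name="Bob", age=25)])
people_df.registerTempTable("people")

# SQLContext.table returns the registered table as a DataFrame
same_df = sqlContext.table("people")
same_df.show()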

Example 1: Row

# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import table [as alias]
# csv_data is an RDD of already-split CSV fields (built earlier in the source script)
row_data = csv_data.map(lambda p: Row(
    duration=int(p[0]),
    protocol_type=p[1],
    service=p[2],
    flag=p[3],
    src_bytes=int(p[4]),
    dst_bytes=int(p[5]),
    label=p[41]
    )
)

# list the tables we currently have
sqlContext.sql("show tables").show()

# fetch an existing table as a DataFrame
sqlContext.table("people")

# transform the Python RDD into a DataFrame
interactions_df = sqlContext.createDataFrame(row_data)
# register it as a temp table
interactions_df.registerTempTable("interactions")

# check our table list again
sqlContext.sql("show tables").show()

# print the schema
interactions_df.printSchema()
# show the top 2 rows
interactions_df.show(2)

Author: wlsherica, Project: HadoopCon_2015_SparkSQL, Lines: 31, Source: SparkSQL_training.py
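As a small follow-up to Example 1 (not part of the original script, so treat it as an assumed continuation), the registered "interactions" temp table can be read back with SQLContext.table and queried with Spark SQL:

# read the registered temp table back as a DataFrame
interactions_tbl = sqlContext.table("interactions")
interactions_tbl.select("protocol_type", "duration").show(5)

# an illustrative aggregation over the same table
sqlContext.sql(
    "SELECT protocol_type, COUNT(*) AS cnt FROM interactions GROUP BY protocol_type").show()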

Example 2: PSparkContext

# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import table [as alias]

#......... part of the code omitted .........
        data.
        The types are re-inferred, so they may not match.
        Parameters
        ----------
        local_df: Pandas DataFrame
            The data to turn into a distributed Sparkling Pandas DataFrame.
            See http://bit.ly/pandasDataFrame for docs.
        Returns
        -------
        A Sparkling Pandas DataFrame.
        """
        def frame_to_rows(frame):
            """Convert a Pandas DataFrame into a list of Spark SQL Rows"""
            # TODO: Convert to row objects directly?
            return [r.tolist() for r in frame.to_records()]
        schema = list(local_df.columns)
        index_names = list(local_df.index.names)
        index_names = _normalize_index_names(index_names)
        schema = index_names + schema
        rows = self.spark_ctx.parallelize(frame_to_rows(local_df))
        sp_df = DataFrame.from_schema_rdd(
            self.sql_ctx.createDataFrame(
                rows,
                schema=schema,
                # Look at all the rows, should be ok since coming from
                # a local dataset
                samplingRatio=1))
        sp_df._index_names = index_names
        return sp_df

    def sql(self, query):
        """Perform a SQL query and create a L{DataFrame} of the result.
        The SQL query is run using Spark SQL. This is not intended for
        querying arbitrary databases, but rather querying Spark SQL tables.
        Parameters
        ----------
        query: string
            The SQL query to pass to Spark SQL to execute.
        Returns
        -------
        Sparkling Pandas DataFrame.
        """
        return DataFrame.from_spark_rdd(self.sql_ctx.sql(query), self.sql_ctx)

    def table(self, table):
        """Returns the provided Spark SQL table as a L{DataFrame}
        Parameters
        ----------
        table: string
            The name of the Spark SQL table to turn into a L{DataFrame}
        Returns
        -------
        Sparkling Pandas DataFrame.
        """
        return DataFrame.from_spark_rdd(self.sql_ctx.table(table),
                                        self.sql_ctx)

    def from_spark_rdd(self, spark_rdd):
        """
        Translates a Spark DataFrame into a Sparkling Pandas Dataframe.
        Currently, no checking or validation occurs.
        Parameters
        ----------
        spark_rdd: Spark DataFrame
            Input Spark DataFrame.
        Returns
Author: jhlch, Project: sparklingpandas, Lines: 70, Source: pcontext.py
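A hedged usage sketch for the PSparkContext.table method shown in Example 2; the constructor arguments and the table name "users" are assumptions, and the exact PSparkContext signature may differ between sparklingpandas versions:

# assumes an existing SparkContext `sc` and SQLContext `sqlCtx`,
# and a Spark SQL table named "users" registered beforehand
psc = PSparkContext(sc, sqlCtx)        # assumed constructor
users_df = psc.table("users")          # Sparkling Pandas DataFrame backed by sqlCtx.table("users")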

Example 3: PSparkContext

# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import table [as alias]

#......... part of the code omitted .........
            return self.from_pandas_rdd(
                self.spark_ctx.wholeTextFiles(name)
                .mapPartitionsWithIndex(csv_file))
        else:
            return self.from_pandas_rdd(
                self.spark_ctx.textFile(name).mapPartitionsWithIndex(csv_rows))

    def parquetFile(self, *paths):
        """Loads a Parquet file, returning the result as a L{Dataframe}.
        """
        return self.from_spark_rdd(self.sql_ctx.parquetFile(paths),
                                   self.sql_ctx)

    def jsonFile(self, path, schema=None, sampling_ratio=1.0):
        """Loads a text file storing one JSON object per line as a
        L{Dataframe}.
        """
        schema_rdd = self.sql_ctx.jsonFile(path, schema, sampling_ratio)
        return self.from_spark_rdd(schema_rdd, self.sql_ctx)

    def from_pd_data_frame(self, local_df):
        """Make a distributed dataframe from a local dataframe. The intend use
        is for testing. Note: dtypes are re-infered, so they may not match."""
        def frame_to_rows(frame):
            """Convert a Panda's DataFrame into Spark SQL Rows"""
            # TODO: Convert to row objects directly?
            return [r.tolist() for r in frame.to_records()]
        schema = list(local_df.columns)
        index_names = list(local_df.index.names)
        index_names = _normalize_index_names(index_names)
        schema = index_names + schema
        rows = self.spark_ctx.parallelize(frame_to_rows(local_df))
        sp_df = Dataframe.from_schema_rdd(
            self.sql_ctx.createDataFrame(
                rows,
                schema=schema,
                # Look at all the rows, should be ok since coming from
                # a local dataset
                samplingRatio=1))
        sp_df._index_names = index_names
        return sp_df

    def sql(self, query):
        """Perform a SQL query and create a L{Dataframe} of the result."""
        return Dataframe.from_spark_rdd(self.sql_ctx.sql(query), self.sql_ctx)

    def table(self, table):
        """Returns the provided table as a L{Dataframe}"""
        return Dataframe.from_spark_rdd(self.sql_ctx.table(table),
                                        self.sql_ctx)

    def from_spark_rdd(self, spark_rdd, sql_ctx):
        """
        Translates a Spark DataFrame RDD into a SparklingPandas dataframe.
        :param dataframe_rdd: Input dataframe RDD to convert
        :return: Matching SparklingPandas dataframe
        """
        return Dataframe.from_spark_rdd(spark_rdd, sql_ctx)

    def DataFrame(self, elements, *args, **kwargs):
        """Wraps the pandas.DataFrame operation."""
        return self.from_pd_data_frame(pandas.DataFrame(
            elements,
            *args,
            **kwargs))

    def from_pandas_rdd(self, pandas_rdd):
        def _extract_records(data):
            return [r for r in data.to_records(index=False).tolist()]

        def _from_pandas_rdd_records(pandas_rdd_records, schema):
            """Create a L{Dataframe} from an RDD of records with schema"""
            return Dataframe.from_spark_rdd(
                self.sql_ctx.createDataFrame(pandas_rdd_records,
                                             schema.values.tolist()),
                self.sql_ctx)

        schema = pandas_rdd.map(lambda x: x.columns).first()
        rdd_records = pandas_rdd.flatMap(_extract_records)
        return _from_pandas_rdd_records(rdd_records, schema)

    def read_json(self, name,
                  *args, **kwargs):
        """Read a json file in and parse it into Pandas DataFrames.
        If no names are provided, we use the first row for the names.
        Currently, it is not possible to skip the first n rows of a file.
        Headers are provided in the json file and not specified separately.
        """
        def json_file_to_df(files):
            """ Transforms a JSON file into a list of data"""
            for _, contents in files:
                yield pandas.read_json(sio(contents), *args, **kwargs)

        return self.from_pandas_rdd(self.spark_ctx.wholeTextFiles(name)
                                    .mapPartitions(json_file_to_df))

    def stop(self):
        """Stop the underlying SparkContext
        """
        self.spark_ctx.stop()
Author: asaf-erlich, Project: sparklingpandas, Lines: 104, Source: pcontext.py
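For the read_json helper in Example 3, a brief hedged sketch; the path and the `psc` instance are placeholder assumptions:

# each JSON file under the path is parsed with pandas.read_json and then distributed
events = psc.read_json("hdfs:///tmp/events/")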

Example 4: tablePercentile

# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import table [as alias]
import numpy as np
from pyspark.sql.functions import asc

def tablePercentile(sc, column, percentile, tableName):
    sqlCtx = SQLContext(sc)
    df = sqlCtx.table(tableName)
    # sort by the column, select it, apply np.percentile to each Row, and collect the results
    return df.sort(asc(column)).select(column).map(lambda x: np.percentile(x, percentile)).collect()
Author: SophyXu, Project: PySpark-Framework, Lines: 7, Source: framework.py
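A hedged sketch of how tablePercentile might be called; the table and column names are assumptions and must already be registered with the SQLContext:

# 95th percentile of the duration column of a registered "interactions" table
tablePercentile(sc, "duration", 95, "interactions")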

Example 5: tableMedian

# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import table [as alias]
from numpy import percentile   # the original calls a bare percentile(); numpy's is assumed
from pyspark.sql.functions import asc

def tableMedian(sc, column, tableName):
    sqlCtx = SQLContext(sc)
    df = sqlCtx.table(tableName)
    # applied per Row as in the original; note numpy's percentile expects q on a 0-100 scale
    return df.sort(asc(column)).select(column).map(lambda x: percentile(x, 0.5)).collect()
Author: SophyXu, Project: PySpark-Framework, Lines: 7, Source: framework.py

Example 6: tableJoin

# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import table [as alias]
def tableJoin(sc, tb1, tb2, joinExp=None, joinType=None):
    sqlCtx = SQLContext(sc)
    df1 = sqlCtx.table(tb1)
    df2 = sqlCtx.table(tb2)
    # join the two registered tables on the given expression and join type
    return df1.join(df2, joinExp, joinType)
Author: SophyXu, Project: PySpark-Framework, Lines: 7, Source: framework.py

Example 7: tableMin

# Required import: from pyspark.sql import SQLContext [as alias]
# Or: from pyspark.sql.SQLContext import table [as alias]
def tableMin(sc, tableName, Column):
    sqlCtx = SQLContext(sc)
    df = sqlCtx.table(tableName)
    # aggregate the whole table and return the minimum of the given column
    return df.groupBy().min(Column).collect()
Author: SophyXu, Project: PySpark-Framework, Lines: 7, Source: framework.py
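Finally, a hedged sketch tying Examples 5 through 7 together; the table and column names are assumptions, and each call follows the signatures shown above:

tableMedian(sc, "duration", "interactions")       # per-Row percentile-style median of a column
tableJoin(sc, "interactions", "labels")           # default join of two registered tables
tableMin(sc, "interactions", "src_bytes")         # minimum of a column via groupBy().min()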


Note: The pyspark.sql.SQLContext.table method examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The code snippets were selected from open-source projects contributed by various developers, and copyright remains with the original authors. For distribution and use, please refer to the corresponding project's license; do not reproduce without permission.