

Java SQLContext.sparkContext Method Code Examples

This article collects typical usage examples of the Java method org.apache.spark.sql.SQLContext.sparkContext. If you are wondering what SQLContext.sparkContext does, or how to use it in Java, the curated code examples below may help. You can also explore further usage examples of the enclosing class, org.apache.spark.sql.SQLContext.


Six code examples of SQLContext.sparkContext are shown below, sorted by popularity by default. You can upvote any example you like or find useful; your votes help the site recommend better Java code examples.
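All six examples revolve around the same call: SQLContext.sparkContext() returns the SparkContext that the SQLContext was built on. As a warm-up, here is a minimal, self-contained sketch (assuming a Spark 1.x classpath; the master URL and app name are placeholders):

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;
import org.apache.spark.sql.SQLContext;

public class SparkContextFromSqlContextDemo {
  public static void main(String[] args) {
    // Placeholder master/appName; adjust for your environment.
    SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("sparkContext-demo");
    SparkContext sc = new SparkContext(conf);
    SQLContext sqlc = new SQLContext(sc);

    // sparkContext() hands back the very context the SQLContext wraps.
    SparkContext same = sqlc.sparkContext();
    System.out.println(same == sc);      // true: same underlying instance
    System.out.println(same.version());  // the running Spark version string

    sc.stop();
  }
}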

Example 1: interpret

import org.apache.spark.sql.SQLContext; // import the package/class the method depends on
@Override
public InterpreterResult interpret(String st, InterpreterContext context) {
  SparkInterpreter sparkInterpreter = getSparkInterpreter();

  if (sparkInterpreter.getSparkVersion().isUnsupportedVersion()) {
    return new InterpreterResult(Code.ERROR, "Spark "
        + sparkInterpreter.getSparkVersion().toString() + " is not supported");
  }

  SQLContext sqlc = sparkInterpreter.getSQLContext();
  SparkContext sc = sqlc.sparkContext();
  if (concurrentSQL()) {
    sc.setLocalProperty("spark.scheduler.pool", "fair");
  } else {
    sc.setLocalProperty("spark.scheduler.pool", null);
  }

  sc.setJobGroup(getJobGroup(context), "Zeppelin", false);
  Object rdd = null;
  try {
    // The method signature of sqlc.sql() changed
    // from  def sql(sqlText: String): SchemaRDD (1.2 and prior)
    // to    def sql(sqlText: String): DataFrame (1.3 and later),
    // so reflection is used to keep binary compatibility across all Spark versions.
    Method sqlMethod = sqlc.getClass().getMethod("sql", String.class);
    rdd = sqlMethod.invoke(sqlc, st);
  } catch (NoSuchMethodException | SecurityException | IllegalAccessException
      | IllegalArgumentException | InvocationTargetException e) {
    throw new InterpreterException(e);
  }

  String msg = ZeppelinContext.showDF(sc, context, rdd, maxResult);
  sc.clearJobGroup();
  return new InterpreterResult(Code.SUCCESS, msg);
}
 
Developer ID: lorthos, Project: incubator-zeppelin-druid, Lines: 37, Source: SparkSqlInterpreter.java
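The noteworthy part of Example 1 is the reflective call: sql() returned SchemaRDD through Spark 1.2 and DataFrame from 1.3 on, so invoking it via java.lang.reflect keeps one compiled artifact binary-compatible with both. Stripped to its essentials (a sketch, not the actual Zeppelin code):

import java.lang.reflect.Method;
import org.apache.spark.sql.SQLContext;

public class ReflectiveSqlSketch {
  // Invokes SQLContext.sql(String) without binding to its compile-time
  // return type; the caller receives a SchemaRDD or DataFrame as an Object.
  static Object runSql(SQLContext sqlc, String query) throws Exception {
    Method sqlMethod = sqlc.getClass().getMethod("sql", String.class);
    return sqlMethod.invoke(sqlc, query);
  }
}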

Example 2: cancel

import org.apache.spark.sql.SQLContext; // import the package/class the method depends on
@Override
public void cancel(InterpreterContext context) {
  SQLContext sqlc = getSparkInterpreter().getSQLContext();
  SparkContext sc = sqlc.sparkContext();

  sc.cancelJobGroup(getJobGroup(context));
}
 
Developer ID: lorthos, Project: incubator-zeppelin-druid, Lines: 8, Source: SparkSqlInterpreter.java
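The cancel() implementations in Examples 2-4 only work because interpret() first tagged its jobs with setJobGroup. A minimal sketch of that pairing (the group id below is a made-up placeholder; the real interpreters derive it from the InterpreterContext):

import org.apache.spark.SparkContext;
import org.apache.spark.sql.SQLContext;

public class JobGroupCancelSketch {
  public static void run(SQLContext sqlc) {
    SparkContext sc = sqlc.sparkContext();
    // Tag every job submitted from this thread with a group id.
    sc.setJobGroup("paragraph-42", "demo", false);  // "paragraph-42" is hypothetical
    // ... submit long-running SQL from this thread ...
    // Any other thread can later cancel the whole group at once:
    sc.cancelJobGroup("paragraph-42");
  }
}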

Example 3: cancel

import org.apache.spark.sql.SQLContext; // import the package/class the method depends on
@Override
public void cancel(InterpreterContext context) throws InterpreterException {
  SparkInterpreter sparkInterpreter = getSparkInterpreter();
  SQLContext sqlc = sparkInterpreter.getSQLContext();
  SparkContext sc = sqlc.sparkContext();

  sc.cancelJobGroup(Utils.buildJobGroupId(context));
}
 
Developer ID: apache, Project: zeppelin, Lines: 9, Source: SparkSqlInterpreter.java

Example 4: cancel

import org.apache.spark.sql.SQLContext; // import the package/class the method depends on
@Override
public void cancel() {
    SQLContext sqlc = getSparkInterpreter().getSQLContext();
    SparkContext sc = sqlc.sparkContext();

    sc.cancelJobGroup(jobGroup);
}
 
Developer ID: Stratio, Project: Explorer, Lines: 8, Source: SparkSqlInterpreter.java

Example 5: getProgress

import org.apache.spark.sql.SQLContext; // import the package/class the method depends on
public int getProgress() {
    SQLContext sqlc = getSparkInterpreter().getSQLContext();
    SparkContext sc = sqlc.sparkContext();
    JobProgressListener sparkListener = getSparkInterpreter().getJobProgressListener();
    int completedTasks = 0;
    int totalTasks = 0;

    DAGScheduler scheduler = sc.dagScheduler();
    HashSet<ActiveJob> jobs = scheduler.activeJobs();
    Iterator<ActiveJob> it = jobs.iterator();
    while (it.hasNext()) {
        ActiveJob job = it.next();
        String g = (String) job.properties().get("spark.jobGroup.id");
        if (jobGroup.equals(g)) {
            int[] progressInfo = null;
            if (sc.version().startsWith("1.0")) {
                progressInfo = getProgressFromStage_1_0x(sparkListener, job.finalStage());
            } else if (sc.version().startsWith("1.1") || sc.version().startsWith("1.2")) {
                progressInfo = getProgressFromStage_1_1x(sparkListener, job.finalStage());
            } else {
                logger.warn("Spark {}: getting progress information is not supported", sc.version());
                continue;
            }
            totalTasks += progressInfo[0];
            completedTasks += progressInfo[1];
        }
    }

    if (totalTasks == 0) {
        return 0;
    }
    return completedTasks * 100 / totalTasks;
}
 
Developer ID: Stratio, Project: Explorer, Lines: 34, Source: SparkSqlInterpreter.java
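Example 5 reaches into DAGScheduler internals, which is why it must branch on sc.version(). From Spark 1.2 onward, the public status-tracker API exposes the same task counts without version sniffing; the following is an alternative sketch, not what the Stratio interpreter actually ships:

import org.apache.spark.SparkContext;
import org.apache.spark.SparkJobInfo;
import org.apache.spark.SparkStageInfo;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.JavaSparkStatusTracker;

public class JobGroupProgressSketch {
  // Sums task counts across all stages of all jobs in a job group and
  // returns completion as a percentage.
  public static int progressPercent(SparkContext sc, String jobGroup) {
    JavaSparkStatusTracker tracker = JavaSparkContext.fromSparkContext(sc).statusTracker();
    int totalTasks = 0;
    int completedTasks = 0;
    for (int jobId : tracker.getJobIdsForGroup(jobGroup)) {
      SparkJobInfo job = tracker.getJobInfo(jobId);  // null if no longer tracked
      if (job == null) continue;
      for (int stageId : job.stageIds()) {
        SparkStageInfo stage = tracker.getStageInfo(stageId);
        if (stage == null) continue;
        totalTasks += stage.numTasks();
        completedTasks += stage.numCompletedTasks();
      }
    }
    return totalTasks == 0 ? 0 : completedTasks * 100 / totalTasks;
  }
}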

Example 6: interpret

import org.apache.spark.sql.SQLContext; // import the package/class the method depends on
@Override
public InterpreterResult interpret(String st) {

    SQLContext sqlc = getSparkInterpreter().getSQLContext();
    SparkContext sc = sqlc.sparkContext();
    sc.setJobGroup(jobGroup, "Notebook", false);
    DataFrame dataFrame;
    Row[] rows = null;
    try {
        dataFrame = sqlc.sql(st);
        rows = dataFrame.take(maxResult + 1);
    } catch (Exception e) {
        logger.error("Error", e);
        sc.clearJobGroup();
        return new InterpreterResult(Code.ERROR, e.getMessage());
    }

    String msg = null;
    // get field names
    List<Attribute> columns = scala.collection.JavaConverters.asJavaListConverter(
            dataFrame.queryExecution().analyzed().output()).asJava();
    for (Attribute col : columns) {
        if (msg == null) {
            msg = col.name();
        } else {
            msg += "\t" + col.name();
        }
    }
    msg += "\n";

    // ArrayType, BinaryType, BooleanType, ByteType, DecimalType, DoubleType, DynamicType, FloatType, FractionalType, IntegerType, IntegralType, LongType, MapType, NativeType, NullType, NumericType, ShortType, StringType, StructType

    for (int r = 0; r < maxResult && r < rows.length; r++) {
        Row row = rows[r];

        for (int i = 0; i < columns.size(); i++) {
            if (!row.isNullAt(i)) {
                msg += row.apply(i).toString();
            } else {
                msg += "null";
            }
            if (i != columns.size() - 1) {
                msg += "\t";
            }
        }
        msg += "\n";
    }

    if (rows.length > maxResult) {
        msg += "\n<font color=red>Results are limited to " + maxResult + ".</font>";
    }
    InterpreterResult rett = new InterpreterResult(Code.SUCCESS, "%table " + msg);
    sc.clearJobGroup();
    return rett;
}
 
Developer ID: Stratio, Project: Explorer, Lines: 56, Source: SparkSqlInterpreter.java
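Example 6 pulls column names out of the analyzed logical plan and builds the table by repeated string concatenation. On the public Spark 1.3-1.6 API, DataFrame.columns() plus a StringBuilder gets the same result more directly; a sketch of that alternative:

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;

public class TableFormatterSketch {
  // Renders up to maxResult rows of a DataFrame as tab-separated text,
  // mirroring the header/row layout that Example 6 produces.
  public static String toTsv(DataFrame df, int maxResult) {
    StringBuilder msg = new StringBuilder(String.join("\t", df.columns()));
    msg.append("\n");
    Row[] rows = df.take(maxResult + 1);
    for (int r = 0; r < maxResult && r < rows.length; r++) {
      Row row = rows[r];
      for (int i = 0; i < row.size(); i++) {
        msg.append(row.isNullAt(i) ? "null" : row.apply(i).toString());
        if (i != row.size() - 1) {
          msg.append("\t");
        }
      }
      msg.append("\n");
    }
    return msg.toString();
  }
}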


Note: The org.apache.spark.sql.SQLContext.sparkContext method examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. Consult each project's license before distributing or reusing the code, and do not republish without permission.