Python common._py2java Function Code Examples

This article collects and summarizes typical usage examples of the _py2java function from pyspark.mllib.common in Python. If you have been wondering what exactly _py2java does, how to call it, or where to find working examples, the hand-picked snippets below should help.


The sections below present 15 code examples of the _py2java function, sorted by popularity by default. You can upvote the examples you like or find useful; your feedback helps the system recommend better Python code samples.
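
Before diving into the examples, here is a minimal round-trip sketch of what _py2java and its counterpart _java2py do: convert a Python-side object into its JVM equivalent over Py4J, and back. (This sketch assumes a live SparkContext; the vector values are illustrative.)

from pyspark import SparkContext
from pyspark.mllib.common import _py2java, _java2py
from pyspark.mllib.linalg import Vectors

sc = SparkContext.getOrCreate()

# A Python-side MLlib vector ...
py_vec = Vectors.dense([1.0, 2.0, 3.0])
# ... becomes a Py4J handle to a JVM-side DenseVector ...
java_vec = _py2java(sc, py_vec)
# ... and the inverse helper converts it back to a Python DenseVector.
roundtrip = _java2py(sc, java_vec)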

Example 1: save

 def save(self, sc, path):
     """Save a IsotonicRegressionModel."""
     java_boundaries = _py2java(sc, self.boundaries.tolist())
     java_predictions = _py2java(sc, self.predictions.tolist())
     java_model = sc._jvm.org.apache.spark.mllib.regression.IsotonicRegressionModel(
         java_boundaries, java_predictions, self.isotonic)
     java_model.save(sc._jsc.sc(), path)
Developer: 0xqq, Project: spark, Lines: 7, Source: regression.py
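
A hedged usage sketch for this save method (the training data and path below are illustrative): train an isotonic regression model through the public MLlib API, save it, and load it back.

from pyspark.mllib.regression import IsotonicRegression, IsotonicRegressionModel

# (label, feature, weight) triples; values are illustrative.
data = sc.parallelize([(1.0, 1.0, 1.0), (2.0, 2.0, 1.0), (3.0, 3.0, 1.0)])
model = IsotonicRegression.train(data)
model.save(sc, "/tmp/isotonic-model")
loaded = IsotonicRegressionModel.load(sc, "/tmp/isotonic-model")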

Example 2: save

 def save(self, sc, path):
     """Save this NaiveBayesModel to the given path."""
     java_labels = _py2java(sc, self.labels.tolist())
     java_pi = _py2java(sc, self.pi.tolist())
     java_theta = _py2java(sc, self.theta.tolist())
     java_model = sc._jvm.org.apache.spark.mllib.classification.NaiveBayesModel(
         java_labels, java_pi, java_theta)
     java_model.save(sc._jsc.sc(), path)
Developer: OspreyX, Project: spark, Lines: 7, Source: classification.py
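
The same pattern applies here; a hypothetical round trip through the public API (data and path are illustrative):

from pyspark.mllib.classification import NaiveBayes, NaiveBayesModel
from pyspark.mllib.regression import LabeledPoint

data = sc.parallelize([
    LabeledPoint(0.0, [0.0, 1.0]),
    LabeledPoint(1.0, [1.0, 0.0]),
])
model = NaiveBayes.train(data)
model.save(sc, "/tmp/nb-model")
loaded = NaiveBayesModel.load(sc, "/tmp/nb-model")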

Example 3: save

 def save(self, sc, path):
     """
     Save this model to the given path.
     """
     java_centers = _py2java(sc, [_convert_to_vector(c) for c in self.centers])
     java_model = sc._jvm.org.apache.spark.mllib.clustering.KMeansModel(java_centers)
     java_model.save(sc._jsc.sc(), path)
Developer: 11wzy001, Project: spark, Lines: 7, Source: clustering.py
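
A hypothetical call sequence for this method (points, k, and path are illustrative):

from pyspark.mllib.clustering import KMeans, KMeansModel

points = sc.parallelize([[0.0, 0.0], [1.0, 1.0], [9.0, 8.0], [8.0, 9.0]])
model = KMeans.train(points, k=2, maxIterations=10)
model.save(sc, "/tmp/kmeans-model")
loaded = KMeansModel.load(sc, "/tmp/kmeans-model")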

Example 4: perform_pca

def perform_pca(matrix, row_count, nr_principal_components=2):
    """Return principal components of the input matrix.

    This function uses MLlib's ``RowMatrix`` to compute principal components.

    Args:
        matrix: An RDD of sparse rows, each a mapping from column index (int)
            to value (float), together representing a sparse matrix. This is
            the format returned by ``center_matrix``, but centering the matrix
            first is not required.
        row_count: The size (N) of the N x N ``matrix``.
        nr_principal_components: Number of components we want to obtain. This
            value must be less than or equal to the number of rows in the input
            square matrix.

    Returns:
        A ``numpy`` array with ``nr_principal_components`` columns and the
        same number of rows as the input ``matrix``.
    """

    py_rdd = matrix.map(lambda row: linalg.Vectors.sparse(row_count, row))
    sc = pyspark.SparkContext._active_spark_context
    java_rdd = mllib_common._py2java(sc, py_rdd)
    scala_rdd = java_rdd.rdd()
    row_matrix = sc._jvm.org.apache.spark.mllib.linalg.distributed.RowMatrix(scala_rdd)
    pca = row_matrix.computePrincipalComponents(nr_principal_components)
    pca = mllib_common._java2py(sc, pca)
    return pca.toArray()
Developer: buptjkshub, Project: spark-examples, Lines: 30, Source: variants_pca.py
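
A hypothetical invocation, assuming each RDD row is an {index: value} mapping that Vectors.sparse accepts (the 3 x 3 matrix below is illustrative):

import pyspark

sc = pyspark.SparkContext.getOrCreate()
rows = sc.parallelize([{0: 1.0}, {1: 2.0}, {0: 0.5, 2: 1.0}])
components = perform_pca(rows, row_count=3, nr_principal_components=2)
print(components.shape)  # (3, 2)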

Example 5: autofit

def autofit(ts, maxp=5, maxd=2, maxq=5, sc=None):
    """
    Utility function to help in fitting an automatically selected ARIMA model based on approximate
    Akaike Information Criterion (AIC) values. The model search is based on the heuristic
    developed by Hyndman and Khandakar (2008) and described in
    http://www.jstatsoft.org/v27/i03/paper. In contrast to the algorithm in the paper, we use an approximation to
    the AIC, rather than an exact value. Note that if the maximum differencing order provided
    does not suffice to induce stationarity, the function returns a failure, with the appropriate
    message. Additionally, note that the heuristic only considers models that have parameters
    satisfying the stationarity/invertibility constraints. Finally, note that our algorithm is
    slightly more lenient than the original heuristic. For example, the original heuristic
    rejects models with parameters "close" to violating stationarity/invertibility. We only
    reject those that actually violate it.
   
    This functionality is even less mature than some of the other model fitting functions here, so
    use it with caution.
   
    Parameters
    ----------
    ts:
        time series to which to automatically fit an ARIMA model
    maxp:
        limit for the AR order
    maxd:
        limit for the differencing order
    maxq:
        limit for the MA order
    sc:
        The SparkContext, required.
    
    returns an ARIMAModel
    """
    jmodel = sc._jvm.com.cloudera.sparkts.models.ARIMA.autoFit(_py2java(sc, ts), maxp, maxd, maxq)
    return ARIMAModel(jmodel=jmodel, sc=sc)
Developer: zachahuy, Project: spark-timeseries, Lines: 34, Source: ARIMA.py
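
A hypothetical call, assuming spark-timeseries is installed and sc is a live SparkContext (the series is illustrative):

import numpy as np

ts = np.array([0.8, 1.1, 0.9, 1.4, 1.2, 1.6, 1.5, 1.9])
model = autofit(ts, maxp=3, maxd=1, maxq=3, sc=sc)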

Example 6: _call_java

def _call_java(sc, java_obj, name, *args):
    """
    Method copied from pyspark.ml.wrapper.  Uses private Spark APIs.
    """
    m = getattr(java_obj, name)
    java_args = [_py2java(sc, arg) for arg in args]
    return _java2py(sc, m(*java_args))
Developer: Anhmike, Project: spark-sklearn, Lines: 7, Source: util.py
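
A hedged sketch of using this helper: given a Py4J handle (here a hypothetical java_model obtained elsewhere), _call_java resolves the named method, marshals the arguments into the JVM, and marshals the result back.

# java_model is a hypothetical Py4J handle to a JVM-side model.
num_features = _call_java(sc, java_model, "numFeatures")
prediction = _call_java(sc, java_model, "predict", 1.0)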

Example 7: save

 def save(self, sc, path):
     """
     Save this model to the given path.
     """
     java_model = sc._jvm.org.apache.spark.mllib.classification.LogisticRegressionModel(
         _py2java(sc, self._coeff), self.intercept, self.numFeatures, self.numClasses)
     java_model.save(sc._jsc.sc(), path)
Developer: vijaykiran, Project: spark, Lines: 7, Source: classification.py
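
A hypothetical round trip through the public training API (data and path are illustrative):

from pyspark.mllib.classification import LogisticRegressionWithLBFGS, LogisticRegressionModel
from pyspark.mllib.regression import LabeledPoint

data = sc.parallelize([
    LabeledPoint(0.0, [0.0, 1.0]),
    LabeledPoint(1.0, [1.0, 0.0]),
])
model = LogisticRegressionWithLBFGS.train(data, iterations=10)
model.save(sc, "/tmp/lr-model")
loaded = LogisticRegressionModel.load(sc, "/tmp/lr-model")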

Example 8: forecast

 def forecast(self, ts, nfuture):
     """
     Provides fitted values for timeseries ts as 1-step ahead forecasts, based on current
     model parameters, and then provides `nfuture` periods of forecast. We assume AR terms
     prior to the start of the series are equal to the model's intercept term (or 0.0, if fit
     without an intercept term). Meanwhile, MA terms prior to the start are assumed to be 0.0.
     If there is differencing, the first d terms come from the original series.
    
     Parameters
     ----------
     ts:
         Timeseries to use as gold-standard. Each value (i) in the returning series
         is a 1-step ahead forecast of ts(i). We use the difference ts(i) - estimate(i)
         to calculate the error at time i, which is used for the moving
         average terms. Numpy array.
     nfuture:
         Periods in the future to forecast (beyond length of ts)
         
     Returns a series consisting of fitted 1-step ahead forecasts for historicals and then
     `nfuture` periods of forecasts. Note that for future values the error terms become
     zero and prior predictions are used for any AR terms.
     
     """
     jts = _py2java(self._ctx, Vectors.dense(ts))
     jfore = self._jmodel.forecast(jts, nfuture)
     return _java2py(self._ctx, jfore)
Developer: pegli, Project: spark-timeseries, Lines: 26, Source: ARIMA.py
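
A hypothetical call, assuming model is a fitted spark-timeseries ARIMAModel (for instance from autofit above); the observed series is illustrative:

import numpy as np

observed = np.array([1.0, 1.2, 0.9, 1.4, 1.1])
# Fitted 1-step forecasts for the 5 observed points, then 3 future periods.
forecasted = model.forecast(observed, 3)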

Example 9: log_likelihood

 def log_likelihood(self, ts):
     """
     Returns the log likelihood of the parameters on the given time series.
     
     Based on http://www.unc.edu/~jbhill/Bollerslev_GARCH_1986.pdf
     """
     likelihood = self._jmodel.logLikelihood(_py2java(self._ctx, Vectors.dense(ts)))
     return _java2py(self._ctx, likelihood)
Developer: BabelTower, Project: spark-timeseries, Lines: 8, Source: GARCH.py
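
A hypothetical call, assuming model is a fitted spark-timeseries GARCHModel (the series is illustrative):

import numpy as np

ll = model.log_likelihood(np.array([0.1, -0.3, 0.2, 0.05]))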

Example 10: _new_java_obj

def _new_java_obj(sc, java_class, *args):
    """
    Construct a new Java object.
    """
    java_obj = _jvm()
    for name in java_class.split("."):
        java_obj = getattr(java_obj, name)
    java_args = [_py2java(sc, arg) for arg in args]
    return java_obj(*java_args)
Developer: Anhmike, Project: spark-sklearn, Lines: 9, Source: util.py
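
A hedged sketch of calling this helper; the target class is illustrative, and any JVM class reachable from the Py4J gateway should work the same way:

# Builds a java.lang.StringBuilder on the JVM side via Py4J.
sb = _new_java_obj(sc, "java.lang.StringBuilder")
sb.append("hello")
print(sb.toString())  # "hello"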

Example 11: _make_java_param_pair

 def _make_java_param_pair(self, param, value):
     """
     Makes a Java param pair.
     """
     sc = SparkContext._active_spark_context
     param = self._resolveParam(param)
     java_param = self._java_obj.getParam(param.name)
     java_value = _py2java(sc, value)
     return java_param.w(java_value)
Developer: Atry, Project: spark, Lines: 9, Source: wrapper.py

Example 12: add_time_dependent_effects

 def add_time_dependent_effects(self, ts, destts):
     """
     Given a timeseries, apply an ARIMA(p, d, q) model to it.
     We assume that prior MA terms are 0.0 and prior AR terms are equal to the intercept,
     or 0.0 if fit without an intercept.
     
     Parameters
     ----------
     ts:
         Time series of i.i.d. observations as a DenseVector
     destts:
         Time series with added time-dependent effects as a DenseVector.
     
     returns the dest series, representing the application of the model to provided error
      terms, for convenience.
     """
     result = self._jmodel.addTimeDependentEffects(_py2java(self._ctx, ts), _py2java(self._ctx, destts))
     return _java2py(self._ctx, result)
Developer: zachahuy, Project: spark-timeseries, Lines: 18, Source: ARIMA.py

Example 13: remove_time_dependent_effects

 def remove_time_dependent_effects(self, ts, destts):
     """
     Given a timeseries, assume that it is the result of an ARIMA(p, d, q) process, and apply
     inverse operations to obtain the original series of underlying errors.
     To do so, we assume prior MA terms are 0.0, and prior AR terms are equal to the model's
     intercept, or 0.0 if fit without an intercept.
     
     Parameters
     ----------
     ts:
         Time series of observations with this model's characteristics as a DenseVector
     destts:
         Time series with removed time-dependent effects as a DenseVector.
     
     returns the dest series, representing the remaining errors, for convenience.
     """
     result = self._jmodel.removeTimeDependentEffects(_py2java(self._ctx, ts), _py2java(self._ctx, destts))
     return _java2py(self._ctx, result)
Developer: zachahuy, Project: spark-timeseries, Lines: 18, Source: ARIMA.py
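
A hypothetical round trip over this method and its add_time_dependent_effects counterpart above, assuming model is a fitted ARIMAModel; per the docstrings, destts is a preallocated DenseVector of the same length:

from pyspark.mllib.linalg import Vectors

errors = Vectors.dense([0.1, -0.2, 0.3, 0.0])
buf1 = Vectors.dense([0.0] * 4)
buf2 = Vectors.dense([0.0] * 4)
with_effects = model.add_time_dependent_effects(errors, buf1)
recovered = model.remove_time_dependent_effects(with_effects, buf2)  # approximately equal to errors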

Example 14: gradient

 def gradient(self, ts):
     """
     Find the gradient of the log likelihood with respect to the given time series.
     
     Based on http://www.unc.edu/~jbhill/Bollerslev_GARCH_1986.pdf
     
     Returns a 3-element array containing the gradient for the alpha, beta, and omega parameters.
     """
     gradient = self._jmodel.gradient(_py2java(self._ctx, Vectors.dense(ts)))
     return _java2py(self._ctx, gradient)
Developer: BabelTower, Project: spark-timeseries, Lines: 10, Source: GARCH.py

Example 15: _new_java_obj

 def _new_java_obj(java_class, *args):
     """
     Returns a new Java object.
     """
     sc = SparkContext._active_spark_context
     java_obj = _jvm()
     for name in java_class.split("."):
         java_obj = getattr(java_obj, name)
     java_args = [_py2java(sc, arg) for arg in args]
     return java_obj(*java_args)
Developer: Atry, Project: spark, Lines: 10, Source: wrapper.py


Note: The pyspark.mllib.common._py2java examples in this article were compiled by 纯净天空 from open-source code and documentation platforms such as GitHub and MSDocs. The snippets were selected from open-source projects contributed by many developers; copyright in the source code remains with the original authors. Consult each project's License before distributing or using the code, and do not republish without permission.