This page collects typical usage examples of the Python method sklearn.gaussian_process.GaussianProcessRegressor.kernel_. If you are unsure what GaussianProcessRegressor.kernel_ does or how to use it, the curated code examples below may help. You can also read further about its containing class, sklearn.gaussian_process.GaussianProcessRegressor.
Below, 1 code example of the GaussianProcessRegressor.kernel_ method is shown.
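Before the full excerpt, a minimal self-contained sketch (with assumed toy data) may clarify what kernel_ is: after fit, the estimator stores a clone of the kernel whose hyperparameters have been optimized by maximizing the log marginal likelihood, while the kernel passed to the constructor is left untouched.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(40, 1))          # toy 1-D inputs (assumed data)
y = np.sin(X).ravel() + 0.1 * rng.randn(40)  # noisy targets

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.1**2).fit(X, y)

print(gp.kernel)   # the kernel as passed in, hyperparameters unchanged
print(gp.kernel_)  # fitted clone, hyperparameters optimized on (X, y)
```

Note that kernel_ is callable: `gp.kernel_(X)` evaluates the optimized covariance matrix, which the excerpt below exploits to recompute the log marginal likelihood by hand.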
Example 1: print
# Required import: from sklearn.gaussian_process import GaussianProcessRegressor [as alias]
# Or: from sklearn.gaussian_process.GaussianProcessRegressor import kernel_ [as alias]
# inputs_x_array, y_pred, len_x2, gp, X, y are defined earlier in the full script
index_y2 = 7  # row index into the radius grid; only valid while smaller than len_x1
print("Radius is " + str(inputs_x_array[:, 0][index_y2*len_x2:index_y2*len_x2 + len_x2][0]) + "m")
plt.figure()
plt.scatter(inputs_x_array[:, 1][index_y2*len_x2:index_y2*len_x2 + len_x2],
            y_pred[index_y2*len_x2:index_y2*len_x2 + len_x2])  # from x2_min to x2_max
plt.xlabel('Y Label (time)')
plt.ylabel('Z Label (density)')
plt.show()
print(gp.kernel_)  # kernel with hyperparameters optimized during fit
print(gp.log_marginal_likelihood(gp.kernel_.theta))  # log marginal likelihood at those hyperparameters
from scipy.linalg import cholesky, cho_solve  # needed for the manual computation below

alpha = 1e-10
input_prediction = gp.predict(X, return_std=True)
K, K_gradient = gp.kernel_(X, eval_gradient=True)
K[np.diag_indices_from(K)] += alpha  # jitter the diagonal for numerical stability
L = cholesky(K, lower=True)  # K = L L^T
# Support multi-dimensional output of self.y_train_
if y.ndim == 1:
    y = y[:, np.newaxis]
alpha = cho_solve((L, True), y)  # solves K alpha = y via the Cholesky factor
log_likelihood_dims = -0.5 * np.einsum("ik,ik->k", y, alpha)
log_likelihood_dims -= np.log(np.diag(L)).sum()
log_likelihood_dims -= (K.shape[0] / 2.) * np.log(2 * np.pi)
log_likelihood = log_likelihood_dims.sum(-1)
print(log_likelihood)
mean_sq_rel_err = ((input_prediction[0][:, 0] - y[:, 0])**2. / y[:, 0]**2.)  # per-point squared relative error (mean not yet taken)
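The Cholesky-based computation above mirrors what sklearn does internally. As a sanity check, the following sketch (on assumed toy data, with hypothetical names X_toy, y_toy, gp_toy) rebuilds the log marginal likelihood by hand and compares it against gp.log_marginal_likelihood:

```python
import numpy as np
from scipy.linalg import cholesky, cho_solve
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.RandomState(1)
X_toy = rng.uniform(0, 5, size=(30, 1))            # assumed toy inputs
y_toy = np.sin(X_toy).ravel() + 0.05 * rng.randn(30)

gp_toy = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-2).fit(X_toy, y_toy)

K = gp_toy.kernel_(X_toy)                # covariance at optimized hyperparameters
K[np.diag_indices_from(K)] += gp_toy.alpha  # same diagonal jitter as during fit
L = cholesky(K, lower=True)              # K = L L^T
Y = y_toy[:, np.newaxis]
a = cho_solve((L, True), Y)              # solves K a = Y
ll = (-0.5 * np.einsum("ik,ik->k", Y, a)
      - np.log(np.diag(L)).sum()
      - K.shape[0] / 2.0 * np.log(2 * np.pi)).sum()

print(ll)
print(gp_toy.log_marginal_likelihood(gp_toy.kernel_.theta))
```

The two printed values should agree, since this is term-by-term the same formula: the data-fit term -0.5 y^T K^{-1} y, the complexity penalty log|K| computed from the Cholesky diagonal, and the normalization constant.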
Developer ID: AbhilashMathews, Project: AbhilashMathews.github.io, Lines of code: 32, Source file: 2D_GPR_spatiotemporal_profiles-sklearn.py