This page collects typical usage examples of the Python method statsmodels.regression.linear_model.OLS.loglike. If you are wondering what OLS.loglike does and how to use it in practice, the curated example below may help. You can also browse further usage examples of the containing class, statsmodels.regression.linear_model.OLS.
One code example of OLS.loglike is shown below; examples are ordered by popularity by default.
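Before the full example, here is a minimal, self-contained sketch of calling OLS.loglike directly. The data and variable names are illustrative, not taken from the example below. With no `scale` argument, loglike returns the concentrated (profile) Gaussian log-likelihood; with an explicit error variance it returns the non-concentrated value, which is the form the test below relies on.

import numpy as np
from statsmodels.regression.linear_model import OLS

# Illustrative synthetic data: y = 2*x + Gaussian noise
rng = np.random.default_rng(42)
x = rng.standard_normal((50, 1))
y = 2 * x[:, 0] + rng.standard_normal(50)

model = OLS(y, x)
res = model.fit()

# Concentrated (profile) log-likelihood at the fitted parameters;
# this is what `res.llf` reports for OLS
print(model.loglike(res.params))
# Non-concentrated log-likelihood with an explicit error variance
# (here the unbiased estimate stored in `res.scale`)
print(model.loglike(res.params, scale=res.scale))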
Example 1: test_ols
# Required imports (the original snippet assumed these were already in scope):
import numpy as np
from numpy.testing import assert_allclose
from scipy.stats import norm
from statsmodels.regression.linear_model import OLS
from statsmodels.tsa.statespace.recursive_ls import RecursiveLS
from statsmodels.tools.eval_measures import aic, bic

# `endog` and `dta` are module-level test fixtures: `endog` is the dependent
# series and `dta` is a DataFrame containing an 'm1' column (the regressor).
def test_ols():
    # More comprehensive tests against OLS estimates
    mod = RecursiveLS(endog, dta['m1'])
    res = mod.fit()

    mod_ols = OLS(endog, dta['m1'])
    res_ols = mod_ols.fit()

    # Regression coefficients, standard errors, and estimated scale
    assert_allclose(res.params, res_ols.params)
    assert_allclose(res.bse, res_ols.bse)
    # Note: scale here is computed according to Harvey, 1989, 4.2.5, and is
    # there called the ML estimator, sometimes (e.g. later in section 5)
    # denoted \tilde \sigma_*^2
    assert_allclose(res.filter_results.obs_cov[0, 0], res_ols.scale)
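    # (Illustrative extra check, not part of the original test: for OLS,
    # `scale` is the unbiased error-variance estimate, ssr divided by the
    # residual degrees of freedom nobs - k.)
    assert_allclose(res_ols.scale, res_ols.ssr / res_ols.df_resid)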
    # OLS residuals are equivalent to smoothed forecast errors
    # (the latter are defined as e_t|T by Harvey, 1989, 5.4.5)
    # (this follows since the smoothed state simply contains the
    # full-information estimates of the regression coefficients)
    actual = (mod.endog[:, 0] -
              np.sum(mod['design', 0, :, :] * res.smoothed_state, axis=0))
    assert_allclose(actual, res_ols.resid)

    # Given the estimate of scale as `sum(v_t^2 / f_t) / (T - d)` (see
    # Harvey, 1989, 4.2.5 on p. 183), llf_recursive is equivalent to the
    # full OLS loglikelihood (i.e. without the scale concentrated out).
    desired = mod_ols.loglike(res_ols.params, scale=res_ols.scale)
    assert_allclose(res.llf_recursive, desired)
    # Alternatively, we can construct the concentrated OLS loglikelihood
    # by computing the scale term with `nobs` in the denominator rather than
    # `nobs - d`.
    scale_alternative = np.sum((
        res.standardized_forecasts_error[0, 1:] *
        res.filter_results.obs_cov[0, 0]**0.5)**2) / mod.nobs
    llf_alternative = np.log(norm.pdf(res.resid_recursive, loc=0,
                                      scale=scale_alternative**0.5)).sum()
    assert_allclose(llf_alternative, res_ols.llf)

    # Prediction
    actual = res.forecast(10, design=np.ones((1, 1, 10)))
    assert_allclose(actual, res_ols.predict(np.ones((10, 1))))
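    # (Clarifying note, not part of the original test: RecursiveLS carries the
    # regressors in the state-space design matrix, so out-of-sample forecasts
    # need the future design values; here 10 periods of ones, matching the
    # exog passed to `res_ols.predict` above.)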
    # Sums of squares, R^2
    assert_allclose(res.ess, res_ols.ess)
    assert_allclose(res.ssr, res_ols.ssr)
    assert_allclose(res.centered_tss, res_ols.centered_tss)
    assert_allclose(res.uncentered_tss, res_ols.uncentered_tss)
    assert_allclose(res.rsquared, res_ols.rsquared)

    # Mean squares
    assert_allclose(res.mse_model, res_ols.mse_model)
    assert_allclose(res.mse_resid, res_ols.mse_resid)
    assert_allclose(res.mse_total, res_ols.mse_total)

    # Hypothesis tests
    actual = res.t_test('m1 = 0')
    desired = res_ols.t_test('m1 = 0')
    assert_allclose(actual.statistic, desired.statistic)
    assert_allclose(actual.pvalue, desired.pvalue, atol=1e-15)

    actual = res.f_test('m1 = 0')
    desired = res_ols.f_test('m1 = 0')
    assert_allclose(actual.statistic, desired.statistic)
    assert_allclose(actual.pvalue, desired.pvalue, atol=1e-15)

    # Information criteria
    # Note: the llf and llf_obs given in the results are based on the Kalman
    # filter, so the ic given in the results will not be identical to the
    # OLS versions. Additionally, llf_recursive is comparable to the
    # non-concentrated llf, and not the concentrated llf that is used by
    # default in OLS. Compute new ic based on llf_alternative to compare.
    actual_aic = aic(llf_alternative, res.nobs_effective, res.df_model)
    assert_allclose(actual_aic, res_ols.aic)

    actual_bic = bic(llf_alternative, res.nobs_effective, res.df_model)
    assert_allclose(actual_bic, res_ols.bic)
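The concentrated/non-concentrated distinction used above can be checked directly on a plain OLS fit. A minimal sketch with synthetic, illustrative data: with sigma2 = ssr / nobs, the concentrated log-likelihood is -nobs/2 * (log(2*pi*sigma2) + 1), which is what `res.llf` reports, while OLS.loglike with an explicit scale gives the non-concentrated value, as in the test above.

import numpy as np
from numpy.testing import assert_allclose
from statsmodels.regression.linear_model import OLS

rng = np.random.default_rng(0)
exog = np.column_stack([np.ones(100), rng.standard_normal(100)])
endog = exog @ np.array([1.0, 0.5]) + rng.standard_normal(100)
res = OLS(endog, exog).fit()

# Concentrated llf: the scale term uses nobs in the denominator
sigma2 = res.ssr / res.nobs
assert_allclose(-res.nobs / 2 * (np.log(2 * np.pi * sigma2) + 1), res.llf)

# Non-concentrated llf: explicit scale (the unbiased ssr / (nobs - k));
# this is the quantity compared against llf_recursive in the test above
llf_full = res.model.loglike(res.params, scale=res.scale)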