This article collects typical usage examples of the Python method statsmodels.duration.hazard_regression.PHReg.fit_regularized. If you are wondering what PHReg.fit_regularized does, how to call it, or what real code that uses it looks like, the selected examples below may help. You can also read further about the containing class, statsmodels.duration.hazard_regression.PHReg.
Two code examples of the PHReg.fit_regularized method are shown below, sorted by popularity by default. You can upvote the examples you like or find useful; your votes help the system recommend better Python code examples.
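Before the test examples, here is a minimal sketch of fitting a penalized Cox proportional hazards model and calling fit_regularized. The synthetic data, sample size, and alpha value are illustrative assumptions, not taken from the examples below.

# Minimal usage sketch (synthetic data; values chosen for illustration only)
import numpy as np
from statsmodels.duration.hazard_regression import PHReg

np.random.seed(0)
n, p = 200, 4
exog = np.random.normal(size=(n, p))                       # covariates
time = -np.log(np.random.uniform(size=n))                  # positive event/censoring times
status = (np.random.uniform(size=n) < 0.7).astype(float)   # 1 = event observed, 0 = censored

mod = PHReg(time, exog, status=status, ties='breslow')
rslt = mod.fit_regularized(alpha=0.1)                      # elastic net penalty weight
print(rslt.params)                                         # penalized coefficient estimates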
Example 1: test_fit_regularized
# Module to import: from statsmodels.duration.hazard_regression import PHReg [as alias]
# Or: from statsmodels.duration.hazard_regression.PHReg import fit_regularized [as alias]
def test_fit_regularized(self):

    # Data set sizes
    for n, p in (50, 2), (100, 5):

        # Penalty weights
        for js, s in enumerate([0, 0.1]):

            coef_name = "coef_%d_%d_%d" % (n, p, js)
            coef = getattr(survival_enet_r_results, coef_name)

            fname = "survival_data_%d_%d.csv" % (n, p)
            time, status, entry, exog = self.load_file(fname)

            exog -= exog.mean(0)
            exog /= exog.std(0, ddof=1)

            mod = PHReg(time, exog, status=status, ties='breslow')
            rslt = mod.fit_regularized(alpha=s)

            # The agreement isn't very high, the issue may be on
            # their side.  They seem to use some approximations
            # that we are not using.
            assert_allclose(rslt.params, coef, rtol=0.3)

            # Smoke test for summary
            smry = rslt.summary()
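Both test methods on this page come from the statsmodels test suite, so they rely on fixtures that are not shown: self.load_file reads a bundled survival-data CSV, and survival_enet_r_results appears to be a results module shipped with the statsmodels tests holding reference coefficients computed in R (glmnet). A sketch of the other imports the tests assume:

# Imports assumed by the test code above and below
import numpy as np
from numpy.testing import assert_allclose, assert_equal
from statsmodels.duration.hazard_regression import PHReg
# survival_enet_r_results: reference R results bundled with the statsmodels
# test suite; its exact module path is not shown in these examples.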
Example 2: test_fit_regularized
# Module to import: from statsmodels.duration.hazard_regression import PHReg [as alias]
# Or: from statsmodels.duration.hazard_regression.PHReg import fit_regularized [as alias]
def test_fit_regularized(self):

    # Data set sizes
    for n, p in (50, 2), (100, 5):

        # Penalty weights
        for js, s in enumerate([0, 0.1]):

            coef_name = "coef_%d_%d_%d" % (n, p, js)
            params = getattr(survival_enet_r_results, coef_name)

            fname = "survival_data_%d_%d.csv" % (n, p)
            time, status, entry, exog = self.load_file(fname)

            exog -= exog.mean(0)
            exog /= exog.std(0, ddof=1)

            model = PHReg(time, exog, status=status, ties='breslow')
            sm_result = model.fit_regularized(alpha=s)

            # The agreement isn't very high, the issue may be on
            # the R side.  See below for further checks.
            assert_allclose(sm_result.params, params, rtol=0.3)

            # The penalized log-likelihood that we are maximizing.
            def plf(params):
                llf = model.loglike(params) / len(time)
                L1_wt = 1
                llf = llf - s * ((1 - L1_wt) * np.sum(params**2) / 2
                                 + L1_wt * np.sum(np.abs(params)))
                return llf

            # Confirm that we are doing better than glmnet.
            llf_r = plf(params)
            llf_sm = plf(sm_result.params)
            assert_equal(np.sign(llf_sm - llf_r), 1)
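For reference, the quantity that plf evaluates is simply a restatement of the code above: with \ell(\beta) the Cox partial log-likelihood returned by model.loglike and n = len(time),

    plf(\beta) = \frac{\ell(\beta)}{n} - s\left(\frac{1 - w}{2}\,\lVert\beta\rVert_2^2 + w\,\lVert\beta\rVert_1\right), \qquad w = \texttt{L1\_wt} = 1,

so the penalty reduces to a pure lasso term s\,\lVert\beta\rVert_1, and the final assertion checks that the statsmodels estimate attains a strictly higher penalized objective than the glmnet (R) reference coefficients.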