

Python xgboost.plot_importance Method Code Examples

This article compiles typical usage examples of the xgboost.plot_importance method in Python. If you are wondering how xgboost.plot_importance is used in practice, or what real code that calls it looks like, the curated examples below may help. You can also explore further usage examples from the xgboost package.


The following presents 4 code examples of the xgboost.plot_importance method, sorted by popularity by default. You can upvote the examples you find useful; your ratings help the system recommend better Python code examples.
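
Before going through the examples, a minimal, self-contained usage sketch may help. xgboost.plot_importance takes a fitted Booster or XGBModel (or a dict of feature scores) and draws a horizontal bar chart of feature importance on a matplotlib Axes, which it returns. The data and parameter values below are illustrative assumptions, not taken from any of the projects cited here:

import numpy as np
import xgboost as xgb
import matplotlib.pyplot as plt

# Small synthetic classification problem
rng = np.random.RandomState(42)
X = rng.rand(300, 4)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

model = xgb.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)

# plot_importance returns the matplotlib Axes it drew on
ax = xgb.plot_importance(model, height=0.4, title="Feature importance")
plt.show()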

Example 1: plot_importance

# Required module: import xgboost [as alias]
# Or: from xgboost import plot_importance [as alias]
def plot_importance(self, ax=None, height=0.2,
                        xlim=None, title='Feature importance',
                        xlabel='F score', ylabel='Features',
                        grid=True, **kwargs):

        """Plot importance based on fitted trees.

        Parameters
        ----------
        ax : matplotlib Axes, default None
            Target axes instance. If None, new figure and axes will be created.
        height : float, default 0.2
            Bar height, passed to ax.barh()
        xlim : tuple, default None
            Tuple passed to axes.xlim()
        title : str, default "Feature importance"
            Axes title. To disable, pass None.
        xlabel : str, default "F score"
            X axis title label. To disable, pass None.
        ylabel : str, default "Features"
            Y axis title label. To disable, pass None.
        grid : bool, default True
            Whether to turn the axes grid on.
        kwargs :
            Other keywords passed to ax.barh()

        Returns
        -------
        ax : matplotlib Axes
        """

        import xgboost as xgb

        if not isinstance(self._df.estimator, xgb.XGBModel):
            raise ValueError('estimator must be XGBRegressor or XGBClassifier')
        # print(type(self._df.estimator.booster), self._df.estimator.booster)
        return xgb.plot_importance(self._df.estimator,
                                   ax=ax, height=height, xlim=xlim, title=title,
                                   xlabel=xlabel, ylabel=ylabel, grid=grid, **kwargs)
Developer ID: pandas-ml, Project: pandas-ml, Lines of code: 39, Source file: base.py

Example 2: plot_importance

# Required module: import xgboost [as alias]
# Or: from xgboost import plot_importance [as alias]
def plot_importance(self):
        ax = xgb.plot_importance(self.model)
        self.save_topn_features()
        return ax 
Developer ID: ChenglongChen, Project: kaggle-HomeDepot, Lines of code: 6, Source file: xgb_utils.py

Example 3: save_topn_features

# Required module: import xgboost [as alias]
# Or: from xgboost import plot_importance [as alias]
def save_topn_features(self, fname="XGBRegressor_topn_features.txt", topn=-1):
        ax = xgb.plot_importance(self.model)
        # y-tick labels run bottom-to-top (least to most important),
        # so reverse them to write features in descending importance
        yticklabels = ax.get_yticklabels()[::-1]
        if topn == -1:
            topn = len(yticklabels)
        else:
            topn = min(topn, len(yticklabels))
        with open(fname, "w") as f:
            for i in range(topn):
                f.write("%s\n"%yticklabels[i].get_text()) 
Developer ID: ChenglongChen, Project: kaggle-HomeDepot, Lines of code: 12, Source file: xgb_utils.py
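
As a point of comparison, the top-N feature names in Example 3 can also be obtained without going through the plot, by reading the importance scores directly from the underlying booster. The sketch below is an alternative written for illustration, not part of the original project; it assumes a fitted XGBRegressor/XGBClassifier called model on a recent xgboost where get_booster() is available, and the output file name is hypothetical:

def save_topn_features_direct(model, fname="topn_features.txt", topn=-1):
    # get_score() maps feature name -> importance; the default
    # importance_type "weight" matches plot_importance's F score
    scores = model.get_booster().get_score(importance_type="weight")
    ranked = sorted(scores, key=scores.get, reverse=True)
    if topn > 0:
        ranked = ranked[:topn]
    with open(fname, "w") as f:
        for name in ranked:
            f.write("%s\n" % name)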

Example 4: run_train_validation

# Required module: import xgboost [as alias]
# Or: from xgboost import plot_importance [as alias]
def run_train_validation(self):
        x_train, y_train, x_validation, y_validation = self.get_train_validationset()
        dtrain = xgb.DMatrix(x_train, label=y_train, feature_names=x_train.columns)
        dvalidation = xgb.DMatrix(x_validation, label=y_validation, feature_names=x_validation.columns)
        self.set_xgb_parameters()

        evals = [(dtrain, 'train'), (dvalidation, 'eval')]
        model = xgb.train(self.xgb_params, dtrain, evals=evals, **self.xgb_learning_params)
        xgb.plot_importance(model)
        plt.show()

        print("features used:\n {}".format(self.get_used_features()))

        return
Developer ID: LevinJ, Project: Supply-demand-forecasting, Lines of code: 16, Source file: xgbbasemodel.py
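
For reference, the pattern in Example 4 (training with xgb.train on DMatrix inputs, then calling xgboost.plot_importance on the resulting Booster) can be reproduced as a self-contained sketch with synthetic data. The data and parameter values here are illustrative assumptions rather than the original project's configuration:

import numpy as np
import xgboost as xgb
import matplotlib.pyplot as plt

# Synthetic regression data standing in for the project's train/validation split
rng = np.random.RandomState(0)
X_train, X_valid = rng.rand(200, 5), rng.rand(50, 5)
y_train = 2 * X_train[:, 0] + rng.rand(200)
y_valid = 2 * X_valid[:, 0] + rng.rand(50)

dtrain = xgb.DMatrix(X_train, label=y_train)
dvalid = xgb.DMatrix(X_valid, label=y_valid)

params = {"objective": "reg:squarederror", "max_depth": 3, "eta": 0.1}
evals = [(dtrain, "train"), (dvalid, "eval")]
model = xgb.train(params, dtrain, num_boost_round=50, evals=evals)

xgb.plot_importance(model)  # returns a matplotlib Axes
plt.show()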


Note: The xgboost.plot_importance examples in this article were compiled by 純淨天空 from GitHub, MSDocs, and other open-source code and documentation platforms. The snippets were selected from open-source projects contributed by various developers; copyright of the source code remains with the original authors, and distribution or use should follow the corresponding project's license. Do not reproduce without permission.