Sklearn.metrics Functions
An Introduction to Python's sklearn.metrics with Application Examples
When implementing machine learning algorithms in Python, the sklearn (scikit-learn) library comes up constantly.
Whether a machine learning algorithm is used for regression, classification, or clustering, evaluation metrics, that is, the quantitative measures used to assess how well a model performs, are an unavoidable and very important topic. Drawing on the scikit-learn documentation and on material compiled by others online, this post briefly introduces the commonly used evaluation metrics, their implementation, and their application.
1. Installing scikit-learn
There are many installation tutorials online, so the details are not repeated here; see, for example:
https://www.cnblogs.com/zhangqunshi/p/6646987.html
In addition, if Anaconda is installed, you can search for and add scikit-learn directly under the Environments tab of Anaconda Navigator. Otherwise, install it with pip:
pip install -U scikit-learn
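To confirm that the installation worked, a quick check such as the following one-liner (a minimal sketch) prints the installed version:
python -c "import sklearn; print(sklearn.__version__)"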
2. Importing the evaluation metric functions
There are two ways to import them. The first is to import each function by name:
from sklearn.metrics import <metric function name>
For example:
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
Each function is then called directly by its name:
# compute the mean squared error (MSE)
mse = mean_squared_error(y_test, y_pre)
# compute the coefficient of determination R2 of the regression
R2 = r2_score(y_test, y_pre)
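As a minimal, self-contained sketch of this first style, the toy arrays below (y_test and y_pre are made-up values for illustration) reproduce both calls:

from sklearn.metrics import mean_squared_error, r2_score

y_test = [3.0, -0.5, 2.0, 7.0]  # hypothetical ground-truth values
y_pre = [2.5, 0.0, 2.0, 8.0]    # hypothetical predictions

mse = mean_squared_error(y_test, y_pre)  # 0.375
R2 = r2_score(y_test, y_pre)             # about 0.9486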
The second way is to import the metrics module:
from sklearn import metrics
Each function is then called as metrics.<metric function name>(parameters). For example:
# compute the mean squared error (MSE)
mse = metrics.mean_squared_error(y_test, y_pre)
# compute the coefficient of determination R2 of the regression
R2 = metrics.r2_score(y_test, y_pre)
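The module-level style gives identical results; a minimal sketch with the same toy arrays as above:

from sklearn import metrics

y_test = [3.0, -0.5, 2.0, 7.0]  # hypothetical ground-truth values
y_pre = [2.5, 0.0, 2.0, 8.0]    # hypothetical predictions

mse = metrics.mean_squared_error(y_test, y_pre)  # 0.375, same as before
R2 = metrics.r2_score(y_test, y_pre)             # about 0.9486, same as before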
3. Overview of the metrics in sklearn.metrics
For a brief introduction, see:
https://www.cnblogs.com/mdevelopment/p/9456486.html
For a detailed introduction, see:
https://www.cnblogs.com/harvey888/p/6964741.html
Official documentation:
https://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics
The brief overview below is adapted from the first link.
Regression metrics (a usage sketch follows this list):
- explained_variance_score(y_true, y_pred, sample_weight=None, multioutput='uniform_average'): explained variance (reflects how much of the variance of the target the model accounts for)
- mean_absolute_error(y_true, y_pred, sample_weight=None, multioutput='uniform_average'): mean absolute error (MAE)
- mean_squared_error(y_true, y_pred, sample_weight=None, multioutput='uniform_average'): mean squared error (MSE)
- median_absolute_error(y_true, y_pred): median absolute error
- r2_score(y_true, y_pred, sample_weight=None, multioutput='uniform_average'): R-squared (coefficient of determination)
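A minimal sketch exercising all five regression metrics on made-up arrays (the values of y_true and y_pred are assumptions for illustration):

from sklearn.metrics import (explained_variance_score, mean_absolute_error,
                             mean_squared_error, median_absolute_error, r2_score)

y_true = [3.0, -0.5, 2.0, 7.0]  # hypothetical ground truth
y_pred = [2.5, 0.0, 2.0, 8.0]   # hypothetical predictions

print(explained_variance_score(y_true, y_pred))  # about 0.9572
print(mean_absolute_error(y_true, y_pred))       # 0.5
print(mean_squared_error(y_true, y_pred))        # 0.375
print(median_absolute_error(y_true, y_pred))     # 0.5
print(r2_score(y_true, y_pred))                  # about 0.9486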
Classification metrics (a usage sketch follows this list):
- accuracy_score(y_true, y_pred): accuracy
- auc(x, y): area under a curve given its coordinates; a larger AUC indicates better performance
- average_precision_score(y_true, y_score, average='macro', sample_weight=None): average precision (AP) computed from prediction scores
- brier_score_loss(y_true, y_prob, sample_weight=None, pos_label=None): Brier score; the smaller the Brier score, the better
- confusion_matrix(y_true, y_pred, labels=None, sample_weight=None): evaluates classification accuracy by computing and returning the confusion matrix
- f1_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None): F1 score, where F1 = 2 * (precision * recall) / (precision + recall), with precision = TP / (TP + FP) and recall = TP / (TP + FN)
- log_loss(y_true, y_pred, eps=1e-15, normalize=True, sample_weight=None, labels=None): log loss, also known as logistic loss or cross-entropy loss
- precision_score(y_true, y_pred, labels=None, pos_label=1, average='binary'): precision, i.e. TP / (TP + FP)
- recall_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None): recall, i.e. TP / (TP + FN)
- roc_auc_score(y_true, y_score, average='macro', sample_weight=None): area under the ROC curve (AUC); the larger, the better
- roc_curve(y_true, y_score, pos_label=None, sample_weight=None, drop_intermediate=True): computes the coordinates of the ROC curve, TPR and FPR, where TPR = TP / (TP + FN) = recall (true positive rate, sensitivity) and FPR = FP / (FP + TN) (false positive rate, equal to 1 - specificity)
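A minimal sketch of the main classification metrics on a made-up binary problem (y_true, y_pred, and y_score below are assumptions for illustration; y_score holds positive-class scores for the ROC-based metrics):

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix, roc_auc_score,
                             roc_curve, auc)

y_true = [0, 0, 1, 1, 1, 0]               # hypothetical binary labels
y_pred = [0, 1, 1, 1, 0, 0]               # hypothetical hard predictions
y_score = [0.1, 0.6, 0.8, 0.9, 0.4, 0.3]  # hypothetical positive-class scores

print(accuracy_score(y_true, y_pred))    # 4 of 6 correct: about 0.667
print(precision_score(y_true, y_pred))   # TP/(TP+FP) = 2/3
print(recall_score(y_true, y_pred))      # TP/(TP+FN) = 2/3
print(f1_score(y_true, y_pred))          # 2/3
print(confusion_matrix(y_true, y_pred))  # [[2 1] [1 2]], rows are true classes
print(roc_auc_score(y_true, y_score))    # 8/9, about 0.889
fpr, tpr, thresholds = roc_curve(y_true, y_score)  # ROC curve coordinates
print(auc(fpr, tpr))                     # same AUC, computed from the curve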
Building on the example from the official website, here is an application example using my own data:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import metrics
##############################################################################
# Load data
data = pd.read_csv('Data for train_0.003D.csv')
y = data.iloc[:,0]
X = data.iloc[:,1:]
offset = int(X.shape[0] * 0.9)
X_train, y_train = X[:offset], y[:offset]
X_test, y_test = X[offset:], y[offset:]
##############################################################################
# Fit regression model
params = {'n_estimators': 500, 'max_depth': 4, 'min_samples_split': 2,
          'learning_rate': 0.01, 'loss': 'squared_error'}  # 'ls' in older scikit-learn versions
clf = ensemble.GradientBoostingRegressor(**params)
clf.fit(X_train, y_train)
y_pre = clf.predict(X_test)
# Calculate metrics
mse = metrics.mean_squared_error(y_test, y_pre)
print("MSE: %.4f" % mse)
mae = metrics.mean_absolute_error(y_test, y_pre)
print("MAE: %.4f" % mae)
R2 = metrics.r2_score(y_test, y_pre)
print("R2: %.4f" % R2)
##############################################################################
# Plot training deviance
# compute test set deviance
test_score = np.zeros((params['n_estimators'],), dtype=np.float64)
for i, y_pred in enumerate(clf.staged_predict(X_test)):
    # for squared-error loss, the per-stage deviance is the mean squared error
    test_score[i] = metrics.mean_squared_error(y_test, y_pred)
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.title('Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, clf.train_score_, 'b-',
         label='Training Set Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, test_score, 'r-',
         label='Test Set Deviance')
plt.legend(loc='upper right')
plt.xlabel('Boosting Iterations')
plt.ylabel('Deviance')
##############################################################################
# Plot feature importance
feature_importance = clf.feature_importances_
# make importances relative to max importance
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
plt.subplot(1, 2, 2)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, X.columns[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()
Running the script prints the MSE, MAE, and R2 values and displays the deviance curves alongside the variable-importance plot.