Classification - Ensembles

Machine Learning
Published

July 27, 2025

voting

  • Combines classifiers built from different algorithms; in classification, the final label is decided by voting¹.

Example

import pandas as pd

from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import warnings

warnings.filterwarnings('ignore')

cancer = load_breast_cancer()

df = pd.DataFrame(cancer.data, columns=cancer.feature_names)
df
mean radius mean texture mean perimeter mean area mean smoothness mean compactness mean concavity mean concave points mean symmetry mean fractal dimension ... worst radius worst texture worst perimeter worst area worst smoothness worst compactness worst concavity worst concave points worst symmetry worst fractal dimension
0 17.99 10.38 122.80 1001.0 0.11840 0.27760 0.30010 0.14710 0.2419 0.07871 ... 25.380 17.33 184.60 2019.0 0.16220 0.66560 0.7119 0.2654 0.4601 0.11890
1 20.57 17.77 132.90 1326.0 0.08474 0.07864 0.08690 0.07017 0.1812 0.05667 ... 24.990 23.41 158.80 1956.0 0.12380 0.18660 0.2416 0.1860 0.2750 0.08902
2 19.69 21.25 130.00 1203.0 0.10960 0.15990 0.19740 0.12790 0.2069 0.05999 ... 23.570 25.53 152.50 1709.0 0.14440 0.42450 0.4504 0.2430 0.3613 0.08758
3 11.42 20.38 77.58 386.1 0.14250 0.28390 0.24140 0.10520 0.2597 0.09744 ... 14.910 26.50 98.87 567.7 0.20980 0.86630 0.6869 0.2575 0.6638 0.17300
4 20.29 14.34 135.10 1297.0 0.10030 0.13280 0.19800 0.10430 0.1809 0.05883 ... 22.540 16.67 152.20 1575.0 0.13740 0.20500 0.4000 0.1625 0.2364 0.07678
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
564 21.56 22.39 142.00 1479.0 0.11100 0.11590 0.24390 0.13890 0.1726 0.05623 ... 25.450 26.40 166.10 2027.0 0.14100 0.21130 0.4107 0.2216 0.2060 0.07115
565 20.13 28.25 131.20 1261.0 0.09780 0.10340 0.14400 0.09791 0.1752 0.05533 ... 23.690 38.25 155.00 1731.0 0.11660 0.19220 0.3215 0.1628 0.2572 0.06637
566 16.60 28.08 108.30 858.1 0.08455 0.10230 0.09251 0.05302 0.1590 0.05648 ... 18.980 34.12 126.70 1124.0 0.11390 0.30940 0.3403 0.1418 0.2218 0.07820
567 20.60 29.33 140.10 1265.0 0.11780 0.27700 0.35140 0.15200 0.2397 0.07016 ... 25.740 39.42 184.60 1821.0 0.16500 0.86810 0.9387 0.2650 0.4087 0.12400
568 7.76 24.54 47.92 181.0 0.05263 0.04362 0.00000 0.00000 0.1587 0.05884 ... 9.456 30.37 59.16 268.6 0.08996 0.06444 0.0000 0.0000 0.2871 0.07039

569 rows × 30 columns

lr_clf = LogisticRegression(solver='liblinear')
knn_clf = KNeighborsClassifier(n_neighbors=8)

# soft voting: average the two models' predicted class probabilities
vo_clf = VotingClassifier(estimators=[('LR', lr_clf), ('KNN', knn_clf)],
                          voting='soft')
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, test_size=0.2)
vo_clf.fit(X_train, y_train)
pred = vo_clf.predict(X_test)
accuracy = accuracy_score(y_test, pred)
accuracy
0.9473684210526315
for classifier in [lr_clf, knn_clf]:
    classifier.fit(X_train, y_train)
    pred = classifier.predict(X_test)
    class_name = classifier.__class__.__name__
    print(f'{class_name} 정확도: {accuracy_score(y_test, pred):.4f}')
LogisticRegression 정확도: 0.9298
KNeighborsClassifier 정확도: 0.9386
  • Voting is not guaranteed to beat the single best model.
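
To make footnote 1 concrete, here is a minimal sketch (with made-up probabilities) of how hard and soft voting can disagree on the same sample:

import numpy as np

# hypothetical class probabilities from three classifiers for one sample
probs = np.array([[0.45, 0.55],   # model A votes class 1
                  [0.40, 0.60],   # model B votes class 1
                  [0.95, 0.05]])  # model C votes class 0, very confidently

hard = np.bincount(probs.argmax(axis=1)).argmax()  # majority of votes -> class 1
soft = probs.mean(axis=0).argmax()                 # mean [0.60, 0.40] -> class 0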

bagging

  • Classifiers of the same algorithm type each train on a bootstrap sample and combine their predictions; random forest is the representative example. In classification, the result is decided by voting² (a minimal bagging sketch follows below).
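
A minimal bagging sketch, assuming the breast-cancer split from the voting example above: 100 decision trees, each fit on its own bootstrap sample, aggregated by voting.

from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# each of the 100 trees sees a bootstrap resample of the training data
bag_clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                            bootstrap=True)
bag_clf.fit(X_train, y_train)
print(f'{accuracy_score(y_test, bag_clf.predict(X_test)):.4f}')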

RandomForest

from sklearn.ensemble import RandomForestClassifier

def get_new_feature_name_df(old):
    # features.txt contains duplicated feature names; number the repeats
    # with their duplicate count so they can serve as unique column names
    df = pd.DataFrame(data=old.groupby('column_name').cumcount(), columns=['dup_cnt'])
    df = df.reset_index()
    new_df = pd.merge(old.reset_index(), df, how='outer')
    new_df['column_name'] = new_df[['column_name', 'dup_cnt']].apply(
        lambda x: x[0] + '_' + str(x[1]) if x[1] > 0 else x[0], axis=1)
    new_df = new_df.drop(['index'], axis=1)
    return new_df

def get_human_dataset():
    # UCI HAR dataset: whitespace-separated text files without headers
    feature_name_df = pd.read_csv('_data/human_activity/features.txt', sep=r'\s+', header=None, names=['column_index', 'column_name'])
    new_feature_name_df = get_new_feature_name_df(feature_name_df)
    feature_name = new_feature_name_df.iloc[:, 1].values.tolist()

    X_train = pd.read_csv('_data/human_activity/train/X_train.txt', sep=r'\s+', names=feature_name)
    X_test = pd.read_csv('_data/human_activity/test/X_test.txt', sep=r'\s+', names=feature_name)

    y_train = pd.read_csv('_data/human_activity/train/y_train.txt', sep=r'\s+', header=None, names=['action'])
    y_test = pd.read_csv('_data/human_activity/test/y_test.txt', sep=r'\s+', header=None, names=['action'])

    return X_train, X_test, y_train, y_test

X_train, X_test, y_train, y_test = get_human_dataset()
rf_clf = RandomForestClassifier(max_depth=8)
rf_clf.fit(X_train, y_train)
pred = rf_clf.predict(X_test)
accuracy = accuracy_score(y_test, pred)
accuracy
0.9121140142517815
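
Since the random forest is already fit, its feature_importances_ attribute can be inspected; a quick sketch listing the ten most important HAR features:

# rank the learned importances (X_train columns hold the HAR feature names)
ftr_importances = pd.Series(rf_clf.feature_importances_, index=X_train.columns)
ftr_importances.sort_values(ascending=False).head(10)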

boosting

GBM

# from sklearn.ensemble import GradientBoostingClassifier
# import time
# 
# X_train, X_test, y_train, y_test = get_human_dataset()
# start_time = time.time()
# 
# gb_clf = GradientBoostingClassifier()
# gb_clf.fit(X_train, y_train)
# gb_pred = gb_clf.predict(X_test)
# gb_accuracy = accuracy_score(y_test, gb_pred)
#
# end_time = time.time()
#
# print(f'{gb_accuracy:.3f}, {end_time - start_time} seconds')

0.939, 701.6343066692352 seconds

  • Takes very long to train (roughly 700 seconds above).
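
A hedged alternative sketch (not the run timed above): scikit-learn's histogram-based HistGradientBoostingClassifier implements the same boosting idea and typically finishes a dataset of this size in seconds.

from sklearn.ensemble import HistGradientBoostingClassifier
import time

start_time = time.time()
hgb_clf = HistGradientBoostingClassifier()
hgb_clf.fit(X_train, y_train.values.ravel())  # ravel: y_train is a one-column DataFrame
hgb_accuracy = accuracy_score(y_test, hgb_clf.predict(X_test))
print(f'{hgb_accuracy:.3f}, {time.time() - start_time:.1f} seconds')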

XGBoost

  • Handles missing values natively.

  • Supports early stopping.

  • Provides built-in cross validation, performance evaluation, and feature-importance plotting.

  • Native python xgboost API

import xgboost as xgb
from xgboost import plot_importance
import numpy as np

dataset = load_breast_cancer()

X_train, X_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.2)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.1)
dtr = xgb.DMatrix(data=X_tr, label=y_tr)
dval = xgb.DMatrix(data=X_val, label=y_val)
dtest = xgb.DMatrix(data=X_test, label=y_test)
params = {
    'max_depth': 3,
    'eta': 0.05,
    'objective': 'binary:logistic',
    'eval_metric': 'logloss'
}
num_rounds = 400
eval_list = [(dtr, 'train'), (dval, 'eval')]

xgb_model = xgb.train(params=params, dtrain=dtr, num_boost_round=num_rounds, early_stopping_rounds=50, evals=eval_list)
[0] train-logloss:0.61998   eval-logloss:0.59703
[1] train-logloss:0.58197   eval-logloss:0.57110
[2] train-logloss:0.54757   eval-logloss:0.54592
[3] train-logloss:0.51601   eval-logloss:0.52588
[4] train-logloss:0.48728   eval-logloss:0.50815
[5] train-logloss:0.46090   eval-logloss:0.48925
[6] train-logloss:0.43663   eval-logloss:0.47570
[7] train-logloss:0.41424   eval-logloss:0.45534
[8] train-logloss:0.39307   eval-logloss:0.44195
[9] train-logloss:0.37343   eval-logloss:0.42443
[10]    train-logloss:0.35524   eval-logloss:0.41319
[11]    train-logloss:0.33815   eval-logloss:0.39814
[12]    train-logloss:0.32204   eval-logloss:0.38904
[13]    train-logloss:0.30707   eval-logloss:0.37554
[14]    train-logloss:0.29311   eval-logloss:0.36850
[15]    train-logloss:0.27945   eval-logloss:0.35855
[16]    train-logloss:0.26705   eval-logloss:0.34728
[17]    train-logloss:0.25522   eval-logloss:0.34114
[18]    train-logloss:0.24369   eval-logloss:0.33369
[19]    train-logloss:0.23288   eval-logloss:0.32780
[20]    train-logloss:0.22312   eval-logloss:0.31902
[21]    train-logloss:0.21382   eval-logloss:0.31284
[22]    train-logloss:0.20471   eval-logloss:0.30783
[23]    train-logloss:0.19620   eval-logloss:0.30313
[24]    train-logloss:0.18827   eval-logloss:0.29949
[25]    train-logloss:0.18066   eval-logloss:0.29573
[26]    train-logloss:0.17345   eval-logloss:0.29148
[27]    train-logloss:0.16674   eval-logloss:0.28587
[28]    train-logloss:0.16041   eval-logloss:0.28159
[29]    train-logloss:0.15421   eval-logloss:0.27984
[30]    train-logloss:0.14846   eval-logloss:0.27685
[31]    train-logloss:0.14304   eval-logloss:0.27437
[32]    train-logloss:0.13789   eval-logloss:0.27101
[33]    train-logloss:0.13290   eval-logloss:0.26683
[34]    train-logloss:0.12805   eval-logloss:0.26610
[35]    train-logloss:0.12361   eval-logloss:0.26441
[36]    train-logloss:0.11950   eval-logloss:0.26142
[37]    train-logloss:0.11536   eval-logloss:0.25818
[38]    train-logloss:0.11152   eval-logloss:0.25687
[39]    train-logloss:0.10796   eval-logloss:0.25437
[40]    train-logloss:0.10438   eval-logloss:0.25147
[41]    train-logloss:0.10088   eval-logloss:0.25168
[42]    train-logloss:0.09770   eval-logloss:0.25166
[43]    train-logloss:0.09467   eval-logloss:0.24986
[44]    train-logloss:0.09178   eval-logloss:0.24949
[45]    train-logloss:0.08893   eval-logloss:0.24738
[46]    train-logloss:0.08629   eval-logloss:0.24574
[47]    train-logloss:0.08349   eval-logloss:0.24264
[48]    train-logloss:0.08106   eval-logloss:0.24329
[49]    train-logloss:0.07870   eval-logloss:0.24164
[50]    train-logloss:0.07645   eval-logloss:0.24162
[51]    train-logloss:0.07421   eval-logloss:0.23970
[52]    train-logloss:0.07196   eval-logloss:0.23703
[53]    train-logloss:0.06999   eval-logloss:0.23626
[54]    train-logloss:0.06804   eval-logloss:0.23466
[55]    train-logloss:0.06623   eval-logloss:0.23565
[56]    train-logloss:0.06445   eval-logloss:0.23413
[57]    train-logloss:0.06273   eval-logloss:0.23295
[58]    train-logloss:0.06093   eval-logloss:0.23110
[59]    train-logloss:0.05936   eval-logloss:0.23104
[60]    train-logloss:0.05792   eval-logloss:0.23144
[61]    train-logloss:0.05647   eval-logloss:0.23027
[62]    train-logloss:0.05509   eval-logloss:0.22936
[63]    train-logloss:0.05382   eval-logloss:0.22986
[64]    train-logloss:0.05253   eval-logloss:0.22886
[65]    train-logloss:0.05126   eval-logloss:0.22886
[66]    train-logloss:0.05008   eval-logloss:0.22796
[67]    train-logloss:0.04890   eval-logloss:0.22733
[68]    train-logloss:0.04781   eval-logloss:0.22840
[69]    train-logloss:0.04671   eval-logloss:0.22801
[70]    train-logloss:0.04567   eval-logloss:0.22754
[71]    train-logloss:0.04472   eval-logloss:0.22904
[72]    train-logloss:0.04384   eval-logloss:0.23060
[73]    train-logloss:0.04282   eval-logloss:0.23003
[74]    train-logloss:0.04202   eval-logloss:0.23156
[75]    train-logloss:0.04114   eval-logloss:0.23122
[76]    train-logloss:0.04031   eval-logloss:0.23064
[77]    train-logloss:0.03943   eval-logloss:0.23019
[78]    train-logloss:0.03866   eval-logloss:0.22972
[79]    train-logloss:0.03799   eval-logloss:0.22949
[80]    train-logloss:0.03721   eval-logloss:0.22910
[81]    train-logloss:0.03644   eval-logloss:0.22854
[82]    train-logloss:0.03575   eval-logloss:0.22762
[83]    train-logloss:0.03512   eval-logloss:0.22733
[84]    train-logloss:0.03450   eval-logloss:0.22755
[85]    train-logloss:0.03383   eval-logloss:0.22704
[86]    train-logloss:0.03324   eval-logloss:0.22625
[87]    train-logloss:0.03267   eval-logloss:0.22573
[88]    train-logloss:0.03207   eval-logloss:0.22487
[89]    train-logloss:0.03153   eval-logloss:0.22416
[90]    train-logloss:0.03095   eval-logloss:0.22480
[91]    train-logloss:0.03046   eval-logloss:0.22387
[92]    train-logloss:0.02991   eval-logloss:0.22381
[93]    train-logloss:0.02940   eval-logloss:0.22385
[94]    train-logloss:0.02887   eval-logloss:0.22266
[95]    train-logloss:0.02843   eval-logloss:0.22336
[96]    train-logloss:0.02796   eval-logloss:0.22344
[97]    train-logloss:0.02750   eval-logloss:0.22415
[98]    train-logloss:0.02703   eval-logloss:0.22302
[99]    train-logloss:0.02664   eval-logloss:0.22376
[100]   train-logloss:0.02625   eval-logloss:0.22443
[101]   train-logloss:0.02586   eval-logloss:0.22393
[102]   train-logloss:0.02546   eval-logloss:0.22413
[103]   train-logloss:0.02513   eval-logloss:0.22360
[104]   train-logloss:0.02471   eval-logloss:0.22419
[105]   train-logloss:0.02435   eval-logloss:0.22570
[106]   train-logloss:0.02401   eval-logloss:0.22527
[107]   train-logloss:0.02368   eval-logloss:0.22569
[108]   train-logloss:0.02333   eval-logloss:0.22634
[109]   train-logloss:0.02303   eval-logloss:0.22595
[110]   train-logloss:0.02274   eval-logloss:0.22652
[111]   train-logloss:0.02245   eval-logloss:0.22584
[112]   train-logloss:0.02212   eval-logloss:0.22487
[113]   train-logloss:0.02183   eval-logloss:0.22596
[114]   train-logloss:0.02155   eval-logloss:0.22743
[115]   train-logloss:0.02125   eval-logloss:0.22652
[116]   train-logloss:0.02099   eval-logloss:0.22771
[117]   train-logloss:0.02075   eval-logloss:0.22863
[118]   train-logloss:0.02053   eval-logloss:0.22753
[119]   train-logloss:0.02030   eval-logloss:0.22743
[120]   train-logloss:0.02007   eval-logloss:0.22719
[121]   train-logloss:0.01984   eval-logloss:0.22828
[122]   train-logloss:0.01958   eval-logloss:0.22620
[123]   train-logloss:0.01937   eval-logloss:0.22613
[124]   train-logloss:0.01917   eval-logloss:0.22586
[125]   train-logloss:0.01897   eval-logloss:0.22694
[126]   train-logloss:0.01876   eval-logloss:0.22699
[127]   train-logloss:0.01853   eval-logloss:0.22612
[128]   train-logloss:0.01835   eval-logloss:0.22640
[129]   train-logloss:0.01817   eval-logloss:0.22646
[130]   train-logloss:0.01797   eval-logloss:0.22760
[131]   train-logloss:0.01776   eval-logloss:0.22753
[132]   train-logloss:0.01755   eval-logloss:0.22671
[133]   train-logloss:0.01737   eval-logloss:0.22785
[134]   train-logloss:0.01719   eval-logloss:0.22794
[135]   train-logloss:0.01703   eval-logloss:0.22707
[136]   train-logloss:0.01686   eval-logloss:0.22629
[137]   train-logloss:0.01669   eval-logloss:0.22639
[138]   train-logloss:0.01653   eval-logloss:0.22753
[139]   train-logloss:0.01639   eval-logloss:0.22772
[140]   train-logloss:0.01624   eval-logloss:0.22731
[141]   train-logloss:0.01613   eval-logloss:0.22757
[142]   train-logloss:0.01599   eval-logloss:0.22767
[143]   train-logloss:0.01586   eval-logloss:0.22727
pred_probs = xgb_model.predict(dtest)
# Booster.predict returns P(class 1); threshold at 0.5 for hard labels
preds = [1 if x > 0.5 else 0 for x in pred_probs]
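
The plot_importance imported above is the built-in feature-importance visualization; a quick sketch on the trained Booster (features appear as f0, f1, ... because the DMatrix was built from a numpy array):

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 12))
plot_importance(xgb_model, ax=ax)
plt.show()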
  • scikit-learn wrapper xgboost
from xgboost import XGBClassifier

evals = [(X_tr, y_tr), (X_val, y_val)]
# named xgb_wrapper so the xgb module imported above is not shadowed
xgb_wrapper = XGBClassifier(n_estimators=400, 
                            learning_rate=0.05, 
                            max_depth=3, 
                            early_stopping_rounds=50,
                            eval_metric=['logloss'])
xgb_wrapper.fit(X_tr, y_tr, eval_set=evals)
preds = xgb_wrapper.predict(X_test)
pred_probs = xgb_wrapper.predict_proba(X_test)[:, 1]
[0] validation_0-logloss:0.61998    validation_1-logloss:0.59703
[1] validation_0-logloss:0.58197    validation_1-logloss:0.57110
[2] validation_0-logloss:0.54757    validation_1-logloss:0.54592
[3] validation_0-logloss:0.51601    validation_1-logloss:0.52588
[4] validation_0-logloss:0.48728    validation_1-logloss:0.50815
[5] validation_0-logloss:0.46090    validation_1-logloss:0.48925
[6] validation_0-logloss:0.43663    validation_1-logloss:0.47570
[7] validation_0-logloss:0.41424    validation_1-logloss:0.45534
[8] validation_0-logloss:0.39307    validation_1-logloss:0.44195
[9] validation_0-logloss:0.37343    validation_1-logloss:0.42443
[10]    validation_0-logloss:0.35524    validation_1-logloss:0.41319
[11]    validation_0-logloss:0.33815    validation_1-logloss:0.39814
[12]    validation_0-logloss:0.32204    validation_1-logloss:0.38904
[13]    validation_0-logloss:0.30707    validation_1-logloss:0.37554
[14]    validation_0-logloss:0.29311    validation_1-logloss:0.36850
[15]    validation_0-logloss:0.27945    validation_1-logloss:0.35855
[16]    validation_0-logloss:0.26705    validation_1-logloss:0.34728
[17]    validation_0-logloss:0.25522    validation_1-logloss:0.34114
[18]    validation_0-logloss:0.24369    validation_1-logloss:0.33369
[19]    validation_0-logloss:0.23288    validation_1-logloss:0.32780
[20]    validation_0-logloss:0.22312    validation_1-logloss:0.31902
[21]    validation_0-logloss:0.21382    validation_1-logloss:0.31284
[22]    validation_0-logloss:0.20471    validation_1-logloss:0.30783
[23]    validation_0-logloss:0.19620    validation_1-logloss:0.30313
[24]    validation_0-logloss:0.18827    validation_1-logloss:0.29949
[25]    validation_0-logloss:0.18066    validation_1-logloss:0.29573
[26]    validation_0-logloss:0.17345    validation_1-logloss:0.29148
[27]    validation_0-logloss:0.16674    validation_1-logloss:0.28587
[28]    validation_0-logloss:0.16041    validation_1-logloss:0.28159
[29]    validation_0-logloss:0.15421    validation_1-logloss:0.27984
[30]    validation_0-logloss:0.14846    validation_1-logloss:0.27685
[31]    validation_0-logloss:0.14304    validation_1-logloss:0.27437
[32]    validation_0-logloss:0.13789    validation_1-logloss:0.27101
[33]    validation_0-logloss:0.13290    validation_1-logloss:0.26683
[34]    validation_0-logloss:0.12805    validation_1-logloss:0.26610
[35]    validation_0-logloss:0.12361    validation_1-logloss:0.26441
[36]    validation_0-logloss:0.11950    validation_1-logloss:0.26142
[37]    validation_0-logloss:0.11536    validation_1-logloss:0.25818
[38]    validation_0-logloss:0.11152    validation_1-logloss:0.25687
[39]    validation_0-logloss:0.10796    validation_1-logloss:0.25437
[40]    validation_0-logloss:0.10438    validation_1-logloss:0.25147
[41]    validation_0-logloss:0.10088    validation_1-logloss:0.25168
[42]    validation_0-logloss:0.09770    validation_1-logloss:0.25166
[43]    validation_0-logloss:0.09467    validation_1-logloss:0.24986
[44]    validation_0-logloss:0.09178    validation_1-logloss:0.24949
[45]    validation_0-logloss:0.08893    validation_1-logloss:0.24738
[46]    validation_0-logloss:0.08629    validation_1-logloss:0.24574
[47]    validation_0-logloss:0.08349    validation_1-logloss:0.24264
[48]    validation_0-logloss:0.08106    validation_1-logloss:0.24329
[49]    validation_0-logloss:0.07870    validation_1-logloss:0.24164
[50]    validation_0-logloss:0.07645    validation_1-logloss:0.24162
[51]    validation_0-logloss:0.07421    validation_1-logloss:0.23970
[52]    validation_0-logloss:0.07196    validation_1-logloss:0.23703
[53]    validation_0-logloss:0.06999    validation_1-logloss:0.23626
[54]    validation_0-logloss:0.06804    validation_1-logloss:0.23466
[55]    validation_0-logloss:0.06623    validation_1-logloss:0.23565
[56]    validation_0-logloss:0.06445    validation_1-logloss:0.23413
[57]    validation_0-logloss:0.06273    validation_1-logloss:0.23295
[58]    validation_0-logloss:0.06093    validation_1-logloss:0.23110
[59]    validation_0-logloss:0.05936    validation_1-logloss:0.23104
[60]    validation_0-logloss:0.05792    validation_1-logloss:0.23144
[61]    validation_0-logloss:0.05647    validation_1-logloss:0.23027
[62]    validation_0-logloss:0.05509    validation_1-logloss:0.22936
[63]    validation_0-logloss:0.05382    validation_1-logloss:0.22986
[64]    validation_0-logloss:0.05253    validation_1-logloss:0.22886
[65]    validation_0-logloss:0.05126    validation_1-logloss:0.22886
[66]    validation_0-logloss:0.05008    validation_1-logloss:0.22796
[67]    validation_0-logloss:0.04890    validation_1-logloss:0.22733
[68]    validation_0-logloss:0.04781    validation_1-logloss:0.22840
[69]    validation_0-logloss:0.04671    validation_1-logloss:0.22801
[70]    validation_0-logloss:0.04567    validation_1-logloss:0.22754
[71]    validation_0-logloss:0.04472    validation_1-logloss:0.22904
[72]    validation_0-logloss:0.04384    validation_1-logloss:0.23060
[73]    validation_0-logloss:0.04282    validation_1-logloss:0.23003
[74]    validation_0-logloss:0.04202    validation_1-logloss:0.23156
[75]    validation_0-logloss:0.04114    validation_1-logloss:0.23122
[76]    validation_0-logloss:0.04031    validation_1-logloss:0.23064
[77]    validation_0-logloss:0.03943    validation_1-logloss:0.23019
[78]    validation_0-logloss:0.03866    validation_1-logloss:0.22972
[79]    validation_0-logloss:0.03799    validation_1-logloss:0.22949
[80]    validation_0-logloss:0.03721    validation_1-logloss:0.22910
[81]    validation_0-logloss:0.03644    validation_1-logloss:0.22854
[82]    validation_0-logloss:0.03575    validation_1-logloss:0.22762
[83]    validation_0-logloss:0.03512    validation_1-logloss:0.22733
[84]    validation_0-logloss:0.03450    validation_1-logloss:0.22755
[85]    validation_0-logloss:0.03383    validation_1-logloss:0.22704
[86]    validation_0-logloss:0.03324    validation_1-logloss:0.22625
[87]    validation_0-logloss:0.03267    validation_1-logloss:0.22573
[88]    validation_0-logloss:0.03207    validation_1-logloss:0.22487
[89]    validation_0-logloss:0.03153    validation_1-logloss:0.22416
[90]    validation_0-logloss:0.03095    validation_1-logloss:0.22480
[91]    validation_0-logloss:0.03046    validation_1-logloss:0.22387
[92]    validation_0-logloss:0.02991    validation_1-logloss:0.22381
[93]    validation_0-logloss:0.02940    validation_1-logloss:0.22385
[94]    validation_0-logloss:0.02887    validation_1-logloss:0.22266
[95]    validation_0-logloss:0.02843    validation_1-logloss:0.22336
[96]    validation_0-logloss:0.02796    validation_1-logloss:0.22344
[97]    validation_0-logloss:0.02750    validation_1-logloss:0.22415
[98]    validation_0-logloss:0.02703    validation_1-logloss:0.22302
[99]    validation_0-logloss:0.02664    validation_1-logloss:0.22376
[100]   validation_0-logloss:0.02625    validation_1-logloss:0.22443
[101]   validation_0-logloss:0.02586    validation_1-logloss:0.22393
[102]   validation_0-logloss:0.02546    validation_1-logloss:0.22413
[103]   validation_0-logloss:0.02513    validation_1-logloss:0.22360
[104]   validation_0-logloss:0.02471    validation_1-logloss:0.22419
[105]   validation_0-logloss:0.02435    validation_1-logloss:0.22570
[106]   validation_0-logloss:0.02401    validation_1-logloss:0.22527
[107]   validation_0-logloss:0.02368    validation_1-logloss:0.22569
[108]   validation_0-logloss:0.02333    validation_1-logloss:0.22634
[109]   validation_0-logloss:0.02303    validation_1-logloss:0.22595
[110]   validation_0-logloss:0.02274    validation_1-logloss:0.22652
[111]   validation_0-logloss:0.02245    validation_1-logloss:0.22584
[112]   validation_0-logloss:0.02212    validation_1-logloss:0.22487
[113]   validation_0-logloss:0.02183    validation_1-logloss:0.22596
[114]   validation_0-logloss:0.02155    validation_1-logloss:0.22743
[115]   validation_0-logloss:0.02125    validation_1-logloss:0.22652
[116]   validation_0-logloss:0.02099    validation_1-logloss:0.22771
[117]   validation_0-logloss:0.02075    validation_1-logloss:0.22863
[118]   validation_0-logloss:0.02053    validation_1-logloss:0.22753
[119]   validation_0-logloss:0.02030    validation_1-logloss:0.22743
[120]   validation_0-logloss:0.02007    validation_1-logloss:0.22719
[121]   validation_0-logloss:0.01984    validation_1-logloss:0.22828
[122]   validation_0-logloss:0.01958    validation_1-logloss:0.22620
[123]   validation_0-logloss:0.01937    validation_1-logloss:0.22613
[124]   validation_0-logloss:0.01917    validation_1-logloss:0.22586
[125]   validation_0-logloss:0.01897    validation_1-logloss:0.22694
[126]   validation_0-logloss:0.01876    validation_1-logloss:0.22699
[127]   validation_0-logloss:0.01853    validation_1-logloss:0.22612
[128]   validation_0-logloss:0.01835    validation_1-logloss:0.22640
[129]   validation_0-logloss:0.01817    validation_1-logloss:0.22646
[130]   validation_0-logloss:0.01797    validation_1-logloss:0.22760
[131]   validation_0-logloss:0.01776    validation_1-logloss:0.22753
[132]   validation_0-logloss:0.01755    validation_1-logloss:0.22671
[133]   validation_0-logloss:0.01737    validation_1-logloss:0.22785
[134]   validation_0-logloss:0.01719    validation_1-logloss:0.22794
[135]   validation_0-logloss:0.01703    validation_1-logloss:0.22707
[136]   validation_0-logloss:0.01686    validation_1-logloss:0.22629
[137]   validation_0-logloss:0.01669    validation_1-logloss:0.22639
[138]   validation_0-logloss:0.01653    validation_1-logloss:0.22753
[139]   validation_0-logloss:0.01639    validation_1-logloss:0.22772
[140]   validation_0-logloss:0.01624    validation_1-logloss:0.22731
[141]   validation_0-logloss:0.01613    validation_1-logloss:0.22757
[142]   validation_0-logloss:0.01599    validation_1-logloss:0.22767
[143]   validation_0-logloss:0.01586    validation_1-logloss:0.22727
[144]   validation_0-logloss:0.01575    validation_1-logloss:0.22753

LightGBM

  • Performance is not much different from xgboost.

  • Likely to overfit on datasets with fewer than about 10,000 rows.

  • No one-hot encoding needed for categorical features (a toy sketch follows at the end of this section).

  • scikit-learn wrapper lightgbm

from lightgbm import LGBMClassifier, early_stopping, plot_importance
import matplotlib.pyplot as plt

lgbm = LGBMClassifier(n_estimators=400, learning_rate=0.05)
evals = [(X_tr, y_tr), (X_val, y_val)]
lgbm.fit(X_tr, y_tr, 
         callbacks = [early_stopping(stopping_rounds = 50)],
         eval_metric='logloss', 
         eval_set=evals)
preds = lgbm.predict(X_test)
pred_proba = lgbm.predict_proba(X_test)[:, 1]

plot_importance(lgbm)
plt.show()
[LightGBM] [Info] Number of positive: 255, number of negative: 154
[LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.001549 seconds.
You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 4092
[LightGBM] [Info] Number of data points in the train set: 409, number of used features: 30
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.623472 -> initscore=0.504311
[LightGBM] [Info] Start training from score 0.504311
[LightGBM] [Warning] No further splits with positive gain, best gain: -inf
Training until validation scores don't improve for 50 rounds
(the warning above repeats for every boosting round; duplicate lines omitted)
Early stopping, best iteration is:
[85]    training's binary_logloss: 0.0234098    valid_1's binary_logloss: 0.242924
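
A toy sketch of the "no one-hot encoding" bullet above, using made-up data: columns with pandas 'category' dtype are consumed natively by LightGBM, with no encoding step.

# hypothetical toy data; 'color' stays a single categorical column
toy_X = pd.DataFrame({
    'color': pd.Categorical(['red', 'blue', 'green', 'red'] * 50),
    'size': list(range(200)),
})
toy_y = [0, 1, 1, 0] * 50
toy_clf = LGBMClassifier(n_estimators=10, verbose=-1)
toy_clf.fit(toy_X, toy_y)  # no one-hot encoding needed for 'color'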

stacking

from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, test_size=0.2)

knn_clf = KNeighborsClassifier(n_neighbors=4)
rf_clf = RandomForestClassifier(n_estimators=100)
dt_clf = DecisionTreeClassifier()
ada_clf = AdaBoostClassifier(n_estimators=100)

lr_final = LogisticRegression()
knn_clf.fit(X_train, y_train)
rf_clf.fit(X_train, y_train)
dt_clf.fit(X_train, y_train)
ada_clf.fit(X_train, y_train)

knn_pred = knn_clf.predict(X_test)
rf_pred = rf_clf.predict(X_test)
dt_pred = dt_clf.predict(X_test)
ada_pred = ada_clf.predict(X_test)

# use the base models' test-set predictions as the meta-model's features
pred = np.array([knn_pred, rf_pred, dt_pred, ada_pred])
pred = np.transpose(pred)
lr_final.fit(pred, y_test)  # caution: the meta-model is fit on test labels
final = lr_final.predict(pred)
print(f'{accuracy_score(y_test, final):.3f}')
0.982
  • The problem is that the meta-model above is trained on the test set → it should use CV-based out-of-fold predictions instead.

Stacking based on CV sets

from sklearn.model_selection import KFold

def get_stacking_base_datasets(model, X_train_n, y_train_n, X_test_n, n_folds):
    kf = KFold(n_splits=n_folds, shuffle=False)
    # out-of-fold predictions on the train set (the meta-model's features) ...
    train_fold_pred = np.zeros((X_train_n.shape[0], 1))
    # ... and one column of test-set predictions per fold
    test_pred = np.zeros((X_test_n.shape[0], n_folds))
    for folder_counter, (train_index, valid_index) in enumerate(kf.split(X_train_n)):
        X_tr = X_train_n[train_index]
        y_tr = y_train_n[train_index]
        X_te = X_train_n[valid_index]

        model.fit(X_tr, y_tr)
        train_fold_pred[valid_index, :] = model.predict(X_te).reshape(-1, 1)
        test_pred[:, folder_counter] = model.predict(X_test_n)

    # average the per-fold test predictions into a single column
    test_pred_mean = np.mean(test_pred, axis=1).reshape(-1, 1)

    return train_fold_pred, test_pred_mean
knn_train, knn_test = get_stacking_base_datasets(knn_clf, X_train, y_train, X_test, 7)
rf_train, rf_test = get_stacking_base_datasets(rf_clf, X_train, y_train, X_test, 7)
dt_train, dt_test = get_stacking_base_datasets(dt_clf, X_train, y_train, X_test, 7)
ada_train, ada_test = get_stacking_base_datasets(ada_clf, X_train, y_train, X_test, 7)
Stack_final_X_train = np.concatenate((knn_train, rf_train, dt_train, ada_train), axis=1)
Stack_final_X_test = np.concatenate((knn_test, rf_test, dt_test, ada_test), axis=1)

lr_final.fit(Stack_final_X_train, y_train)
stack_final = lr_final.predict(Stack_final_X_test)

print(f'{accuracy_score(y_test, stack_final):.3f}')
0.974
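
scikit-learn also ships StackingClassifier, which generates the out-of-fold predictions internally; a sketch reusing the base models above:

from sklearn.ensemble import StackingClassifier

# cv=7 mirrors the 7 folds used by get_stacking_base_datasets
stack_clf = StackingClassifier(
    estimators=[('knn', knn_clf), ('rf', rf_clf),
                ('dt', dt_clf), ('ada', ada_clf)],
    final_estimator=LogisticRegression(),
    cv=7)
stack_clf.fit(X_train, y_train)
print(f'{stack_clf.score(X_test, y_test):.3f}')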

Bayesian Optimization

  • For when grid search would take too long.

  • Objective function: a model that takes n hyperparameter inputs and outputs a single performance score.

  • Surrogate model: an estimated model of the objective function, refined from a prior distribution toward the optimum.

  • Acquisition function: chooses the next point to observe, e.g. the point with the greatest uncertainty.

from hyperopt import hp, fmin, tpe, Trials, STATUS_OK

search_space = {'x': hp.quniform('x', -10, 10, 1),   # integer steps in [-10, 10]
                'y': hp.quniform('y', -15, 15, 1)}   # integer steps in [-15, 15]
def objective_func(search_space):
    x = search_space['x']
    y = search_space['y']

    # true minimum on this grid is x=0, y=15, giving -300
    return x ** 2 - 20 * y

trial_val = Trials()
best = fmin(fn=objective_func,
            space=search_space,
            algo=tpe.suggest,
            max_evals=20,
            trials=trial_val)
best
100%|██████████| 20/20 [00:00<00:00, 1860.70trial/s, best loss: -251.0]
{'x': -3.0, 'y': 13.0}
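
The Trials object keeps the full search history; a small sketch for auditing it (.vals holds the sampled inputs, .results the losses):

losses = [result['loss'] for result in trial_val.results]
trials_df = pd.DataFrame({'x': trial_val.vals['x'],
                          'y': trial_val.vals['y'],
                          'loss': losses})
trials_df.sort_values('loss').head()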

XGBoost hyperparameter optimization

dataset = load_breast_cancer()

X_train, X_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.2)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.1)

xgb_search_space = {
    'max_depth': hp.quniform('max_depth', 5, 20, 1),
    'min_child_weight': hp.quniform('min_child_weight', 1, 2, 1),
    'learning_rate': hp.uniform('learning_rate', 0.01, 0.2),
    'colsample_bytree': hp.uniform('colsample_bytree', 0.5, 1)
}
# something like hp.choice('tree_criterion', ['gini', 'entropy']) also works for categorical choices
from sklearn.model_selection import cross_val_score

def objective_func(search_space):
    xgb_clf = XGBClassifier(n_estimators=100, 
                            max_depth=int(search_space['max_depth']),
                            min_child_weight=int(search_space['min_child_weight']),
                            learning_rate=search_space['learning_rate'],
                            colsample_bytree=search_space['colsample_bytree'],
                            eval_metric='logloss')
    accuracy = cross_val_score(xgb_clf, X_train, y_train, scoring='accuracy', cv=3)
    # fmin minimizes, so return the negated mean accuracy as the loss
    return {'loss': -1 * np.mean(accuracy), 'status': STATUS_OK}

trial_val = Trials()
best = fmin(fn=objective_func,
            space=xgb_search_space,
            algo=tpe.suggest,
            max_evals=50,
            trials=trial_val)
best
100%|██████████| 50/50 [00:19<00:00,  2.58trial/s, best loss: -0.967047170907401]
{'colsample_bytree': 0.6859052847618307,
 'learning_rate': 0.07174400225835986,
 'max_depth': 9.0,
 'min_child_weight': 2.0}
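
A final sketch: refit on the full training set with the values fmin returned (hyperopt hands back floats, so the integer parameters need casting):

best_clf = XGBClassifier(n_estimators=400,
                         max_depth=int(best['max_depth']),
                         min_child_weight=int(best['min_child_weight']),
                         learning_rate=best['learning_rate'],
                         colsample_bytree=best['colsample_bytree'],
                         eval_metric='logloss')
best_clf.fit(X_train, y_train)
print(f'{accuracy_score(y_test, best_clf.predict(X_test)):.4f}')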

Footnotes

  1. Split into hard voting (simple majority vote) and soft voting (classifying by the weighted average of the predicted class probabilities); soft voting is generally used.↩︎

  2. Split into hard voting (simple majority vote) and soft voting (classifying by the weighted average of the predicted class probabilities); soft voting is generally used.↩︎