Hyperopt tuning log: 50 trials, 3-fold CV, LightGBM binary objective.
(tqdm progress-bar residue and per-fold repeats condensed below; every
fold emitted the same warning and setup lines.)

Recurring messages, repeated before each fold:
[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was ~0.008 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
(One fold in trial 4 auto-chose col-wise multi-threading instead.)

Fold setup, identical across trials (40544 training rows per fold;
192-203 used features and 12835-13055 total bins, varying with each
trial's sampled parameters):
  Fold 1: 1647 positive / 38897 negative, pavg=0.040623 -> initscore=-3.161962
  Fold 2: 1572 positive / 38972 negative, pavg=0.038773 -> initscore=-3.210495
  Fold 3: 1619 positive / 38925 negative, pavg=0.039932 -> initscore=-3.179828

Each fold trained until validation scores did not improve for 30 rounds.
Best iteration and training/valid_1 binary_logloss per fold (hyperopt
minimizes, so the reported "best loss" is the negated objective score):

Trial 1 (3.63 s) - did not meet early stopping:
  [100] 0.119239/0.131547, [100] 0.11513/0.139265, [100] 0.116828/0.136952
  best loss: -0.8341540202815528
Trial 2 (3.13 s/trial) - did not meet early stopping:
  [100] 0.134361/0.134539, [100] 0.129831/0.142347, [100] 0.131634/0.139054
  best loss unchanged: -0.8341540202815528
Trial 3 (3.38 s/trial) - did not meet early stopping:
  [88] 0.113936/0.131766, [71] 0.11326/0.139317, [77] 0.113657/0.136864
  best loss unchanged: -0.8341540202815528
Trial 4 (2.96 s/trial) - early stopping triggered:
  [39] 0.12109/0.131246, [39] 0.116743/0.139211, [35] 0.120149/0.136702
  best loss improved: -0.8346097688713522
Trial 5 (2.80 s/trial) - early stopping triggered:
  [30] 0.111064/0.131895, [27] 0.108994/0.139854, [20] 0.116146/0.13756
  best loss unchanged: -0.8346097688713522
Trial 6 started; log truncated here.
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] Early stopping, best iteration is:
[25] training's binary_logloss: 0.120067 valid_1's binary_logloss: 0.131511
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007845 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Total Bins 13055
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 199
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Start training from score -3.210495
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] Training until validation scores don't improve for 30 rounds
10%|█ | 5/50 [00:15<02:06, 2.80s/trial, best loss: -0.8346097688713522] Early stopping, best iteration is:
[31] training's binary_logloss: 0.112434 valid_1's binary_logloss: 0.139423
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009165 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Total Bins 12996
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Start training from score -3.179828
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] Training until validation scores don't improve for 30 rounds
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] Early stopping, best iteration is:
[28] training's binary_logloss: 0.115651 valid_1's binary_logloss: 0.136891
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
10%|█ | 5/50 [00:16<02:06, 2.80s/trial, best loss: -0.8346097688713522] 12%|█▏ | 6/50 [00:16<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.010475 seconds.
You can set `force_col_wise=true` to remove the overhead.
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Total Bins 12902
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Start training from score -3.161962
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] Training until validation scores don't improve for 30 rounds
12%|█▏ | 6/50 [00:17<01:52, 2.55s/trial, best loss: -0.8346097688713522] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.128605 valid_1's binary_logloss: 0.133093
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008203 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Total Bins 12988
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Start training from score -3.210495
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] Training until validation scores don't improve for 30 rounds
12%|█▏ | 6/50 [00:18<01:52, 2.55s/trial, best loss: -0.8346097688713522] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.124147 valid_1's binary_logloss: 0.141061
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007711 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Total Bins 12898
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Start training from score -3.179828
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] Training until validation scores don't improve for 30 rounds
12%|█▏ | 6/50 [00:19<01:52, 2.55s/trial, best loss: -0.8346097688713522] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.125878 valid_1's binary_logloss: 0.13813
12%|█▏ | 6/50 [00:20<01:52, 2.55s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
12%|█▏ | 6/50 [00:20<01:52, 2.55s/trial, best loss: -0.8346097688713522] 14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008418 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Total Bins 12947
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Start training from score -3.161962
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] Training until validation scores don't improve for 30 rounds
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] Did not meet early stopping. Best iteration is:
[73] training's binary_logloss: 0.119266 valid_1's binary_logloss: 0.131216
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:20<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007783 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Total Bins 12998
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 194
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Start training from score -3.210495
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] Training until validation scores don't improve for 30 rounds
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] Early stopping, best iteration is:
[63] training's binary_logloss: 0.116719 valid_1's binary_logloss: 0.139009
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
14%|█▍ | 7/50 [00:21<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008308 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Total Bins 12968
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 199
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Info] Start training from score -3.179828
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] Training until validation scores don't improve for 30 rounds
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] Early stopping, best iteration is:
[56] training's binary_logloss: 0.120087 valid_1's binary_logloss: 0.136444
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] [LightGBM] [Warning] Unknown parameter: eval_metric
14%|█▍ | 7/50 [00:22<01:58, 2.76s/trial, best loss: -0.8346097688713522] 16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009077 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12835
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:22<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.161962
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[57] training's binary_logloss: 0.120993 valid_1's binary_logloss: 0.131385
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007612 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12988
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.210495
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
16%|█▌ | 8/50 [00:23<01:51, 2.65s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[62] training's binary_logloss: 0.115325 valid_1's binary_logloss: 0.13881
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007536 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12898
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.179828
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[50] training's binary_logloss: 0.120231 valid_1's binary_logloss: 0.136346
16%|█▌ | 8/50 [00:24<01:51, 2.65s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:25<01:51, 2.65s/trial, best loss: -0.8354478683012264] 18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008124 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12835
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.161962
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[23] training's binary_logloss: 0.116031 valid_1's binary_logloss: 0.132494
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:25<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<01:46, 2.59s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
[Hyperopt search log, condensed: repeated LightGBM warnings and per-fold setup messages deduplicated. Each trial fits 3 CV folds on 40544 rows (positives per fold: 1647 / 1572 / 1619; 192–203 used features; ~13k total bins). The same two warnings are emitted before every fit and shown once here:]

[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30

18%|█▊  |  9/50 [00:26<01:46, 2.59s/trial, best loss: -0.8354478683012264]
    (continued from above) early stopping at [22], valid_1 binary_logloss 0.140329; [20], 0.137694
20%|██  | 10/50 [00:27<01:40, 2.52s/trial, best loss: -0.8354478683012264]
    early stopping at [45], 0.131893; [30], 0.139543; [38], 0.136738
22%|██▏ | 11/50 [00:29<01:37, 2.49s/trial, best loss: -0.8354478683012264]
    did not meet early stopping; best at [78], 0.13138; [74], 0.139339; [74], 0.136737
24%|██▍ | 12/50 [00:32<01:35, 2.51s/trial, best loss: -0.8354478683012264]
    early stopping at [46], 0.13165; [40], 0.139831; [43], 0.137335
26%|██▌ | 13/50 [00:35<01:41, 2.74s/trial, best loss: -0.8354478683012264]
    did not meet early stopping; best at [100], 0.13222; [100], 0.139981; [100], 0.137316
28%|██▊ | 14/50 [00:39<01:49, 3.04s/trial, best loss: -0.8354478683012264]
    early stopping at [19], 0.132615; [17], 0.140251; third fold starting (log truncated here)
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009001 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12958
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.179828
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[19] training's binary_logloss: 0.117331 valid_1's binary_logloss: 0.137237
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
28%|██▊ | 14/50 [00:41<01:49, 3.04s/trial, best loss: -0.8354478683012264] 30%|███ | 15/50 [00:41<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:41<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
30%|███ | 15/50 [00:41<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010186 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12943
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.161962
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[17] training's binary_logloss: 0.120445 valid_1's binary_logloss: 0.132691
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
30%|███ | 15/50 [00:42<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010950 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12988
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.210495
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[16] training's binary_logloss: 0.117054 valid_1's binary_logloss: 0.139941
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008302 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12906
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 195
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.179828
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
30%|███ | 15/50 [00:43<01:38, 2.83s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[20] training's binary_logloss: 0.115401 valid_1's binary_logloss: 0.137413
30%|███ | 15/50 [00:44<01:38, 2.83s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
30%|███ | 15/50 [00:44<01:38, 2.83s/trial, best loss: -0.8354478683012264] 32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009370 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12835
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.161962
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[18] training's binary_logloss: 0.11605 valid_1's binary_logloss: 0.133209
32%|███▏ | 16/50 [00:44<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008886 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12988
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.210495
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[15] training's binary_logloss: 0.114923 valid_1's binary_logloss: 0.140959
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
32%|███▏ | 16/50 [00:45<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008924 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12898
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.179828
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[14] training's binary_logloss: 0.117846 valid_1's binary_logloss: 0.13746
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
32%|███▏ | 16/50 [00:46<01:32, 2.72s/trial, best loss: -0.8354478683012264] 34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009462 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12835
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
34%|███▍ | 17/50 [00:46<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.161962
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[49] training's binary_logloss: 0.11538 valid_1's binary_logloss: 0.131723
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010342 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12988
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.210495
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
34%|███▍ | 17/50 [00:47<01:26, 2.63s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[31] training's binary_logloss: 0.117853 valid_1's binary_logloss: 0.139219
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007465 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12898
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.179828
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[29] training's binary_logloss: 0.120676 valid_1's binary_logloss: 0.136931
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
34%|███▍ | 17/50 [00:48<01:26, 2.63s/trial, best loss: -0.8354478683012264] 36%|███▌ | 18/50 [00:48<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007036 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12835
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.161962
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[29] training's binary_logloss: 0.119523 valid_1's binary_logloss: 0.131926
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008304 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Total Bins 12988
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Start training from score -3.210495
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] Training until validation scores don't improve for 30 rounds
36%|███▌ | 18/50 [00:49<01:20, 2.53s/trial, best loss: -0.8354478683012264] Early stopping, best iteration is:
[27] training's binary_logloss: 0.115902 valid_1's binary_logloss: 0.139583
36%|███▌ | 18/50 [00:50<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:50<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:50<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
36%|███▌ | 18/50 [00:50<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Warning] Unknown parameter: eval_metric
36%|███▌ | 18/50 [00:50<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
36%|███▌ | 18/50 [00:50<01:20, 2.53s/trial, best loss: -0.8354478683012264] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010535 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[condensed log: hyperopt search, trials 18–24 of 50, 3-fold CV, LightGBM binary objective]
[LightGBM] [Info] Number of data points in the train set: 40544 per fold; heavily imbalanced (~4% positive, e.g. 1647 positive / 38897 negative in fold 1); 192–211 used features depending on trial; early stopping patience 30 rounds, boosting capped at 100 rounds.
Two warnings repeat on every fit (shown once here):
[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30

trial  s/trial  best loss (running)    fold best iters  valid_1 binary_logloss (folds 1/2/3)
 18     2.53    -0.8354478683012264    –/–/24           –/–/0.137256 (earlier folds above)
 19     2.38    -0.8354478683012264    28/23/25         0.132237/0.140243/0.137218
 20     2.41    -0.8354478683012264    71/60/58         0.130909/0.138826/0.136638
 21     2.46    -0.8361261980967356    96/75/65         0.131693/0.138474/0.136275
 22     2.58    -0.8361261980967356    63/55/48         0.131201/0.138797/0.136592
 23     2.65    -0.8361261980967356    100/100/95       0.131107/0.138596/0.136145
 24     2.46    -0.8362934408440913    (in progress)

Folds where the best iteration is 100 ran the full round budget ("Did not meet early stopping"); all others early-stopped. The running best loss improved at trials 21 and 24.
48%|████▊ | 24/50 [01:03<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
48%|████▊ | 24/50 [01:03<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008895 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Total Bins 12993
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Start training from score -3.161962
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] Training until validation scores don't improve for 30 rounds
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.123945 valid_1's binary_logloss: 0.131312
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
48%|████▊ | 24/50 [01:04<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009337 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Total Bins 13086
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Start training from score -3.210495
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] Training until validation scores don't improve for 30 rounds
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.119785 valid_1's binary_logloss: 0.138758
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
48%|████▊ | 24/50 [01:05<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009450 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Total Bins 12996
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Info] Start training from score -3.179828
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] Training until validation scores don't improve for 30 rounds
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.121345 valid_1's binary_logloss: 0.136253
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] [LightGBM] [Warning] Unknown parameter: eval_metric
48%|████▊ | 24/50 [01:06<01:04, 2.46s/trial, best loss: -0.8362934408440913] 50%|█████ | 25/50 [01:06<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:06<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
50%|█████ | 25/50 [01:06<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010296 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12993
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
50%|█████ | 25/50 [01:07<01:06, 2.65s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[99] training's binary_logloss: 0.115076 valid_1's binary_logloss: 0.131544
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.011525 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13086
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
50%|█████ | 25/50 [01:08<01:06, 2.65s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[99] training's binary_logloss: 0.110704 valid_1's binary_logloss: 0.139171
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009677 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12996
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
50%|█████ | 25/50 [01:09<01:06, 2.65s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[74] training's binary_logloss: 0.117386 valid_1's binary_logloss: 0.137077
50%|█████ | 25/50 [01:10<01:06, 2.65s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
50%|█████ | 25/50 [01:10<01:06, 2.65s/trial, best loss: -0.8365225708987197] 52%|█████▏ | 26/50 [01:10<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:10<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
52%|█████▏ | 26/50 [01:10<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009136 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12993
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[40] training's binary_logloss: 0.118745 valid_1's binary_logloss: 0.13174
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009702 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13086
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
52%|█████▏ | 26/50 [01:11<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[37] training's binary_logloss: 0.115548 valid_1's binary_logloss: 0.138995
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010037 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12996
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
52%|█████▏ | 26/50 [01:12<01:13, 3.05s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[32] training's binary_logloss: 0.119523 valid_1's binary_logloss: 0.136814
52%|█████▏ | 26/50 [01:13<01:13, 3.05s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
52%|█████▏ | 26/50 [01:13<01:13, 3.05s/trial, best loss: -0.8365225708987197] 54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009837 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12993
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
54%|█████▍ | 27/50 [01:13<01:07, 2.92s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[97] training's binary_logloss: 0.119337 valid_1's binary_logloss: 0.131417
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010306 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13086
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
54%|█████▍ | 27/50 [01:14<01:07, 2.92s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.114456 valid_1's binary_logloss: 0.139157
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008807 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12996
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
54%|█████▍ | 27/50 [01:15<01:07, 2.92s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[97] training's binary_logloss: 0.11659 valid_1's binary_logloss: 0.136713
54%|█████▍ | 27/50 [01:16<01:07, 2.92s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
54%|█████▍ | 27/50 [01:16<01:07, 2.92s/trial, best loss: -0.8365225708987197] 56%|█████▌ | 28/50 [01:16<01:06, 3.03s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
56%|█████▌ | 28/50 [01:16<01:06, 3.03s/trial, best loss: -0.8365225708987197]
[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
Fold 1: 1647 positive / 38897 negative of 40544 rows, 202 used features, 12943 bins, initscore=-3.161962; early stopping at [22]: training binary_logloss 0.121698, valid_1 binary_logloss 0.132138
Fold 2: 1572 positive / 38972 negative of 40544 rows, 194 used features, 12998 bins, initscore=-3.210495; early stopping at [25]: training binary_logloss 0.11611, valid_1 binary_logloss 0.139307
Fold 3: 1619 positive / 38925 negative of 40544 rows, 197 used features, 12958 bins, initscore=-3.179828; early stopping at [17]: training binary_logloss 0.122317, valid_1 binary_logloss 0.136889
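Both recurring warnings stem from parameter naming rather than from training itself: `eval_metric` is not a LightGBM parameter (its native name is `metric`; `eval_metric` is the XGBoost-style spelling, and in LightGBM's scikit-learn wrapper it belongs to `fit()`, not the params dict), and `early_stopping_round` / `early_stopping_rounds` are aliases of the same setting, so passing both makes LightGBM ignore one of them. A minimal sketch of a cleanup helper (the name `clean_lgbm_params` is our own, not part of LightGBM):

```python
def clean_lgbm_params(params):
    """Normalize a LightGBM params dict to avoid the two warnings above."""
    params = dict(params)  # don't mutate the caller's dict
    # 'eval_metric' is unknown to LightGBM; its native name is 'metric'.
    if "eval_metric" in params:
        params.setdefault("metric", params.pop("eval_metric"))
    # 'early_stopping_round' and 'early_stopping_rounds' are aliases of the
    # same setting; keep a single spelling so neither value is ignored.
    if "early_stopping_round" in params:
        params.pop("early_stopping_rounds", None)
    return params
```

Running the trials with a cleaned dict would leave the log output unchanged except for silencing the repeated warnings.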
58%|█████▊ | 29/50 [01:18<00:57, 2.74s/trial, best loss: -0.8365225708987197]
Fold 1: 202 used features, 12943 bins; did not meet early stopping, best [100]: training binary_logloss 0.117385, valid_1 binary_logloss 0.131388
Fold 2: 194 used features, 12998 bins; did not meet early stopping, best [98]: training binary_logloss 0.113488, valid_1 binary_logloss 0.139105
Fold 3: 197 used features, 12958 bins; did not meet early stopping, best [100]: training binary_logloss 0.114829, valid_1 binary_logloss 0.136714
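The `[binary:BoostFromScore]` lines in the log are easy to verify by hand: for a binary objective, LightGBM starts boosting from the log-odds of the positive-class rate, `initscore = log(pavg / (1 - pavg))`. A quick stdlib check against the fold counts reported above:

```python
import math

def init_score(n_pos: int, n_neg: int) -> float:
    """Log-odds of the positive-class rate (LightGBM's binary BoostFromScore)."""
    pavg = n_pos / (n_pos + n_neg)
    return math.log(pavg / (1.0 - pavg))

# Counts from the three CV folds in the log:
init_score(1647, 38897)  # ≈ -3.161962 (pavg ≈ 0.040623)
init_score(1572, 38972)  # ≈ -3.210495 (pavg ≈ 0.038773)
init_score(1619, 38925)  # ≈ -3.179828 (pavg ≈ 0.039932)
```

The three results match the "Start training from score ..." lines, confirming the folds share the same 40544-row training size with slightly different positive counts.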
60%|██████ | 30/50 [01:21<00:56, 2.84s/trial, best loss: -0.8365225708987197]
Fold 1: 205 used features, 12993 bins; did not meet early stopping, best [100]: training binary_logloss 0.136137, valid_1 binary_logloss 0.135864
Fold 2: 200 used features, 13059 bins; did not meet early stopping, best [100]: training binary_logloss 0.131758, valid_1 binary_logloss 0.143909
Fold 3: 202 used features, 12996 bins; did not meet early stopping, best [100]: training binary_logloss 0.133319, valid_1 binary_logloss 0.140365
62%|██████▏ | 31/50 [01:24<00:52, 2.78s/trial, best loss: -0.8365225708987197]
Fold 1: 197 used features, 12902 bins; early stopping at [52]: training binary_logloss 0.120557, valid_1 binary_logloss 0.131463
Fold 2: 192 used features, 12988 bins; early stopping at [42]: training binary_logloss 0.119216, valid_1 binary_logloss 0.138844
Fold 3: 192 used features, 12898 bins; early stopping at [43]: training binary_logloss 0.120672, valid_1 binary_logloss 0.136077
64%|██████▍ | 32/50 [01:26<00:47, 2.65s/trial, best loss: -0.8365225708987197]
Fold 1: 202 used features, 12943 bins; early stopping at [29]: training binary_logloss 0.116782, valid_1 binary_logloss 0.132291
Fold 2: 194 used features, 12998 bins; early stopping at [29]: training binary_logloss 0.112525, valid_1 binary_logloss 0.139834
Fold 3: 197 used features, 12958 bins; early stopping at [26]: training binary_logloss 0.116376, valid_1 binary_logloss 0.13759
64%|██████▍ | 32/50 [01:29<00:47, 2.65s/trial, best loss: -0.8365225708987197] 66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007899 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13047
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 210
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.121235 valid_1's binary_logloss: 0.131795
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:29<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.010711 seconds.
You can set `force_col_wise=true` to remove the overhead.
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13161
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 208
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
66%|██████▌ | 33/50 [01:30<00:43, 2.55s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.117131 valid_1's binary_logloss: 0.139355
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009935 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13044
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
66%|██████▌ | 33/50 [01:31<00:43, 2.55s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.11886 valid_1's binary_logloss: 0.136817
66%|██████▌ | 33/50 [01:32<00:43, 2.55s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
66%|██████▌ | 33/50 [01:32<00:43, 2.55s/trial, best loss: -0.8365225708987197] 68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009479 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13047
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 210
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[87] training's binary_logloss: 0.119217 valid_1's binary_logloss: 0.131162
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009390 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13130
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
68%|██████▊ | 34/50 [01:32<00:44, 2.76s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[95] training's binary_logloss: 0.11377 valid_1's binary_logloss: 0.138774
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008385 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13000
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
68%|██████▊ | 34/50 [01:33<00:44, 2.76s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[71] training's binary_logloss: 0.119355 valid_1's binary_logloss: 0.136516
68%|██████▊ | 34/50 [01:34<00:44, 2.76s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
68%|██████▊ | 34/50 [01:34<00:44, 2.76s/trial, best loss: -0.8365225708987197] 70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008260 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12993
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[20] training's binary_logloss: 0.117227 valid_1's binary_logloss: 0.132479
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:34<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007966 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13059
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 200
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[22] training's binary_logloss: 0.110745 valid_1's binary_logloss: 0.140016
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007917 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12996
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
70%|███████ | 35/50 [01:35<00:37, 2.51s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[18] training's binary_logloss: 0.116325 valid_1's binary_logloss: 0.136868
70%|███████ | 35/50 [01:36<00:37, 2.51s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
70%|███████ | 35/50 [01:36<00:37, 2.51s/trial, best loss: -0.8365225708987197] 72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007671 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12902
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
72%|███████▏ | 36/50 [01:36<00:32, 2.33s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.133676 valid_1's binary_logloss: 0.135443
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007838 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12988
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
72%|███████▏ | 36/50 [01:37<00:32, 2.33s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.129111 valid_1's binary_logloss: 0.143846
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008862 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12898
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
72%|███████▏ | 36/50 [01:38<00:32, 2.33s/trial, best loss: -0.8365225708987197] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.131021 valid_1's binary_logloss: 0.140157
72%|███████▏ | 36/50 [01:39<00:32, 2.33s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
72%|███████▏ | 36/50 [01:39<00:32, 2.33s/trial, best loss: -0.8365225708987197] 74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007790 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12943
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
74%|███████▍ | 37/50 [01:39<00:34, 2.62s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[65] training's binary_logloss: 0.118118 valid_1's binary_logloss: 0.131584
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007983 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12988
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
74%|███████▍ | 37/50 [01:40<00:34, 2.62s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[65] training's binary_logloss: 0.113718 valid_1's binary_logloss: 0.139017
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008166 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12906
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 195
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
74%|███████▍ | 37/50 [01:41<00:34, 2.62s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[59] training's binary_logloss: 0.116916 valid_1's binary_logloss: 0.136382
74%|███████▍ | 37/50 [01:42<00:34, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:42<00:34, 2.62s/trial, best loss: -0.8365225708987197] 76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008362 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12943
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
76%|███████▌ | 38/50 [01:42<00:32, 2.68s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[43] training's binary_logloss: 0.120211 valid_1's binary_logloss: 0.131444
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.011524 seconds.
You can set `force_col_wise=true` to remove the overhead.
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12998
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 194
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[47] training's binary_logloss: 0.114602 valid_1's binary_logloss: 0.139106
76%|███████▌ | 38/50 [01:43<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008487 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12968
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 199
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[40] training's binary_logloss: 0.118651 valid_1's binary_logloss: 0.136544
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
76%|███████▌ | 38/50 [01:44<00:32, 2.68s/trial, best loss: -0.8365225708987197] 78%|███████▊ | 39/50 [01:44<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:44<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
78%|███████▊ | 39/50 [01:44<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007906 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12947
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[40] training's binary_logloss: 0.119116 valid_1's binary_logloss: 0.131438
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.010484 seconds.
You can set `force_col_wise=true` to remove the overhead.
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13059
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 200
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
78%|███████▊ | 39/50 [01:45<00:28, 2.62s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[37] training's binary_logloss: 0.116085 valid_1's binary_logloss: 0.138771
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008602 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12996
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[34] training's binary_logloss: 0.11889 valid_1's binary_logloss: 0.136703
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
78%|███████▊ | 39/50 [01:46<00:28, 2.62s/trial, best loss: -0.8365225708987197] 80%|████████ | 40/50 [01:46<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009081 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12947
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[29] training's binary_logloss: 0.119904 valid_1's binary_logloss: 0.131539
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
80%|████████ | 40/50 [01:47<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.011067 seconds.
You can set `force_col_wise=true` to remove the overhead.
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 13055
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 199
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[28] training's binary_logloss: 0.115561 valid_1's binary_logloss: 0.139612
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007772 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12996
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
80%|████████ | 40/50 [01:48<00:24, 2.49s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[26] training's binary_logloss: 0.118624 valid_1's binary_logloss: 0.136879
80%|████████ | 40/50 [01:49<00:24, 2.49s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
80%|████████ | 40/50 [01:49<00:24, 2.49s/trial, best loss: -0.8365225708987197] 82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1647, number of negative: 38897
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.012833 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12943
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.040623 -> initscore=-3.161962
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.161962
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[39] training's binary_logloss: 0.121601 valid_1's binary_logloss: 0.131732
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:49<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1572, number of negative: 38972
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.012323 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12998
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 194
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.038773 -> initscore=-3.210495
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.210495
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[34] training's binary_logloss: 0.118977 valid_1's binary_logloss: 0.13905
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of positive: 1619, number of negative: 38925
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008469 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Total Bins 12958
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
82%|████████▏ | 41/50 [01:50<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
82%|████████▏ | 41/50 [01:51<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
82%|████████▏ | 41/50 [01:51<00:21, 2.40s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
82%|████████▏ | 41/50 [01:51<00:21, 2.40s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[34] training's binary_logloss: 0.120814 valid_1's binary_logloss: 0.136438
82%|████████▏ | 41/50 [01:51<00:21, 2.40s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
 84%|████████▍ | 42/50 [01:51<00:18, 2.35s/trial, best loss: -0.8365225708987197]
[LightGBM] [Info] Number of positive: 1647, number of negative: 38897; 40544 rows, 203 used features, 12947 total bins; init score -3.161962
Did not meet early stopping. Best iteration is:
[100]	training's binary_logloss: 0.119106	valid_1's binary_logloss: 0.131122
[LightGBM] [Info] Number of positive: 1572, number of negative: 38972; 40544 rows, 194 used features, 12998 total bins; init score -3.210495
Did not meet early stopping. Best iteration is:
[90]	training's binary_logloss: 0.116567	valid_1's binary_logloss: 0.138873
[LightGBM] [Info] Number of positive: 1619, number of negative: 38925; 40544 rows, 199 used features, 12968 total bins; init score -3.179828
Did not meet early stopping. Best iteration is:
[85]	training's binary_logloss: 0.118801	valid_1's binary_logloss: 0.136111
 86%|████████▌ | 43/50 [01:54<00:17, 2.52s/trial, best loss: -0.8365225708987197]
[LightGBM] [Info] Number of positive: 1647, number of negative: 38897; 40544 rows, 210 used features, 13047 total bins; init score -3.161962
Did not meet early stopping. Best iteration is:
[100]	training's binary_logloss: 0.12624	valid_1's binary_logloss: 0.132688
[LightGBM] [Info] Number of positive: 1572, number of negative: 38972; 40544 rows, 205 used features, 13130 total bins; init score -3.210495
Did not meet early stopping. Best iteration is:
[100]	training's binary_logloss: 0.121959	valid_1's binary_logloss: 0.140097
[LightGBM] [Info] Number of positive: 1619, number of negative: 38925; 40544 rows, 203 used features, 13000 total bins; init score -3.179828
Did not meet early stopping. Best iteration is:
[100]	training's binary_logloss: 0.123583	valid_1's binary_logloss: 0.137169
 88%|████████▊ | 44/50 [01:57<00:16, 2.70s/trial, best loss: -0.8365225708987197]
[LightGBM] [Info] Number of positive: 1647, number of negative: 38897; 40544 rows, 197 used features, 12902 total bins; init score -3.161962
Early stopping, best iteration is:
[23]	training's binary_logloss: 0.114805	valid_1's binary_logloss: 0.132779
[LightGBM] [Info] Number of positive: 1572, number of negative: 38972; 40544 rows, 192 used features, 12988 total bins; init score -3.210495
Early stopping, best iteration is:
[18]	training's binary_logloss: 0.11472	valid_1's binary_logloss: 0.140404
[LightGBM] [Info] Number of positive: 1619, number of negative: 38925; 40544 rows, 192 used features, 12898 total bins; init score -3.179828
Early stopping, best iteration is:
[19]	training's binary_logloss: 0.115511	valid_1's binary_logloss: 0.137588
 90%|█████████ | 45/50 [01:59<00:12, 2.56s/trial, best loss: -0.8365225708987197]
[LightGBM] [Info] Number of positive: 1647, number of negative: 38897; 40544 rows, 192 used features, 12835 total bins; init score -3.161962
Early stopping, best iteration is:
[70]	training's binary_logloss: 0.115612	valid_1's binary_logloss: 0.131625
[LightGBM] [Info] Number of positive: 1572, number of negative: 38972; 40544 rows, 192 used features, 12988 total bins; init score -3.210495
Early stopping, best iteration is:
[57]	training's binary_logloss: 0.114417	valid_1's binary_logloss: 0.139373
[LightGBM] [Info] Number of positive: 1619, number of negative: 38925; 40544 rows, 192 used features, 12898 total bins; init score -3.179828
Early stopping, best iteration is:
[62]	training's binary_logloss: 0.114805	valid_1's binary_logloss: 0.136936
 92%|█████████▏| 46/50 [02:02<00:10, 2.75s/trial, best loss: -0.8365225708987197]
[LightGBM] [Info] Number of positive: 1647, number of negative: 38897; 40544 rows, 202 used features, 12943 total bins; init score -3.161962
Early stopping, best iteration is:
[14]	training's binary_logloss: 0.123339	valid_1's binary_logloss: 0.132372
[LightGBM] [Info] Number of positive: 1572, number of negative: 38972; 40544 rows, 192 used features, 12988 total bins; init score -3.210495
Early stopping, best iteration is:
[13]	training's binary_logloss: 0.119633	valid_1's binary_logloss: 0.141193
[LightGBM] [Info] Number of positive: 1619, number of negative: 38925; 12906 total bins
92%|█████████▏| 46/50 [02:04<00:10, 2.75s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 195
92%|█████████▏| 46/50 [02:04<00:10, 2.75s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
92%|█████████▏| 46/50 [02:04<00:10, 2.75s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
92%|█████████▏| 46/50 [02:04<00:10, 2.75s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039932 -> initscore=-3.179828
92%|█████████▏| 46/50 [02:04<00:10, 2.75s/trial, best loss: -0.8365225708987197] [LightGBM] [Info] Start training from score -3.179828
92%|█████████▏| 46/50 [02:04<00:10, 2.75s/trial, best loss: -0.8365225708987197] Training until validation scores don't improve for 30 rounds
92%|█████████▏| 46/50 [02:04<00:10, 2.75s/trial, best loss: -0.8365225708987197] Early stopping, best iteration is:
[13] training's binary_logloss: 0.12138 valid_1's binary_logloss: 0.137428
92%|█████████▏| 46/50 [02:04<00:10, 2.75s/trial, best loss: -0.8365225708987197] [LightGBM] [Warning] Unknown parameter: eval_metric
 94%|█████████▍| 47/50 [02:04<00:07, 2.52s/trial, best loss: -0.8365225708987197]
[LightGBM] [Info] Number of positive: 1647, number of negative: 38897
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] Start training from score -3.161962
Early stopping, best iteration is:
[52]	training's binary_logloss: 0.119292	valid_1's binary_logloss: 0.131268
[LightGBM] [Info] Number of positive: 1572, number of negative: 38972
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] Start training from score -3.210495
Early stopping, best iteration is:
[51]	training's binary_logloss: 0.11486	valid_1's binary_logloss: 0.139012
[LightGBM] [Info] Number of positive: 1619, number of negative: 38925
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] Start training from score -3.179828
Early stopping, best iteration is:
[45]	training's binary_logloss: 0.118593	valid_1's binary_logloss: 0.13685
 96%|█████████▌| 48/50 [02:06<00:04, 2.25s/trial, best loss: -0.8365225708987197]
[LightGBM] [Info] Number of positive: 1647, number of negative: 38897
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] Start training from score -3.161962
Did not meet early stopping. Best iteration is:
[100]	training's binary_logloss: 0.123923	valid_1's binary_logloss: 0.132366
[LightGBM] [Info] Number of positive: 1572, number of negative: 38972
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] Start training from score -3.210495
Did not meet early stopping. Best iteration is:
[100]	training's binary_logloss: 0.119581	valid_1's binary_logloss: 0.140569
[LightGBM] [Info] Number of positive: 1619, number of negative: 38925
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] Start training from score -3.179828
Did not meet early stopping. Best iteration is:
[100]	training's binary_logloss: 0.121237	valid_1's binary_logloss: 0.137601
 98%|█████████▊| 49/50 [02:10<00:02, 2.66s/trial, best loss: -0.8365225708987197]
[LightGBM] [Info] Number of positive: 1647, number of negative: 38897
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
[LightGBM] [Info] Start training from score -3.161962
Early stopping, best iteration is:
[26]	training's binary_logloss: 0.118459	valid_1's binary_logloss: 0.131829
[LightGBM] [Info] Number of positive: 1572, number of negative: 38972
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 200
[LightGBM] [Info] Start training from score -3.210495
Early stopping, best iteration is:
[29]	training's binary_logloss: 0.11189	valid_1's binary_logloss: 0.139652
[LightGBM] [Info] Number of positive: 1619, number of negative: 38925
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
[LightGBM] [Info] Start training from score -3.179828
Early stopping, best iteration is:
[26]	training's binary_logloss: 0.115607	valid_1's binary_logloss: 0.137612
100%|██████████| 50/50 [02:12<00:00, 2.65s/trial, best loss: -0.8365225708987197]
{'learning_rate': 0.028291797782733982, 'max_depth': 154.0, 'min_child_samples': 64.0, 'num_leaves': 32.0, 'subsample': 0.9145203867432408}
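Note that hyperopt returns every `hp.quniform`-sampled value as a float, which is why integer-valued parameters such as `max_depth`, `min_child_samples`, and `num_leaves` appear above as `154.0`, `64.0`, and `32.0`. A minimal sketch of the cast typically needed before refitting a final model with these values (the parameter names are taken from the dict above; the search space definition itself is not shown):

```python
# Best trial from the search above; quniform values come back as floats.
best = {'learning_rate': 0.028291797782733982, 'max_depth': 154.0,
        'min_child_samples': 64.0, 'num_leaves': 32.0,
        'subsample': 0.9145203867432408}

# LightGBM expects these three as integers, so cast them before refitting.
int_params = {'max_depth', 'min_child_samples', 'num_leaves'}
final_params = {k: int(v) if k in int_params else v for k, v in best.items()}
```

Alternatively, `hyperopt.space_eval(space, best)` maps the raw result back through the original search space, which achieves the same thing when the space is available.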