0%|          | 0/50 [00:00<?, ?trial/s, best loss=?]

[Hyperopt runs 50 trials; each trial fits LightGBM on 3 CV folds of 40544 rows each.
The lines below are printed before every fold and are deduplicated here:]

[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009724 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
Training until validation scores don't improve for 30 rounds

Trial 1 (3 folds; the splits are identical across trials):
  fold 1: 1611 pos / 38933 neg, 12869 bins, 199 features, pavg=0.039735 -> initscore=-3.184987
          did not meet early stopping; best iteration [71]: train binary_logloss 0.112255, valid 0.135706
  fold 2: 1593 pos / 38951 neg, 12947 bins, 203 features, pavg=0.039291 -> initscore=-3.196685
          did not meet early stopping; best iteration [76]: train binary_logloss 0.110659, valid 0.138201
  fold 3: 1616 pos / 38928 neg, 12908 bins, 200 features, pavg=0.039858 -> initscore=-3.181760
          early stopping; best iteration [61]: train binary_logloss 0.115291, valid 0.135127
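The per-fold initscore in the log is just the log-odds of the positive-class rate (pavg): for binary objectives, LightGBM boosts from log(pavg / (1 - pavg)). A quick stdlib check reproduces fold 1's value from its class counts:

```python
import math

def init_score(n_pos, n_neg):
    """Log-odds of the positive rate, which is what LightGBM's
    [binary:BoostFromScore] line reports as initscore."""
    pavg = n_pos / (n_pos + n_neg)
    return math.log(pavg / (1.0 - pavg))

# Fold 1 from the log: 1611 positives, 38933 negatives.
print(round(init_score(1611, 38933), 6))  # -> -3.184987
```

The other two folds' initscores (-3.196685 and -3.181760) follow from their counts the same way.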
2%|▏         | 1/50 [00:03<02:51, 3.51s/trial, best loss: -0.8321249357878048]

Trial 2 (warnings as above):
  fold 1: 12812 bins, 194 features; early stopping; best iteration [23]: train binary_logloss 0.116064, valid 0.136172
  fold 2: 12943 bins, 202 features; early stopping; best iteration [16]: train binary_logloss 0.120339, valid 0.138716
  fold 3: 12908 bins, 200 features; early stopping; best iteration [16]: train binary_logloss 0.122183, valid 0.135018
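The two repeated warnings are configuration issues, not failures: `eval_metric` is a keyword of the scikit-learn wrapper's `fit()` method, not a native LightGBM parameter (the native key is `metric`), and `early_stopping_rounds` is an alias of `early_stopping_round`, so passing both makes LightGBM keep one and ignore the other. The exact training call for this run is not shown in the log, so the dict below is an illustrative sketch of a cleaned-up native params dict:

```python
# Illustrative params dict that avoids both warnings seen in the log.
# "metric" is the native key (not "eval_metric"), and only one spelling
# of the early-stopping setting is passed.
params = {
    "objective": "binary",
    "metric": "binary_logloss",   # replaces the unrecognized "eval_metric"
    "early_stopping_round": 30,   # don't also pass "early_stopping_rounds"
    "verbosity": -1,              # optional: quiets the per-fold chatter
}

# Sanity checks: none of the warning-triggering keys remain.
assert "eval_metric" not in params
assert "early_stopping_rounds" not in params
```

With the sklearn API instead, `eval_metric` would go to `LGBMClassifier.fit(...)` rather than into the params dict.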
2%|▏ | 1/50 [00:05<02:51, 3.51s/trial, best loss: -0.8321249357878048] 4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.011380 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Total Bins 12804
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Start training from score -3.184987
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] Training until validation scores don't improve for 30 rounds
4%|▍ | 2/50 [00:05<02:07, 2.65s/trial, best loss: -0.8321249357878048] Early stopping, best iteration is:
[25] training's binary_logloss: 0.116716 valid_1's binary_logloss: 0.136274
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008522 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Total Bins 12847
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 195
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Start training from score -3.196685
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] Training until validation scores don't improve for 30 rounds
4%|▍ | 2/50 [00:06<02:07, 2.65s/trial, best loss: -0.8321249357878048] Early stopping, best iteration is:
[25] training's binary_logloss: 0.115805 valid_1's binary_logloss: 0.137993
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010308 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Total Bins 12817
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Start training from score -3.181760
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] Training until validation scores don't improve for 30 rounds
4%|▍ | 2/50 [00:07<02:07, 2.65s/trial, best loss: -0.8321249357878048] Early stopping, best iteration is:
[25] training's binary_logloss: 0.116979 valid_1's binary_logloss: 0.135074
4%|▍ | 2/50 [00:08<02:07, 2.65s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
4%|▍ | 2/50 [00:08<02:07, 2.65s/trial, best loss: -0.8321249357878048] 6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009715 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Total Bins 12944
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Start training from score -3.184987
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] Training until validation scores don't improve for 30 rounds
6%|▌ | 3/50 [00:08<02:12, 2.83s/trial, best loss: -0.8321249357878048] Early stopping, best iteration is:
[18] training's binary_logloss: 0.11434 valid_1's binary_logloss: 0.136843
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007691 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Total Bins 12993
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Start training from score -3.196685
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] Training until validation scores don't improve for 30 rounds
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] Early stopping, best iteration is:
[18] training's binary_logloss: 0.113403 valid_1's binary_logloss: 0.13851
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
6%|▌ | 3/50 [00:09<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008144 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Total Bins 12917
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Start training from score -3.181760
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] Training until validation scores don't improve for 30 rounds
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] Early stopping, best iteration is:
[13] training's binary_logloss: 0.11959 valid_1's binary_logloss: 0.135042
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
6%|▌ | 3/50 [00:10<02:12, 2.83s/trial, best loss: -0.8321249357878048] 8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007364 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Total Bins 12804
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Start training from score -3.184987
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] Training until validation scores don't improve for 30 rounds
8%|▊ | 4/50 [00:10<01:53, 2.46s/trial, best loss: -0.8321249357878048] Early stopping, best iteration is:
[50] training's binary_logloss: 0.114951 valid_1's binary_logloss: 0.135266
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009043 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Total Bins 12838
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Info] Start training from score -3.196685
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] Training until validation scores don't improve for 30 rounds
8%|▊ | 4/50 [00:11<01:53, 2.46s/trial, best loss: -0.8321249357878048] Early stopping, best iteration is:
[49] training's binary_logloss: 0.114376 valid_1's binary_logloss: 0.138019
8%|▊ | 4/50 [00:12<01:53, 2.46s/trial, best loss: -0.8321249357878048] [LightGBM] [Warning] Unknown parameter: eval_metric
  8%|▊         | 4/50 [00:12<01:53, 2.46s/trial, best loss: -0.8321249357878048]
[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.006748 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12817
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
[LightGBM] [Info] Start training from score -3.181760
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[51]	training's binary_logloss: 0.114762	valid_1's binary_logloss: 0.135074
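The repeated `[LightGBM] [Warning] Unknown parameter: eval_metric` lines above suggest that `eval_metric` was placed inside the booster's parameter dict; it is not a core LightGBM parameter name (in the scikit-learn wrapper it is a keyword argument of `fit()`). A minimal sketch of moving it out of the params dict, assuming the trial builds its parameters as a plain dict (the dict contents here are hypothetical, reconstructed from the log):

```python
# Hypothetical trial params as suggested by the log. 'eval_metric' is not a
# core LightGBM parameter key, which triggers
# "[Warning] Unknown parameter: eval_metric" on every fit.
params = {
    "objective": "binary",
    "n_estimators": 100,
    "eval_metric": "auc",  # wrong place: this belongs to fit(), not the params dict
}

# Move it to the fit()-time keyword arguments instead (scikit-learn API):
fit_kwargs = {"eval_metric": params.pop("eval_metric")}

# Then (not executed here, since it needs the lightgbm package and data):
# model = lightgbm.LGBMClassifier(**params)
# model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], **fit_kwargs)
```

With the key removed from `params`, the per-fold warning disappears and the metric is still evaluated on `eval_set`.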
 10%|█         | 5/50 [00:13<02:02, 2.73s/trial, best loss: -0.8321345573094822]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Total Bins 12812
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 194
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[28]	training's binary_logloss: 0.11413	valid_1's binary_logloss: 0.13591
[LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[LightGBM] [Info] Total Bins 12943
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
[LightGBM] [Info] Start training from score -3.196685
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[17]	training's binary_logloss: 0.120592	valid_1's binary_logloss: 0.138248
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Total Bins 12879
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] Start training from score -3.181760
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[26]	training's binary_logloss: 0.116027	valid_1's binary_logloss: 0.134638
 12%|█▏        | 6/50 [00:15<01:49, 2.49s/trial, best loss: -0.8321862965498973]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Total Bins 12900
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[33]	training's binary_logloss: 0.113658	valid_1's binary_logloss: 0.13587
[LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[LightGBM] [Info] Total Bins 12993
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 205
[LightGBM] [Info] Start training from score -3.196685
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[31]	training's binary_logloss: 0.113983	valid_1's binary_logloss: 0.138398
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Total Bins 12917
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
[LightGBM] [Info] Start training from score -3.181760
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[28]	training's binary_logloss: 0.116757	valid_1's binary_logloss: 0.135006
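The warning `early_stopping_round is set=30, early_stopping_rounds=30 will be ignored` means both spellings of the same LightGBM alias ended up set at once (for example, one hard-coded in the base params and one injected by the search space or the training call). The run still early-stops correctly at 30 rounds, but keeping a single spelling silences the per-fold noise; a sketch under that assumption, as a plain dict cleanup:

```python
# Both aliases of the same LightGBM setting present at once, as in the log:
params = {"early_stopping_round": 30, "early_stopping_rounds": 30}

# Keep one canonical spelling and drop the duplicate alias to silence
# "early_stopping_round is set=..., early_stopping_rounds=... will be ignored".
# The log shows LightGBM itself resolves to early_stopping_round, so keep that one.
if "early_stopping_round" in params and "early_stopping_rounds" in params:
    params.pop("early_stopping_rounds")

# params now carries a single, unambiguous early-stopping setting
```

The same cleanup applies wherever the duplicate originates; the point is that only one alias should reach the booster.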
 14%|█▍        | 7/50 [00:18<01:49, 2.55s/trial, best loss: -0.8321862965498973]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Total Bins 12804
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[95]	training's binary_logloss: 0.115034	valid_1's binary_logloss: 0.135206
[LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[LightGBM] [Info] Total Bins 12847
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 195
[LightGBM] [Info] Start training from score -3.196685
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[95]	training's binary_logloss: 0.114229	valid_1's binary_logloss: 0.137941
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Total Bins 12817
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[97]	training's binary_logloss: 0.11512	valid_1's binary_logloss: 0.134343
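The `best loss: -0.83...` values in the progress bar are negative because hyperopt's `fmin` minimizes, so an objective that maximizes a score (here, apparently a mean cross-validated metric such as AUC over the three folds fitted per trial) returns its negation. A minimal sketch of that return convention (`"ok"` is the value of hyperopt's `STATUS_OK` constant; the fold scores below are illustrative, not taken from this run):

```python
def trial_result(fold_scores):
    """Per-fold validation scores -> hyperopt result dict.

    hyperopt minimizes the 'loss' entry, so a score we want to
    maximize is returned negated; a progress-bar readout of
    "best loss: -0.8335" therefore means a best mean score of ~0.8335.
    """
    mean_score = sum(fold_scores) / len(fold_scores)
    return {"loss": -mean_score, "status": "ok"}  # "ok" == hyperopt.STATUS_OK

# Illustrative trial with three fold scores averaging 0.8335:
result = trial_result([0.83, 0.84, 0.8305])
```

Inside a real objective, `fold_scores` would come from the three fits logged per trial above, and the returned dict would be handed back to `fmin`.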
14%|█▍ | 7/50 [00:21<01:49, 2.55s/trial, best loss: -0.8321862965498973] 16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009362 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Total Bins 12804
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
16%|█▌ | 8/50 [00:21<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Start training from score -3.184987
16%|█▌ | 8/50 [00:22<01:56, 2.77s/trial, best loss: -0.8334837969495431] Training until validation scores don't improve for 30 rounds
16%|█▌ | 8/50 [00:22<01:56, 2.77s/trial, best loss: -0.8334837969495431] Did not meet early stopping. Best iteration is:
[80] training's binary_logloss: 0.116244 valid_1's binary_logloss: 0.135104
16%|█▌ | 8/50 [00:22<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.019852 seconds.
You can set `force_col_wise=true` to remove the overhead.
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Total Bins 12838
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Start training from score -3.196685
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] Training until validation scores don't improve for 30 rounds
16%|█▌ | 8/50 [00:23<01:56, 2.77s/trial, best loss: -0.8334837969495431] Did not meet early stopping. Best iteration is:
[76] training's binary_logloss: 0.116122 valid_1's binary_logloss: 0.137831
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008059 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Total Bins 12817
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Info] Start training from score -3.181760
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] Training until validation scores don't improve for 30 rounds
16%|█▌ | 8/50 [00:24<01:56, 2.77s/trial, best loss: -0.8334837969495431] Did not meet early stopping. Best iteration is:
[72] training's binary_logloss: 0.11803 valid_1's binary_logloss: 0.134506
16%|█▌ | 8/50 [00:25<01:56, 2.77s/trial, best loss: -0.8334837969495431] [LightGBM] [Warning] Unknown parameter: eval_metric
16%|█▌ | 8/50 [00:25<01:56, 2.77s/trial, best loss: -0.8334837969495431] 18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007825 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Total Bins 12804
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Start training from score -3.184987
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] Training until validation scores don't improve for 30 rounds
18%|█▊ | 9/50 [00:25<02:07, 3.11s/trial, best loss: -0.8335214645673078] Early stopping, best iteration is:
[19] training's binary_logloss: 0.119694 valid_1's binary_logloss: 0.135681
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007929 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Total Bins 12838
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Start training from score -3.196685
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] Training until validation scores don't improve for 30 rounds
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] Early stopping, best iteration is:
[16] training's binary_logloss: 0.120319 valid_1's binary_logloss: 0.138538
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009337 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Total Bins 12817
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
18%|█▊ | 9/50 [00:26<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
18%|█▊ | 9/50 [00:27<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Start training from score -3.181760
18%|█▊ | 9/50 [00:27<02:07, 3.11s/trial, best loss: -0.8335214645673078] Training until validation scores don't improve for 30 rounds
18%|█▊ | 9/50 [00:27<02:07, 3.11s/trial, best loss: -0.8335214645673078] Early stopping, best iteration is:
[17] training's binary_logloss: 0.120802 valid_1's binary_logloss: 0.13482
18%|█▊ | 9/50 [00:27<02:07, 3.11s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
18%|█▊ | 9/50 [00:27<02:07, 3.11s/trial, best loss: -0.8335214645673078] 20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008130 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Total Bins 12804
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Start training from score -3.184987
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] Training until validation scores don't improve for 30 rounds
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] Early stopping, best iteration is:
[51] training's binary_logloss: 0.119408 valid_1's binary_logloss: 0.134911
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:27<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009044 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Total Bins 12838
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Start training from score -3.196685
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] Training until validation scores don't improve for 30 rounds
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] Early stopping, best iteration is:
[49] training's binary_logloss: 0.119133 valid_1's binary_logloss: 0.137546
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007803 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Total Bins 12817
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
20%|██ | 10/50 [00:28<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:29<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
20%|██ | 10/50 [00:29<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
20%|██ | 10/50 [00:29<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Info] Start training from score -3.181760
20%|██ | 10/50 [00:29<01:47, 2.69s/trial, best loss: -0.8335214645673078] Training until validation scores don't improve for 30 rounds
20%|██ | 10/50 [00:29<01:47, 2.69s/trial, best loss: -0.8335214645673078] Early stopping, best iteration is:
[47] training's binary_logloss: 0.120688 valid_1's binary_logloss: 0.134302
20%|██ | 10/50 [00:29<01:47, 2.69s/trial, best loss: -0.8335214645673078] [LightGBM] [Warning] Unknown parameter: eval_metric
20%|██ | 10/50 [00:29<01:47, 2.69s/trial, best loss: -0.8335214645673078] 22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.011880 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12804
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.184987
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
22%|██▏ | 11/50 [00:29<01:39, 2.54s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[29] training's binary_logloss: 0.115117 valid_1's binary_logloss: 0.136209
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.011556 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12838
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.196685
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
22%|██▏ | 11/50 [00:30<01:39, 2.54s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[27] training's binary_logloss: 0.115512 valid_1's binary_logloss: 0.138156
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008892 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12817
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.181760
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
22%|██▏ | 11/50 [00:31<01:39, 2.54s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[25] training's binary_logloss: 0.117743 valid_1's binary_logloss: 0.134897
22%|██▏ | 11/50 [00:32<01:39, 2.54s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
22%|██▏ | 11/50 [00:32<01:39, 2.54s/trial, best loss: -0.8345904314135609] 24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008049 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12804
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.184987
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[25] training's binary_logloss: 0.118268 valid_1's binary_logloss: 0.135684
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
24%|██▍ | 12/50 [00:32<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.006989 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12847
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 195
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.196685
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[22] training's binary_logloss: 0.118735 valid_1's binary_logloss: 0.137976
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.006796 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12817
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.181760
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[24] training's binary_logloss: 0.119628 valid_1's binary_logloss: 0.134916
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
24%|██▍ | 12/50 [00:33<01:38, 2.59s/trial, best loss: -0.8345904314135609] 26%|██▌ | 13/50 [00:33<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010210 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12804
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.184987
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[40] training's binary_logloss: 0.114319 valid_1's binary_logloss: 0.135995
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
26%|██▌ | 13/50 [00:34<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007309 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12838
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.196685
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[39] training's binary_logloss: 0.113925 valid_1's binary_logloss: 0.137893
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009469 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Total Bins 12817
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Start training from score -3.181760
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] Training until validation scores don't improve for 30 rounds
26%|██▌ | 13/50 [00:35<01:27, 2.36s/trial, best loss: -0.8345904314135609] Early stopping, best iteration is:
[33] training's binary_logloss: 0.117803 valid_1's binary_logloss: 0.135217
26%|██▌ | 13/50 [00:36<01:27, 2.36s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
26%|██▌ | 13/50 [00:36<01:27, 2.36s/trial, best loss: -0.8345904314135609] 28%|██▊ | 14/50 [00:36<01:26, 2.40s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
28%|██▊ | 14/50 [00:36<01:26, 2.40s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
28%|██▊ | 14/50 [00:36<01:26, 2.40s/trial, best loss: -0.8345904314135609] [LightGBM] [Warning] Unknown parameter: eval_metric
28%|██▊ | 14/50 [00:36<01:26, 2.40s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
28%|██▊ | 14/50 [00:36<01:26, 2.40s/trial, best loss: -0.8345904314135609] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007370 seconds.
You can set `force_row_wise=true` to remove the overhead.
[hyperopt trial log, trials 14/50 through 19/50 — repetitive per-fold LightGBM output condensed]

Each trial fits three CV folds on 40544 rows (positives 1611 / 1593 / 1616, i.e. ~3.9% positive rate; 192–205 used features; early stopping after 30 rounds). Every fold fit repeats the same two warnings:

[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30

Per-trial fold results (valid_1 binary_logloss, best iteration in brackets):

 28%|██▊ | 14/50 [00:36<01:26, 2.40s/trial, best loss: -0.8345904314135609]  0.135196 [99] / 0.137695 [97] / 0.134804 [100]
 30%|███ | 15/50 [00:39<01:33, 2.67s/trial, best loss: -0.8345904314135609]  0.136106 [28] / 0.138660 [22] / 0.135187 [23]
 32%|███▏ | 16/50 [00:42<01:33, 2.76s/trial, best loss: -0.8345904314135609]  0.135137 [84] / 0.138124 [67] / 0.134317 [84]
 34%|███▍ | 17/50 [00:45<01:32, 2.82s/trial, best loss: -0.8345904314135609]  0.134877 [60] / 0.137520 [58] / 0.134209 [67]
 36%|███▌ | 18/50 [00:47<01:25, 2.66s/trial, best loss: -0.8346199818249235]  0.135627 [20] / 0.138485 [24] / 0.135136 [19]
 38%|███▊ | 19/50 [00:49<01:14, 2.42s/trial, best loss: -0.8346199818249235]  0.135230 [64] / …
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007871 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12947
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.196685
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] Early stopping, best iteration is:
[61] training's binary_logloss: 0.112082 valid_1's binary_logloss: 0.13837
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
38%|███▊ | 19/50 [00:51<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008356 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12908
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 200
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.181760
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] Early stopping, best iteration is:
[60] training's binary_logloss: 0.113244 valid_1's binary_logloss: 0.134797
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
38%|███▊ | 19/50 [00:52<01:14, 2.42s/trial, best loss: -0.8346199818249235] 40%|████ | 20/50 [00:52<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:52<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
40%|████ | 20/50 [00:52<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:52<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
40%|████ | 20/50 [00:52<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007853 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12804
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.184987
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.13445 valid_1's binary_logloss: 0.139692
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008540 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12838
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.196685
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
40%|████ | 20/50 [00:53<01:17, 2.58s/trial, best loss: -0.8346199818249235] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.133592 valid_1's binary_logloss: 0.142113
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010222 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12817
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.181760
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
40%|████ | 20/50 [00:54<01:17, 2.58s/trial, best loss: -0.8346199818249235] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.135205 valid_1's binary_logloss: 0.138742
40%|████ | 20/50 [00:55<01:17, 2.58s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
40%|████ | 20/50 [00:55<01:17, 2.58s/trial, best loss: -0.8346199818249235] 42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008607 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12804
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.184987
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] Early stopping, best iteration is:
[52] training's binary_logloss: 0.118045 valid_1's binary_logloss: 0.134995
42%|████▏ | 21/50 [00:55<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009509 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12913
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 199
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.196685
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] Early stopping, best iteration is:
[49] training's binary_logloss: 0.118325 valid_1's binary_logloss: 0.137852
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
42%|████▏ | 21/50 [00:56<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007213 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12879
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.181760
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] Early stopping, best iteration is:
[54] training's binary_logloss: 0.118203 valid_1's binary_logloss: 0.134179
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
42%|████▏ | 21/50 [00:57<01:13, 2.55s/trial, best loss: -0.8346199818249235] 44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009051 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12804
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.184987
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
44%|████▍ | 22/50 [00:57<01:09, 2.47s/trial, best loss: -0.8346199818249235] Early stopping, best iteration is:
[19] training's binary_logloss: 0.118534 valid_1's binary_logloss: 0.136106
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007194 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12913
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 199
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.196685
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] Early stopping, best iteration is:
[15] training's binary_logloss: 0.120171 valid_1's binary_logloss: 0.137868
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007805 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12879
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.181760
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
44%|████▍ | 22/50 [00:58<01:09, 2.47s/trial, best loss: -0.8346199818249235] Early stopping, best iteration is:
[20] training's binary_logloss: 0.118138 valid_1's binary_logloss: 0.13535
44%|████▍ | 22/50 [00:59<01:09, 2.47s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
44%|████▍ | 22/50 [00:59<01:09, 2.47s/trial, best loss: -0.8346199818249235] 46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008298 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Total Bins 12804
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Start training from score -3.184987
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] Training until validation scores don't improve for 30 rounds
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.123442 valid_1's binary_logloss: 0.135652
46%|████▌ | 23/50 [00:59<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
46%|████▌ | 23/50 [01:00<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
46%|████▌ | 23/50 [01:00<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
46%|████▌ | 23/50 [01:00<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Warning] Unknown parameter: eval_metric
46%|████▌ | 23/50 [01:00<01:00, 2.23s/trial, best loss: -0.8346199818249235] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[hyperopt progress 46% → 56%, trials 23/50 → 28/50, elapsed 01:00 → 01:14; per-fold LightGBM output below is deduplicated — the following lines were emitted before every fold]

[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192-199
[LightGBM] [Info] Number of positive: 1611 / 1593 / 1616 per fold, number of negative: 38933 / 38951 / 38928
[LightGBM] [Info] Total Bins ~12800-12900 per fold; [binary:BoostFromScore]: pavg 0.0393-0.0399 -> initscore -3.197 to -3.182
[LightGBM] [Info] Auto-choosing row-wise (occasionally col-wise) multi-threading, overhead of testing 0.007-0.018 s; set `force_row_wise=true` (or `force_col_wise=true` if memory is tight) to remove the overhead
Training until validation scores don't improve for 30 rounds

23/50 (best loss: -0.8346199818249235), continued:
  fold: did not meet early stopping; best iteration [100], training binary_logloss 0.122527, valid_1 0.138185
  fold: did not meet early stopping; best iteration [100], training binary_logloss 0.123812, valid_1 0.134820
24/50 (best loss: -0.8346199818249235; improves to -0.8353293081416346 after these folds complete):
  fold: early stopping at [55], training binary_logloss 0.118913, valid_1 0.134591
  fold: early stopping at [51], training binary_logloss 0.119109, valid_1 0.137532
  fold: early stopping at [53], training binary_logloss 0.119682, valid_1 0.134044
25/50 (best loss: -0.8353293081416346):
  fold: early stopping at [41], training binary_logloss 0.116789, valid_1 0.135098
  fold: early stopping at [39], training binary_logloss 0.116539, valid_1 0.138054
  fold: early stopping at [39], training binary_logloss 0.117685, valid_1 0.134656
26/50 (best loss: -0.8353293081416346):
  fold: did not meet early stopping; best iteration [100], training binary_logloss 0.131216, valid_1 0.139484
  fold: did not meet early stopping; best iteration [100], training binary_logloss 0.130191, valid_1 0.141574
  fold: did not meet early stopping; best iteration [100], training binary_logloss 0.131799, valid_1 0.138351
27/50 (best loss: -0.8353293081416346):
  fold: early stopping at [60], training binary_logloss 0.119196, valid_1 0.134697
  fold: early stopping at [60], training binary_logloss 0.118245, valid_1 0.137653
  fold: early stopping at [58], training binary_logloss 0.119820, valid_1 0.134115
28/50 (best loss: -0.8353293081416346), truncated:
  fold: early stopping at [55], training binary_logloss 0.118255, valid_1 0.135230
  fold: early stopping at [58], training binary_logloss 0.116637, valid_1 0.137971
56%|█████▌ | 28/50 [01:14<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
56%|█████▌ | 28/50 [01:14<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.011744 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12817
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.181760
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] Early stopping, best iteration is:
[57] training's binary_logloss: 0.118083 valid_1's binary_logloss: 0.134172
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
56%|█████▌ | 28/50 [01:15<00:55, 2.51s/trial, best loss: -0.8353293081416346] 58%|█████▊ | 29/50 [01:15<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009021 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12804
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.184987
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] Early stopping, best iteration is:
[69] training's binary_logloss: 0.118755 valid_1's binary_logloss: 0.134976
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
58%|█████▊ | 29/50 [01:16<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.013066 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12913
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 199
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.196685
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] Early stopping, best iteration is:
[66] training's binary_logloss: 0.118516 valid_1's binary_logloss: 0.13759
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:17<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.011270 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12879
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.181760
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] Early stopping, best iteration is:
[69] training's binary_logloss: 0.119213 valid_1's binary_logloss: 0.134123
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
58%|█████▊ | 29/50 [01:18<00:58, 2.81s/trial, best loss: -0.8353293081416346] 60%|██████ | 30/50 [01:18<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007448 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12804
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.184987
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.124451 valid_1's binary_logloss: 0.135306
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
60%|██████ | 30/50 [01:19<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008631 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12943
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.196685
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.123486 valid_1's binary_logloss: 0.137957
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007884 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12879
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
60%|██████ | 30/50 [01:20<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
60%|██████ | 30/50 [01:21<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.181760
60%|██████ | 30/50 [01:21<00:57, 2.88s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
60%|██████ | 30/50 [01:21<00:57, 2.88s/trial, best loss: -0.8353293081416346] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.124936 valid_1's binary_logloss: 0.13456
60%|██████ | 30/50 [01:21<00:57, 2.88s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
60%|██████ | 30/50 [01:21<00:57, 2.88s/trial, best loss: -0.8353293081416346] 62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.013677 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12896
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.184987
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
62%|██████▏ | 31/50 [01:21<00:53, 2.80s/trial, best loss: -0.8353293081416346] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.123182 valid_1's binary_logloss: 0.13554
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010535 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12947
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 203
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.196685
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
62%|██████▏ | 31/50 [01:22<00:53, 2.80s/trial, best loss: -0.8353293081416346] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.12218 valid_1's binary_logloss: 0.138117
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007774 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12908
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 200
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.181760
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
62%|██████▏ | 31/50 [01:23<00:53, 2.80s/trial, best loss: -0.8353293081416346] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.123535 valid_1's binary_logloss: 0.134833
62%|██████▏ | 31/50 [01:24<00:53, 2.80s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
62%|██████▏ | 31/50 [01:24<00:53, 2.80s/trial, best loss: -0.8353293081416346] 64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.016300 seconds.
You can set `force_col_wise=true` to remove the overhead.
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12804
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.184987
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] Early stopping, best iteration is:
[30] training's binary_logloss: 0.119038 valid_1's binary_logloss: 0.134688
64%|██████▍ | 32/50 [01:24<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008723 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12943
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.196685
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346] Early stopping, best iteration is:
[30] training's binary_logloss: 0.118066 valid_1's binary_logloss: 0.137695
64%|██████▍ | 32/50 [01:25<00:49, 2.77s/trial, best loss: -0.8353293081416346]
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.012208 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12879
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
[LightGBM] [Info] Start training from score -3.181760
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[29] training's binary_logloss: 0.120385 valid_1's binary_logloss: 0.134541
66%|██████▌ | 33/50 [01:26<00:43, 2.56s/trial, best loss: -0.8353293081416346]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007626 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12804
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[42] training's binary_logloss: 0.113008 valid_1's binary_logloss: 0.135691
[LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.012098 seconds.
[LightGBM] [Info] Total Bins 12903
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
[LightGBM] [Info] Start training from score -3.196685
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[31] training's binary_logloss: 0.117041 valid_1's binary_logloss: 0.138101
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009215 seconds.
[LightGBM] [Info] Total Bins 12879
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
[LightGBM] [Info] Start training from score -3.181760
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[42] training's binary_logloss: 0.113562 valid_1's binary_logloss: 0.134568
68%|██████▊ | 34/50 [01:28<00:40, 2.53s/trial, best loss: -0.8353293081416346]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008779 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12804
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.121864 valid_1's binary_logloss: 0.135488
[LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007069 seconds.
[LightGBM] [Info] Total Bins 12903
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
[LightGBM] [Info] Start training from score -3.196685
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.120934 valid_1's binary_logloss: 0.138049
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.012046 seconds.
[LightGBM] [Info] Total Bins 12879
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
[LightGBM] [Info] Start training from score -3.181760
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.122219 valid_1's binary_logloss: 0.134581
70%|███████ | 35/50 [01:31<00:38, 2.60s/trial, best loss: -0.8353293081416346]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007576 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12812
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 194
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.129099 valid_1's binary_logloss: 0.137718
[LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007789 seconds.
[LightGBM] [Info] Total Bins 12943
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 202
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
[LightGBM] [Info] Start training from score -3.196685
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.128316 valid_1's binary_logloss: 0.140125
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010383 seconds.
[LightGBM] [Info] Total Bins 12879
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
[LightGBM] [Info] Start training from score -3.181760
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.129766 valid_1's binary_logloss: 0.136703
72%|███████▏ | 36/50 [01:34<00:37, 2.71s/trial, best loss: -0.8353293081416346]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009033 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12804
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[29] training's binary_logloss: 0.120857 valid_1's binary_logloss: 0.135392
[LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007741 seconds.
[LightGBM] [Info] Total Bins 12903
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 197
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
[LightGBM] [Info] Start training from score -3.196685
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[31] training's binary_logloss: 0.118985 valid_1's binary_logloss: 0.137371
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010013 seconds.
[LightGBM] [Info] Total Bins 12817
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
[LightGBM] [Info] Start training from score -3.181760
Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[30] training's binary_logloss: 0.120758 valid_1's binary_logloss: 0.134185
74%|███████▍ | 37/50 [01:36<00:33, 2.54s/trial, best loss: -0.8353293081416346]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008582 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12804
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
[LightGBM] [Info] Start training from score -3.184987
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.122004 valid_1's binary_logloss: 0.134892
[LightGBM] [Info] Number of positive: 1593, number of negative: 38951
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007974 seconds.
[LightGBM] [Info] Total Bins 12838
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
[LightGBM] [Info] Start training from score -3.196685
Training until validation scores don't improve for 30 rounds
Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.120947 valid_1's binary_logloss: 0.137433
[LightGBM] [Info] Number of positive: 1616, number of negative: 38928
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008872 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Total Bins 12817
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] [LightGBM] [Info] Start training from score -3.181760
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] Training until validation scores don't improve for 30 rounds
74%|███████▍ | 37/50 [01:38<00:33, 2.54s/trial, best loss: -0.8353293081416346] Did not meet early stopping. Best iteration is:
[100] training's binary_logloss: 0.122411 valid_1's binary_logloss: 0.134261
74%|███████▍ | 37/50 [01:39<00:33, 2.54s/trial, best loss: -0.8353293081416346] [LightGBM] [Warning] Unknown parameter: eval_metric
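The `BoostFromScore` lines in the log are simply the log-odds of each fold's positive rate: LightGBM's binary objective starts boosting from this constant rather than from zero. A quick check in plain Python (the fold counts are taken from the log above; the helper name is mine):

```python
import math

def init_score(n_pos: int, n_neg: int) -> float:
    """Log-odds of the positive rate; LightGBM's binary objective
    starts boosting from this constant (the BoostFromScore line)."""
    pavg = n_pos / (n_pos + n_neg)        # e.g. 1593 / 40544 ≈ 0.039291
    return math.log(pavg / (1.0 - pavg))

# Fold counts from the log reproduce the logged initscore values:
print(init_score(1593, 38951))  # ≈ -3.196685
print(init_score(1616, 38928))  # ≈ -3.181760
```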
 76%|███████▌  | 38/50 [01:39<00:31, 2.62s/trial, best loss: -0.835331797276512]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933
[LightGBM] [Info] Total Bins 12804
[LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
Early stopping, best iteration is:
[15]	training's binary_logloss: 0.120015	valid_1's binary_logloss: 0.136651
Early stopping, best iteration is:
[15]	training's binary_logloss: 0.118939	valid_1's binary_logloss: 0.138894
Early stopping, best iteration is:
[12]	training's binary_logloss: 0.122953	valid_1's binary_logloss: 0.134958
 78%|███████▊  | 39/50 [01:41<00:26, 2.39s/trial, best loss: -0.835331797276512]
Did not meet early stopping. Best iteration is:
[88]	training's binary_logloss: 0.117929	valid_1's binary_logloss: 0.135205
Did not meet early stopping. Best iteration is:
[81]	training's binary_logloss: 0.11844	valid_1's binary_logloss: 0.137484
Did not meet early stopping. Best iteration is:
[85]	training's binary_logloss: 0.118939	valid_1's binary_logloss: 0.134806
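The two warnings repeated on every fold are configuration issues, not training failures. `eval_metric` is an argument of the scikit-learn interface's `fit()` method, not a core LightGBM parameter, so LightGBM ignores it when it appears in the parameter dict; and `early_stopping_rounds` is an alias of `early_stopping_round`, so passing both makes LightGBM keep the canonical spelling and discard the alias. A sketch of a parameter dict that would avoid both warnings (assumed; the actual training code is not shown in this log):

```python
params = {
    "objective": "binary",
    "metric": "binary_logloss",   # core-API name; "eval_metric" is unknown here
    "early_stopping_round": 30,   # canonical name; do not also pass the alias
}
```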
 80%|████████  | 40/50 [01:44<00:25, 2.54s/trial, best loss: -0.835331797276512]
Early stopping, best iteration is:
[50]	training's binary_logloss: 0.119598	valid_1's binary_logloss: 0.134559
Early stopping, best iteration is:
[51]	training's binary_logloss: 0.118541	valid_1's binary_logloss: 0.137523
Early stopping, best iteration is:
[48]	training's binary_logloss: 0.120332	valid_1's binary_logloss: 0.134165
 82%|████████▏ | 41/50 [01:47<00:24, 2.72s/trial, best loss: -0.8357102168343064]
Early stopping, best iteration is:
[33]	training's binary_logloss: 0.119524	valid_1's binary_logloss: 0.135441
Early stopping, best iteration is:
[28]	training's binary_logloss: 0.120257	valid_1's binary_logloss: 0.137469
Early stopping, best iteration is:
[29]	training's binary_logloss: 0.121158	valid_1's binary_logloss: 0.134384
 84%|████████▍ | 42/50 [01:49<00:19, 2.47s/trial, best loss: -0.8357102168343064]
Early stopping, best iteration is:
[30]	training's binary_logloss: 0.117395	valid_1's binary_logloss: 0.136137
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.012995 seconds.
You can set `force_col_wise=true` to remove the overhead.
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12838
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.196685
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[33] training's binary_logloss: 0.115053 valid_1's binary_logloss: 0.138202
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:50<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008256 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12817
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.181760
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[40] training's binary_logloss: 0.112815 valid_1's binary_logloss: 0.134646
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
84%|████████▍ | 42/50 [01:51<00:19, 2.47s/trial, best loss: -0.8357102168343064] 86%|████████▌ | 43/50 [01:51<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:51<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
86%|████████▌ | 43/50 [01:51<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.006767 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12804
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.184987
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[43] training's binary_logloss: 0.116209 valid_1's binary_logloss: 0.135515
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007724 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12838
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
86%|████████▌ | 43/50 [01:52<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.196685
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[41] training's binary_logloss: 0.116373 valid_1's binary_logloss: 0.13768
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.015228 seconds.
You can set `force_col_wise=true` to remove the overhead.
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12817
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.181760
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
86%|████████▌ | 43/50 [01:53<00:17, 2.49s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[38] training's binary_logloss: 0.118563 valid_1's binary_logloss: 0.13498
86%|████████▌ | 43/50 [01:54<00:17, 2.49s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
86%|████████▌ | 43/50 [01:54<00:17, 2.49s/trial, best loss: -0.8357102168343064] 88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008542 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12804
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.184987
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[25] training's binary_logloss: 0.120117 valid_1's binary_logloss: 0.134789
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
88%|████████▊ | 44/50 [01:54<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008040 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12838
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.196685
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[25] training's binary_logloss: 0.119394 valid_1's binary_logloss: 0.137658
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009460 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12817
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.181760
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
88%|████████▊ | 44/50 [01:55<00:14, 2.47s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[37] training's binary_logloss: 0.114421 valid_1's binary_logloss: 0.134479
88%|████████▊ | 44/50 [01:56<00:14, 2.47s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
88%|████████▊ | 44/50 [01:56<00:14, 2.47s/trial, best loss: -0.8357102168343064] 90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.010005 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12804
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.184987
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[29] training's binary_logloss: 0.115234 valid_1's binary_logloss: 0.135872
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:56<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1593, number of negative: 38951
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.008841 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12838
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039291 -> initscore=-3.196685
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.196685
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[27] training's binary_logloss: 0.115194 valid_1's binary_logloss: 0.138408
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1616, number of negative: 38928
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing col-wise multi-threading, the overhead of testing was 0.010236 seconds.
You can set `force_col_wise=true` to remove the overhead.
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12817
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
90%|█████████ | 45/50 [01:57<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039858 -> initscore=-3.181760
90%|█████████ | 45/50 [01:58<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.181760
90%|█████████ | 45/50 [01:58<00:11, 2.31s/trial, best loss: -0.8357102168343064] Training until validation scores don't improve for 30 rounds
90%|█████████ | 45/50 [01:58<00:11, 2.31s/trial, best loss: -0.8357102168343064] Early stopping, best iteration is:
[26] training's binary_logloss: 0.116992 valid_1's binary_logloss: 0.135531
90%|█████████ | 45/50 [01:58<00:11, 2.31s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
90%|█████████ | 45/50 [01:58<00:11, 2.31s/trial, best loss: -0.8357102168343064] 92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of positive: 1611, number of negative: 38933
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.007834 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Total Bins 12804
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Number of data points in the train set: 40544, number of used features: 192
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] Unknown parameter: eval_metric
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.039735 -> initscore=-3.184987
92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064] [LightGBM] [Info] Start training from score -3.184987
 92%|█████████▏| 46/50 [01:58<00:09, 2.32s/trial, best loss: -0.8357102168343064]
[LightGBM] [Warning] Unknown parameter: eval_metric
[LightGBM] [Warning] early_stopping_round is set=30, early_stopping_rounds=30 will be ignored. Current value: early_stopping_round=30
[LightGBM] [Info] Auto-choosing row-wise multi-threading, the overhead of testing was 0.009212 seconds.
You can set `force_row_wise=true` to remove the overhead.
And if memory is not enough, you can set `force_col_wise=true`.
(identical [LightGBM] warnings and multi-threading notices repeat before every fold; repeats omitted below)

Training until validation scores don't improve for 30 rounds
Early stopping, best iteration is:
[20]	training's binary_logloss: 0.115923	valid_1's binary_logloss: 0.13639

[LightGBM] [Info] Number of positive: 1593, number of negative: 38951; 40544 rows, 192 features, 12838 bins; initscore=-3.196685
Early stopping, best iteration is:
[17]	training's binary_logloss: 0.117019	valid_1's binary_logloss: 0.138229

[LightGBM] [Info] Number of positive: 1616, number of negative: 38928; 40544 rows, 192 features, 12817 bins; initscore=-3.181760
Early stopping, best iteration is:
[18]	training's binary_logloss: 0.117591	valid_1's binary_logloss: 0.135204

 94%|█████████▍| 47/50 [02:00<00:06, 2.22s/trial, best loss: -0.8357102168343064]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933; 40544 rows, 192 features, 12804 bins; initscore=-3.184987
Early stopping, best iteration is:
[27]	training's binary_logloss: 0.117985	valid_1's binary_logloss: 0.135367

[LightGBM] [Info] Number of positive: 1593, number of negative: 38951; 40544 rows, 192 features, 12838 bins; initscore=-3.196685
Early stopping, best iteration is:
[25]	training's binary_logloss: 0.117671	valid_1's binary_logloss: 0.137665

[LightGBM] [Info] Number of positive: 1616, number of negative: 38928; 40544 rows, 192 features, 12817 bins; initscore=-3.181760
Early stopping, best iteration is:
[23]	training's binary_logloss: 0.120142	valid_1's binary_logloss: 0.135155

 96%|█████████▌| 48/50 [02:02<00:04, 2.25s/trial, best loss: -0.8357102168343064]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933; 40544 rows, 192 features, 12804 bins; initscore=-3.184987
Early stopping, best iteration is:
[69]	training's binary_logloss: 0.117534	valid_1's binary_logloss: 0.134864

[LightGBM] [Info] Number of positive: 1593, number of negative: 38951; 40544 rows, 192 features, 12838 bins; initscore=-3.196685
Did not meet early stopping. Best iteration is:
[82]	training's binary_logloss: 0.114016	valid_1's binary_logloss: 0.137702

[LightGBM] [Info] Number of positive: 1616, number of negative: 38928; 40544 rows, 192 features, 12817 bins; initscore=-3.181760
Did not meet early stopping. Best iteration is:
[75]	training's binary_logloss: 0.116413	valid_1's binary_logloss: 0.134882

 98%|█████████▊| 49/50 [02:05<00:02, 2.49s/trial, best loss: -0.8357102168343064]
[LightGBM] [Info] Number of positive: 1611, number of negative: 38933; 40544 rows, 192 features, 12804 bins; initscore=-3.184987
Did not meet early stopping. Best iteration is:
[99]	training's binary_logloss: 0.115727	valid_1's binary_logloss: 0.135247

[LightGBM] [Info] Number of positive: 1593, number of negative: 38951; 40544 rows, 195 features, 12847 bins; initscore=-3.196685
Did not meet early stopping. Best iteration is:
[100]	training's binary_logloss: 0.11494	valid_1's binary_logloss: 0.137861

[LightGBM] [Info] Number of positive: 1616, number of negative: 38928; 40544 rows, 192 features, 12817 bins; initscore=-3.181760
Did not meet early stopping. Best iteration is:
[99]	training's binary_logloss: 0.116161	valid_1's binary_logloss: 0.134483

100%|██████████| 50/50 [02:08<00:00, 2.58s/trial, best loss: -0.8357102168343064]
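The repeated `Unknown parameter: eval_metric` and `early_stopping_round`/`early_stopping_rounds` warnings in the log suggest that fit-time arguments of the sklearn wrapper (and a duplicate parameter alias) are being passed inside the LightGBM params dict. A minimal sketch of one way to clean the kwargs before constructing and fitting the model — the key sets and the "keep first alias" policy here are assumptions based on the warnings, not a definitive list:

```python
# The warnings point at two issues with the kwargs passed to LightGBM:
#   1. `eval_metric` is a fit()-time argument of the sklearn wrapper
#      (LGBMClassifier.fit), not a core LightGBM parameter, so the core
#      library reports it as "Unknown parameter".
#   2. Both `early_stopping_round` and its alias `early_stopping_rounds`
#      are set, so LightGBM ignores one of them.

FIT_ONLY = {"eval_metric", "early_stopping_rounds"}  # assumed fit()-time keys
ALIASES = {"early_stopping_round": "early_stopping_rounds"}  # alias pairs to merge

def split_lgbm_kwargs(raw):
    """Split a mixed kwargs dict into (model params, fit kwargs),
    collapsing alias duplicates so each setting is passed exactly once."""
    cleaned = {}
    for key, value in raw.items():
        canonical = ALIASES.get(key, key)
        cleaned.setdefault(canonical, value)  # keep first occurrence of an alias pair
    model_params = {k: v for k, v in cleaned.items() if k not in FIT_ONLY}
    fit_kwargs = {k: v for k, v in cleaned.items() if k in FIT_ONLY}
    return model_params, fit_kwargs

raw = {"num_leaves": 33, "eval_metric": "auc",
       "early_stopping_round": 30, "early_stopping_rounds": 30}
model_params, fit_kwargs = split_lgbm_kwargs(raw)
# model_params go to LGBMClassifier(**model_params),
# fit_kwargs go to .fit(X, y, eval_set=..., **fit_kwargs)
```

Note that recent LightGBM versions moved early stopping from a `fit()` argument to `lightgbm.early_stopping(...)` callbacks, so on those versions `early_stopping_rounds` would be dropped entirely rather than forwarded.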
{'learning_rate': 0.07078424888661622, 'max_depth': 143.0, 'min_child_samples': 93.0, 'num_leaves': 33.0, 'subsample': 0.9935662058378432}
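Assuming this final dict is hyperopt's best-trial output (consistent with the `trial/s` progress bars and the negative `best loss`, which suggests the objective returns a negated score because hyperopt minimizes), integer-valued parameters come back as floats from `hp.quniform`-style spaces and need casting before refitting a final model. A minimal sketch; the set of integer keys is an assumption read off the dict above:

```python
# hyperopt returns float-encoded values even for integer search dimensions,
# so max_depth=143.0, min_child_samples=93.0, num_leaves=33.0 above must be
# cast back to int before being passed to LightGBM.
best = {'learning_rate': 0.07078424888661622, 'max_depth': 143.0,
        'min_child_samples': 93.0, 'num_leaves': 33.0,
        'subsample': 0.9935662058378432}

INT_KEYS = {'max_depth', 'min_child_samples', 'num_leaves'}  # assumed integer keys

def cast_int_params(raw, int_keys=INT_KEYS):
    """Cast float-encoded integer hyperparameters back to int; leave the rest."""
    return {k: int(v) if k in int_keys else v for k, v in raw.items()}

final_params = cast_int_params(best)
# final_params now has max_depth=143 (int) while learning_rate and
# subsample keep their float values, ready for LGBMClassifier(**final_params)
```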