CRAN Package Check Results for Maintainer ‘Lorenz A. Kapsner <lorenz.kapsner at gmail.com>’

Last updated on 2025-12-24 09:50:21 CET.

Package           ERROR  OK
autonewsmd               13
BiasCorrector            13
DQAgui                   13
DQAstats                 13
kdry                     13
mlexperiments         4   9
mllrnrs                  13
mlsurvlrnrs              13
rBiasCorrection          13
sjtable2df               13

Package autonewsmd

Current CRAN status: OK: 13

Package BiasCorrector

Current CRAN status: OK: 13

Package DQAgui

Current CRAN status: OK: 13

Package DQAstats

Current CRAN status: OK: 13

Package kdry

Current CRAN status: OK: 13

Package mlexperiments

Current CRAN status: ERROR: 4, OK: 9

Version: 0.0.8
Check: examples
Result: ERROR
Running examples in ‘mlexperiments-Ex.R’ failed
The error most likely occurred in:

> base::assign(".ptime", proc.time(), pos = "CheckExEnv")
> ### Name: performance
> ### Title: performance
> ### Aliases: performance
>
> ### ** Examples
>
> dataset <- do.call(
+ cbind,
+ c(sapply(paste0("col", 1:6), function(x) {
+ rnorm(n = 500)
+ },
+ USE.NAMES = TRUE,
+ simplify = FALSE
+ ),
+ list(target = sample(0:1, 500, TRUE))
+ ))
>
> fold_list <- splitTools::create_folds(
+ y = dataset[, 7],
+ k = 3,
+ type = "stratified",
+ seed = 123
+ )
>
> glm_optimization <- mlexperiments::MLCrossValidation$new(
+ learner = LearnerGlm$new(),
+ fold_list = fold_list,
+ seed = 123
+ )
>
> glm_optimization$learner_args <- list(family = binomial(link = "logit"))
> glm_optimization$predict_args <- list(type = "response")
> glm_optimization$performance_metric_args <- list(
+ positive = "1",
+ negative = "0"
+ )
> glm_optimization$performance_metric <- list(
+ auc = metric("AUC"), sensitivity = metric("TPR"),
+ specificity = metric("TNR")
+ )
> glm_optimization$return_models <- TRUE
>
> # set data
> glm_optimization$set_data(
+ x = data.matrix(dataset[, -7]),
+ y = dataset[, 7]
+ )
>
> cv_results <- glm_optimization$execute()
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
>
> # predictions
> preds <- mlexperiments::predictions(
+ object = glm_optimization,
+ newdata = data.matrix(dataset[, -7]),
+ na.rm = FALSE,
+ ncores = 2L,
+ type = "response"
+ )
Error in `[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),  :
  attempt access index 3/3 in VECTOR_ELT
Calls: <Anonymous> -> [ -> [.data.table
Execution halted
Flavors: r-devel-linux-x86_64-debian-clang, r-devel-linux-x86_64-debian-gcc
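The failing `[.data.table` call is the row-wise aggregation that predictions() performs over the fold columns of 'res' (mean and sd via ':=' on '.SD' with 'by = seq_len(nrow(res))'). Below is a minimal sketch of the same per-row mean/sd computation done on a plain matrix, which avoids grouping by row; the table 'res' and its column names are hypothetical stand-ins, not the package's internal object.

    library(data.table)

    ## Hypothetical stand-in for the per-fold prediction table referenced
    ## in the error message above: one column per CV fold.
    set.seed(123)
    res <- data.table(Fold1 = rnorm(5), Fold2 = rnorm(5), Fold3 = rnorm(5))

    ## Per-row mean and sd computed on a plain matrix and assigned back by
    ## reference, instead of ':=' over '.SD' with 'by = seq_len(nrow(res))'.
    pred_mat <- as.matrix(res)
    res[, mean := rowMeans(pred_mat, na.rm = FALSE)]
    res[, sd := apply(pred_mat, 1, stats::sd, na.rm = FALSE)]

    print(res)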

Version: 0.0.8
Check: tests
Result: ERROR Running ‘testthat.R’ [182s/468s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mlexperiments) > > test_check("mlexperiments") CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold4 CV fold: Fold5 Testing for identical folds in 2 and 1. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. Saving _problems/test-glm_predictions-79.R CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerLm'. Saving _problems/test-glm_predictions-188.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 25.821 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.966 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 27.299 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.047 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. Registering parallel backend using 2 cores. Running initial scoring function 4 times in 2 thread(s)... 12.549 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.107 seconds 3) Running FUN 2 times in 2 thread(s)... 
4.992 seconds CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 15.024 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.182 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 15.514 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.76 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 14.044 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.209 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 25.531 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.07 seconds 3) Running FUN 2 times in 2 thread(s)... 3.963 seconds Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 12.95 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.055 seconds 3) Running FUN 2 times in 2 thread(s)... 2.257 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 13.459 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.165 seconds 3) Running FUN 2 times in 2 thread(s)... 2.238 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... 
reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 14.172 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.91 seconds 3) Running FUN 2 times in 2 thread(s)... 2.592 seconds CV fold: Fold1 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold2 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold3 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 6.354 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.301 seconds 3) Running FUN 2 times in 2 thread(s)... 0.677 seconds Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 5.334 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.997 seconds 3) Running FUN 2 times in 2 thread(s)... 0.352 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.992 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.189 seconds 3) Running FUN 2 times in 2 thread(s)... 0.472 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 6.47 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.051 seconds 3) Running FUN 2 times in 2 thread(s)... 0.559 seconds CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. 
Regression: using 'mean squared error' as optimization metric.

[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]

══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
    by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
    ▆
 1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
 2. ├─...[]
 3. └─data.table:::`[.data.table`(...)
── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
    by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
    ▆
 1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
 2. ├─...[]
 3. └─data.table:::`[.data.table`(...)

[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-debian-clang
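Both failures come from 'test-glm_predictions.R' and hit the same `[.data.table` call as the example failure. A sketch for rerunning just that test file from a local checkout of the package sources, with the same thread limits that tests/testthat.R sets, might look as follows (assumes testthat is installed and the working directory is the package root; 'filter' is simply a convenience to restrict the run to the failing file):

    ## Same limits as set in tests/testthat.R
    Sys.setenv(OMP_THREAD_LIMIT = 2, Ncpu = 2)

    ## Run only the failing test file; 'filter' is passed through to
    ## testthat::test_dir().
    testthat::test_local(path = ".", filter = "glm_predictions")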

Version: 0.0.8
Check: tests
Result: ERROR Running ‘testthat.R’ [132s/384s] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mlexperiments) > > test_check("mlexperiments") CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold4 CV fold: Fold5 Testing for identical folds in 2 and 1. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. Saving _problems/test-glm_predictions-79.R CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerLm'. Saving _problems/test-glm_predictions-188.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 26.929 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.587 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 29.539 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.687 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. Registering parallel backend using 2 cores. Running initial scoring function 4 times in 2 thread(s)... 13.247 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.905 seconds 3) Running FUN 2 times in 2 thread(s)... 
4.051 seconds CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 10.801 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.858 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 13.759 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.526 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 9.464 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.566 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 22.365 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.625 seconds 3) Running FUN 2 times in 2 thread(s)... 2.873 seconds Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 12.701 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.766 seconds 3) Running FUN 2 times in 2 thread(s)... 2.5 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 12.272 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.907 seconds 3) Running FUN 2 times in 2 thread(s)... 3.209 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... 
reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 11.953 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.891 seconds 3) Running FUN 2 times in 2 thread(s)... 2.999 seconds CV fold: Fold1 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold2 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold3 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.45 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.573 seconds 3) Running FUN 2 times in 2 thread(s)... 0.702 seconds Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 3.248 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.665 seconds 3) Running FUN 2 times in 2 thread(s)... 0.598 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.349 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.996 seconds 3) Running FUN 2 times in 2 thread(s)... 0.588 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 4.035 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 0.744 seconds 3) Running FUN 2 times in 2 thread(s)... 0.365 seconds CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. 
Regression: using 'mean squared error' as optimization metric.

[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]

══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
    by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
    ▆
 1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
 2. ├─...[]
 3. └─data.table:::`[.data.table`(...)
── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
    by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
    ▆
 1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
 2. ├─...[]
 3. └─data.table:::`[.data.table`(...)

[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-debian-gcc

Version: 0.0.8
Check: examples
Result: ERROR
Running examples in ‘mlexperiments-Ex.R’ failed
The error most likely occurred in:

> ### Name: performance
> ### Title: performance
> ### Aliases: performance
>
> ### ** Examples
>
> dataset <- do.call(
+ cbind,
+ c(sapply(paste0("col", 1:6), function(x) {
+ rnorm(n = 500)
+ },
+ USE.NAMES = TRUE,
+ simplify = FALSE
+ ),
+ list(target = sample(0:1, 500, TRUE))
+ ))
>
> fold_list <- splitTools::create_folds(
+ y = dataset[, 7],
+ k = 3,
+ type = "stratified",
+ seed = 123
+ )
>
> glm_optimization <- mlexperiments::MLCrossValidation$new(
+ learner = LearnerGlm$new(),
+ fold_list = fold_list,
+ seed = 123
+ )
>
> glm_optimization$learner_args <- list(family = binomial(link = "logit"))
> glm_optimization$predict_args <- list(type = "response")
> glm_optimization$performance_metric_args <- list(
+ positive = "1",
+ negative = "0"
+ )
> glm_optimization$performance_metric <- list(
+ auc = metric("AUC"), sensitivity = metric("TPR"),
+ specificity = metric("TNR")
+ )
> glm_optimization$return_models <- TRUE
>
> # set data
> glm_optimization$set_data(
+ x = data.matrix(dataset[, -7]),
+ y = dataset[, 7]
+ )
>
> cv_results <- glm_optimization$execute()
CV fold: Fold1
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold2
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
CV fold: Fold3
Parameter 'ncores' is ignored for learner 'LearnerGlm'.
>
> # predictions
> preds <- mlexperiments::predictions(
+ object = glm_optimization,
+ newdata = data.matrix(dataset[, -7]),
+ na.rm = FALSE,
+ ncores = 2L,
+ type = "response"
+ )
Error in `[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),  :
  attempt access index 3/3 in VECTOR_ELT
Calls: <Anonymous> -> [ -> [.data.table
Execution halted
Flavors: r-devel-linux-x86_64-fedora-clang, r-devel-linux-x86_64-fedora-gcc
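Because the failure also surfaces through the shipped help-page example, the example for the 'performance' topic can be rerun in isolation once the package is installed locally; a minimal sketch:

    ## Rerun the failing Rd example locally
    ## (assumes mlexperiments and its dependencies are installed).
    utils::example(topic = "performance", package = "mlexperiments")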

Version: 0.0.8
Check: tests
Result: ERROR Running ‘testthat.R’ [5m/15m] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mlexperiments) > > test_check("mlexperiments") CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold4 CV fold: Fold5 Testing for identical folds in 2 and 1. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. Saving _problems/test-glm_predictions-79.R CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerLm'. Saving _problems/test-glm_predictions-188.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 75.409 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.954 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 77.037 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.66 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. Registering parallel backend using 2 cores. Running initial scoring function 4 times in 2 thread(s)... 29.606 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.937 seconds 3) Running FUN 2 times in 2 thread(s)... 
10.967 seconds CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 34.584 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.896 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 23.424 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.281 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 35.142 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.429 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 40.728 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.139 seconds 3) Running FUN 2 times in 2 thread(s)... 7.196 seconds Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 22.844 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.087 seconds 3) Running FUN 2 times in 2 thread(s)... 4.535 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 24.945 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.651 seconds 3) Running FUN 2 times in 2 thread(s)... 3.887 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... 
reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 23.217 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.89 seconds 3) Running FUN 2 times in 2 thread(s)... 3.093 seconds CV fold: Fold1 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold2 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold3 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 7.897 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.636 seconds 3) Running FUN 2 times in 2 thread(s)... 0.767 seconds Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 8.308 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.338 seconds 3) Running FUN 2 times in 2 thread(s)... 0.682 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 10.865 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.748 seconds 3) Running FUN 2 times in 2 thread(s)... 0.75 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 9.606 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.309 seconds 3) Running FUN 2 times in 2 thread(s)... 1.017 seconds CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. 
Regression: using 'mean squared error' as optimization metric.

[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]

══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
    by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
    ▆
 1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
 2. ├─...[]
 3. └─data.table:::`[.data.table`(...)
── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
    by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
    ▆
 1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
 2. ├─...[]
 3. └─data.table:::`[.data.table`(...)

[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-clang

Version: 0.0.8
Check: tests
Result: ERROR Running ‘testthat.R’ [6m/18m] Running the tests in ‘tests/testthat.R’ failed. Complete output: > # This file is part of the standard setup for testthat. > # It is recommended that you do not modify it. > # > # Where should you do additional test configuration? > # Learn more about the roles of various files in: > # * https://r-pkgs.org/tests.html > # * https://testthat.r-lib.org/reference/test_package.html#special-files > > Sys.setenv("OMP_THREAD_LIMIT" = 2) > Sys.setenv("Ncpu" = 2) > > library(testthat) > library(mlexperiments) > > test_check("mlexperiments") CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold4 CV fold: Fold5 Testing for identical folds in 2 and 1. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerGlm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerGlm'. Saving _problems/test-glm_predictions-79.R CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold4 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold5 Parameter 'ncores' is ignored for learner 'LearnerLm'. Saving _problems/test-glm_predictions-188.R CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 79.505 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.474 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 101.568 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.043 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. Registering parallel backend using 2 cores. Running initial scoring function 4 times in 2 thread(s)... 31.072 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.553 seconds 3) Running FUN 2 times in 2 thread(s)... 
14.023 seconds CV fold: Fold1 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 65.393 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.358 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold2 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 65.698 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.687 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold3 Registering parallel backend using 2 cores. Running initial scoring function 11 times in 2 thread(s)... 65.596 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.653 seconds Noise could not be added to find unique parameter set. Stopping process and returning results so far. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold2 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold3 Parameter 'ncores' is ignored for learner 'LearnerLm'. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 50.318 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.838 seconds 3) Running FUN 2 times in 2 thread(s)... 12.42 seconds Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 28.801 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.623 seconds 3) Running FUN 2 times in 2 thread(s)... 4.596 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 28.638 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.623 seconds 3) Running FUN 2 times in 2 thread(s)... 4.225 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... 
reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 32.889 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.397 seconds 3) Running FUN 2 times in 2 thread(s)... 5.596 seconds CV fold: Fold1 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold2 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold3 Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. Classification: using 'mean misclassification error' as optimization metric. CV fold: Fold1 CV fold: Fold2 CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 6.279 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.184 seconds 3) Running FUN 2 times in 2 thread(s)... 0.663 seconds Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold1 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 8.577 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.691 seconds 3) Running FUN 2 times in 2 thread(s)... 0.919 seconds CV fold: Fold2 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 8.847 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 1.82 seconds 3) Running FUN 2 times in 2 thread(s)... 0.667 seconds CV fold: Fold3 Number of rows of initialization grid > than 'options("mlexperiments.bayesian.max_init")'... ... reducing initialization grid to 10 rows. Registering parallel backend using 2 cores. Running initial scoring function 10 times in 2 thread(s)... 7.705 seconds Starting Epoch 1 1) Fitting Gaussian Process... 2) Running local optimum search... 2.016 seconds 3) Running FUN 2 times in 2 thread(s)... 0.949 seconds CV fold: Fold1 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold2 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. CV fold: Fold3 Regression: using 'mean squared error' as optimization metric. Regression: using 'mean squared error' as optimization metric. 
Regression: using 'mean squared error' as optimization metric.

[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]

══ Skipped tests (1) ═══════════════════════════════════════════════════════════
• On CRAN (1): 'test-lints.R:10:5'

══ Failed tests ════════════════════════════════════════════════════════════════
── Error ('test-glm_predictions.R:73:5'): test predictions, binary - glm ───────
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
    by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
    ▆
 1. └─mlexperiments::predictions(...) at test-glm_predictions.R:73:5
 2. ├─...[]
 3. └─data.table:::`[.data.table`(...)
── Error ('test-glm_predictions.R:182:5'): test predictions, regression - lm ───
Error in ``[.data.table`(res, , `:=`(mean = mean(as.numeric(.SD), na.rm = na.rm),
    sd = stats::sd(as.numeric(.SD), na.rm = na.rm)), .SDcols = colnames(res),
    by = seq_len(nrow(res)))`: attempt access index 5/5 in VECTOR_ELT
Backtrace:
    ▆
 1. └─mlexperiments::predictions(...) at test-glm_predictions.R:182:5
 2. ├─...[]
 3. └─data.table:::`[.data.table`(...)

[ FAIL 2 | WARN 0 | SKIP 1 | PASS 68 ]
Error: ! Test failures.
Execution halted
Flavor: r-devel-linux-x86_64-fedora-gcc

Package mllrnrs

Current CRAN status: OK: 13

Package mlsurvlrnrs

Current CRAN status: OK: 13

Package rBiasCorrection

Current CRAN status: OK: 13

Package sjtable2df

Current CRAN status: OK: 13