
Questions tagged [cross-validation]

Cross-Validation is a method of evaluating and comparing predictive systems in statistics and machine learning.

0 votes
0 answers
26 views

What to do after cross validation? [closed]

After using cross-validation to see how a custom predictive function performs on unseen data, I applied the function to the original dataset, and the performance (based on the coefficient of determination) ...
asked by Beginner
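A common answer to the question above: cross-validation only estimates out-of-sample performance; afterwards you refit on all the data, and a higher in-sample R² than the CV estimate is expected rather than a sign something went wrong. A minimal sketch with scikit-learn (the data and model here are illustrative stand-ins, not the asker's custom function):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data standing in for the asker's dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=100)

model = LinearRegression()

# Step 1: cross-validation estimates out-of-sample R^2.
cv_r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

# Step 2: refit on ALL the data; this is the model you keep.
model.fit(X, y)
train_r2 = model.score(X, y)

# Training R^2 is usually higher than the CV estimate; that gap is
# normal, not evidence that the cross-validation was wrong.
print(f"CV R^2: {cv_r2:.3f}, full-data R^2: {train_r2:.3f}")
```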
0 votes
0 answers
10 views

AutoARIMA Time-Series cross-validation using Sktime evaluate returns no predictions for certain folds

Data looks like this (W-SUN frequency) **y** 2019-12-23/2019-12-29 3230 2019-12-30/2020-01-05 4347 2020-01-06/2020-01-12 4161 2020-01-13/2020-01-19 4417 2020-01-20/2020-01-26 4310 **X**...
asked by laxxxx
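Without running sktime here, one common reason an expanding-window evaluation yields no (or fewer) predictions for certain folds is that the forecasting horizon runs past the end of the series, so late folds have a short or empty test window. The splitting arithmetic can be sketched directly (this is an illustrative splitter, not sktime's actual implementation; the window sizes are made up):

```python
import numpy as np

# Minimal expanding-window splitter sketch showing how a fold's test
# window can shrink (or vanish) near the end of a short series.
def expanding_window_splits(n_obs, initial_window, step, horizon):
    """Yield (train_idx, test_idx); test_idx may be shorter than
    `horizon` once the window runs past the last observation."""
    cutoff = initial_window
    while cutoff < n_obs:
        train_idx = np.arange(0, cutoff)
        test_idx = np.arange(cutoff, min(cutoff + horizon, n_obs))
        yield train_idx, test_idx
        cutoff += step

# 20 weekly observations, like the W-SUN series in the question.
for train_idx, test_idx in expanding_window_splits(
        20, initial_window=12, step=4, horizon=6):
    print(len(train_idx), len(test_idx))  # the last fold is truncated
```

The other frequent culprit is the model erroring out on a fold (too little data for the ARIMA order search), in which case an evaluation routine that swallows errors reports NaN instead of predictions.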
-1 votes
1 answer
27 views

Cross validation on data [closed]

I have two files: one is train.csv and the other is test.csv. test.csv will be unseen data and we will not use it in training. So I am using train.csv, which I further split into train_1 and validation ...
asked by Haris Waheed Khan
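The setup described — splitting train.csv further into a training and validation part, and touching test.csv only once at the very end — can be sketched like this (file contents replaced by synthetic arrays; variable names follow the question):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for train.csv / test.csv (the real files are not shown).
rng = np.random.default_rng(42)
X_train_full = rng.normal(size=(200, 4))
y_train_full = (X_train_full[:, 0] > 0).astype(int)
X_test = rng.normal(size=(50, 4))
y_test = (X_test[:, 0] > 0).astype(int)

# Split train.csv further into train_1 and validation, as the asker describes.
X_tr, X_val, y_tr, y_val = train_test_split(
    X_train_full, y_train_full, test_size=0.2, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
val_acc = model.score(X_val, y_val)   # used for tuning / model selection

# test.csv is used exactly once, at the very end, for the final estimate.
test_acc = model.score(X_test, y_test)
print(val_acc, test_acc)
```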
0 votes
0 answers
19 views

Problem with spatial autocorrelation measurement for predictor rasters in blockCV::cv_spatial_autocor()

I’m encountering problems with the cv_spatial_autocor function from the blockCV package. This function is used to measure spatial autocorrelation in spatial response data or predictor raster files. It ...
asked by Marine Régis
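blockCV is an R package and its internals are not reproduced here, but the idea it serves — keeping spatially autocorrelated points on the same side of each train/test split — can be sketched in Python with scikit-learn's GroupKFold as a conceptual stand-in (coordinates, block size, and the blocking scheme below are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Assign each point to a spatial block, then split by whole blocks so
# nearby (autocorrelated) points never straddle the train/test boundary.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(60, 2))          # point coordinates
block_size = 25.0                                # 4 x 4 grid of blocks
blocks = ((xy[:, 0] // block_size).astype(int) * 4
          + (xy[:, 1] // block_size).astype(int))

X = rng.normal(size=(60, 3))                     # predictor values at the points

for train_idx, test_idx in GroupKFold(n_splits=4).split(X, groups=blocks):
    # No spatial block appears on both sides of any split.
    assert not set(blocks[train_idx]) & set(blocks[test_idx])
```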
0 votes
1 answer
22 views

Cross-Validation Function returns "Unknown label type: (array([0.0, 1.0], dtype=object),)"

Here is the full error: `--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[33], line 2 ...
asked by nicklaus-slade
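The `array([0.0, 1.0], dtype=object)` in that message is the giveaway: scikit-learn classifiers refuse label arrays with `dtype=object`, because it cannot tell whether the target is categorical. Casting the labels to a concrete numeric type before cross-validating is the usual fix (data here is synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))

# Labels stored with dtype=object, as in the error message — this is
# what trips scikit-learn's label-type validation for classifiers.
y_bad = np.array([0.0, 1.0] * 30, dtype=object)

# Fix: cast to a concrete integer dtype before cross-validating.
y = y_bad.astype(int)
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(scores)
```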
0 votes
0 answers
33 views

Building custom cross-validation in PyCaret

I have been working on a slightly different cross-validation for a specific dataset I want to integrate with PyCaret. However, once it runs there is no output from compare_models(). I believe it is an ...
asked by Kah Seng Ho
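PyCaret is not exercised here, but its `setup()` is documented to accept a custom scikit-learn-compatible CV object via `fold_strategy`; such an object only needs `split()` and `get_n_splits()`. A toy splitter satisfying that interface, verified against plain scikit-learn (the split logic itself is illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

class ContiguousBlockSplit:
    """Toy custom splitter exposing the split/get_n_splits interface
    that scikit-learn (and PyCaret's fold_strategy) expects."""
    def __init__(self, n_splits=3):
        self.n_splits = n_splits

    def get_n_splits(self, X=None, y=None, groups=None):
        return self.n_splits

    def split(self, X, y=None, groups=None):
        n = len(X)
        fold = n // (self.n_splits + 1)
        for i in range(self.n_splits):
            test = np.arange(i * fold, (i + 1) * fold)
            train = np.setdiff1d(np.arange(n), test)
            yield train, test

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))
y = (X[:, 0] > 0).astype(int)
scores = cross_val_score(LogisticRegression(), X, y, cv=ContiguousBlockSplit(3))
print(scores)
```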
0 votes
0 answers
19 views

Is it possible to perform Rolling Origins on multiple time series with the modeltime package in R?

I am currently working on a project that involves multiple time series, and I would like to know if it is possible to implement a Rolling Origins technique for these multiple series using the ...
asked by ds_user_1705
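modeltime/rsample are R packages, but the rolling-origin idea — repeatedly advancing the forecast origin and testing on the window just after it, independently for each series — can be sketched with scikit-learn's `TimeSeriesSplit` (series names and lengths below are made up):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Two series of different lengths, each resampled with a rolling origin.
series = {
    "store_a": np.arange(30, dtype=float),
    "store_b": np.arange(24, dtype=float),
}

splitter = TimeSeriesSplit(n_splits=3, test_size=4)
folds = {}
for name, y in series.items():
    # (train length, test length) for each rolling-origin fold.
    folds[name] = [(len(tr), len(te)) for tr, te in splitter.split(y)]

print(folds)
```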
-2 votes
0 answers
18 views

MLP Regressor Engineering Data SKLearn

I have 10 accelerometers distributed on an aircraft analytical model. From my analytical model, I have a set of sensor acceleration of 10 Accelerometers X 6 Degrees of Freedom X 6000 (60 seconds) data ...
asked by dthoma17
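For data shaped like the question's (10 accelerometers × 6 DOF × time samples), a common starting point is flattening the channels into feature columns and cross-validating an `MLPRegressor` inside a pipeline, so the scaler is fit per fold without leakage. A sketch with synthetic stand-in data (200 samples instead of 6000 to keep it fast; the target is invented):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rows are time samples; columns are 10 accelerometers x 6 DOF = 60 channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=200)

# Scaling matters for MLPs; the pipeline refits the scaler on each
# fold's training split only, avoiding leakage into the test fold.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0))

scores = cross_val_score(model, X, y,
                         cv=KFold(5, shuffle=True, random_state=0),
                         scoring="r2")
print(scores.mean())
```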
0 votes
0 answers
26 views

Controlling for variation in results of XGBoost cross-validation

I am finding that the nrounds values returned by cross-validation for XGBoost are highly variable. This of course translates to models with varying performance. This is especially a problem when I compare ...
asked by Goldbug
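There are two seed-dependent sources of that variability: how the data is assigned to folds, and the model's own randomness. Fixing both makes a single run reproducible; averaging over several fold seeds damps the fold-assignment noise when comparing feature sets. Sketched below with scikit-learn standing in for `xgb.cv` (which exposes the same choices through its `seed` and `folds` arguments — not run here):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=10, noise=10.0,
                       random_state=0)

def cv_score(fold_seed):
    # Fixing both the fold seed and the model seed makes this reproducible.
    cv = KFold(n_splits=5, shuffle=True, random_state=fold_seed)
    model = GradientBoostingRegressor(random_state=0)
    return cross_val_score(model, X, y, cv=cv, scoring="r2").mean()

single = cv_score(fold_seed=0)                        # fold-dependent
repeated = np.mean([cv_score(s) for s in range(5)])   # averages fold noise
print(single, repeated)
```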
0 votes
0 answers
18 views

What is the difference between calling summary() and train_summary() on a nestcv.train object in the nestedcv package in R?

I am running a series of elastic net models with nested cross validation on my training data, using the nestedcv package in R. I am trying to extract performance metrics to compare models with ...
asked by may.the.bee
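Without claiming what those two nestedcv functions report exactly, the distinction nested cross-validation draws is between inner-loop (tuning) metrics and outer-loop (test) metrics: inner scores are optimistic because the hyperparameters were chosen on them, so outer scores are the ones to compare models with. Sketched with scikit-learn as a Python stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Inner loop: hyperparameter tuning.
inner = GridSearchCV(LogisticRegression(max_iter=1000),
                     {"C": [0.01, 0.1, 1, 10]}, cv=3)

# Outer loop: generalization estimate for the whole tuning procedure.
outer_scores = cross_val_score(inner, X, y, cv=5)

inner.fit(X, y)
print("inner-loop best score:", inner.best_score_)  # optimistic
print("outer-loop score:", outer_scores.mean())     # honest comparison metric
```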
1 vote
1 answer
39 views

How to save single Random Forest model with cross validation?

I am using 10 fold cross validation, trying to predict binary labels (Y) based on the embedding inputs (X). I want to save one of the models (perhaps the one with the highest ROC AUC). I'm not sure ...
asked by youtube
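One way to do exactly what the question asks (a sketch with synthetic data): run the folds manually instead of through a helper, keep the fitted model whose held-out ROC AUC is highest, and pickle it.

```python
import pickle

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=200, n_features=16, random_state=0)

best_auc, best_model = -np.inf, None
for train_idx, test_idx in StratifiedKFold(n_splits=10).split(X, y):
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    auc = roc_auc_score(y[test_idx], model.predict_proba(X[test_idx])[:, 1])
    if auc > best_auc:
        best_auc, best_model = auc, model

# Serialize the winning fold's model (joblib.dump works the same way).
blob = pickle.dumps(best_model)
print(best_auc)
```

Worth noting: the fold with the highest AUC is partly lucky, so for deployment it is often preferable to refit a single model on all the data after CV rather than keep one fold's model.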
0 votes
0 answers
14 views

sklearn LeaveOneOut with cross_validate/GridSearchCV: how can I use confusion matrix-based scores as the custom scoring functions?

I’m using LeaveOneOut from sklearn, along with GridSearchCV and cross_validate. I’m working on a medical problem, so I’m interested in finding sensitivity and specificity. However, because LeaveOneOut ...
asked by Thao Nguyen
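The core difficulty: with LeaveOneOut each test fold holds a single sample, so per-fold sensitivity and specificity are undefined (one class is always absent from the fold). A common workaround is to pool the per-sample predictions with `cross_val_predict` and compute one confusion matrix over all of them (synthetic data below):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X, y = make_classification(n_samples=60, n_features=5, random_state=0)

# Each sample's prediction comes from a model that never saw that sample.
y_pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                           cv=LeaveOneOut())

# One confusion matrix over all pooled out-of-fold predictions.
tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(sensitivity, specificity)
```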
0 votes
0 answers
23 views

Leave-one-out cross validation for model evaluation

# RF rf_optimal = RandomForestRegressor(**best_params, random_state=42) # leave one out cross validation loo = LeaveOneOut() r2_train_scores = [] rmse_train_scores = [] ...
asked by RS-MJH
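The same pooling idea applies on the regression side: R² cannot be computed fold-by-fold under leave-one-out (each fold has one point, so the variance term is zero), so collect the out-of-fold predictions and score them once. A sketch with synthetic data — `best_params` from the question is not shown, so a plain forest stands in:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

X, y = make_regression(n_samples=50, n_features=8, noise=5.0, random_state=0)

rf = RandomForestRegressor(n_estimators=50, random_state=42)

# Pool the single-sample predictions from all 50 folds...
y_pred = cross_val_predict(rf, X, y, cv=LeaveOneOut())

# ...then compute R^2 and RMSE once over the pooled predictions.
loo_r2 = r2_score(y, y_pred)
loo_rmse = mean_squared_error(y, y_pred) ** 0.5
print("LOO R^2:", loo_r2, "LOO RMSE:", loo_rmse)
```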
0 votes
0 answers
17 views

How exactly does early stopping work with XGBoost CV in Python?

My understanding of cross-validation is that training data is divided into n folds. For each fold, a model is trained on all other folds and validated on the selected fold. At the end, we will have n ...
asked by MirageCommander
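A sketch of the usual answer: `xgb.cv` evaluates every fold round by round, averages the validation metric across folds after each round, and early-stops on that mean curve, so one shared best round comes out rather than a separate best round per fold. The averaging logic can be demonstrated with scikit-learn's gradient boosting as a stand-in (the library choice is incidental; the per-round mean is the point):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=200, n_features=8, noise=10.0,
                       random_state=0)

n_rounds = 100
fold_curves = []  # per-fold validation RMSE at every boosting round
for tr, te in KFold(4, shuffle=True, random_state=0).split(X):
    model = GradientBoostingRegressor(n_estimators=n_rounds, random_state=0)
    model.fit(X[tr], y[tr])
    curve = [mean_squared_error(y[te], p) ** 0.5
             for p in model.staged_predict(X[te])]
    fold_curves.append(curve)

# Early stopping in CV acts on the MEAN curve across folds, so the
# "best iteration" is a single shared round, not one per fold.
mean_curve = np.mean(fold_curves, axis=0)
best_round = int(np.argmin(mean_curve)) + 1
print("best round by mean CV metric:", best_round)
```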
0 votes
1 answer
40 views

"numpy.ndarray" is not callable

I am getting a type error if I run this code twice or multiple times. That means if I run it once, it won't show any error but if I run it multiple times it will show an error. Some parts of the code: ...
asked by David
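The "works once, fails on re-run" pattern almost always means a notebook cell rebinds a function name to an array, so the second execution calls the array instead of the function. A minimal reproduction (the variable names are invented for illustration):

```python
import numpy as np

values = np.array([1.0, 2.0, 3.0])

total = np.cumsum          # 'total' is a function on the first run
out = total(values)        # works the first time through

total = np.cumsum(values)  # ...but this rebinds 'total' to an ndarray

try:
    total(values)          # second run: the array is not callable
except TypeError as e:
    print(e)               # 'numpy.ndarray' object is not callable
```

The fix is to use distinct names for the function and its result (or restart the kernel so the shadowed name is restored).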
