
Questions tagged [auc]

The Area Under the Curve (AUC) is a metric that provides a single scalar representation of a classifier’s performance. While often associated with the Receiver Operating Characteristic (ROC) curve, it can also apply to other curves such as the Precision-Recall (PR) curve. The AUC essentially quantifies the likelihood that the classifier will correctly rank a randomly chosen positive instance higher than a randomly chosen negative one.
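The ranking interpretation above can be checked directly. A minimal numpy sketch of the rank-based (Mann-Whitney) form of ROC AUC, with tied scores counted as one half; this is an illustration, not any particular library's implementation:

```python
import numpy as np

def auc_rank(y_true, scores):
    """AUC via the rank-sum identity: the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties = 1/2)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Explicit pairwise comparison: O(n_pos * n_neg), fine for small data.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
print(auc_rank(y, s))  # 0.75
```

Three of the four positive-negative pairs are ranked correctly here, hence 0.75.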

0 votes
0 answers
22 views

PRROC package - foreground background data - R

I'm currently trying to draw ROC and precision-recall curves for my model and struggling a bit to understand how to use my data for the PRROC package of R. I have a data frame containing different ...
asked by user26317811
1 vote
1 answer
30 views

How to find the number of samples picked in each replicate of a stratified bootstrap in pROC?

Question is regarding the roc function of pROC package. Package link: https://www.rdocumentation.org/packages/pROC/versions/1.18.5/topics/roc. Paper link https://www.ncbi.nlm.nih.gov/pmc/articles/...
asked by JALO - JusAnotherLivngOrganism
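For what the stratified option does conceptually: cases and controls are resampled separately, each to its original count, so every replicate preserves the class balance. pROC itself is R; this Python sketch only mirrors that idea and is not pROC's code:

```python
import numpy as np

def stratified_bootstrap(y, n_boot=3, seed=0):
    """Sketch of stratified resampling: positives and negatives are
    drawn with replacement within their own stratum, so each replicate
    keeps exactly the original number of cases and controls."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    pos_idx = np.flatnonzero(y == 1)
    neg_idx = np.flatnonzero(y == 0)
    replicates = []
    for _ in range(n_boot):
        boot = np.concatenate([
            rng.choice(pos_idx, size=len(pos_idx), replace=True),
            rng.choice(neg_idx, size=len(neg_idx), replace=True),
        ])
        replicates.append(boot)
    return replicates

y = np.array([1] * 5 + [0] * 15)
for boot in stratified_bootstrap(y):
    # every replicate contains exactly 5 positives and 15 negatives
    print((y[boot] == 1).sum(), (y[boot] == 0).sum())
```

So under stratification the per-replicate sample counts are not random at all: they equal the observed class counts by construction.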
2 votes
1 answer
30 views

How to add auc, best threshold, sensitivity, and specificity to grouped data

I want to take a dataset that has truth and various predictors in it and summarise the auc, 'best' threshold, sensitivity, and specificity for each predictor, split by some grouping variable. Example ...
asked by Brian D (2,690)
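A Python sketch of the grouped-summary idea (the question itself is R/dplyr-flavoured; the column names `truth`, `pred`, and `grp` are hypothetical, and "best" threshold is taken here to mean Youden's J):

```python
import numpy as np
import pandas as pd

def summarise(group):
    """Per-group AUC, best (Youden) threshold, sensitivity, specificity."""
    y = group["truth"].to_numpy()
    s = group["pred"].to_numpy(float)
    thr = np.unique(s)
    # sensitivity/specificity at every candidate threshold
    # (predict positive when score >= t)
    sens = np.array([((s >= t) & (y == 1)).sum() / (y == 1).sum() for t in thr])
    spec = np.array([((s < t) & (y == 0)).sum() / (y == 0).sum() for t in thr])
    best = (sens + spec - 1).argmax()          # Youden's J
    pos, neg = s[y == 1], s[y == 0]
    auc = ((pos[:, None] > neg[None, :]).sum()
           + 0.5 * (pos[:, None] == neg[None, :]).sum()) / (len(pos) * len(neg))
    return pd.Series({"auc": auc, "threshold": thr[best],
                      "sensitivity": sens[best], "specificity": spec[best]})

df = pd.DataFrame({
    "grp":   ["a"] * 4 + ["b"] * 4,
    "truth": [0, 0, 1, 1, 0, 1, 0, 1],
    "pred":  [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7],
})
print(df.groupby("grp").apply(summarise))
```

The same split-apply-combine shape translates back to `group_by() |> summarise()` with pROC's `auc()` and `coords(..., "best")` in R.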
0 votes
0 answers
13 views

Why does PySpark's `BinaryClassificationEvaluator` metric `areaUnderROC` return slightly different values across multiple evaluations of the same dataset?

I am using BinaryClassificationEvaluator in PySpark to calculate AUC; however, I find that the returned AUC differs across multiple evaluations of the same dataset (under the same dev environment, ...
asked by helloworld
0 votes
0 answers
30 views

TypeError: Singleton array array(1) cannot be considered a valid collection

I have a dataset where my target variable is a number between 1 and 8. Now I am going to implement Cubic SVM. import numpy as np import pandas as pd from sklearn.model_selection import ...
asked by Farshadih7
1 vote
1 answer
40 views

AUC for a seaborn distplot kde with multiple curves

Sorry if this is a very silly question, but I've been trying to understand how to calculate the area under each curve of a seaborn distplot where I use common_norm=False. I understand that in this ...
asked by user13096842
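The area can be checked numerically: with `common_norm=False` each curve is a density in its own right, so integrating each one should give roughly 1. This sketch uses `scipy.stats.gaussian_kde` directly rather than seaborn's internals, with made-up data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
groups = {"a": rng.normal(0, 1, 500), "b": rng.normal(3, 2, 500)}

areas = {}
for name, data in groups.items():
    kde = gaussian_kde(data)
    # evaluate on a grid wide enough to capture the tails
    xs = np.linspace(data.min() - 5, data.max() + 5, 2000)
    ys = kde(xs)
    # manual trapezoidal rule (version-agnostic alternative to np.trapezoid)
    areas[name] = float(np.sum((ys[1:] + ys[:-1]) / 2 * np.diff(xs)))
    print(name, round(areas[name], 3))
```

With `common_norm=True` the curves would instead be scaled so their areas sum to 1 across groups, which is why the per-curve areas look "wrong" in that mode.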
-1 votes
1 answer
18 views

Can AUC range be other than between 0 and 1?

In this question and answer regarding AUC ("Applying a function for calculating AUC for each subject"): why is the AUC not between 0 and 1? Shouldn't it be? Thank you so much in advance. I have tried ...
asked by Theis Bech Mikkelsen
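The answer typically hinges on which AUC is meant: a ROC AUC is bounded by [0, 1], but a trapezoidal area under, say, a concentration-time curve (the per-subject AUC common in pharmacokinetics) carries concentration-times-time units and routinely exceeds 1. A small sketch with made-up values:

```python
import numpy as np

# Concentration-time AUC by the trapezoidal rule; the result is in
# (ng/mL) x hours, so there is no reason for it to stay below 1.
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # hours (made-up)
c = np.array([0.0, 12.0, 9.0, 5.0, 1.0])  # ng/mL (made-up)

# manual trapezoid to stay version-agnostic across numpy releases
auc = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))
print(auc)  # 42.5
```

So an AUC of 42.5 here is perfectly sensible; only classifier ROC/PR AUCs are constrained to [0, 1].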
0 votes
0 answers
72 views

Trying to improve the predictions of my XGBoost model

I am new to machine learning and here is the problem I am facing: My dataset has 1000 records. The target is binary - 0 and 1. Dataset has 10 features. I have a "test dataset" with 350 recs ...
asked by Rex (1)
0 votes
0 answers
39 views

Calculating AUC for very large data in R

I would like to repeatedly calculate Area Under Curve type values (both AUROC and AUPRC) as well as average precision for a dataset that's larger than 2^32 rows. I have the dataset sliced up in ...
asked by Nils R (105)
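One way to sidestep the size limit, sketched here in Python rather than R: have each slice contribute per-class histograms over a shared set of score bins, then compute an approximate AUROC from the pooled counts (same-bin positive-negative pairs counted as half). The slice data and bin layout below are made up:

```python
import numpy as np

def binned_auc(bin_pos, bin_neg):
    """Approximate AUROC from per-bin counts of positives and negatives,
    bins ordered by increasing score. Only count vectors cross slice
    boundaries, so the raw data never has to fit in memory at once."""
    bin_pos = np.asarray(bin_pos, dtype=float)
    bin_neg = np.asarray(bin_neg, dtype=float)
    neg_below = np.cumsum(bin_neg) - bin_neg   # negatives in strictly lower bins
    wins = np.sum(bin_pos * neg_below)         # correctly ranked pairs
    ties = np.sum(bin_pos * bin_neg) * 0.5     # same-bin pairs count 1/2
    return (wins + ties) / (bin_pos.sum() * bin_neg.sum())

# Two "slices" contribute histograms over the same three score bins:
slice1_pos, slice1_neg = np.array([0, 1, 3]), np.array([2, 1, 0])
slice2_pos, slice2_neg = np.array([1, 1, 2]), np.array([3, 1, 1])
auc = binned_auc(slice1_pos + slice2_pos, slice1_neg + slice2_neg)
print(round(auc, 4))  # 0.8125
```

The approximation error is controlled by the bin width; with one bin per distinct score it reproduces the exact tie-corrected AUC.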
4 votes
1 answer
93 views

Different ROC optimal cutoff obtained from different functions with same methods

I just tried to calculate the optimal cutoff from a ROC curve. However, when I tried several functions from different packages, they returned different results. Which one is the correct one if I want to use ...
asked by W. Fan (43)
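A likely explanation is that the functions optimise different criteria, so both answers can be "correct" under their own definition. A sketch showing Youden's J and the closest-to-top-left rule (mirroring pROC's two `best.method` options) picking different cutoffs on the same made-up data:

```python
import numpy as np

def roc_points(y, s):
    """(threshold, FPR, TPR) at each candidate cutoff; predict positive
    when score >= threshold."""
    y = np.asarray(y)
    s = np.asarray(s, dtype=float)
    thr = np.unique(s)[::-1]
    tpr = np.array([((s >= t) & (y == 1)).sum() / (y == 1).sum() for t in thr])
    fpr = np.array([((s >= t) & (y == 0)).sum() / (y == 0).sum() for t in thr])
    return thr, fpr, tpr

# Made-up scores designed so the two criteria disagree:
y = np.repeat([1, 1, 0, 0, 0], [11, 9, 1, 4, 5])
s = np.repeat([0.7, 0.3, 0.8, 0.4, 0.1], [11, 9, 1, 4, 5])
thr, fpr, tpr = roc_points(y, s)

youden = thr[np.argmax(tpr - fpr)]               # maximise sens + spec - 1
topleft = thr[np.argmin(fpr**2 + (1 - tpr)**2)]  # minimise distance to (0, 1)
print(youden, topleft)  # 0.3 0.7
```

Neither cutoff is universally right; the choice depends on whether you weight sensitivity and specificity equally (Youden) or by squared distance to the perfect corner.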
1 vote
0 answers
37 views

I am trying to find the Brier score of survival data using the riskRegression package, but it's giving an error

xy = Score(list(cox_ph), formula=Surv(time,status)~1,data=cox.train, metrics=c("brier","auc"), null.model=FALSE,times=time.int,debug = TRUE) Extracted test set and prepared output ...
asked by user23902942
0 votes
1 answer
55 views

roc_auc_score differs between a RandomForestClassifier tuned with GridSearchCV and an explicitly coded RandomForestClassifier

Why doesn't a trained RandomForestClassifier with specific parameters match the performance of varying those parameters with a GridSearchCV? def random_forest(X_train, y_train): from sklearn....
asked by akaphenom (6,868)
0 votes
1 answer
328 views

PyCaret 3.3.0 compare_models() shows zeros for all models' AUC

During evaluation of the models using compare_models(), all AUCs are zero. This output of PyCaret 3.3.0 is weird; what's the reason for that? [1]: https://i.sstatic.net/qm2ZT.png
asked by sunone5 (375)
0 votes
1 answer
47 views

Is there a way to reuse a glm fit, if all I have is the model fit call and summary?

I'm working from a project built by a previous programmer. The call to glm() the programmer has in their documentation is glm(formula = AVAL ~ AUC, family = binomial(), data = logreg.dat) I have the ...
asked by BennyBoi
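If only the printed summary survives, predictions can be rebuilt by hand: for a binomial GLM, `predict(fit, type = "response")` is just the inverse logit of the linear predictor built from the fitted coefficients. A Python sketch with made-up intercept and slope values (the real ones would come from the summary table):

```python
import numpy as np

# Hypothetical coefficients as they would appear in
# summary(glm(AVAL ~ AUC, family = binomial())):
b0, b1 = -2.5, 0.031  # (Intercept), AUC -- made-up values

def predict_prob(auc_values):
    """Inverse-logit of the linear predictor: the fitted probability
    that AVAL = 1 at each AUC value."""
    eta = b0 + b1 * np.asarray(auc_values, dtype=float)
    return 1.0 / (1.0 + np.exp(-eta))

print(predict_prob([50, 100, 150]))
```

This recovers point predictions only; standard errors and anything depending on the variance-covariance matrix cannot be reconstructed from the call and summary alone.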
0 votes
0 answers
80 views

Calculating a pooled AUC and bootstrapped 95% CI, then thresholding the curve, following MICE imputation

I have been doing some fairly simple ROC curve analysis, involving creating some ROC curves, calculating AUC and 95% CI (2000 bootstrapped replicates) and then thresholding the curve to give a 95% ...
asked by DW1310 (321)
