Skore: getting started#

This guide illustrates how to use skore through a complete machine learning workflow for binary classification:

  1. Set up a proper experiment with training and test data

  2. Develop and evaluate multiple models using cross-validation

  3. Compare models to select the best one

  4. Validate the final model on held-out data

  5. Track and organize your machine learning results

Throughout this guide, we will see how skore helps you:

  • Avoid common pitfalls with smart diagnostics

  • Quickly get rich insights into model performance

  • Organize and track your experiments

Setting up our binary classification problem#

Let’s start by loading the German credit dataset, a classic binary classification problem in which we predict whether a customer’s credit risk is “good” or “bad”.

This dataset contains various features about credit applicants, including personal information, credit history, and loan details.

import pandas as pd
import skore
from sklearn.datasets import fetch_openml
from skrub import TableReport

german_credit = fetch_openml(data_id=31, as_frame=True, parser="pandas")
X, y = german_credit.data, german_credit.target
TableReport(german_credit.frame)

[Interactive skrub TableReport of the German credit dataset is rendered here.]


Creating our experiment and held-out sets#

We will use skore’s enhanced train_test_split() function to create our experiment set and a held-out test set. The experiment set will be used for model development and cross-validation, while the held-out set will only be used at the end to validate our final model.

Unlike scikit-learn’s train_test_split(), skore’s version provides helpful diagnostics about potential issues with your data split, such as class imbalance.
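A minimal sketch of this step is shown below, assuming that skore’s train_test_split() follows scikit-learn’s argument and return conventions; the random_state value is an illustrative choice rather than one prescribed by this guide. Running the split prints the diagnostics shown below.

# Split into an experiment set (model development and cross-validation) and a
# held-out set (final validation only). The random_state below is illustrative.
X_experiment, X_holdout, y_experiment, y_holdout = skore.train_test_split(
    X=X, y=y, random_state=42
)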

╭────────────────────── HighClassImbalanceTooFewExamplesWarning ───────────────────────╮
│ It seems that you have a classification problem with at least one class with fewer   │
│ than 100 examples in the test set. In this case, using train_test_split may not be a │
│ good idea because of high variability in the scores obtained on the test set. We     │
│ suggest three options to tackle this challenge: you can increase test_size, collect  │
│ more data, or use skore's CrossValidationReport with the `splitter` parameter of     │
│ your choice.                                                                         │
╰──────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────── ShuffleTrueWarning ─────────────────────────────────╮
│ We detected that the `shuffle` parameter is set to `True` either explicitly or from  │
│ its default value. In case of time-ordered events (even if they are independent),    │
│ this will result in inflated model performance evaluation because natural drift will │
│ not be taken into account. We recommend setting the shuffle parameter to `False` in  │
│ order to ensure the evaluation process is really representative of your production   │
│ release process.                                                                     │
╰──────────────────────────────────────────────────────────────────────────────────────╯

skore tells us we have class-imbalance issues with our data, which we confirm with the TableReport above by clicking on the “class” column and looking at the class distribution: there are only 300 examples where the target is “bad”. The second warning concerns time-ordered data, but our data does not contain time-ordered columns so we can safely ignore it.

Model development with cross-validation#

We will investigate two different families of models using cross-validation.

  1. A LogisticRegression, which is a linear model

  2. A RandomForestClassifier, which is an ensemble of decision trees

In both cases, we rely on skrub.tabular_pipeline() to choose the proper preprocessing depending on the kind of model.

Cross-validation is necessary to get a more reliable estimate of model performance. skore makes it easy through skore.CrossValidationReport.

Model no. 1: Linear model with preprocessing#

Our first model will be a linear model, with automatic preprocessing of non-numeric data. Under the hood, skrub’s TableVectorizer will adapt the preprocessing based on our choice to use a linear model.

from sklearn.linear_model import LogisticRegression
from skrub import tabular_pipeline

simple_model = tabular_pipeline(LogisticRegression())
simple_model
Pipeline(steps=[('tablevectorizer',
                 TableVectorizer(datetime=DatetimeEncoder(periodic_encoding='spline'))),
                ('simpleimputer', SimpleImputer(add_indicator=True)),
                ('squashingscaler', SquashingScaler(max_absolute_value=5)),
                ('logisticregression', LogisticRegression())])


We now cross-validate the model with CrossValidationReport.

from skore import CrossValidationReport

simple_cv_report = CrossValidationReport(
    simple_model,
    X=X_experiment,
    y=y_experiment,
    pos_label="good",
    splitter=5,
)

Skore reports allow us to structure the statistical information we look for when experimenting with predictive models. First, the help() method shows us all the available methods and attributes, taking into account that our model was trained for classification:
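For example, we can call it directly on the report:

simple_cv_report.help()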



For example, we can examine the training data, which excludes the held-out data:

simple_cv_report.data.analyze()



But we can also quickly get an overview of the performance of our model, using summarize():

simple_metrics = simple_cv_report.metrics.summarize(favorability=True)
simple_metrics.frame()
                  LogisticRegression            Favorability
                        mean       std
Metric
Accuracy            0.729333  0.050903                   (↗︎)
Precision           0.785632  0.034982                   (↗︎)
Recall              0.840934  0.050696                   (↗︎)
ROC AUC             0.750335  0.056447                   (↗︎)
Brier score         0.184294  0.026786                   (↘︎)
Fit time (s)        0.115650  0.010040                   (↘︎)
Predict time (s)    0.053862  0.000183                   (↘︎)


Note

favorability=True adds a column showing whether higher or lower metric values are better.

In addition to the summary of metrics, skore provides more advanced statistical information such as the precision-recall curve:
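The display can be obtained from the metrics accessor; we store it in a variable (named pr_display here purely for convenience) so that we can reuse it below:

pr_display = simple_cv_report.metrics.precision_recall()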



Note

The output of precision_recall() is a Display object. This is a common pattern in skore which allows us to access the information in several ways.

We can visualize the critical information as a plot, with only a few lines of code:
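Using the display stored above, a single call is enough:

pr_display.plot()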

[Figure: precision-recall curve for LogisticRegression. Positive label: good. Data source: test set.]

Or we can access the raw information as a dataframe if additional analysis is needed:
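Assuming the display exposes the same frame() accessor as the other skore displays used in this guide:

pr_display.frame()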

split threshold precision recall
0 0 0.110277 0.700000 1.000000
1 0 0.215387 0.744681 1.000000
2 0 0.238563 0.742857 0.990476
3 0 0.248819 0.741007 0.980952
4 0 0.273961 0.739130 0.971429
... ... ... ... ...
660 4 0.982595 1.000000 0.048077
661 4 0.988545 1.000000 0.038462
662 4 0.989817 1.000000 0.028846
663 4 0.994946 1.000000 0.019231
664 4 0.995636 1.000000 0.009615

665 rows × 4 columns



As another example, we can plot the confusion matrix with the same consistent API:
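Following the same pattern as the other metric displays:

simple_cv_report.metrics.confusion_matrix().plot()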

[Figure: confusion matrix. Decision threshold: 0.50. Data source: test set.]

Skore also provides utilities to inspect models. Since our model is a linear model, we can study the importance that it gives to each feature:
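The coefficients are available through the inspection accessor, the same one we use again at the end of this guide; here we assume the default frame() layout, with one column per cross-validation split:

coefficients = simple_cv_report.inspection.coefficients()
coefficients.frame()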

split 0 1 2 3 4
feature
Intercept 1.232482 1.000982 1.186289 1.302070 1.387773
age 0.449341 0.293578 0.484799 0.575005 0.423790
checking_status_0<=X<200 -0.322232 -0.524056 -0.226205 -0.416332 -0.292542
checking_status_<0 -0.572662 -0.756162 -0.819696 -0.759692 -0.655088
checking_status_>=200 0.196627 0.363105 0.298289 0.379424 -0.155683
checking_status_no checking 0.791377 1.093615 0.896182 0.975038 1.030187
credit_amount -0.488050 -0.563344 -0.240435 -0.238264 -0.385259
credit_history_all paid -0.427242 -0.610534 -0.516443 -0.321231 -0.405498
credit_history_critical/other existing credit 0.603902 0.795179 0.686587 0.418338 0.774756
credit_history_delayed previously 0.250935 -0.074834 0.087466 0.140781 0.117275
credit_history_existing paid -0.354469 -0.242417 -0.132602 -0.242489 -0.366320
credit_history_no credits/all paid -0.296052 -0.361414 -0.620119 -0.152560 -0.485244
duration -0.226065 -0.242654 -0.429904 -0.340377 -0.258032
employment_1<=X<4 -0.129556 0.003537 -0.060321 0.037913 -0.164249
employment_4<=X<7 0.072062 0.257337 0.136516 0.076960 0.266951
employment_<1 -0.107370 -0.189338 -0.169479 0.031779 -0.025116
employment_>=7 0.140419 0.085614 0.155651 -0.114689 0.112820
employment_unemployed 0.140826 -0.150580 -0.132803 0.035466 -0.143785
existing_credits -0.359864 -0.227033 -0.136806 -0.204793 -0.540882
foreign_worker_yes -0.526434 -0.536209 -0.979933 -0.699237 -0.660103
housing_for free -0.250763 -0.005073 -0.271939 0.020828 -0.033157
housing_own 0.204333 0.125943 0.208304 0.069616 0.140103
housing_rent -0.136189 -0.233428 -0.122532 -0.152662 -0.232159
installment_commitment -0.852077 -0.764793 -0.429653 -0.496181 -0.592614
job_high qualif/self emp/mgmt 0.032893 -0.107446 -0.247662 -0.091431 -0.318198
job_skilled -0.027400 -0.223924 -0.195591 0.139285 -0.038499
job_unemp/unskilled non res -0.029983 0.465422 0.723994 -0.202815 0.272749
job_unskilled resident 0.048978 0.066076 -0.105937 0.030478 0.118356
num_dependents -0.171672 0.044334 -0.037084 -0.209791 -0.112250
other_parties_co applicant -0.140583 -0.225750 -0.054463 -0.111283 -0.080026
other_parties_guarantor 0.300526 0.307028 0.308853 0.268091 0.198948
other_parties_none -0.159942 -0.081277 -0.254390 -0.156808 -0.118922
other_payment_plans_bank -0.208820 -0.122027 -0.046832 -0.043997 -0.218566
other_payment_plans_none 0.147040 0.129909 0.210488 0.251110 0.263331
other_payment_plans_stores 0.061781 -0.007883 -0.163656 -0.207112 -0.044765
own_telephone_yes 0.568734 0.292184 0.234379 0.216989 0.319237
personal_status_female div/dep/mar -0.220352 -0.041395 -0.197925 -0.243007 -0.130270
personal_status_male div/sep -0.074733 -0.238236 -0.048218 -0.264686 -0.474539
personal_status_male mar/wid -0.001827 -0.043175 -0.133803 -0.122594 -0.135744
personal_status_male single 0.260780 0.189997 0.294042 0.447514 0.452534
property_magnitude_car -0.120225 0.056938 -0.111798 0.125626 -0.013189
property_magnitude_life insurance -0.151793 -0.030097 -0.047276 -0.082902 -0.241271
property_magnitude_no known property -0.140763 -0.285394 -0.153467 -0.333516 -0.169931
property_magnitude_real estate 0.275024 0.114798 0.225598 0.075889 0.230675
purpose_business -0.292872 0.093522 -0.010574 0.000077 -0.003279
purpose_domestic appliance 0.239637 -0.195704 -0.111200 -0.351450 -0.223330
purpose_education -0.631945 -0.790524 -0.300940 -0.279283 -0.521314
purpose_furniture/equipment 0.048164 0.075959 0.063308 0.056175 0.201345
purpose_new car -0.295397 -0.498437 -0.484673 -0.366124 -0.406895
purpose_other -0.052600 0.201789 -0.103120 0.119750 0.180212
purpose_radio/tv 0.107742 0.027082 0.225258 0.164070 0.170739
purpose_repairs -0.090052 -0.264832 -0.245281 -0.417854 -0.340488
purpose_retraining 0.389179 0.812018 0.150297 0.447931 0.440698
purpose_used car 0.362663 0.524651 0.630367 0.478885 0.350375
residence_since 0.035873 -0.073226 -0.024529 -0.167507 -0.123993
savings_status_100<=X<500 -0.081902 0.128990 -0.018607 0.037222 -0.120532
savings_status_500<=X<1000 0.045873 0.036869 0.024796 0.086757 0.232234
savings_status_<100 -0.297694 -0.289097 -0.525485 -0.305888 -0.249600
savings_status_>=1000 0.309139 0.183254 0.921911 0.296246 0.142577
savings_status_no known savings 0.290642 0.198357 0.067025 0.159045 0.218395


coefficients.plot(select_k=15)
[Figure: coefficients of LogisticRegression (top 15 features).]

Model no. 2: Random forest#

Now, we cross-validate a more advanced model using RandomForestClassifier. Again, we rely on tabular_pipeline() to perform the appropriate preprocessing to use with this model.
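A sketch of the corresponding pipeline definition is shown below; the random_state value matches the pipeline representation that follows:

from sklearn.ensemble import RandomForestClassifier

advanced_model = tabular_pipeline(RandomForestClassifier(random_state=0))
advanced_model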

Pipeline(steps=[('tablevectorizer',
                 TableVectorizer(low_cardinality=OrdinalEncoder(handle_unknown='use_encoded_value',
                                                                unknown_value=-1))),
                ('randomforestclassifier',
                 RandomForestClassifier(random_state=0))])
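We then cross-validate this pipeline in the same way as the linear model, mirroring the earlier CrossValidationReport call; the resulting advanced_cv_report is used in the comparison below:

advanced_cv_report = CrossValidationReport(
    advanced_model,
    X=X_experiment,
    y=y_experiment,
    pos_label="good",
    splitter=5,
)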


We will now compare this new model with the previous one.

Comparing our models#

Now that we have our two models, we need to decide which one should go into production. We can compare them with a skore.ComparisonReport.

from skore import ComparisonReport

comparison = ComparisonReport(
    {
        "Simple Linear Model": simple_cv_report,
        "Advanced Pipeline": advanced_cv_report,
    },
)

This report follows the same API as CrossValidationReport:
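In particular, the help() method gives the same kind of overview:

comparison.help()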



We have access to the same tools to perform statistical analysis and compare both models:

comparison_metrics = comparison.metrics.summarize(favorability=True)
comparison_metrics.frame()
                                  mean                                  std               Favorability
Estimator          Simple Linear Model  Advanced Pipeline  Simple Linear Model  Advanced Pipeline
Metric
Accuracy                      0.729333           0.745333             0.050903           0.032796   (↗︎)
Precision                     0.785632           0.779443             0.034982           0.018644   (↗︎)
Recall                        0.840934           0.885037             0.050696           0.053558   (↗︎)
ROC AUC                       0.750335           0.773334             0.056447           0.034190   (↗︎)
Brier score                   0.184294           0.169911             0.026786           0.010967   (↘︎)
Fit time (s)                  0.115650           0.211414             0.010040           0.005196   (↘︎)
Predict time (s)              0.053862           0.050737             0.000183           0.000592   (↘︎)


comparison.metrics.precision_recall().plot()
[Figure: precision-recall curves for the Simple Linear Model and Advanced Pipeline estimators. Positive label: good. Data source: test set.]

Based on the previous tables and plots, it seems that the RandomForestClassifier model performs slightly better. For the purposes of this guide, however, we make the arbitrary choice to deploy the linear model, so that we can relate the result to the coefficients study shown earlier.

Final model evaluation on held-out data#

Now that we have chosen to deploy the linear model, we will train it on the full experiment set and evaluate it on our held-out data: training on more data should improve performance, and evaluating on the held-out set lets us validate that the model generalizes well to new data. This can be done in one step with create_estimator_report().

final_report = comparison.create_estimator_report(
    name="Simple Linear Model", X_test=X_holdout, y_test=y_holdout
)

This returns an EstimatorReport, which has a similar API to the other report classes:
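For example, the metrics summary is computed in the same way; we store it as final_metrics because we reuse it just below:

final_metrics = final_report.metrics.summarize()
final_metrics.frame()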

LogisticRegression
Metric
Accuracy 0.764000
Precision 0.808290
Recall 0.876404
ROC AUC 0.809613
Brier score 0.153900
Fit time (s) 0.101135
Predict time (s) 0.054682


final_report.metrics.confusion_matrix().plot()
[Figure: confusion matrix. Decision threshold: 0.50. Data source: test set.]

We can easily combine the results of the previous cross-validation with the evaluation on the held-out dataset, since both are accessible as dataframes. This way, we can check whether our chosen model meets the expectations we set during the experiment phase.

pd.concat(
    [final_metrics.frame(), simple_cv_report.metrics.summarize().frame()],
    axis="columns",
)
                  LogisticRegression  (LogisticRegression, mean)  (LogisticRegression, std)
Metric
Accuracy                    0.764000                    0.729333                   0.050903
Precision                   0.808290                    0.785632                   0.034982
Recall                      0.876404                    0.840934                   0.050696
ROC AUC                     0.809613                    0.750335                   0.056447
Brier score                 0.153900                    0.184294                   0.026786
Fit time (s)                0.101135                    0.115650                   0.010040
Predict time (s)            0.054682                    0.053862                   0.000183


As expected, our final model gets better performance, likely thanks to the larger training set.

Our final sanity check is to compare the features considered most impactful by our final model with those identified during cross-validation:

final_coefficients = final_report.inspection.coefficients()
final_top_15_features = final_coefficients.frame(select_k=15, format="long")["feature"]

simple_coefficients = simple_cv_report.inspection.coefficients()
cv_top_15_features = (
    simple_coefficients.frame(select_k=15, format="long")
    .groupby("feature", sort=False)
    .mean()
    .drop(columns="split")
    .reset_index()["feature"]
)

pd.concat(
    [final_top_15_features, cv_top_15_features], axis="columns", ignore_index=True
)
0 1
0 Intercept Intercept
1 checking_status_0<=X<200 checking_status_<0
2 checking_status_<0 checking_status_no checking
3 checking_status_no checking credit_history_all paid
4 credit_history_all paid credit_history_critical/other existing credit
5 credit_history_critical/other existing credit credit_history_no credits/all paid
6 credit_history_no credits/all paid purpose_education
7 purpose_education purpose_new car
8 purpose_new car purpose_retraining
9 purpose_retraining purpose_used car
10 purpose_used car credit_amount
11 credit_amount savings_status_>=1000
12 installment_commitment installment_commitment
13 age age
14 foreign_worker_yes foreign_worker_yes


They seem very similar, so we are done!

Tracking our work with a skore Project#

Now that we have completed our modeling workflow, we should store our models in a safe place for future work. Indeed, if this research notebook were modified, we would no longer be able to relate the current production model to the code that generated it.

We can use a skore.Project to keep track of our experiments. This makes it easy to organize, retrieve, and compare models over time.

Usually this would be done as you go, during model development, but in the interest of simplicity we have kept it until the end.

We load or create a local project:

project = skore.Project("german_credit_classification")

We store our reports with descriptive keys:

project.put("simple_linear_model_cv", simple_cv_report)
project.put("advanced_pipeline_cv", advanced_cv_report)
project.put("final_model", final_report)

Now we can retrieve a summary of our stored reports:

summary = project.summarize()
# Uncomment the next line to display the widget in an interactive environment:
# summary

Note

Calling summary in a Jupyter notebook cell will show the following parallel coordinate plot to help you select models that you want to retrieve:

[Screenshot of the parallel coordinate plot widget in a Jupyter notebook]

Each line represents a model, and we can select models by clicking on lines or dragging on metric axes to filter by performance.

In the screenshot, we selected only the cross-validation reports; this allows us to retrieve exactly those reports programmatically.

Supposing you selected “Cross-validation” in the “Report type” tab, calling reports() now returns only the CrossValidationReport objects, which you can directly combine into a ComparisonReport:

new_report = summary.reports(return_as="comparison")
new_report.help()

Stay tuned!

This is only the beginning for skore. We welcome your feedback and ideas to make it the best tool for end-to-end data science.

Key benefits of using skore in your ML workflow:

  • Standardized evaluation and comparison of models

  • Rich visualizations and diagnostics

  • Organized experiment tracking

  • Seamless integration with scikit-learn

Feel free to join our community on Discord or create an issue.
