summarize

EstimatorReport.metrics.summarize(*, data_source='test', metric=None, metric_kwargs=None, response_method=None)

Report a set of metrics for our estimator.

Parameters:
data_source : {“test”, “train”, “both”}, default=”test”

The data source to use.

  • “test” : use the test set provided when creating the report.

  • “train” : use the train set provided when creating the report.

  • “both” : use both the train and test sets to compute the metrics and present them side-by-side.

metric : str, callable, scorer, list of such instances, or dict of such instances, default=None

The metrics to report. The possible values are:

  • if a string, either one of the built-in metrics or a scikit-learn scorer name. You can list the available strings with report.metrics.help() for the built-in metrics, or sklearn.metrics.get_scorer_names() for the scikit-learn scorers.

  • if a callable, it should take y_true and y_pred as its first two arguments. Additional arguments can be passed as keyword arguments and will be forwarded via metric_kwargs. No favorability indicator can be displayed in this case.

  • if the callable API is too restrictive (e.g. you need to pass the same parameter name with different values to different metrics), you can use scikit-learn scorers created with sklearn.metrics.make_scorer(). In this case, the metric favorability is only displayed if it is given explicitly via make_scorer’s greater_is_better parameter.

  • if a dict, the keys are used as metric names and the values are the metric functions (strings, callables, or scorers as described above).

  • if a list, each element can be any of the above types (strings, callables, scorers).
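To illustrate the accepted forms, here is a minimal sketch. The metric function and the names used for the dict keys are illustrative, not part of skore’s API; only make_scorer and fbeta_score come from scikit-learn:

```python
from sklearn.metrics import make_scorer, fbeta_score

# A plain callable: takes y_true and y_pred as its first two arguments.
def false_negative_count(y_true, y_pred):
    return sum((t == 1) and (p == 0) for t, p in zip(y_true, y_pred))

# A scikit-learn scorer: fixes parameters up front and declares
# favorability via greater_is_better.
f2_scorer = make_scorer(fbeta_score, beta=2, greater_is_better=True)

# The shapes that `metric` accepts (hypothetical values):
as_string = "accuracy"                                   # built-in or scorer name
as_list = ["accuracy", false_negative_count, f2_scorer]  # mix of the above
as_dict = {"FN count": false_negative_count, "F2": f2_scorer}  # keys become names
```

Any of `as_string`, `as_list`, or `as_dict` could then be passed as the `metric` argument.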

metric_kwargs : dict, default=None

The keyword arguments to pass to the metric functions.
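As a sketch of the forwarding behaviour, passing metric_kwargs={"beta": 2} alongside a metric such as fbeta_score should be equivalent to calling the metric directly with that keyword (the equivalence is shown here with plain scikit-learn, outside of summarize):

```python
from sklearn.metrics import fbeta_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# metric_kwargs={"beta": 2} would forward beta=2 to the metric,
# the same as calling it directly with that keyword:
score = fbeta_score(y_true, y_pred, beta=2)
```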

response_method : {“predict”, “predict_proba”, “predict_log_proba”, “decision_function”} or list of such str, default=None

The estimator’s method to be invoked to get the predictions. Only necessary for custom metrics.
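For example, a custom metric that consumes probabilities rather than hard labels would presumably be passed together with response_method="predict_proba". The function below is illustrative, not part of skore; the exact shape of the probabilities handed to the callable (positive-class column vs. full two-column array) depends on skore’s internals:

```python
import numpy as np

def mean_positive_proba(y_true, y_proba):
    """Average predicted probability assigned to the true positives."""
    y_true = np.asarray(y_true)
    y_proba = np.asarray(y_proba)
    return float(y_proba[y_true == 1].mean())

y_true = [0, 1, 1, 0]
y_proba = [0.2, 0.9, 0.7, 0.1]  # P(class == 1) per sample
```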

Returns:
MetricsSummaryDisplay

A display containing the statistics for the metrics.

Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from skore import train_test_split
>>> from skore import EstimatorReport
>>> X, y = load_breast_cancer(return_X_y=True)
>>> split_data = train_test_split(X=X, y=y, random_state=0, as_dict=True)
...
>>> classifier = LogisticRegression(max_iter=10_000)
>>> report = EstimatorReport(classifier, **split_data, pos_label=1)
>>> report.metrics.summarize().frame(favorability=True)
            LogisticRegression Favorability
Metric
Accuracy               0.95...         (↗︎)
Precision              0.98...         (↗︎)
Recall                 0.93...         (↗︎)
ROC AUC                0.99...         (↗︎)
Brier score            0.03...         (↘︎)
>>> # Using scikit-learn metrics
>>> report.metrics.summarize(
...     metric=["f1"],
... ).frame(favorability=True)
                          LogisticRegression Favorability
Metric   Label / Average
F1 Score               1             0.96...          (↗︎)
>>> report.metrics.summarize(
...    data_source="both"
... ).frame(favorability=True).drop(["Fit time (s)", "Predict time (s)"])
             LogisticRegression (train)  LogisticRegression (test)  Favorability
Metric
Accuracy                        0.96...                     0.95...          (↗︎)
Precision                       0.96...                     0.98...          (↗︎)
Recall                          0.97...                     0.93...          (↗︎)
ROC AUC                         0.99...                     0.99...          (↗︎)
Brier score                     0.02...                     0.03...          (↘︎)