rmse#

CrossValidationReport.metrics.rmse(*, data_source='test', multioutput='raw_values', aggregate=('mean', 'std'), flat_index=False)[source]#

Compute the root mean squared error.

Parameters:
data_source{“test”, “train”}, default=”test”

The data source to use.

  • “test” : use the test set provided when creating the report.

  • “train” : use the train set provided when creating the report.

multioutput{“raw_values”, “uniform_average”} or array-like of shape (n_outputs,), default=”raw_values”

Defines how multiple output values are aggregated. An array-like value defines weights used to average the errors. The other possible values are:

  • “raw_values”: Returns a full set of errors in case of multioutput input.

  • “uniform_average”: Errors of all outputs are averaged with uniform weight.

By default, no averaging is done.
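The `multioutput` semantics can be illustrated with a plain NumPy sketch, independent of skore (the arrays below are made-up values, not from the report):

```python
import numpy as np

y_true = np.array([[0.5, 1.0], [-1.0, 1.0], [7.0, -6.0]])
y_pred = np.array([[0.0, 2.0], [-1.0, 2.0], [8.0, -5.0]])

# Per-output RMSE, as reported with multioutput="raw_values" (the default):
# one error per target column, no averaging.
raw = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))

# "uniform_average": the per-output errors averaged with equal weight.
uniform = raw.mean()

# Array-like of shape (n_outputs,): weights used to average the errors.
weighted = np.average(raw, weights=[0.3, 0.7])
```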

aggregate{“mean”, “std”}, list of such str or None, default=(“mean”, “std”)

Function to aggregate the scores across the cross-validation splits. None will return the scores for each split.
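The cross-split aggregation can be sketched as follows (hypothetical per-split scores; a pandas-style sample standard deviation with `ddof=1` is assumed here, since the result is a DataFrame):

```python
import numpy as np

# Hypothetical RMSE scores from a 5-split cross-validation.
split_scores = np.array([61.2, 59.8, 60.5, 62.1, 60.0])

# aggregate=("mean", "std") reports both statistics across the splits.
agg = {"mean": split_scores.mean(), "std": split_scores.std(ddof=1)}

# aggregate=None would instead report the five per-split scores unchanged.
```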

flat_indexbool, default=False

Whether to return a flat index or a multi-index for the columns of the returned DataFrame.
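The difference between the two index styles can be sketched with pandas alone (the column labels and values below are illustrative, and the exact label format produced by skore may differ):

```python
import pandas as pd

# A multi-index over (estimator, statistic), as with flat_index=False.
columns = pd.MultiIndex.from_tuples([("Ridge", "mean"), ("Ridge", "std")])
df = pd.DataFrame([[60.7, 1.0]], index=["RMSE"], columns=columns)

# flat_index=True corresponds to joining the levels into single labels.
flat = df.copy()
flat.columns = ["_".join(col) for col in df.columns]
# flat.columns -> ['Ridge_mean', 'Ridge_std'] under this sketch.
```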

Returns:
pd.DataFrame

The root mean squared error.

Examples

>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from skore import CrossValidationReport
>>> X, y = load_diabetes(return_X_y=True)
>>> regressor = Ridge()
>>> report = CrossValidationReport(regressor, X=X, y=y, splitter=2)
>>> report.metrics.rmse()
            Ridge
            mean       std
Metric
RMSE    60.7...  1.0...