# configure prediction

The `configure prediction` command configures a prediction deployment, which serves a registered ML model. In the notebook, the command opens a template form that prompts you for inputs such as the prediction deployment name, the registered ML model name, and the model version.

## Syntax

```
configure prediction
```

## Parameters

1. Select **New** from the **Prediction** drop-down list.
2. Specify a prediction deployment name in the **Prediction Name** field.
3. Select a registered model from the **Model Name** drop-down list.
4. Select the model version from the **Model Version** drop-down list.
5. Select the source type from the **Source Type** drop-down list:
   * Select **http** for a REST source.
   * Select **datasource** or **dataset** for a batch source, and then select the name of the data source or dataset from the **Source Name** drop-down list.
6. Select the **Destination Type** for the prediction response:
   * Select **http** for an HTTP response.
   * Select **datasource** to store the results in the location specified by the data-source definition.
7. Select the **Advanced Settings** checkbox to specify the file name of a custom preprocessor Python module.

   <div data-gb-custom-block data-tag="hint" data-style="info" class="hint hint-info"><p>Before starting the prediction job, make sure to install this preprocessor module using the <code>install preprocessor</code> command.</p></div>
8. Click the **Save Configuration** button to save the prediction deployment.
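If you enable **Advanced Settings** (step 7), the custom preprocessor is an ordinary Python module that transforms each incoming payload before it reaches the model. The sketch below illustrates the idea; the `preprocess` function name and its signature are assumptions for illustration only, so consult your platform's preprocessor contract for the exact interface it expects.

```python
# hypothetical_preprocessor.py -- minimal sketch of a custom preprocessor
# module. The function name and signature are assumptions, not the
# platform's documented contract.

def preprocess(payload):
    """Normalize a raw prediction payload before it reaches the model.

    `payload` is assumed to be a dict of raw feature values; the cleaned
    dict returned here is what the model would receive.
    """
    cleaned = {}
    for key, value in payload.items():
        if isinstance(value, str):
            # Strip stray whitespace and coerce numeric strings to floats.
            value = value.strip()
            try:
                value = float(value)
            except ValueError:
                pass
        cleaned[key] = value
    return cleaned
```

Remember that the module must be installed with `install preprocessor` before the prediction job starts.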

## Example

<div align="left"><figure><img src="https://2508510707-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Foy9ZHXC3vsL0Sr36iw78%2Fuploads%2FArteInNes9RdYAECFILG%2Fconfigure_prediction_screen.png?alt=media&#x26;token=a2e0338a-ff22-4b5e-9911-e593d19fc5cd" alt=""><figcaption></figcaption></figure></div>
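When both the source type and destination type are **http**, clients send feature payloads to the deployment's REST endpoint and receive the prediction in the response body. The sketch below shows one way such a request might look; the endpoint URL and the JSON payload and response shapes are hypothetical, so substitute the values reported for your own deployment.

```python
# Sketch of a REST prediction request for an http source/destination.
# The endpoint URL and payload shape are hypothetical examples.
import json
import urllib.request


def predict(endpoint_url, features):
    """POST a JSON feature payload and return the parsed prediction response."""
    body = json.dumps(features).encode("utf-8")
    request = urllib.request.Request(
        endpoint_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))


# Example call (hypothetical endpoint):
# result = predict("http://localhost:8080/predict", {"x": 3})
```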
