configure prediction
The configure prediction command configures a prediction deployment, which deploys a registered ML model for serving. In the notebook, a template form prompts you for the required inputs, such as the prediction deployment name, the registered ML model name, and the model version.
Syntax
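This page does not spell out the invocation itself; based on the description above, a minimal sketch, assuming the command takes no arguments and simply opens the guided form when run in a notebook cell:

    configure prediction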
Parameters
Select New from the Prediction drop-down list.
Specify a prediction deployment name in the Prediction Name field.
Select a registered model from the Model Name drop-down list.
Select the model version from the Model Version drop-down list.
Select the source type from the Source Type drop-down list:
Select http for a REST source.
Select datasource or dataset for a batch source, and then select the name of the data source or dataset from the Source Name drop-down list.
Select the Destination Type for the prediction response:
Select http for an HTTP response.
Select datasource to store the results in the location specified by the data-source definition.
Select the Advanced Settings checkbox to specify the file name of a custom preprocessor Python module. Before starting the prediction job, make sure to install this preprocessor module using the install preprocessor command. A hedged sketch of such a module appears after these steps.
Click the Save Configuration button to save the prediction deployment.
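The required interface for a custom preprocessor module is not described on this page, so the Python sketch below is illustrative only: the file name preprocessor.py, the preprocess() entry point, and the record format are assumptions, not a documented contract. It shows the kind of cleanup such a module might apply to incoming records before they reach the model.

    # preprocessor.py -- hypothetical custom preprocessor module.
    # The module name and the preprocess() entry point are assumptions;
    # check the install preprocessor command's documentation for the
    # interface your deployment actually expects.
    from typing import Any, Dict, List

    def preprocess(records: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Clean raw input records before the deployed model scores them."""
        allowed_columns = {"age", "income", "segment"}  # assumed model features
        cleaned = []
        for record in records:
            # Keep only the columns the model was trained on.
            row = {k: v for k, v in record.items() if k in allowed_columns}
            # Normalize a string field and fill a missing numeric field.
            if isinstance(row.get("segment"), str):
                row["segment"] = row["segment"].strip().lower()
            row["income"] = float(row.get("income") or 0.0)
            cleaned.append(row)
        return cleaned

    if __name__ == "__main__":
        # Quick local check of the preprocessing logic.
        sample = [{"age": 42, "income": "55000", "segment": " Gold ", "unused": 1}]
        print(preprocess(sample))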
Example
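No worked example is given on this page, so the sequence below is an illustration rather than verbatim output. It assumes that install preprocessor accepts the preprocessor file name as an argument (an assumption; check that command for its actual syntax) and that configure prediction is then run on its own to open the form described under Parameters:

    install preprocessor preprocessor.py
    configure prediction

After the form opens, select the registered model, model version, source, and destination as described above, and click Save Configuration to create the prediction deployment.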