Serving an ML Model
You can deploy a trained ML model to handle prediction requests.
To serve an ML model, follow these steps:
1. Log in to the Aizen Jupyter console. See Using the Aizen Jupyter Console.
2. Set the current working project.
3. Configure a prediction deployment using the configure prediction command. In the notebook, you will be guided through a template form with boxes and drop-down lists that you complete to configure the deployment. You must specify the prediction deployment name and select the ML model name and registered version.
4. Serve the model using the start prediction command. This command schedules a job to deploy the model. Optionally, you can configure resources for the job by running the configure resource command; if you do not configure resources, default resource settings are applied.
5. Check the status of the prediction deployment job and obtain the prediction URL.
Your application can send prediction REST requests to the URL displayed in the status output; a minimal client-side sketch follows these steps. Additionally, you can use the test prediction command to automatically send a few prediction REST requests to the ML model being served.
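For reference, the sketch below shows one way an application might send a prediction REST request using Python's requests library. It assumes the deployment exposes a JSON-over-HTTP endpoint; the URL, feature names, and response handling are placeholders rather than Aizen-specific API details, so substitute the prediction URL from the status output and your model's actual input schema.

```python
# Minimal sketch of a client-side prediction request (assumed JSON POST endpoint).
import requests

# Placeholder: replace with the prediction URL shown in the job status output.
PREDICTION_URL = "https://aizen.example.com/predictions/my-deployment"

# Placeholder payload: the fields depend on the input schema of your ML model.
payload = {"feature_1": 3.7, "feature_2": "blue"}

# Send the prediction request and raise an error on a non-2xx response.
response = requests.post(PREDICTION_URL, json=payload, timeout=30)
response.raise_for_status()

# Print the prediction returned by the serving endpoint.
print(response.json())
```

In practice, the request payload should carry the same feature fields the model was trained on; the served model returns its prediction in the response body.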