Serving an ML Model
You can deploy a trained ML model to handle prediction requests.
To serve an ML model, follow these steps:
1. Log in to the Aizen Jupyter console. See Using the Aizen Jupyter Console.
2. Set the current working project:
   set project <project name>
3. Register the trained ML model that you want to deploy:
   list trained-models <ML model name>
   register model <ML model name>,<run id>,PRODUCTION
   list registered-models
   See Training Commands.
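   For example, assuming a hypothetical trained model named churn_model whose run id 42 appears in the list trained-models output (both values are placeholders, not values from this guide), the registration sequence might look like:
   list trained-models churn_model
   register model churn_model,42,PRODUCTION
   list registered-models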
4. Configure a prediction deployment using the configure prediction command:
   configure prediction
   In the notebook, you will be guided through a template form with boxes and drop-down lists that you can complete to configure the deployment. You must specify the prediction deployment name and select the ML model name and registered version.
5. Serve the model using the start prediction command. This command will schedule a job to deploy the model. Optionally, you may configure resources for the job by running the configure resource command; if you do not configure resources, default resource settings will be applied.
   configure resource
   start prediction <prediction deployment name>
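   For example, with a hypothetical deployment named churn_predict (a placeholder, not a value from this guide), the step might look like:
   configure resource
   start prediction churn_predict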
6. Check the status of the prediction deployment job and obtain the prediction URL:
   status prediction <prediction deployment name>
7. Your application can send prediction REST requests to the URL displayed in the status output. Additionally, you can use the test prediction command to automatically send a few prediction REST requests to the ML model being served:
   test prediction <prediction deployment name>
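   Below is a minimal Python sketch of how an application might call the prediction URL. The URL, the request payload schema, and the absence of authentication headers are all assumptions for illustration; adapt them to the URL reported by status prediction and to the features your model expects.

   import requests

   # Hypothetical prediction URL copied from the "status prediction" output.
   prediction_url = "https://aizen.example.com/predict/churn_predict"

   # Hypothetical feature payload; the real request schema depends on your model
   # and on how the prediction service expects records to be formatted.
   payload = {"records": [{"age": 42, "plan": "premium", "monthly_usage_gb": 31.5}]}

   # Send the prediction REST request and print the service's response.
   response = requests.post(prediction_url, json=payload, timeout=30)
   response.raise_for_status()
   print(response.json())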