Serving an ML Model

You can deploy a trained ML model to handle prediction requests.

To serve an ML model, follow these steps:

  1. Log in to the Aizen Jupyter console. See Using the Aizen Jupyter Console.

  2. Set the current working project:

    set project <project name>
  3. Register the trained ML model that you want to deploy. List the trained models to find the run ID of the model you want to register, register it, and then verify that it appears among the registered models:

    list trained-models <ML model name>
    register model <ML model name>,<run id>,PRODUCTION
    list registered-models

    See Training Commands.

  4. Configure a prediction deployment using the configure prediction command.

    configure prediction
  5. In the notebook, you are guided through a template form with boxes and drop-down lists that you complete to configure the deployment. You must specify the prediction deployment name and select the ML model name and registered version.

  6. Serve the model using the start prediction command, which schedules a job to deploy the model. Optionally, run the configure resource command first to configure resources for the job; if you do not, default resource settings are applied.

    configure resource
    start prediction <prediction deployment name>
  7. Check the status of the prediction deployment job and obtain the prediction URL:

    status prediction <prediction deployment name>
  8. Your application can send prediction REST requests to the URL displayed in the status output; a minimal client sketch is shown after this list. You can also use the test prediction command to automatically send a few sample prediction REST requests to the deployed model.

    test prediction <prediction deployment name>
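
The request format that a deployed model expects depends on its input schema, so the following Python sketch is only illustrative: the URL, feature names, and payload layout are placeholders, not part of the Aizen API. It assumes the prediction URL reported by status prediction accepts a JSON body over HTTP POST.

    import requests

    # Hypothetical prediction URL taken from the `status prediction` output.
    prediction_url = "https://aizen.example.com/predict/my-deployment"

    # Hypothetical input record; the field names and types must match the
    # feature schema of the ML model that was registered and deployed.
    payload = {"feature_1": 3.2, "feature_2": "blue", "feature_3": 17}

    # Send one prediction REST request and print the model's response.
    response = requests.post(prediction_url, json=payload, timeout=30)
    response.raise_for_status()
    print(response.json())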
