Managing Machine Learning Lifecycles with MLflow
Written by Tigran Avetisyan
This is PART 2 of our 3 PART series on the machine learning lifecycle platform MLflow.
See PART 1 here.
In PART 1, we had a look at:
- MLflow Tracking — a toolset for tracking and recording machine learning experiments.
- MLflow Projects — a toolset for packaging data science code in a reproducible format.
In this guide, we are going to have a look at MLflow Models. With Models, we can package machine learning/deep learning models for deployment in a wide array of environments.
We’ll look at:
- How we can package our models with MLflow.
- How we can load and deploy our packaged models in MLflow.
Note that this guide assumes that you’ve read PART 1. So make sure to check out the first article in the series before reading on!
What are MLflow Models?
MLflow Models are a set of instruments that allow you to package machine learning and deep learning models into a standard format that can be reproduced in various environments.
MLflow Models relies on the concept of “flavors.” A flavor describes how a packaged model should be run in the target environment. There are three main types of model flavors in MLflow:
- Python function. Models saved as a generic Python function can be run in any MLflow environment regardless of which ML/DL package they were built with.
- R function. An R function is the equivalent of a Python function, but in the R programming language.
- Framework-specific flavors. MLflow can also save models in a flavor that corresponds to the framework they were built in. As an example, a scikit-learn model could be loaded in its native format to enable users to leverage the functionality of scikit-learn.
As of MLflow 1.21.0, built-in flavors include scikit-learn, TensorFlow, Keras, PyTorch, Spark MLlib, XGBoost, LightGBM, ONNX, and more; see the MLflow documentation for the full list.
You can also define custom models and flavors, but this is beyond the scope of this guide.
The contents of a packaged model will look similar to this:
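For a scikit-learn model saved to a directory named `model`, a rough sketch of the layout (based on the components listed below):

```
model/
├── MLmodel
├── conda.yaml
├── input_example.json
├── model.pkl
└── requirements.txt
```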
The components in this particular case are as follows:
- `conda.yaml` — a file that lists the conda dependencies used by the model.
- `input_example.json` — a file that contains an example input for the model to show the expected signature of input data.
- `MLmodel` — a file that describes the saved flavors, the path of the model, the model's input and output signatures, and more.
- `model.pkl` — the pickled model.
- `requirements.txt` — a file that specifies the pip dependencies of the model. By default, these are inferred from the environment where the model is packaged.
With all that in mind, let’s create a packaged model just like the one above!
Prerequisites for MLflow Models
The prerequisites for MLflow Models are the same as with Tracking and Projects:
- The MLflow Python package.
- The packages of the ML/DL frameworks you want to use.
- The conda package manager — either from Miniconda or standard Anaconda.
If you’ve already tried Tracking and Projects, then you are set for the rest of this guide!
Just like in PART 1, we are going to use scikit-learn to demonstrate model packaging with MLflow. With some adaptations, you can use the code below with other supported ML/DL frameworks as well.
Saving Models with MLflow Models
This guide is going to focus on two functionalities of MLflow Models:
- Saving trained models in a local environment.
- Deploying trained models in a local environment.
Let’s have a look at saving models with MLflow first!
Importing dependencies
To get started, let’s import dependencies:
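Here's a minimal set of imports that covers everything we use below (your exact list may differ):

```python
import pandas as pd

# scikit-learn tools for data loading, splitting, and training
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# MLflow tools for packaging, signatures, and loading models
import mlflow
import mlflow.pyfunc
import mlflow.sklearn
from mlflow.models.signature import ModelSignature, infer_signature
from mlflow.types.schema import ColSpec, Schema
```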
We’re importing a number of scikit-learn tools to help us with training, as well as a few functions from MLflow to help us package the model.
Fitting a scikit-learn estimator
Now, we need to train a scikit-learn estimator. This is fairly straightforward — if you are familiar with scikit-learn, the code below will be clear to you:
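A sketch of that training code (the split ratio, random_state, and max_iter values here are illustrative choices):

```python
# Load the breast cancer dataset
data = load_breast_cancer()
X, y = data.data, data.target

# Hold out a test set so we have unseen data for inference later
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a logistic regression estimator (raise max_iter so it converges)
model = LogisticRegression(max_iter=10000)
model.fit(X_train, y_train)
```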
We are using scikit-learn's breast cancer dataset to fit a `LogisticRegression` estimator. We are splitting the data into train and test sets because we need some data after training to show how MLflow can be used to deploy models.
Providing an input signature and input examples
Once our model is trained, we can package it straight away. However, to help others get started with the packaged model, we should also supply the model's expected input signature along with input examples. Additionally, we can use input signatures to enforce the input format for models loaded as Python functions.
Let’s first craft the input signature for our model. There are two ways to do this:
- Using internal MLflow tools to infer the signature automatically.
- Specifying the input signature manually.
Inferring the input signature automatically
To automatically infer the input signature, we can use the function `mlflow.models.infer_signature()`. The parameters for this function are as follows:
- `model_input` — the expected input to the model. You can use your training dataset as the model input. Supported argument types are `pandas.DataFrame`, `numpy.ndarray`, dictionaries of signature `{name: numpy.ndarray}`, and `pyspark.sql.DataFrame`.
- `model_output` — the expected output of the model. You should obtain predictions from your model to use them as sample outputs.
We are going to be using a `pandas.DataFrame` as `model_input`. The main benefit of a `pandas.DataFrame` is that it can contain the column names of the input features, making it clearer for future users what the inputs to our model are supposed to be.

Here's what `infer_signature()` looks like in action with a `pandas.DataFrame`:
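A sketch, reusing the feature names from the dataset object we loaded earlier:

```python
# Wrap the training features in a DataFrame so column names are kept
X_train_df = pd.DataFrame(X_train, columns=data.feature_names)

# Infer the signature from sample inputs and the model's outputs
signature = infer_signature(
    model_input=X_train_df,
    model_output=model.predict(X_train_df),
)
```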
Here, we:
- Feed `X_train` to the `pandas.DataFrame` object `X_train_df`.
- Pass `X_train_df` along with model predictions to `infer_signature()`.
`infer_signature()` returns an MLflow `ModelSignature` object, which we assign to the variable `signature`.

Let's now inspect `signature`:
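A plain print is enough here:

```python
print(signature)
```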
```
inputs:
  ['mean radius': double, 'mean texture': double, 'mean perimeter': double, 'mean area': double, 'mean smoothness': double, 'mean compactness': double, 'mean concavity': double, 'mean concave points': double, 'mean symmetry': double, 'mean fractal dimension': double, 'radius error': double, 'texture error': double, 'perimeter error': double, 'area error': double, 'smoothness error': double, 'compactness error': double, 'concavity error': double, 'concave points error': double, 'symmetry error': double, 'fractal dimension error': double, 'worst radius': double, 'worst texture': double, 'worst perimeter': double, 'worst area': double, 'worst smoothness': double, 'worst compactness': double, 'worst concavity': double, 'worst concave points': double, 'worst symmetry': double, 'worst fractal dimension': double]
outputs:
  [Tensor('int32', (-1,))]
```
We can see that our inputs were inferred to be of type `double`, while the output was inferred to be a `Tensor` of type `int32`.
Specifying the Input Signature Manually
If you need more control over the input signature, you can specify it manually. To do this, you need these two MLflow classes:
- `ColSpec` — a class that specifies the data type and name of a column in a dataset. An alternative to `ColSpec` is `TensorSpec`, which is used to represent `Tensor` datasets.
- `Schema` — a class that represents a dataset composed of `ColSpec` or `TensorSpec` objects.
We need to create separate `Schema` objects to represent our inputs and outputs. Here's how you would do this with the Iris dataset:
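A sketch of what that could look like (the column names are the standard scikit-learn Iris feature names):

```python
# Input schema: one ColSpec per Iris feature column
input_schema = Schema([
    ColSpec("double", "sepal length (cm)"),
    ColSpec("double", "sepal width (cm)"),
    ColSpec("double", "petal length (cm)"),
    ColSpec("double", "petal width (cm)"),
])

# Output schema: a single unnamed integer column for the class label
output_schema = Schema([ColSpec("long")])
```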
Here, we can see that `Schema` objects are constructed from lists of `ColSpec` objects. Each `ColSpec` holds the datatype and the name of a column in the dataset. You don't need to specify column names, but they can be beneficial if you want to make your input signature clearer and more understandable.
Now, let's create schemas for the breast cancer dataset. Because this dataset has many feature columns, we won't be creating `ColSpec` objects for them by hand — we'll use a list comprehension instead:
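A sketch, again reusing the feature names from the dataset object:

```python
# One ColSpec per breast cancer feature, built with a list comprehension
input_schema = Schema(
    [ColSpec("double", str(name)) for name in data.feature_names]
)

# The model outputs integer class labels
output_schema = Schema([ColSpec("long")])
```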
The contents of our schemas are as follows:
```
['mean radius': double, 'mean texture': double, 'mean perimeter': double, 'mean area': double, 'mean smoothness': double, 'mean compactness': double, 'mean concavity': double, 'mean concave points': double, 'mean symmetry': double, 'mean fractal dimension': double, 'radius error': double, 'texture error': double, 'perimeter error': double, 'area error': double, 'smoothness error': double, 'compactness error': double, 'concavity error': double, 'concave points error': double, 'symmetry error': double, 'fractal dimension error': double, 'worst radius': double, 'worst texture': double, 'worst perimeter': double, 'worst area': double, 'worst smoothness': double, 'worst compactness': double, 'worst concavity': double, 'worst concave points': double, 'worst symmetry': double, 'worst fractal dimension': double]
[long]
```
Finally, to create a signature, we create a `ModelSignature` object, passing our schemas to it:
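For example:

```python
signature = ModelSignature(inputs=input_schema, outputs=output_schema)
```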
Providing input examples
We can optionally provide an input example that matches our model signature. Input examples aren’t necessary, but they should ideally be included to help users figure out the correct format for inputs.
You can provide input examples as a `pandas.DataFrame`, a `numpy.ndarray`, a `dict`, or a `list`.
In our case, let's again use `X_train_df`. This will make sure that the input example matches our input signature.
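A sketch (the variable name `input_example` is our own):

```python
# Use the first two rows of the training DataFrame as an input example
input_example = X_train_df.iloc[:2]
```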
In the code block above, we pick the first two data rows as an input example. You can supply one or more samples as input examples if you want.
Specifying conda and pip dependencies
If necessary, you can also specify conda and pip dependencies to help future users reproduce your project:
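A sketch of what these can look like (the package lists and versions here are illustrative):

```python
# A dictionary describing the target conda environment
conda_env = {
    "name": "mlflow-env",
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.9",
        "pip",
        {"pip": ["mlflow", "scikit-learn", "pandas"]},
    ],
}

# Alternatively, pip dependencies as a plain list
pip_requirements = ["scikit-learn", "pandas"]
```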
Here:
- `conda_env` is a dictionary specifying the target conda environment.
- `pip_requirements` is a list that contains the names of pip dependencies.
For either of these, you can alternatively provide:
- The string path to a conda YAML environment file (for `conda_env`) or a requirements text file (for `pip_requirements`).
- Nothing, in which case MLflow will use `mlflow.models.infer_pip_requirements()` to infer environment dependencies.
You can learn more about the usage of `mlflow.models.infer_pip_requirements()` in the MLflow documentation.
Saving the model
It’s finally time to put everything together and package our scikit-learn model!
For scikit-learn, you can use either `mlflow.sklearn.log_model()` or `mlflow.sklearn.save_model()` to package your model. Here's how these two functions differ:
- `log_model()` logs the model under the current run as an artifact.
- `save_model()` saves the model in the specified directory relative to your project's path.
The usage of these two functions is nearly identical, but there are a few points you should keep in mind.
Saving a Model to a Local Path
Here's how to use `save_model()`:
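A sketch (note that `conda_env` and `pip_requirements` are mutually exclusive, so pass only one of them):

```python
mlflow.sklearn.save_model(
    sk_model=model,               # the fitted estimator
    path="model",                 # target directory, relative to the working directory
    signature=signature,          # the signature we built earlier
    input_example=input_example,  # the two-row example from above
    conda_env=conda_env,          # optional environment specification
)
```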
After you run this statement, the model together with the environment files and the input example will be saved in the `model` directory under the current working directory.
Logging the Model as an Artifact
As for `log_model()`, here's how you can use it:
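A sketch along the same lines:

```python
with mlflow.start_run() as run:
    mlflow.sklearn.log_model(
        sk_model=model,
        artifact_path="model",        # where to log the model under the run
        signature=signature,
        input_example=input_example,
        conda_env=conda_env,
    )
    # Keep the run ID so we can locate the logged model later
    run_id = run.info.run_id
```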
Here, we are starting a run with `mlflow.start_run()` because we need to obtain the ID of the run to more easily load the logged model later.
Loading our saved models
We established earlier that MLflow saves scikit-learn models in two formats, or flavors:
- As a Python function.
- As a native scikit-learn model.
Depending on the target environment, you can use either of these flavors to deploy your model.
Before loading our models, let’s specify the paths to them:
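For example (the variable names are our own):

```python
# Path of the model saved with save_model(), relative to the working directory
saved_model_path = "model"

# URI of the model logged with log_model(), built from the run ID we stored
logged_model_path = f"runs:/{run_id}/model"
```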
You can use either of these paths to load the model as a Python function or a standard scikit-learn model:
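A sketch using the locally saved model (the logged-model URI works the same way):

```python
# Load the model as a generic Python function (PyFuncModel)
pyfunc_model = mlflow.pyfunc.load_model(saved_model_path)

# Load the model in its native scikit-learn flavor
sklearn_model = mlflow.sklearn.load_model(saved_model_path)
```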
A few things to keep in mind here:
- To load a model that has been saved with `mlflow.sklearn.log_model()`, you can prefix the directory with `runs:/`. Next, specify the ID of the run under which the model was saved (`{run_id}` in our case) and then specify the path you passed to `artifact_path` earlier (`model`). When you prefix the path with `runs:/`, MLflow will look for the model in the `artifacts` path of the indicated run.
- To load a model that has been saved with `mlflow.sklearn.save_model()`, specify the model path relative to the directory that you are running your Python script in.
- There are several ways to specify the location of the model — check the API reference of `mlflow.pyfunc.load_model()` or `mlflow.sklearn.load_model()` for more guidance.
Doing Inference with the Loaded Models
To do inference with the loaded models, you just need to call their `predict()` methods, passing some data:
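A sketch of both calls (the variable names mirror the ones discussed below):

```python
# The native scikit-learn model accepts a plain NumPy array
sklearn_predictions = sklearn_model.predict(X_test)

# The pyfunc model enforces the signature, so wrap the features in a
# DataFrame with the expected column names
test_dataframe = pd.DataFrame(X_test, columns=data.feature_names)
pyfunc_predictions = pyfunc_model.predict(test_dataframe)
```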
We used our test features here. Note that because MLflow enforces input signatures when using `PyFuncModel`, we need to convert our `numpy.ndarray` data into a `pandas.DataFrame`.

`sklearn_predictions` and `pyfunc_predictions` have the same contents:
```
[0 0 1 0 0 0 1 0 1 0 1 0 0 1 1 1 1 1 0 1 1 1 1 1 0 0 1 0 0 1 1 1 0 1 1 0 1
 1 0 1 0 1 1 1 1 1 0 1 0 0 1 1 0 0 1 1 0 0 0 1 1 1 1 0 1 1 1 0 0 1 1 1 1 0
 0 1 1 0 0 1 0 1 1 1 0 0 1 0 0 0 0 0 1 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1 1
 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 1 1 1 1 1 0 0]
True
```
With many scikit-learn models (including our `sklearn_model`), you can alternatively use `predict_proba` to obtain class probabilities. `pyfunc_model` does not have a `predict_proba` method.
Serving Models with MLflow Models
You can deploy models saved with MLflow in a variety of environments, ranging from a local machine to cloud platforms like Amazon SageMaker or Microsoft Azure.
Below, we will do a local deployment of our saved model. We will be using the command line interface for deployment. For other deployment options, be sure to check out MLflow docs.
Deploying MLflow Models
To serve your model through the command line, first navigate to the directory above it. For example, if your model is saved under `mlruns/your_experiment_id/your_run_id/artifacts/model`, go to `mlruns/your_experiment_id/your_run_id/artifacts/`.
Open your terminal in this location and run the following:
```
mlflow models serve -m model
```
`mlflow models serve` is the command that serves models. `-m model` specifies the path to the model that we want to serve.
You can pass other arguments to `mlflow models serve`, like `--port` or `--host`, to specify the address the server runs on. Read more about this command in the MLflow documentation.
By default, MLflow serves models on a local REST API server at http://127.0.0.1:5000. The server accepts data as POST requests to http://127.0.0.1:5000/invocations. The model is served as a Python function, meaning that our inputs need to follow the input signature that we specified earlier.
Doing Inference with a Served Model
Now that our model is live, we can do inference with it. We can do this either programmatically (from a Python script) or through the command line. Let’s have a look at both methods.
Programmatic inference with MLflow Models
To do inference programmatically, we need to first define our endpoint URL and the query function:
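A sketch of that setup, using the requests library (the `endpoint_url` name is our own):

```python
import requests

# Default local endpoint of the MLflow model server
endpoint_url = "http://127.0.0.1:5000/invocations"

def query(data, headers):
    # POST the serialized payload to the model server and decode the reply
    response = requests.post(url=endpoint_url, data=data, headers=headers)
    return response.json()
```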
We are passing the request header `headers` as a parameter to the function because we will need an easy way to change the header later.
Now, you can use the function `query` to do inference. MLflow servers accept data in the following four formats:
- JSON-serialized pandas DataFrames in the split orientation.
- JSON-serialized pandas DataFrames in the records orientation.
- CSV-serialized pandas DataFrames.
- Tensor input formatted as described in TensorFlow Serving's API documentation.

Only the first three data types are relevant to our scikit-learn model. Let's see how MLflow works with each of these data types in action!
First, let's serialize our `test_dataframe` as a JSON string. You can do this and select an orient for the data like so:
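For example, using the split orientation:

```python
# Serialize the test DataFrame as a JSON string in the "split" orientation
input_json = test_dataframe.to_json(orient="split")
```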
Next, let’s send a query to our model and inspect the response:
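A sketch; with MLflow 1.x, the plain application/json content type is interpreted as the split orientation by default:

```python
headers = {"Content-Type": "application/json"}
json_predictions = query(data=input_json, headers=headers)
print(json_predictions)
```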
[0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0]
To do inference with a CSV-serialized DataFrame, you would need to pass `{"Content-Type": "text/csv"}` as the header in your POST request. With our implementation of the function `query`, here's how this is done:
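A sketch, reusing `test_dataframe` from before:

```python
# Serialize the test DataFrame as CSV and switch the content type
input_csv = test_dataframe.to_csv(index=False)
csv_predictions = query(data=input_csv, headers={"Content-Type": "text/csv"})
print(csv_predictions)
```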
[0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0]
Command Line Inference with MLflow Models
You may also use the command line interface to do predictions. A POST request through the command line can look something like this:
```
curl http://localhost:5000/invocations \
  -H 'Content-Type: application/json' \
  -d '{
    "columns": ["mean radius", "mean texture", "mean perimeter", "mean area", "mean smoothness", "mean compactness", "mean concavity", "mean concave points", "mean symmetry", "mean fractal dimension", "radius error", "texture error", "perimeter error", "area error", "smoothness error", "compactness error", "concavity error", "concave points error", "symmetry error", "fractal dimension error", "worst radius", "worst texture", "worst perimeter", "worst area", "worst smoothness", "worst compactness", "worst concavity", "worst concave points", "worst symmetry", "worst fractal dimension"],
    "data": [[17.99, 10.38, 122.8, 1001.0, 0.1184, 0.2776, 0.3001, 0.1471, 0.2419, 0.07871, 1.095, 0.9053, 8.589, 153.4, 0.006399, 0.04904, 0.05373, 0.01587, 0.03003, 0.006193, 25.38, 17.33, 184.6, 2019.0, 0.1622, 0.6656, 0.7119, 0.2654, 0.4601, 0.1189]]
  }'
```
Next Steps
That's it for our introduction to MLflow Models! You know the basics now, but there are many other aspects of Models that we haven't explored — make sure to check out the MLflow docs if you are interested.
In PART 3 of this series, we are going to put together the concepts we explored in PART 1 and PART 2 to build a simple React application!
Our app will be able to:
- Track experiments. The app will accept values for a few hyperparameters, track experiments for you under the hood, and allow you to view results in the MLflow user interface.
- Deploy models and do inference. If desired, you will be able to use the app to deploy trained models and do inference with them.
Stay Tuned!