Explainable AI Framework Comparison

Part 3: Using LIME in a React Application

Kedion
Feb 25, 2022

Written by Tigran Avetisyan

This is PART 3 of our 3 PART series “Explainable AI Framework Comparison.”

See PART 1 here and PART 2 here.

In PART 2, we explored the capabilities of LIME for image classification. We found that while LIME was more difficult to use than SHAP, it had better support for TensorFlow 2 and offered more flexibility.

In PART 3, we are going to use LIME in a React.js application, similar to our Hugging Face, TPOT, and MLflow applications. We’ll show you how you can build a Python API to explain image classification models and how to integrate that API into a React app.

Let’s start our guide with a brief overview of our application!

Overview of Our Application

Our React.js application aims to abstract away the coding steps you need to do to generate explanations with the LIME framework. The app allows you to:

· Explain the output of TensorFlow image classification models.

· Upload your own images to explain them.

· Adjust the parameters of the three segmentation algorithms that LIME supports to get cleaner, more convincing explanations.

· Explain images with respect to your model’s predictions or with respect to the specific labels that you indicate.

· View positive/negative areas in the images and adjust most of the explanation parameters of LIME.

The application has some limitations, and it doesn’t implement all of the capabilities of LIME. We’ll cover the limitations of our app in more detail below.

Prerequisites for the Application

To build and run our application, you will need three things:

· Node.js, which you can download from here.

· A number of Python packages for the API.

· Docker Desktop.

If you are on Windows, we also recommend Git. Git will allow you to run shell scripts and handle GitHub repositories.

The Python packages that you will need are as follows:

· FastAPI.

· Uvicorn.

· Python-multipart.

· Pydantic.

· Aiofiles.

· Matplotlib.

· TensorFlow.

· NumPy.

· LIME.

If you are using the pip package manager, run the command pip install [package-name]. If you are using conda, install the packages using conda install -c conda-forge [package-name].

However, for LIME, you might need to clone the package’s GitHub repository to your local machine and install it from there. When we installed LIME with pip install, some functionality that our API needs was missing from the installed source code, even though the source code on GitHub clearly contained it.

To install LIME from a local directory, do the following:

1. Clone LIME’s repository to your local machine by running git clone https://github.com/marcotcr/lime.git in the directory that you want to store the package in. If you are on Windows, you can use Git or GitHub Desktop to clone the repo.

2. Navigate to the directory above the cloned repository.

3. Open the terminal and run the following command — pip install -e lime, where lime is the local path to the repository files.

Note that unlike our previous app guides, we are using FastAPI instead of Flask to create a Python API. FastAPI makes argument parsing much neater and easier than Flask does, which was one of the reasons we decided to use another framework.

Building the API

After you handle the dependencies, you can start building the API!

Importing dependencies

As usual, we start by importing dependencies:
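The original import cell isn’t reproduced here, but a minimal set of imports for an API along these lines might look roughly as follows (the exact list depends on your implementation):

from typing import List, Optional

import aiofiles
import numpy as np
import tensorflow as tf

import matplotlib
matplotlib.use("Agg")  # assumption: render plots without a display
import matplotlib.pyplot as plt

from fastapi import FastAPI, File, UploadFile
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel

from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm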

Apart from the tools that we need to explain images, we are also importing a few FastAPI classes to help us build the API and data models for HTTP requests.

Building request objects

We will need to send the following types of data to our API:

· Request bodies.

· Form data.

In this section, we’ll have a quick look at request bodies and will show how we used them in our API. As for form data, we’ll take a look at it in action when we start defining our API endpoints.

We will need two request bodies — one will handle the arguments for explanation generation, while the other will handle the arguments of the segmentation algorithm that we choose.

Request bodies in FastAPI rely on Pydantic — a Python library that allows you to do data validation and settings management. To create a request body, you inherit a class from Pydantic’s BaseModel class. Called a data model, this class will list the arguments that our API requires.

When FastAPI receives a request, it will:

1. Read the body of the request as JSON.

2. Convert the arguments to the corresponding data types.

3. Validate the data to ensure that it corresponds to the signature defined in the data model.

4. Return the data if it passes validation.

FastAPI also generates JSON Schema definitions of the request bodies, which helps it generate automatic documentation (more about this later).

For our explanation arguments, the data model declaration looks like this:
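The original declaration isn’t shown here; based on the argument descriptions below, a sketch of the data model might look like this (the field types and defaults are assumptions):

class Explanation(BaseModel):
    image_indices: str = "0"                  # e.g. "0, 1, 3"
    top_labels: int = 5                       # how many top predictions LIME stores
    top_predictions: int = 3                  # how many stored predictions to explain
    labels_to_explain: Optional[str] = None   # e.g. "0, 3, 8"; None to explain predictions
    num_samples: int = 1000                   # size of the neighborhood LIME generates
    positive_only: bool = True
    negative_only: bool = False
    hide_rest: bool = False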

The meanings of these arguments are as follows:

· image_indices determines which of the images that you upload the app will generate explanations for. If you type 0, 1, 3, you will get explanations for the images under the indices 0, 1, and 3.

· top_labels determines the number of predictions for the uploaded images that LIME will store. If you type 3, LIME will store the top 3 predictions with the highest probabilities. top_labels should be equal to or less than the total number of classes that your model was trained to predict. top_labels is ignored when labels_to_explain is NOT None.

· top_predictions determines the number of predictions from the stored top_labels that you want to explain. If you type 3, LIME will generate explanations for the top 3 predictions. top_predictions should be equal to or less than top_labels. top_predictions is ignored when labels_to_explain is NOT None.

· labels_to_explain allows you to select specific labels to generate explanations for. If you set this to None, LIME will generate explanations with respect to the stored top_predictions. If NOT None, LIME will generate explanations with respect to the provided values. As an example, if you wanted to generate explanations with respect to the classes 0, 3, and 8, you would need to pass 0, 3, 8 to labels_to_explain. Note that if labels_to_explain is NOT None, there should only be ONE value under image_indices.

· num_samples determines the size of the neighborhood that LIME will generate to explain images. Higher values might improve explanation clarity but will increase the amount of time necessary to generate explanations.

· positive_only, negative_only, and hide_rest determine which areas will be shown and highlighted in the images. When hide_rest is unchecked and positive_only or negative_only is checked, LIME will highlight positive or negative areas respectively. When hide_rest is checked and positive_only or negative_only is checked, LIME will only show positive or negative areas respectively and hide the rest.

As for segmenter arguments, they look like this:
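A corresponding sketch for the segmenter arguments, assuming the request carries an algorithm name plus a free-form dictionary of keyword arguments (the field name algo_type is illustrative; args is the dictionary referenced later in the walkthrough):

class Segmenter(BaseModel):
    algo_type: str = "quickshift"   # "quickshift", "felzenszwalb", or "slic"
    args: dict = {}                 # algorithm-specific keyword arguments, sent as strings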

We don’t specify explicit keyword arguments for Segmenter because LIME supports several segmentation algorithms, and typing out all of their arguments would be messy. Instead, we accept the arguments as a dictionary and then pass this dictionary as keyword arguments to SegmentationAlgorithm.

Setting up the API

Next, let’s set up our API. Here’s how we can do this:
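The original snippet isn’t reproduced here; a minimal sketch of the setup described below might look like this (the line numbers in the walkthrough refer to the original, slightly longer snippet):

app = FastAPI()

# Origins that are allowed to make cross-origin requests.
origins = [
    "http://localhost:3000",  # React frontend
    "http://localhost:5000",  # FastAPI backend
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_methods=["*"],
    allow_headers=["*"],
)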

We initialize a FastAPI API on line 2. Then, we set up CORS (Cross-Origin Resource Sharing) so that we can connect backends and frontends located in different origins.

On lines 5 and 6, we list the origins that can make cross-origin requests. In our case, we allow http://localhost:3000 (where the React application will run) and http://localhost:5000 (where the FastAPI API will run). If your app components have different origins, you will need to indicate them here.

In FastAPI, CORS is handled by the CORSMiddleware class. On line 9, we add our CORS configuration to the app using CORSMiddleware. The arguments allow_methods=["*"] and allow_headers=["*"] indicate that all HTTP methods and headers are allowed.

Setting up endpoints

Next, we need to set up endpoints for our app. For this post’s purposes, we defined three endpoints:

· /upload-model/ — endpoint for uploading model weights.

· /upload-images/ — endpoint for uploading images that you want to explain.

· /explain/ — endpoint for generating explanations.

Let’s go over these endpoints one by one!

/upload-model/

To explain the outputs of a model, we first need to be able to pass that model to the API. The endpoint /upload-model/ handles this.

FastAPI represents endpoints as Python functions that accept user-defined arguments. You decorate these functions with FastAPI’s built-in decorators like @app.get or @app.post to indicate which HTTP method will handle the calls to that endpoint. You pass a string to the decorator to indicate the URL of the endpoint.

With all that in mind, the declaration of our endpoint looks like this:
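A sketch of what such an endpoint might look like, with illustrative file paths, compile settings, and response message (the implementation in the repository may differ):

@app.post("/upload-model/")
async def upload_model(weights: UploadFile = File(...),
                       arch: UploadFile = File(...)):
    # Save the uploaded weights to server storage.
    async with aiofiles.open("model_weights.h5", "wb") as out_file:
        weights_bytes = await weights.read()
        await out_file.write(weights_bytes)

    # Rebuild the model from its JSON architecture and store it in the app.
    arch_bytes = await arch.read()
    app.model = tf.keras.models.model_from_json(arch_bytes.decode("utf-8"))

    # Compile the model and load the saved weights.
    app.model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
    app.model.load_weights("model_weights.h5")

    return {"detail": "Model loaded!"}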

Note that the name of the function under the decorator can be anything you want. The parameters listed in the function definition (weights and arch) are the arguments that you are expected to provide when sending HTTP requests.

weights and arch in our function have a type UploadFile. UploadFile indicates that this endpoint expects to receive a file as part of form data. FastAPI encodes UploadFile as multipart/form-data, while request bodies are encoded as application/json. You can combine multiple forms and UploadFile parameters, but you cannot combine form data with request bodies due to the limitations of the HTTP protocol.

Anyway, in our endpoint, we:

1. Use aiofiles.open to open a file in 'wb' (write binary) mode (line 5) to handle the received model weights.

2. Asynchronously read the byte contents of weights (line 7) and write them to server storage (line 10) so that we can load the weights later.

3. Asynchronously read the model architecture file (line 13) as bytes.

4. Load the model architecture from the architecture bytes and store the model in the app (line 16).

5. Compile the model (line 19).

6. Load the saved model weights (line 22).

/upload-images/

The declaration of /upload-images/ looks like this:
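A sketch along the same lines, assuming JPEG input and normalization by 255:

@app.post("/upload-images/")
async def upload_images(images: List[UploadFile] = File(...)):
    # Reset any images stored by previous calls to this endpoint.
    app.images = []

    for image in images:
        # Read the raw bytes of the uploaded JPEG.
        image_bytes = await image.read()

        # Decode to a NumPy array and normalize the pixel values.
        image_array = tf.io.decode_jpeg(image_bytes).numpy()
        image_array = image_array / 255.0

        # Store the image in the app for the /explain/ endpoint.
        app.images.append(image_array)

    return {"detail": "Images uploaded!"}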

In this endpoint, we define a required parameter images with datatype List[UploadFile]. We need a list of UploadFile objects because we want to accept more than one image.

In this endpoint, we do the following:

1. Create an empty list in the app that will store the passed images (line 5). If images from previous calls to this endpoint exist, they are deleted.

2. Iterate over the images (lines 8 to 19).

3. Asynchronously read the byte contents of the current image (line 10).

4. Convert the image to a NumPy array (line 13) and normalize it (line 16).

5. Save the image in the app (line 19).

The API repeats these steps for each of the provided images.

/explain/

The code for this endpoint looks like this:
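A sketch of the endpoint, assuming the helper functions described in the next section take the two parameter objects as arguments:

@app.post("/explain/")
def explain(explanation_params: Explanation, segmenter_params: Segmenter):
    # Generate and store explanations for the uploaded images.
    generate_explanations(explanation_params, segmenter_params)

    # Plot with respect to the provided labels if there are any,
    # otherwise with respect to the model's top predictions.
    if explanation_params.labels_to_explain is not None:
        explain_images_with_respect_to_labels(explanation_params)
    else:
        explain_images_with_respect_to_predictions(explanation_params)

    return {"detail": "Explanations generated!"}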

In the function definition, we list the parameters explanation_params: Explanation and segmenter_params: Segmenter. These correspond to the data models that we declared for explanation and segmenter arguments earlier.

Inside the function, we do the following:

1. Generate explanations for the provided images (lines 7–8).

2. Check if the user has provided specific labels to explain the images for (lines 11 to 16). If yes, the explanations will be generated for the provided labels (line 13) — if not, explanations will be generated with respect to the model’s predictions (line 15).

These steps rely on a few helper functions outside of the endpoint function — more below.

Utility functions for explanations

We have four utility functions to help us generate explanations:

· create_segmenter — handles the creation of the segmentation algorithm.

· generate_explanations — generates explanations for the provided images.

· explain_images_with_respect_to_labels — if specific labels are provided, explains the images with respect to them.

· explain_images_with_respect_to_predictions — if specific labels are not provided, explains the images with respect to the model predictions saved by LIME.

Let’s now go over these functions one by one.

create_segmenter

create_segmenter is a really simple function — it just takes segmenter_params and passes them to LIME’s SegmentationAlgorithm as keyword arguments:
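A sketch of such a function, reusing the hypothetical algo_type and args fields from the Segmenter data model above:

def create_segmenter(segmenter_params):
    # Forward the parsed arguments straight to LIME's SegmentationAlgorithm.
    return SegmentationAlgorithm(segmenter_params.algo_type,
                                 **segmenter_params.args)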

Clean, simple, and just right for handling the several supported segmentation algorithms!

generate_explanations

Arguably the most important function in our API, generate_explanations produces explanations for our images. The function is as follows:
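A condensed sketch of what this function might look like; the per-algorithm argument lists, the hide_color setting, and other details are assumptions, and the original is longer:

def generate_explanations(explanation_params, segmenter_params):
    explainer = lime_image.LimeImageExplainer()

    # The app sends every segmenter argument as a string, so decide which
    # arguments need to be cast back to floats or integers.
    float_args = {"quickshift": ["ratio", "kernel_size", "max_dist", "sigma"],
                  "felzenszwalb": ["scale", "sigma"],
                  "slic": ["compactness", "sigma"]}[segmenter_params.algo_type]
    int_args = {"quickshift": [],
                "felzenszwalb": ["min_size"],
                "slic": ["n_segments"]}[segmenter_params.algo_type]

    for arg in float_args:
        if arg in segmenter_params.args:
            segmenter_params.args[arg] = float(segmenter_params.args[arg])
    for arg in int_args:
        if arg in segmenter_params.args:
            segmenter_params.args[arg] = int(segmenter_params.args[arg])

    # Build the segmentation algorithm.
    segmenter = create_segmenter(segmenter_params)

    # Remove explanations from previous runs and explain every uploaded image.
    app.explanations = []
    for image in app.images:
        explanation = explainer.explain_instance(
            image,
            classifier_fn=app.model.predict,
            top_labels=explanation_params.top_labels,
            hide_color=0,
            num_samples=explanation_params.num_samples,
            segmentation_fn=segmenter,
        )
        app.explanations.append(explanation)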

In this function, we:

1. Instantiate a LimeImageExplainer object (line 6).

2. Check which segmentation algorithm was requested and determine which of its float and integer arguments need to be converted from strings back to their respective data types (lines 11 to 19). We do this because our API converts all values in the segmenter_params.args dictionary to strings.

3. Convert string floats to floats (lines 22–23) and string integers to integers (lines 26–27).

4. Create a segmenter with segmenter_params (line 30).

5. Create an empty list in the app to store explanations (line 33). Any explanations saved in previous runs are deleted on this line.

6. Iterate over the images that we had passed to /upload-images/, generate explanations for them, and store the explanations in app.explanations (lines 36 to 44).

explain_images_with_respect_to_labels

explain_images_with_respect_to_labels looks like this:
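A condensed sketch of such a function, using explanation.get_image_and_mask() and scikit-image's mark_boundaries() as in LIME's own examples; figure sizing, subplot titles, and the output file name are illustrative:

from skimage.segmentation import mark_boundaries  # installed with LIME

def explain_images_with_respect_to_labels(explanation_params):
    # Parse the requested labels, e.g. "0, 3, 8" -> [0, 3, 8].
    labels = [int(label) for label
              in explanation_params.labels_to_explain.split(",")]

    # At most five columns; one extra row for every five labels.
    n_rows = int(np.ceil(len(labels) / 5))
    n_cols = min(len(labels), 5)

    fig, axes = plt.subplots(n_rows, n_cols, squeeze=False)
    fig.set_size_inches(3 * n_cols, 3 * n_rows)

    # With specific labels, only one image index is supported.
    image_index = int(explanation_params.image_indices.split(",")[0])
    explanation = app.explanations[image_index]

    counter = 0
    for row in range(n_rows):
        for col in range(n_cols):
            if counter < len(labels):
                # Plot the explanation for the current label.
                image, mask = explanation.get_image_and_mask(
                    labels[counter],
                    positive_only=explanation_params.positive_only,
                    negative_only=explanation_params.negative_only,
                    hide_rest=explanation_params.hide_rest,
                )
                axes[row][col].imshow(mark_boundaries(image, mask))
                axes[row][col].set_title(f"Label {labels[counter]}")
                counter += 1
            else:
                # More axes than labels: switch the unused axes off.
                axes[row][col].axis("off")

    fig.tight_layout()
    fig.savefig("explanations.png")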

In this function, we:

1. Get the saved explanations (line 4).

2. Break down explanation_params.labels_to_explain into a list of string numbers (line 7) and convert them to integers (lines 10–11).

3. Dynamically determine the number of rows and columns for the explanation plot (lines 15 to 26). If the number of labels to explain is less than 5, the number of rows is set to 1, and the number of columns is set to the number of labels. If the number of labels is more than 5, the number of rows is set to int(np.ceil(len(labels_to_explain) / 5)) and the number of columns to 5. In the latter case, we will have more axes than labels to explain, meaning that these axes won’t hold any explanations and will need to be switched off.

4. Create a figure with subplots (line 29) and change its size in accordance with the number of rows and columns (line 30).

5. Get the explanation that corresponds to explanation_params.image_indices (line 33).

6. Declare the counting variable counter (line 36).

7. Iterate over our rows and columns, plotting explanations up to the index counter (lines 39 to 61). If there are more axes than labels to explain, counter will allow us to stop plotting once we’ve explained all the provided labels. counter is incremented after each explanation is plotted.

8. Switch off the axes after counter (lines 64–65).

9. Tighten the elements in the figure (line 68) and save the plot with explanations (line 71).

explain_images_with_respect_to_predictions

Finally, explain_images_with_respect_to_predictions looks like this:
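A condensed sketch under the same assumptions as the previous helper:

def explain_images_with_respect_to_predictions(explanation_params):
    # Parse the image indices, e.g. "0, 1, 4, 9" -> [0, 1, 4, 9].
    indices = [int(index) for index
               in explanation_params.image_indices.split(",")]

    # One row per image; one column per explained prediction,
    # plus an extra column for the original image.
    n_rows = len(indices)
    n_cols = explanation_params.top_predictions + 1

    fig, axes = plt.subplots(n_rows, n_cols, squeeze=False)
    fig.set_size_inches(3 * n_cols, 3 * n_rows)

    for row, index in enumerate(indices):
        explanation = app.explanations[index]

        # Show the original image in the first column.
        axes[row][0].imshow(app.images[index].squeeze(), cmap="gray")
        axes[row][0].set_title("Original")

        # Explain the top predictions in the remaining columns.
        for col in range(1, n_cols):
            label = explanation.top_labels[col - 1]
            image, mask = explanation.get_image_and_mask(
                label,
                positive_only=explanation_params.positive_only,
                negative_only=explanation_params.negative_only,
                hide_rest=explanation_params.hide_rest,
            )
            axes[row][col].imshow(mark_boundaries(image, mask))
            axes[row][col].set_title(f"Label {label}")

    fig.tight_layout()
    fig.savefig("explanations.png")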

In this function, we:

1. Get the saved explanations (line 4).

2. Break down explanation_params.image_indices into a list of string numbers (line 7) and convert them to integers (lines 10–11).

3. Dynamically determine the number of rows and columns in the plot (lines 17 to 18). We set the number of rows equal to the number of image indices. The number of columns is explanation_params.top_predictions + 1, where explanation_params.top_predictions is the number of predictions we want to explain and 1 is an extra column for the original image.

4. Create a figure with subplots (line 21) and change its size in accordance with the number of rows and columns (line 22).

5. Iterate over the image indices (lines 25 to 54), getting the explanation for the current index (line 27), displaying the original image (lines 30 to 34), and plotting explanations for the top predictions (lines 39 to 54).

6. Tighten the elements in the figure (line 57) and save the plot with explanations (line 60).

Launching the API

There are two ways to launch a FastAPI API:

· From a terminal.

· Programmatically, from within a Python script.

With either of these, you will need an ASGI server library like Uvicorn or Hypercorn. We’ll be using Uvicorn.

To run our API from the terminal, you need to do the following:

1. Place all your Python code into a .py script file. For the purposes of this guide, name the script lime-api.py.

2. Open the terminal in the location of lime-api.py.

3. Run the command uvicorn lime-api:app. lime-api in the command refers to the name of our script (without the file extension), while app is the identifier (name) of the FastAPI object we instantiate in our script.

You can pass additional arguments to the uvicorn command, like this:

uvicorn lime-api:app --port 5000 --reload

The argument --port 5000 specifies the port that we want to bind our app to. --reload makes it so that the API is automatically reloaded every time you make a change to its code.

Alternatively, to launch an ASGI server programmatically, add the following to the end of your API script:

https://gist.github.com/tavetisyan95/7383806720ae52d7e92e4fef5839fc97
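The gist above contains the exact snippet; the general pattern is to call uvicorn.run() under a main guard, roughly like this (host and port are up to you):

import uvicorn

if __name__ == "__main__":
    # Launch the ASGI server programmatically on port 5000.
    uvicorn.run(app, host="0.0.0.0", port=5000)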

Note that you might be unable to run Uvicorn programmatically from a Jupyter notebook. If you can’t make FastAPI work from a Jupyter notebook, just copy your code to a .py script and run it instead.

Viewing automatic API documentation

One really cool feature of FastAPI is automatic documentation. FastAPI inspects your endpoints and their arguments and automatically generates a simple documentation page for them.

After you launch your API, you can access its documentation page by visiting http://localhost:5000/docs. For our API, you should see something like this:

You can click on the individual endpoints under default and inspect their parameters and defined responses:

You can even test each of the endpoints by clicking Try it out at the top and providing the necessary data. This feature really helped us troubleshoot our API and figure out how to make it work with different types of inputs.

Using Our API in a React Application

We’ve used the API from above to build a React application. The application allows you to explain image classification models without writing any code.

Note that to be able to use our API in a React application, we’ve made some slight changes to some URLs so that we could Dockerize the application.

Below, you’ll find instructions on how to launch and use the app. Let’s get going!

Cloning the repository

To get started, run this command in the terminal (or in Bash if you are on Windows) to clone the application’s repository:

git clone https://github.com/tavetisyan95/lime_web_app.git

The repository contains our Python API, the React frontend, and a number of TF model files and images that will help you test the application.

Editing ports and endpoints (if necessary)

config.js in the directory app/lime_web_app/src/ defines ports and endpoints that our React app will use to send requests:

Editing this file won’t change the actual ports and endpoints that our API uses — it will only affect the URLs that the app will send HTTP requests to. If you decide to change ports or endpoints in your API code, make sure to make edits to config.js as well. If possible, don’t change the port of http-server.

Starting the application

After you clone the repository, launch the Docker app on your machine. Then, navigate to the cloned repository and run the following in the terminal:

docker-compose -f docker-compose.yaml up -d --build

It will take some time for the app to install dependencies and fully spin up.

You will know that the app is up when the terminal reports that the containers have started. You will also see them in the Docker Desktop app.

After the containers are running, navigate to http://localhost:3000 in your web browser to access the app’s web page.

Using the application

The app’s interface looks like this:

There are four individual sections in the app:

· TENSORFLOW MODEL FILES, where you upload your TF Keras model files and architecture.

· IMAGES, where you upload the images that you want to explain.

· SEGMENTER, where you can adjust the parameters of the three segmentation algorithms that LIME supports.

· EXPLAINER, where you adjust parameters to generate explanations.

Let’s go over these sections and see what our app can do!

Uploading TF model files

First of all, you need to upload your model to the app’s server.

The app requires two things:

· The weights of your model in .h5 format.

· The architecture of your model saved in a .json file.

To get the weights of your model in .h5 format, run this code after you train it:
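Assuming your trained tf.keras model is stored in a variable named model, the call looks like this (the file name is up to you):

# Save the trained model's weights in HDF5 format.
model.save_weights("model_weights.h5")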

And to get the model architecture in a .json file, run this code:
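Similarly, the architecture can be serialized with model.to_json() and written to a file:

# Save the model architecture as JSON.
with open("model_arch.json", "w") as json_file:
    json_file.write(model.to_json())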

We provide files for two models — one trained on the MNIST digits dataset and the other on the CIFAR10 dataset. You can find them under /models/ in the directory of the cloned repository.

You can use the test files we provide or try your own models. But to get started, use the files that we prepared.

To upload the model weights, click on UPLOAD under Model weights and select one of the .h5 files in /models/.

To upload the model architecture, click on UPLOAD under Model arch and select the .json file that corresponds to the weights you selected. The models have similar architectures but different input shapes, so their files are not interchangeable.

After you select both files, click Upload model to send the files to the API endpoint /upload-model/.

If everything is OK, you will see Model loaded! under RESPONSE and the status code 200 in the terminal that you are running the app from.

Uploading images

Next, upload the images that you want to explain under the section IMAGES.

We provide test images for each of the classes of MNIST digits and CIFAR10. You can find these under /images/mnist/ and /images/cifar10/ respectively.

Click UPLOAD under Images, navigate to the folder that corresponds to the model you uploaded, and select the images for upload. Then, click Upload images to send the image files to the API endpoint /upload-images/. If the operation is successful, you will see Images uploaded! under RESPONSE. You’ll again see the status code 200 in the terminal.

You should ideally upload images for EVERY available class. But this isn’t necessary if you just want to explain one specific image.

Note that the app expects uint8-encoded JPEG images in channels-last format (the channel axis should be the last dimension). The pixel values in the images should range from 0 to 255 — they should NOT be normalized.

As a point of reference, the provided test images were saved using the following piece of Python code:
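The exact snippet isn’t shown here; one plausible way to produce images in this format with TensorFlow (already a dependency) looks like this:

import tensorflow as tf
from tensorflow.keras.datasets import mnist

# MNIST images are already uint8 with pixel values from 0 to 255.
(_, _), (x_test, y_test) = mnist.load_data()

# Add a trailing channel dimension and encode as JPEG without normalizing.
image = tf.constant(x_test[0][..., None], dtype=tf.uint8)
tf.io.write_file("mnist_test_0.jpg", tf.io.encode_jpeg(image))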

Selecting a segmentation algorithm and adjusting its parameters

As the next step, you can adjust the segmentation algorithm.

Under SEGMENTER, you can select one of the three segmentation algorithms that LIME supports — quickshift, felzenszwalb, or slic. To select an algorithm, click the dropdown list under the box Segmenter and choose the segmenter that you want to use.

For each algorithm, you can adjust several parameters to tweak the explanations. The default values work for the provided test images. However, for other images and models, you might need to play around with the parameters to see what works best.

Keep in mind that we haven’t implemented some of the parameters of quickshift, felzenszwalb, or slic in our app. You can find out more about the parameters of these algorithms here.

Generating explanations

Now it’s time to adjust the explanation parameters under EXPLAINER.

To show how the parameters affect explanations, let’s first try to explain a number of Image indices. More precisely, let’s explain the top 5 predictions for the images under the indices 0, 1, 4, and 9 from the MNIST dataset. In the case of the MNIST dataset, the values that you pass to Image indices correspond to the actual digits in the images.

Here’s how the parameters look in the app:

We’ve listed the indices under Image indices and changed Top predictions from 3 to 5. The rest of the parameters are unchanged. We’re also using quickshift with its default parameters.

After you select your parameters, click Explain near the bottom of the webpage. All the values you have entered under SEGMENTER and EXPLAINER will be sent to our API’s endpoint /explain/.

It will take some time for LIME to explain the images — if this takes too long for you, pass a smaller value to Number of samples.

Once the explanations are ready, you will see them in the box Explanations at the bottom of the webpage.

You can right-click on the image and then click on Save image as… to download it in high resolution. You can also open the image in a new tab and likewise inspect it in high detail.

Next, let’s try to explain the image under the index 2 with respect to all labels. And let’s also disable positive_only to see both positive and negative areas in the explanations.

The explanations for these parameters would look like this:

If you only want to explain a few specific labels — say, 1, 2, 3, 5, 7, 9 — type their values under Labels to explain and hit Explain. The result would look like this:

Limitations of the Application

In its current implementation, the app has some notable limitations, including:

· Error messages in the app are extremely limited. You only get Something went wrong! if something goes wrong after you click one of the buttons in the app.

· The app only supports TF/Keras models, and it only supports JPEG images.

· When Labels to explain is NOT None, the app can only generate explanations for ONE image index.

· The only preprocessing step done on uploaded images is normalization by dividing the pixel values by 255.0.

· Not all parameters of the supported segmentation algorithms were included in the app.

Next Steps

There are a lot of things that you could try to improve in our application. You could try to replace LIME with SHAP, or maybe even add SHAP as a secondary explanation framework to the app. This might be challenging due to SHAP’s spotty support for TensorFlow 2, but it should be doable.

In addition, you could add more informative error messages to the app and add support for more image formats or model types!

No matter what you decide to do, we hope that this guide was useful to you and that it will help you start building your own end-to-end projects!

Until next time!
