How low-code machine learning can power responsible AI


Rapid technical advancements and the widespread adoption of artificial intelligence (AI)-based products and workflows are impacting many aspects of human and business activity in banking, healthcare, advertising, and many other industries. While the accuracy of AI models is arguably the single most important factor to consider when deploying AI-based products, there is an urgent need to understand how to design AI that works responsibly.

Responsible AI is a framework that any software development organization should adopt to build customer confidence in the transparency, accountability, fairness, and security of all implemented AI solutions. At the same time, an important aspect of making AI accountable is having a development pipeline that can promote reproducibility of results and manage the lineage of data and ML models.

Low-code machine learning is gaining popularity with tools such as PyCaret, H2O.ai, and DataRobot, enabling data scientists to run pre-canned patterns for feature engineering, data cleansing, model development, and statistical performance comparison. Often, however, the missing pieces of these packages are patterns around responsible AI that evaluate ML models for fairness, transparency, explainability, causality, and more.

Here we demonstrate a quick and easy way to integrate PyCaret with Microsoft RAI (Responsible AI) framework to generate a detailed report with error analysis, explainability, causality and counterfactuals. The first part is a code walkthrough for developers to show how to build a RAI dashboard. The second part is an extensive evaluation of the RAI report.

Code overview

First we install the necessary libraries. This can be on your local machine with Python 3.6+ or on a SaaS platform such as Google Colab.

!pip install raiwidgets
!pip install pycaret
!pip install --upgrade pandas
!pip install --upgrade numpy

The Pandas and NumPy upgrades are needed at the time of writing but should become unnecessary soon. Also, don't forget to restart the runtime if you are installing in Google Colab.
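If you prefer to trigger the restart from code rather than Colab's Runtime menu, one common trick is to kill the kernel process; Colab restarts it automatically:

import os
# Kill the kernel process; Colab automatically restarts the runtime,
# picking up the upgraded pandas/numpy (same effect as Runtime > Restart runtime)
os.kill(os.getpid(), 9)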

Then we load data from GitHub, clean it up, and do feature engineering with PyCaret.

import pandas as pd, numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

csv_url = 'https://raw.githubusercontent.com/sahutkarsh/loan-prediction-analytics-vidhya/master/train.csv'
dataset_v1 = pd.read_csv(csv_url)
dataset_v1 = dataset_v1.dropna()

from pycaret.classification import *

clf_setup = setup(data=dataset_v1, target='Loan_Status', train_size=0.8,
                  categorical_features=['Gender', 'Married', 'Education',
                                        'Self_Employed', 'Property_Area'],
                  imputation_type='simple', categorical_imputation='mode',
                  ignore_features=['Loan_ID'], fix_imbalance=True,
                  silent=True, session_id=123)

The dataset is a simulated loan application dataset with applicant characteristics such as gender, marital status, employment, and income. PyCaret has a cool feature that makes the training and test dataframes available after feature engineering through the get_config method. We'll use this to get the cleaned-up features that we'll add to the RAI widget later.

X_train = get_config(variable="X_train").reset_index().drop(['index'], axis=1)
y_train = get_config(variable="y_train").reset_index().drop(['index'], axis=1)['Loan_Status']

X_test = get_config(variable="X_test").reset_index().drop(['index'], axis=1)
y_test = get_config(variable="y_test").reset_index().drop(['index'], axis=1)['Loan_Status']

df_train = X_train.copy()
df_train['LABEL'] = y_train
df_test = X_test.copy()
df_test['LABEL'] = y_test

Now we use PyCaret to build multiple models and compare them on Recall as a statistical measure of performance.

top5_results = compare_models(n_select=5, sort='Recall')

Figure 1 – PyCaret models compared on Recall

Our top model is a Random Forest Classifier with a Recall of 0.9, which we can plot here.

selected_model = top5_results[0]
plot_model(selected_model)

Figure 2 – AUC for ROC curves of the selected model
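plot_model defaults to the AUC plot shown above. PyCaret also supports other diagnostics by name (per its plot_model documentation), for example:

# Confusion matrix on the hold-out set
plot_model(selected_model, plot='confusion_matrix')

# Feature importance of the selected model
plot_model(selected_model, plot='feature')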

Now we will write our 10 lines of code to build a RAI dashboard using dataframes and models we generated with PyCaret.

cat_cols = ['Gender_Male', 'Married_Yes', 'Dependents_0', 'Dependents_1', 'Dependents_2', 'Dependents_3+', 'Education_Not Graduate', 'Self_Employed_Yes', 'Credit_History_1.0', 'Property_Area_Rural', 'Property_Area_Semiurban', 'Property_Area_Urban']

from raiwidgets import ResponsibleAIDashboard
from responsibleai import RAIInsights

rai_insights = RAIInsights(selected_model, df_train, df_test, 'LABEL', 'classification',
                           categorical_features=cat_cols)
rai_insights.explainer.add()
rai_insights.error_analysis.add()
rai_insights.causal.add(treatment_features=['Credit_History_1.0', 'Married_Yes'])
rai_insights.counterfactual.add(total_CFs=10, desired_class='opposite')
rai_insights.compute()

The above code, while quite minimalistic, does a lot of work under the hood. It creates RAI insights for classification and adds modules for explainability and error analysis. A causal analysis is then run on two treatment features, credit history and marital status, and a counterfactual analysis is performed for 10 scenarios. Now let's generate the dashboard.

ResponsibleAIDashboard(rai_insights)

The above code will start the dashboard on a port such as 5000. On a local machine, you can go directly to http://localhost:5000 and view the dashboard. On Google Colab, you need to do a simple trick to see this dashboard.

from google.colab.output import eval_js

print(eval_js(“google.colab.kernel.proxyPort(5000)”))

This will give you a URL to view the RAI dashboard. Below are some parts of the dashboard, along with key results of this analysis that were automatically generated to complement the AutoML analysis performed by PyCaret.
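One optional step before diving into the results: the computed insights can be persisted and reloaded later, so the dashboard doesn't have to be recomputed every session. A minimal sketch, assuming the responsibleai package's save/load methods (the path is illustrative):

# Persist the computed insights to disk
rai_insights.save('./rai_insights_loan')

# Later, rebuild the dashboard without recomputing:
# loaded = RAIInsights.load('./rai_insights_loan')
# ResponsibleAIDashboard(loaded)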

Results: Responsible AI Report

Error analysis: We find that the error rate is high for rural property areas, and our model has a negative bias for this feature.
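You can sanity-check this finding outside the widget. A minimal sketch, assuming the X_test/y_test frames obtained from get_config above (PyCaret's setup label-encodes the target, and the selected model is a scikit-learn estimator):

# Compare the model's error rate on the rural cohort vs. overall
preds = selected_model.predict(X_test)
errors = (preds != y_test.values)
rural = (X_test['Property_Area_Rural'] == 1).values
print('Overall error rate: %.3f' % errors.mean())
print('Rural cohort error rate: %.3f' % errors[rural].mean())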

Global explainability – feature importance: We see that feature importance stays consistent across both cohorts – all data (blue) and the rural property area cohort (orange). We see that for the orange cohort, property area has a greater impact, yet credit history is still the #1 factor.
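Since the top model here is a random forest, a quick cross-check of the dashboard's global importances is possible straight from the fitted estimator (a sketch, assuming selected_model exposes scikit-learn's feature_importances_):

import pandas as pd
# Rank features by the random forest's impurity-based importance
importances = pd.Series(selected_model.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))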

Local explainability: We see that credit history is also an important feature for an individual prediction – row #20.

Counterfactual analysis: We see that for the same row #20, a decision flip from N to Y may be possible (based on the data) if the credit history and loan amount are changed.

Causal inference: We use causal analysis to study the impact of two treatments, credit history and marital status, and see that credit history has a greater direct effect on approval.
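If you want these artifacts outside the dashboard, the computed results can also be pulled programmatically. A sketch, assuming each RAIInsights manager in the responsibleai package exposes a get() accessor:

# Pull the computed results out of the insights object
cf_results = rai_insights.counterfactual.get()   # counterfactual examples per query point
causal_results = rai_insights.causal.get()       # estimated causal effects of the treatments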

A responsible AI report covering model error analysis, explainability, causal inference and counterfactuals can add great value beyond the traditional precision/recall metrics we commonly use to evaluate models. With modern tools such as PyCaret and the RAI dashboard, it is easy to build these reports. These reports can be developed with other tools as well – the key is that data scientists evaluate their models against these responsible AI patterns to ensure the models are both ethical and accurate.

Dattaraj Rao is the lead data scientist at Persistent.


This post, "How low-code machine learning can power responsible AI," was originally published at https://venturebeat.com/2022/04/16/how-low-code-machine-learning-can-power-responsible-ai/