App

Launching the App

Launch the app by directing your web browser to this URL:
https://afarahat.shinyapps.io/fairbank-app/

Using the App

This interactive app provides an opportunity to deepen your understanding of the issues raised in the FairBank case. In particular:

  1. You can trace through the analysis underlying the case exhibits presented by Mark Chen and Salma Khan.

  2. You can perform what-if analysis to explore how data and modeling issues might affect the disparities discussed in the case.

In the Input Panels section, you can specify:

  • Characteristics of the applicant population, including the population mix, the parameters of the loan-worthiness score distributions, and repayment rates. A common repayment model applies to both applicant groups.

  • Parameters governing how the training and test datasets are generated. By default, the training set is representative of the applicant population, but you can change that to reflect data biases.

  • Variables and thresholds used by the classification model. A logistic regression model is used for classification. By default, the model does not use group affiliation (it uses only loan-worthiness scores) and applies a common classification threshold to both applicant groups. You can explore different settings; a sketch of this setup follows the list.
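To make this concrete, here is a minimal Python sketch of the kind of pipeline these panels control: a two-group applicant population with group-specific loan-worthiness score distributions, a common repayment model shared by both groups, a representative training set, and a logistic regression classifier that uses only the score and a common approval threshold. The function names, coefficients, and parameter values are illustrative assumptions, not the app's actual implementation.

    # Illustrative sketch only; all names and parameter values are assumptions,
    # not the app's actual implementation or defaults.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def simulate_applicants(n, mix=0.5, score_means=(0.0, 0.0), score_sds=(1.0, 1.0)):
        """Draw group labels, loan-worthiness scores, and repayment outcomes."""
        group = rng.binomial(1, mix, size=n)                    # population mix
        score = rng.normal(np.where(group == 1, score_means[1], score_means[0]),
                           np.where(group == 1, score_sds[1], score_sds[0]))
        # Common repayment model: repayment probability depends only on the
        # score, in the same way for both groups.
        p_repay = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * score)))   # illustrative coefficients
        repaid = rng.binomial(1, p_repay)
        return group, score, repaid

    # Representative training and test sets drawn from the same population;
    # data biases could be mimicked by sampling the training set differently.
    g_tr, s_tr, y_tr = simulate_applicants(5000)
    g_te, s_te, y_te = simulate_applicants(5000)

    # Classification model: logistic regression on the score only (no group
    # affiliation), with a common approval threshold across groups.
    clf = LogisticRegression().fit(s_tr.reshape(-1, 1), y_tr)
    threshold = 0.5
    approve = clf.predict_proba(s_te.reshape(-1, 1))[:, 1] >= threshold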

In the Output Panels section, you can browse the training dataset, the logistic regression model fitted to it, the test dataset (including model predictions and loan approval decisions), the corresponding confusion matrices, and several parity metrics. This section also includes a panel that visualizes the data and the model-based loan approval decisions. Make sure you press the Refresh button so that any changes made in the Input Panels section take effect.
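As a rough companion to these panels, the sketch below (continuing the one above and reusing its g_te, approve, and y_te arrays) tabulates a per-group confusion matrix, the approval rate compared under demographic (statistical) parity, and the true positive and false positive rates compared under equal opportunity. The app's actual set of parity metrics may differ.

    # Continues the sketch above; per-group confusion matrices and parity metrics.
    def group_metrics(group, approve, repaid, g):
        m = group == g
        a, r = approve[m], repaid[m].astype(bool)
        tp = np.sum(a & r);  fp = np.sum(a & ~r)
        fn = np.sum(~a & r); tn = np.sum(~a & ~r)
        return {
            "confusion": [[tp, fp], [fn, tn]],   # rows: approve/deny; cols: repaid/default
            "approval_rate": a.mean(),           # compared under demographic parity
            "tpr": tp / (tp + fn),               # compared under equal opportunity
            "fpr": fp / (fp + tn),
        }

    for g in (0, 1):
        print(g, group_metrics(g_te, approve, y_te, g))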

The exhibits included in the FairBank case correspond to the app’s default parameter settings. You can explore how changes in these parameters affect the disparity metrics, for example, the effects of:

  1. loan-worthiness disparities;

  2. data biases;

  3. different classification thresholds (see the sketch below).
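As one example of such a what-if, the short continuation below rescores the same test set with group-specific thresholds and compares approval rates by group. The cutoff values are illustrative only, and the other experiments follow the same pattern: change an input, regenerate the data, refit, and recompute the metrics.

    # Continues the sketches above; hypothetical group-specific thresholds.
    p_hat = clf.predict_proba(s_te.reshape(-1, 1))[:, 1]
    cutoffs = {0: 0.5, 1: 0.4}                              # illustrative values
    approve_whatif = p_hat >= np.where(g_te == 1, cutoffs[1], cutoffs[0])
    for g in (0, 1):
        print(g, approve_whatif[g_te == g].mean())          # approval rate by group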