Credit Risk Analysis

Overview:

This project applies data preparation, statistical reasoning, and supervised machine learning. Several techniques for training and evaluating models on imbalanced classes are compared, and the final result is a recommendation on which model performs best at predicting credit risk. All analysis is written in Python.


Credit Risk Analysis:

After thorough analysis it is clear that all six models carry inherent risk; however, the EasyEnsembleClassifier (EEC) model yields the best results. The EEC model produced the highest scores on most measures: its overall accuracy of 92.54% and its 7% precision for "High Risk" loan candidates were both the highest of the six models. Additionally, the recall for both risk groups was high, and the F1 score for "High Risk" loan candidates was the highest at 14%.

It is important to remember that sampling techniques cannot overcome the deficiencies of the original dataset. The majority of the applications were classified as "Low Risk", and with so few data points for the "High Risk" classification the algorithms can produce skewed results. This project used Q1 2019 data; it would be interesting to compare Q1 data from previous years or to analyze all of 2019.
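
As a hedged illustration of the evaluation workflow behind the figures below, this sketch builds a small synthetic imbalanced dataset as a stand-in for the Q1 2019 loan data (the real file and feature names are not reproduced here) and computes an accuracy score, confusion matrix, and per-class precision/recall/F1 report; balanced accuracy is assumed to be the "Accuracy" metric reported for each model:

```python
# Sketch only: a synthetic stand-in for the Q1 2019 loan data, since the
# real loan file and feature names are not reproduced in this README.
from collections import Counter

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from imblearn.metrics import classification_report_imbalanced

# Heavily imbalanced binary target: ~1% "high risk" (class 1), ~99% "low risk" (class 0).
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.99, 0.01],
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)
print(Counter(y_train))  # confirms the class imbalance in the training split

# Baseline logistic regression with no resampling, for comparison with the models below.
model = LogisticRegression(solver="lbfgs", random_state=1, max_iter=200)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print(balanced_accuracy_score(y_test, y_pred))       # "Accuracy" figure
print(confusion_matrix(y_test, y_pred))              # counts per class
print(classification_report_imbalanced(y_test, y_pred))  # precision, recall, F1 per class
```

The same train/test split and reporting calls are reused in the sketches for each resampling technique and ensemble classifier that follow.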


Resources:

  • Data: Q1 2019 loan application data
  • Software: Python, Jupyter Notebook, scikit-learn, imbalanced-learn

Deliverables:

  • Deliverable 1: Use Resampling Models to Predict Credit Risk
  • Deliverable 2: Use the SMOTEENN Algorithm to Predict Credit Risk
  • Deliverable 3: Use Ensemble Classifiers to Predict Credit Risk
  • Deliverable 4: Write Credit Risk Analysis

RandomOverSampler:

  • Accuracy: .6428

  • Precision:

    • High Risk: .01
    • Low Risk: 1
  • Recall:

    • High Risk: .61
    • Low Risk: .68
  • F1:

    • High Risk: .02
    • Low Risk: .81
  • Benefits:

    • Instances from the minority class are randomly selected (with replacement) and added to the minority class of the training set until both classes are the same size (a minimal sketch follows below).
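
A minimal sketch of random oversampling, assuming the X_train/y_train split from the evaluation sketch above (variable names are illustrative, not the project's actual code):

```python
from collections import Counter

from imblearn.over_sampling import RandomOverSampler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

# X_train, y_train, X_test, y_test: the train/test split from the earlier sketch.
ros = RandomOverSampler(random_state=1)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
print(Counter(y_resampled))  # both classes now have the same number of instances

model = LogisticRegression(solver="lbfgs", random_state=1, max_iter=200)
model.fit(X_resampled, y_resampled)
print(balanced_accuracy_score(y_test, model.predict(X_test)))
```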

SMOTE:

  • Accuracy: .6201

  • Precision:

    • High Risk: .01
    • Low Risk: 1
  • Recall:

    • High Risk: .60
    • Low Risk: .64
  • F1:

    • High Risk: .02
    • Low Risk: .78
  • Benefits:

    • For an instance from the minority class, a number of its closest neighbors are identified, and new synthetic instances are interpolated from the values of those neighbors (a minimal sketch follows below).
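
A minimal sketch of SMOTE oversampling, again assuming the earlier training split; the resampled data would then feed the same logistic regression fit shown above:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE

# X_train, y_train: the training split from the earlier sketch.
smote = SMOTE(random_state=1, sampling_strategy="auto")
X_resampled, y_resampled = smote.fit_resample(X_train, y_train)
print(Counter(y_resampled))  # minority class filled out with synthetic, neighbor-based points
```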

ClusterCentroids:

  • Accuracy: .6201

  • Precision:

    • High Risk: .01
    • Low Risk: 1
  • Recall:

    • High Risk: .59
    • Low Risk: .44
  • F1:

    • High Risk: .01
    • Low Risk: .61
  • Benefits:

    • The algorithm identifies clusters of the majority class, then generates centroids that are representative of those clusters.
    • The majority class is then undersampled down to the size of the minority class (a minimal sketch follows below).
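
A minimal sketch of ClusterCentroids undersampling, assuming the earlier training split:

```python
from collections import Counter

from imblearn.under_sampling import ClusterCentroids

# X_train, y_train: the training split from the earlier sketch.
cc = ClusterCentroids(random_state=1)
X_resampled, y_resampled = cc.fit_resample(X_train, y_train)
print(Counter(y_resampled))  # majority class reduced to synthetic cluster centroids
```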

SMOTEENN Algorithm:

  • Accuracy: .5108

  • Precision:

    • High Risk: .01
    • Low Risk: 1
  • Recall:

    • High Risk: .70
    • Low Risk: .57
  • F1:

    • High Risk: .02
    • Low Risk: .73
  • Benefits:

    • Oversample the minority class with SMOTE.
    • Clean the resulting data with an undersampling strategy: if the two nearest neighbors of a data point belong to two different classes, the data point is dropped (a minimal sketch follows below).
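
A minimal sketch of combined over- and undersampling with SMOTEENN, assuming the earlier training split:

```python
from collections import Counter

from imblearn.combine import SMOTEENN

# X_train, y_train: the training split from the earlier sketch.
sme = SMOTEENN(random_state=1)
X_resampled, y_resampled = sme.fit_resample(X_train, y_train)
print(Counter(y_resampled))  # oversampled with SMOTE, then cleaned with Edited Nearest Neighbours
```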

BalancedRandomForestClassifier:

  • Accuracy: .7878

  • Precision:

    • High Risk: .04
    • Low Risk: 1
  • Recall:

    • High Risk: .67
    • Low Risk: .91
  • F1:

    • High Risk: .07
    • Low Risk: .95
  • Benefits:

    • Are robust against overfitting, because each weak learner is trained on a different bootstrap sample of the data.
    • Can be used to rank the importance of input variables in a natural way (as shown in the sketch below).
    • Can handle thousands of input variables without variable deletion.
    • Are robust to outliers and nonlinear data.
    • Run efficiently on large datasets.

BalancedRandomForestClassifier feature importance ranking (screenshot)
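
A minimal sketch of the balanced random forest fit and a feature-importance ranking, again assuming the earlier train/test split (the classifier settings shown are illustrative):

```python
from imblearn.ensemble import BalancedRandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

# X_train, y_train, X_test, y_test: the train/test split from the earlier sketch.
brf = BalancedRandomForestClassifier(n_estimators=100, random_state=1)
brf.fit(X_train, y_train)
print(balanced_accuracy_score(y_test, brf.predict(X_test)))

# Rank input variables by importance, as in the feature-importance screenshot above.
importances = sorted(zip(brf.feature_importances_, range(X_train.shape[1])), reverse=True)
for score, feature in importances[:10]:
    print(f"feature {feature}: {score:.4f}")
```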


EasyEnsembleClassifier:

  • Accuracy: .9254

  • Precision:

    • High Risk: .07
    • Low Risk: 1
  • Recall:

    • High Risk: .91
    • Low Risk: .94
  • F1:

    • High Risk: .14
    • Low Risk: .97
  • Benefits:

    • The classifier is an ensemble of AdaBoost learners trained on different balanced bootstrap samples.
    • After evaluating the errors of the first model, another model is trained. This time, however, the model gives extra weight to the errors from the previous model. The purpose of this weighting is to minimize similar errors in subsequent models (a minimal sketch follows below).
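
A minimal sketch of the easy ensemble of AdaBoost learners, assuming the earlier train/test split:

```python
from imblearn.ensemble import EasyEnsembleClassifier
from imblearn.metrics import classification_report_imbalanced
from sklearn.metrics import balanced_accuracy_score

# X_train, y_train, X_test, y_test: the train/test split from the earlier sketch.
eec = EasyEnsembleClassifier(n_estimators=100, random_state=1)
eec.fit(X_train, y_train)
y_pred = eec.predict(X_test)

print(balanced_accuracy_score(y_test, y_pred))
print(classification_report_imbalanced(y_test, y_pred))  # precision, recall, and F1 per class
```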
