Research article · Special Issue

Machine learning and artificial neural networks to construct P2P lending credit-scoring model: A case using Lending Club data

  • Received: 16 March 2022; Revised: 26 May 2022; Accepted: 30 May 2022; Published: 09 June 2022
  • JEL Codes: D12, E41, E44, G20

  • In this study, we constructed credit-scoring models for P2P loans by using several machine learning and artificial neural network (ANN) methods, including logistic regression (LR), a support vector machine, a decision tree, a random forest, XGBoost, LightGBM and a 2-layer neural network. For each method, we explored several hyperparameter settings through a grid search with cross-validation to obtain the most suitable credit-scoring model in terms of training time and test performance. We obtained and cleaned open P2P loan data from Lending Club and applied feature-engineering concepts. To identify significant default factors, we pre-trained an XGBoost model on all of the data to obtain feature-importance rankings. The 16 selected features can provide economic implications for research on default prediction in P2P lending. In addition, the empirical results show that gradient-boosting decision-tree methods, including XGBoost and LightGBM, outperform the ANN and LR methods commonly used in traditional credit scoring. Among all of the methods, XGBoost performed the best.

    Citation: An-Hsing Chang, Li-Kai Yang, Rua-Huan Tsaih, Shih-Kuei Lin. Machine learning and artificial neural networks to construct P2P lending credit-scoring model: A case using Lending Club data. Quantitative Finance and Economics, 2022, 6(2): 303-325. doi: 10.3934/QFE.2022013
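    The page does not include the authors' code, so the following is a minimal sketch, in Python, of the workflow the abstract describes: pre-training XGBoost to rank feature importance, keeping the 16 highest-ranked features, and then tuning hyperparameters with a grid search and cross-validation. The file name, label column, grid values and 80/20 split are illustrative assumptions, not the authors' actual settings.

    ```python
    # Illustrative sketch only: column names, grid values and the split below
    # are assumptions, not the settings used in the paper.
    import pandas as pd
    import xgboost as xgb
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.metrics import roc_auc_score

    # Hypothetical cleaned Lending Club table with a binary default label.
    df = pd.read_csv("lending_club_cleaned.csv")
    X, y = df.drop(columns=["default"]), df["default"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    # Step 1: pre-train XGBoost on all features to rank their importance,
    # then keep the 16 highest-ranked features.
    pre_model = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss")
    pre_model.fit(X_train, y_train)
    importance = pd.Series(pre_model.feature_importances_, index=X.columns)
    top16 = importance.nlargest(16).index.tolist()

    # Step 2: grid search with 5-fold cross-validation over a small, assumed
    # hyperparameter grid, scored by AUC on the selected features.
    param_grid = {
        "max_depth": [3, 5, 7],
        "learning_rate": [0.05, 0.1],
        "n_estimators": [200, 400],
    }
    search = GridSearchCV(
        xgb.XGBClassifier(eval_metric="logloss"),
        param_grid, scoring="roc_auc", cv=5, n_jobs=-1)
    search.fit(X_train[top16], y_train)

    # Out-of-sample evaluation of the tuned model.
    test_auc = roc_auc_score(
        y_test, search.best_estimator_.predict_proba(X_test[top16])[:, 1])
    print(search.best_params_, round(test_auc, 4))
    ```

    A LightGBM, logistic-regression or neural-network baseline could be tuned in the same way by swapping the estimator and parameter grid passed to GridSearchCV, which mirrors the model comparison described in the abstract.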

  • © 2022 the Author(s), licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)

