SCIENCE CHINA Information Sciences, Volume 63, Issue 6: 162101 (2020) https://doi.org/10.1007/s11432-019-2720-1

Learning a graph-based classifier for fault localization

More info
  • Received: Jul 21, 2019
  • Accepted: Nov 21, 2019
  • Published: May 9, 2020



This work was supported by the National Key R&D Program of China (Grant No. 2018YFC0830500), the National Natural Science Foundation of China (Grant No. 61572313), and the Science and Technology Commission of Shanghai Municipality (Grant No. 15DZ1100305). We thank the anonymous reviewers for their constructive comments.



  • Figure 1

    (Color online) The bug report of ARIES-1467.

  • Figure 2

    (Color online) The source code of ARIES-1467. (a) The buggy file of ARIES-1467; (b) the fixed file of ARIES-1467; (c) the program dependency graph of (a) (partial).

  • Figure 3

    (Color online) The overview of our approach. (a) Extracting features; (b) training and predicting faults.

  • Figure 4

    (Color online) A sample commit.

  • Figure 6

    (Color online) ROC. (a) Aries; (b) Mahout; (c) Derby; (d) Cassandra.

  • Table 1

    The features extracted by


  • Table 2

    Subject

    Project     Single  Multiple  Graph  Fix   Percentage (%)
    Aries       37      263       1192   394   76.1
    Mahout      47      253       1573   313   95.8
    Derby       32      268       1981   1134  26.5
    Cassandra   30      270       1468   2536  11.8
    Total       146     1054      6214   4377  27.4
  • Table 3

    Overall effectiveness of ClaFa

    Project     Precision  Recall  $f$-score  Area under ROC
    Aries       0.772      0.818   0.787      0.647
    Mahout      0.856      0.888   0.871      0.619
    Derby       0.902      0.924   0.912      0.647
    Cassandra   0.892      0.917   0.903      0.650
  • Table 4

    Learning from other projects

    Project     Aries  Mahout  Derby  Cassandra  Combination
    Aries       --     0.445   0.431  0.485      0.459
    Mahout      0.467  --      0.440  0.428      0.439
    Derby       0.519  0.439   --     0.507      0.479
    Cassandra   0.496  0.468   0.457  --         0.463
  • Table 5

    Top ten features, ranked in the order of their importance in the clean versus buggy classification$^{\rm~a)b)c)d)}$

    Rank | Aries | Mahout | Derby | Cassandra
    1 | g, $F_5$, o $\xleftarrow{c}$ | $F_{46}$ | g, $F_5$, o $\xleftarrow{c}$ | g, $F_5$, o $\xleftarrow{c}$
    2 | l, $F_5$, o $\xleftarrow{c}$ | g, $F_1$, o $\xleftarrow{d}$ | g, $F_3$, o $\xleftarrow{c}$ | g, $F_5$, o $\xleftarrow{c}$
    3 | l, $F_7$, o $\xleftarrow{c}$ | l, $F_1$, o $\xleftarrow{d}$ | l, $F_5$, o $\xleftarrow{c}$ | g, $F_5$, i $\xleftarrow{c}$
    4 | l, $F_6$, i $\xleftarrow{c}$ | l, $F_1$, o $\xleftarrow{c}$ | l, $F_3$, o $\xleftarrow{c}$ | g, $F_5$, i $\xleftarrow{c}$
    5 | g, $F_3$, i $\xleftarrow{c}$ | g, $F_4$, o $\xleftarrow{c}$ | l, $F_7$, o $\xleftarrow{c}$ | l, $F_5$, i $\xleftarrow{c}$
    6 | l, $F_3$, i $\xleftarrow{c}$ | g, $F_1$, i $\xleftarrow{c}$ | g, $F_2$, o $\xleftarrow{c}$ | $F_{46}$
    7 | l, $F_2$, o $\xleftarrow{c}$ | n, $F_1$ | l, $F_2$, o $\xleftarrow{c}$ | l, $F_5$, i $\xleftarrow{d}$
    8 | g, $F_2$, o $\xleftarrow{c}$ | l, $F_1$, i $\xleftarrow{c}$ | g, $F_4$, o $\xleftarrow{c}$ | n, $F_1$
    9 | l, $F_3$, o $\xleftarrow{c}$ | g, $F_5$, o $\xleftarrow{c}$ | l, $F_4$, o $\xleftarrow{c}$ | g, $F_4$, o $\xleftarrow{c}$
    10 | g, $F_3$, o $\xleftarrow{c}$ | g, $F_1$, o $\xleftarrow{c}$ | g, $F_2$, o $\xleftarrow{c}$ | l, $F_7$, o $\xleftarrow{c}$


  • Table 6

    The results without bug reports

    Project     Precision  Recall  $f$-score
    Aries       0.691      0.664   0.651
    Mahout      0.710      0.679   0.668
    Derby       0.735      0.672   0.652
    Cassandra   0.706      0.657   0.631