Predicting Method Crashes with Bytecode Operations

  • Sunghun Kim,
  • Tom Zimmermann,
  • Rahul Premraj,
  • Nicolas Bettenburg,
  • Shivkumar Shivaji

Proceedings of the 6th Annual India Software Engineering Conference (ISEC 2013)

Published by ACM

Existing research is unclear on how to generate lessons learned for defect prediction and effort estimation: should we seek lessons that are global to multiple projects, or local to particular projects? This paper comparatively evaluates local versus global lessons learned for effort estimation and defect prediction. We applied automated clustering tools to effort and defect data sets from the PROMISE repository, and rule learners then generated lessons from all the data, from local projects, and from each cluster. The results indicate that lessons learned from combining small parts of different data sources (i.e., the clusters) were superior to both generalizations formed over all the data and local lessons formed from particular projects. We conclude that when researchers attempt to draw lessons from a historical data source, they should (a) ignore any existing local divisions into multiple sources, (b) cluster across all available data, and then (c) restrict the learning of lessons to the clusters from other sources that are nearest to the test data.
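
The conclusion describes a "cluster first, then learn locally" workflow. Below is a minimal sketch of that idea, assuming scikit-learn's KMeans and a decision tree as stand-ins for the clustering and rule-learning tools actually used; the data-loading step and feature layout are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: pool data from all projects (ignoring project
# boundaries), cluster it, then predict each test row with a learner
# trained only on the cluster nearest to that row.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def cluster_then_learn(X_all, y_all, X_test, n_clusters=8):
    # Step (b): cluster across all available data, ignoring local divisions.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_all)

    # Global model used only as a fallback if a cluster ends up unusable.
    global_model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_all, y_all)

    # One learner per cluster, trained only on that cluster's rows.
    models = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        if mask.sum() > 0:
            models[c] = DecisionTreeClassifier(max_depth=4, random_state=0).fit(
                X_all[mask], y_all[mask])

    # Step (c): route each test row to the model of its nearest cluster centroid.
    nearest = km.predict(X_test)
    return np.array([models.get(c, global_model).predict(x.reshape(1, -1))[0]
                     for c, x in zip(nearest, X_test)])
```

A decision tree is used here purely for brevity; any rule learner could be dropped into the same per-cluster loop without changing the overall structure.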