Nearest neighbor sampling for better defect prediction
| Content Provider | ACM Digital Library |
|---|---|
| Author | Boetticher, Gary D. |
| Abstract | An important step in building effective predictive models applies one or more sampling techniques. Traditional sampling techniques include random, stratified, systematic, and clustered. The problem with these techniques is that they focus on the class attribute, rather than the non-class attributes. For example, if a test instance's nearest neighbor is from the opposite class of the training set, then it seems doomed to misclassification. To illustrate this problem, this paper conducts 20 experiments on five different NASA defect datasets (CM1, JM1, KC1, KC2, PC1) using two different learners (J48 and Naïve Bayes). Each dataset is divided into three groups: a training set and "nice"/"nasty" neighbor test sets. Using a nearest neighbor approach, "nice neighbors" are those test instances closest to same-class training instances, while "nasty neighbors" are closest to opposite-class training instances. The "nice" experiments average 94 percent accuracy and the "nasty" experiments average 20 percent accuracy. Based on these results, a new nearest neighbor sampling technique is proposed. |
| Starting Page | 1 |
| Ending Page | 6 |
| Page Count | 6 |
| ISBN | 1595931252 |
| DOI | 10.1145/1083165.1083173 |
| Language | English |
| Publisher | Association for Computing Machinery (ACM) |
| Publisher Date | 2005-05-15 |
| Publisher Place | New York |
| Access Restriction | Subscribed |
| Subject Keyword | Empirical software engineering; NASA data repository; Decision trees; Defect prediction; Nearest neighbor analysis |
| Content Type | Text |
| Resource Type | Article |
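
The abstract describes partitioning test instances by the class of their nearest training neighbor. The following is a minimal sketch of that "nice"/"nasty" split, not the authors' code: the Euclidean distance metric, the function name `split_nice_nasty`, and the toy random data standing in for a NASA dataset are all assumptions made for illustration.

```python
# Sketch of the nice/nasty nearest-neighbor partitioning described in the
# abstract. Assumptions: Euclidean distance on numeric features; binary labels.
import numpy as np

def split_nice_nasty(train_X, train_y, test_X, test_y):
    """Partition test instances by the class of their nearest training neighbor.

    "Nice" instances have a nearest training neighbor with the same label;
    "nasty" instances have a nearest neighbor from the opposite class.
    """
    nice_idx, nasty_idx = [], []
    for i, x in enumerate(test_X):
        # Distance from this test instance to every training instance.
        dists = np.linalg.norm(train_X - x, axis=1)
        nearest_label = train_y[np.argmin(dists)]
        (nice_idx if nearest_label == test_y[i] else nasty_idx).append(i)
    return np.array(nice_idx, dtype=int), np.array(nasty_idx, dtype=int)

# Toy usage with random data standing in for a defect dataset such as CM1.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(100, 5))
train_y = rng.integers(0, 2, size=100)
test_X = rng.normal(size=(20, 5))
test_y = rng.integers(0, 2, size=20)
nice, nasty = split_nice_nasty(train_X, train_y, test_X, test_y)
print(f"nice: {len(nice)} instances, nasty: {len(nasty)} instances")
```

Under this scheme, a learner trained on the training set would then be evaluated separately on the "nice" and "nasty" subsets, which is how the paper contrasts the 94 percent and 20 percent average accuracies.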