
The performance of the Nearest Neighbor (NN) classifier depends heavily on the distance (or similarity) function used to find the NN of an input test pattern. Many proposed algorithms try to optimize the accuracy of the NN rule using a weighted distance function. In this scheme, a weight parameter is learned for each training instance, and these weights are used in the generalization phase to find the NN of an input test pattern. The Weighted Distance Nearest Neighbor (WDNN) algorithm adjusts the weight parameters to maximize the leave-one-out classification rate on the training set. This procedure leads to weights that overfit the training data, which degrades the performance of the method, especially in noisy environments.
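A minimal sketch of the weighted-distance NN rule described above may help fix ideas. One common formulation scales the Euclidean distance to each training instance by that instance's learned weight; the exact form used by WDNN may differ, and all names here (`wdnn_predict`, `X_train`, `w`) are illustrative, not from the paper:

```python
import numpy as np

def wdnn_predict(x, X_train, y_train, w):
    """Classify x by the training instance with the smallest weighted distance."""
    # Euclidean distance to every training instance, scaled by its weight.
    dists = w * np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]  # label of the weighted nearest neighbor
```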

In this paper, we propose an enhanced version of WDNN, called Overfit Avoidance for WDNN (OAWDNN), that significantly outperforms the original algorithm in the generalization phase. The proposed method uses an early stopping approach to decrease the instance weights specified by WDNN, which implicitly smooths the class boundary and consequently improves generalization.
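The following is a hedged sketch of the early-stopping idea under the assumption that weight adjustment is iterative and a held-out set is monitored; `update_weights` is a hypothetical stand-in for one WDNN weight-update pass, and `wdnn_predict` is the sketch shown earlier. The paper's actual stopping criterion may differ:

```python
import numpy as np

def train_with_early_stopping(X_tr, y_tr, X_val, y_val, update_weights,
                              max_iters=100, patience=5):
    w = np.ones(len(X_tr))                    # start from uniform instance weights
    best_w, best_acc, stall = w.copy(), 0.0, 0
    for _ in range(max_iters):
        w = update_weights(w, X_tr, y_tr)     # one WDNN-style weight-update pass
        acc = np.mean([wdnn_predict(x, X_tr, y_tr, w) == y
                       for x, y in zip(X_val, y_val)])
        if acc > best_acc:                    # keep the best weights seen so far
            best_w, best_acc, stall = w.copy(), acc, 0
        else:
            stall += 1
            if stall >= patience:             # validation accuracy has plateaued
                break
    return best_w
```

Stopping before the leave-one-out rate on the training set is fully maximized is what keeps the weights small and the class boundary smooth.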

To evaluate the robustness of the algorithm, class label noise is added to a variety of UCI datasets. The experimental results show the superiority of the proposed method in generalization accuracy.
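For concreteness, label noise of this kind is typically injected by flipping a fraction of the class labels to a different, randomly chosen class; the helper below is an illustrative sketch, not the paper's exact protocol:

```python
import numpy as np

def add_label_noise(y, rate, seed=0):
    """Flip a fraction `rate` of labels to a different, randomly chosen class."""
    rng = np.random.default_rng(seed)
    y_noisy = np.asarray(y).copy()
    classes = np.unique(y_noisy)
    flip = rng.random(len(y_noisy)) < rate    # which instances get corrupted
    for i in np.where(flip)[0]:
        y_noisy[i] = rng.choice(classes[classes != y_noisy[i]])
    return y_noisy
```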


Keywords

Nearest Neighbor (NN) Classification, Instance Weighting, Overfitting Avoidance, Robustness, Noisy Environments.