Zliobaite, I., 2011. Controlled permutations for testing adaptive classifiers. In: Fourteenth International Conference on Discovery Science (DS 2011), 5-7 Oct 2011, Espoo, Finland, pp. 365-379.
Full text not available from this repository.
We study the evaluation of online classifiers that are designed to adapt to changes in the data distribution over time (concept drift). The standard procedure for evaluating such classifiers is test-then-train, which iteratively uses each incoming instance first for testing and then for updating the classifier. We observe that this form of evaluation risks overfitting, since a dataset is processed only once in a fixed sequential order while every output of the classifier depends on the instances seen so far. The problem is particularly serious when several classifiers are compared, since the same test set arranged in a different order may indicate a different winner. To reduce this risk we propose running multiple tests with permuted data. A random permutation is not suitable, as it would make the data distribution uniform over time; as a result, the changes, and thus the need for adaptive classifiers, would be lost. We develop three permutation techniques with theoretical control mechanisms that ensure that the different distributions in the data are preserved while the data order is perturbed. The main idea is to manipulate blocks of data, keeping individual instances close together. Our permutations reduce the risk of overfitting by making it possible to analyze the sensitivity of classifiers to variations in the data order.
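The abstract describes two mechanisms: test-then-train (prequential) evaluation and order perturbation via block manipulation. The following Python sketch illustrates both under stated assumptions: the classifier is a hypothetical object with `predict` and `update` methods, and the neighbor-block swap is only a simplified illustration of the block idea, not one of the three controlled permutation techniques developed in the paper.

```python
import random
from typing import Iterable, List, Sequence, Tuple

def test_then_train(classifier, stream: Iterable[Tuple[Sequence[float], int]]) -> float:
    """Test-then-train evaluation: each instance is first used to test the
    classifier, then to update it, in a single fixed pass over the data."""
    correct = total = 0
    for x, y in stream:
        correct += int(classifier.predict(x) == y)  # test first ...
        classifier.update(x, y)                     # ... then train
        total += 1
    return correct / total if total else 0.0

def neighbor_block_swap(data: Sequence, block_size: int, seed: int = 0) -> List:
    """Split the stream into consecutive blocks and randomly swap disjoint
    pairs of adjacent blocks. Every instance stays within one block length
    of its original position, so changes in the distribution survive the
    reordering, unlike under a uniform random shuffle."""
    rng = random.Random(seed)
    blocks = [list(data[i:i + block_size]) for i in range(0, len(data), block_size)]
    for i in range(0, len(blocks) - 1, 2):
        if rng.random() < 0.5:
            blocks[i], blocks[i + 1] = blocks[i + 1], blocks[i]
    return [instance for block in blocks for instance in block]

# Hypothetical usage: stream is a list of (features, label) pairs and Clf is
# an incremental learner. The spread of scores across permuted runs indicates
# how sensitive the classifier is to variations in the data order:
# scores = [test_then_train(Clf(), neighbor_block_swap(stream, 100, seed=s))
#           for s in range(10)]
```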
Item Type: Conference or Workshop Item (Paper)
Subjects: Generalities > Computer Science and Informatics > Artificial Intelligence
Group: School of Design, Engineering & Computing > Smart Technology Research Centre
Deposited By: Dr Indre Zliobaite
Deposited On: 14 Oct 2011 13:52
Last Modified: 07 Mar 2013 15:48