Technical Reports - Query Results
Your query term was 'number = 2001-01'.
1 report found.
- OFAI-TR-2001-01 (55kB gzipped PostScript file, 117kB PDF file)
An Evaluation of Grading Classifiers
- Alexander K. Seewald, Johannes Fürnkranz
- In this paper, we introduce grading, a novel meta-classification
scheme.
While stacking uses the predictions of the base classifiers as
meta-level attributes, we use "graded" predictions (i.e.,
predictions that have been marked as correct or incorrect) as
meta-level classes. For each base classifier, one meta-classifier
is learned whose task is to predict when the base
classifier will err.
Hence, just like stacking may be viewed as a generalization of
voting, grading may be viewed as a generalization of selection by
cross-validation and therefore fills a conceptual gap in the space
of meta-classification schemes.
Grading may also be interpreted as a technique for turning the
error-characterizing technique introduced by
Bay and Pazzani (2000) into a powerful learning
algorithm by resorting to an ensemble of meta-classifiers.
Our experimental evaluation shows that this step results in a
performance gain that is quite comparable to that achieved by
stacking, while both grading and stacking outperform their simpler
counterparts, voting and selection by cross-validation.
Keywords: Machine Learning, Classification, Ensembles
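The scheme described in the abstract can be sketched as follows. This is a minimal illustration of the grading idea, not the authors' implementation: each base classifier gets its own meta-classifier ("grader") trained on the original attributes to predict whether the base classifier's cross-validated prediction was correct, and the final prediction is a confidence-weighted vote over the base predictions. The choice of scikit-learn, the particular base learners, and logistic regression as the grader are illustrative assumptions.

```python
# Sketch of grading (as described in the abstract, not the authors' code):
# one meta-classifier per base classifier predicts when that base
# classifier will err; predictions graded as likely correct are combined.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base_clfs = [DecisionTreeClassifier(random_state=0),
             GaussianNB(),
             KNeighborsClassifier()]

graders = []
for clf in base_clfs:
    # "Graded" predictions: mark each cross-validated training prediction
    # as correct (1) or incorrect (0) ...
    cv_pred = cross_val_predict(clf, X_tr, y_tr, cv=5)
    graded = (cv_pred == y_tr).astype(int)
    # ... then learn a meta-classifier whose task is to predict, from the
    # original attributes, when this base classifier will err.
    graders.append(LogisticRegression(max_iter=1000).fit(X_tr, graded))
    clf.fit(X_tr, y_tr)  # refit base classifier on the full training set

# Combine: vote over base predictions, weighted by each grader's
# estimated probability that its base classifier is correct.
n_classes = len(np.unique(y))
votes = np.zeros((len(X_te), n_classes))
for clf, grader in zip(base_clfs, graders):
    pred = clf.predict(X_te)
    proba = grader.predict_proba(X_te)
    if 1 in grader.classes_:
        conf = proba[:, list(grader.classes_).index(1)]
    else:  # degenerate case: grader never saw a "correct" label
        conf = np.zeros(len(X_te))
    for i, (p, c) in enumerate(zip(pred, conf)):
        votes[i, p] += c
final = votes.argmax(axis=1)
print("grading accuracy: %.3f" % (final == y_te).mean())
```

As the abstract notes, this generalizes selection by cross-validation: instead of picking the single base classifier with the best cross-validated accuracy, every base classifier contributes whenever its grader judges it reliable on the given instance.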