Any-Cost Discovery: Learning Optimal Classification Rules
Fully taking into account the hints possibly hidden in absent data, this paper proposes a new criterion for selecting splitting attributes when building a decision tree for a given dataset. In our approach, a certain cost must be paid to obtain an attribute value, and a further cost is paid when a prediction is erroneous. We use different scales for these two kinds of cost, instead of the single cost scale defined in previous work. We propose a new algorithm that builds a decision tree with a null-branch strategy to minimize the misclassification cost. When a consumer offers finite resources, the tree makes the best use of those resources while still yielding optimal results. We also consider discounts in test costs when groups of attributes are tested together. In addition, we offer advice on whether it is worthwhile to increase resources. Our results can be readily applied to real-world diagnosis tasks, such as medical diagnosis, where doctors must decide which tests to perform on a patient so as to minimize the misclassification cost within given resources.
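The abstract's core idea, trading an attribute's test cost against the misclassification cost it is expected to avoid under a resource budget, can be sketched as follows. This is a minimal illustration, not the paper's actual criterion: the benefit-per-cost scoring rule, the attribute names, and all cost values are assumptions, and the null-branch strategy and group discounts are not modeled.

```python
# Illustrative sketch of budget-aware, cost-sensitive attribute selection.
# The paper's real criterion, cost scales, and null-branch handling differ.

def choose_attribute(candidates, budget):
    """Pick the attribute with the highest expected misclassification-cost
    reduction per unit of test cost, among those affordable within budget.

    candidates: dict of attribute -> (test_cost, expected_mc_reduction)
    budget: remaining resources available for tests
    Returns the chosen attribute, or None if no affordable test pays off.
    """
    best_attr, best_ratio = None, 0.0
    for attr, (test_cost, mc_reduction) in candidates.items():
        if test_cost > budget:            # cannot afford this test
            continue
        ratio = mc_reduction / test_cost  # benefit gained per cost unit
        if ratio > best_ratio:
            best_attr, best_ratio = attr, ratio
    return best_attr

# Hypothetical diagnosis scenario; all numbers are made up for illustration.
candidates = {
    "blood_test": (20.0, 50.0),   # cheap test, large expected benefit
    "mri_scan":   (300.0, 80.0),  # expensive relative to its benefit
}
print(choose_attribute(candidates, budget=100.0))  # -> blood_test
```

With a budget of 100, the MRI is unaffordable and the blood test wins; with a budget under 20, no test is affordable and the procedure stops, which mirrors the paper's point that the best achievable tree depends on the resources offered.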
Year of publication: | 2005
---|---
Authors: | Ni Ailing ; Zhu Xiaofeng ; Zhang Chengqi
Other Persons: | Zhang, S (contributor) ; Jarvis, R (contributor)
Publisher: | Springer-Verlag
Availability: freely available
Similar items by person
-
A Hybrid Recommendation Approach for One and Only Items
Guo Xuetao, (2005)
-
Exchange rate modelling using news articles and economic data
Zhang Debbie, (2005)
-
A Strategy for Attributes Selection in Cost-Sensitive Decision Trees Induction
Zhang Shichao, (2008)