Abstract—Ensembles of classifiers are an active research topic in machine learning. An ensemble is obtained by generating base classifiers with other machine learning methods and then combining them, with the aim of achieving higher predictive accuracy than any of the base classifiers alone. One of the most popular methods for creating ensembles is boosting, a family of methods of which AdaBoost is the most prominent member. Boosting is a well-established, general approach for improving the performance of any learning algorithm: it produces an accurate prediction rule by combining rough, moderately inaccurate rules of thumb. Like bagging and other committee-based methods, boosting combines many weak classifiers into a powerful committee; the weak learner is applied sequentially to reweighted versions of the training data, and the predictions of the resulting classifiers are combined into a single strong classifier. This paper presents a comprehensive account of the evolution of boosting and evaluates boosting against bagging on various criteria (parameters).
Index Terms—Ensemble, machine learning, predictive accuracy, classifiers, boosting
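The sequential reweighting mechanism described in the abstract can be illustrated with a minimal sketch of discrete AdaBoost using decision stumps as weak learners. This is an illustrative reconstruction, not code from the paper; the function names adaboost_fit and adaboost_predict and the choice of scikit-learn's DecisionTreeClassifier as the weak learner are assumptions made here for clarity.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    # Discrete AdaBoost; labels y are assumed to be in {-1, +1}.
    n = len(y)
    w = np.full(n, 1.0 / n)               # start with uniform example weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)   # weak learner: decision stump
        stump.fit(X, y, sample_weight=w)              # fit on the reweighted data
        pred = stump.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)     # weighted training error
        if err >= 0.5:                    # weak learner no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / (err + 1e-10))  # classifier weight
        w *= np.exp(-alpha * y * pred)    # up-weight misclassified examples
        w /= w.sum()                      # renormalize the distribution
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    # Final strong classifier: weighted vote of the weak classifiers.
    agg = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(agg)

Each round concentrates the weight distribution on the examples the current committee gets wrong, which is precisely the "modified versions of the data" the abstract refers to; bagging, by contrast, resamples the data uniformly and independently in each round.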
*Associate Professor & Head, CE-IT, CITC (amitganu@yahoo.com)
**Dean, Faculty of Technology & Engineering (ypkosta@yahoo.com) (IEEE Member) (SCPM, Stanford University), Charotar University of Science and Technology (CHARUSAT), Education Campus, Changa – 388421, Ta – Petlad, Dist – Anand, Gujarat (INDIA)
Cite: Amit P Ganatra and Yogesh P Kosta, "Comprehensive Evolution and Evaluation of Boosting," International Journal of Computer Theory and Engineering, vol. 2, no. 6, pp. 931-936, 2010.