Classification of Large Microarray Data Sets Using Fast Random Forest Construction

E. A. Manilich*, Z. M. Ozsoyoglu, V. Trubachev, T. Radivoyevitch
Computer Science Department, Case Western Reserve University, Cleveland, Ohio 44106, USA. manilie@ccf.org

Proc LSS Comput Syst Bioinform Conf. August, 2010. Vol. 9, p. 82-91. Full-Text PDF

*To whom correspondence should be addressed.
Random forest is an ensemble classification algorithm. It performs well when most predictive variables are noisy and can be used when the number of variables is much larger than the number of observations. The use of bootstrap samples and restricted subsets of attributes makes it more powerful than simple ensembles of trees. The main advantage of a random forest classifier is its explanatory power: it measures variable importance, that is, the impact of each factor on the predicted class label. These characteristics make the algorithm well suited to microarray data, and it has been shown to build highly accurate models on high-dimensional microarray data sets. Current implementations of random forest in the machine learning and statistics communities, however, limit its usability for mining large data sets because they require that the entire data set remain permanently in memory. We propose a new framework, an optimized implementation of a random forest classifier, that addresses specific properties of microarray data, takes the computational complexity of the decision tree algorithm into consideration, and shows excellent computing performance while preserving predictive accuracy. The implementation is based on reducing overlapping computations and eliminating the dependency on the size of main memory. Its excellent computational performance makes the algorithm useful for interactive data analysis and data mining.
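For readers unfamiliar with the ingredients named above (bootstrap samples, restricted attribute subsets per split, and variable importance), the following is a minimal sketch of the standard random forest procedure applied to synthetic "microarray-like" data with far more variables than observations. It uses scikit-learn for illustration only and is not the optimized, memory-independent implementation proposed in the paper; the data set sizes and parameter values are arbitrary assumptions.

```python
# Generic illustration of the standard random forest ideas discussed in the
# abstract; NOT the paper's optimized implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_genes = 100, 5000                 # many more variables than observations
X = rng.normal(size=(n_samples, n_genes))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # only two "genes" are truly informative

forest = RandomForestClassifier(
    n_estimators=500,       # ensemble of trees, each grown on a bootstrap sample
    max_features="sqrt",    # restricted random subset of attributes at each split
    oob_score=True,         # out-of-bag estimate of predictive accuracy
    n_jobs=-1,
    random_state=0,
)
forest.fit(X, y)

print("Out-of-bag accuracy:", forest.oob_score_)
# Variable importance: the impact of each variable on the predicted class label
top = np.argsort(forest.feature_importances_)[::-1][:5]
print("Most important variables:", top)
```

In this toy setting the informative variables 0 and 1 should rank at the top of the importance list, which is the explanatory property the abstract highlights. Note that a standard in-memory implementation like this holds the entire data matrix in RAM, which is precisely the limitation the proposed framework is designed to remove.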