Combining Regression Trees and Radial Basis Function Networks
paper by: M. Orr, J. Hallam, K. Takezawa, A. Murray, S. Ninomiya, M. Oide, T. Leonard
presentation by: Vladimir Vacić
Contents:
Regression Trees
Radial Basis Function Neural Networks
Combining RTs and RBF NNs
Method
Experimental Results
Conclusion
Combining RTs and RBF NNs
RT generates candidate units for the RBF NN
RT specifies RBF centers and radii (see the sketch below)
RT influences the order in which candidate units are evaluated
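A minimal sketch of this mapping, assuming Gaussian units and axis-aligned hyper-rectangles; the scale factor alpha and the exact radius formula here are illustrative, not taken from the paper:

```python
import numpy as np

def node_to_rbf(lower, upper, alpha=1.0):
    """Map a tree node's hyper-rectangle [lower, upper] to an RBF
    candidate: center at the rectangle midpoint, one radius per
    dimension proportional to the side length (scaled by alpha)."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    center = (lower + upper) / 2.0
    radii = alpha * (upper - lower) / 2.0  # alpha = radius / half-side ratio
    return center, radii

def rbf_activation(x, center, radii):
    """Gaussian basis function with a separate radius per dimension."""
    z = (np.asarray(x, float) - center) / radii
    return float(np.exp(-0.5 * np.dot(z, z)))
```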
Method
Generating the regression tree:
recursively split the input space along the k dimensions
determine an output value for each node (see the sketch below)
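A bare-bones version of that recursion might look as follows; the variance-reduction split rule and the max_depth/min_leaf stopping conditions are assumed details for the sketch, not the authors' exact choices:

```python
import numpy as np

def build_tree(X, y, depth=0, max_depth=5, min_leaf=5):
    """Recursively split along the k input dimensions; each node
    stores its mean output and the best axis-aligned cut found."""
    node = {"mean": float(np.mean(y)), "left": None, "right": None}
    if depth >= max_depth or len(y) <= min_leaf:
        return node
    best = None  # (sse, dim, threshold)
    for dim in range(X.shape[1]):
        for t in np.unique(X[:, dim])[:-1]:
            mask = X[:, dim] <= t
            if mask.sum() < min_leaf or (~mask).sum() < min_leaf:
                continue
            sse = (np.var(y[mask]) * mask.sum()
                   + np.var(y[~mask]) * (~mask).sum())
            if best is None or sse < best[0]:
                best = (sse, dim, t)
    if best is not None:
        _, dim, t = best
        mask = X[:, dim] <= t
        node.update(dim=dim, threshold=float(t),
                    left=build_tree(X[mask], y[mask], depth + 1,
                                    max_depth, min_leaf),
                    right=build_tree(X[~mask], y[~mask], depth + 1,
                                     max_depth, min_leaf))
    return node
```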
Method
Selecting RBF units from the set of candidates:
necessary because no pruning of the regression tree has been performed so far
an overly complex RBF network runs the risk of over-fitting
a complex RBF network is also computationally expensive
Method
Selecting RBF units from the set of candidates:
standard selection methods are forward selection, backward elimination, a combination of the two, and full combinatorial search
the problem with forward selection is that one choice may block subsequent informative choices (see the sketch below)
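For contrast, a generic greedy forward selection could be sketched like this; it is not the paper's procedure, and score stands for any model selection criterion over a set of units (lower is better), such as the BIC described later:

```python
def forward_select(candidates, score):
    """Greedy forward selection: repeatedly add the candidate unit
    that most improves the score.  A unit chosen early is never
    revisited, which is the weakness noted above."""
    selected, best = [], score([])
    improved = True
    while improved:
        improved = False
        for c in candidates:
            if c in selected:
                continue
            s = score(selected + [c])
            if s < best:  # lower score = better model
                best, pick, improved = s, c, True
        if improved:
            selected.append(pick)
    return selected
```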
Method
Using the tree to guide selection:
put the root node into the list of active nodes
for each active node, consider the effect of adding one or both children and of keeping or removing the parent
choose the combination that improves performance the most and update the active list
repeat until no combination improves performance (see the sketch below)
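Putting those steps together, one possible reading of the procedure is sketched below; it assumes tree nodes shaped like the dicts from build_tree above and a criterion function score (e.g., the BIC that follows), and is an interpretation of the slide rather than the authors' code:

```python
from itertools import combinations

def tree_guided_select(root, score):
    """Greedy tree-guided search: for each active node, try adding one
    or both of its children while keeping or removing the node itself,
    and accept the combination that improves the score the most."""
    selected, active = [root], [root]
    best = score(selected)
    while active:
        node = active.pop(0)
        children = [c for c in (node.get("left"), node.get("right")) if c]
        best_move, best_score = None, best
        for r in range(len(children) + 1):
            for added in combinations(children, r):
                for keep_parent in (True, False):
                    trial = [u for u in selected if keep_parent or u is not node]
                    trial.extend(added)
                    s = score(trial)
                    if s < best_score:  # lower score = better model
                        best_move, best_score = (list(added), keep_parent), s
        if best_move is not None:
            added, keep_parent = best_move
            if not keep_parent:
                selected = [u for u in selected if u is not node]
            selected.extend(added)
            active.extend(added)  # newly added children become active too
            best = best_score
    return selected
```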
Method
Model selection criterion:
Bayesian information criterion (BIC)
BIC imposes a penalty for model complexity and hence leads to smaller networks (one standard form is given below)
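The slide does not give the formula. For a regression model with Gaussian noise a standard form is BIC = n ln(SSE/n) + k ln(n), where n is the number of training points, k the number of free parameters (here, roughly the number of selected RBF units and their weights), and SSE the sum of squared errors; the paper's exact variant may differ. As a score function for the sketches above:

```python
import numpy as np

def bic(y_true, y_pred, k):
    """Bayesian information criterion for Gaussian-noise regression:
    n*ln(SSE/n) + k*ln(n).  Lower values indicate a better trade-off
    between goodness of fit and model complexity."""
    n = len(y_true)
    sse = float(np.sum((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    return n * np.log(sse / n) + k * np.log(n)
```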
Method
Note that so far we have had 2 free parameters:
p (controls the resulting network size)
α (controls the ratio of the RBF radii to the corresponding hyper-rectangle sizes)
Experimental Results
the authors report that the best experimentally determined p and α on the training set do not always yield the best performance on the test set
instead, they suggest taking a set of the best values of p and α from training and then finding the best combination on the test set
Experimental Results
Comparison with other learning methods:
linear least squares regression
k-nearest neighbor
ensembles of multilayer perceptrons
multilayer perceptrons trained using MCMC
multivariate adaptive regression splines (MARS) with bagging
Experimental Results
Datasets:
DELVE datasets (non-linear, high noise, 8- and 32-dimensional examples), generated from simulated robotic arms
soybean classification into three classes (good, fair, poor) from digital images
Conclusion
improvement and analysis of previous work by Kubat
combining RTs and RBF NNs as a technique is competitive with some leading modern methods