Decision Forests for Computer Vision and Medical Image Analysis
A. Criminisi and J. Shotton
Springer 2013, XIX, 368 p. 143 illus., 136 in color. ISBN 978-1-4471-4929-3
-
This book presents a unified, efficient model of decision forests which can be used in a number of applications, such as scene recognition from photographs, object recognition in images and automatic diagnosis from radiological scans. Such applications have traditionally been addressed by disparate supervised or unsupervised machine learning techniques.
However, in this book, diverse learning tasks including regression, classification and semi-supervised learning are all seen as instances of the same general decision forest model. The unified framework further extends to novel uses of forests in tasks such as density estimation and manifold learning. This unification carries both theoretical and practical advantages. For instance, the single underlying model gives us the opportunity to implement and optimize the general algorithm for all these tasks only once, and then easily adapt it to individual applications with relatively small changes.
Part I describes the general forest model which unifies classification, regression, density estimation, manifold learning, semi-supervised learning and active learning under the same flexible framework. The proposed model may be used in either a discriminative or a generative way, and may be applied to discrete or continuous, labelled or unlabelled data. It is based on a conventional training-testing framework, with the training phase optimizing a well-defined energy function. Tasks as diverse as classification and density estimation, whether supervised or unsupervised, can all be addressed simply by choosing a task-specific objective function and output prediction function; the sketch below makes this concrete.
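To illustrate the unification idea, here is a minimal sketch assuming numpy (this is not the book's accompanying library or its API; all function names are invented for the example): a single randomized split-search routine becomes a classification node or a regression node purely through the choice of impurity functional passed to it.

```python
import numpy as np

def class_entropy(y):
    """Shannon entropy of discrete labels: the classification objective."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def variance(y):
    """Variance of continuous targets: the regression objective."""
    return float(np.var(y))

def information_gain(y, left_mask, impurity):
    """Generic gain: impurity(parent) minus weighted impurity of children."""
    n, nl = len(y), int(left_mask.sum())
    if nl == 0 or nl == n:
        return -np.inf
    return (impurity(y)
            - nl / n * impurity(y[left_mask])
            - (n - nl) / n * impurity(y[~left_mask]))

def best_axis_aligned_split(X, y, impurity, n_candidates=32, rng=None):
    """Randomized search over axis-aligned splits at one forest node.
    The task (classification vs regression) enters only via `impurity`."""
    rng = rng or np.random.default_rng()
    best = (-np.inf, None, None)              # (gain, feature, threshold)
    for _ in range(n_candidates):
        j = int(rng.integers(X.shape[1]))     # random feature index
        t = rng.choice(X[:, j])               # random threshold from the data
        gain = information_gain(y, X[:, j] < t, impurity)
        if gain > best[0]:
            best = (gain, j, t)
    return best

# The same routine serves both tasks:
rng = np.random.default_rng(0)
Xc, yc = rng.normal(size=(200, 2)), rng.integers(0, 2, 200)   # classification
Xr, yr = rng.normal(size=(200, 2)), rng.normal(size=200)      # regression
print(best_axis_aligned_split(Xc, yc, class_entropy, rng=rng))
print(best_axis_aligned_split(Xr, yr, variance, rng=rng))
```

Density and semi-supervised objectives slot into the same impurity hook, which is the practical payoff described above: one optimized core, many tasks.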
Part II is a collection of invited chapters. Here various researchers show how it is possible to build different applications on top of the general forest model. Kinect-based player segmentation, semantic segmentation of photographs and automatic diagnosis of brain lesions are amongst the many applications discussed here.
Part III presents implementation details, documentation for the provided research software library, and some concluding remarks.
-
Decision Forests for Computer Vision and Medical Image Analysis
A. Criminisi and J. Shotton
Springer 2013
Chapter 4: Classification Forests
Exercise 4.1
Using many trees and linear splits reduces artifacts.
Exercise 4.2
The quality of the uncertainty away from training data is affected by the type of split function (weak learner).
Exercise 4.3
Using linear splits produces possibly better separating surfaces.
Exercise 4.4
Reducing the tree depth may cause underfitting and lower confidence.
Exercise 4.5
Increasing randomness may reduce overall prediction confidence (see the sketch below).
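The effects in Exercises 4.4 and 4.5 are easy to reproduce outside the book's own software. A minimal sketch, using scikit-learn's RandomForestClassifier as a stand-in for the book's forests (the parameter values are illustrative only): shallower trees and more per-node randomness both lower the mean prediction confidence.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
grid = np.random.default_rng(0).uniform(-2, 3, size=(500, 2))  # test points

for depth in (2, 8):                 # Exercise 4.4: shallow trees underfit
    for max_feats in (1, None):      # Exercise 4.5: 1 = more randomness
        rf = RandomForestClassifier(n_estimators=100, max_depth=depth,
                                    max_features=max_feats, random_state=0)
        rf.fit(X, y)
        conf = rf.predict_proba(grid).max(axis=1).mean()
        print(f"depth={depth}, max_features={max_feats}: "
              f"mean confidence {conf:.3f}")
```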
Chapter 5: Regression Forests
Exercise 5.1
Large tree depth may lead to overfitting.
Exercise 5.2
Larger training noise yields larger prediction uncertainty (wider pink region).
Exercise 5.3
Non-linear curve fitting in diverse examples. Note the relatively smooth interpolation and extrapolation behaviour.
Exercise 5.4
Single-function regression does not capture the inherently ambiguous central region, but at least it returns an associated high uncertainty (see the sketch below).
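One way to expose the uncertainty band (the "pink region") of Exercises 5.2-5.4 with off-the-shelf tools, again treating scikit-learn as a stand-in for the book's forests: take the spread of the individual trees' predictions as a rough predictive band. The book's regression forests store richer distributions at the leaves, so this is only an approximation; raising `noise` widens the band, and the band also grows in the extrapolation region.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
noise = 0.3                                   # Exercise 5.2: raise this and
x = rng.uniform(-3, 3, 200)                   # the band below widens
y = np.sin(x) + rng.normal(0, noise, 200)
rf = RandomForestRegressor(n_estimators=200, max_depth=6, random_state=0)
rf.fit(x[:, None], y)

xt = np.linspace(-5, 5, 11)[:, None]          # includes extrapolation regions
per_tree = np.stack([t.predict(xt) for t in rf.estimators_])
mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
for xi, m, s in zip(xt.ravel(), mean, std):
    print(f"x={xi:+.1f}  mean={m:+.3f}  band=+/-{s:.3f}")
```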
Chapter 6: Density Forests
Exercise 6.1
Overly deep trees may cause overfitting.
Exercise 6.2
Overly deep trees may cause overfitting.
Exercise 6.3
Overly deep trees may cause overfitting.
Exercise 6.4
Overly deep trees may cause overfitting. Some of the visible streaky artifacts are due to the use of axis-aligned weak learners (see the sketch below).
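Density forests are not available in mainstream libraries, so the following is a hand-rolled 1-D sketch in their spirit (the 2-D streaky artifacts of Exercise 6.4 need two dimensions, which this sketch omits for brevity): unsupervised axis-aligned splits chosen by a Gaussian differential-entropy gain, with a Gaussian fitted at each leaf. The partition-function normalization used in the book is left out, so the output is an unnormalized density; raising `max_depth` reproduces the overfitting of Exercises 6.1-6.3.

```python
import numpy as np

def log_var(s):
    return np.log(np.var(s) + 1e-9)   # 1-D analogue of log|covariance|

def build(s, depth, max_depth, rng, n_total):
    # Leaf: mixture weight plus a Gaussian fit. Very deep trees end up
    # fitting one Gaussian per handful of points, i.e. they overfit.
    if depth == max_depth or len(s) < 4:
        return ('leaf', len(s) / n_total, s.mean(), s.std() + 1e-3)
    best = (-np.inf, None)
    for t in rng.choice(s, size=16):  # randomized threshold candidates
        l, r = s[s < t], s[s >= t]
        if len(l) < 2 or len(r) < 2:
            continue
        # Differential-entropy gain under a Gaussian model
        gain = log_var(s) - (len(l) * log_var(l)
                             + len(r) * log_var(r)) / len(s)
        if gain > best[0]:
            best = (gain, t)
    if best[1] is None:
        return ('leaf', len(s) / n_total, s.mean(), s.std() + 1e-3)
    t = best[1]
    return ('split', t,
            build(s[s < t], depth + 1, max_depth, rng, n_total),
            build(s[s >= t], depth + 1, max_depth, rng, n_total))

def density(node, x):
    while node[0] == 'split':
        node = node[2] if x < node[1] else node[3]
    _, w, mu, sd = node                # weighted Gaussian of the leaf cell
    return w * np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
s = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.8, 300)])
forest = [build(s, 0, 4, np.random.default_rng(i), len(s)) for i in range(10)]
for x in (-2.0, 0.0, 2.0):             # forest average over the ten trees
    print(x, np.mean([density(t, x) for t in forest]))
```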
Chapter 8: Semi-supervised Classification Forests
Exercise 8.1
Note the larger uncertainty in the central region (left image). A single tree is always over-confident.
Exercise 8.2
Adding further supervised data in the central region helps increase the prediction confidence.
Exercise 8.3
Confidence decreases with training noise and increases with tree depth.
Exercise 8.4
Single trees are over-confident. Using a forest of many random trees produces smooth uncertainty in the transition regions (see the sketch below).
Exercise 8.5
Increasing the amount of supervision in regions of low confidence increases the prediction accuracy and the overall confidence.
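The transductive, semi-supervised forests of this chapter have no drop-in open-source equivalent. As a rough stand-in for the point of Exercises 8.1 and 8.4, this sketch (scikit-learn, illustrative parameters) contrasts a single tree's posterior with the forest average on unlabelled points lying in the gap between two labelled clusters: the single tree answers with a hard 0 or 1, while the averaged forest posterior is less extreme.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 0.7, (100, 2)),   # labelled cluster, class 0
               rng.normal(3, 0.7, (100, 2))])   # labelled cluster, class 1
y = np.r_[np.zeros(100), np.ones(100)]
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

gap = np.array([[0.0, 0.0], [0.5, -0.5], [-0.5, 0.5]])  # central region
single = rf.estimators_[0].predict_proba(gap)[:, 1]      # one tree
forest = rf.predict_proba(gap)[:, 1]                     # forest average
print("single tree p(class 1):", single)   # typically a hard 0 or 1
print("forest      p(class 1):", forest)   # less extreme: honest doubt
```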
-