Bültmann & Gerriets
Statistical and Neural Classifiers
An Integrated Approach to Design
by Sarunas Raudys
Publisher: Springer
Series: Advances in Computer Vision and Pattern Recognition
Hardcover
ISBN: 978-1-85233-297-6
Edition: 2001
Published on 29 January 2001
Language: English
Dimensions: 241 mm (height) x 161 mm (width) x 27 mm (depth)
Weight: 620 grams
Extent: 295 pages

Price: €171.50
No shipping costs (within Germany)


Order now and pick it up at the bookshop, expected to be available from 22 October.

Shipping within the city is usually dispatched the same day.
Shipping elsewhere via Post/DHL usually takes 1-2 days.

Climate neutral
According to its own statement, the publisher does not yet produce climate-neutrally or offset the CO2 emissions from production. We therefore take on this offsetting by financially supporting suitable projects. You can find more details in our climate report.
Blurb

The classification of patterns is an important area of research which is central to all pattern recognition fields, including speech, image, robotics, and data analysis. Neural networks have been used successfully in a number of these fields, but so far their application has been based on a 'black box approach', with no real understanding of how they work. In this book, Sarunas Raudys - an internationally respected researcher in the area - provides an excellent mathematical and applied introduction to how neural network classifiers work and how they should be used.



Table of Contents

1. Quick Overview
 1.1 The Classifier Design Problem
 1.2 Single Layer and Multilayer Perceptrons
 1.3 The SLP as the Euclidean Distance and the Fisher Linear Classifiers
 1.4 The Generalisation Error of the EDC and the Fisher DF
 1.5 Optimal Complexity - The Scissors Effect
 1.6 Overtraining in Neural Networks
 1.7 Bibliographical and Historical Remarks
2. Taxonomy of Pattern Classification Algorithms
 2.1 Principles of Statistical Decision Theory
 2.2 Four Parametric Statistical Classifiers
 2.3 Structures of the Covariance Matrices
 2.4 The Bayes Predictive Approach to Design Optimal Classification Rules
 2.5 Modifications of the Standard Linear and Quadratic DF
 2.6 Nonparametric Local Statistical Classifiers
 2.7 Minimum Empirical Error and Maximal Margin Linear Classifiers
 2.8 Piecewise-Linear Classifiers
 2.9 Classifiers for Categorical Data
 2.10 Bibliographical and Historical Remarks
3. Performance and the Generalisation Error
 3.1 Bayes, Conditional, Expected, and Asymptotic Probabilities of Misclassification
 3.2 Generalisation Error of the Euclidean Distance Classifier
 3.3 Most Favourable and Least Favourable Distributions of the Data
 3.4 Generalisation Errors for Modifications of the Standard Linear Classifier
 3.5 Common Parameters in Different Competing Pattern Classes
 3.6 Minimum Empirical Error and Maximal Margin Classifiers
 3.7 Parzen Window Classifier
 3.8 Multinomial Classifier
 3.9 Bibliographical and Historical Remarks
4. Neural Network Classifiers
 4.1 Training Dynamics of the Single Layer Perceptron
 4.2 Non-linear Decision Boundaries
 4.3 Training Peculiarities of the Perceptrons
 4.4 Generalisation of the Perceptrons
 4.5 Overtraining and Initialisation
 4.6 Tools to Control Complexity
 4.7 The Co-Operation of the Neural Networks
 4.8 Bibliographical and Historical Remarks
5. Integration of Statistical and Neural Approaches
 5.1 Statistical Methods or Neural Nets?
 5.2 Positive and Negative Attributes of Statistical Pattern Recognition
 5.3 Positive and Negative Attributes of Artificial Neural Networks
 5.4 Merging Statistical Classifiers and Neural Networks
 5.5 Data Transformations for the Integrated Approach
 5.6 The Statistical Approach in Multilayer Feed-forward Networks
 5.7 Concluding and Bibliographical Remarks
6. Model Selection
 6.1 Classification Errors and their Estimation Methods
 6.2 Simplified Performance Measures
 6.3 Accuracy of Performance Estimates
 6.4 Feature Ranking and the Optimal Number of Features
 6.5 The Accuracy of the Model Selection
 6.6 Additional Bibliographical Remarks
Appendices
 A.1 Elements of Matrix Algebra
 A.2 The First Order Tree Type Dependence Model
 A.3 Temporal Dependence Models
 A.4 Pikelis Algorithm for Evaluating Means and Variances of the True, Apparent and Ideal Errors in Model Selection
 A.5 Matlab Codes (the Non-Linear SLP Training, the First Order Tree Dependence Model, and Data Whitening Transformation)
References

