Neural Networks for Pattern Recognition

This book provides the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts of pattern recognition, the book describes techniques for modelling probability density functions, and discusses the properties and relative merits of the multi-layer perceptron and radial basis function network models. It also motivates the use of various forms of error functions, and reviews the principal algorithms for error function minimization. As well as providing a detailed discussion of learning and generalization in neural networks, the book also covers the important topics of data pre-processing, feature extraction, and prior knowledge. The book concludes with an extensive treatment of Bayesian techniques and their applications to neural networks.

What people are saying

User ratings

5 stars: 7
4 stars: 6
3 stars: 2
2 stars: 0
1 star: 1

User review

It has been a long time since 1995, and many new techniques and important developments have taken place in the field of A.I. and, more concretely, machine learning. Still, this book has aged very well, for two reasons. First, it covers the fundamental techniques and concepts that every practitioner must understand and be able to make use of, such as non-parametric techniques for density estimation (kNN), dimensionality reduction (PCA), and mixture models, in addition to, of course, neural networks. Second, with its last chapter on Bayesian techniques, this book paves the way for moving on to modern techniques like deep energy models and deep belief networks.
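To give a flavour of that first point, here is a minimal sketch of k-nearest-neighbour density estimation in Python with NumPy. It is my own illustration, not code from the book; the function name, the 1-D simplification, and the toy Gaussian data are all invented for the example.

```python
import numpy as np

def knn_density(x, data, k=10):
    """k-NN density estimate: p(x) ~ k / (N * V), where V is the size
    of the smallest ball around x containing k data points."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    # Distances from x to every sample point (1-D data for simplicity).
    dists = np.sort(np.abs(data - x))
    r = dists[k - 1]           # radius of the ball holding k neighbours
    volume = 2.0 * r           # the "volume" of a 1-D ball is its length
    return k / (n * volume)

# Toy example: estimate the density of a standard Gaussian sample at 0.
rng = np.random.default_rng(0)
sample = rng.normal(size=1000)
print(knn_density(0.0, sample, k=25))  # near 1/sqrt(2*pi) ~ 0.399
```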
The explanations are clear and easy to read. Properties of, and advances based on, neural networks are presented in a principled way in the context of statistical pattern recognition. The exercises are wisely chosen to ensure understanding of the presented results and of the conditions under which they were derived.
But this book goes beyond theory. A chapter is devoted to optimization techniques, i.e. the algorithms used to train neural networks in practice. After reading that chapter and going through the exercises, you will have a good understanding of conjugate gradients and BFGS.
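To see those two optimizers in action without re-deriving them, one can lean on SciPy. This is just a sketch of my own, comparing conjugate gradients and BFGS on the standard Rosenbrock test function rather than on an actual network from the book.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Minimize the Rosenbrock function (minimum at [1, 1, ..., 1]) with the
# two gradient-based methods the chapter covers: CG and quasi-Newton BFGS.
x0 = np.array([-1.2, 1.0, -0.5, 0.8])
for method in ("CG", "BFGS"):
    res = minimize(rosen, x0, jac=rosen_der, method=method)
    print(f"{method}: f = {res.fun:.3e} after {res.nit} iterations")
```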
The chapter on how to improve generalization, either by optimizing the structure of the network or by combining multiple classifiers, is kept at an intuitive level, yet the concepts are well motivated and the few mathematical details help achieve a solid grasp of why those ideas work. As in the rest of the chapters, it is explained how to carry this out in practice, i.e. how I can check whether my classifier has actually become better. By the end of the chapter the reader is familiar with the concepts of regularization (weight decay), cross-validation, and bagging.
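For instance, checking whether a regularized model generalizes better can be as simple as cross-validating the weight-decay coefficient. The following NumPy sketch does this for ridge (weight-decay) linear regression; the data, the parameter grid, and the function names are my own invention for illustration.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form weight-decay (ridge) solution: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_error(X, y, lam, folds=5):
    """Mean squared validation error of ridge regression under k-fold CV."""
    idx = np.arange(len(y))
    errors = []
    for f in range(folds):
        val = idx % folds == f                    # hold out every folds-th point
        w = ridge_fit(X[~val], y[~val], lam)      # train on the rest
        errors.append(np.mean((X[val] @ w - y[val]) ** 2))
    return np.mean(errors)

# Toy data: noisy linear target with several irrelevant input dimensions.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
y = X[:, 0] - 2.0 * X[:, 1] + 0.5 * rng.normal(size=100)

# Pick the decay coefficient with the lowest cross-validated error.
lams = [0.01, 0.1, 1.0, 10.0]
best = min(lams, key=lambda lam: cv_error(X, y, lam))
print("best weight-decay coefficient:", best)
```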
 


About the author (1995)

Chris Bishop is at Aston University.
