The Nature of Statistical Learning Theory
Springer Science & Business Media, November 19, 1999. 314 pages.

The aim of this book is to discuss the fundamental ideas behind the statistical theory of learning and generalization. It treats learning as a general problem of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on the main results of learning theory and their connections to fundamental problems in statistics. These include:

* the setting of learning problems based on the model of minimizing a risk functional from empirical data
* a comprehensive analysis of the empirical risk minimization (ERM) principle, including necessary and sufficient conditions for its consistency
* non-asymptotic bounds on the risk achieved using the ERM principle
* principles for controlling the generalization ability of learning machines trained on small samples, based on these bounds
* the Support Vector methods that control generalization ability when estimating functions from small samples.

The second edition of the book contains three new chapters devoted to further developments of the learning theory and SVM techniques. These include:

* the theory of direct methods of learning, based on solving multidimensional integral equations for density, conditional probability, and conditional density estimation
* a new inductive principle of learning.

Written in a readable and concise style, the book is intended for statisticians, mathematicians, physicists, and computer scientists. Vladimir N. Vapnik is Technology Leader at AT&T Labs-Research and Professor at London University. He is one of the founders of statistical learning theory and the author of seven books published in English, Russian, German, and Chinese.
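The ERM principle the blurb describes can be illustrated with a minimal sketch (not from the book): given a sample, choose from a fixed set of candidate functions the one with the smallest average loss on the data. The threshold function class and the toy sample below are illustrative assumptions, not material from the text.

```python
# A minimal pure-Python sketch of empirical risk minimization (ERM):
# pick, from a candidate set, the function with the lowest average loss
# on the observed sample. The function class here is the 1-D threshold
# rules f_t(x) = 1 if x >= t, else 0 (an assumed toy class).

def empirical_risk(t, sample):
    """Average 0-1 loss of the threshold rule x >= t on the sample."""
    return sum(1 for x, y in sample if (x >= t) != y) / len(sample)

def erm_threshold(sample, candidates):
    """Choose the candidate threshold minimizing the empirical risk."""
    return min(candidates, key=lambda t: empirical_risk(t, sample))

# Toy sample: the label flips to 1 around x = 0.5, with one noisy point
# (0.3, 1), so even the best rule in the class has nonzero empirical risk.
sample = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.7, 1), (0.3, 1)]
candidates = [i / 10 for i in range(11)]

t_hat = erm_threshold(sample, candidates)
print(t_hat, empirical_risk(t_hat, sample))
```

The book's central questions start where this sketch stops: when does the minimizer of the empirical risk also have small expected risk, and how fast does one converge to the other as the sample grows.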