Bayesian Learning for Neural Networks by Radford M. Neal

Format: Paperback
Language: English
Condition: Brand New

Publisher Description
Artificial "neural networks" are widely used as flexible models for classification and regression applications, but questions remain about how the power of these models can be safely exploited when training data is limited. This book demonstrates how Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional training methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. A practical implementation of Bayesian neural network learning using Markov chain Monte Carlo methods is also described, and software for it is freely available over the Internet. Presupposing only basic knowledge of probability and statistics, this book should be of interest to researchers in statistics, engineering, and artificial intelligence.

Notes
Bayesian methods for neural networks are of interest to researchers in statistics, engineering, and artificial intelligence, and remain a very active research area in both statistics and artificial intelligence.
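The Markov chain Monte Carlo approach the description refers to is the hybrid (Hamiltonian) Monte Carlo method. As a rough illustration of the idea only, here is a minimal sketch of a hybrid Monte Carlo step applied to a toy one-dimensional standard-normal target; the target density, step size, and trajectory length are assumptions for the example, not the book's actual settings or code.

```python
import math
import random

def U(q):
    """Potential energy: minus the log target density (standard normal), up to a constant."""
    return 0.5 * q * q

def grad_U(q):
    """Gradient of the potential energy."""
    return q

def hmc_step(q, eps=0.1, n_leapfrog=20):
    """One hybrid Monte Carlo update: sample a momentum, simulate
    Hamiltonian dynamics with the leapfrog integrator, then accept or
    reject based on the change in total energy."""
    p = random.gauss(0.0, 1.0)             # fresh momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)     # initial half step for momentum
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new               # full step for position
        p_new -= eps * grad_U(q_new)       # full step for momentum
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)     # final half step for momentum
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p ** 2)
    return q_new if random.random() < math.exp(-dH) else q

random.seed(0)
q, chain = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    chain.append(q)
```

Because the leapfrog integrator nearly conserves the total energy, most proposals are accepted even though each one moves a long distance; that is the advantage over simple random-walk Metropolis that motivates its use for neural network posteriors.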
Table of Contents
1 Introduction
 1.1 Bayesian and frequentist views of learning
 1.2 Bayesian neural networks
 1.3 Markov chain Monte Carlo methods
 1.4 Outline of the remainder of the book
2 Priors for Infinite Networks
 2.1 Priors converging to Gaussian processes
 2.2 Priors converging to non-Gaussian stable processes
 2.3 Priors for nets with more than one hidden layer
 2.4 Hierarchical models
3 Monte Carlo Implementation
 3.1 The hybrid Monte Carlo algorithm
 3.2 An implementation of Bayesian neural network learning
 3.3 A demonstration of the hybrid Monte Carlo implementation
 3.4 Comparison of hybrid Monte Carlo with other methods
 3.5 Variants of hybrid Monte Carlo
4 Evaluation of Neural Network Models
 4.1 Network architectures, priors, and training procedures
 4.2 Tests of the behaviour of large networks
 4.3 Tests of Automatic Relevance Determination
 4.4 Tests of Bayesian models on real data sets
5 Conclusions and Further Work
 5.1 Priors for complex models
 5.2 Hierarchical Models — ARD and beyond
 5.3 Implementation using hybrid Monte Carlo
 5.4 Evaluating performance on realistic problems
A Details of the Implementation
 A.1 Specifications
  A.1.1 Network architecture
  A.1.2 Data models
  A.1.3 Prior distributions for parameters and hyperparameters
  A.1.4 Scaling of priors
 A.2 Conditional distributions for hyperparameters
  A.2.1 Lowest-level conditional distributions
  A.2.2 Higher-level conditional distributions
 A.3 Calculation of derivatives
  A.3.1 Derivatives of the log prior density
  A.3.2 Log likelihood derivatives with respect to unit values
  A.3.3 Log likelihood derivatives with respect to parameters
 A.4 Heuristic choice of stepsizes
 A.5 Rejection sampling from the prior
B Obtaining the software
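Chapter 2's "priors converging to Gaussian processes" refers to the result that, as the number of hidden units grows, the prior over functions of a one-hidden-layer network with suitably scaled Gaussian weight priors approaches a Gaussian process. The following sketch illustrates the flavour of that result numerically; the tanh activation, the specific prior standard deviations, and the 1/sqrt(H) output-weight scaling are illustrative assumptions, not the book's exact formulation.

```python
import math
import random

def sample_network_function(x, n_hidden, sigma_w=1.0, sigma_b=1.0):
    """Draw one function value f(x) from the prior of a one-hidden-layer
    tanh network with independent Gaussian weights. Output weights are
    scaled by 1/sqrt(n_hidden), so as n_hidden grows the prior over f
    approaches a Gaussian process (by a central-limit argument)."""
    total = random.gauss(0.0, sigma_b)                        # output bias
    for _ in range(n_hidden):
        u = random.gauss(0.0, sigma_w)                        # input-to-hidden weight
        a = random.gauss(0.0, sigma_b)                        # hidden-unit bias
        v = random.gauss(0.0, sigma_w / math.sqrt(n_hidden))  # scaled output weight
        total += v * math.tanh(u * x + a)
    return total

# Across many prior draws with a wide hidden layer, f(0.5) is
# approximately Gaussian with mean zero.
random.seed(0)
samples = [sample_network_function(0.5, n_hidden=1000) for _ in range(2000)]
mean = sum(samples) / len(samples)
```

Each hidden unit's contribution is an independent bounded random variable of order 1/sqrt(H), so their sum converges to a Gaussian; heavier-tailed output-weight priors give the non-Gaussian stable processes of Section 2.2.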
Promotional
Springer Book Archives

Details
ISBN-10: 0387947248
ISBN-13: 9780387947242
Author: Radford M. Neal
Short Title: BAYESIAN LEARNING FOR NEURAL N
Pages: 204
Series: Lecture Notes in Statistics
Series Number: 118
Language: English
Media: Book
Format: Paperback
DEWEY: 006.3
Year: 1996
Edition: 1996 ed.
Publisher: Springer-Verlag New York Inc.
Imprint: Springer-Verlag New York Inc.
Place of Publication: New York, NY
Country of Publication: United States
DOI: 10.1007/b61132; 10.1007/978-1-4612-0745-0
Publication Date: 1996-08-09
Release Date (US, UK, AU, NZ): 1996-08-09
Illustrations: 204 p.
Audience: Undergraduate

We've got this
At The Nile, if you're looking for it, we've got it.
With fast shipping, low prices, friendly service and well over a million items, you're bound to find what you want at a price you'll love!
The Nile Item ID: 137900429
Price: 426.45 AUD
Location: Melbourne
End Time: 2024-12-04T09:37:09.000Z
Shipping Cost: 0 AUD
Product Images
Item Specifics
Restocking fee: No
Return shipping will be paid by: Buyer
Returns Accepted: Yes
Item must be returned within: 30 Days
ISBN-13: 9780387947242
Book Title: Bayesian Learning for Neural Networks
Number of Pages: 204
Language: English
Publication Name: Bayesian Learning for Neural Networks
Publisher: Springer-Verlag New York Inc.
Publication Year: 1996
Subject: Computer Science, Mathematics
Item Height: 235 mm
Item Weight: 650 g
Type: Textbook
Author: Radford M. Neal
Item Width: 155 mm
Format: Paperback