An Introduction to Neural Networks

Though mathematical ideas underpin the study of neural networks, the author presents the fundamentals without the full mathematical apparatus. All aspects of the field are tackled, including artificial neurons as models of their real counterparts; the geometry of network action in pattern space; gradient descent methods, including back-propagation; associative memory and Hopfield nets; and self-organization and feature maps. The traditionally difficult topic of adaptive resonance theory is clarified within a hierarchical description of its operation. The book also includes several real-world examples to provide a concrete focus. This should enhance its appeal to those involved in the design, construction and management of networks in commercial environments who wish to improve their understanding of network simulator packages. As a comprehensive and highly accessible introduction to one of the most important topics in cognitive and computer science, this volume should interest a wide range of readers, both students and professionals, in cognitive science, psychology, computer science and electrical engineering.
Contents
Chapter One | 1
Chapter Two Real and artificial neurons | 5
Chapter Three TLUs, linear separability and vectors | 16
Chapter Four The perceptron rule | 25
Chapter Five The delta rule | 34
Chapter Six Multilayer nets and backpropagation | 41
Chapter Seven The Hopfield net | 57
Chapter Eight Self-organization | 70
Chapter Nine ART | 89
Chapter Ten Further alternatives | 101
Chapter Eleven Taxonomies, contexts and hierarchies | 117
Appendix A The cosine function | 128