Neural nets are currently the most hyped approach to artificial intelligence. What are they? Basically, they are an attempt to emulate a brain by building a mathematical pattern-recognition process out of a network of nodes. Neural nets underlie many recent advances in image and speech recognition and generation.
The contemporary neural net resurgence is often traced to Geoffrey Hinton, whose group's work reignited interest in convolutional neural networks. Convolution is a mathematical operation on matrices of values, but it can also be understood as simple feature extraction, since it has long been used in image processing to find outlines and edges. What convolutional neural nets do (at a very simple level) is find those edges, compare the similarity of those edges across the data, cluster the edges that are similar, apply the filter again, and repeat. Essentially they build a weighted graph: lines between nodes in a network, where thicker lines carry more weight, suggesting a stronger connection. Treating any data in this way gives rise to surprising insights. For example, the sentiment or emotion in language can be extracted (for more info: here’s a very brief review I wrote of Socher’s 2014 sentiment treebank).
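To make the "convolution as edge finding" idea concrete, here is a minimal sketch in Python with NumPy: a hand-rolled 2D convolution (not any particular library's implementation) applied with a vertical-edge kernel to a toy image that is dark on the left and bright on the right. The kernel and image are invented for illustration.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image, summing elementwise products (valid padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter: responds wherever brightness changes left-to-right.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

# A toy image: dark (0) on the left half, bright (9) on the right half.
image = np.array([
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
])

response = convolve2d(image, edge_kernel)
print(response)
# Large-magnitude responses appear only at the dark-to-bright boundary;
# flat regions produce zeros.
```

A convolutional network layer does essentially this, except that the kernel values are not hand-picked: they are weights learned during training, so the network discovers for itself which features (edges, textures, and so on) are worth extracting.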
And the conceptual roots of these neural nets arise from cybernetics and the idea of the feedback loop: Frank Rosenblatt (who in 1957 proposed the single-layer perceptron, later expanded into multi-layer perceptrons, which became neural nets), Gregory Bateson (the difference that makes a difference), Grey Walter (early robotic homeostatic machines whose sensors and simple feedback elicited complex behavior; see video below), Warren McCulloch and Walter Pitts (synaptic inhibition within networks), W. Ross Ashby (the homeostat and requisite variety), and Norbert Wiener and the Macy conferences.
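Rosenblatt's single-layer perceptron is simple enough to sketch in a few lines. The version below is a toy illustration, not Rosenblatt's original hardware formulation: weights start at zero and are nudged toward each misclassified example until the unit learns a linearly separable function (here, logical OR).

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Perceptron learning rule: adjust weights by (target - prediction) * input."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            err = target - pred          # 0 if correct, +/-1 if wrong
            w += lr * err * xi           # nudge weights toward the example
            b += lr * err
    return w, b

# Linearly separable data: logical OR of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

w, b = train_perceptron(X, y)
preds = [1 if np.dot(w, xi) + b > 0 else 0 for xi in X]
print(preds)  # -> [0, 1, 1, 1]
```

The famous limitation, pointed out by Minsky and Papert, is that a single such unit can only draw one straight line through the input space; stacking layers of these units (multi-layer perceptrons) is what removed that limit and led to modern neural nets.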
Margaret Boden (whose exhaustive two-volume history of cognitive science, Mind as Machine, is the canonical reference for 20th-century cybernetic history; at 1452 pages, with 134 pages of references, it includes references to all of the major figures) wrote:
Grey Walter’s intriguing tortoises, despite their valve technology and clumsiness, were early versions of what would much later be called Vehicles, autonomous agents, situated robots, or animats. They illustrated the emergence of relatively complex motor behaviour—analogous to positive and negative tropisms, goal seeking, perception, learning, and even sociability—out of simple responses guided and stabilized by negative feedback.