The paper says the connectionist network is a model “based loosely on neural architecture”; after reading it, however, I found many similarities between a connectionist network and a biological neural network. Each unit has an activation value, which is passed between units in only one direction, much like a neuron’s action potential. A connectionist network has three types of layers: input, hidden, and output, which parallel the three types of neurons: afferent, inter, and efferent. In both networks, the input/afferent units send signals to the hidden/inter units, which send signals to the output/efferent units. In the brain, however, interneurons usually exchange many signals with each other before ever signaling an efferent neuron, if they do at all.

The paper also says that a connectionist network can learn to recognize patterns and generalize from them. The brain generalizes a great deal of information, which is one of the reasons it is so efficient. And back-propagation resembles the feedback loops found not just in the brain but throughout the body (see the sketch below). Because of these similarities to the brain, I was able to mostly understand this paper, which was a first among the computer science-y readings we’ve had to do for this class. And since I could see it in those terms, I think I have a little more respect for cognitive models, such as Metacat.
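
To make the one-way flow from input to hidden to output, and the back-propagation “feedback loop,” concrete for myself, here is a minimal sketch of a one-hidden-layer connectionist network. Everything specific in it is my own assumption rather than the paper’s: the layer sizes, the sigmoid activation, and XOR as the toy pattern to learn.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # XOR as a toy pattern-recognition task (my stand-in, not the paper's).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Weights: input -> hidden ("afferent -> inter") and
    # hidden -> output ("inter -> efferent"). Sizes are arbitrary choices.
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

    lr = 1.0
    for _ in range(10_000):
        # Forward pass: activation moves in only one direction.
        h = sigmoid(X @ W1 + b1)      # hidden-layer activations
        out = sigmoid(h @ W2 + b2)    # output-layer activations

        # Backward pass: the error signal travels the other way and
        # nudges the weights -- the "feedback loop" of back-propagation.
        d_out = out - y                        # error at the output layer
        d_h = (d_out @ W2.T) * h * (1 - h)     # error pushed back to the hidden layer
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))   # typically ends up near [[0], [1], [1], [0]]

The forward pass pushes activation one way through the layers, and each training step pushes an error signal back the other way to adjust the weights; that backward traffic is the feedback I was comparing to the body’s own feedback loops.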