Collective Thought Can Be Bad

In comparing Einstein’s brain with a brain made of Einsteins, it is important to realize that these two cerebral structures are so different that it is nearly impossible to see a “brain” of Einsteins as a brain at all. The most significant characteristic of a human brain (or any brain, for that matter) is that complicated macro-level behavior emerges from billions of smaller interactions between neurons that follow very simple rules. If a neuron receives a specific stimulation at its presynaptic terminals, it always sends the same signal to its postsynaptic targets; there is no decision making involved. In contrast, if each neuron instead had the properties of an entire Einstein brain, each would be sentient and able to make its own decisions. As a result, each Einstein brain could do whatever it wanted with the stimulation it receives from other Einsteins, and its output would not be predictable.
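The contrast between a rule-following neuron and a deciding one can be sketched in a few lines of entirely hypothetical Python; the function names and the threshold are my own illustration, not anything from neuroscience:

```python
import random

def neuron(stimulus):
    # A rule-following neuron: the same presynaptic input
    # always produces the same postsynaptic output.
    return 1 if stimulus > 0.5 else 0

def einstein_neuron(stimulus):
    # A "sentient" neuron: it can do whatever it wants with its
    # input, so its output is not a function of the stimulus.
    return random.choice([0, 1])
```

Call `neuron(0.7)` a million times and you get the same answer every time; `einstein_neuron(0.7)` is unpredictable by design, which is exactly why a network of them loses the simple-rules property that emergence depends on.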

A good analogy for this brain of sentient entities is a community with a purely democratic system of government. While pure democracy may sound good in theory, such a government would be inefficient and unable to cope with this world’s demands for adaptation. The same is true of a brain that operates on a network of billions of Einstein brains. Each Einstein would have his own say in the mental processes of the larger brain, and because of this it is unlikely that the highly parallel computation the brain is famous for would ever be achieved. Instead, each Einstein would try to convince the other Einsteins of what should be done and why. If it were a brain where each neuron were someone less stubborn than Einstein, this problem would not be so pronounced. Imagine billions of Einsteins, each of whom spent 30 years trying to disprove an idea that most other renowned physicists had embraced, and imagine these Einsteins trying to make a decision together. If a lion were running towards this entity of Einsteins, the deliberations of all the individual Einsteins would not proceed fast enough to save it.

However, while it is evident that Einsteins in conflict will never get anything done, what if each Einstein were working towards a common goal, with all the Einsteins in sync in their objective? In that case the brain of Einsteins would be a force to be reckoned with. It would be like having a two-dimensional array as opposed to a one-dimensional one, and the computing power would be much greater than that of an individual brain. One only has to look at the Borg race from Star Trek to see the power that a network of connected individuals has, as long as they are drones working toward a common goal without the ability to make personal decisions.

Creativity vs Learning

I found the Connectionist Foundations reading interesting in that it advocates a completely different approach to emulating human intelligence than the Metacat paper does. The main purpose of the Metacat project is to create a program that can examine and solve analogies using creativity. While Metacat is practically unique in this respect, its ability to learn is limited to individual trial runs, and it is unable to apply what it has learned to new, unseen problems. This limitation does not exist in connectionist systems; in fact, the ability to learn across many trials is the most significant advantage of neural networks. However, the trial-and-error process of training neural networks through backpropagation lacks any creativity, and while a trained network can solve new, unseen problems, it lacks the creativity of Metacat. This is because Metacat uses a rule set of very abstract relationships that allows it to rationalize its solutions in a very human way, whereas the training process of a neural network only equips it to solve one specific type of problem, and even then no high-level rationalization occurs.
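As a rough sketch of what trial-and-error learning without rationalization looks like, here is a single sigmoid unit trained by gradient descent to learn logical OR. The architecture, learning rate, and data are my own toy choices, not anything from the Connectionist Foundations reading:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Truth table for logical OR: ((inputs), target).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 1.0

# Repeated trials: the unit improves by reducing error, not by insight.
for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target
        grad = err * out * (1 - out)  # gradient of squared error
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad
```

After training, the unit classifies OR correctly, but it cannot explain *why* its weights work, which is the contrast with Metacat’s high-level rationalization.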

My question is: which approach leads to a better understanding of human behavior and thought processes? Is it the ability to think abstractly and creatively, championed by Metacat, that leads to a greater level of intelligence, or is it instead the ability to learn that exists in connectionist systems?

Emergence and Human Evolution

I found Johnson’s examples of and insights into emergent software to be
very interesting and provocative. Around p. 170, Johnson discusses Danny
Hillis’ attempt at writing emergent software that could improve itself
over time to eventually become efficient at sorting numbers. In order to
make the end result as efficient as possible, Hillis had to introduce
“predators” into the system to weed out programs that took too many
steps to sort the numbers.
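Hillis’ actual setup is not reproduced in the book, but the predator idea can be sketched as a toy evolutionary search over small sorting networks, where a fitness penalty on length plays the predator’s role. Every detail below (population size, mutation scheme, penalty weight) is my own assumption:

```python
import random
from itertools import permutations

N = 4
INPUTS = list(permutations(range(N)))  # all 24 orderings of 4 distinct items

def apply_network(net, seq):
    # A "program" here is a list of compare-swap steps (i, j) with i < j.
    s = list(seq)
    for i, j in net:
        if s[i] > s[j]:
            s[i], s[j] = s[j], s[i]
    return s

def fitness(net):
    # Reward correctness, but charge a small cost per step:
    # the cost is the "predator" pressure against slow programs.
    correct = sum(apply_network(net, p) == sorted(p) for p in INPUTS)
    return correct / len(INPUTS) - 0.01 * len(net)

def mutate(net):
    net = list(net)
    roll = random.random()
    i, j = sorted(random.sample(range(N), 2))
    if roll < 0.4 and len(net) > 1:
        net.pop(random.randrange(len(net)))                 # drop a step
    elif roll < 0.8:
        net.insert(random.randrange(len(net) + 1), (i, j))  # add a step
    else:
        net[random.randrange(len(net))] = (i, j)            # change a step
    return net

random.seed(0)
pop = [[tuple(sorted(random.sample(range(N), 2))) for _ in range(8)]
       for _ in range(40)]
for generation in range(300):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:20]  # the slowest and least correct half gets "eaten"
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
```

The selection step is the whole point: without the per-step cost, a correct-but-bloated sorter survives just as well as a lean one, and nothing drives the population toward efficiency.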

I couldn’t help thinking if perhaps we humans are trapped in a “false
peak in the fitness landscape”, as described by Johnson. Of course, in
order to make this assertion, there needs to be a predefined goal for
mankind to achieve. Assuming that there is a goal to achieve, are we
as a whole continuing to develop and evolve to better achieve this
goal? Or are we instead stuck in a rut where we are content to exist
the way we do? In answering this, both social and biological evolution
should be considered.
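Johnson’s “false peak” is simply a local maximum in the fitness landscape. A minimal hill-climbing sketch (the landscape and step size are my own toy choices, not Johnson’s) shows how a climber can get permanently stuck on a low peak:

```python
def fitness(x):
    # A 1-D landscape with a false peak near x=2 (height ~3)
    # and the true peak near x=8 (height ~10).
    return max(3 - (x - 2) ** 2, 10 - (x - 8) ** 2, 0)

def hill_climb(x, step=0.1):
    # Greedy local search: move only if a neighbor is strictly better.
    while True:
        best = max((x - step, x, x + step), key=fitness)
        if best == x:
            return x
        x = best

stuck = hill_climb(1.0)          # stops near the false peak at x ~ 2
globally_good = hill_climb(7.0)  # reaches the true peak at x ~ 8
```

A climber that starts near the false peak stops there, even though a far higher peak exists elsewhere; only something like mutation or a shake-up of the landscape can dislodge it.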

I personally believe that we are still evolving socially, as shown by
the emergence of macro behavior in cities; however, it appears that we
are no longer ruled by biological evolution. I think the main question
is whether we will eventually be limited in social evolution as a
result of the absence of biological evolution.