Archive for the ‘Uncategorized’ Category

Collective Thought Can Be Bad

Monday, November 16th, 2009

In comparing Einstein’s brain with a brain of Einsteins, it is important to realize that these two cerebral structures are so different that it is nearly impossible to see a “brain” of Einsteins as being a brain at all. The most significant characteristic of a human brain (or any brain, for that matter) is that complicated macro-level behavior can emerge from billions of smaller interactions between neurons that follow very simple rules. If a neuron is given a specific stimulation at its presynaptic inputs, it will always send the same information to its postsynaptic connections; there is no decision making involved. In contrast, if each neuron instead had the properties of an entire Einstein brain, each individual neuron would be sentient and able to make its own decisions. As a result, each Einstein brain can do whatever it wants with the stimulation it receives from other Einsteins, and its output is not predictable.
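
To make the “no decision making” point concrete, here is a minimal sketch of a neuron as a fixed rule (my own illustration, with made-up weights and threshold, not anything from the post): the same inputs always produce the same output.

```python
def neuron(inputs, weights, threshold=1.0):
    """Fire (1) if and only if the weighted stimulation crosses the threshold."""
    stimulation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if stimulation >= threshold else 0

# Deterministic: these calls return the same answers every time.
print(neuron([1, 0, 1], [0.6, 0.9, 0.7]))  # 1.3 >= 1.0, so it fires: 1
print(neuron([0, 1, 0], [0.6, 0.9, 0.7]))  # 0.9 <  1.0, so it stays silent: 0
```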

A good analogy for this brain of sentient entities is a community with a purely democratic system of government. While a purely democratic system of government may sound like a good idea in theory, such a government would be inefficient and unable to cope with this world’s demands for adaptation. The same is true of a brain that operates as a network of billions of Einstein brains. Each Einstein would have his own say in the mental processes of the larger brain, and because of this it is unlikely that the highly parallel computation the brain is famous for would ever be achieved. Instead, each Einstein would try to convince the other Einsteins of what should be done and why. If it were a brain where each neuron is the brain of someone less stubborn than Einstein, this problem would not be so pronounced. Imagine billions of Einsteins, each of whom spent 30 years trying to disprove an idea that most other renowned physicists had embraced, and imagine these Einsteins trying to make a decision together. If a lion were running toward this entity of Einsteins, the decision processes of all the individual Einsteins would not proceed fast enough to save it.

However, while it is evident that Einsteins in conflict will never get anything done, what if each Einstein were working toward a common goal, with all the Einsteins in sync about their objective? In that case the brain of Einsteins would be a force to be reckoned with. It would be like moving from a one-dimensional array to a two-dimensional one, and the computing power would be much greater than that of an individual brain. One only has to look at the Borg from Star Trek to see the power that a network of connected individuals has, so long as they are drones working toward a common goal without the ability to make their own personal decisions.

How does a brain work?

Monday, November 16th, 2009

A brain of Einsteins is an interesting concept. And to properly assess the efficacy of such a brain, one must first consider the fact that each component of the brain is identical. Does this make it better or worse than one unified Einstein’s brain?
For me, the discussion hinges on the fact that each Einstein is the same, since there is only one Einstein. If each Einstein is identical in DNA, how would that affect the patterns of thought? If the makeup of each is the same, would all of the individual thoughts be the same as well? I personally know very little about the makeup of the brain and how neurons work; the same can be said for my knowledge of DNA, so let’s look at both sides.
If each Einstein is the same, has the same DNA and therefore the same brain, would the quantity of thoughts really mean more than the quality? Or, as an even greater stretch, how different would the thoughts be if they came from identical brains? In this case, I feel that the brain of Einsteins would be only negligibly different from Einstein’s brain. But, on the other hand, if identical DNA can produce different thoughts, would there really be any competition between Einstein’s brain and a brain composed of many Einsteins? It strikes me that in this case, a brain composed of Einsteins would almost need to be more powerful than Einstein’s brain.
Of course, this has drawbacks. Any group with different thoughts is bound to have different opinions; if a brain can’t get along with itself, is it really a powerful brain after all? Also interestingly, each section of the brain has a distinct function – it isn’t so simple to categorize one brain as stronger or weaker, smarter or dumber, than another. Motor functions made up of a group of Einsteins (I’m personally unaware of how good Einstein’s own motor skills were) all trying to work together are much less likely to be effective than those of a single brain region already trained to work specifically on that type of skill. With many Einsteins, each having only a small fraction of his own brain allotted to motor skills, the chances of skilled coordination are slim.
There is no easy way to compare a brain full of Einsteins to the brain of Einstein. And, unfortunately, without a greater understanding of the human brain, I’m not able to draw any firm conclusions. I can only say that, either way it goes, the two brains have the potential to be distinctly different, but also to measure out the same in the end, once one weighs the benefits and drawbacks of each potential setup. Perhaps someone else can shed some light on which of the possible outcomes I’ve touched on is most probable.

nevermind the glial cells

Sunday, November 15th, 2009

Einstein’s brain, like all brains, is composed of neurons and glial cells. The neurons transmit electrochemical signals to each other, and glial cells provide the neurons with various forms of support and insulation. It is common knowledge that a neuron can only transmit a signal in one direction; however, recent research suggests otherwise. Whether or not synaptic transmission is bi-directional, neurons transmit signals very quickly – at up to more than 200 mph in the fastest fibers. Each neuron can be connected to far more than just two other neurons; while some make only a few connections, most make thousands. The vast network of synaptic connections, coupled with signal transmission speed, is one of the reasons why the brain is so efficient. Another important point is that a neuron is a single cell. The brain is composed of roughly 100 billion neurons. No individual neuron thinks for itself; a person’s thoughts emerge from the brain’s neuronal networks.

A brain composed of Einsteins, on the other hand, is probably not as efficient as Einstein’s brain. If each neuron is replaced with an Einstein, then each neuron is replaced with a brain containing 100 billion neurons. This certainly seems like it would make for a more powerful brain, and it might be true if each neuron were replaced with 100 billion more neurons, though it would take much longer for information to travel. Unfortunately, Einstein is not just 100 billion neurons; something must house those neurons, namely the human body. Assuming the Einsteins don’t possess telepathic powers, they must communicate via the five senses, perhaps using verbal speech, body language, sign language, or pheromones. It would take much, much longer for one Einstein to communicate something to the next Einstein or to thousands of other Einsteins.

Even though each neuron is an Einstein, each Einstein does not necessarily think the same things as the other Einsteins at the same time. Identical twins theoretically have the same DNA (it’s almost the same, though some genes can be expressed in one and not the other due to environmental factors), but they don’t have the same thoughts. So an Einstein might try to communicate something to another Einstein, but the receiving Einstein might not understand what the transmitting Einstein is trying to say; the transmitting Einstein must then explain further until the receiving Einstein understands, otherwise the signal cannot be passed down to the next Einstein. And each Einstein is likely to be communicating the signal to thousands of other Einsteins, many of whom could be confused. It would take an excruciatingly long time for just one signal to be transmitted. One can only imagine how long it would take if some of the Einsteins were uncooperative. The brain would probably be a massive web of confusion.
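
As a rough back-of-envelope comparison (my own numbers, using commonly cited figures rather than anything from the readings): a synapse relays a signal in about a millisecond, while even a single spoken sentence takes a few seconds, so each Einstein-to-Einstein hop would run on the order of thousands of times slower than a synapse – and that is before any rounds of clarification or the fan-out to thousands of listeners.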

Albert Einstein may have been a genius, but a person whose brain is composed of Einsteins might have trouble staying alive.

Anthropomorphism, Einstein and Homunculi?

Friday, November 13th, 2009

To start this post, I think that some terminology and outlining ought to be explored before running into other thoughts. First, consider Einstein’s brain. It has little neurons that just fire according to (essentially) random patterns that eventually make some sort of sense, perhaps eventually creating thoughts. Fairly understandable, so what about a brain made of Einsteins? This seems to be a classic case of anthropomorphism (the human tendency to give animate traits to inanimate objects; it also applies to “my nerves were screaming in pain”, as nerves do not have vocal cords with which to scream, and might even be dubbed a metaphor if one were feeling particularly pedantic that day). As such, objects like the brain in the Einstein example are given personalities. A particular comic, found here, addresses the concept of anthropomorphism in an amusing visual example.

Second, a homunculus (the singular form of homunculi… as evidenced by the “i” at the end of that word, it derives from Latin) is the concept of a tiny person that unpacks into a full human. This is closely related to the idea of a brain made of Einsteins, in that one of the two definitions of homunculus refers to the idea of a “little human” driving the brain. Expanding the idea of only one driver in a brain (a homunculus) to multiple drivers (the Einsteins) seems a fairly logical next step when combined with the idea of anthropomorphism. If we say that each neuron is an Einstein, then we have independent-thinking decision-makers that together make up a thinking unit, which in this case is a brain.

Okay, so playing into this idea, what does this mean? An “Einstein” would be a “voter”, and if we count all the “Einstein”s in the brain as equal in value, with equal votes, it would likely take forever for the brain to move… if it could function at all. If each “Einstein” has an internal decision-making process and can develop a personality of its own, there is no real impetus to work with the other “Einstein”s. As such, deliberation over an act as simple as “pass this message on” could stall at the originating “Einstein”’s own spot. If a brain requires a message to be passed along a certain path, and the message never makes it along the path, can the Einsteins then be considered a brain?
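
A hypothetical back-of-envelope sketch of why the message so rarely arrives: if every “Einstein” along a path independently chooses whether to cooperate, the odds of the message surviving the whole path collapse quickly.

```python
# Assumption (mine): each "Einstein" relays the message with probability p,
# independently of the others. Over an n-hop path the message survives
# with probability p ** n.
p, n = 0.9, 50
print(p ** n)  # ~0.005 - even fairly cooperative Einsteins rarely relay it end to end
```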

If we are looking at Einstein’s brain instead, what does that mean? Essentially, we are looking at an ant colony. Without deliberation, the individual neurons are simply firing or not firing – in computing terms, as simple as programming in binary with 1s and 0s. As discussed in class previously, there are no leader ants, no monarchies, no universal decision-makers in an ant colony. (Sorry, DreamWorks and Pixar: Antz and A Bug’s Life are not scientifically sound.)

So, going back to the question: what’s the difference between Einstein’s brain and a brain made of Einsteins? Well, in a word: depth (i.e., one has a single level of nesting, Brain::neurons, while the other has Brain::Einsteins::brains::neurons). If I were to expand on the topic a little, I would also have to include point of view, because at the individual level one could have conversations with an Einstein. One cannot converse with Einstein’s brain, but one can talk to the Einsteins in a brain made of Einsteins. Kind of mind-boggling. It’s similar to the idea presented in “Prelude . . . Ant Fugue” (a chapter in The Mind’s I, edited by Douglas Hofstadter and Daniel Dennett).
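
As a toy illustration of that depth difference (my own sketch, with made-up names), the two structures nest differently:

```python
# Einstein's brain: one level of nesting (Brain::neurons).
einsteins_brain = {
    "neurons": ["neuron_1", "neuron_2", "neuron_3"],
}

# A brain of Einsteins: each "neuron" is itself an Einstein with a whole
# brain of its own (Brain::Einsteins::brains::neurons).
brain_of_einsteins = {
    "einsteins": [
        {"brain": {"neurons": ["neuron_1", "neuron_2"]}},
        {"brain": {"neurons": ["neuron_1", "neuron_2"]}},
    ],
}
```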

neural networks

Wednesday, November 4th, 2009

The paper says the connectionist network is a model “based loosely on neural architecture”; however, after reading the paper, I found many similarities between the connectionist network and a biological neural network. Each unit has an activation value, which is passed between units in only one direction – exactly like the action potential of a neuron. In a connectionist network there are three types of layers: input, hidden, and output, which parallel the three types of neurons: afferent, inter-, and efferent. In both networks, the input/afferent layer sends a signal to the hidden/interneurons, which send a signal to the output/efferent layer. In biological networks, however, interneurons usually send many signals to each other before sending a signal to an efferent neuron, if they ever do. Also, the paper says the connectionist network can learn and recognize patterns in order to generalize. The brain generalizes a lot of information, which is one of the reasons it is so efficient. Back-propagation is like the feedback loops found not just in the brain but throughout the body. Because of these similarities to the brain, I was able to mostly understand this paper, which was a first among the computer science-y readings we’ve had to do for this class. And since I could see it in those terms, I think I have a little more respect for cognitive models such as Metacat.
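
For anyone who wants to see that layered, one-direction flow spelled out, here is a minimal sketch (mine, not the paper’s; the weights are random stand-ins rather than anything learned) of activation passing from an input layer through a hidden layer to an output layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Input layer ~ afferent neurons: three units receiving the stimulus.
x = np.array([0.2, 0.9, 0.1])

# Made-up weights; in a real network these would be learned.
W_input_hidden = rng.normal(size=(3, 5))   # input  -> hidden (interneurons)
W_hidden_output = rng.normal(size=(5, 2))  # hidden -> output (efferent)

# Activation flows strictly forward, in one direction only,
# like an action potential travelling along an axon.
hidden = sigmoid(x @ W_input_hidden)
output = sigmoid(hidden @ W_hidden_output)
print(output)
```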

A gradual degradation

Wednesday, November 4th, 2009

In reading the Connectionist Foundations article, I found the most interesting part to be the paragraph explaining how these networks show “graceful degradation” of their performance. This seems like a really critical point to me – most programs I’ve been acquainted with, whether by using them or writing them, have a very specific problem domain. If you exceed it, or give the program something it’s not ready for, you only get an error message. That connectionist networks instead become gradually less effective, rather than suffering a sudden drop-off in usefulness, is a remarkable phenomenon that deserves to be looked at closely – which suggests we ought to take connectionist networks seriously.
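
A toy demonstration of the idea (my own sketch, not from the article): damage the weights of a small network a little at a time, and the output drifts gradually instead of failing outright the way a conventional program would.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed network: 4 inputs -> 8 hidden units -> 3 outputs.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))
x = rng.normal(size=4)

def forward(W1, W2, x):
    h = np.tanh(x @ W1)      # hidden activations
    return np.tanh(h @ W2)   # output activations

clean = forward(W1, W2, x)

# Add progressively more noise to the weights and watch the output
# degrade gradually - no error message, no sudden drop-off.
for noise in [0.0, 0.1, 0.2, 0.4, 0.8]:
    damaged = forward(W1 + rng.normal(scale=noise, size=W1.shape),
                      W2 + rng.normal(scale=noise, size=W2.shape), x)
    print(f"noise={noise:.1f}  output error={np.linalg.norm(damaged - clean):.3f}")
```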

Linguistics Connections

Wednesday, November 4th, 2009

Admittedly, I found the chapter difficult to understand, having no background in neural networks. But, after talking to Priscy about it, I feel like I have a better idea. I can better appreciate the methods for helping these networks “learn”, especially through back-propagation. What was most interesting to me, however, was the comparison between these types of models and linguistics. As a linguistics major, I have still taken surprisingly few linguistics classes, so I appreciated this method of assessing data. I really enjoyed looking at the tensor product representation of the sentence “Mary gave the book to John”. It struck me as similar to the structure of syntactic trees, which rest on the same idea of generalizing a role (such as noun phrase or prepositional phrase, down to something as fine-grained as ‘tense’) and a filler, supplied by each particular sentence, that fits into the tree. Thinking of it in this way helped me better comprehend the computational version.
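
For the curious, here is a minimal sketch of role/filler binding in a tensor product representation (my own illustration with made-up vectors, not the chapter’s exact scheme): each role vector is bound to its filler with an outer product, the bindings are summed into one tensor, and an orthonormal role vector can later unbind its filler exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical roles and fillers for "Mary gave the book to John".
role_names = ["agent", "action", "patient", "recipient"]
roles = dict(zip(role_names, np.eye(len(role_names))))  # orthonormal role vectors
fillers = {word: rng.normal(size=5) for word in ["Mary", "gave", "book", "John"]}

# Bind each filler to its role with an outer product, then superimpose.
sentence = (np.outer(roles["agent"], fillers["Mary"])
            + np.outer(roles["action"], fillers["gave"])
            + np.outer(roles["patient"], fillers["book"])
            + np.outer(roles["recipient"], fillers["John"]))

# Unbinding: because the role vectors are orthonormal, projecting the
# sentence tensor onto a role recovers that role's filler exactly.
recovered = roles["recipient"] @ sentence
print(np.allclose(recovered, fillers["John"]))  # True
```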

I hope that, in class, we can further discuss the methods these networks use for learning. The ideas discussed in this paper interested me, but were phrased in ways that were difficult for me to read and really comprehend.

Initial reaction to connectionist foundation

Wednesday, November 4th, 2009

Reading this paper was still a little like reading a paper in a completely foreign language. But if my limited understanding is on the right track, the Analogator has a more “human-like” thought process, in that it can learn and generate creative responses. The reading talks about all the unique components that make up the Analogator, which I only vaguely understood. But I understood enough to see the key difference between the Analogator and Metacat: Metacat does not have the learning capability, though it is still impressive that a program can generate an output based on the inputs it was given. For some reason, when reading the “Why connectionism?” section, I related the concept to training a dolphin; it was just easier for me to visualize. Like the networks, dolphins can show decreased performance on a task because of their surrounding environment; there is no guarantee they will be able to learn a given task; they require exposure to training; and training results may vary. Making this connection made the paper easier for me to understand.

Creativity vs Learning

Wednesday, November 4th, 2009

I found the Connectionist Foundations reading interesting in that it advocates a completely different approach to emulating human intelligence than the Metacat paper does. The main purpose of the Metacat project is to create a program that can examine and solve analogies using creativity. While Metacat is practically unique in this fashion, its ability to learn is limited to individual trial runs, and it is unable to use what it has learned on new, unseen problems. This limitation does not exist in connectionist systems; in fact, the ability to learn across many trials is the most significant advantage of neural networks. However, the process of trial and error for training neural networks through back-propagation techniques lacks any creativity: a trained network can solve new, unseen problems, but only problems of the specific type it was trained on, and without the high-level rationalization that Metacat performs. Metacat, by contrast, uses a rule set of very abstract relationships that allows it to rationalize its solutions in a very human way.
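
As a concrete picture of that trial-and-error process, here is a minimal back-propagation sketch on a toy task (my own code, not anything from the readings): the error on each pass is propagated backwards and used to nudge the weights, with no high-level rationalization anywhere in the loop.

```python
import numpy as np

rng = np.random.default_rng(2)

# Learn XOR with one hidden layer - a classic toy problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated back to the hidden layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0]
```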

My question is: which approach leads to a better understanding of human behavior and thought processes? Is it the abstract, creative thinking championed by Metacat that leads to a greater level of intelligence, or is it instead the ability to learn that exists in connectionist systems?

If I only had a brain.

Monday, November 2nd, 2009

I enjoyed using the Metacat program, though, to agree with Riki, it didn’t hold a lot of long-term excitement. That aside, I was very entertained (as many of us seemed to be) by the program’s occasional choice to “go get some punch”, was it? What does this show about the program’s similarity to, or difference from, the human brain?

This message comes up when the program can no longer think of an alternative, intelligent solution to the problem it’s been given. Essentially, it hits a wall. Is this possibly the most human part of the program? Or is it the least? It seems to me the program is basically saying “I dunno, guys, I give up.” On the one hand, this seems like a very human thing to do when all of the other possibilities seem to be going nowhere. If you’re not getting anywhere, why not give up? Still, in the other direction, it can also be seen as a flaw in the program: it lacks the capacity to think beyond a certain point, gets “frustrated”, and stops. It’s still pretty amazing that a computer program can try multiple ideas, remember which it has tried, and base new ideas on those memories; is this roadblock another piece of that, or a glitch to be addressed in the next step of the Metacat program’s evolution?