Posts Tagged ‘emergence’

Anthropomorphism, Einstein and Homunculi?

Friday, November 13th, 2009

To start this post, I think some terminology ought to be laid out before running into other thoughts. First, consider Einstein’s brain. It has little neurons that fire according to (essentially) random patterns that eventually make some sort of sense, perhaps eventually creating thoughts. Fairly understandable; so what about a brain made of Einsteins? This seems to be a classic case of anthropomorphism (the human tendency to give animate traits to inanimate objects; this also applies to “my nerves were screaming in pain,” since nerves do not have vocal cords with which to scream, and might even be dubbed a metaphor if one were feeling particularly pedantic that day). As such, objects like the brain in the Einstein example are given personalities. A particular comic found here addresses the concept of anthropomorphism in an amusing visual example.

Second, a homunculus (the singular form of homunculi… as evidenced by the “i” at the end of the word, it derives from Latin) is the concept of a tiny person that unpacks into a full human. This is almost related to the idea of a brain made of Einsteins, in that one of the two definitions of homunculus refers to a “little human” driving the brain. Expanding the idea of only one driver in a brain (a homunculus) to multiple drivers (the Einsteins) seems a fairly logical next step if combined with the idea of anthropomorphism. If we say that each neuron is an Einstein, then we have independent-thinking decision-makers that together create a thinking unit, which in this case is a brain.

Okay, so playing into this idea, what does this mean? An “Einstein” would be a “voter,” and if we count all the “Einsteins” in the brain as equal in value, with equal votes, it would likely take forever for the brain to do anything… if the brain could even function at all. If each “Einstein” has an internal decision-making process and each “Einstein” can develop a personality of its own, there is no real impetus to work with the other “Einsteins.” As such, deliberation over an event as simple as “pass this message on” could stall right at the originating “Einstein.” If a brain requires a message to be passed along a certain path, and the message never makes it along that path, can the Einsteins then be considered a brain? (A toy version of this stalling is sketched below.)
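To make the stalling concrete, here is a minimal Python sketch (mine, not anything from class). It assumes a chain of “Einstein” agents, each with its own fixed probability of cooperating; a message only gets through if every agent along the path independently decides to pass it on. The names (message_survives, pass_probability) are made up for illustration.

import random

# Hypothetical toy model: a chain of "Einstein" agents, each with its own
# willingness to cooperate. A message only reaches the end of the chain if
# every agent independently decides to pass it along.

def message_survives(num_einsteins, pass_probability=0.9):
    """Return True if a message traverses the whole chain."""
    for _ in range(num_einsteins):
        if random.random() > pass_probability:
            return False  # this Einstein deliberates and declines to pass it on
    return True

trials = 10000
for n in (10, 100, 1000):
    successes = sum(message_survives(n) for _ in range(trials))
    print(f"{n:5d} Einsteins: message arrives in {successes / trials:.1%} of trials")

# Even with each agent 90% cooperative, a long chain almost never delivers
# the message -- the "brain" fails to function as a whole.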

If we are looking at Einstein’s brain instead, what does that mean? Essentially, we are looking at an ant colony. Without deliberation, the individual neurons simply fire or do not fire, as simple as the 1s and 0s of binary if we were to talk in computing terms. As discussed in class previously, there are no leader ants, no monarchies, no universal decision-makers in an ant colony. (Sorry DreamWorks and Pixar, ANTZ and A Bug’s Life are not scientifically sound.) A small sketch of such leaderless, local rules follows.
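For contrast with the deliberating Einsteins, here is a small, hypothetical sketch of leaderless, local-rule behavior: each “neuron” (or ant) is a 1 or a 0 and updates itself by looking only at its immediate neighbors, with no universal decision-maker anywhere. The majority rule used here is my own choice of example, not anything specific from class.

import random

# Hypothetical sketch of leaderless behavior: each cell updates using only
# its two neighbors. No cell is in charge, yet global patterns can emerge.

def step(cells):
    n = len(cells)
    new_cells = []
    for i in range(n):
        left, right = cells[(i - 1) % n], cells[(i + 1) % n]
        # Local majority rule: fire if most of the neighborhood is firing.
        new_cells.append(1 if left + cells[i] + right >= 2 else 0)
    return new_cells

cells = [random.randint(0, 1) for _ in range(60)]
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)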

So, going back to the original question: what’s the difference between Einstein’s brain and a brain made of Einsteins? Well, in a word: depth (i.e., one has a single level, Brain::neurons, while the other has Einsteins::brains::neurons). If I were to expand on the topic a little, I would also have to include point of view, because at the individual level one could have conversations with Einstein. One cannot converse with Einstein’s brain, but one can talk to the Einsteins in a brain made of Einsteins. Kind of mind-boggling. It’s similar to the idea presented in “Prelude . . . Ant Fugue” (a Douglas Hofstadter chapter collected in The Mind’s I). A rough sketch of the two nesting depths is below.
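To pin down what I mean by depth, here is a rough, hypothetical Python sketch of the two nesting levels; all of the class names are mine, purely for illustration.

# Hypothetical illustration of the two nesting depths using plain classes.

class Neuron:
    """A dumb unit that just fires or doesn't."""
    def fires(self):
        return True

class Brain:
    """Einstein's brain: one level of depth, Brain::neurons."""
    def __init__(self, num_neurons):
        self.neurons = [Neuron() for _ in range(num_neurons)]

class Einstein:
    """A full person: the level you can converse with."""
    def __init__(self):
        self.brain = Brain(num_neurons=100)
    def converse(self):
        return "Talking with an individual Einstein."

class BrainOfEinsteins:
    """Two levels of depth: Einsteins::brains::neurons."""
    def __init__(self, num_einsteins):
        self.einsteins = [Einstein() for _ in range(num_einsteins)]

colony = BrainOfEinsteins(num_einsteins=10)
print(colony.einsteins[0].converse())          # you can talk to a component...
print(len(colony.einsteins[0].brain.neurons))  # ...which itself contains a brain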

Metacat – it’s a Copycat that philosophizes about itself! O_O

Sunday, November 1st, 2009

Originally I was going to wait and see what others posted so I could respond. Apparently this is not in the cards. So… The Metacat paper was intriguing. (Yes Doug, I know I need to elaborate on the word.)

In class we covered a lot of pertinent points regarding the Metacat paper: mostly what made Metacat so important to emergence, as well as how Metacat differs from the models that came before it. I’m fascinated by the different categorizations required to look at the “world” of Metacat. Introducing the idea of having a computer make decisions for itself regarding analogies also emphasizes that old categorizations like “Reagan::parents as drugs::candy” do not make sense, because from the computer’s perspective the categorization is based on something totally different. This seems to stress that computers do not “think” in the ways humans do, but it also takes advantage of how a computer “thinks” in order to interpret and create analogies. In this instance, Metacat is really awesome because of its “self-awareness,” which adds a component of memory. Introducing even a small bias (as we discussed with the segregation model mentioned in class previously) has a noticeable effect on the outcome of a given test. The weights are therefore still important, yet they are useless without the memory implemented by the Metacat model. By referencing its own past in order to look toward the future, Metacat improves on its predecessors: instead of springboarding from the present alone (as most previous models do) to speculate about the future, it also checks its record of past attempts, which makes for a far stronger model of emergence. (We noticed this problem of “no looking back” when we attempted to use gaca.py to match a string; a sketch of the difference is below.)
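Here is a minimal sketch of that “no looking back” difference, loosely inspired by the gaca.py string-matching exercise (the setup and names here are my own, not gaca.py’s). A memoryless searcher happily re-scores candidates it has already rejected, while a searcher with memory skips anything it has tried before.

import random
import string

# Hypothetical sketch: hill-climbing toward a target string, with and
# without a memory of past attempts.

TARGET = "emergence"
LETTERS = string.ascii_lowercase

def score(candidate):
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Randomly change one character."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(LETTERS) + candidate[i + 1:]

def search(with_memory, max_steps=200000):
    current = "".join(random.choice(LETTERS) for _ in TARGET)
    seen = {current}   # the "memory" of past attempts
    evaluations = 0
    for step in range(1, max_steps + 1):
        candidate = mutate(current)
        if with_memory and candidate in seen:
            continue   # already tried: don't spend an evaluation on it again
        seen.add(candidate)
        evaluations += 1
        if score(candidate) >= score(current):
            current = candidate
        if current == TARGET:
            break
    return step, evaluations

random.seed(0)
for with_memory in (False, True):
    steps, evaluations = search(with_memory)
    label = "with memory" if with_memory else "memoryless "
    print(f"{label}: {steps} mutations, {evaluations} evaluations")

# The memoryless run wastes evaluations re-scoring candidates it has already
# rejected; the run that looks back never evaluates the same attempt twice.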

Okay… I think I’m starting to get incoherent, so I’ll sign off for now.

Hello world!

Monday, September 21st, 2009

Welcome to Bryn Mawr Weblogs. This is the first blog post for the Emergence Blog.