Author Archive

Philosophizing on Neurons

Sunday, November 22nd, 2009

Is it all just neurons? If we want to say that, can we specify further and say that it’s all just cells? That would be the ultimate cellular automaton – to begin with one cell, one “on”, which follows a randomly chosen pattern to become a human with a distinct brain. Another cell follows a different pattern and becomes a dog or a fish (or anything else). Or maybe they’re all permutations of the one cell (or atom), which have branched into different patterns within the automaton. Is it possible that the world we’ve come to know came to be through the same process as a cellular automaton, triggered by a bang which set us on our way? The rules the automaton followed, we would still be learning through the rules of science. Is this possible?
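To make the analogy concrete, here is a minimal sketch of the kind of cellular automaton the paragraph imagines – one “on” cell and one fixed local rule, applied over and over. The rule number (Wolfram’s Rule 110) and the grid width are arbitrary choices of mine, not anything from the class discussion:

```python
# A minimal elementary cellular automaton. The rule number (110) and the
# grid width are arbitrary illustrative choices. One "on" cell plus one
# fixed local rule, iterated, unfolds into a surprisingly intricate pattern.

def step(cells, rule=110):
    """Advance a ring of 0/1 cells one generation under an elementary CA rule."""
    n = len(cells)
    out = []
    for i in range(n):
        # Each cell's next state depends only on itself and its two neighbors.
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # a number 0..7
        out.append((rule >> neighborhood) & 1)  # look up that bit of the rule
    return out

row = [0] * 31
row[15] = 1  # begin with a single "on" cell
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Interestingly, Rule 110 in particular is known to be computationally universal, which is at least suggestive for the question above: very simple local rules really can, in principle, compute anything.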

If it isn’t just cells, if we think about it simply as control of neurons, then I ask, how much control do we have? It brings me back to Mr Kelly, that first Greek teacher of mine, who never hesitated to ask “Why did you do that?” Are firing neurons really capable of being totally random or thoughtless? Bethie mentioned as well (in class) the reflex test in a doctor’s office as a thoughtless movement, but, as it is triggered by nerve signals, is it really thoughtless? I don’t think any movement could ever be thoughtless, or wouldn’t we be able to do it without a brain?

Still, back to the neurons. If everything is controlled by neurons, what controls them? More neurons? Or is everything we do controlled by our environment? I think back to the discussion we had over the neurological test, in which a subject would need to lift a finger or arm, and the brain would show signs of what was going to be done before it happened. What causes those neurons to fire?

How does a brain work?

Monday, November 16th, 2009

A brain of Einsteins is an interesting concept. To properly assess the efficacy of such a brain, one must first consider the fact that each piece of the brain is identical. Does this make it better or worse than one united Einstein’s brain?

For me, the discussion hinges on the fact that each Einstein is the same, since there is only one Einstein. If each Einstein is identical in DNA, how would that affect the patterns of thought? If the makeup of each is the same, would all of the individual thoughts be the same as well? I personally know very little about the makeup of the brain and how neurons work; the same can be said for my knowledge of DNA, so let’s look at either side.

If each Einstein is the same, has the same DNA and therefore the same brain, would the quantity of thoughts really mean more than the quality? Or, as an even greater stretch, how different would the thoughts be, if they came from identical brains? In this case, I feel that the brain of Einsteins would only be negligibly different from Einstein’s brain. But, on the other hand, if identical DNA can produce different thoughts, would there really be any competition between Einstein’s brain and a brain composed of many Einsteins? It strikes me that in this case, a brain composed of Einsteins would almost need to be more powerful than Einstein’s brain.

Of course, this has drawbacks. Any group with different thoughts is bound to have different opinions; if a brain can’t get along with itself, is it really a powerful brain after all? Also, interestingly enough, each section of the brain has a distinct function – it isn’t so simple to categorize one brain as stronger or weaker, smarter or dumber, than another. If the motor functions are handled by a group of Einsteins (I’m personally unaware of how good Einstein’s original motor skills were) all trying to work together, they are much less likely to be strong than those of a single brain region already trained to work specifically on that type of skill. With many Einsteins, each having only a small fraction of their own brains allotted for motor skills, the chances of skilled coordination are slim.

There is no easy way to compare a brain full of Einsteins to the brain of Einstein. And, unfortunately, without a greater understanding of the human brain, I’m not able to draw any distinct conclusions. I can only say that, either way it goes, the brains have the potential of being distinctly different, but also of measuring out to be the same in the end, once one weighs the benefits and drawbacks that would come from each potential setup. Perhaps someone else can shed some light on which of the possible outcomes I’ve touched on has the greatest probability of occurring.

Linguistics Connections

Wednesday, November 4th, 2009

Admittedly, I found the chapter difficult to understand, having no background in neural networks. But, after talking to Priscy about it, I feel like I have a better idea. I can better appreciate the methods for helping these networks “learn”, especially back-propagation. What was most interesting to me, however, was the comparison between these types of models and linguistics. As a linguistics major, I have still taken surprisingly few linguistics classes, so I appreciated this method of assessing data. I really enjoyed looking at the tensor product representation of the sentence “Mary gave the book to John”. It struck me as being similar to the structure of syntax trees, which use the same idea of generalizing a role (such as noun phrase or prepositional phrase, down to something as fine-grained as ‘tense’) and a filler which suits each sentence to fit the tree. Thinking of it in this way helped me better comprehend the computational version.
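As a sketch of what that role/filler idea looks like computationally, here is a toy tensor-product binding example in the spirit of the chapter. The dimension, the role names, and all of the vectors below are my own made-up choices for illustration, not the chapter’s actual model:

```python
# A toy sketch of tensor product role/filler binding. The dimension, role
# names, and vectors are invented for illustration. Each role (a slot in
# the structure, like a node in a syntax tree) is bound to its filler (a
# word) by an outer product, and the sentence is the sum of the bindings.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Orthonormal role vectors (rows of a random orthogonal matrix), chosen
# so that "unbinding" below recovers each filler exactly.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
roles = {name: Q[i] for i, name in enumerate(["agent", "verb", "theme", "goal"])}

# Filler vectors: one arbitrary vector per word.
fillers = {w: rng.standard_normal(dim) for w in ["Mary", "gave", "book", "John"]}

# "Mary gave the book to John" as a sum of role-filler outer products.
sentence = sum(np.outer(roles[r], fillers[f])
               for r, f in [("agent", "Mary"), ("verb", "gave"),
                            ("theme", "book"), ("goal", "John")])

# Unbinding: contracting the sentence with a role vector pulls out that
# role's filler -- like reading off the word under a given tree node.
print(np.allclose(roles["agent"] @ sentence, fillers["Mary"]))  # True
print(np.allclose(roles["goal"] @ sentence, fillers["John"]))   # True
```

The parallel to a syntax tree is that the roles are fixed structural slots while the fillers change from sentence to sentence; any sentence with the same structure can be packed into (and unpacked from) the same set of role vectors.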

I hope that, in class, we can further discuss the methods these networks use for learning. The ideas discussed in this paper interested me, but were phrased in ways that were difficult for me to read and really comprehend.

If I only had a brain.

Monday, November 2nd, 2009

I enjoyed using the Metacat program, though, to agree with Riki, it didn’t hold a lot of long-term excitement. That aside, I was very entertained (as many of us seemed to be) by the program’s occasional choice to “go get some punch”, was it? What does this show about the program’s similarity to or difference from the human brain?

This message comes up when the program can no longer think of an alternative, intelligent solution to a problem it’s been given. Essentially, it hits a wall. Is this possibly the most human part of the program? Or is it the least? It seems to me like the program is basically saying “I dunno guys, I give up.” On the one hand, this seems like a very human thing to do, when all of the other possibilities seem to be going nowhere. If you’re not getting anywhere, why not give up? Still, in the other direction, it can also be seen as a flaw in the program. It lacks the capacity to think beyond a certain point, gets “frustrated”, and stops. It’s still pretty amazing that a computer program can try multiple ideas, remember which it has tried, and base new ideas off of those memories; is this roadblock another piece of that ability, or a glitch to be fixed in the next step of the Metacat program’s evolution?

Cities and Ants

Monday, September 28th, 2009

I found the information about the role that the age of an ant colony plays particularly interesting, especially in light of our class discussion on Cellular Automata (bear with me on this one), and whether or not free will/won’t really exists.

When we had discussed the idea that everything we do is the result of a preset order, or of chemical reactions, a very philosophical idea struck me: what if the universe itself is forming under the rules of some sort of giant cellular automaton (again, please bear with me) that has, at this point in time, followed its particular rules to form what we see now? But it also struck me that the ages of ant colonies could pose a counter-argument to that entire idea. As I mentioned in my comment on Bethie’s post, the difference between younger colonies and older colonies gives good insight into the debate over whether what we do is determined by consciousness, and whether or not that consciousness can learn.

If learning is something that both humans and ant colonies really do, it’s not a large jump to say that cities can learn as well; in order for an ant colony to learn, each of the individual pieces must learn something and somehow pass it on, either verbally or through pheromone paths. So, for a city to learn, its individual parts and pieces must also learn, and communicate what they have learned to later generations, so that those generations may continue the growth and development without having to re-learn knowledge gained by previous dwellers (in colony or city).

All of this may be drawn totally out of thin air, so please let me know what you all think either way. It’s my personal belief that cities, insects, and humans all learn and grow in a similar manner — maybe similar because of chance, or maybe because learning is in itself a predictable process.