Archive for November, 2009

Metacat – analogies and awareness

Monday, November 2nd, 2009

After experimenting with Metacat in class and later over the weekend, I just have to say that Metacat’s ability to “analyze” and justify its solutions to problems is really cool. When experimenting with Metacat later, I tried to “trick” it into concluding that there was no solution. I entered a->b and then asked Metacat to determine what z would go to. My goal was to see whether Metacat could recognize that there is no letter to send z to and thus no solution. While this seems like one of those problems that Doug’s kid would laugh at, for a program to understand why there is no solution is quite remarkable.
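The boundary case above comes down to one convention: in Copycat-style letter-string worlds the alphabet does not wrap around, so z has no successor. A minimal toy sketch of that idea (hypothetical function names, not Metacat’s actual code, which is written in Scheme):

```python
# Toy sketch of the alphabet-boundary problem described above.
# Copycat/Metacat treat the alphabet as non-circular: 'z' has no successor.

def successor(letter):
    """Return the next letter, or None at the alphabet boundary."""
    if letter == 'z':
        return None  # nothing follows z, so the rule cannot apply
    return chr(ord(letter) + 1)

def apply_rule(target):
    """Apply the rule 'replace the letter by its successor' (a -> b)."""
    result = successor(target)
    if result is None:
        return "no solution: z has no successor"
    return result

print(apply_rule('c'))  # -> d
print(apply_rule('z'))  # -> no solution: z has no successor
```

Of course, the remarkable part is not detecting the boundary (trivial in code) but that Metacat notices the snag itself and can articulate why it matters.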

Additionally, I am still contemplating the two schools of thought about how AI should function. I do see how Metacat’s self-awareness and ability to recall resemble how I would look at a problem, but I am still struggling to see how the other program can produce valid results without using these qualities.

On a side note, I absolutely love that the program has a personality; when justifying its answers, it gives hilarious responses!

You can go east, northeast, or southeast. Where do you go?

Monday, November 2nd, 2009

I thought the program sounded intriguing (sorry, I had to say it!), and it sort of reminded me of those text adventure games because it has a running commentary. After playing with it, I got kind of bored; it’s just letters that you type into the program to see what patterns it can make. I didn’t see how it could be helpful, so I reread the paper and found this: “Rather, strings should be viewed as representing idealized situations involving abstract categories and relations” (8). However, I’m still having trouble thinking of what sort of situations this would entail… Is it just what we talked about in class regarding the analogy web of Nixon-war-hippies-drugs?

Also, I was disappointed when the program couldn’t come up with a satisfactory (by its own standards) answer to the first problem with which I presented it – I think it may have been something really similar to the first example shown in the paper, a string from the xyz family. Unfortunately I can’t remember exactly what input I gave, but I know that it didn’t respond with what I thought was an obvious answer. I thought that the program was supposed to give obvious answers and also clever answers that aren’t immediately apparent to us.

Overall, though, I was impressed with the program’s capacity for self-awareness, and that it can change its own concepts and analogies, which are stored in its memory (also impressive), into ones it finds more helpful as it continues to run.

Metacat – it’s a Copycat that philosophizes about itself! O_O

Sunday, November 1st, 2009

Originally I was going to wait and see what others posted so I could respond. Apparently this is not in the cards. So… The Metacat paper was intriguing. (Yes Doug, I know I need to elaborate on the word.)

In class we covered a lot of pertinent points regarding the Metacat paper: mostly what made Metacat so important to emergence, as well as the differences between Metacat and the models that came before it. I’m fascinated by the different categorizations required to look at the “world” of Metacat. Introducing the idea of having a computer make decisions for itself regarding analogies also emphasizes that an old categorization like “Reagan::parents as drugs::candy” does not make sense, because the categorization itself is based on something totally different from the computer’s perspective. This stresses that computers do not “think” in ways that humans do, but it also takes advantage of how a computer “thinks” in order to interpret and create analogies. Metacat is really awesome here because of its “self-awareness,” which adds a component of memory. The introduction of bias, even a small one (as we discussed with the segregation model mentioned in class previously), has a noticeable effect on the outcome of a given test. Therefore the weights are still important, yet useless without the memory implemented by the Metacat model. By referencing its own past attempts in order to look toward the future, Metacat is far more capable. Instead of using only the present (as most previous models do) as a springboard to speculation about the future, there is also a reference check of past attempts, which makes for a far stronger model of emergence. (We noticed this problem of “no looking back” when we attempted to use gaca.py to match a string.)
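The “no looking back” problem can be illustrated with a toy search loop that keeps an episodic memory of its own past attempts and refuses to retry them. This is only a sketch of the principle, under my own assumptions; it is not Metacat’s actual memory architecture:

```python
import random

# Toy illustration of "looking back": record past attempts in a memory
# and never repeat one that already failed. A memoryless searcher (like
# the gaca.py string matcher mentioned above) can revisit the same dead
# ends over and over; with memory, the search terminates after at most
# len(candidates) distinct tries.

def search_with_memory(candidates, is_solution, max_tries=100):
    memory = set()  # episodic record of attempts already made
    for _ in range(max_tries):
        untried = [c for c in candidates if c not in memory]
        if not untried:
            return None  # every candidate tried and rejected
        choice = random.choice(untried)
        memory.add(choice)  # remember this attempt, pass or fail
        if is_solution(choice):
            return choice
    return None

answer = search_with_memory(['abd', 'abe', 'xyz'], lambda s: s == 'abe')
print(answer)  # -> abe
```

The design point matches the post: the weights (here, the random choice) still drive the search, but without the memory the loop has no way to know it is going in circles.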

Okay… I think I’m starting to get incoherent, so I’ll sign off for now.