Thoughts on AI

Thoughts on the future of humanity, usually posted while I am drunk.

Monday, September 25, 2006

Mother Nature's little white lies(?)


Oracle: But... you already know what I'm going to tell you.

Neo: I'm not The One.

Oracle: Sorry, kid. You got the gift, but it looks like you're waiting for something.

In The Matrix, the Oracle tells Neo that he is not "The One", which leads him to try to sacrifice his life for Morpheus, the act that brings him to the realization that he is "The One" and lets him save the day. It was a Machiavellian move by the Oracle, telling a lie to create an optimal outcome (Morpheus later tells Neo that she told him "only what he needed to hear"), but it worked. Neo knew what he had to know to win.

This is a central moment in the movie. But was what the Oracle said really a lie? Let's try to look a little deeper.

The three main characters (Neo, Trinity, Morpheus) are part of a deeper metaphor, revealed in their names. "Neo" has three letters in his name. By arranging them in a "Trinity" (a triad) you get N, E & O in a triangle. By applying "Morpheus" (whose name shares a root with 'morph', i.e. change) and turning the triangle, you get O, N & E, or "One", as in "The One". This sheds light on what the Oracle was saying, and on what happens later: she said he wasn't the One, and that he would have to choose between Morpheus's life and his own (choose between "change" and his current state as N, E & O). He chooses to sacrifice his "life" (the N, E & O state) for Morpheus, and in doing so realizes that he is the One (the O, N & E state is achieved), in harmony with the metaphor.

So was the Oracle really lying at all? Yes and no, depending on whose perspective you take. Neo can't have his question about being the One answered, because he is limited to a dualistic, 1-bit understanding of the question: he is either the One or not the One, so when the Oracle says he is not, it turns out to be a "lie" - from his limited perspective. But the Oracle's understanding (reflected in the metaphor above) transcends the dualism; it's 2-bit. She knows he is not the One yet, but could be (if he lets his "self" die), so she tells him that he is not the One, which is true at the time, so that he can reach the state of being the One, which will be true in the future.

So Neo's 1-bit system can only fit the Oracle's 2-bit truth over time. In technical terms, the Oracle is forced to multiplex the truth so Neo can comprehend it. It was necessary that she do so, because simply telling him that he could become the One if he sacrificed his life for Morpheus wouldn't make any sense: it would seemingly result in the "death" of the very One who was supposed to save everybody, and explaining that the death was "metaphorical" wouldn't lead Neo to risk his life in the way he had to in order to achieve that metaphorical death, or state change.
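
To put the multiplexing idea in more concrete terms, here is a loose sketch (my own framing, nothing from the movie's script): a 2-bit truth pushed through a 1-bit channel has to be sent as two 1-bit answers at two different times, each one true for the moment it is given.

```python
# A loose analogy sketch of "multiplexing a truth over time" (my framing):
# the Oracle holds a 2-bit truth, but Neo can only absorb 1 bit per question,
# so the truth is delivered one bit per moment, each bit true when it is given.

# The Oracle's 2-bit truth: (is the One right now?, can become the One?)
truth = (False, True)

def oracle_answer(time_step):
    """Return the single bit Neo can take in at this point in the story."""
    if time_step == 0:
        return truth[0]   # before the sacrifice: "you are not the One"
    return truth[1]       # after the sacrifice: "you are the One"

for t in (0, 1):
    verdict = "the One" if oracle_answer(t) else "not the One"
    print(f"t={t}: {verdict}")
```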

The point of all this is that once we accept that our minds are limited, we also have to accept that the optimal beliefs for a situation are not always the accurate ones. That has some deep consequences for AI, which I will be turning over in my head...

Friday, September 15, 2006

Whenever you have a good idea...

Somebody else has come up with it...

So as I have been researching this more, I have discovered MaxEnt and its applications in AI. MaxEnt is short for Maximum Entropy. From Wikipedia:
In physics the Maximum entropy school of thermodynamics (or more colloquially, the MaxEnt school of thermodynamics), initiated with two papers published in the Physical Review by Edwin T. Jaynes in 1957, views statistical mechanics as an inference process: a specific application of inference techniques rooted in information theory, which relate not just to equilibrium thermodynamics, but are general to all problems requiring prediction from incomplete or insufficient data (such as for example image reconstruction, spectral analysis, or inverse problems).
So in other words, statistical mechanics is based on an inference process that lets us build incrementally more accurate models of intractably complex systems... which, as far as I am concerned, is AI in a nutshell.
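
To make that concrete, here is a tiny sketch (my own toy example, loosely following Jaynes's dice illustration, not anything from the Wikipedia article) of maximum entropy inference: given only the average roll of a die, find the least-biased distribution over its faces. The maximum entropy solution has the exponential form p_i ∝ exp(λ·x_i), so the whole problem reduces to searching for the λ that reproduces the observed mean.

```python
# Maximum entropy inference from incomplete data: all we know about a die is
# that its average roll is 4.5; find the least-biased probabilities consistent
# with that. The max-entropy solution is p_i proportional to exp(lam * x_i),
# so we just bisect on lam until the implied mean matches the observed one.

import math

faces = [1, 2, 3, 4, 5, 6]
target_mean = 4.5  # the only data we have

def mean_for(lam):
    weights = [math.exp(lam * x) for x in faces]
    total = sum(weights)
    probs = [w / total for w in weights]
    return sum(p * x for p, x in zip(probs, faces)), probs

# The implied mean increases monotonically with lam, so bisection works.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    m, _ = mean_for(mid)
    if m < target_mean:
        lo = mid
    else:
        hi = mid

m, probs = mean_for((lo + hi) / 2)
for x, p in zip(faces, probs):
    print(f"P(face={x}) = {p:.3f}")
print(f"implied mean = {m:.3f}")
```

Note how the answer isn't uniform: the constraint alone is enough to tilt the distribution toward the higher faces, and entropy maximization decides exactly how much to tilt it and no more.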

I'm not sure how MaxEnt stacks up against other approaches, but it seems to have been very successful at natural language processing. A great tutorial, here, gives translation of English to French as an example. The most widely cited MaxEnt software I have found comes from the OpenNLP (open natural language processing) project at SourceForge.
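
For flavor, here is a toy maximum-entropy model in the spirit of the tutorial's example of choosing a French translation for the English word "in". The classes, context features, and mini dataset below are all made up for illustration; the point is just the model family, a conditional exponential model (multinomial logistic regression), which is what packages like OpenNLP's maxent implement.

```python
# Toy maximum-entropy classifier: pick a French translation of "in" from
# context. Everything here (classes, features, data) is invented for
# illustration; real maxent NLP models use thousands of indicator features.

import numpy as np

classes = ["dans", "en", "pendant"]            # candidate translations of "in"
# Context features: [next word is a month?, next word is a duration?]
X = np.array([[1, 0], [1, 0], [0, 1], [0, 1], [0, 0]], dtype=float)
y = np.array([1, 1, 2, 2, 0])                  # correct translation per example

X = np.hstack([X, np.ones((len(X), 1))])       # add a bias feature
W = np.zeros((len(classes), X.shape[1]))       # one weight vector per class

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Gradient ascent on the conditional log-likelihood: push the model's expected
# feature counts toward the observed ones (the same fixed point the classic
# GIS/IIS training procedures aim for, just reached by a different route).
for _ in range(1000):
    probs = softmax(X @ W.T)                   # (n_examples, n_classes)
    grad = (np.eye(len(classes))[y] - probs).T @ X
    W += 0.1 * grad

print(np.round(softmax(X @ W.T), 2))           # the model's guess for each context
```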

Anyway, it's vaguely disappointing to find that the path you were walking has been walked before, but it's also very encouraging to see that it worked. It means my intuition is good, and I'm not nutty. If my intuition continues to be good, entropy can be used as a broad unifying model to describe a bunch of superficially different systems in AI.