Tuesday, June 26, 2012

A Neural Network for Intransitive Choice

In Foundation and Second Foundation, I mentioned that a major axiom of economics is the assumption that revealed preferences are transitive.  Specifically, this is the second of the Von Neumann-Morgenstern axioms, which states:

"Axiom 2 (Transitivity)
For any three choices (A, B, and C), if A is preferred to B, and B is preferred to C, A is preferred to C."

For a common currency interpretation of neuroeconomics to hold, this axiom cannot be violated.
So I set out to do three things:

1. Design (train) a neural network that can take two values as input and output the greater one.

2. Design a neural network that stores (perhaps in a distributed manner) subjective values associated with specific stimuli. When presented with any two stimuli, it chooses the one with the greater subjective value.

3. Design such a network that solves this problem without any kind of common currency representation.

Let’s start off by comparing two numbers.  In Anderson et al.’s paper A Study in Numerical Perversity: Teaching Arithmetic to a Neural Network, the authors present a model for comparing two numbers.  The following is a bastardization of that model.

Assume we have two numbers, A and B, and we want to know which is bigger.  First, a question arises: how do we store these numbers using neurons?  There are a couple of different solutions, but I’m going to present the easiest.  Assume that there are three neurons associated with A; A is equal to however many of these neurons are activated at any given point.  The same holds true for B.

Thus, A and B are both elements in the set {0,1,2,3}, possibly the same element.
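To make the encoding concrete, here’s a minimal sketch in Python (the encode function and its name are mine, not Anderson et al.’s):

```python
def encode(n, size=3):
    """Unary code: n in {0, ..., size} becomes a vector with n active neurons."""
    return [1 if i < n else 0 for i in range(size)]

print(encode(3))  # [1, 1, 1] -> the number 3
print(encode(2))  # [1, 1, 0] -> the number 2
print(encode(0))  # [0, 0, 0] -> the number 0
```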
[Figure 1]


Now, imagine those six neurons feeding into two mid neurons, which in turn feed into two output neurons (see Figure 1).

[Figure 2]

For visual clarity, we can reduce A and B down to one circle each, each worth a certain value, without losing any information (see Figure 2).  Suppose A = 3 and B = 2.  A sends three excitatory signals to the top mid neuron and three inhibitory signals to the bottom mid neuron.  Likewise, B sends two inhibitory signals to the top mid neuron and two excitatory signals to the bottom mid neuron.  The bottom mid neuron remains inactive, but the top one activates.  The process is repeated, somewhat redundantly, at the output layer, and the top output neuron, corresponding to A, activates, indicating that A is the larger of the two.  If neither output neuron activated, the two numbers would be equal.
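Here’s a sketch of this comparator in Python.  The weights (+1 excitatory, -1 inhibitory) and the fire-if-positive threshold are my own guesses at the simplest values that make the figure work:

```python
def compare(a_neurons, b_neurons):
    # Each active A neuron adds +1 to the "A is bigger" unit and -1 to
    # the "B is bigger" unit; each active B neuron does the opposite.
    net = sum(a_neurons) - sum(b_neurons)
    top = 1 if net > 0 else 0      # fires when A > B
    bottom = 1 if net < 0 else 0   # fires when B > A
    return top, bottom

print(compare([1, 1, 1], [1, 1, 0]))  # (1, 0): A = 3 beats B = 2
print(compare([1, 1, 0], [1, 1, 0]))  # (0, 0): equal, neither fires
```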

Now that we’ve compared two numbers, let’s expand to comparing two options.  Assume our neural network has two choices, M and N, and we want it to show a preference for one choice or the other.  Now, assume that each choice has multiple stimuli, each with its own subjective value.  In this case, let’s assume each has three stimuli, so we can represent each choice with an array like {a, b, c}.

Similar to the number comparison, we can represent each stimulus as a collection of neurons.  To simplify the model, we’ll assume that each individual stimulus is represented by one neuron.  Therefore, we’ll have two sets of three input neurons and two output neurons, each output neuron corresponding to one of the two choices.  In between, we will have some hidden middle steps.  Whichever choice the network prefers gets a positive output.

Now, let’s assume we have three options: L, M, and N. L = {1, 0, 0}, M = {0, 1, 0}, and N = {0, 0, 1}.  We would like a neural network that prefers L to M, M to N, and N to L (much like a game of Rock, Paper, Scissors). What hidden steps and neurons do we need?

The answer is one step, six neurons.  As shown in the following figures, each stimulus not only excites the output for its own choice, but also inhibits the excitation driven by one particular stimulus of the other choice, in a cyclic pattern: stimulus 1 silences stimulus 2, stimulus 2 silences stimulus 3, and stimulus 3 silences stimulus 1.  Thus, it’s easy to find three choices that cause this neural network to show intransitive preferences.  But despite these choices being intransitive, knowledge of this network would allow perfect prediction of its preference between any pair of choices presented to it.  This combination of intransitivity and predictive power violates the second of the Von Neumann-Morgenstern axioms stated in part 1.  As explained in part 1, this is very exciting for economics.
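Here’s how I would sketch that network in Python.  The specific cyclic wiring (which stimulus silences which) is my reading of the figures, and the weights and thresholds are my own choices:

```python
BEATER = {0: 2, 1: 0, 2: 1}  # stimulus i is silenced by stimulus BEATER[i]

def prefer(x, y):
    """Given two choices as 3-element stimulus vectors, return the winner."""
    def drive(own, rival):
        # Hidden neuron i fires if its stimulus is active and the rival
        # stimulus that silences it is absent; the output sums these.
        return sum(1 for i in range(3) if own[i] and not rival[BEATER[i]])
    out_x, out_y = drive(x, y), drive(y, x)
    return "first" if out_x > out_y else "second" if out_y > out_x else "tie"

L, M, N = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(prefer(L, M))  # first -> L preferred to M
print(prefer(M, N))  # first -> M preferred to N
print(prefer(N, L))  # first -> N preferred to L, closing the cycle
```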
This is the simplest general case of the problem, but I think it’s pretty clear that if the weights in the middle were left blank, the network could then be trained on any set of choices with any set of preferences, including three choices where each is preferred to the next in a cycle.  Furthermore, expanding this network to consider more than two choices at a time, or more than three stimuli per choice, would pose no real challenge or fundamental change to the design beyond increasing the number of neurons needed.
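As a sanity check on the trainability claim, here’s a rough sketch (entirely my own, with arbitrary hyperparameters) of a tiny one-hidden-layer network learning the cycle by backpropagation from random starting weights.  Note that the hidden layer is genuinely necessary: a single linear layer cannot produce a preference cycle that also reverses when the two choices swap input slots, because the three cyclic constraints sum to a contradiction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

# Inputs: the first choice's 3 stimuli, then the second choice's 3.
# Target: 1 if the first choice should be preferred, else 0.
L, M, N = [1, 0, 0], [0, 1, 0], [0, 0, 1]
X = np.array([L + M, M + L, M + N, N + M, N + L, L + N], dtype=float)
t = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])

W1 = rng.normal(0, 1, (6, 8)); b1 = np.zeros(8)  # hidden layer
W2 = rng.normal(0, 1, 8);      b2 = 0.0          # output neuron

for _ in range(5000):                 # plain batch backprop on MSE
    h = sig(X @ W1 + b1)
    y = sig(h @ W2 + b2)
    d_out = (y - t) * y * (1 - y)     # output-layer delta
    d_hid = np.outer(d_out, W2) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum()
    W1 -= 0.5 * (X.T @ d_hid); b1 -= 0.5 * d_hid.sum(axis=0)

print(np.round(y, 2))  # should approach [1 0 1 0 1 0]: L>M, M>N, N>L
```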
In any case, we’ve now solved the problem stated at the beginning of this post.  We have a neural network that can decide between two choices, and it can do so without resorting to a common currency or a utility ranking.  Indeed, in this case, such a ranking is impossible, since it would give misleading predictions about which choice is preferred.