For a single conditioned (i.e., predictive) stimulus, Hebbian learning actually works fine. The problem arises when there are several conditioned stimuli: Hebbian learning would then create too many associations, and in an unbalanced way. For example, consider an experiment in which both a bell and a green light predict food. Simple Hebbian learning would associate both stimuli with the food, since the association strengths would be computed independently of each other. But this contradicts what seems to happen in the brain. The interaction between predictions has been investigated in a famous twist on the basic classical conditioning experiment with the bell: after the main experiment, a second experiment is run in which both the bell and a newly introduced green light predict food. In this case, the dog does not learn to associate the green light with the food, because the connection from the bell already suffices to predict the food, so there is no need to construct an association from the light. This is in contrast to what Hebbian learning is supposed to do. The brain apparently tries to be economical and constructs only those connections that are necessary for predicting the food. Therefore, the association strength of one conditioned stimulus also depends on the associations of the other stimuli. This is why most research assumes a supervised model, which typically learns several such association strengths in a balanced way and thus explains the various experiments better than simple Hebbian learning does. A basic supervised learning rule accomplishing this is the Rescorla-Wagner model (Miller et al., 1995), which further models the dynamics of learning, as in the bell/light example just given; in particular, the model explains how the existing association with the bell “blocks” the development of a new association with the light.
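The blocking effect described above can be sketched in a few lines of code. The key point is that the Rescorla-Wagner update uses a single shared prediction error, computed from the *sum* of the association strengths of all stimuli present on a trial, so the stimuli are coupled rather than learned independently as in Hebbian learning. The parameter values below (learning rate, asymptote, trial counts) are illustrative assumptions, not values from the original experiments.

```python
def rescorla_wagner(trials, n_stimuli, alpha=0.1, lam=1.0):
    """Minimal Rescorla-Wagner simulation (illustrative sketch).

    Each trial is a pair (present, reward): `present` is the set of
    stimulus indices shown on that trial, `reward` is 0 or 1.
    """
    V = [0.0] * n_stimuli  # association strengths
    for present, reward in trials:
        # The prediction is the sum over all present stimuli; the
        # resulting shared error term is what couples the stimuli.
        prediction = sum(V[i] for i in present)
        error = reward * lam - prediction
        for i in present:
            V[i] += alpha * error
    return V

BELL, LIGHT = 0, 1
# Phase 1: the bell alone predicts food.
phase1 = [({BELL}, 1)] * 100
# Phase 2: bell and light together predict food.
phase2 = [({BELL, LIGHT}, 1)] * 100
V = rescorla_wagner(phase1 + phase2, n_stimuli=2)
print(round(V[BELL], 2), round(V[LIGHT], 2))  # → 1.0 0.0
```

After phase 1 the bell's association is near the asymptote, so during phase 2 the prediction error is already close to zero and the light acquires almost no association strength: the bell blocks the light, exactly as in the dog experiment. A purely Hebbian rule, updating each stimulus independently, would instead drive both associations toward the asymptote.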