The performance of supervised learning algorithms is discussed in terms of computational effort. We show, using numerical simulations, that several off-line algorithms, implemented as iterated on-line algorithms and designed to reach at least the border of the version space, reproduce the 0.5/α behavior of the generalization error attributed to maximal-stability algorithms. However, if the cost of attaining a given generalization level is measured by quantities related to computation, such as the number of example presentations or the number of synaptic corrections, the performance of these algorithms falls below that of most on-line strategies. We also show that mixed strategies for presenting the training set do not improve learning.
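As an illustration of the kind of simulation discussed here, the following is a minimal sketch (not the authors' actual code) of a teacher-student perceptron experiment under assumed conventions: i.i.d. Gaussian inputs, an off-line rule realized as an iterated on-line (cyclic) perceptron algorithm that halts at the border of the version space (zero training error), and the computational cost tracked as example presentations and synaptic corrections. The function name `simulate` and all parameter values are hypothetical.

```python
import numpy as np

def simulate(N=200, alpha=4.0, max_sweeps=10_000, seed=0):
    """Teacher-student perceptron; iterated on-line learning to the
    version-space border, counting presentations and corrections."""
    rng = np.random.default_rng(seed)
    P = int(alpha * N)                     # alpha = P/N examples per weight
    teacher = rng.standard_normal(N)
    X = rng.standard_normal((P, N))
    y = np.sign(X @ teacher)               # labels from the teacher rule

    w = np.zeros(N)
    presentations = corrections = 0
    for _ in range(max_sweeps):
        errors = 0
        for xi, yi in zip(X, y):           # one sweep over the training set
            presentations += 1
            if yi * (w @ xi) <= 0:         # misclassified (or on the boundary)
                w += yi * xi               # Rosenblatt-type synaptic correction
                corrections += 1
                errors += 1
        if errors == 0:                    # reached the border of the version space
            break

    # Generalization error from the normalized teacher-student overlap R:
    # eps = arccos(R) / pi for spherically symmetric inputs.
    R = (w @ teacher) / (np.linalg.norm(w) * np.linalg.norm(teacher))
    eps = np.arccos(np.clip(R, -1.0, 1.0)) / np.pi
    return eps, presentations, corrections

if __name__ == "__main__":
    eps, pres, corr = simulate()
    print(f"generalization error ~ {eps:.3f}, "
          f"{pres} presentations, {corr} corrections")
```

Averaging `eps` over seeds at several values of α would trace the decay of the generalization error, while `presentations` and `corrections` give the computation-related cost measures against which on-line and off-line strategies can be compared.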