The computational complexity of learning from binary examples is
investigated for linear threshold neurons. We introduce
combinatorial measures that give rise to classes of infinitely many
learning problems with restricted samples. We analyze how the
complexity of these problems depends on the values of these measures.
The results are established as dichotomy theorems showing that each
problem is either NP-complete or solvable in polynomial time.
In particular, we consider consistency and maximum consistency
problems for neurons with binary weights, and maximum consistency
problems for neurons with arbitrary weights. We determine for each
problem class the dividing line between the NP-complete and
polynomial-time solvable problems. Moreover, all efficiently
solvable problems are shown to have constructive algorithms that
require no more than linear time on a random access machine model.
Similar dichotomies are exhibited for neurons with bounded
threshold.
The results demonstrate, on the one hand, that considering sample
constraints can lead to the discovery of new efficient algorithms
for nontrivial learning problems. On the other hand, hard learning
problems may remain intractable even for severely restricted
samples.
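
To illustrate the kind of problem studied, the following is a minimal
brute-force sketch (in Python, not taken from the paper) of the
consistency problem for a linear threshold neuron with binary weights:
given labeled binary examples, decide whether some weight vector
w in {0,1}^n and threshold t classify all of them correctly. The
function name and encoding are our own; the paper's dichotomy concerns
when this problem, NP-complete in general, admits a linear-time
algorithm under sample restrictions.

```python
from itertools import product

def find_consistent_neuron(sample, n):
    """Brute-force search for binary weights w in {0,1}^n and a real
    threshold t such that the neuron outputs 1 on input x exactly when
    w.x >= t, agreeing with every labeled example in `sample` (a list
    of pairs (x, y) with x a 0/1 tuple of length n and y in {0, 1}).
    Returns a consistent (w, t), or None if no such neuron exists.
    Time O(2^n * |sample| * n): purely illustrative, not the
    linear-time algorithms established in the paper."""
    for w in product((0, 1), repeat=n):
        activation = lambda x: sum(wi * xi for wi, xi in zip(w, x))
        pos = [activation(x) for x, y in sample if y == 1]
        neg = [activation(x) for x, y in sample if y == 0]
        # A separating threshold exists iff every negative activation
        # lies strictly below every positive activation.
        if not pos or not neg or max(neg) < min(pos):
            t = min(pos) if pos else max(neg, default=0) + 1
            return w, t
    return None

# Example: this sample is logical AND, realized by w = (1, 1), t = 2.
sample = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(find_consistent_neuron(sample, 2))  # ((1, 1), 2)
```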
