Andreas Birkendorf, Eli Dichterman, Jeffrey Jackson, Norbert Klasner, Hans Ulrich Simon:

On Restricted-Focus-of-Attention Learnability of Boolean Functions

In the $k$-Restricted-Focus-of-Attention ($k$-RFA) model, only $k$ of the $n$ attributes of each example are revealed to the learner, although the set of visible attributes in each example is determined by the learner. While the $k$-RFA model is a natural extension of the PAC model, there are also significant differences. For example, it was previously known that learnability in this model is not characterized by the VC dimension and that many PAC learning algorithms are not applicable in the $k$-RFA setting.
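To make the difference between the two example oracles concrete, here is a minimal sketch (not code from the paper): a PAC oracle reveals every attribute of a randomly drawn example, while a $k$-RFA oracle reveals only the at most $k$ attributes the learner names before the draw, together with the label. The attribute distribution is taken to be uniform over $\{0,1\}^n$ purely for illustration, and the names pac_example and rfa_example are hypothetical.

    import random

    def pac_example(target, n):
        """PAC oracle: draw x at random and reveal the full labeled example (x, f(x))."""
        x = tuple(random.randint(0, 1) for _ in range(n))
        return x, target(x)

    def rfa_example(target, n, visible):
        """k-RFA oracle (illustrative): the learner names a set `visible` of at most k
        attribute indices before the example is drawn; only those attributes of x are
        revealed, together with the label f(x)."""
        x = tuple(random.randint(0, 1) for _ in range(n))
        shown = {i: x[i] for i in visible}   # all other attributes stay hidden
        return shown, target(x)

    # Toy target over n = 4 Boolean attributes, focusing attention on attributes {0, 1}.
    f = lambda x: x[0] ^ x[1]
    print(pac_example(f, 4))
    print(rfa_example(f, 4, visible={0, 1}))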

In this paper we further explore the relationship between the PAC and $k$-RFA models, with several interesting results. First, we develop a characterization of $k$-RFA learnability upon which we build a general tool for proving hardness results. We then apply this and other new techniques for studying RFA learning to two particularly expressive function classes: $k$-decision-lists ($k$-DL) and $k$-TOP, the class of thresholds of parity functions in which each parity function takes at most $k$ inputs. Among other results, we prove a hardness result for $k$-RFA learnability of $k$-DL for $k \leq n-2$; in sharp contrast, we present an $(n-1)$-RFA algorithm for learning $(n-1)$-decision-lists. We also prove that 1-DL is learnable if and only if at least half of the inputs are visible in each instance. In addition, we show that there is a uniform-distribution $k$-RFA learning algorithm for the class of $k$-decision-lists in which the output value alternates at most $j$ times; this algorithm is efficient for constant $j$ and $k$. For $k$-TOP we show weak learnability by a $k$-RFA algorithm (efficient for constant $k$) and strong uniform-distribution $k$-RFA learnability with polynomial sample complexity. Finally, by combining some of our $k$-DL and $k$-TOP results, we show that, unlike in the PAC model, weak learning does {\em not} imply strong learning in the $k$-RFA model.
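For readers unfamiliar with the two function classes, the following sketch evaluates a $k$-decision-list and a $k$-TOP function according to their standard definitions: a decision-list checks conjunctions of at most $k$ literals in order and outputs the bit attached to the first satisfied one, while a $k$-TOP function thresholds a weighted sum of parities over at most $k$ inputs each. The function names and data encodings are illustrative choices, not taken from the paper.

    def eval_decision_list(dl, default, x):
        """Evaluate a k-decision-list: `dl` is a sequence of (term, bit) pairs, each term
        a conjunction over at most k literals given as a dict {index: required_value}.
        The first satisfied term determines the output; otherwise `default` is returned."""
        for term, bit in dl:
            if all(x[i] == v for i, v in term.items()):
                return bit
        return default

    def eval_ktop(parities, weights, threshold, x):
        """Evaluate a k-TOP function: a weighted threshold of parity functions, each
        parity chi_S(x) = (-1)^(sum of x_i, i in S) taken over at most k indices."""
        total = sum(w * (-1) ** sum(x[i] for i in s) for s, w in zip(parities, weights))
        return 1 if total >= threshold else 0

    # Toy 2-DL over 3 attributes: "if x0=1 and x2=0 output 1; else if x1=1 output 0; else 1".
    dl = [({0: 1, 2: 0}, 1), ({1: 1}, 0)]
    print(eval_decision_list(dl, default=1, x=(1, 0, 0)))            # -> 1

    # Toy 2-TOP: threshold of chi_{0,1}(x) + 2*chi_{2}(x) at 0.
    print(eval_ktop([(0, 1), (2,)], [1.0, 2.0], 0.0, (1, 0, 1)))     # -> 0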


Last update: 11/17/98. Norbert Klasner, klasner@lmi.ruhr-uni-bochum.de.
