Simplicity and Complexity of Probabilistically Defined Concepts
Abstract
Human concept learning is known to be impaired by conceptual complexity: simpler concepts are easier to learn, and more complex ones are more difficult. However, this simplicity bias has been studied almost exclusively in the context of deterministic concepts defined over Boolean features, and is comparatively unexplored in the more general case of probabilistic concepts defined over continuous features. This paper reports a series of experiments in which subjects were asked to learn probabilistic concepts defined over a novel 2D continuous feature space. Each concept was a mixture of several distinct Gaussian components, and the complexity of the concepts was varied by manipulating the positions of the mixture components relative to each other while holding the number of components constant. The results confirm that the positioning of mixture components strongly affects learning, independent of the intrinsic statistical separability of the concepts, which was manipulated separately. Moreover, the results point to an information-theoretic framework for quantifying the complexity of probabilistic concepts, centered on the notion of compressive complexity: simple concepts are those that can be approximately recovered from a projection of the concept onto a lower-dimensional feature space, while more complex concepts are those that can only be represented by combining features. The framework provides a consistent, coherent, and broadly applicable measure of the complexity of probabilistic concepts.
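The abstract describes compressive complexity only informally. The sketch below is not the paper's actual measure; it is a minimal illustration of one way such a quantity could be estimated for a two-component Gaussian mixture concept in a 2D feature space, using the Kullback-Leibler divergence between the full 2D concept and the product of its 1D marginal projections as a stand-in for how poorly the concept compresses onto lower-dimensional features. The mixture weights, component means, covariance, and the KL-based proxy are all illustrative assumptions, not values or definitions taken from the paper.

    import numpy as np
    from scipy.stats import multivariate_normal, norm

    rng = np.random.default_rng(0)

    # Hypothetical concept: an equal-weight mixture of two Gaussian components
    # in a 2D feature space, separated along a diagonal direction so that
    # neither single feature captures the structure on its own.
    weights = np.array([0.5, 0.5])
    means = np.array([[-1.0, -1.0], [1.0, 1.0]])
    cov = np.eye(2) * 0.5

    def joint_density(x):
        """Density of the full 2D mixture at points x of shape (n, 2)."""
        return sum(w * multivariate_normal(m, cov).pdf(x)
                   for w, m in zip(weights, means))

    def marginal_density(x, dim):
        """Density of the 1D marginal projection onto a single feature."""
        return sum(w * norm(m[dim], np.sqrt(cov[dim, dim])).pdf(x)
                   for w, m in zip(weights, means))

    # Monte Carlo estimate of KL(joint || product of marginals): one possible
    # proxy for how much of the concept is lost when it is summarized by its
    # 1D projections (larger value = less compressible = more complex).
    n = 100_000
    comp = rng.choice(2, size=n, p=weights)
    samples = rng.multivariate_normal(np.zeros(2), cov, size=n) + means[comp]

    p_joint = joint_density(samples)
    p_prod = (marginal_density(samples[:, 0], 0)
              * marginal_density(samples[:, 1], 1))
    kl = np.mean(np.log(p_joint) - np.log(p_prod))
    print(f"Estimated KL(joint || product of marginals): {kl:.3f} nats")

Under these assumed settings, moving the two component means so that they differ along only one feature axis would drive the estimate toward zero, matching the intuition that such a concept is recoverable from a single 1D projection.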