Abstract

Saltatory alternations occur when two sounds alternate with each other, excluding a third sound that is phonetically intermediate between the two alternating sounds (e.g. [p] alternates with [β], with nonalternating, phonetically intermediate [b]). Such alternations are attested in natural language, so they must be learnable; however, experimental work suggests that they are dispreferred by language learners. This article presents a computationally implemented phonological framework that can account for both the existence and the dispreferred status of saltatory alternations. The framework is implemented in a maximum entropy learning model (Goldwater & Johnson 2003) with two significant components. The first is a set of constraints penalizing correspondence between specific segments, formalized as *Map constraints (Zuraw 2007, 2013), which enables the model to learn saltatory alternations at all. The second is a substantive bias based on the P-map (Steriade 2009 [2001]), implemented via the model’s prior probability distribution, which favors alternations between perceptually similar sounds. Comparing the model’s predictions to results from artificial language experiments, the substantively biased model outperforms control models that do not have a substantive bias, providing support for the role of substantive bias in phonological learning.
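For concreteness, the maximum entropy grammar of Goldwater & Johnson (2003) can be sketched in its standard form (the sketch below is illustrative and does not reproduce the article's exact implementation). Given an input $x$ with candidate outputs $y$, constraints $C_k$, and nonnegative weights $w_k$, each candidate receives the probability

\[ P(y \mid x) \;=\; \frac{\exp\bigl(-\sum_k w_k\, C_k(x,y)\bigr)}{\sum_{y'} \exp\bigl(-\sum_k w_k\, C_k(x,y')\bigr)}, \]

and learning selects the weights that maximize the log-likelihood of the training data plus a Gaussian prior over weights,

\[ \sum_i \log P(y_i \mid x_i) \;-\; \sum_k \frac{(w_k - \mu_k)^2}{2\sigma_k^2}. \]

The prior parameters $\mu_k$ and $\sigma_k^2$ provide a natural locus for a substantive bias: for example, *Map constraints relating perceptually dissimilar sound pairs could be assigned larger preferred weights $\mu_k$ (or tighter variances $\sigma_k^2$) than those relating similar pairs, so that the learner resists positing dissimilar alternations unless the data demand them. The particular choice of encoding the bias through $\mu_k$ versus $\sigma_k^2$ here is an assumption for illustration.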
