Accounting for the learnability of saltation in phonological theory: A maximum entropy model with a P-map bias: Supplementary material

Language 93.1, March 2017

ACCOUNTING FOR THE LEARNABILITY OF SALTATION IN PHONOLOGICAL THEORY: A MAXIMUM ENTROPY MODEL WITH A P-MAP BIAS: ONLINE SUPPLEMENTARY MATERIALS

JAMES WHITE
University College London

MORE DETAILS ON PRIOR AND POSTLEARNING WEIGHTS. Provided below is a detailed summary of the prior and postlearning weights for each of the three models (substantively biased, unbiased, and anti-alternation) in experiment 1 and experiment 2.

EXPERIMENT 1: SUBSTANTIVELY BIASED MODEL. See Table 4 in §5.1 of the main text.

EXPERIMENT 1: UNBIASED MODEL. Table S1 shows the prior and postlearning weights of the unbiased model in experiment 1. Most of the work in this model is done by the markedness constraints alone. Since stops, and voiceless obstruents in general, never appear as outputs, the weights of the two markedness constraints (*V[−voice]V and *V[−cont]V) increase in both conditions. Moreover, there is little reason for the *MAP constraints to pick up substantial weight, because none of the obstruents surfaces unchanged during training. A few of the *MAP constraints do pick up a small weight; these constraints play a minor role in ruling out alternations not seen during training (e.g. ensuring that [p] → [v], not [p] → [f] or [p] → [b]).

CONSTRAINT       PRIOR WEIGHT    POSTLEARNING WEIGHT
                                 POTENTIALLY SALTATORY    CONTROL
                                 CONDITION                CONDITION
*V[−voice]V      0               1.49                     1.41
*V[−cont]V       0               1.49                     1.72
*MAP(p, v)       0               0                        0
*MAP(t, ð)       0               0                        0
*MAP(p, b)       0               0.54                     0.15
*MAP(t, d)       0               0.54                     0.15
*MAP(p, f)       0               0.54                     0
*MAP(t, θ)       0               0.54                     0
*MAP(b, v)       0               0                        0
*MAP(d, ð)       0               0                        0
*MAP(f, v)       0               0                        0
*MAP(θ, ð)       0               0                        0
*MAP(b, f)       0               0                        0.56
*MAP(d, θ)       0               0                        0.56

TABLE S1. Prior constraint weights and postlearning weights (unbiased model) in the potentially saltatory and control conditions of experiment 1.

EXPERIMENT 1: ANTI-ALTERNATION MODEL. The *MAP constraints in the anti-alternation model each have a prior weight of 2.27 (i.e. the average of the prior weights in the substantively biased model).
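The effect of such prior weights can be made concrete with a small MaxEnt computation: a candidate's harmony is the weighted sum of its constraint violations, and its probability is proportional to exp(−harmony). The sketch below is a hypothetical illustration, not the implementation used in the paper: the helper `maxent_probs` and the violation profiles assumed for candidate outputs of an intervocalic /p/ are my own assumptions; the weights are the anti-alternation priors just described (0 for the markedness constraints, 2.27 for each *MAP constraint).

```python
import math

def maxent_probs(weights, candidates):
    """MaxEnt grammar: P(candidate) is proportional to exp(-harmony),
    where harmony is the weighted sum of constraint violations."""
    harmony = {out: sum(weights[c] for c in viols)
               for out, viols in candidates.items()}
    z = sum(math.exp(-h) for h in harmony.values())
    return {out: math.exp(-h) / z for out, h in harmony.items()}

# Prior weights of the anti-alternation model: markedness constraints
# start at 0, every *MAP constraint at 2.27.
prior = {
    "*V[-voice]V": 0.0, "*V[-cont]V": 0.0,
    "*MAP(p,v)": 2.27, "*MAP(p,b)": 2.27, "*MAP(p,f)": 2.27,
}

# Assumed violation profiles for candidate outputs of intervocalic /p/.
candidates = {
    "p": ["*V[-voice]V", "*V[-cont]V"],  # faithful: violates both markedness
    "b": ["*V[-cont]V", "*MAP(p,b)"],    # voicing alternation
    "f": ["*V[-voice]V", "*MAP(p,f)"],   # spirantization only
    "v": ["*MAP(p,v)"],                  # saltatory alternation
}

probs = maxent_probs(prior, candidates)
# Under the prior, the faithful candidate [p] wins (~0.76): every
# alternation incurs a weighted *MAP violation, while the markedness
# constraints have no weight yet.
```

Training then redistributes this probability by raising the markedness weights and lowering the weight of the *MAP constraint penalizing the trained alternation, which is the pattern visible in the postlearning columns of the tables.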
Table S2 shows how these weights change as a result of training in the two conditions of experiment 1. The general behavior of the weights in this model is similar to that in the substantively biased model. In the potentially saltatory condition, the weights of both markedness constraints increase, while the *MAP constraints penalizing the trained alternations, *MAP(p, v) and *MAP(t, ð), have weights that decrease. The other *MAP constraints show either small changes to their weights (if they play a minor role in preventing unobserved alternations) or no change (if they do not affect the outcome at all). In the control condition, the alternations encountered during training, [b] → [v] and [d] → [ð], result in a substantial increase in the weight of *V[−cont]V and a decrease in the weights of the relevant *MAP constraints, *MAP(b, v) and *MAP(d, ð). The other markedness constraint, *V[−voice]V, receives a modest increase in weight because no voiceless obstruents appear as outputs. The other *MAP constraints show either small increases or no change in their weights, depending on whether they play any role in the outcome.

CONSTRAINT       PRIOR WEIGHT    POSTLEARNING WEIGHT
                                 POTENTIALLY SALTATORY    CONTROL
                                 CONDITION                CONDITION
*V[−voice]V      0               1.62                     0.75
*V[−cont]V       0               1.62                     2.19
*MAP(p, v)       2.27            1.22                     2.27
*MAP(t, ð)       2.27            1.22                     2.27
*MAP(p, b)       2.27            2.51                     2.32
*MAP(t, d)       2.27            2.51                     2.32
*MAP(p, f)       2.27            2.51                     2.27
*MAP(t, θ)       2.27            2.51                     2.27
*MAP(b, v)       2.27            2.27                     0.85
*MAP(d, ð)       2.27            2.27                     0.85
*MAP(f, v)       2.27            2.27                     2.27
*MAP(θ, ð)       2.27            2.27                     2.27
*MAP(b, f)       2.27            2.27                     2.60
*MAP(d, θ)       2.27            2.27                     2.60

TABLE S2. Prior constraint weights and postlearning weights (anti-alternation model) in the potentially saltatory and control conditions of experiment 1.

EXPERIMENT 2...