AMP 2017 Poster and Supplemental Material

This site contains all the supplementary materials for my poster “Soft typology arises from learning bias even with markedness hierarchies” presented at AMP 2017, NYU, 9/15/17.

The poster itself is available as a PDF via the title link above.

Below are several animation files illustrating the results of learning different languages with different constraint sets.

In actual typology, languages that allow [k p t] word-initially (in terms of voiceless stops) but none word-finally are quite common, while languages that allow only [p t] both word-initially and word-finally (i.e. "No-K" languages) are not observed. In the simulations below, the y-axis represents the harmonic difference (the difference of harmony scores) between the deletion and faithful candidates (e.g. H(/Vt/→[V]) − H(/Vt/→[Vt])) over an average run of learning. When a form reaches the blue line, it surfaces faithfully 50% of the time; once it reaches the red line, it surfaces faithfully 90% of the time. Forms shown in red are not given in the training data. As the animations show, with both Onset and NoCoda, /kV/→[kV] is learned before /Vp/→[Vp], even though /kV/→[kV] is absent from the training data.
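The 50% and 90% thresholds follow from the standard MaxEnt relation between harmonic difference and surface probability. Below is a minimal sketch of that relation, assuming a MaxEnt grammar with a two-candidate tableau; the constraint weights and violation vectors are illustrative only and are not taken from the poster.

```python
import math

def harmony(weights, violations):
    # Harmony = negative weighted sum of constraint violations
    return -sum(w * v for w, v in zip(weights, violations))

def p_faithful(h_faithful, h_deletion):
    # MaxEnt: probability of the faithful candidate in a two-candidate tableau
    return math.exp(h_faithful) / (math.exp(h_faithful) + math.exp(h_deletion))

# Hypothetical weights for [NoCoda, Max] (illustrative values)
weights = [2.0, 2.0]
h_faith = harmony(weights, [1, 0])  # /Vt/ -> [Vt] violates NoCoda
h_del   = harmony(weights, [0, 1])  # /Vt/ -> [V]  violates Max
print(round(p_faithful(h_faith, h_del), 2))  # equal harmonies -> 0.5
```

Under this relation, the faithful candidate surfaces 50% of the time when the two harmonies are equal (the blue line), and 90% of the time when its harmony exceeds the deletion candidate's by ln 9 ≈ 2.2 (the red line).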

 [_ p t]/[_ p t]   [k p t]/[_ _ _]
With Onset and NoCoda
With only NoCoda


For some of the code used in this project, see the Soft Typology Tool.

(Updated 12/4/17)