Monotonicity of meaning: between simplicity and expressiveness
Dariusz Kalociński (University of Warsaw)
Monotonicity is an abstract property of meaning that has been attested in several semantic domains, including adjectives, quantifiers and modals. The meaning of a signal is upward (downward) monotone if the signal refers to all upper (lower) bounds of each of its referents with respect to an underlying ordering. It has been shown that monotonicity arises via artificial iterated learning
with pragmatic agents biased towards simplicity and expressiveness (Carcassi et al., 2018). Monotone concepts have
also been shown to be easier to learn for humans (Chemla et al., 2019) and for neural networks (Steinert-Threlkeld & Szymanik, forthcoming).
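For concreteness, the informal definition can be spelled out as follows; the notation, with a scale $(S, \leq)$ and a denotation $\llbracket w \rrbracket \subseteq S$ for a signal $w$, is mine and only serves to fix ideas:
\[
  w \text{ is upward monotone} \iff \forall x \in \llbracket w \rrbracket\ \forall y \in S\ \big(x \leq y \rightarrow y \in \llbracket w \rrbracket\big),
\]
and downward monotonicity is the dual condition with $y \leq x$ in the antecedent.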
In my talk, I will attempt to explain monotonicity in terms of a domain-general optimization principle that seeks to reduce
the communicative and cognitive costs associated with a language (Kemp & Regier, 2012). The communicative cost of a language is understood as the probability of confusing two random values from the scale; the cognitive cost is understood as change complexity (Aksentijevic & Gibson, 2012).
Optimization depends on the relative weight assigned to these two types of cost. For a wide range of possible divisions of labour between communication and cognition (including equal division), optimal languages turn out to be monotone. This shows that the tradeoff between simplicity and informativeness might be sufficient to explain
monotonicity. Moreover, the generality of the argument suggests that monotonicity might arise
at various timescales at which such optimization is viable. We back up this conclusion with initial
simulations based on a recent model of meaning coordination (Kalociński et al., 2018).
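To illustrate the kind of optimization at stake, the minimal sketch below enumerates all non-trivial binary meanings over a small linearly ordered scale and scores each by a weighted sum of two quantities: a confusion probability, taken here as the chance that two distinct random scale points receive the same label, and a rough proxy for change complexity, taken as the number of label switches along the scale. Both operationalizations, and the code itself, are simplifying assumptions of mine rather than the exact measures used in the talk or in the cited papers.

```python
from itertools import product
from math import comb

SCALE_SIZE = 7  # scale points 0..6, linearly ordered


def confusion_probability(meaning):
    """Chance that two distinct random scale points carry the same label
    (both inside or both outside the meaning) -- a stand-in for communicative cost."""
    inside = sum(meaning)
    outside = len(meaning) - inside
    return (comb(inside, 2) + comb(outside, 2)) / comb(len(meaning), 2)


def change_complexity(meaning):
    """Number of label switches along the ordered scale -- a rough proxy for
    the change complexity of Aksentijevic & Gibson (2012)."""
    return sum(meaning[i] != meaning[i + 1] for i in range(len(meaning) - 1))


def is_monotone(meaning):
    """True iff the meaning is upward or downward monotone (a threshold concept)."""
    pairs = list(zip(meaning, meaning[1:]))
    return all(a <= b for a, b in pairs) or all(a >= b for a, b in pairs)


def optimal_meanings(weight):
    """Meanings minimizing weight * communicative cost + (1 - weight) * cognitive cost;
    the trivial all-in / all-out meanings are excluded as inexpressive."""
    candidates = [m for m in product((0, 1), repeat=SCALE_SIZE)
                  if 0 < sum(m) < SCALE_SIZE]

    def cost(m):
        return (weight * confusion_probability(m)
                + (1 - weight) * change_complexity(m) / (SCALE_SIZE - 1))

    best = min(cost(m) for m in candidates)
    return [m for m in candidates if abs(cost(m) - best) < 1e-9]


if __name__ == "__main__":
    for w in (0.25, 0.5, 0.75):
        optima = optimal_meanings(w)
        print(f"weight on communication = {w}: {len(optima)} optimal meaning(s), "
              f"all monotone: {all(is_monotone(m) for m in optima)}")
```

For each of the tested weightings, every cost-minimizing meaning comes out as a threshold concept, hence monotone, which mirrors the claim above under these toy assumptions.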
- A. Aksentijevic and K. Gibson. Complexity equals change. Cognitive Systems Research, 15–16:1–16, 2012.
- F. Carcassi, M. Schouwstra, and S. Kirby. The evolution of scalar terms’ semantic structure. In 51st Annual
Meeting of the SLE (Book of Abstracts), page 478, 2018.
- E. Chemla, B. Buccola, and I. Dautriche. Connecting Content and Logical Words. Journal of Semantics, 2019.
- D. Kalociński, M. Mostowski, and N. Gierasimczuk. Interactive Semantic Alignment Model: Social Influence
and Local Transmission Bottleneck. Journal of Logic, Language and Information, 27(3):225–253, 2018.
- C. Kemp and T. Regier. Kinship Categories Across Languages Reflect General Communicative Principles.
Science, 336(6084):1049–1054, 2012.
- S. Steinert-Threlkeld and J. Szymanik. Learnability and semantic universals. Semantics & Pragmatics, forthcoming.