SP4: Hypotheses of Change

Despite the potential of computational tools to study language change at a large scale, there is a considerable gap between the type of answers these tools currently provide and the research questions that interest (historical) linguists. So far, computational models have mainly tried to detect whether semantic change occurred (e.g., broadcast once denoted spreading seeds in a field, and now denotes the spread of electromagnetic waves or ideas). In contrast, linguistic research focuses on the question of how and why words change their meaning, by detecting regularities in these changes and classifying them into distinct categories. Linguists and philologists have proposed several categories of semantic change [Bréal, 1897, Bloomfield, 1933, Ullmann, 1962, Blank and Koch, 1999, among others], for example: change in the scope of meaning, either broadening (bird: ‘young bird’ → ‘bird’) or narrowing (girl: ‘child’ → ‘a female child’), change in connotation, either amelioration (knight: ‘servant’ → ‘nobleman’) or pejoration (silly: ‘happy’ → ‘stupid’), and metaphorical change (kill: ‘execute’ → ‘terminate’). It therefore seems that detection-oriented models can offer very little insight into these classification-oriented linguistic questions.
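
To illustrate the distinction, the short sketch below (our illustration, not an existing implementation; the `ChangeType` name is hypothetical) encodes this classic taxonomy as the label set a classification-oriented model would predict, rather than the binary changed/unchanged decision of detection-oriented models.

```python
# A sketch of the classic taxonomy of semantic change as a label set
# a classification-oriented model could target (names are our own).
from enum import Enum

class ChangeType(Enum):
    BROADENING = "broadening"      # bird: 'young bird' -> 'bird'
    NARROWING = "narrowing"        # girl: 'child' -> 'a female child'
    AMELIORATION = "amelioration"  # knight: 'servant' -> 'nobleman'
    PEJORATION = "pejoration"      # silly: 'happy' -> 'stupid'
    METAPHOR = "metaphorical"      # kill: 'execute' -> 'terminate'

# A detection-oriented model answers only "did 'broadcast' change?";
# a classification-oriented model would also assign a ChangeType.
```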

Further, there have been some attempts at testing hypotheses for laws of change which were proposed more than a century ago, as well as at devising new laws based on empirical corpus evidence. Xu and Kemp [2015] focus on two incompatible hypotheses: Bréal [1897]’s law of differentiation (where near-synonyms are expected to diverge across time) and Stern [1921]’s law of parallel change (where words sharing related meanings tend to move semantically in the same way), and showed quantitatively, for English, that Stern’s law of parallel change is better supported by the evidence than Bréal’s. Other examples include Dubossarsky et al. [2015]’s law of prototypicality, which builds on earlier small-scale evidence [Geeraerts, 1997] and shows that a word’s relation to the core prototypical meaning of its semantic category is crucial with respect to diachronic semantic change. Eger and Mehler [2016] postulate and show that semantic change tends to behave linearly in English, German and Latin. Perhaps the best-known examples of such work within NLP are the two laws of Hamilton et al. [2016a]: conformity (frequency is negatively correlated with semantic change) and innovation (polysemy is positively correlated with semantic change) – conclusions later refuted by Dubossarsky et al. [2017].
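
As an illustration of how such a law can be operationalized, the sketch below (our own, with hypothetical names such as `test_conformity`; it is not the exact procedure of Hamilton et al. [2016a]) approximates a word’s rate of change as the cosine distance between its vectors in two aligned time bins, and tests the law of conformity as a Spearman correlation between log frequency and that rate, which the law predicts to be negative.

```python
# A minimal sketch of testing the 'law of conformity' (frequency vs. change).
# Assumes aligned diachronic embeddings and frequency counts are already
# available; all variable and function names here are illustrative.
import numpy as np
from scipy.stats import spearmanr

def cosine_distance(u, v):
    """Cosine distance between two word vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def change_scores(emb_t1, emb_t2):
    """Semantic change per word: cosine distance between its vectors
    in two (already aligned) time bins."""
    shared = emb_t1.keys() & emb_t2.keys()
    return {w: cosine_distance(emb_t1[w], emb_t2[w]) for w in shared}

def test_conformity(emb_t1, emb_t2, freq):
    """Spearman correlation between log frequency and rate of change.
    The law of conformity predicts a negative correlation."""
    change = change_scores(emb_t1, emb_t2)
    words = [w for w in change if w in freq]
    rho, p = spearmanr([np.log(freq[w]) for w in words],
                       [change[w] for w in words])
    return rho, p

# Toy usage: random vectors stand in for real diachronic embeddings.
rng = np.random.default_rng(0)
vocab = ["gay", "broadcast", "cell", "awful"]
emb_1900 = {w: rng.normal(size=50) for w in vocab}
emb_2000 = {w: rng.normal(size=50) for w in vocab}
freq = {w: rng.integers(10, 10_000) for w in vocab}
print(test_conformity(emb_1900, emb_2000, freq))
```

The same scaffolding would test the law of innovation by correlating a polysemy measure, rather than frequency, with the change scores.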

With a few exceptions, such as Uban et al. [2019] and Frossard et al. [2020], current NLP methods study one language at a time (and mainly English). At the same time, LSC, being a universal and ubiquitous phenomenon, should be studied using a multitude of data sources, and across countries, regions, languages, and social groups. This will allow for contrasting and comparing studies of language variation and change across both literal and figurative borders. The experience we have gained in recent years in using computational methods for LSC, together with the availability of historical corpora in many languages, would allow exactly this: a large-scale multilingual investigation of semantic change and variation. This cross-lingual endeavour will open new possibilities for a comparative view of semantic change typology, research into the effects of language contact (e.g., SV texta ‘write by hand/subtitle’, which gained the ‘send a text message’ sense simply because of its resemblance to EN ‘to text’), and revisiting existing semantic change hypotheses currently reported mainly for English.

Once impossible because of the unavailability of proper test sets and data, studying laws of change on a large scale and in several languages is now within our grasp: simulated LSC [Cook and Stevenson, 2010, Kulkarni et al., 2015, Rosenfeld and Erk, 2018, Dubossarsky et al., 2019, Shoemark et al., 2019] enables more precise evaluation of models on larger amounts of data particularly tailored to specific types of change and hypotheses. By finding good and cheaper methods for annotation (SP3), we can study and confirm many of these hypotheses more concretely and at a much larger scale.
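
As one concrete example of how simulated LSC works, the sketch below (our own simplification; the cited papers differ in their exact procedures, and all names here are illustrative) injects artificial change into a diachronic corpus by replacing a growing fraction of a ‘donor’ word’s occurrences with a target word, so that the target measurably acquires a new sense at a known, controllable rate.

```python
# A minimal sketch of simulating semantic change in a corpus, in the spirit
# of the synthetic-evaluation work cited above (exact procedures differ per
# paper; function and parameter names here are our own).
import random

def inject_change(corpus_by_bin, target, donor, rates):
    """Simulate gradual change: in each time bin, replace a growing
    fraction of occurrences of `donor` with `target`, so that `target`
    artificially acquires the donor's sense over time.

    corpus_by_bin: list of token lists, one per time bin (oldest first)
    rates: per-bin replacement probabilities, e.g. [0.0, 0.3, 0.6, 1.0]
    """
    simulated = []
    for tokens, rate in zip(corpus_by_bin, rates):
        simulated.append([
            target if tok == donor and random.random() < rate else tok
            for tok in tokens
        ])
    return simulated

# Toy usage: 'apple_' gradually takes over the contexts of 'banana',
# creating a pseudoword whose time of change is known by construction.
bins = [["i", "ate", "a", "banana", "today"] for _ in range(4)]
print(inject_change(bins, target="apple_", donor="banana",
                    rates=[0.0, 0.3, 0.6, 1.0]))
```

Because the time and rate of the injected change are known by construction, detection models can be scored precisely against this ground truth.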

What will we do? We will develop new methods to encode richer and finer-grained semantic information into models of semantic change, which is critical for the classification of different types of change. This will allow the detection of previously defined categories of semantic change on a large scale within SP4, but also the possible discovery of novel categories which may be hard to detect using traditional approaches in linguistics (which by necessity cover a smaller range of data and languages). Importantly, combining the questions of whether and how semantic change occurs is the first step towards addressing, at a large scale, the deeper question of why words change their meaning. We will operationalize and systematically revisit previous ‘laws’ of semantic change in several languages, so that future research can be grounded in sounder starting hypotheses: if we know, for example, that Bréal was correct only to a certain extent, it is important not to base methods and ground-truth creation on features such as near-synonyms diverging across time.
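
As a minimal example of such an operationalization (a sketch under our own assumptions, not a finished method; all names are hypothetical), Bréal’s law of differentiation can be tested by tracking the similarity of near-synonym pairs across aligned time bins: differentiation predicts a falling trajectory, while Stern’s parallel change predicts a flat or rising one.

```python
# A sketch of operationalizing Bréal's law of differentiation:
# do near-synonyms drift apart over time? Assumes a list of aligned
# per-bin embedding dicts; names and threshold are our own assumptions.
import numpy as np

def cosine_sim(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def synonym_trajectory(embeddings_by_bin, w1, w2):
    """Similarity of a near-synonym pair in each time bin.
    A consistently decreasing trajectory supports differentiation;
    a flat or rising one speaks for Stern's parallel change."""
    return [cosine_sim(emb[w1], emb[w2])
            for emb in embeddings_by_bin
            if w1 in emb and w2 in emb]

def differentiated(trajectory, threshold=-0.05):
    """Crude test: fit a line to the trajectory and call the pair
    'differentiated' if the slope is clearly negative
    (the threshold value is an arbitrary assumption)."""
    slope = np.polyfit(range(len(trajectory)), trajectory, 1)[0]
    return slope < threshold

# Toy usage with random stand-in vectors for five time bins:
rng = np.random.default_rng(1)
bins = [{"car": rng.normal(size=50), "automobile": rng.normal(size=50)}
        for _ in range(5)]
traj = synonym_trajectory(bins, "car", "automobile")
print(traj, differentiated(traj))
```

Aggregated over many synonym pairs and several languages, such trajectories would show to what extent, and where, each law actually holds.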
