Chapter 3.
(branching) phonological computation based on phonetic features alone.
Furthermore, the principles of phonological computation may be radically different from the rest of the computational principles (Bromberger and Halle 1991; Halle 1995; Harley and Noyer 1999; Pinker and Jackendoff 2005).
However, the task faced by the phonological component is not merely to list the phonetic properties of the given lexical items (they are already listed in the lexicon!), but to decide whether a sequence of phonetic properties of lexical items, that is, the phonetic representation of the utterance, is an expression of a language. Alec Marantz (2005, 3.1) shows that we can construct "word salads" with English words but with a head-final structure as in Japanese (*Man the book a women those to given has). It will not be recognized as an English sentence precisely because it violates structural conditions.
Therefore, a phonological processor cannot process even phonetic information unless it is supplied with categorial information; hence the need for branching at a certain stage of computation. The grammar of the language assigns a structure description to the phonetic representation of an utterance to generate the phonological form. Central to this approach is the idea of a syntax-governed "phonological phrase" that enters into the possible phonological interpretation of a string (Chomsky and Halle 1968, 511; Nespor 2001). As we will see in chapter 5, the current assumption is that the same syntactic object, the phase, is transferred to the two interfaces.
So, the basic idea is that if you know that a given string is, say, an English sentence, then you cannot fail to attach some semantic interpretation to it. It is not ruled out that the interpretation you attach holds the sentence to be gibberish. It follows that the conception of a phonological form as a representation of "noise" is without any clear sense in the grammatical framework within which we are working. However, this does not rule out the possibility of an isolated LF-processor (Chomsky 1997). It could be that the mind, beginning with the lexicon, generates an abstract and semantically interpretable expression, yet no articulation takes place. According to Chomsky (2001b), most of language use is in fact geared to unarticulated "inner thought." In that sense the sound part could be viewed as "ancillary" to the basic language system, as we will see.
The significance of the notion of syntax-governed "phonological phrase" may be brought out by considering an example in which both the licit and the illicit sequences are meaningless in Searle's sense. This
will enable us to control for the distinction between (the grammatical) concept of interpretability and the thick concept of semantics Searle is working with. Consider the classic collection of lexical items green, ideas, sleep, colorless, and furiously. Given categorial information regarding the features ±N(ominal) and ±V(erbal) and keeping to G-B, the computational system will decide that the string colorless green ideas sleep furiously is fine since it satisfies, say, X-bar theory, C-selection, number agreement, and so on. But *sleep green colorless furiously ideas is not fine although, in Searle's thick sense of meaning, both the strings are meaningless. The former string is accepted precisely because it is interpretable.
The point is illustrated by the fact that anyone who listens to the string for the first time tries to attach some meaning to it by stretching the imagination, invoking metaphors, and the like. This is true of most of the popular examples of "deviant" strings such as this stone is thinking of Vienna, John frightens sincerity, they perform their leisure with diligence, the unbearable lightness of being, and so on. In each case, some interpretation seems to be available after a little thought; in fact, some of them become literary masterpieces. As more information regarding S-selection, binding, and so forth is invoked, richer interpretative possibilities come into view. To sum up: a phonological representation, unlike an acoustic representation, is computationally connected with meaning, the connection being largely indirect in the case of natural languages. For natural languages, then, the syntax-semantics divide, even if there is one, does not cut the joints of language in ways that logical theory demands.
This raises doubts regarding the applicability of the logical concept of syntax and the related concept of well-formedness in grammatical theory.
These pretheoretical notions do not play any significant role in grammatical theory since, whatever these are, the theory attempts to unpack them.
Except for serving some expository purposes, the widespread pretheoretical idea that grammatical theory partitions the class of strings into grammatical and ungrammatical subclasses is quite unnecessary (Chomsky 1993). Every theory, of course, will naturally partition strings in theory-internal terms.
As an aside, we might note that perhaps the global concept of acceptability of a string is the only one that matters for grammatical theory since this concept directly attaches to data. However, even this concept is suspect. The language system may pass strings that are not immediately acceptable to the native speaker, or are accepted on the wrong grounds; on the other hand, a speaker may accept strings that are rejected by the system. For example, complex structures with central embedding are
often difficult to accept: the nurse whom the cook whom the maid met saw heard the butler (Miller and McNeill 1969, 707). "Garden-path" sentences such as the horse raced past the barn fell are typically accepted because raced is wrongly identified as the main verb. The string no head injury is too trivial to ignore is taken to mean that no matter how trivial a head injury is, it should not be ignored; however, the language system interprets it as: no matter how trivial a head injury is, it should be ignored (Collins 2004, 518 n. 14). The child seems sleeping is accepted for further conversation even if the string is rejected by grammar due to the violation of the selectional properties of the verb seem.
These examples suggest that the concept of acceptability, being a global property of strings, is extremely thick. The acceptance or the rejection of a string might involve grammatical factors, contributions from nonlinguistic factors, performance factors, and so on. The divide between these factors is essentially theoretical in nature; the data do not wear the divides on their sleeves. The theory, in turn, attempts the best possible explanation of how, and which, lexical information is accessed and processed by which component of the mind. In this sense, the concept of lexical information is the only salient concept for language theory.
So far I have been arguing that the concept of PFR does not play any sensible role in grammatical theory. Similar remarks apply to the concept of SFR, the stands-for relationship, but from the opposite direction.
From what we know about the role of categorial information in grammar, it is not clear at all whether the properties of having a θ-role, having a Case, having Subject agreement, having the feature of an anaphor, and so on, belong to the SFR part or not, although all of these properties are progressively invoked and formally established by the grammar, as we saw in some detail. All we know is that the grammar specifies which formal requirements are met by lexical items in having these properties; just this much does not make the computations purely formal since many of these properties clearly have semantic significance. For example, to say that an element is an anaphor is to say that it is dependent on some other formally identified element for interpretation; in other words, an anaphor does not by itself "stand for" anything. An r-expression, on the other hand, may stand for some object (in a model). Binding theory does not use the concept of SFR, but it takes interpretations very close to this notion.
The property of having a θ-role is particularly interesting from this point of view (for more, see Hinzen 2006, 152ff.). Higginbotham (1989) thinks of the thematic structure of a sentence as a "partial determination
of (its) meaning." Once a θ-role is assigned, the concerned item cannot fail to stand for something (agent, patient, experiencer, theme, goal, whatever), even if a θ-role is assigned to a trace, an empty element. We saw that an empty element is always viewed as a dependent element linked to an antecedent. A θ-role is thus assigned to a chain consisting of the dependent element and its antecedent, which ultimately is a lexical element, which, in turn, will stand for something, even if the "stands-for"
relation here may not amount to the full referential relation. What an element with a θ-role stands for will depend on accessing further information from the concerned lexical item by other cognitive capacities of the mind that are designed to process that information.
I believe that this point about the semantic character of θ-roles is illustrated by the following example from Jackendoff 1983, 207, although Jackendoff himself does not use this example to that end. There is something in the thematic structure of the verb grow that allows the pair every oak grew out of an acorn and every acorn grew into an oak, but that does not allow the pair *an oak grew out of every acorn and *an acorn grew into every oak. The grammatical phenomenon of quantifier scope seems to be sensitive to the semantic phenomenon regarding which QP has what θ-role. We have two options here: either the grammar is enlarged to accommodate (full-blooded) semantic information, or we think of, say, an acorn grew into every oak as gibberish passed by the grammar (compare: a bird flew into every house), but rejected by the concept of growth. I am working under the second option; apparently, Jackendoff is working under the first.
Thus, with respect to the information processed by grammar, as currently conceived, not only is the general nonphonological interpretability of a string determined; some cues about how it is to be interpreted are partly determined as well. This is one way of thinking that grammar progressively executes parts of SFR. Since there is no natural joint in grammar where the execution of PFR ends and the execution of SFR begins, these notions have no real meaning in grammatical theory. Hence, Searle-type parables are not likely to apply to this theory.
I am not suggesting that all formal properties have semantic significance. Structural Case, as noted, has no semantic significance. Thematic roles and binding properties, on the other hand, have semantic significance. Some agreement features, such as the number feature of nouns, have semantic significance, while other agreement features, such as the number feature of verbs, have no semantic significance. In MP, these variations are handled in terms of legibility conditions at the LF-interface. Roughly,
features that enter into "understanding" are brought to the interface; the rest are systematically wiped out during computation. There is no prior syntax-semantics division here. There are lexical features and there are legibility conditions; together they generate interpretable expressions.
This part of the computation is often called "N → LF computation" (narrow syntax), meaning the part of the computation that begins with a numeration N of lexical items and generates LF phrase markers; there are no intermediate stages. The present point again is that there is nothing like separate PFR- and SFR-stages in the computational process.
We recall that, in the Government-Binding framework (G-B), there indeed was a syntax-semantics divide between computation up to s-structures and computations from s-structures to LF. But that distinction, as we saw, was entirely internal to theory and is no longer maintained in more recent conceptions of grammar. By parity of enterprise, therefore, the traditional syntax-semantics divide ought to be viewed as an artifact of (naive, commonsensical) theory rather than as a fact about languages.
The main thrust of the preceding way of looking at the grammatical system is that the traditional syntax-semantics divide needs to be given up since, otherwise, it is difficult to make sense of the claim that LF is the level where semantic (namely, nonphonetic) information begins to cluster.
I return to the point in a moment.
Are we overemphasizing the semantic nature of LF in the picture just sketched? As noted, the concept of semantics allegedly captured in LF is narrowly defined in terms of a list of phenomena such as quantifier scope, pronoun binding, variable binding, adverbial modification, and the like.
It may be argued that since these things have nothing to do with SFRs, as Searle and others construe them, I have overstretched the idea that grammatical theory already executes parts of SFRs. I think this argument basically raises terminological issues. Let me explain.
The only problem I have been discussing is whether grammatical theory gives an account of some nonphonetic understanding of a string.
If it does, then, by Searle's definition, we cannot equate grammar with a system executing PFRs. If, on the other hand, execution of SFRs is viewed as the only legitimate semantic enterprise, then grammatical theory certainly does not contain semantics. But then, the nonphonetic understanding that a grammatical theory does capture escapes the PFR/SFR divide.
In other words, the PFR/SFR distinction does not apply to natural languages, if the phonetic/nonphonetic distinction is to apply to them. As a corollary, it follows that if "semantics" is viewed as a theoretical construct defined in terms of SFRs, then grammatical theory shows that this construct is (theoretically) dispensable.
3.3. LF and Logical Form
The preceding considerations suggest at most that, insofar as grammatical organization is concerned, there are fundamental differences between the structures of formal logic and natural languages; therefore, the structure of logical theory cannot be mimicked inside grammatical theory. Just this much leaves open the possibility that lessons from formal logic may be added to the output of grammar to construct a more comprehensive theory of language. More specifically, lessons from formal logic may be used for an enriched description of the semantic component of languages beyond the description reached at LF.
To understand this project, let me review what happens at LF. LF, we saw, has the following properties, among others. In the T-model, grammatical computation on a selection of lexical items branches at some point to generate two representations, PF and LF. PF is where "phonological and phonetic information is represented," and LF is where "interpretive-semantic information is represented" (Hornstein 1995, 5).
LF then represents nonphonological information; specifically, it represents interpretive-semantic information. At LF, all grammatically determined ambiguities, including scope ambiguities, are segregated, and all arguments, including NP-trace, are assigned thematic roles, among other things. Specifically, it is natural to think of the thematic structure of a sentence as a "partial determination of (its) meaning" (Higginbotham 1989), as noted. The correlation between PF and LF thus captures the traditional conception of language as a system of sound-meaning connections (Chomsky 1995b).
Furthermore, the principles that determine LF-structure include some of the "most fundamental principles of semantics" (Hornstein 1995, 7).
As we saw, the following principles, among others, apply at LF: Full Interpretation (a structure is licensed iff each element in the structure has an interpretation), the Principles of Binding (such as that an anaphor must have an antecedent in the local domain), the θ-criterion (every argument must have a thematic role), and the Mapping Principle (discourse-linked arguments must be interpreted outside the VP-shell). As we saw, each of these is semantically motivated. In sum, not only is semantic information represented at LF, but its structure is determined by principles of semantics.
As Chomsky (1991b, 38) put it, "Much of the fruitful inquiry and debate over what is called 'the semantics of natural language' will come to be understood as really about the properties of a certain level of syntactic representation-call it LF."
It seems natural, then, to view LF itself as a level of semantic representation, and the theory of LF as a semantic theory. More generally, once the requirement of sound-meaning correlation has been met at LF, we may identify the scope of language theory with that of grammatical theory, and view other conceptions of semantics, if viable, as falling beyond language theory per se. Pushing it, we could even say that "semantics" is whatever is done at the nonphonetic end of grammatical theory.
The last point needs emphasis because it is not ruled out that developments at this end of grammatical theory might expand beyond the current conception(s) of LF to include phenomena that fell earlier under other conceptions of meaning. For example, semantic features such as ±animate or pragmatic features such as ±definiteness may play a role in grammatical computation (Ormazabal 2000; Diesing 1992). More radically and from an opposite direction, grammatical theory may not even require a level of representation, LF or SEM or whatever; it may consist of just computation on the lexicon and interface conditions (Chomsky 2005), as we will see in chapter 5. The basic point is that "semantics" is wherever the internal drive of grammatical theory leads us at this end; residual conceptions of meaning fall under the study of other faculties of the mind that, together with the faculty of language, lead to the (extremely complex) phenomenon of language use.
However, Hornstein (1995, 6) seems to be saying something slightly different when he says that LF is "where all grammatically determined information that is relevant to [semantic] interpretation is consolidated," that is, LF "provides the requisite compositional structure" for executing "interpretive procedures" concerning various facts taken to be "characteristic of meaning." These facts include relative quantifier scope, scope of negation, modality, opacity, pronoun binding, variable binding, focus and presupposition structure, adverbial modification, and so forth. In this picture, although these things are "done off the LF-phrase marker," LF itself is not viewed straightforwardly as a level of semantic interpretation, but as something that provides the necessary scaffolding for other nongrammatical theories to execute "interpretive procedures." Chomsky (1991b, 46) also said that LF is "associated with" semantic interpretation, without saying explicitly that it is a level of semantic interpretation, even if LF has some of the properties usually described in model theory. We
may conclude that, according to these authors, although LF is certainly semantically sensitive, it is best viewed as preparing the ground for (post-LF) semantic interpretation; in that sense, LF is missing something, semantically speaking.2 To focus on what is missing, it is well known that grammatical theory self-consciously stays away from things like conceptual roles, background beliefs, speaker intentions, cultural and historical expectations, and the like (Chomsky and Lasnik 1977, 428; Chomsky 2000d, 26). It is unlikely then that when Chomsky and Hornstein think of LF as missing something, they have these things in mind.
Richard Montague (1974, 188), among many others, raised a much narrower issue. Montague held that the "construction of a theory of truth is the basic goal of serious syntax and semantics." "Developments emanating from the Massachusetts Institute of Technology," he immediately added, offered "little promise towards that end." Since the theory of LF does not contain a subtheory of truth, Montague's complaint is factually correct.
Moreover, Montague's objection seems to fall in place with what we have seen so far. As noted, Hornstein thought that facts such as relative quantifier scope, scope of negation, modality, opacity, and so on fall at the borderline of LF and other "interpretive procedures." Beginning with the work of Gottlob Frege and Bertrand Russell, most of these facts have by now been addressed by logic-based semantic theories of the type Montague advocated.3 Also, these theories are typically unconcerned about conceptual roles, background beliefs, speaker intentions, and the like. These are abstract theories formulated in logical notation to explore structural conditions on meaning. In that sense, they touch the domain of the theory of LF. However, since these theories are truth theories without question, they differ sharply from the theory of LF.
If the (semantic) scope of grammatical theory is to expand at all, it seems natural that the first thing to add to grammatical theory is some version of truth theory.4 So, we can envisage an enterprise in which versions of truth theory are attached suitably to the theory of LF. In general, the envisaged project basically amounts to establishing relations, if any, between the grammatical concept of LF and the philosophical concept of logical form.
The philosophical project of logical form has two parts: postulation of canonical expressions to capture the meaning of expressions of natural languages, and attachment of some version of semantic metatheory, which includes a model theory, to the canonical expressions. Although
Bertrand Russell may be credited with initiating the philosophical notion of logical form in connection with his landmark theory of definite descriptions, the original theory did not contain any explicit metatheory. So his theory of descriptions, especially his account of scope distinction, offered a way of studying just the first part of the project, as we saw in section 2.1.
From what we saw, Russell's project looked questionable because there was no clear justification as to why the notation of predicate logic is to be imposed on expressions of natural language. We saw that our linguistic intuitions suggest that certain sentences, say, The king of France is not wise, are structurally ambiguous. We can even give informal paraphrases of those intuitions in English, not surprisingly. It was unclear what more is accomplished by the imposition of logical notation. All that the notational scheme of (4) and (5) did was to represent those intuitions. (4) and (5) thus just represent data; they do not give an account of the data. Furthermore, we saw that the task of representing scope distinctions in a canonical notation can be accomplished within the theory of LF with full generality and explanatory power. With the theory of LF in hand, especially in the minimalist version, we are in a position to substantiate these initial impressions.
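For readers without section 2.1 at hand, the two Russellian readings can be sketched in standard predicate-logic notation (an illustrative reconstruction, not the book's own (4) and (5), with K for "is a king of France" and W for "is wise"):

```latex
% Negation takes wide scope over the description: it is false that
% there is exactly one king of France who is wise
\neg\, \exists x \bigl[\, Kx \wedge \forall y\,(Ky \rightarrow y = x) \wedge Wx \,\bigr]

% Negation takes narrow scope: there is exactly one king of France,
% and he is not wise
\exists x \bigl[\, Kx \wedge \forall y\,(Ky \rightarrow y = x) \wedge \neg Wx \,\bigr]
```

On the first reading the sentence comes out true if France has no king; on the second it comes out false, which is exactly the scope distinction the informal English paraphrases already convey.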
Consider again (6), mentioned in the last chapter.
(6) Every boy danced with a girl.
In the notation of restricted quantification, the ambiguity of (6) can be represented as in (76) and (77).
(76) (every x: boy x)((a y: girl y)(x danced with y))
(77) (a y: girl y)((every x: boy x)(x danced with y))
In this representation, the indefinite article a in the phrase a girl is viewed as an existential quantifier. Following the work of Irene Heim and Peter Geach, Molly Diesing (1992, 7) writes the logical form of (78) as (79).
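The scope difference can be made fully explicit by unpacking the restricted quantifiers of (76) and (77) into unrestricted first-order notation (an illustrative gloss, not part of Diesing's text):

```latex
% (76), surface scope: for every boy there is some girl or other
% whom he danced with
\forall x \bigl[\, \mathrm{boy}(x) \rightarrow \exists y\, \bigl( \mathrm{girl}(y) \wedge \mathrm{danced\text{-}with}(x, y) \bigr) \,\bigr]

% (77), inverse scope: some particular girl is such that every boy
% danced with her
\exists y \bigl[\, \mathrm{girl}(y) \wedge \forall x\, \bigl( \mathrm{boy}(x) \rightarrow \mathrm{danced\text{-}with}(x, y) \bigr) \,\bigr]
```

Note that (77) entails (76) but not conversely: if one girl danced with every boy, then trivially every boy danced with some girl.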