In the previous post, I mentioned that there is a general consensus that UG has roughly the features described in GB. In the comments, Alex quotes Cedric Boeckx as follows and asks if Cedric is “a climate change denier.”
I think that minimalist guidelines suggest an architecture of grammar that is more plausible biologically speaking than a fully specified, highly specific UG – especially considering the very little time nature had to evolve this remarkable ability that defines our species. If syntax is at the heart of what had to evolve de novo, syntactic parameters would have to have been part of this very late evolutionary addition. Although I confess that our intuitions pertaining to what could have evolved very rapidly are not as robust as one would like, I think that Darwin’s Problem (the logical problem of language evolution) becomes very hard to approach if a GB-style architecture is assumed.
The answer is no, he is not (but thanks for asking). I’ll explain why, but this will involve rehearsing material I’ve touched upon elsewhere, so if you feel you already know the answer, please feel free to go off and do something more worthwhile.
My friends in physics (remember, I am a card-carrying hyper-envier) make a distinction between effective and fundamental theories. Effective theories are those that are phenomenologically pretty accurate. They are also the explananda for fundamental theories. Using this terminology, GB is an effective theory, and minimalism aspires to develop a fundamental theory to explain GB “phenomena.” Now, ‘phenomena’ is a technical term and I am using it in the sense articulated in Bogen and Woodward (here). Phenomena are well-grounded, significant generalizations that form the real data for theoretical explanation. Phenomena are often also referred to as ‘effects.’ Examples in physics include the Gas Laws, the Bernoulli effect, black body radiation, Doppler effects, the photoelectric effect, etc. In linguistics these include island effects, Principle A, B and C effects, weak and strong crossover effects, the PRO theorem, Superiority effects, etc. GB theory can be seen as a fairly elaborate compendium of these. Thus, the various modules within GB elaborate a series of well-massaged generalizations that are largely accurate phenomenological descriptions of UG. I have at times termed these ‘Laws of Grammar’ (said plangently, you can sound serious, grown-up, and self-important) to suggest that those with minimalist aspirations should take these as targets of explanation. Thus, in the requisite sense, GB (and its cousins described in the last post) can serve as an effective theory, one whose generalizations a minimalist account, a fundamental theory, should aim to explain.
I hope it is clear how this all relates to the Cedric quote above, but if not, here’s the relevance. Cedric rightly observes that if one is interested in evolutionary accounts then GB cannot be the fundamental theory of linguistic competence. It just appears too complex: all that internal modularity (case and theta and control and movement and phrase structure), all those different kinds of locality conditions (binding domains and subjacency/phase and minimality and phrasal domains of a head and government), all those different primitives (case assigners, case receivers, theta markers, arguments, anaphors, bound pronouns, r-expressions, antecedents, etc., etc., etc.). Add to this that this thing popped out in such a short time, and there really seems to be no hope for a semi-reasonable (even just-so) story. So, GB cannot be fundamental. BTW, I am pretty sure that I have interpreted Cedric correctly here, for we have discussed this a lot over the last five to ten years on a pretty regular basis.
Given the distinction of GB as effective theory and MP as aiming to develop a fundamental theory, how should a thoroughly modern minimalist proceed? Well, as I mentioned before (here), one model is Chomsky’s unification of Ross’s islands via subjacency. What Chomsky did was (i) treat Ross’s descriptions as effective and (ii) propose how to derive these on empirically, theoretically, and computationally more natural grounds. Go back and carefully read ‘On Wh-Movement’ and you’ll see how these various strands combine in his (to my taste buds) rather beautiful account. Taking this as a model, a minimalist theory should aspire to the same kind of unification. However, this time it will be a lot harder, for two main reasons.
First, what MP aspires to unify have been thought to be fundamentally different from “the earliest days of generative grammar” (two points and a bonus question to anyone who identifies the source of this quote). Unifying movement, binding and control goes against the distinction between movement and construal that has been a fundamental part of every generative approach to grammar since Aspects (and before, actually), as has been the distinction between phrase structure and movement. However, much minimalist work over the last 20 years can be seen as chipping away at the differences: Chomsky’s 1993 unification of case as a species of movement or Probe-Goal licensing (PGL), the assimilation of control to a species of movement (moi) or PGL (Landau), reflexive licensing as a species of movement (Idsardi and Lidz, moi) or PGL (Reuland), the collapsing of phrase structure and movement as species of E/I-merge, the reduction of Superiority effects to movement via minimality. All of these are steps in reducing the internal modularity of GB and erasing the distinctions between the various kinds of relationships described so well in GB. This unification, if it can be pulled off (and showing that it might be has been, IMO, the distinctive contribution of MP), would do for GB what Chomsky did for islands, and the resultant theory would have a decent claim to being fundamental.
The second hurdle will be articulating some notion of computational complexity that makes sense. In ‘On Wh-Movement,’ Chomsky tried to suggest some computational advantages of certain kinds of locality considerations. Whatever his success, the problem of finding reasonable third factor features with implications for linguistic coding is far more daunting, as I’ve discussed in other posts. The right notion, I have suggested elsewhere, will reflect the actual design features of the systems that FL interacts with and that use it. Sadly, we know relatively little about interface properties (especially CI), and relatively little about how FL would fit in with other cognitive modules. We know a bit more about the systems that use FL, and there have been some non-trivial results concerning what kinds of considerations matter. As I have discussed this in other posts, I will not burden you with a rehash (see here and here). Consequently, whatever is proposed is very speculative, though speculation is to be encouraged, for the problem is interesting and theoretically significant. This said, it will be very hard, and we should appreciate that.
So, is Cedric a denier? Nope. He accepts the “laws of grammar” as articulated in GB as more or less phenomenologically correct. Is his strategy rational? Yup. The aim should be to unify these diverse laws in terms of more fundamental constructs and principles. Are people who quote Cedric to “épater les Norberts” doing the same thing? Not if they are UG deniers, and not if their work does not aim to explain the phenomena/effects that GB describes. These individuals are akin to climate change deniers, for their work has all the virtues of any research that abstracts away from the central facts of the matter.