Comments

Thursday, October 11, 2012

The Gallistel-King Conjecture



Ever since Randy Gallistel came to UMD to give a series of lectures that eventually became his book (with Adam King) Memory and the Computational Brain, Bill Idsardi and I have been discussing his deliberately provocative thesis that current neuroscience has fundamentally misidentified basic brain architecture. The book argues that the prevailing conception of the brain, in which learning is understood as the rewiring of a plastic brain via changes in synaptic conductance ("what fires together wires together"), cannot be correct (connectionism is an expression of this neuronal worldview). The argument is elaborate, and I will leave a more detailed discussion of it for another post. However, what I want to very briefly mention here is a Gallistel and King (G&K) conjecture and some recent work that appears to bear on it. I say "appears" because, believe me, I am no expert (in fact, I am not even knowledgeable enough to be a novice), so readers should take what follows as essentially an impressionistic riff based on chapter 16 of G&K's book and a recent broadcast on Science Friday: What Your Genes Can Tell You About Your Memory. So with this caveat lector firmly before you, my conscience is clear and my riff begins.

What is G&K's conjecture? They argue in the book that brains must have the architecture of classical computers. Concretely, this means that brains must be able to put things into memory, retrieve things from memory, and compute over the things so stored. As computation requires applying functions to arguments, we need a way of coding variables and operations that bind and value them. These are all operations characteristic of a classical machine with a Turing-von Neumann (TvN) architecture. As most of cognition consists of retrieving symbols from memory, computing over those symbols, and storing the results, the brain, the organ that secretes cognition, must have a TvN architecture. That is the conclusion. The argument is detailed, very pedagogical, and a must-read.
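The three capacities just mentioned can be made concrete with a toy sketch. This is my own illustration, not G&K's formalism: a Python dictionary stands in for an addressable memory, and `compute` binds a function's variables to values fetched by address.

```python
# Toy sketch of the three TvN capacities: write to addressable memory,
# read from it, and compute over the retrieved symbols.

memory = {}                       # addressable store: address -> symbol

def store(addr, symbol):
    memory[addr] = symbol         # put a thing into memory

def retrieve(addr):
    return memory[addr]           # fetch it back, unchanged

def compute(f, arg_addrs, out_addr):
    # Bind the function's variables to values fetched from memory,
    # apply the function, and store the result at a fresh address.
    args = [retrieve(a) for a in arg_addrs]
    store(out_addr, f(*args))

store("x", 3)
store("y", 4)
compute(lambda a, b: a + b, ["x", "y"], "z")
retrieve("z")   # -> 7
```

The point of the sketch is only that memory here is symbolic and addressed, not a pattern of connection weights: the same `compute` works for any function and any addresses.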

G&K contrast the TvN conception with the currently common view of brains. On this view, brains do not have a TvN structure but are more like neural nets, which, G&K argue, are an inadequate physical basis for cognitive computation, and so the view must be wrong. Theirs is very much a minority position. Why? Because brains don't "look like" computers and do look like neural nets. OK, it's probably more complicated than that, but that is certainly part of it, as anyone who has had the misfortune to hear a connectionist talk knows. So if G&K are right that the common view is wrong and the brain has TvN structure, how does the brain do this? The G&K conjecture is that the requisite structure already exists within the neuron, in its molecular structure (169):

…the genome contains complex data structures, just as does the memory of a computer, and they are encoded in both cases through the use of the same architecture employed in the same way: an addressable memory in which many of the memories addressed themselves generate probes for addresses. [There is a] close parallel between the functional structure of computer memory and the functional structure of the molecular machinery that carries inherited information forward in time for use in the construction and maintenance of organic structure…

Their conjecture is that the physical platform for the kinds of computational mechanisms we need to understand cognition exploits this same molecular structure. DNA, RNA, and proteins constitute (part of) the cognitive code in addition to being the chemical realization of the genetic code.
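The indirection in the quoted passage, that "many of the memories addressed themselves generate probes for addresses," is just pointer chasing. Here is a minimal sketch of my own (the addresses and symbols are made up for illustration): each memory entry carries both a symbol and the address of the next entry, so reading one memory supplies the probe for the next.

```python
# Addressable memory in which entries themselves generate the next probe:
# each entry is (symbol, address-of-next-entry), i.e. a linked structure.

memory = {
    10: ("the", 11),
    11: ("cat", 12),
    12: ("sat", None),   # None marks the end of the chain
}

def read_chain(addr):
    symbols = []
    while addr is not None:
        symbol, addr = memory[addr]   # the entry itself supplies the next probe
        symbols.append(symbol)
    return symbols

read_chain(10)   # -> ["the", "cat", "sat"]
```

This is exactly how computer memory supports complex data structures, and it is the functional parallel G&K draw to the molecular machinery that carries inherited information forward in time.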

G&K are very careful to moot this suggestion with all the caution it deserves. In fact, to call it a 'suggestion' is already too grandiose; let's say a hunch or a guess. This is where the Science Friday segment comes in. It appears that research is discovering that memories are in fact molecularly coded. There are epigenetic mechanisms that code specific memories in various brain regions by laying down proteins of the right kind, using basically the same DNA mechanisms that code for genetic inheritance and development. Combined with the G&K conjecture, this discovery might be the tip of a pretty exciting cognitive-neuroscience iceberg and has the potential to overturn a good deal of conventional wisdom, as the scientists interviewed hint.

This has serious implications for linguists, if true (and recall this is all an impressionistic riff). There exists a very bad argument that the representations that linguists know and love cannot be “psychologically real” because they are not implementable in brain architecture, i.e. neural nets.  In my view, this has always been a weak argument, but one that seems to have quite a bit of suasive power to non-generative grammarians.  If the G&K conjecture is on the right track, however, there is no reason to think that brains cannot code for the kinds of representations we regularly use to account for grammatical competence.  The question will shift from whether they do to how they do.

Let's end with a little parable based on some G&K remarks (p. 281). They make an interesting observation about the history of modern biochemistry and consider its implications for current neuroscience. Watson and Crick's (W&C) great accomplishment was to find a way to chemically incarnate the classical gene. Until they did their work, the gene was considered a nice computing device, but many biologists believed that it was not "biologically real." W&C proved otherwise, and this entirely changed biochemistry. The field after W&C was entirely different from the field before W&C. But what of classical genetics, how much did it change? In contrast to biochemistry, the basics remained essentially as they were before W&C. Biochemistry had to "catch up" to genetics, not the other way around. This story has a moral: substitute neuroscience for biochemistry and cognition/linguistics for genetics. There is no reason a priori to think that "hard" (and expensive) neuroscience occupies the intellectual high ground to which "soft" (and cheap) cognition/linguistics must accommodate itself. Matters might well be the reverse, as they were once before when "hard" biochemistry ended up conforming to "soft" genetics. The discoveries discussed on Science Friday make the G&K conjecture a little less farfetched and tentatively suggest that the analogy with biochemistry/genetics is prophetic. We may be getting ready to say bye-bye to all those inane connectionist models. Yeah!!!

4 comments:

  1. Thanks for this post! Representational brain? Sure. Computational brain? Absolutely. TvN architecture? I find that hard to believe; I look forward to reading G&K's arguments.

  2. Just a minor comment: Turing machines and von Neumann machines don't have function/argument structure. They're far more primitive, with "function" things being at best an abstraction layer above the metal.

  3. I've been wondering for quite some time what all of this means for the study of a competence system, though. I haven't read G&K entirely, but I'm sure they're mostly concerned with input-output systems, i.e. performance systems. But if we study competence, we're dealing with a system whose computations are "abstract, expressing postulated properties of the language faculty of the brain, with no temporal interpretation implied," so that "the terms *input* and *output* have a metaphorical flavor" (quoting from Chapter 4, fn. 3). G&K talk about online computation, but computations, derivations, etc. in I-language are merely a way of characterizing the formal knowledge of language a speaker has, hence are to some extent independent of implementation. (Just for the record, I don't think that makes them any more real, "Platonic," or whatever. It's just like the rules of a boardgame, which are a system you can study without reference to people playing the game.) Incidentally, that's why I never understood Chomsky's point about phases reducing "memory load," as that just seems to be a category error.

    Therefore, I'm not very much moved by these debates, although they might become more urgent once we turn to matters of performance.

  4. I don't agree that G&K are mostly concerned with performance systems. G&K are concerned with the correct format for computational systems: the nature of the data structures, the nature of the algorithms, the kinds of operations required for cognition, and the brain wetware required to support these. These have great relevance for thinking about generative procedures, for these kinds of mental representations require support from adequate brain structures. What kind? They argue for something with a classical Turing-von Neumann architecture. I don't know if they are right (though I find their arguments persuasive, as apparently does Chomsky (note the appeal to authority move here!)), but everything that we know about language suggests that they are. See particularly their use of the 'infinity' argument to motivate their architectural claims. BTW, their point is that TvN architectures subserve computational systems. If we believe that linguistic competence is to be understood as a generative procedure (a computation), then we need their kind of architecture and not the connectionist stuff that is all the rage in the neuro community.

    Last point: I find Chomsky's allusions to computational considerations like 'memory load' very salutary. There is every reason to think that the form of data structures has consequences for the nature of the algorithms and vice versa. This is a theme well developed in G&K. If so, even if one's interest is mainly in the nature of the data structures and not the algorithms, considerations relevant to the latter can have interesting implications for the form of the former. As you no doubt know, moreover, this is hardly a new theme. Chomsky first outlines it in 'On Wh-Movement' in motivating subjacency as a theory of islands.
