
Wednesday, May 16, 2018

Talk about confirmation!!

As Peter notes in the comments section to the previous post, there has been dramatic new evidence for the Gallistel-King Conjecture (GKC) coming from David Glanzman's lab at UCLA (here). Their experiments were done on Aplysia. Here is the abstract of the paper:

The precise nature of the engram, the physical substrate of memory, remains uncertain. Here, it is reported that RNA extracted from the central nervous system of Aplysia given long-term sensitization training induced sensitization when injected into untrained animals; furthermore, the RNA-induced sensitization, like training-induced sensitization, required DNA methylation. In cellular experiments, treatment with RNA extracted from trained animals was found to increase excitability in sensory neurons, but not in motor neurons, dissociated from naïve animals. Thus, the behavioral, and a subset of the cellular, modifications characteristic of a form of nonassociative long-term memory in Aplysia can be transferred by RNA. These results indicate that RNA is sufficient to generate an engram for long-term sensitization in Aplysia and are consistent with the hypothesis that RNA-induced epigenetic changes underlie memory storage in Aplysia.
Here is a discussion of the paper in SciAm.

The results pretty much speak for themselves and they clearly comport very well with the GKC, even the version that garnered the greatest number of superciliously raised eyebrows when mooted (viz., that the chemical locus of memory is in our nucleic acids (RNA/DNA)). The Glanzman et al. paper proposes just this.

A major advantage of our study over earlier studies of memory transfer is that we used a type of learning, sensitization of the defensive withdrawal reflex in Aplysia, the cellular and molecular basis of which is exceptionally well characterized (Byrne and Hawkins, 2015; Kandel, 2001; Kandel, 2012). The extensive knowledge base regarding sensitization in Aplysia enabled us to show that the RNA from sensitized donors not only produced sensitization-like behavioral change in the naïve recipients, but also caused specific electrophysiological alterations of cultured neurons that mimic those observed in sensitized animals. The cellular changes observed after exposure of cultured neurons to RNA from trained animals significantly strengthens the case for positive memory transfer in our study. Another difference between our study and earlier attempts at memory transfer via RNA is that there is now at hand a mechanism, unknown 40 years ago, whereby RNA can powerfully influence the function of neurons: epigenetic modifications (Qureshi and Mehler, 2012). In fact, the role of ncRNA-mediated epigenetic changes in neural function, particularly in learning and memory, is currently the subject of vigorous investigation (Fischer, 2014; Landry et al., 2013; Marshall and Bredy, 2016; Nestler, 2014; Smalheiser, 2014; Sweatt, 2013). Our demonstration that inhibition of DNA methylation blocks the memory transfer effect (Fig. 2) supports the hypothesis that the behavioral and cellular effects of RNA from sensitized Aplysia in our study are mediated, in part, by DNA methylation (see also Pearce et al., 2017; Rajasethupathy et al., 2012). The discovery that RNA from trained animals can transfer the engram for long-term sensitization in Aplysia offers dramatic support for the idea that memory can be stored nonsynaptically (Gallistel and Balsam, 2014; Holliday, 1999; Queenan et al., 2017), and indicates the limitations of the synaptic plasticity model of long-term memory storage (Mayford et al., 2012; Takeuchi et al., 2014).


Two remarks: First, as the SciAm discussion makes clear, selling this idea will not be easy. Scientists are, rightfully in my opinion, a conservative lot and it takes lots of work to dislodge a well entrenched hypothesis. This is so even for views that seem to have little going for them. Gallistel (& Balsam) argued extensively that there is little good reason to buy the connectionist/associationist story that lies behind the standard cog-neuro commitment to net based cognition. Nonetheless, the idea is the guiding regulative ideal within cog-neuro and it is unlikely that it will go quietly. Or as Glanzman put it in the SciAm piece:
“I expect a lot of astonishment and skepticism,” he said. “I don’t expect people are going to have a parade for me at the next Society for Neuroscience meeting.”
The reason is simple actually: if Glanzman is right, then those working in this area will need substantial retraining, as well as a big-time cognitive rethink. In other words, if the GKC is on the right track, then what we think of as cog-neuro will look very different in the future than it does today. And nobody trained in earlier methods of investigation and basic concepts suffers a revolution gladly. This is why we generally measure progress in the sciences in PFTs (i.e. Planck Funereal Time).

Second, it is amazing to see just how specific the questions concerning the bio basis of memory become once one makes the shift over to the GKC. Here are two questions that the Glanzman et al. paper ends with. Note the detailed specificity of the chemical speculation:
Our data indicate that essential components of the engram for LTM in Aplysia can be transferred to untrained animals, or to neurons in culture, via RNA. This finding raises two questions: (1) Which specific RNA(s) mediate(s) the memory transfer?, and (2) How does the naked RNA get from the hemolymph/cell culture medium into Aplysia neurons? Regarding the first question, although we do not know the identity of the memory-bearing molecules at present, we believe it is likely that they are non-coding RNAs (ncRNAs). Note that previous results have implicated ncRNAs, notably microRNAs (miRNAs) and Piwi-interacting RNAs (piRNAs) (Fiumara et al., 2015; Rajasethupathy et al., 2012; Rajasethupathy et al., 2009), in LTM in Aplysia. Long non-coding RNAs (lncRNAs) represent other potential candidate memory transfer molecules (Mercer et al., 2008). Regarding the second question, recent evidence has revealed potential pathways for the passage of cell-free, extracellular RNA from body fluids into neurons. Thus, miRNAs, for example, have been detected in many different types of body fluids, including blood plasma; and cell-free extracellular miRNAs can become encapsulated within exosomes or attached to proteins of the Argonaute (AGO) family, thereby rendering the miRNAs resistant to degradation by extracellular nucleases (Turchinovich et al., 2013; Turchinovich et al., 2012). Moreover, miRNA-containing exosomes have been reported to pass freely through the blood-brain barrier (Ridder et al., 2014; Xu et al., 2017). And it is now appreciated that RNAs can be exchanged between cells of the body, including between neurons, via extracellular vesicles (Ashley et al., 2018; Pastuzyn et al., 2018; Smalheiser, 2007; Tkach and Théry, 2016; Valadi et al., 2007). If, as we believe, ncRNAs in the RNA extracted from sensitized animals were transferred to Aplysia neurons, perhaps via extracellular vesicles, they likely caused one or more epigenetic effects that contributed to the induction and maintenance of LTM (Fig. 2).
Which RNAs are doing the coding? How are they transferred? Note the interest in blood flow (not just electrical conductance) as "cognitively" important. At any rate, the specificity of the questions being mooted is a good indication of how radically the field of play will alter if the GKC gets traction. No wonder the built-in skepticism. It really does overturn settled assumptions if correct. As SciAm puts it:
This view challenges the widely held notion that memories are stored by enhancing synaptic connections between neurons. Rather, Glanzman sees synaptic changes that occur during memory formation as flowing from the information that the RNA is carrying.
So, is GKC right? I bet it is. How right is it? Well, it seems that we may find out very very soon.

Oh yes, before I sign off (gloating and happy I should add), let me thank Peter and Bill and Johan and Patrick for sending me the relevant papers. Thx.

Addendum: Here's a prediction. The Glanzman paper will be taken as arguing that synaptic connections play no role in memory. Now, my completely uneducated hunch is that this strong version may well be right. However, it is not really what the Glanzman paper claims. It makes the more modest claim that the engram is at least partly located in RNA structures. It leaves open the possibility that nets and connections still play a role (though an earlier paper of his argues that it is quite unclear how they could, as massive reorganization of the net seems to leave prior memories intact). So the fallback position will be that the GKC might be right in part but that a lot (most) of the heavy cog-neuro lifting will be done by neural nets. Here is a taste of that criticism from the SciAm report:
“This idea is radical and definitely challenges the field,” said Li-Huei Tsai, a neuroscientist who directs the Picower Institute for Learning and Memory at the Massachusetts Institute of Technology. Tsai, who recently co-authored a major review on memory formation, called Glanzman’s study “impressive and interesting” and said a number of studies support the notion that epigenetic mechanisms play some role in memory formation, which is likely a complex and multifaceted process. But she said she strongly disagreed with Glanzman’s notion that synaptic connections do not play a key role in memory storage.
Here is where the Gallistel arguments will really come into play, I believe, as the urgency of answering Randy's question (how do you store a retrievable number in a connectionist net?) will increase for precisely the reasons he noted. The urgency will increase because we know how a standard computing device can do this, and now that we have identified the components of a chemical computer, we know how this could be done without nets. So those who think that connections are the central device will have to finally face the behavioral/computational music. There is another game in town. Let the fun begin!!
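To see just how trivial this is for a classical machine, here is a minimal sketch in Python (purely illustrative; the names are mine, not anything from the papers):

```python
# A classical machine stores a retrievable number with two primitive operations:
# write a symbol to an addressable location, then read it back exactly.

memory = {}  # an addressable read/write store: the "register"


def write(address, value):
    memory[address] = value  # the value is stored as a symbol


def read(address):
    return memory[address]   # retrieval is exact and requires no retraining


write("interval_between_rewards", 42)
assert read("interval_between_rewards") == 42

# A connectionist net has no analogous primitive: a quantity can only be
# smeared across many weights by training, and "reading" it back means
# probing the net's input/output behavior rather than fetching a symbol.
```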

18 comments:

  1. But... in the in-vitro experiment, the RNA just made cells more excitable. What if that's all it does, without representing any particular past event? What if the in-vivo experiments, too, just transferred the organism's reaction, not any underlying memory?

    RNA strikes me as a highly unlikely substance of long-term memory in any case, because it's just so unstable. It falls apart even faster than DNA, and unlike for DNA there's no known repair mechanism. What kind of enzyme could generate an RNA sequence (without a template!) to represent an event – we're one reverse transcriptase, one retrovirus, away from full-blown Lamarckism here – is likewise hard to imagine.

    ReplyDelete
  2. I sent Norbert a longish letter, which he asked me to post. I couldn't make it short enough. So here's part one...

    Hi Norbert,
    Great post, and happy gloating. Though as usual, you’re a little too concessive for my tastes. You say that scientists are “rightfully” a “conservative lot,” and that “it takes lots of work to dislodge a well entrenched hypothesis.” But as you go on to note, a “hypothesis” can be entrenched for reasons that have nothing to do with reasons. So I’d want to explicitly distinguish two different kinds of conservativism that you allude to: (i) being cautious/skeptical about replacing a hypothesis that has been confirmed, at least a little, with some new-fangled alternative; (ii) sticking with an older proposal that was never confirmed, given a new-fangled alternative, as if we should care about the temporal order in which proposals are made.

    Since it’s de rigueur to frame issues in Bayesian mode, let’s ask if there is any justification for assigning the GKC a lower prior than “the connectionist/association story that lies behind the standard cog-neuro commitment to Net Based Cognition.” Of course, there’s a family of stories here. But take whatever entrenched version of NBC seems least bad. I don’t see how mere age/entrenchment can matter. (Though I would be delighted to see associationists explicitly embrace Nelson Goodman on this score; gathering many bad ideas into one empiricist basket does make things tidier.) In fact, one might think that the age of NBC has had the effect of subtracting from its prior, given other facts. If smart people with decent resources keep trying to find support for a view, and the track record is dismal, that has to count for something. (On a related note, see the piece by Gary Marcus and Ernest Davis in today’s Times on Google, AI, and making hair salon appointments.)

    Of course, the prior for GKC is very low. How could it not be? As many have noted (see, e.g., the previous comment), it's not antecedently plausible that DNA or RNA—or anything down there—actually does the trick, which requires a lot of stability. But however low that prior was, surely the recent discoveries have had a little bit of a positive effect, as you've been rightly stressing. And if one thinks that the prior for NBC was also very low, then we seem to be comparing "very low but rising" with "very low and falling." And now, we can throw away the Bayesian ladder, which makes the issue seem like one that calls for calculation as opposed to judgment. (Another of your favorite themes.)
    [part 2 follows]

    ReplyDelete
  3. [part 2]
    The issue is whether there were ever any good reasons for preferring NBC to GKC—apart from the fact that GKC wasn’t yet on the table—and if those alleged reasons are still strong enough that they haven’t been outweighed by the recent discoveries and track record of NBC. Simply put, we can just ask afresh if there are now any good reasons for preferring NBC to GKC. I don’t think type (i) conservativism is relevant in this context, given the track record of NBC; and I’m pretty sure you agree. But even for those who disagree, it’s worth considering the following possible history: GKC was available in 1960—in light of Turing, Chomsky, Watson and Crick—but people just said, “yeah, but we’d rather explore suggestions that seem friendlier to empiricism;” then for the next 50 years, there was no progress on how biology manages to add 1 to a memory register; then a few people took GKC seriously, and the results you’ve been describing started coming in.

    In any case, as you say, it’s now undeniable that GKC is on the table as a suggestion about the aspects of biochemistry that matter for biochemically realized memory and computation. So one can no longer say that some other suggestion is the only (and hence best) game in town. Conservatism with regard to confirmed hypotheses is usually appropriate. But there’s nothing admirable about conservativism that is rooted in “how else?” arguments coupled with a refusal to consider alternatives. (I think Hume and Darwin and Fodor made similar points in response to other examples of dogmatism. I hope that Fodor, who’s been on my mind a lot, won’t mind that company in this context.)

    Like most good ideas, GKC may well turn out to be wrong. But given the competition, it's looking like the best game in town. Maybe that's just a sad commentary on the state of the town. But the onus is on team NBC to start showing how "higher level" aspects of biochemistry (a.k.a. brains) are what really matter for biochemically realized memory and computation. Anyway, thanks for the post, and for the prompt to think again about how to compare new implausible ideas to old implausible ideas.

    ReplyDelete
  4. Additional discussion and links at The Neurocritic:

    http://neurocritic.blogspot.com/2018/05/what-counts-as-memory-and-who-gets-to.html

    ReplyDelete
    Replies
    1. The discussion on Neuroskeptic has been pretty interesting so far. An interesting tidbit for linguists: apparently, the difference between "a memory" and "memory" is just as hard to grasp for some people as the difference between "a language" and "language".

      Delete
  5. The more I think about it, the worse the GKC fares. Memories are mutable; we construct and reconstruct them, modifying them based on things we learn later or based on what we later think makes more sense. If memories are RNA or DNA strands, that requires editing enzymes that can read the strands for understanding, doesn't it? Known repair enzymes can't even distinguish genes from junk DNA.

    ReplyDelete
    Replies
    1. 1) Memory, not memories. A line of water buckets can serve as memory. Heck, your car can serve as memory if I have a system of encoding information via a specific arrangement of fender benders. So there's no reason why RNA/DNA can't. Both already serve as semi-immutable memory in the body --- semi-immutable since gene expression varies across cell types, and because there's all kinds of modification processes such as splicing that translate between different formats of chemically encoded information. (A toy sketch after this comment illustrates this point and the next.)

      2) The chemical processes that drive the translation from DNA to mRNA to proteins can be conceptualized as a long chain of computational rewrite steps (e.g. finite-state transductions). None of that involves any kind of reading for understanding; the code works in such a way that chemical processes realize the desired computations.

      3) Whatever you're doing on your computer right now, the hardware carrying out the computations has no understanding at all and just behaves according to specific physical laws. You might say humans are different in that our computations are conscious, self-monitoring, and reflective, but that's a moot point since nobody has an explanation of how that works anyway. Just like nobody can really tell you how chemical reactions produce life from inanimate matter.
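
      To make points 1) and 2) concrete, here is a toy sketch in Python (purely illustrative; nobody claims the nervous system runs this code). Arbitrary data round-trips through a four-letter "RNA" alphabet, and a one-state transducer then rewrites the result symbol by symbol, with no reading for understanding:

```python
# Point 1: any stable, distinguishable arrangement of states can serve as
# memory, given a code. Here: two bits per nucleotide.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "U"}
BITS_FOR_BASE = {bits: base for base, bits in
                 ((b, k) for k, b in BASE_FOR_BITS.items())}

def encode(data: bytes) -> str:
    """Store arbitrary bytes as a nucleotide string."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# Point 2: a one-state transducer -- one rule per symbol, no context,
# no understanding, yet it realizes the desired computation.
def transduce(seq: str, rules: dict) -> str:
    return "".join(rules[symbol] for symbol in seq)

rna = encode(b"shock at siphon")          # a trace stored in four chemical letters
assert decode(rna) == b"shock at siphon"  # retrieval is exact, given the code

REVERSE_TRANSCRIBE = {"A": "T", "U": "A", "G": "C", "C": "G"}
print(transduce(rna, REVERSE_TRANSCRIBE))  # rewritten symbol by symbol into "DNA"
```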

      Delete
    2. OK, "for understanding" was exaggerated. But:

      Memory, not memories.

      Isn't the idea that specific sequences are specific memories?

      So there's no reason why RNA/DNA can't.

      But how could the encoding work?

      gene expression varies across cell types, and [...] there's all kinds of modification processes such as splicing that translate between different formats of chemically encoded information.

      Sure. How do you get from there to such specific, apparently permanent changes to memories?

      None of that involves any kind of reading for understanding, the code works in such a way that chemical processes realize the desired computations.

      By "understanding" I only meant the ability to react to the coded information. DNA repair enzymes or duplication enzymes don't recognize start or stop codons or the borders of exons & introns. Even RNA polymerases don't; they binds strongly to certain sequences and less strongly to others, but produce lots of useless RNA, including useless extensions at the start and end of every mRNA and enormous amounts of wholly useless RNA strands that do nothing and are promptly destroyed again. If an RNA sequence is a memory, to edit that memory seems to require an enzyme that exchanges just the particular bases that represent that particular part of the memory, right? Because it's really hard to imagine that such a thing could exist. Yet, that's the kind of thing computers do all the time. If you try to copy a file, you'll get a copy of that file, not of the whole stretch from 10–20 kB before the start of the file to 100–200 kB after the end of the next file.

      Just like nobody can really tell you how chemical reactions produce life from inanimate matter.

      Life is just a category error. It isn't "produced by" chemical reactions, it is a chemical reaction. Once a chemical reaction is complex enough to fulfill whatever set of criteria you prefer, you can call it alive.

      Delete
    3.  If an RNA sequence is a memory, to edit that memory seems to require an enzyme that exchanges just the particular bases that represent that particular part of the memory, right? Because it's really hard to imagine that such a thing could exist.

      Limits of this particular study aside (well addressed in the Neuroskeptic's comment section, btw), I don’t fully understand your “hard to imagine” point.
      I’m no molecular biologist, but we already know that new protein synthesis is necessary for memory formation, consolidation, and recall.
      Thus, it is not too hard for me to imagine that when a memory is stored in a cell or a circuit, this should also be detectable at the level of RNA, since proteins are what ultimately do the work.

      Sheena Josselyn's lab has done very interesting work on this. And of course Susumu Tonegawa has tons of work on the cellular basis of memory formation.

      Delete
    4. Aniello and Jon have already made some substantive remarks on molecular computing, but I'd like to add an explanation as to why the difference between "memory" and "a memory" is so important to me:

      Isn't the idea that specific sequences are specific memories?

      That's how it is phrased in the press release, but the paper is a bit more cautious. Memory is a computational concept and has little to do with the psychologist's notion of "a memory". If you open a file on your computer, it gets loaded into memory, yet the sequence of bits in your computer's RAM is not a memory of this text file in the traditional sense. In addition, a very different file loaded into RAM might look exactly the same because the two files use different encodings to produce different text from the same bit sequences. Memory is an information buffer, "a memory" is a stored representation (i.e. a sequence with an encoding).
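
      A concrete version of the same-bits-different-encodings point (my illustration, not something from the paper): the buffer below is just memory; which "text" it holds depends entirely on the decoder applied to it.

```python
# Five bytes sitting in a buffer: memory, but not yet "a memory".
buffer = bytes([0xC3, 0xA9, 0x74, 0xC3, 0xA9])

# The same bit sequence yields different contents under different encodings.
print(buffer.decode("utf-8"))    # 'été'
print(buffer.decode("latin-1"))  # 'Ã©tÃ©'
```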

      We could take that RAM module containing your text file, put it into another computer, and then read out the files from there if we knew what encoding to use. That's hard in practice because RAM loses its contents very quickly unless it is supercooled (it's kind of unstable, like RNA), but there are known attack vectors that rely on transplanting RAM modules to read out decrypted data.

      For all we know, the researchers might have pulled the equivalent of a RAM transplant, which happened to contain a chunk of data that then triggered certain computations/chemical reactions (mirroring Aniello's point about proteins). So RNA might be memory without containing anything we would call "a memory", and it might not even be the actual long-term storage but just a temporary buffer.

      One more remark:

      If you try to copy a file, you'll get a copy of that file, not of the whole stretch from 10–20 kB before the start of the file to 100–200 kB after the end of the next file.
      That's because your computer has an abstraction layer known as a file system. You can copy files without a file system, e.g. with the tool dd. Then when you want to copy a file, you need to know the exact sectors. But when the file is fragmented, it does not span a contiguous region, so you might decide to copy it by finding its first sector and its last, and then copying the whole span, with all kinds of crud in between.
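
      Here is that move sketched in Python, standing in for dd (the image name and sector numbers below are made up for illustration): with no file system in view you can only address raw offsets, so you copy the whole span, crud included.

```python
SECTOR = 512  # bytes per sector, as on a classic disk

def copy_span(device_path, first_sector, last_sector, out_path):
    """Copy every sector from first to last inclusive; no notion of 'files' at all."""
    with open(device_path, "rb") as dev, open(out_path, "wb") as out:
        dev.seek(first_sector * SECTOR)
        out.write(dev.read((last_sector - first_sector + 1) * SECTOR))

# Hypothetical usage: grab sectors 2048-4095 of a disk image, crud and all.
# copy_span("disk.img", 2048, 4095, "recovered.bin")
```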

      The point being, it makes little sense to immediately go to high-level concepts such as encodings, specific memories, retrieval, and so on, only to resign because the chemical facts seem at odds with these abstract notions. The architecture of our computers is also at odds with filesystems and their notion of read/write/copy, which is an abstraction layered over the hardware.

      Delete
  6. Just to give credit, the GKC has been independently proposed by other very respected neuroscientists. For example, the last chapter of Christof Koch's very influential 1999 book "Biophysics of Computation", called "Unconventional Computing", explores the merits of molecular computation, as well as a whole laundry list of neuroactive substances and structures, including such lovely sections as "Computing with Puffs of Air" and "Programming with Peptides".

    ReplyDelete
  7. Advances in single-cell sequencing have now shown more diversity in cortical cells ( https://www.frontiersin.org/articles/10.3389/fgene.2012.00124/full ) and "reveals large-scale changes in the activated neuronal transcriptome after brief novel environment exposure" ( https://www.nature.com/articles/ncomms11022 ). So it seems that brief experiences do cause RNA changes. As Thomas says, this is some kind of "memory" but it's hard to see what the mechanism is for retrieving a memory and then using it for a current computation. That is, how do you "read" the memory back?

    ReplyDelete
  8. There's a reason the nervous system evolved electrical communication: speed. If the main location of the engram was in RNA, reading it would be terribly slow compared to sending an electrical impulse into a trained circuit and getting a response.

    How fast do you think you could get a response from a system mediated by RNA? If you already have a system that can not only send messages into and out of a processing centre, but whose central processing can be mediated by changing patterns of connectivity via mechanisms that clearly are present (despite any cynicism that may be around), then you already have a system that can do all that needs to be done, and do it fast. There is no currently known evidence for a competing system with a mechanism for getting messages into or out of RNA anything like as fast. Bear in mind that if there existed some important style of processing RNA in any significantly different way than the usual, we might have seen some trace of it by now.

    The current electrical, neuron-based system, which does learn, would have to be utilised by any RNA-based system internal to individual cells in order to connect it to other cells.

    The Hebbian process ("neurons that fire together, wire together") will no doubt involve RNA in some of the "wiring together" stage, but that's not what you have in mind. I don't think it can be claimed that there's no evidence for mechanisms behind the Hebb principle. But if that's accepted, is the GKC process supposed to replace it?

    The sea-slug (Aplysia) experiment mentioned at the start of the post may be new, but teaching flatworm B a trick by getting it to eat flatworm A, which had already learned it, dates back to J. V. McConnell (1962). Whatever chemical mediated that phenomenon, if it actually existed, would no doubt have worked by something not much more than a change of mood. You can do that using hormones and neurotransmitters, but that's not the engram.

    Bacteria are not in a position to use multiple cells in nets, so they do what they can within one cell. It is interesting but I don't think it's that fast.

    ReplyDelete
    Replies
    1. This comment has been removed by the author.

      Delete
    2. It's true that compared to chemical synapses, electrical synapses conduct nerve impulses faster, but, unlike chemical synapses, they lack gain — the signal in the postsynaptic neuron is the same or smaller than that of the originating neuron.

      I guess the speed/electricity in neurons example comes from a Hodgkin paper, "Optimum Density of Sodium Channels in an Unmyelinated Nerve" (1975), noting that the giant squid axon mediates the escape reflex, and it is critical for survival that this reflex is fast. That conduction speed along the axon was evolutionarily optimized seems to make sense on the surface.

      Theoretical neuroscientist Romain Brette has a very good blog post on optimality principles in neuroscience, and while you can read the whole thing, there is a part directly relevant to your point, and to ascribing representational stuff to evolutionary pressures in general:

      Hodgkin notes that in other cases (different axons and species), the prediction based on speed does not work so well. His argument then is that speed may simply not be the main relevant criterion in those other cases. It was in the case of the squid axon because it mediates a time-critical escape reflex, but in other cases speed may not be so important and instead energy consumption might be more relevant. Because the squid axon mediates an escape reflex, it very rarely spikes and so energy consumption is presumably not a big issue – compared to being eaten alive because you are too slow. But energy consumption might be a more important criterion for axons that fire more often (say, cortical neurons in mammals). There is indeed a large body of evidence that tends to show that many properties of spike initiation and propagation are adapted for energy efficiency (again, with some qualifications, e.g. fast spiking cells are thought to be less efficient because it seems necessary to fire at high firing rates). There are other structures where axon properties seem to be tuned for isochrony, yet another type of criterion. Isochrony means that spikes produced by different neurons arrive at the same time at a common projection. This seems to be the case in the optic tract (Stanford 1987, “Conduction Velocity Variations Minimizes Conduction Time Differences Among Retinal Ganglion Cell Axons”) and many other structures, for example the binaural system of birds. Thus many aspects of axon structure seem to show a large degree of adaptation, but to a diversity of functional criteria, and it often involves trade-offs.
      ...
      biological organisms do not need to be optimal but only “good enough”, and there might be no evolutionary pressure when organisms are good enough. There is an important sense in which this is true. This is highlighted by Hodgkin in the paper I mentioned: there is a broad range of values for channel density that leads to near-optimal (“good enough”) velocities, and so the exact value might depend on other, less important, criteria, such as energy consumption. But note that this reasoning is still about optimality; simply, it is acknowledged that organisms are not expected to be optimal with respect to any single criterion, since survival depends on many aspects.

      Delete
  9. Hi Jon - thanks for the reply. It wasn't chemical vs electrical synapses I was comparing for speed. [For others: electrical synapses are when the ends of two nerve cells just have holes in their walls where they abut each other, and the nerve impulse just jumps on through via change in electrical potential - voltage - and no complex standard release of neurotransmitters from one cell, received across the synapse by the next, is needed. The chemical method, using neurotransmitter chemicals, introduces a delay, but it's by far the predominant type of synapse in vertebrates. The electrical method is faster but appears mostly in inverts. They are almost always cold, so a fair bit slower anyway.]

    I was talking about accessing the stored memory trace - the engram - as a pattern of transmitted excitations and inhibitions in a natural neural network. That we have, and we know a lot about how it is set up and accessed. Even though the chemical synapses are slowish, they're a heck of a lot faster than any system involving reading LOTS of different stretches of RNA and combining the results.

    I didn't realise the giant squid axons were almost never used in the wild. Makes sense though.

    Yes, efficiency does involve a lot of things: energy saving, other potentially non-reusable resources, and speed, for example.

    We have bigger nerve fibres for simple sensation, which are quick and energetically expensive. We have smaller ones for pain reception, which are slower (usually). In this way we do optimise for energy but brains are a special case. Warm-blooded animals that are living in coldish environments often have to generate their own heat to maintain temperature. If you use energy to run a brain, you can then make use of the heat by-product which you would have had to generate anyway. This is no doubt why warm-blooded animals have developed the biggest brains.

    Thanks for pointing me to the Romain Brette pages. The first of them, which I've gone back to, looked sound at the start and extremely sound by the end. I recommend that everyone else read all of them; however, I strongly suspect that I already believe everything said there. If there was a fair bit less of them I would definitely read them all immediately!

    ReplyDelete
  10. OK so it turns out that even if you're committed to synapses as the be-all-end-all of brains, you should still be looking at the molecular level! There's a new article out in Cell about the primary driver of synapse organization and development, neurexin-neuroligin. The field thought neurexin's role in synapse development was controlled just by its protein domains' signatures, but this new paper shows it's sensitive to a certain sulfate compound in the protein domains (well, really its partnership with other proteins, but still). A cool result is that if you mutate neurexin in Drosophila (fruit flies) you get disrupted development and locomotion, and they show that this is due to the mediation of this sulfate compound! So molecular compounds have a direct effect on the synaptic expression governing aspects of brain function. Neat results, and IMO further evidence that you need to entertain "non-canonical computation" at the molecular level, as Christof Koch suggests.

    ReplyDelete
  11. This comment has been removed by the author.

    ReplyDelete