Thursday, March 25, 2010

Getting smarter and stupider

Suppose we take the plasticity principle as implying that the brain is an automatic learning machine, acquiring information pretty much on its own over time (i.e., what if we can't control much about the learning/memory rate through meta-cognitive control processes?). Maybe that's too strong, but the following analysis ought to hold even if applied just to the set of things we learn implicitly.

At any given instant, t, there is a set of information active in the brain. This includes the state of the perceptual systems, processing of relationships among items in association cortex (including semantic memory), the contents of working memory and other control processes. Some fraction of this information changes the state of the brain in a way that is detectable later, that's the acquisition of memory. We could make it a formula:

M(t) = f(cognitive state)

I don't know if it's useful, but it's cute to observe that M(t) is technically the derivative of a function that describes the total amount of information currently held in the brain. Perhaps it is useful as it reminds us that we can also forget or lose things we know. So the instantaneous change in memory state in the brain is more properly:

M(t) = f(cognitive state) - g(information state)

where g is a forgetting function. One way we can describe the process of getting smarter is to figure out how to maximize M (find the second derivative? heh). If all information is equal, then it's a matter of doing things that maximize f() and minimize g(). You could make a pretty good argument that some things are more important to learn than others so simply maximizing f() isn't as important as what the content of the learning is. This is probably correct, but one shouldn't be too sure -- how do we really know what is going to be important in the future? That line of inquiry will lead us tangentially off in the direction of Everything Bad is Good For You.
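
To make that a bit more concrete, here's a minimal Python sketch of the toy model. The specific forms are entirely my own assumptions for illustration -- a constant acquisition rate f and forgetting proportional to how much you currently know -- not anything established about real memory.

def simulate_information(f_rate=10.0, g_rate=0.01, days=365):
    # Euler-step the toy model dI/dt = f(state) - g(information state),
    # with f assumed constant and g assumed proportional to current knowledge.
    info = 0.0
    trajectory = []
    for _ in range(days):
        net_change = f_rate - g_rate * info  # this is M(t) in the notation above
        info += net_change
        trajectory.append(info)
    return trajectory

trajectory = simulate_information()
print(f"after a year: {trajectory[-1]:.0f} units (asymptote f/g = {10.0 / 0.01:.0f})")

Under those assumptions total knowledge saturates at f/g, the point where acquisition exactly balances forgetting -- which is just another way of reading the advice to maximize f() and minimize g().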

Let's consider some simpler things about f(). One issue is repetition vs novelty. Exact repetition of a prior cognitive state can be stored pretty compactly (e.g., you could count how often it occurs and store the whole episode as just increasing a counter). A completely novel experience adds all sorts of rich new data and connections among prior ideas and should add a lot more information.

But it's not that simple. Repetition sometimes allows you to see deeper into a domain; e.g., playing over a chess game is a great training exercise in that it lets you see new connections and new ideas. And novel episodes are going to be primarily learned via the MTL memory system, which probably has bandwidth limits (repetition-based, statistical, implicit learning less so). The degree to which novel information "makes sense" based on what you already know strongly increases p(storage), which suggests there's an optimal level of novelty (as an aside, when you hit that optimal level, I suspect you get a humor response). Repetition may also decrease g(), helping you hold onto prior knowledge.
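
And here's an equally toy sketch of the "optimal level of novelty" point. The functional forms are pure assumption on my part: the information content of an experience grows with its novelty, while p(storage) is assumed to fall off as the material connects less well to what you already know. The expected amount actually stored then peaks somewhere in the middle.

import math

def expected_storage(novelty, steepness=5.0):
    # novelty in [0, 1]; p(storage) assumed (not measured!) to decay
    # exponentially as the material connects less to prior knowledge
    p_storage = math.exp(-steepness * novelty)
    return novelty * p_storage

best = max(range(101), key=lambda n: expected_storage(n / 100.0))
print(f"expected storage peaks at novelty ~ {best / 100.0:.2f}")

Where exactly the peak lands depends entirely on the made-up steepness parameter; the only point is that an interior maximum falls out naturally.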

It'd be nice if there were biological ways to boost f() and/or decrease g() (memory drugs). Good sleep hygiene probably does both, although sleeping also limits your perceptual input.

Leaving that there for now, how do we consider the possibility of getting stupider under this model? We don't know of anything that increases g() dramatically (short of neuropathology or brain damage). Our model is one of constant acquisition. By this model, the main way one can become stupider is to accidentally acquire false information.

I was reviewing a paper the other day that had several major conceptual flaws. It occurred to me that if somebody who didn't know the area read this paper, they would actually be stupider after reading the paper than before (no, my review was not positive). This is a general concern, but many examples occur in the political arena (e.g., WMD were found in Iraq, actionable intelligence was obtained from torture, Obama was born in Kenya).

If you want to protect yourself from accidentally becoming stupider, it's worth noting that the real risk comes from areas you don't know very well. If somebody says, "the moon is made of green cheese," well, you're probably not stupider because you don't believe it. In fact, you have learned that this person either thinks the moon is made of green cheese or for some reason wants you to think that it is.

But if you want to read outside your area of expertise, you need to check your information stream carefully. Can you corroborate (spot check) a few facts? Does the source provide corrections to inevitable errors? Sometimes people say to me, you read a lot of information coming from one side of the political spectrum, don't you worry about the "echo chamber" effect? No, because the echo is boringly repetitive and I select a subset of sites that provide generally accurate information. Note that this doesn't include NYTimes columnists like Friedman, Brooks or Broder. They get some things right, but I've seen too many documented instances of them doing things that would have made me stupider. I might have accidentally learned that reconciliation is unconstitutional or only used by Democrats or something.

I deduce two obvious things about learning from this:
1. Manage your input stream (experiences) to optimally balance novelty and repetition to maximize your information storage rate.
2. Guard your information stream carefully to avoid falsehoods from getting into your brain and making you stupider.

Tuesday, March 16, 2010

The Plasticity Principle

The Plasticity Principle describes my conceptual model of what implicit learning is. The core idea is that there are a lot of damn neurons in the brain and even more synapses, and if we start with the assumption that every one of them is plastic (i.e., changeable), what are the consequences of that?

The first is that it is not correct to think of the brain as a bunch of static processing systems like the visual system, the motor system, control systems, and the memory system. Instead, we think of every one of those systems as capable of being shaped by experience. The goal of this learning is to continually refine and optimize processing. We don't know exactly how this learning process will go because we don't yet know either the capacity and limits of this inherent plasticity or exactly what optimal processing looks like.

Chasing those operating characteristics questions leads to psychological and cognitive neuroscience research on learning and memory processes. But I have a growing sense there are other more accessible implications.

1. The world shapes your brain (physically). On the upside, this is the basis for why Everything Bad is Good For You (in some cases). Games and other entertainments that create knowledge or strengthen cognitive skills actually enhance brain function. This happens because those systems strengthen through incidental practice.

2. On the downside, the world can create bad habits in your brain as well. Implicit racism is a good example of this. Regular co-occurrence of negative ideas and minorities creates a bias that affects your behavior without you even realizing it's there.

3. Cognitive training works. Practice with mental exercise improves cognitive function. Brain training helps with aging and probably in degenerative disorders as well. It will be cool to see how video games can be integrated with this idea.

4. The trick will be figuring out the right things to practice. You can certainly get good at specific skills that don't help much elsewhere. I'd say chess is a good example of this. Chess players are freakishly good at chess and chess memory. It doesn't mean they are very good at anything else, though.

Most everything I've jotted down in this blog format is strongly influenced by the underlying idea:
5. Thinking about habits and perfection is motivated by marveling at how close to perfection habits can get. The upper bound on optimal performance is pretty amazing, and that argues that this type of plasticity is pretty effective.

6. Teaching should incorporate skill development. Not all cognitive skills can be easily taught in games. The classroom is an environment where students will commit to acquiring some other types of skills and teaching should take advantage of that.

7. The Butterfly effect and Nature/Nurture. Most nature/nurture discussions do not take sufficient account of feedback effects through learning. If you assume there's a lot of plasticity in the brain, you should be very sensitive to feedback spirals. That is, a small push/advantage in a domain can cause you (or others who influence your environment) to direct more attention to that domain, and as learning contributes, that pushes you further from the mean (a toy simulation of this is sketched just after this list). This will exacerbate the effect of small genetic differences, but only when the feedback loop isn't externally constrained. In the IQ world, things like race and gender have big impacts on getting the feedback loop started and keeping it going.

I think that humans' expertise in face processing might be an excellent example for #7. Mark Johnson's model of a small genetic/pre-wired push towards looking at faces is a good start. Ken Nakayama's examination of individual differences in face recognition ability contributes. His lifespan/ability graph, showing that the ability peaks in one's early 30s, makes me intuitively confident that there's a big learning component. I should dig in and figure out why.

8. Statistical models of language processing. These capture more language than you'd think. And the concept is showing up in technology like Google. It ought to be more prevalent in recommendation algorithms like Netflix/Amazon, I think (but the argument isn't trivial).
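
The feedback spiral in #7 is easy to simulate. Here's a toy version (all the numbers are invented): two learners start nearly identical, but each round the environment routes a little extra practice toward whoever is currently ahead.

def spiral(skill_a=1.00, skill_b=1.02, base=0.02, bonus=0.02, rounds=100):
    # base: learning everyone gets each round
    # bonus: extra practice the current leader attracts (coaching, selection, encouragement)
    for _ in range(rounds):
        a_leads = skill_a > skill_b
        skill_a *= 1 + base + (bonus if a_leads else 0)
        skill_b *= 1 + base + (bonus if not a_leads else 0)
    return skill_a, skill_b

a, b = spiral()
print(f"started 2% apart, ended {b / a:.1f}x apart")

Set the bonus to zero (an externally constrained loop) and the two stay 2% apart forever; leave it in and a trivial head start compounds into a several-fold difference. That's the sense in which small initial differences get exacerbated only when the loop is allowed to run.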

Broadly, my thinking here is strongly influenced by John Anderson's Adaptive Character of Thought (his rational analysis of cognition). And I think also by Herb Simon's economic theory of Bounded Rationality (maybe less obviously). The idea of looking for the "operating characteristics" of implicit learning was developed with Larry Squire. I remember he was a fan of the phrase, although I think his model of nondeclarative memory was as a finite set of separate systems rather than a broad "every neuron" principle.

Anyway, is this a useful/interesting collection of implications? I can haz popular science book? Too much real science work to do right now anyway, but I'll keep collecting related thoughts here in the meantime.

Monday, March 15, 2010

Perfection

[Some thoughts that seem like they ought to be more related...]

Habits are supposed to be things we do smoothly, seamlessly, and generally without error. But we still aren't perfect at them. We stumble on the stairs sometimes or drop our keys when opening the house or car door for no apparent reason. Should we be impressed at how good our habit system is, or annoyed about these failures and imperfections?

Tap Tap Revenge 3 is a sequence-learning rhythm game that is structurally like Guitar Hero on the iPhone (or video iPod), except you tap on the screen for your responses. This makes it a lot like our lab task, SISL (for Serial Interception Sequence Learning). So, of course, I'm doing an introspective research experiment on long-term learning in TTR3.

I'm pretty accurate in Medium level difficulty mode. The %hit number at the end of any given song is usually 99% or 100% (often by rounding). I started getting an occasional "Full Clear" (FC) recently. This is an achievement you get if you make zero errors of any type for an entire song of 500-600 notes -- perfection!

What're the odds? Well, if you are 99.0% accurate, the odds of making 600 errorless responses in a row are... 0.2%, about 1:400. Pretty grim. At 99.5% accurate, you get up to around 5% (1:20). Practically speaking, it's more complicated since your accuracy is higher for songs you know (and I know I tend to play songs more when I like them, and I can't rule out that I like easier songs better because, well, dopamine). Also, it's not totally clear how many error chances there are. Most of the "errors" I make now aren't missing or mis-timing a planned response, but accidentally double responding (making n+1 responses to an n-response train, particularly for n > 5) or accidentally dragging a finger on the screen when I don't mean to.

On Hard mode, my %hit score is typically down around 97%-98%, which seems pretty good, but you can tell intuitively that you have no chance at all of an FC even without doing the odds on a calculator.
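
If you want to redo the arithmetic, here's the whole calculation in a few lines of Python (assuming every note is an independent chance to err, which the double-taps and finger-drags clearly violate, but it's fine for a sanity check):

def p_full_clear(accuracy, notes=600):
    # probability of zero errors across the whole song, treating notes as independent
    return accuracy ** notes

for acc in (0.99, 0.995, 0.98, 0.97):
    p = p_full_clear(acc)
    print(f"{acc:.1%} accurate: p(FC) = {p:.4%}, about 1 in {round(1 / p):,}")

At 99% you're around 1 in 400, at 99.5% about 1 in 20, and at Hard-mode accuracy (97-98%) the odds fall somewhere between 1 in a couple hundred thousand and 1 in tens of millions -- hence the "no chance at all" intuition.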

What's the point? Well, perfection is hard. And your habit learning system has to be pretty sharp to get you anywhere near perfection in the first place. I didn't download the TTR3 app that long ago, nor do I have that much time to practice, but my cortico-striatal circuits seem to be getting a pretty firm grip on the sequences.

I'm not even sure it's a good idea to dwell on how hard it is to achieve perfection, but the unnerving example I throw out sometimes is, "how hard do you think it is to land an airplane?" Because it seems like it should be hard, but 10,000 planes land every day with virtually no incidents. There's a lot of 9's in that reliability rate. How do they do that? Doesn't it ever get boring? Doesn't anybody lose focus? Either not, or there are enough oversight systems to compensate, I guess. Or maybe flying an airplane isn't as hard as playing TTR3?

Wednesday, February 17, 2010

Skills and teaching

[I found this little mini-essay lying around. It's thematic with some of the older posts I had stuck here, so I thought I'd save it here too. Plus it's vaguely related to some of my current thinking about skill learning in cognitive training.]

A link took me to this post by a college teacher who's fed up with teaching and is going to quit. I think it actually reflects a fairly common "teaching mid-life crisis" that hits a lot of professors in their 40s. But I jotted down some of my thoughts following his rant.
http://www.insidehighered.com/views/2008/10/31/smith

My sense is that JS's core problem is that he's become too entrenched in the minutiae of teaching -- grading students for following the rules, turning things in on time, following the social norms he tries to lay out in class.

As a teacher, I think you need to periodically re-ask yourself, "what should the students be learning in class?"

Lately, I've become quite curious about the increasing amount of high-quality information you can get through Wikipedia or Google on the internet. If you can look up any fact nearly instantaneously via your phone, what facts do you need to memorize in college? Maybe just things that gain value by being recalled faster than you can type the search word into Google (things like jargon in the field you hope to work in -- you can't really have a conversation when you have to look up a lot of vocabulary terms).

So what else do students learn in college? Skills, for one thing: how to think critically, evaluate evidence, come up with alternative hypotheses and creative solutions. Also how to communicate and present ideas convincingly. I don't think you can look up skills on Google.

I also think students learn a bit more about the world, who they are, and how they're going to live their lives afterwards. And I think the social networking probably has a bigger practical impact on everything that happens afterwards in life than anything (or everything) they actually learn from classes. I'm not sure I can affect social networking much in typical classes, but awareness of its value makes me less critical of the consequences of socializing (e.g., if a student produces a lame assignment because socializing interfered with effort, it's not entirely clear to me that they have always made a bad decision).

So I try to teach skills and a few facts and I try to be reasonably entertaining. I have another pet idea that at a basic level, information = entertainment. I think if you're really learning something, it's going to generally be entertaining (somehow the 'entertainment' element arises from integrating acquired knowledge into existing knowledge -- I think). The students are paying a lot to be here and even though a lot of what they're paying for is the environment around the classes (self-discovery, social networking), it seems like making the classes enjoyable makes it more fun for all of us.

I don't really care about the entitled attitudes that lead them to negotiate constantly for higher grades. I never give in, but only to avoid penalizing the students who don't argue. Although, you could probably make a case that arguing on your own behalf is one of the more important skills to learn in college. I do echo JS's "I don't give grades, you earn them" idea. I point out their goal is to learn and I try to assess the quality of their learning via the exams & papers. If the assessment is fair, the assessment is fair. In the end, though, I don't really care that much about grades. They are mostly an effort indicator -- if you are willing to work hard, you can have your A. And if they all work hard, I think that's ok.

I suppose if there are students who are intellectually incapable of mastering the material even when they work hard, I'm supposed to mark them as such with lower grades to warn off professional schools and employers. It appears to correlate with effort most of the time, though.

OTOH, I don't teach as much as JS probably does. And I happen to teach only small classes (10-30). Teaching skills in a class of 300 is probably hard.

Friday, February 12, 2010

Mental weightlifting

"Of course it works. Didn't you ever do pushups before heading out to the beach?" I was listening to a well-known cognitive neuroscientist and fMRI guru explaining the close relationship between neuronal activity and blood flow that is the basic of functional neuroimaging of the brain. He wasn't talking to me, I was helping to explain the technique but I didn't get his example right away, so I had to inquire, "um, what?" He explained that pushups caused more blood to flow to the muscles of your arms, beefing them up and enhancing one's physique in anticipation of attracting hotties on the beach.

"Oh, well ok, then." Unlike this particular quite famous cognitive neuroscientist, I don't think my physique was ever within any reasonable number of pushups from attracting any positive attention at the beach. It's still a memorable example, though, and it certainly works in brain imaging: neurons doing "pushups" (e.g., firing electrical potentials) draw additional blood to active areas and blood flow changes can be picked up in MRI scanners to track changes in neural activity.

But how far can we push the analogy? Can we build up and strengthen our neural functions by targeted programs of mental exercise? Practically, of course the answer is "yes." If we study chess, mathematics, music, or physical skills like sports we get better at them. That means we're learning and learning means there are physical changes in our brains reflecting this. But can we bulk up mentally?

If you accepted the idea that we don't grow new neurons as adults, you'd be inclined to say "no." You might even be tempted to make a disparaging remark about the thoroughly discredited 19th-century practice of phrenology. Phrenology is a famous example of pseudo-science based on trying to assess mental abilities by measuring bumps on your head. The shred of scientific idea behind it was that the brain regions supporting your relatively more effective mental functions were bigger, and that this would show up as larger bulges in your skull. That turns out to be as silly as it sounds, but there's an important idea hiding in it, namely that cognitive functions in the brain are physically localized. That turns out to be true in many cases and was not well appreciated until much later.

It also turns out that we do grow new neurons in the brain. Rusty Gage and colleagues, and subsequently many other neuroscientists, have studied adult neurogenesis and found there is some addition of new neurons in the adult brain. And even better, it can be enhanced by rich environments with lots of mental stimulation. Most of that basic neuroscience is examined in animals, but Eleanor Maguire has reported some intriguing results about changes in the size of specific brain structures in London cabbies. That sounds odd, but London is a particularly difficult city to navigate, and Dr. Maguire has observed that cabbies with many years of experience appear to have relative increases in the size of a brain region associated with spatial memory. Don't chalk this one up as a vindication of phrenology, though; the brain region that increases in size is essentially the part of the cortex furthest from the skull. You need an MRI to see it, not a search for bumps on the head.

There have been other reports of systematic differences in the size of specific brain regions, like auditory cortex in musicians. Those results quickly lead one back to a nature/nurture-style debate. Did the brain differences occur based largely on genes and lead those people to become musicians? Or did the differences in size occur due to extensive music practice?

An intriguing result along these lines was recently reported by Erickson and colleagues about learning rates in videogame play. Larger volume in the dorsal part of the striatum (a central, subcortical brain region) predicted faster learning of Space Fortress, a fairly simple video game. The nature/nurture question is intriguing here, too. Were the fast learners genetically advantaged by a larger part of the brain involved in learning this game? Or had they developed that part of the brain like a muscle through extensive prior experience with games?

Either account is very interesting (and they aren't mutually exclusive), but the brain-as-muscle possibility is particularly compelling. If we can strengthen the brain as a muscle, we have even more reason to believe in the "use it or lose it" hypothesis of mental activity being effective to hold off cognitive decline associated with aging. There's a decent accumulation of evidence that mentally active people age well, but it's also the case that aging well lets you stay mentally active.

Intervention studies have been fairly promising. A huge, multi-center study of more than 2k older adults found that providing just 10 hours of training in memory, attention or decision making produced gains and the gains lasted 5 years. On the downside, training in one area didn't help the other two abilities. Nor was there any evidence of slower overall decline, so "using it" may not generally prevent you from still "losing it." It might give you more to lose, so to speak, so you function better for longer.

So those "brain training" sites on the internet that offer to provide some mental weightlifting might actually be helpful. And you might not just have to work out like in a gym, another recent report by Chandramallika Basak (working with Arthur Kramer of Univ Illinois, who was also a contributor to the Erickson report above) found that "training" by playing Rise of Nations for 20 hours led to improvements in "executive control functions" like decision making, working memory and task switching in a group of older participants. Notably, those cognitive functions tend to be at least partly supported by the same region that was larger in the better Space Fortress learners.

So maybe you can grow your brain by mental weightlifting. And maybe you can even do mental weightlifting with video games. That'd be cool.

[Self-editing note: this essay doesn't have the cites or links to the background research, although they are all pretty easy to find on google scholar. Nor is it really written in a tight, accessible style you'd like to see in a media-oriented article on science. It seems natural to write in a semi-formal rambling style, but I wonder if it's useful.]

Sunday, August 23, 2009

Cryptography

Why "cortical cryptography"? A fundamental question of Cognitive Neuroscience is the question of how information is represented in the neurons and synapses of the human brain. In cognitive psychology studies of learning and memory, we attempt to identify and control the type of information being acquired and we then observe the behavioral consequences that follow to verify the learning.

What is happening in-between the presentation of controlled information to the participant and the observation of their subsequent responses is essentially the encryption of information into the brain of the participant. So we can say that trying to understand the representation of information in neural systems is analogous to attempting to determine the encryption/decryption algorithm that the brain uses.

In basic research that we've been doing for a while in the lab, I frequently find myself describing one of the goals of the research as identifying the operating characteristics of specific memory systems that we believe we can isolate. We attempt to find boundary conditions on the rate of learning, the applicability of the information (flexibility), the limits on the complexity of information that can be acquired, and the effect of the passage of time (memory decay, forgetting).

Internally, I think of this as being something like a timing channel attack on this mysterious encryption algorithm.

From Wikipedia, http://en.wikipedia.org/wiki/Timing_attack
In cryptography, a timing attack is a side channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithms. Every logical operation in a computer takes time to execute, and the time can differ based on the input; with precise measurements of the time for each operation, an attacker can work backwards to the input.
I think I'd state the last bit a little differently -- I want to try to work out the characteristics (including time) of the encryption algorithm not to identify the input (we control the input), but to figure out the algorithm itself.
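
For anyone who hasn't run into one, here's a toy illustration of the timing side channel itself, in plain Python with wildly exaggerated delays (nothing to do with neurons -- it's the textbook early-exit comparison). The running time leaks how far the check got before failing. In the memory-systems version we already control the input, so the analogous measurements (learning rates, decay curves, complexity limits) constrain the algorithm rather than recover the message.

import time

SECRET = "sequence"  # stand-in secret for the demo

def naive_check(guess):
    # compares one character at a time and bails at the first mismatch,
    # so more-wrong guesses return faster -- that's the side channel
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
        time.sleep(0.001)  # exaggerate the per-character cost
    return len(guess) == len(SECRET)

for guess in ("xxxxxxxx", "seqxxxxx", "sequencx"):
    start = time.perf_counter()
    naive_check(guess)
    print(f"{guess}: {time.perf_counter() - start:.4f} s")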

We are currently looking at perceptual-motor sequence learning, which we believe is largely supported by cortico-striatal circuit loops (from the basal ganglia to the cortex and back). There are a fair number of neurons in the circuit, but I have this hunch that if we can get some basic characteristics of the learning ability of the system, e.g., the bandwidth of the learning rate, that will rule out some possible encoding algorithms and constrain the set of plausible encryption algorithms.

Friday, July 24, 2009

Knowing versus Understanding

They aren't the same thing. As a white male, I know racism is wrong and I know constraining a woman's right to choose is wrong. But I cannot truly understand all the implications of these. Two examples from this week:

1. Amanda Marcotte at Pandagon talks briefly about a bill being considered in the Ohio legislature that would require women to get the permission of the sperm donor before getting an abortion. I know immediately this is a horrifying law and also that these types of laws generally do not get passed (except in South Dakota). But in writing about the practicalities, she points out that women will just get a male friend to vouch for being the father and:
Every woman reading this is taking a quick mental inventory of what man they could trust to do this without gloating about his power over you.
Yes. She's right, and I didn't think of that right away. That's the difference between knowing and understanding. I know the proposed law is wrong, but I don't live with the knowledge that the people around me might democratically vote to take away my basic freedoms at any time, so I don't have this type of reflexive thinking. Although I'd like to think I'd be the kind of person my female friends would think of as absolutely reliable in this spot. I wonder if they do.

2. Ta-Nehisi Coates has been mulling over the arrest of Gates in his own house in Cambridge, and also the death of Shem Walker in NYC. Two black men who were disrespectful to cops but did nothing obviously wrong. One arrested, the other shot dead. I know being a cop is hard, I know they almost always mean well, and I know it's best to always be respectful.

But I do not know what it's like to be a responsible, successful black person (a man in particular, I suspect) and have to live one's whole life knowing this could be around the corner for you. One day you are part of the privileged class -- you're successful, socially responsible, and doing it right. And the next day you're persecuted for your race. I feel like Dave Chappelle and Wanda Sykes have both done comedy on this theme. I find their routines funny and I feel like I "know" what they're talking about. But when TNC says he doesn't feel good about a cop like that carrying a holstered gun around his kids, I realize I don't really understand.


Not understanding doesn't mean I can't be part of the conversation, nor that my opinion is totally invalid. It doesn't even mean I can't be right in an argument over policy with somebody who really does understand. But as a privileged white male, I have to remember that I don't really feel it the same way they do, and that does matter sometimes.