Thursday, March 25, 2010

Getting smarter and stupider

Suppose we take the plasticity principle as implying that the brain is an automatic learning machine, acquiring information pretty much on its own over time (i.e., what if we can't control much about the learning/memory rate via meta-cognitive control processes?). Maybe that's too strong, but the following analysis ought to hold even if applied only to the set of things learned implicitly.

At any given instant, t, there is a set of information active in the brain. This includes the state of the perceptual systems, the processing of relationships among items in association cortex (including semantic memory), the contents of working memory, and other control processes. Some fraction of this information changes the state of the brain in a way that is detectable later; that is the acquisition of memory. We could make it a formula:

M(t) = f(cognitive state)

I don't know if it's useful, but it's cute to observe that M(t) is technically the derivative of a function describing the total amount of information currently held in the brain. Perhaps it is useful, as it reminds us that we can also forget or lose things we know. So the instantaneous change in memory state in the brain is more properly:

M(t) = f(cognitive state) - g(information state)

where g is a forgetting function. One way to describe the process of getting smarter is to figure out how to maximize M (set the second derivative to zero? heh). If all information is equal, then it's a matter of doing things that maximize f() and minimize g(). You could make a pretty good argument that some things are more important to learn than others, so the content of the learning matters as much as simply maximizing f(). That is probably correct, but one shouldn't be too sure -- how do we really know what is going to be important in the future? That line of inquiry would lead us tangentially off in the direction of Everything Bad Is Good for You.
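
If you like seeing the accumulation idea as a toy simulation, here's a little Python sketch. It's purely illustrative: the acquisition function and the proportional forgetting rate are placeholders I made up, not anything measured.

def simulate_information(steps, acquisition, forgetting_rate):
    """Toy discrete-time version of: change in total info = f(state) - g(info)."""
    total = 0.0
    history = []
    for t in range(steps):
        gained = acquisition(t)          # f(): new information encoded at time t
        lost = forgetting_rate * total   # g(): forgetting proportional to what's stored
        total += gained - lost
        history.append(total)
    return history

# Example: constant acquisition against proportional forgetting.
# Total knowledge approaches acquisition / forgetting_rate (100 here).
trace = simulate_information(steps=500,
                             acquisition=lambda t: 1.0,
                             forgetting_rate=0.01)
print(round(trace[-1], 1))

Even this crude version makes the point: with any forgetting at all, just piling up input hits a ceiling, so the interesting levers are the shape of f() and g(), not raw exposure time.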

Let's consider some simpler things about f(). One issue is repetition vs novelty. Exact repetition of a prior cognitive state can be stored pretty compactly (e.g., you could count how often it occurs and store the whole episode as just increasing a counter). A completely novel experience adds all sorts of rich new data and connections among prior ideas and should add a lot more information.

But it's not that simple. Repetition sometimes allows you to see deeper into a domain; e.g., playing over a chess game is a great training exercise because it lets you see new connections and new ideas. And novel episodes are going to be learned primarily via the MTL memory system, which probably has bandwidth limits (repetition-based, statistical, implicit learning less so). The degree to which novel information "makes sense" in terms of what you already know strongly increases p(storage), which suggests there's an optimal level of novelty (as an aside, when you hit that optimal level, I suspect you get a humor response). Repetition may also decrease g(), helping you hold onto prior knowledge.
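
To make the "optimal level of novelty" idea concrete, here's a tiny sketch of an inverted-U storage probability. The Gaussian shape, the peak at 0.6, and the width are pure assumptions on my part -- the only claim is that storage is low for exact repetition, low for totally unfamiliar material, and highest somewhere in between.

import math

def p_storage(novelty, optimum=0.6, width=0.2):
    """Toy inverted-U: storage probability peaks at an intermediate level of
    novelty (0 = exact repetition, 1 = completely unfamiliar material)."""
    return math.exp(-((novelty - optimum) ** 2) / (2 * width ** 2))

for n in [0.0, 0.3, 0.6, 0.9, 1.0]:
    print(n, round(p_storage(n), 2))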

It'd be nice if there were biological ways to boost f() and/or decrease g() (memory drugs). Good sleep hygiene probably helps, although sleep also limits your perceptual input.

Leaving that there for now, how do we consider the possibility of getting stupider under this model? We don't know of anything that increases g() dramatically (short of neuropathology or brain damage). Our model is one of constant acquisition. By this model, the main way one can become stupider is to accidentally acquire false information.

I was reviewing a paper the other day that had several major conceptual flaws. It occurred to me that if somebody who didn't know the area read this paper, they would actually be stupider after reading the paper than before (no, my review was not positive). This is a general concern, but many examples occur in the political arena (e.g., WMD were found in Iraq, actionable intelligence was obtained from torture, Obama was born in Kenya).

If you want to protect yourself from accidentally becoming stupider, it's worth noting that the real risk comes from areas you don't know very well. If somebody says, "the moon is made of green cheese," well, you're probably not stupider, because you don't believe it. In fact, you have learned that this person either thinks the moon is made of green cheese or for some reason wants you to think that it is.

But if you want to read outside your area of expertise, you need to check your information stream carefully. Can you corroborate (spot-check) a few facts? Does the source provide corrections to its inevitable errors? Sometimes people say to me, "You read a lot of information coming from one side of the political spectrum; don't you worry about the 'echo chamber' effect?" No, because the echo is boringly repetitive and I select a subset of sites that provide generally accurate information. Note that this doesn't include NYTimes columnists like Friedman, Brooks, or Broder. They get some things right, but I've seen too many documented instances of them doing things that would have made me stupider. I might have accidentally learned that reconciliation is unconstitutional or only used by Democrats or something.

I deduce two obvious things about learning from this:
1. Manage your input stream (experiences) to optimally balance novelty and repetition to maximize your information storage rate.
2. Guard your information stream carefully to avoid falsehoods from getting into your brain and making you stupider.
