Tuesday, May 29, 2007

In any sane world:

Somewhere near London

What are we to make of this?
Former Majority Leader Tom DeLay, who resigned under indictment on campaign finance-related charges in Texas, also has grown dissatisfied with the president's stewardship of the conservative movement. DeLay told Goldberg that in coming years, when he is not fighting the Texas indictment, he plans to build a conservative grass-roots movement to rival MoveOn.org, insisting that divine inspiration brought him to that quest.

"God has spoken to me," DeLay said. "I listen to God, and what I've heard is that I'm supposed to devote myself to rebuilding the conservative base of the Republican Party." (emphasis added)
What are we to make of a former House Majority Leader saying that he hears voices? And he's not being funny.

In any sane world we would reserve a bed for him in Bedlam.

Monday, May 28, 2007

Ho, Hum... Another Day on the Island


Move along, nothing to see here:
A truck bomb exploded outside one of Baghdad's most revered Sunni shrines on Monday, killing at least 18 people and damaging the outer walls of the Abdel Qadir Gilani Mosque.

The powerful blast, which sent a massive pillar of black smoke into the sky above downtown Baghdad, occurred as US and Iranian representatives were meeting for landmark talks on Iraqi security less than a kilometre (half mile) away.

"I came into the street after the explosion near the mosque, and I found five charred bodies myself, including a pregnant woman," said witness Saed Mohammed, as fire engines arrived to battle the flames. (emphasis added)
And, in a galaxy far away:
You could argue that Lord of the Flies instantly adopts a somewhat pessimistic stance - the notion that man is locked to society, and returns to savagery when freed from the confines of it. (emphasis added)
And who's responsible? Altogether now, let's look in the mirror.

Sunday, May 27, 2007

Reply

Reply to Comments:
Before launching into a further post on thinking and free will, I thought it important to address points made by Mr. Putnam and The Growlery, since they begin to get to the meat of these questions.

Putnam 04/22/07:
It seems to me that the arrival to the brain of the data from the eye is, actually, quite similar to a hospital emergency room. There is some sort of triage that goes on. The eye, it seems to me, conveys different types of data. Simple sight provides other autonomous functions with operating information. Nothing is required. Other data, for instance words from a textbook, is stored and acted upon differently. And the true emergency, the child stepping in front of your automobile, requires another kind of action.
Reply:
I wouldn’t quite agree that the eye conveys different types of data; I think it is just a passive receiver, a transducer. But the second observation strikes to the heart of the argument I have been trying to make. While it appears on the surface that a reflex such as pulling your hand away from the stove is on a different level than reading words, we tried to show that the molecular mechanism of both processes was identical, although vastly different in complexity. We even went on to postulate that even moral “choices” were reflexive. Man is a creature of habit.

Putnam: 04/26/07
In terms of information, I got to thinking today about light waves. I am not a physicist, so therefore it's another area about which I know very little, but I'm dumb enough to step in anyway. Light waves have some very interesting qualities. They have qualities of both wave and particle. In addition, observation seems to change how they behave. There, that's the limit of my knowledge, and I'm about to extrapolate from that into Information. How dumb can I be?
As a systems designer I know that a stack of computer printouts in the corner is absolutely no use. They are data. When a system can distill the data and present the manager with facts that were not previously known, that is called information. The question that arises from this scene is, "When did that data become information?"

The data contained the information. The light wave/particle moving through the eye contains information. The eye is therefore transmitting information, even though it isn't recognized as such until the brain acts upon it.

Felix says:
In a cybernetic sense, the eye and brain are not discrete units; they are, to at least some degree, parts of a single integrated system; in some ways, the eye is part of the brain.
In some ways, perhaps. But as with light, they are different. The information that reaches the brain from the eye is subject to a series of interpretations. The brain sees differently depending, at least to some extent, on what it expects to see. In seminars I have proved that to clients over and over again. What you have seen heavily influences what you see.

Reply:
This statement again points out the complexity of the issue. Germane to the argument, of course, is to ask “what does the brain do with the data that arrives from the eye” (with some preprocessing in the LGN and the retina). I guess it goes without saying that the brain somehow “compares” the data it receives with “data” stored in the brain. At this point, one might say that we need to know what the basis for memory is, i.e. what is the biochemical basis of memory. I don’t think we have to do that since we know the structure of the biochemical basis of the incoming data (i.e. that it is action potentials and that they are either/or and that they depend on a single molecular event to be so). I think we can just postulate that memory is basically the same, since that is the only way that incoming data could be compared to it. More on this and the vital concept of “threshold” in a bit.

Putnam: 05/12/2007

Reply:
Thanks for the kind words. A mention is made of choices. I am not sure that we have choices, although we all live our lives as if we do. I also think that I am painting myself into a corner, and I had thought of renaming my blog something like “The Corner Painter.”

The Growlery: 04/26/07
When is information not information?
As a child I often grappled (as, no doubt, did you) with the question of whether space ever ends or whether it goes on forever. Common sense told me that nothing goes on for ever ... so there had to be an end. But when I tried to visualise that end, it always took the form of a barrier of some kind ... and common sense told me that the barrier must have another side ... and something beyond it.
Then, of course, as I grew, I discovered that not every question has a clear-cut answer ... and that, as questions get further from common experience and human scale, so common sense becomes less reliable as a guide.

Reply: It's just as hard as trying to imagine a fourth (or ninth!) spatial dimension. Although it is high kitsch, Salvador Dali’s St. John of the Cross attempts to deal with this. We were fascinated with this painting in high school (parochial, of course) and it continues to interest me that people whine about Dali although he was a superb technician. I even had a nun send it to me as a postcard (she’s now come out; married; living in Tennessee). Oops, I just discovered that it was Corpus Hypercubicus and not St. John of the Cross.


I found myself running in similar mental circles, three weeks ago, around the mulberry bush of Jim Putnam's question of whether or not Dr C can refer to "information" being transferred between eye and brain. (Alas, other events in life prevented me from addressing the issue here until now.) And the unsatisfactory answer for which I've eventually had to settle is the same one which I would now have to offer my childhood self over the finiteness of space: it depends on your point of view, your frame of reference, and your definition of terms. Dr C takes the straightforward view and concedes the point: that whatever is transferred is not information until used by the brain. It's not as simple as that, however. First, Dr C points out that some information processing takes place before the pulses are despatched down the optic nerve. What leaves the eye, whether information or not, is "signal". Now ... if I codify and transmit a signal with intent to communicate (for instance, I write this post and despatch it to the web server), does that signal constitute information?

Each word is what linguists call an "arbitrary signifier". Each character in the word is, in its turn, an arbitrarily assigned symbol representing a sound or other structural communicative component. And each character is, in turn, replaced for digital transmission purposes by an arbitrarily assigned bundle of electronic bits. But the precise combination of bits which leaves me, and reaches you, is not random: it is designed (by me, by human cultural history, by digital coding agreements) to enable my verbiage to arrive in front of your eyes for reading as they did before mine as I wrote. If the result is not information, perhaps it should be described as "potential information". The same is true of signal passed from the retina along the optic nerve to the brain.
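The layering described above can be made concrete. Here is a small sketch (my own illustration, in Python) of the last of those arbitrary coding layers: a word becomes characters, and the characters become the agreed-upon bundle of bits that actually travels. Nothing in the bit string is "information" by itself; it is only recoverable because sender and receiver share the code.

```python
def to_bits(text: str) -> str:
    """Encode a string as the bit pattern sent over the wire (UTF-8)."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Recover the text -- possible only because both ends share the code."""
    return bytes(int(b, 2) for b in bits.split()).decode("utf-8")

signal = to_bits("eye")
print(signal)              # 01100101 01111001 01100101
print(from_bits(signal))   # eye
```

Without `from_bits` (a decoder that knows the convention), the bit string is exactly the "potential information" of the paragraph above: designed, non-random, but useless until acted upon.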

But, secondly, I am not convinced that we can really speak of an information (or potential information) carrying signal being passed between eye and brain, in the same way as it is between you and me. In a cybernetic sense, the eye and brain are not discrete units; they are, to at least some degree, parts of a single integrated system; in some ways, the eye is part of the brain. Nor is the brain itself (disregarding the eye) really a single entity; it is a collection of (in many ways partially autonomous) parts. Given all of this, the whole question of information, or impulses, or signal passing between eye and brain is a fraught one - at the same time both true and untrue in complex combinations.

Then again, the universe is probably nothing but information anyway; I, my brain, my eye, are all nothing but small information structures within a larger sea of it.

Having thoroughly confused myself, I shall now go to bed.

Reply: Hard to reply to this one. In the posts, I resorted to calling the signals going from the eye to the brain “data.” I think we are all in basic agreement and can postulate (semantics are everything) that once that data gets compared to stored data, it becomes information. But that would mean that when it is stored in turn, it reverts to data! Semantics again. Ultimately, we should just view this data/information as bioelectrical bits. I had to be a reductionist, but that is what I think.
Growlery: 04/27/07
When is information not information? (2)
Waking this morning, I find that Jim Putnam has, in the intervening eight hours, already responded to last night's post - which had taken me a laggardly three weeks to put up. I am suitably chastened...
Jim's systems analysis view of information is unarguable. It also illustrates the slipperiness of this whole topic.
A stack of printouts in the corner is no use to anyone, and therefore not information. The same, then, must presumably be true of a stack of sensory data in the corner of my brain which I cannot interpret - perhaps a set of unidentified sounds?
His comment about the distinction between hearing and processing (by extension, between any sensory input and processing) is right on the button. Almost everything we think we "see" is actually an internal result of processing.
The highly engineered Pentax lens on the front of my SLR is capable of resolving 12500 image points per square millimetre of film or digital sensor surface. The lens of my eye can only muster 64 at the retina; to make things worse, the image is focused through aqueous and vitreous humors, not to mention those fatty ropelike floaters - and most of the retina surface can't make full use of it anyway. And yet ... the image emerging from my SLR is to be measured (and usually found wanting) against the highly detailed and information rich image in my mind, not the other way around.
This is, of course, because my eye continually shifts to multisample the scene before it, and miracles of high speed image enhancement transparently assemble and deliver a real time processed result to me instead of the raw data. What I think I "see" is actually a sophisticated, software mediated, model.
All of which supports Jim's view: the image formed by the lens of my eye is no use to anyone. Only the processed model is useful information. As Dr C flagged up in his Information V, the processing starts immediately: the eye doesn't just passively pass on raw data, but processes it at a low level first. The uncertainty lies in when, exactly, the one (raw data) becomes the other (processed model). I don't have an answer - I just ask the question, then walk away leaving somebody else to deal with it. Like Jim, I am learning from the discussion.

Reply:
Again, pithy comments, making it all the more complex on any level above the biochemical. Sort of amazing what the old human bean can do. I feel like I am committing sacrilege by reducing it to the mundane level of molecules.

Finally,
The Growlery: 05/20/07
Free will and the binary states of General Loan
If we ignore my overwhelmingly large area of agreement with Dr C (where's the discussion potential in agreement?), my thoughts focus around his use of "that picture" by Eddie Adams: South Vietnamese general Nguyen Ngoc Loan's summary street execution of a suspected Viet Cong member in Saigon's Chinese quarter.

The use of this picture troubles me. Partly because I can identify too closely with it: in a former life, I learned too well that there are many situations within which action precedes thought. Partly because it ties Dr C's argument too closely to such situations.

It is quite believable that, in this particular situation, the decision to squeeze the trigger came down to a split-second flipflop as Dr C describes, no more a free decision than whether Schrödinger's cat lives or dies in its box. But that (if so) doesn't really, for me, persuade (as I think Dr C is arguing) that free will is a myth.

After all, Loan's action took place during a street skirmish, when his reflexes would be tuned to survival. Furthermore, it was within the larger context of a long and bitter dirty war, when such survival instincts would already be at a high level. Both the firing of Loan's revolver and the firing of Adams's camera were clearly reflex actions decided well below the conscious cognitive layers of the brain.

Now, it may be that this is just an extreme case, and that all free will is equally flipflop dependent. The well known experiments where ordinary civilised volunteers behave barbarically towards fellow participants when told to do so by the organisers may support this. The more I examine possible counter examples, the more I am compelled to concede that many actions and decisions, even after much thought, can probably be explained in terms of a logic gate tripped by potential in one direction exceeding that in another. But do I accept that this is always so? No, I don't. I confess (rather shamefacedly) that I am short of positive supporting evidence for that belief; all I can offer is basis for doubt. Nevertheless, I continue to hold the belief: and in a moment I'll offer a piece of sophistry to excuse it.

If Dr C is right in what he is (I think) suggesting, then we have to include in our definition of action potential some very high order informational entities - in fact, the whole totality of our mentation and cognition. (As a mathematician of a particular type, I would probably describe what is happening not as a simple logic gate switch but as a "catastrophic change of state".)

Take, as an example, slapping a child. This is a direct equivalent of General Loan's street execution, but removed to a level where things unfold more slowly and can be more easily examined. I believe, very strongly, on both emotional and rational grounds, that to hit a child is always wrong. But perhaps I am a highly stressed mother, doing my best in impossible circumstances, whose child repeatedly hits me; I snap, and slap him. Clearly, I can argue that the stress rose to a level where it overrode the pressure against acting: "I snapped" really means "my logic gate changed state". But how to describe the complex of cognitive processes that kept up the counter pressure, and held the gate, for so long? Does free will (in the usual more complex meaning) not operate throughout the period when I feel like slapping junior, but choose not to do so? I believe that it does; that the complexity and time scale involved (both on cognitive, not reflexive, levels), make it unreasonable to conceptually equate this with the run up to a life or death twitch of General Loan's finger. Both situations end with a binary flipflop of a logic gate, but neither the gate nor the surrounding action potentials are comparable between the two situations.
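The "gate held shut by counter-pressure" picture just described can be rendered as a toy model (entirely my own illustration, with invented numbers, not a claim about real neurons): stress accumulates step by step, an inhibitory restraint pushes back, and the gate flips only when the net potential finally crosses a threshold.

```python
def gate_flips_at(stress_per_step, inhibition, threshold):
    """Return the step at which net pressure first exceeds the threshold,
    or None if the restraint holds throughout."""
    net = 0.0
    for step in range(1, 1000):
        net += stress_per_step - inhibition   # excitation minus restraint
        if net > threshold:
            return step                        # catastrophic change of state: "I snapped"
    return None

# Restraint as strong as the stress: the gate never flips.
print(gate_flips_at(stress_per_step=1.0, inhibition=1.0, threshold=5.0))  # None
# Restraint slightly weaker: the flip is only a matter of time.
print(gate_flips_at(stress_per_step=1.0, inhibition=0.5, threshold=5.0))  # 11
```

The point of the sketch is the one made above: the final flip is binary, but everything interesting - the counter-pressure, how long it holds, whether it holds at all - lives in the slow cognitive processes feeding the gate, not in the gate itself.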
And what about even lower decision making domains, which never reach a catastrophic change of state but simply a shift in one direction or another? For example, this post. Aware that Dr C has put an immense amount of effort into the writing of all his posts, while I lazily consume them and contribute little, I have spent much time mulling over whether to post this, to email it privately to Dr C, or to try some intermediate level of discussion between Dr C and a small email friendship group whom I trust. Although I have not, as I type this sentence, definitely made up my mind, I shall probably post it. The point here is to ask how far (and how definitely) the digital flipflop interpretation of free will can be applied to my process of arriving at that final decision.

This whole fascinating thread started with my use of the word "instinctive" in an article on pattern recognition and robotics, when I should (as Dr C rightly pointed out) have used "reflexive". Let me tie the present argument back to that for a moment.

I said that a robot built on the anthropoform servitor Asimo model needed to have certain software constructs (such as balance control) built in while others (information about frequent visitors to the home, for example) could afford the slight delay involved in external storage. The first case allows little scope for free will; the second may.

And now, to close, that promised piece of sophistry to excuse my unscientific insistence on maintaining belief in free will while its status remains unproven.

Both Dr C and I frequently and passionately argue for writing of wrongs - for instance, the treatment of Palestinians by the Isra'eli state. But, if all free will is a myth and boils down to flip-flops over which we have no control whatsoever, where is the point in bothering to rail against such things? Right and wrong, under that view, will be equally nonexistent: Isra'eli decision makers will either take or not take the actions, and we will abhor them or not, as an entirely stochastic set of outcomes uninfluenced by what I like to think of as free will. From a game theory viewpoint, this leaves me with an inescapable conclusion. If there is no "free will" in the usual sense, my actions will have no effect one way or the other. If free will in that usual sense does exist, then inaction will leave the wrong unaffected by action which may conceivably help to right it. Therefore, in the absence of certainty one way or the other, the only rational course is to behave as if free will exists until the contrary is proven ... and human beings are frail creatures who, regardless of intellectual stance, only follow a course for any length of time if they believe in it.
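The closing argument above has the shape of a dominance argument in game theory, much like Pascal's wager. The payoff numbers below are my own illustrative ordering (higher is better), purely to show why acting "as if free will exists" wins either way:

```python
# outcome[(free_will_exists, we_act)] -> value of that combination
outcome = {
    (True,  True):  1,   # free will exists and we act: the wrong may be righted
    (True,  False): 0,   # free will exists and we don't: the wrong stands
    (False, True):  0,   # no free will: our "choice" changes nothing either way
    (False, False): 0,
}

# Acting is never worse than not acting, and sometimes strictly better:
acting_dominates = all(
    outcome[(fw, True)] >= outcome[(fw, False)] for fw in (True, False)
) and any(
    outcome[(fw, True)] > outcome[(fw, False)] for fw in (True, False)
)
print(acting_dominates)  # True
```

Whatever the true state of the world, the "act" row weakly dominates: there is no scenario in which behaving as if free will exists loses anything.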

Reply:
The Growlery has delved deep into the heart of the issue. I want to get this post out, so I request permission to discuss these topics at greater length in the next post on Information. Could this situation be similar to what many of us who were born and raised staunch Catholics encountered when it dawned on us that much of what we were taught was gibberish? Or when we see ostensible Christians (that means you, George W.) not turning the other cheek but murdering in cold blood? All I ask is that one follows this through to its logical conclusion. If there is a flaw in the reasoning, then I will concede. As for the conclusion that it is all in vain, maybe we should adopt Pascal’s stance and say “we should live life as if there were a free will!”

Friday, May 25, 2007

Friday Crab Blogging



Threat to Children

High on a roll due to the total failure of the Donkey Backbone (you didn't really expect them to stand up), the Chief Pigeon Target had these words to say:
"They are a threat to your children, David, and whoever is in the Oval Office better understand that and take measures to protect the American people," he said.
Excuse me, Georgie, who is a threat to children???
Helicopter bombs school
An attack by a US helicopter against suspected insurgents in Iraq has killed a number of children at a primary school, Iraqi security sources say.

The attack took place in Diyala province north-east of Baghdad, the sources say.

A spokesman for the US military said there had been helicopter activity in the area but he was not able to confirm any other details.

The school is in the village of al-Nedawat close to the Iranian border.

One police officer said the helicopter was shot at from the ground during the morning.

The school was said to have been hit when the aircraft returned fire.
And who was the threat to these children??? (I'll give you a hint....No, I won't)







(and over 500 more in my files alone)

Friday, May 18, 2007

Friday, May 11, 2007

Thoughts on Information VI (Man is a Creature of Habit)

First of all, let me apologize to any readers for being so long in getting this post up. It did not flow so easily as the first five. If anyone is, by chance, reading this for the first time, the prior posts on information can be found at:

Some Thoughts on Information I
Some Thoughts on Information II
Some Thoughts on Information III
Some Thoughts on Information IV
Some Thoughts on Information V

I would like to address both Mr. Putnam’s and the Growlery’s thoughts on “information” after I have finished this post of the “information” thread. There is much to ponder in their posts and, while it intersects with what is below, there are some ideas here that I would like to develop in commenting on their discussion. In addition, I will try to use the word “data” where I previously used “information.”

Goal
What I hope to do in this post is to finish up the data transfer from the eye to the brain as we have discussed for the last 5 posts in the series and get to the heart of the matter (or brain of the matter, if you will) to discuss how some of the workings of the human brain might be viewed in terms of the molecular mechanisms of nerve conduction and transfer.

Please let me reiterate that these are all personal observations and do not carry the weight of the academy, though I have tried to document as best one can the mechanisms such as nerve pulse generation in the eye. Perforce, the discussion of the action of the brain is much harder to document since there is much disagreement on how we actually “think” (a much more loaded word than “information”).

Let me also state early on that the hidden purpose of this exercise has been to eventually address the question of free will. I know that sounds extraordinarily presumptuous, but decisions deemed to be made by a person exercising his or her "free will" are made in the brain, and that is what we are examining. Just think of how much rests on the belief that humans have the faculty of Free Will. From this belief flows all laws and responsibility, in particular the concept of evil. It is the absolute underpinning of our modern society. With that, we will cut to the chase.

Eye to Brain
To follow up from the previous posts, the path data takes from the eye to the brain can be seen in the following diagram.



It is interesting to note that not only does some processing occur in the retina, as postulated before, but most assuredly there is processing of the data in the lateral geniculate nucleus (LGN) before traveling to the occipital cortex.



In the occipital cortex, there are a number of areas of interest where data is further processed and then sent on to other areas of the brain. The principal one is V1:



In the following diagram of the posterior brain (from here) there are two “streams” arising out of the visual cortex. The first is the inferior or ventral stream. It is associated with recognition and in some way with long-term memory. Since memory is the basis of “thinking” we will return to this point. The upper or dorsal stream is associated with motion and control of the eyes or arms.



Thus, there is radiation to the motor cortex, and also to areas known for memory location (e.g. the hippocampus and the amygdala).

At this point, then, we have tracked data from the eye to the brain. In order to move further, I would like to propose a simple model. Though we “see” the world in all its glory, data that strikes the eye is actually contained in a vast (and I am speaking vast here) spatial and temporal array of photons. In two dimensions, and in a much simplified view, the data appears as the following:


When these data strike the retina they are transduced into a spatial and temporal data stream that is carried to the brain via action potentials down the optic nerve bundle. At both the level of the retina and at the lateral geniculate nucleus, there is initial processing of the data. We will propose a simple model of processing in a minute.

At some point we should review the purpose of this data. Clearly, it is to determine the actions of the organism. Data are essential for an organism to find food, ingest food, reproduce and avoid destruction. Quite simply, an organism gathers data, compares it to stored information, and then makes a decision based on that comparison. I see an elephant, I run. I see a Big Mac, I eat. I see Anna Nicole Smith, I puke. Man is a creature of habit. While man’s behavior is complex in the extreme (to us, maybe not to an alien) it may still be reduced to the laws of chemistry and physics.
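The "gather, compare, act" loop just described can be reduced to its barest form: a stimulus is matched against stored associations and the decision falls out of the comparison. The table entries below are the post's own examples; the code shape is mine, a deliberately crude sketch of behavior as habit.

```python
# Stored associations: what the organism "knows" from prior experience.
memory = {
    "elephant":          "run",
    "Big Mac":           "eat",
    "Anna Nicole Smith": "puke",
}

def decide(stimulus: str) -> str:
    """Compare incoming data to stored data; habit does the rest."""
    return memory.get(stimulus, "ignore")   # no stored match -> no response

print(decide("elephant"))  # run
print(decide("a cloud"))   # ignore
```

Everything interesting in real behavior hides inside how `memory` got its contents and how the "comparison" is physically done - which is where the rest of the post is headed.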

Just as we discussed isolation tanks (and I have just finished reading "Altered States" by Paddy Chayefsky, who also wrote "Network" with the great line: "I'm mad as hell and I'm not going to take this any more"), it is my contention that man's actions are entirely driven by external stimuli.

In past posts we have reviewed the electro-chemical processes that convert photons into action potentials. Importantly, we have observed that these pulses from the eye to the brain along the optic nerve (there are 1.2 million "nerves" (fibers) in the optic nerve) are an either/or phenomenon and that the presence or absence of an action potential depends, in the end, on the difference of one molecule of G protein or neurotransmitter. Recently I discovered that this is termed "The Principle of Bivalence."

We will never unravel the complexity of the signal from the eye to the brain and that is certainly not our purpose here. What we want to do is show that the restrictions imposed by the either/or nature of the transmission of pulses place restrictions on how we think.

The key cell in the transfer process and, of course, in the brain is the neuron.

The brain is composed of approximately 100 billion neurons plus a number of other, supportive cells. The neurons have about 100 trillion synapses. Not all neurons are completely alike, but their function is similar in many respects. The nerve cell is composed of the nucleus and cytoplasm, with multiple connections to other nerve cells via the dendrites and with a long process called the axon which connects with the dendrites of other cells. As we have shown before (neurotransmitters and the action potential), the action potential is generated in the nerve cell by the change of the permeability of the cell membrane to ions which is, in turn, caused by the binding of neurotransmitters.

The major cell in the cerebral cortex is the pyramidal neuron:
A pyramidal cell (or pyramidal neuron, or projection neuron) is a multipolar neuron located in the hippocampus and cerebral cortex. These cells have a triangularly shaped soma, or cell body, a single apical dendrite extending towards the pial surface, multiple basal dendrites, and a single axon. Pyramidal neurons compose approximately 80% of the neurons of the cortex, and release glutamate as their neurotransmitter, making them the major excitatory component of the cortex.

Here is a slide show on neurons:

How the brain works on a biochemical level is not very complicated except for, possibly, short- and long-term memory. Action potentials arrive from other nerve cells (e.g. a fiber of the optic nerve) at the soma, or body, of the nerve cell. The potentials are summed and, depending on the threshold, a new potential is generated. The site of the summing is the axon hillock.
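The summation-and-threshold step can be sketched in a few lines (a minimal model with invented numbers, not real neurophysiology): postsynaptic potentials - excitatory positive, inhibitory negative - are summed, and a new all-or-none action potential fires only if the sum reaches threshold.

```python
THRESHOLD = 3.0  # arbitrary units, chosen for illustration

def fires(postsynaptic_potentials) -> bool:
    """Sum EPSPs (+) and IPSPs (-); the output is strictly either/or."""
    return sum(postsynaptic_potentials) >= THRESHOLD

print(fires([1.0, 1.5, 1.0]))        # True  (3.5 >= 3.0: spike)
print(fires([1.0, 1.5, 1.0, -1.0]))  # False (one IPSP holds it below threshold)
```

Note that the output carries no trace of how close the sum came to the threshold - which is exactly the either/or restriction the thread keeps returning to.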


Here I quote from the Wikipedia reference:
The axon hillock is the anatomical part of a neuron that connects the cell body, or soma, to the axon. It is the place where Inhibitory Postsynaptic Potentials (IPSPs) and Excitatory Postsynaptic Potentials (EPSPs) from numerous synaptic inputs on the dendrites or cell body summate.

It is electrophysiologically equivalent to the "initial segment." When the summated membrane potential reaches the triggering threshold, an action potential propagates through the rest of the axon (and "backwards" towards the dendrites, as seen in backpropagation). The triggering is due to positive feedback between highly crowded voltage-gated sodium channels, which are present at the critical density at the axon hillock (and nodes of Ranvier) but not in the soma.
Thus, the brain is composed of neural circuits where outgoing impulses are generated depending on the incoming pulses at the level of the neuron. Again, remember that this is an either/or phenomenon because of the biochemical nature of the action potential.

Neural circuits are, of course, very complex. Here are just ten out of the 100,000,000,000 nerve cells in the brain:


Neural circuits are similar to electrical circuits in that there are potentials (generated, for example, by batteries) and conductors. The neural "batteries" are gated, in that they either generate a potential or not. I am unsure whether the strength of the action potential varies. I am inclined to think that it doesn't, but I need to research this in neurophysiology.

We are all familiar with electrical circuits. Here is a mundane but interesting example. A hobbyist has built a gizmo that detects when trains are approaching a crossing in a model train set.
It has pulse width modulation so it can be used to control the speed of motors, not just on/off. So far I have only used mine to control my Remote Control Level Crossing, an infrared sensor detects a train approaching the crossing and automatically lowers the barriers. When the train has passed, it raises them again.
Here is a copy of the circuit:

Notice the similarities to the 10 neurons above.
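The analogy can be pushed one step further: a single all-or-none threshold unit already behaves like a logic gate, and wiring a few together gives a small circuit. The weights and thresholds below are my own toy choices, purely to illustrate the gated-battery idea.

```python
def unit(inputs, weights, threshold):
    """One neuron-like gate: weighted sum of binary inputs vs. a threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Single units acting as familiar logic gates:
def AND(a, b): return unit([a, b], [1, 1], 2)
def OR(a, b):  return unit([a, b], [1, 1], 1)
def NOT(a):    return unit([a], [-1], 0)

# Three gates wired together compute XOR -- a tiny "neural circuit"
# that no single threshold unit can compute on its own:
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The interesting part is the last point in the comment: XOR requires more than one gate, which hints at why circuits of either/or neurons can do things no single neuron can.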

One of the interesting things that comes out of thinking about this is how on earth did the brain evolve? If there are 100 trillion connections, each one, of course, could not have been the result of evolution. Furthermore, this is done with only 25,000 genes! I will leave this now since it certainly deserves much discussion. One final thought, though, is that the brain is not predesigned (as ID contends) but is the result of evolution. Successful evolution always leaves an out; unsuccessful evolution (the dodo bird) doesn’t. That is, unless we are at the dodo end of evolution, the brain can go much much further even biologically. Unfortunately for us, it will take millions of years.

Back to a discussion of neural networks:

The complexity of neural networks is daunting. However, just as one does not need to know the direction vector and the kinetic energy of every molecule in a gas to know the behavior of the collection, there should be ways of simplifying neural circuits so that they can be better understood.

Let me simplify the scheme of photons hitting the retina even further. I do not know whether you have read "Flatland" by Edwin Abbott. In it, a man must function in two dimensions as a square (pentagons are superior; triangles are serfs). However, he does dream about a one-dimensional world called "Lineland." In following data from the eye to the brain and beyond, it is useful to simplify to the utmost. Thus, consider the scenario of a single photon (and its opposite, no photon) as the ultimate carrier of data in "Lineland."


This impulse, on reaching the brain of the individual in Lineland, must be compared to something in order to trigger a response. This something is, of course, memory. In Lineland, every photon hitting the eye triggers a response, just as every photon that doesn't triggers a "no" response. This seems trivial in the extreme, but it is the absolute basis of behavior. Behavior as reflex. Behavior as habit, if you will.
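The whole Lineland nervous system can be written out as a trivial lookup: one binary input, a stored memory table, and a reflex response (a toy model; the names and responses are made up for illustration):

```python
# Toy "Lineland" nervous system: the only possible stimuli are
# photon (1) and no photon (0), and memory is a two-entry table.
MEMORY = {
    1: "respond",      # photon arrived -> trigger the response
    0: "no response",  # no photon -> trigger the opposite
}

def lineland_brain(photon):
    """Compare the incoming signal against stored memory and act.
    This is behavior as pure reflex: stimulus in, response out."""
    return MEMORY[photon]

print(lineland_brain(1))  # respond
print(lineland_brain(0))  # no response
```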

Memory
Unfortunately, human memory, or even nematode worm memory (302 neurons), is not well understood.

It occurred to me to question, in this respect, what is going on in the retina of the eye and in the LGN, where nerve signals are processed even before they reach the brain. I suspect there is no "comparison" with stored memory there, but most likely a type of filtering of the data: processing that is entirely under genetic control, rather like behavior in the nematode.

An ongoing theory has memory as a hologram, i.e., the memory itself is spread out over many neurons, like a visual hologram, which appears as a diffraction pattern on film. As attractive as this hypothesis is, it seems difficult to reconcile with the physical reality presented above. If, indeed, a pattern of neural signals is a pattern of currents in a neural network, it would make sense only if the memory it is compared to is itself such a pattern.
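The "compare an incoming pattern against stored patterns" idea can be sketched as a nearest-pattern match, with each memory distributed across many units rather than held in one cell. This is only a toy model of distributed storage, not the holographic theory itself, and the patterns are made up:

```python
# Toy distributed memory: each memory is a pattern spread across
# eight "neurons" (bits). Recall = find the stored pattern closest
# to the incoming one, measured by Hamming distance.
def hamming(a, b):
    """Count the positions where two patterns disagree."""
    return sum(x != y for x, y in zip(a, b))

stored = {
    "face": [1, 0, 1, 1, 0, 1, 0, 0],
    "tree": [0, 1, 0, 0, 1, 0, 1, 1],
}

def recall(pattern):
    """Return the name of the stored memory that best matches the input."""
    return min(stored, key=lambda name: hamming(stored[name], pattern))

# A noisy version of "face" (one bit flipped) still recalls "face",
# because the memory is spread over the whole pattern:
print(recall([1, 0, 1, 1, 0, 1, 0, 1]))  # face
```

A nice property of this kind of storage, as with holograms, is graceful degradation: corrupting part of the pattern degrades recall rather than destroying it outright.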

However, there are many who believe that long-term memory is accompanied by permanent change in the nerve cells of the brain. Any such change would most likely be at the gene level, with genes activated or suppressed depending on the potential of the cell. If the memory is long term, i.e. not erasable (though we do "forget"), it would need to be permanent.

An infinitude of research is going on in this area and I don't pretend to be conversant in it in the least. Let me cite the abstract of one paper:
Vision, emotion and memory: from neurophysiology to computation
Edmund T. Rolls*
Department of Experimental Psychology, Oxford University, South Parks Road, Oxford OX1 3UD, England, UK
Abstract
The inferior temporal visual cortex provides invariant representations of objects. The computations underlying this can be understood in the framework of a hierarchical series of competitive networks in the ventral visual stream which learn invariant representations by using a short-term memory trace to extract properties of the visual input that are invariant over short periods and are thus statistically likely to be from the same object.
Whew!

Summary (again):
Information from the eye arrives in the brain as a pattern of digital information in both "space" and time. The spatial information depends upon which of the 1.2 million fibers in the optic nerve carries an action potential, and the time factor depends on the frequency of firing. There is a limit on the latter, dependent on the latency of the action potential and, perhaps, the distance from the eye to the brain. The latter is probably not significant, but it could be (need to get the speed of a nerve impulse in a myelinated nerve).
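The timing question is easy to bound with back-of-envelope arithmetic. Large myelinated fibers conduct at up to roughly 100 m/s; assuming an eye-to-occipital-cortex path on the order of 15 cm (an assumed, rounded figure), the conduction delay is about a millisecond and a half, which indeed seems insignificant. The refractory period (~1 ms) similarly caps the firing rate:

```python
# Back-of-envelope bounds on timing; all figures are assumed,
# rounded, order-of-magnitude values.
CONDUCTION_VELOCITY_M_S = 100.0  # large myelinated fiber
PATH_LENGTH_M = 0.15             # eye to occipital cortex, assumed ~15 cm

delay_ms = PATH_LENGTH_M / CONDUCTION_VELOCITY_M_S * 1000
print(f"conduction delay ~ {delay_ms:.1f} ms")  # ~ 1.5 ms

# Maximum firing frequency is limited by the refractory period,
# so frequency coding tops out around 1000 spikes per second:
REFRACTORY_S = 0.001
max_rate_hz = 1 / REFRACTORY_S
print(f"max firing rate ~ {max_rate_hz:.0f} Hz")  # ~ 1000 Hz
```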

While, as we have mentioned, there has been preprocessing of information prior to arrival in the brain, there is probably no actual processing of information between the retina and the occipital cortex.

Once in the occipital cortex the signals are further processed, i.e. undergo modification by neural circuits, and radiate to areas that are known to contain memory.

Once at the memory "site" the impulses are compared to memory, whatever that is, and a signal is sent to the motor area to "do something."

In the end, the workings of the brain are totally dependent on the electrochemical reactions of the neuron, which are bivalent. That is, either/or.

Free Will:
Let me give an example. Most of us remember this awful picture as one of the strongest to come out of the Vietnam War:



If asked, one would be inclined to say that this assassin was exercising free will. One would say that the action is premeditated (i.e., he is not reacting in self-defense) and that he is not being compelled to murder a defenseless prisoner.

However, this man is not in an isolation tank. He is receiving input, mostly visual, photons arriving in patterns. Those patterns are being preprocessed in the retina and the LGN and are travelling to the occipital cortex, where they are further processed. Once there, they radiate to areas of long-term memory. If this were a reflex, like raising your hand to ward off a blow (something the prisoner might have done had he not been bound), the comparison might go immediately to the motor cortex.

In this case, there are most likely signals to the prefrontal cortex. In the prefrontal cortex is the executive center which:
The so-called executive functions of the frontal lobes involve the ability to recognize future consequences resulting from current actions, to choose between good and bad actions (or better and best), override and suppress unacceptable social responses, and determine similarities and differences between things or events.
I would like to postulate at this point that what transpires in the executive center is exactly like what transpires in any other neural circuit: just as in Lineland, the decision is made on a single-neuron basis. The command to pull the trigger is an either/or decision, based on whether the concentration of a signaling molecule or neurotransmitter reaches threshold in a single cell.

Either the man pulls the trigger or he doesn't. While the actual pulling of the trigger involves many muscles and commands, the decision between pulling and not pulling rests on the potential (or, more likely, the activation or inactivation of genomic DNA) in a single cell. This is the "Principle of Bivalence."
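The hypothesis boils down to a single comparison: does the concentration in that one cell cross threshold or not? A sketch of the idea, with made-up numbers and units (this is an illustration of the either/or claim, not a physiological model):

```python
def decide(signal_concentration, threshold=1.0):
    """Bivalent decision in a single cell: the motor command is
    issued if and only if the signal-molecule concentration reaches
    threshold. Values and units are illustrative only."""
    return "pull trigger" if signal_concentration >= threshold else "hold"

# There is no middle ground; a hair above or below threshold
# flips the outcome entirely:
print(decide(1.01))  # pull trigger
print(decide(0.99))  # hold
```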

I am very weary after trying to put this together so at this point I am going to stop and return to the discussion of the ramifications of this hypothesis in the next post.

Friday Crab Blogging






Now Atrios posted a picture of crabs on his blog yesterday. I ask you, is there any comparison? I don't think so...

Boring!

Friday, May 04, 2007