Before launching into a further post on thinking and free will, I thought it important to address points made by Mr. Putnam and The Growlery, since they begin to get to the meat of these questions.
Putnam: 04/22/07
It seems to me that the arrival to the brain of the data from the eye is, actually, quite similar to a hospital emergency room. There is some sort of triage that goes on. The eye, it seems to me, conveys different types of data. Simple sight provides other autonomous functions with operating information. Nothing is required. Other data, for instance words from a textbook, is stored and acted upon differently. And the true emergency, the child stepping in front of your automobile, requires another kind of action.
Reply:
I wouldn’t quite agree that the eye conveys different types of data; I think it is just a passive receiver, a transducer. But the second observation strikes at the heart of the argument I have been trying to make. While it appears on the surface that a reflex such as pulling your hand away from the stove is on a different level from reading words, we tried to show that the molecular mechanism of both processes is identical, although vastly different in complexity. We even went on to postulate that even moral “choices” are reflexive. Man is a creature of habit.
Putnam: 04/26/07
In terms of information, I got to thinking today about light waves. I am not a physicist, so it's another area about which I know very little, but I'm dumb enough to step in anyway. Light waves have some very interesting qualities: they behave as both wave and particle. In addition, observation seems to change how they behave. There, that's the limit of my knowledge, and I'm about to extrapolate from that into Information. How dumb can I be?
As a systems designer I know that a stack of computer printouts in the corner is absolutely no use. They are data. When a system can distill the data and present the manager with facts that were not previously known, that is called information. The question that arises from this scene is, "When did that data become information?"
The data contained the information. The light wave/particle moving through the eye contains information. The eye is therefore transmitting information, even though it isn't recognized as such until the brain acts upon it.
Felix says:
In a cybernetic sense, the eye and brain are not discrete units; they are, to at least some degree, parts of a single integrated system; in some ways, the eye is part of the brain.
In some ways, perhaps. But as with light, they are different. The information that reaches the brain from the eye is subject to a series of interpretations. The brain sees differently depending, at least to some extent, on what it expects to see. In seminars I have proved that to clients over and over again. What you have seen heavily influences what you see.
Reply:
This statement again points out the complexity of the issue. Germane to the argument, of course, is the question “what does the brain do with the data that arrives from the eye?” (allowing for some preprocessing in the retina and the LGN). It goes without saying that the brain somehow “compares” the data it receives with “data” stored in the brain. At this point, one might say that we need to know the biochemical basis of memory. I don’t think we have to, since we know the biochemical structure of the incoming data: it consists of action potentials, which are all-or-none (“either/or”) and which depend on a single molecular event to be so. I think we can just postulate that memory is basically the same, since that is the only way incoming data could be compared to it. More on this, and the vital concept of “threshold,” in a bit.
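As a purely illustrative toy (the numbers, the matching rule, and the thresholds are invented for the sketch, not drawn from neurophysiology), the all-or-none character of the signal, and a threshold comparison of incoming data against a stored pattern, might be caricatured like this:

```python
# Toy illustration (not a biophysical model): an all-or-none unit.
# An "action potential" here is just True/False -- it either fires
# or it doesn't, depending on whether summed input crosses a threshold.

def fires(inputs, threshold=3.0):
    """Return True iff total input exceeds the threshold (all-or-none)."""
    return sum(inputs) > threshold

def matches_memory(incoming, stored, threshold=0.75):
    """Compare an incoming spike pattern to a stored one; 'recognition'
    occurs only when the overlap crosses a threshold."""
    overlap = sum(1 for a, b in zip(incoming, stored) if a == b) / len(stored)
    return overlap > threshold

incoming = [True, False, True, True, False]
stored   = [True, False, True, False, False]
print(fires([1.2, 0.9, 1.5]))           # True: 3.6 > 3.0
print(matches_memory(incoming, stored)) # True: overlap 0.8 > 0.75
```

The only point of the sketch is that both the signal and the comparison can be built from nothing but binary events and thresholds.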
Putnam: 05/12/07
Reply:
Thanks for the kind words. A mention is made of choices. I am not sure that we have choices, although we all live our lives as if we do. I also think that I am painting myself into a corner, and I had thought of renaming my blog something like “The Corner Painter.”
The Growlery: 04/26/07
When is information not information?
As a child I often grappled (as, no doubt, did you) with the question of whether space ever ends or whether it goes on forever. Common sense told me that nothing goes on for ever ... so there had to be an end. But when I tried to visualise that end, it always took the form of a barrier of some kind ... and common sense told me that the barrier must have another side ... and something beyond it.
Then, of course, as I grew, I discovered that not every question has a clear-cut answer ... and that, as questions get further from common experience and human scale, so common sense becomes less reliable as a guide.
Reply: It’s just as hard as trying to imagine a fourth (or ninth!) spatial dimension. Although it is high kitsch, Salvador Dali’s St. John of the Cross attempts to deal with this. We were fascinated with this painting in high school (parochial, of course), and it continues to interest me that people whine about Dali even though he was a superb technician. I even had a nun send it to me as a postcard (she’s now come out; married; living in Tennessee). Oops, I just discovered that the painting I meant was Corpus Hypercubus, not St. John of the Cross.
I found myself running in similar mental circles, three weeks ago, around the mulberry bush of Jim Putnam's question of whether or not Dr C can refer to "information" being transferred between eye and brain. (Alas, other events in life prevented me from addressing the issue in here until now.) And the unsatisfactory answer for which I've eventually had to settle is the same one which I would now have to offer my childhood self over the finiteness of space: it depends on your point of view, your frame of reference, and your definition of terms. Dr C takes the straightforward view and concedes the point: that whatever is transferred is not information until used by the brain. It's not as simple as that, however. First, Dr C points out that some information processing takes place before the pulses are despatched down the optic nerve. What leaves the eye, whether information or not, is "signal". Now ... if I codify and transmit a signal with intent to communicate (for instance, I write this post and despatch it to the web server), does that signal constitute information?
Each word is what linguists call an "arbitrary signifier". Each character in the word is, in its turn, an arbitrarily assigned symbol representing a sound or other structural communicative component. And each character is, in turn, replaced for digital transmission purposes by an arbitrarily assigned bundle of electronic bits. But the precise combination of bits which leaves me, and reaches you, is not random: it is designed (by me, by human cultural history, by digital coding agreements) to enable my verbiage to arrive in front of your eyes for reading as they did before mine as I wrote. If the result is not information, perhaps it should be described as "potential information". The same is true of signal passed from the retina along the optic nerve to the brain.
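To make the point concrete, here is a minimal sketch (in Python, using the standard UTF-8 convention) of a word travelling as arbitrarily assigned bundles of bits that are nonetheless fully recoverable at the far end - "potential information" in transit:

```python
# A word is carried as arbitrarily assigned bit patterns (here, UTF-8).
# The bits are not random: the same conventions that encoded them allow
# the receiver to decode them.

word = "signal"
bits = [format(byte, "08b") for byte in word.encode("utf-8")]
print(bits)  # e.g. '01110011' for 's'

# The receiving end reverses the convention to recover the word:
decoded = bytes(int(b, 2) for b in bits).decode("utf-8")
assert decoded == word
```

Nothing in the bit patterns themselves is "about" anything; only the shared convention at each end turns them into my words before your eyes.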
But, secondly, I am not convinced that we can really speak of an information (or potential information) carrying signal being passed between eye and brain, in the same way as it is between you and me. In a cybernetic sense, the eye and brain are not discrete units; they are, to at least some degree, parts of a single integrated system; in some ways, the eye is part of the brain. Nor is the brain itself (disregarding the eye) really a single entity; it is a collection of (in many ways partially autonomous) parts. Given all of this, the whole question of information, or impulses, or signal passing between eye and brain is a fraught one - at the same time both true and untrue in complex combinations.
Then again, the universe is probably nothing but information anyway; I, my brain, my eye, are all nothing but small information structures within a larger sea of it.
Having thoroughly confused myself, I shall now go to bed.
Reply: Hard to reply to this one. In the posts, I resorted to calling the signals going from eye to brain “data.” I think we are all in basic agreement and can postulate (semantics are everything) that once that data gets compared to stored data, it becomes information. But that would mean that when it is stored in turn, it reverts to data! Semantics again. Ultimately, we should just view this data/information as bioelectrical bits. I hate to be a reductionist, but that is what I think.
The Growlery: 04/27/07
When is information not information? (2)
Waking this morning, I find that Jim Putnam has, in the intervening eight hours, already responded to last night's post - which had taken me a laggardly three weeks to put up. I am suitably chastened...
Jim's systems analysis view of information is unarguable. It also illustrates the slipperiness of this whole topic.
A stack of printouts in the corner is no use to anyone, and therefore not information. The same, then, must presumably be true of a stack of sensory data in the corner of my brain which I cannot interpret - perhaps a set of unidentified sounds?
His comment about the distinction between hearing and processing (by extension, between any sensory input and processing) is right on the button. Almost everything we think we "see" is actually an internal result of processing.
The highly engineered Pentax lens on the front of my SLR is capable of resolving 12500 image points per square millimetre of film or digital sensor surface. The lens of my eye can only muster 64 at the retina; to make things worse, the image is focused through aqueous and vitreous humours, not to mention those fatty ropelike floaters - and most of the retina surface can't make full use of it anyway. And yet ... the image emerging from my SLR is to be measured (and usually found wanting) against the highly detailed and information-rich image in my mind, not the other way around.
This is, of course, because my eye continually shifts to multisample the scene before it, and miracles of high-speed image enhancement transparently assemble and deliver a real-time, processed result to me instead of the raw data. What I think I "see" is actually a sophisticated, software-mediated model.
All of which supports Jim's view: the image formed by the lens of my eye is no use to anyone. Only the processed model is useful information. As Dr C flagged up in his Information V, the processing starts immediately: the eye doesn't just passively pass on raw data, but processes it at a low level first. The uncertainty lies in when, exactly, the one (raw data) becomes the other (processed model). I don't have an answer - I just ask the question, then walk away leaving somebody else to deal with it. Like Jim, I am learning from the discussion.
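One way to picture how multisampling turns poor raw data into a rich model is the following toy sketch (the "scene", the noise level, and the number of glances are all invented for illustration):

```python
import random

# Toy sketch of multisampling: each "glance" is a noisy, low-quality
# sample of the same scene; averaging many glances yields a model far
# closer to the scene than any single raw sample is.

random.seed(1)
scene = [0.2, 0.8, 0.5, 0.9]  # the "true" image, as four intensities

def glance():
    """One noisy, low-fidelity capture of the scene."""
    return [v + random.gauss(0, 0.3) for v in scene]

samples = [glance() for _ in range(200)]
model = [sum(col) / len(col) for col in zip(*samples)]

single_err = sum(abs(a - b) for a, b in zip(glance(), scene))
model_err = sum(abs(a - b) for a, b in zip(model, scene))
print(f"one raw glance error: {single_err:.3f}")
print(f"averaged model error: {model_err:.3f}")  # much smaller
```

The assembled model outperforms any single capture, which is roughly why the image in the mind beats the image on the retina.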
Reply:
Again, pithy comments, making it all the more complex on any level above the biochemical. Sort of amazing what the old human bean can do. I feel like I am committing sacrilege by reducing it to the mundane level of molecules.
Finally,
The Growlery: 05/20/07
Free will and the binary states of General Loan
If we ignore my overwhelmingly large area of agreement with Dr C (where's the discussion potential in agreement?), my thoughts focus around his use of "that picture" by Eddie Adams: South Vietnamese general Nguyen Ngoc Loan's summary street execution of a suspected Viet Cong member in Saigon's Chinese quarter.
The use of this picture troubles me. Partly because I can identify too closely with it: in a former life, I learned too well that there are many situations within which action precedes thought. Partly because it ties Dr C's argument too closely to such situations.
It is quite believable that, in this particular situation, the decision to squeeze the trigger came down to a split-second flipflop as Dr C describes, no more a free decision than whether Schrödinger's cat lives or dies in its box. But that (if so) doesn't really persuade me (as I think Dr C is arguing) that free will is a myth.
After all, Loan's action took place during a street skirmish, when his reflexes would be tuned to survival. Furthermore, it was within the larger context of a long and bitter dirty war, when such survival instincts would already be at a high level. Both the firing of Loan's revolver and the firing of Adams's camera were clearly reflex actions decided well below the conscious cognitive layers of the brain.
Now, it may be that this is just an extreme case, and that all free will is equally flipflop dependent. The well-known experiments (Milgram's obedience studies) where ordinary civilised volunteers behave barbarically towards fellow participants when told to do so by the organisers may support this. The more I examine possible counter examples, the more I am compelled to concede that many actions and decisions, even after much thought, can probably be explained in terms of a logic gate tripped by potential in one direction exceeding that in another. But do I accept that this is always so? No, I don't. I confess (rather shamefacedly) that I am short of positive supporting evidence for that belief; all I can offer is a basis for doubt. Nevertheless, I continue to hold the belief: and in a moment I'll offer a piece of sophistry to excuse it.
If Dr C is right in what he is (I think) suggesting, then we have to include in our definition of action potential some very high order informational entities - in fact, the whole totality of our mentation and cognition. (As a mathematician of a particular type, I would probably describe what is happening not as a simple logic gate switch but as a "catastrophic change of state".)
Take, as an example, slapping a child. This is a direct equivalent of General Loan's street execution, but removed to a level where things unfold more slowly and can be more easily examined. I believe, very strongly, on both emotional and rational grounds, that to hit a child is always wrong. But perhaps I am a highly stressed mother, doing my best in impossible circumstances, whose child repeatedly hits me; I snap, and slap him. Clearly, I can argue that the stress rose to a level where it overrode the pressure against acting: "I snapped" really means "my logic gate changed state". But how to describe the complex of cognitive processes that kept up the counter pressure, and held the gate, for so long? Does free will (in the usual more complex meaning) not operate throughout the period when I feel like slapping junior, but choose not to do so? I believe that it does; that the complexity and time scale involved (both on cognitive, not reflexive, levels) make it unreasonable to conceptually equate this with the run-up to a life-or-death twitch of General Loan's finger. Both situations end with a binary flipflop of a logic gate, but neither the gate nor the surrounding action potentials are comparable between the two situations.
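As a rough sketch (entirely illustrative; the quantities and their values are invented, not measured), the "held gate" might be caricatured like this:

```python
# Toy rendering of "my logic gate changed state": pressure to act
# accumulates over time, while a (cognitive) counter-pressure holds
# the gate; the output is binary, but the run-up to the flip is not.

def gate(pressure, counter_pressure):
    """Binary flipflop: act iff pressure exceeds the counter-pressure."""
    return pressure > counter_pressure

counter_pressure = 10.0                      # conviction that hitting is wrong
stress_events = [2.0, 1.5, 3.0, 2.5, 1.8]    # provocations, accumulating

pressure = 0.0
for t, event in enumerate(stress_events):
    pressure += event
    state = "SNAP" if gate(pressure, counter_pressure) else "held"
    print(t, f"pressure={pressure:.1f}", state)
```

The point of the sketch is only that the binary output at the end says nothing about the complexity, or duration, of the cognitive process that held the gate beforehand.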
And what about even lower decision making domains, which never reach a catastrophic change of state but simply a shift in one direction or another? For example, this post. Aware that Dr C has put an immense amount of effort into the writing of all his posts, while I lazily consume them and contribute little, I have spent much time mulling over whether to post this, to email it privately to Dr C, or to try some intermediate level of discussion between Dr C and a small email friendship group whom I trust. Although I have not, as I type this sentence, definitely made up my mind, I shall probably post it. The point here is to ask how far (and how definitely) the digital flipflop interpretation of free will can be applied to my process of arriving at that final decision?
This whole fascinating thread started with my use of the word "instinctive" in an article on pattern recognition and robotics, when I should (as Dr C rightly pointed out) have used "reflexive". Let me tie the present argument back to that for a moment.
I said that a robot built on the anthropoform servitor Asimo model needed to have certain software constructs (such as balance control) built in while others (information about frequent visitors to the home, for example) could afford the slight delay involved in external storage. The first case allows little scope for free will; the second may.
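A toy contrast (the division of labour here is assumed for illustration, not taken from Asimo's actual architecture) might look like this:

```python
import time

# Toy contrast: a reflex like balance must be handled by a built-in,
# immediate routine, while identifying a visitor can tolerate the
# slight delay of a lookup in external storage.

BUILT_IN = {"balance": lambda: "correct posture now"}
EXTERNAL_STORE = {"alice": "frequent visitor"}  # e.g. a networked database

def external_lookup(key):
    time.sleep(0.05)  # stand-in for storage/network latency
    return EXTERNAL_STORE.get(key, "unknown")

print(BUILT_IN["balance"]())       # reflexive: no scope for deliberation
print(external_lookup("alice"))    # deliberative: the delay is tolerable
```

The reflexive path leaves little scope for anything we would call free will; the slower, deliberative path just might.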
And now, to close, that promised piece of sophistry to excuse my unscientific insistence on maintaining belief in free will while its status remains unproven.
Both Dr C and I frequently and passionately argue for the righting of wrongs - for instance, the treatment of Palestinians by the Isra'eli state. But, if all free will is a myth and boils down to flip-flops over which we have no control whatsoever, where is the point in bothering to rail against such things? Right and wrong, under that view, will be equally nonexistent: Isra'eli decision makers will either take or not take the actions, and we will abhor them or not, as an entirely stochastic set of outcomes uninfluenced by what I like to think of as free will. From a game theory viewpoint, this leaves me with an inescapable conclusion. If there is no "free will" in the usual sense, my actions will have no effect one way or the other. If free will in that usual sense does exist, then inaction will leave the wrong unaffected by action which may conceivably help to right it. Therefore, in the absence of certainty one way or the other, the only rational course is to behave as if free will exists until the contrary is proven ... and human beings are frail creatures who, regardless of intellectual stance, only follow a course for any length of time if they believe in it.
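The wager can be laid out as a small decision matrix (the entries are just restatements of the argument above, not measured payoffs):

```python
# Sketch of the wager as a 2x2 decision matrix: rows are whether free
# will exists, columns are whether we act as if it does. The "payoffs"
# are illustrative stand-ins restating the argument, nothing more.

outcomes = {
    ("free will exists", "act"):    "action may help right the wrong",
    ("free will exists", "do not"): "wrong left unopposed",
    ("no free will",     "act"):    "no effect either way",
    ("no free will",     "do not"): "no effect either way",
}

# Acting is never worse than not acting, and is sometimes better --
# so, absent certainty, acting dominates and is the rational course.
for (world, choice), result in outcomes.items():
    print(f"{world:17} | {choice:6} -> {result}")
```

This is just dominance reasoning: whatever the true state of the world, the "act as if free will exists" column never loses.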
Reply:
The Growlery has delved deep into the heart of the issue. I want to get this post out, so I request permission to discuss these topics at greater length in the next post on Information. Could this situation be similar to what many of us who were born and raised staunch Catholics encountered when it dawned on us that much of what we were taught was gibberish? Or when we see ostensible Christians (that means you, George W.) not turning the other cheek but murdering in cold blood? All I ask is that one follows this through to its logical conclusion. If there is a flaw in the reasoning, then I will concede. As for the conclusion that it is all in vain, maybe we should adopt Pascal’s stance and say “we should live life as if there were free will!”
2 comments:
Wow! I shall stagger away, amazed at your staying power, and think :-)
DrC> I request permission to
DrC> discuss these topics in
DrC> greater length in the next
DrC> post on Information.
No permission needed ... I'm immeasurably in your debt already, and the debt only increases if you do so.
DrC> All I ask is that one follows
DrC> this through to its logical
DrC> conclusion.
Certainly: agreed. I shouldn't really be posting any responses as you go; I should wait for the end before commenting ... but, hey, I'm too interested and stimulated to keep quiet!
DrC> If there is a flaw in the
DrC> reasoning, then I will
DrC> concede.
I very much doubt that there will be any flaw in the reasoning ... only, possibly, disagreement or reservation on those areas which can only be postulated, or where conclusions must be interpreted, or where arguments external to the discussion space intrude (Eddington's "Decline of determinism" invocation of quantum effects, anyone?).
And finally: I wish I'd thought, as you have, to quote Pascal instead of trying to construct my own poor copy!
No, that wasn't the final thing. The final thing is: thank you.
Thanks. Now comes the tough part. I think we should continue to use General Loan since it is such a definitive act. Now to read some St. Augustine and Leibniz.