That title certainly sounds pretentious. However, when you think about it, free will is at the center of civilization. Even at the most rudimentary level, humans must exist under the tutelage of laws. Sometimes those laws seem bizarre in the extreme, particularly when they appear to be counterproductive to the continued existence of the species. Laws emanating from religions (e.g. the Shakers) probably are the best example of this and might be the basis for a long discussion.
You might think that the existence of laws in human society would be a good argument for the existence of free will rather than against it. That is, if a person can choose to break laws, knowing that there might be punishment in store for them for having done so, then this demonstrates that people have choices. I will argue against this.
The problem in discussing this topic is that there is very little hard, i.e. scientific, data in spite of the existence of the science of sociology for many years. Until we really know how the brain works on a molecular level, most of what we say on the subject will be speculation. Speaking of speculation about human behavior, a good example is the work of Sigmund Freud and his theory of psychoanalysis.
Psychoanalysis is a “science” that is constructed out of whole cloth. You know this is true when you ask a psychoanalyst what they “do.” They claim arcane knowledge. They claim that you “wouldn’t understand.” This sort of scientific exceptionalism is especially prevalent in Freud’s book “An Introduction to Psychoanalysis” where he lectures to medical students. For instance, one has to take his interpretation of dreams on faith, that when someone describes a dream of going through a door, or kicking a cat, they are actually talking about sex, or their father, or their mother, or what they thought when they were three.
Psychoanalysis is based on the premise that “the truth will make you free.” Ah, ha. There it is again. Being free. We probably have our American ancestors and the Enlightenment to thank for this obsession with “being free.” Of course, there is also that great ’60s classic “Born Free.”
At the beginning of these discussions on information, we agreed that we would try to follow our thoughts where they might lead, guided by our own thinking. I have collected a number of books that relate to this subject, but have not delved into them because I wanted to work through the scenario myself before seeing what others have written. It is my belief that only by looking at the basic electrochemical processes that underlie thinking can we rationally approach the subject.
Summary of Posts to date:
Thoughts on Information I
Thoughts on Information II
Thoughts on Information III
Thoughts on Information IV
Thoughts on Information V
Thoughts on Information VI
Please be aware that some of the animations, particularly of the action potential, have been removed from the site where I found them. Hopefully they will be put back up in the near future.
Summary of the Argument:
In these six posts, we traced the arrival of data at the human eye as coherent (to the brain) packets of photons in time and space. These photons were transduced and conveyed to the brain. Once in the brain, these packets were interpreted.
We need to say a few more things about how the brain interprets the data it receives. Let us go back to our first few months in life. While we cannot say what infants “see”, we do know that if their sight is blocked in the first few months of life they do not develop the ability to interpret visual data. It is not that the apparatus is immature; very likely all the impulses created in the optic nerve of an adult are also created in the optic nerve of an infant. What infants can’t do is interpret this data. They see a human face that is smiling, but it is not until the age of one month that they begin to “see” the smile and respond with one of their own. In accordance with a prior agreement with Jim Putnam, we will say that the data that arrives at the visual cortex can now be termed information, since it is useful to the baby. In other words, unless data is converted in the organism into something that is useful (i.e. contributes to survival), we should not call it information.
What, then, is happening in this time in the infant’s brain that allows this transition from data to information? Well, first of all, the infant brain is growing at a prodigious rate. The human brain quadruples in size by the age of five. What grows are the connections between cells: the axons and dendrites and their junctions. Comparison of growth curves for head circumference with those of body length is striking in that head circumference increases only slowly after the second year, whereas the body, of course, continues to grow into the teens. Did you ever wonder why your head looked so big in those kindergarten pictures?
As we have stated before, the adult brain has 100 billion neurons with probably a trillion connections. Interestingly enough, many of the connections that were made during the fetal period are no longer present at birth. What is happening to those neurons as we speak? Well, they are firing. There is always activity in the brain. This gives rise to the electrical waves that are measured by an EEG. What, exactly, is this firing? Well, we discussed this at length in previous posts. Briefly, the neuron (nerve) cell has receptors on its surface which are activated by neurotransmitters. Upon activation, they allow the passage of sodium ions into the nerve cell. When the concentration of sodium in the cell reaches a certain point, and only then, the nerve cell generates an action potential. This action potential travels down the axon of the nerve cell. At the junction with another cell (or cells) neurotransmitters are released and the process is repeated.
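The threshold behavior just described can be sketched as a toy program: input accumulates, and only when it crosses a threshold does anything fire. This is purely an illustration of the logic; the numbers (threshold, input sizes) are made up for the example and are not physiological values:

```python
# Toy model of threshold firing: a "neuron" accumulates input
# (standing in for sodium influx) and generates an action potential
# only when an arbitrary threshold is crossed. All numbers here are
# illustrative, not measured physiology.

def fire_if_threshold(inputs, threshold=10.0):
    """Return True (fires) only if the summed input reaches threshold."""
    level = 0.0
    for amount in inputs:
        level += amount          # neurotransmitter opens gates, sodium enters
        if level >= threshold:   # threshold reached: action potential
            return True
    return False                 # sub-threshold: nothing happens

print(fire_if_threshold([3.0, 3.0, 3.0]))        # 9.0 < 10.0 -> False
print(fire_if_threshold([3.0, 3.0, 3.0, 1.0]))   # reaches 10.0 -> True
```

Note the all-or-none character: three inputs summing to 9.0 produce nothing at all, while one more small input tips the cell into firing. Nothing in between.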
There are a number of questions I have about all this brain growth, neuron growth, new neurons, death of neurons, etc. What exactly is the correlation between stimulus, what we have called data, and the development of new connections? Is it merely the presence or absence of connections that determines a memory, or are there quantitative changes in the neuron itself? How do quantitative changes in the neuron correspond to the presence of a threshold for the action potential (discussed immediately below)? While of great interest and importance, these questions don’t really bear on our underlying discussion. Those interested in some of the views can go here, here and here.
Probably the most important topic would be an explanation of memory, or long term potentiation. This introduces the issue of synaptic strength, clearly a change in the status of a neuron from simple on/off to one of plasticity.
As the brain grows, part of the structure, i.e. the arrangement of the neurons and their connections, is determined by genetics. Actually, it is genetics coupled with the environment, i.e. the existing arrangement of neurons and, to a degree that has not been determined, the input of data, i.e. impulses, from the sensory nerves including the optic nerve. Again, we know this because brains that are deprived of input do not develop in a normal manner when compared to those not so deprived. However, what do we make of the fact that at 20 weeks of fetal development the brain has 200 billion neurons and trillions of dendritic connections? Why? It must be genetic, unless you believe in playing the Mostly Mozart CD for pregnant women.
There is nothing magical, transcendent or even uniquely human about this process. It occurs in organisms from the lowest to the highest. What marks out humans is the sheer volume and numbers of neurons in the brain. And with that, we segue to memory.
There is nothing magical about memory itself, only in individual memories (ha! You knew I was a closet Romantic.) However, it is difficult, if not impossible, to sort out exactly how memory occurs. We know the following process happens. Sensory input arrives at the brain in the form of electrical impulses, say at the hippocampus from the occipital cortex. There the data is compared to data that is stored in the hippocampus or, in the case of a newborn infant, it runs into a tabula rasa. However, it is not a completely blank slate, since there must be something there toward which the impulses are being directed. That something has been genetically programmed.
One hypothesis that has been around since the sixties (and which appeals to me intellectually but I do not know how much hard evidence exists for it) is that memories are in part a perturbation of standing waves of neuronal firing much like the interference patterns created on a hologram film. The advantage of this type of information storage is that if one part of the "image" is destroyed, one still has the image, although less precise. Another aspect is that the "image" would then cover a wide area of cellular activity. Take it away neurobiologists.
Now, if you have read this far, comes one of the biggest problems in the book, at least in my humble opinion. How do 10,000 genes, each of which codes only for an enzyme, create connections in the hippocampus of the infant brain that will interact with the impulses that are coming from the occipital cortex?
Immediately we are thrown into the realm of developmental biology. As P.Z. Myers from Pharyngula once told me (my only brush with a famous blogger), it’s all in the HOX genes. It is true: the more we learn from early fruit fly development, the more we understand about human development. But, holy cow, how do you explain infant memory?
In any case, since the infant brain needs to develop long term memory for such important things as the mother’s (and father’s) faces, the incoming impulses must change the character of the neuron(s) by the long term potentiation process mentioned above, where the strength of the synapse is altered by repetitive exposure. How many synapses? Well, for one thing, although there may be a trillion synapses, it is hard to see how a synapse in a neuron could change without changing the resting potential of the entire neuron. And there are only 100 billion neurons. How many neurons does it take to register a face? A million? How many pixels on a computer screen? That’s simple: 1,310,720 (a 1280 × 1024 screen). Whoops, that doesn’t leave us much room for storing things. If it takes as many neurons to register your mama’s face as a computer screen has pixels, you can only “remember” 100,000 things! And, for comparison:
As of 30 November 2005 the OED included about 301,100 main entries, comprising more than 350 million printed characters. But, clearly, you say, the brain doesn’t store mama’s face as pixels? Well, I know very little about information theory, so I won’t argue with you. But it still seems like there is a problem here.
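The back-of-the-envelope arithmetic above can be written out explicitly. The pixel count corresponds to a 1280 × 1024 screen, and the one-neuron-per-pixel premise is, as noted, exactly the naive assumption in question:

```python
# Back-of-the-envelope capacity estimate from the text:
# if registering one "memory" (mama's face) took as many neurons
# as a screen has pixels, how many memories would fit?
# The one-neuron-per-pixel assumption is the naive premise being questioned.

neurons = 100_000_000_000     # ~100 billion neurons in the adult brain
pixels = 1280 * 1024          # = 1,310,720 pixels on the screen cited

memories = neurons // pixels
print(pixels)                 # 1310720
print(memories)               # 76293 -- order of the "100,000 things" in the text
```

The answer, about 76,000, is the rough “100,000 things” of the text, which is why pixel-style storage seems implausible.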
So the new data changes the electrochemical settings of the already present neurons. Apparently no new neurons are produced, though there may well be new connections, i.e. dendrites. One runs into a time problem here. Dendrites take time to grow. Memory is relatively instantaneous. All this growing and connection making causes the brain to quadruple in size by the age of five (still only 100 billion neurons) and makes your kindergarten class look like Frankensteins.
How does the receipt of an impulse by a neuron cause a new dendrite to grow? Well, in order for a structural change to occur in a cell, there has to be referral to the cell “brain.” That is, the changed chemical composition of the activated neuron has to send a message to the cell brain, the nucleus, to activate the DNA, which produces a new enzyme (or more or less of an existing enzyme), which then has to move to the cell membrane and stimulate branching until it, I guess, reaches another neuron. All biochemical events here are shrouded by the curtain of Oz.
This sounds very strange, doesn’t it? It sounds almost like the big brain in miniature! Except, we know what the basis of the cell brain is, and that is DNA. We just don’t understand how 10,000 genes can do these interesting things. Of course, the cell exists in a milieu of other cells, and there is a physical structure that influences the growth of new connections, and all of this is somehow directed by the 10,000 genes. It’s enough to make your head explode!
So, the memory of your mother’s face has been implanted in your infant brain by repetitive stimulation of the same cells in the same way. By the time you are one month old and you see your mother’s face, the input data, impulses, encounters a matrix of cells and cellular potentials that creates a further set of action potentials. These go to the limbic system, which has an already created response system (and this is definitely genetic and already there, because babies smile in their sleep from the get go), to give the waiting mother the reward of a reactive smile. (A reactive smile is a reflex and probably bypasses the prefrontal cortex where the brain makes “executive” decisions. While all the arguments remain the same, we will have to include this pathway in our final analysis below since we would never make the mistake of calling free will a reflex!)
At each stage of this process, there are nerves firing, there are neurotransmitters transmitting, receptors recepting, and sodium gates opening. But, there is always a threshold for the generation of a consequent impulse. Not enough neurotransmitter? Not enough open gates. Not enough open gates? No new action potential. And the difference between firing and not firing? One lousy molecule! (or ion, if you want to be exact and implicate sodium).
You convince me that diddling one ion in the executive cortex is free will. Go ahead.
(I hope Mr. Putnam lets me call the series of impulses that exit the hippocampus for the limbic system "information." This is a prize that I have sought for seven posts!)
Now we reach the crux.
In the very complex and very fast circuit from the eye to the limbic system, the signal is made up of discrete impulses. While we have described the pathway from the eye ad nauseam, we did remark on the pre-processing that took place in the retina. There is processing in the LGN and in the occipital cortex. There is major processing in the hippocampus, where the series of impulses is compared to long term memory and, because of a match with your mother’s face, a further series of impulses goes toward the limbic system, activating a smile. As each impulse is conveyed from nerve to nerve, a threshold has to be reached to generate the new action potential. But, taken as a whole, there is also a threshold for generation of a smile. That is, you can’t have a partial smile (well, maybe a grin in an adult). When talking of an infant’s smile, it is either/or. Thus, when considering the impulses as a number, there is an exact breakpoint. If you have, say, 255 impulses, nothing happens. If you have 256? Bingo. Baby smile. That’s the way the gate theory of nerve impulses works. It is either/or. And it is either/or for a knee reflex, for a smile reflex, or for the most complex behavior!
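The exact breakpoint described above is a step function: below the count, nothing at all; at the count, the full response. A minimal sketch, using 256 impulses as an illustrative breakpoint (not a measured value):

```python
# The either/or breakpoint as a step function. The breakpoint of 256
# impulses is purely illustrative, not a measured physiological count.

def baby_smile(impulse_count, breakpoint=256):
    """All-or-none response: either the threshold is reached or it isn't."""
    return impulse_count >= breakpoint

print(baby_smile(255))   # False -- nothing happens
print(baby_smile(256))   # True  -- bingo, baby smile
```

There is no partial output between the two values, which is the point: one impulse (ultimately, one ion) makes the entire difference between response and no response.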
Let us return to, as the Growlery has termed it, "the binary states of General Loan:"
What is happening here is that his eye is receiving photons that get transduced, go to the LGN and then the occipital cortex, go to his hippocampus, get recognized, generate a series of action potentials that then go to the prefrontal cortex. (The Growlery has raised the possibility that this assassination was reflexive, in the “heat of battle” so to speak. This is certainly a possibility but I am going to assume that his pulling the trigger will be viewed by many observers as an act of free will.)
In the prefrontal cortex, or so-called executive center, the impulses encounter neurons that have a certain electrochemical structure. That this process is complex goes without saying. That it is vastly complex and probably not comprehensible by the humans who are using that same structure at the moment also goes without saying. But the basic neurochemistry is just the same as in any other brain, be it that of a dinosaur, a mudfish, or Einstein. It involves thresholds at the individual neurochemical junction, and it involves thresholds for a group of impulses to generate action.
I think General Loan pulled the trigger because his prefrontal cortex had been preconditioned by many, many sensory inputs over his lifetime to generate a series of impulses to the motor cortex to contract the muscles in the arm that flexed the finger. (That the most important inputs came in infancy and youth is another postulate.)
What would have to happen in order for there to be free will in this scenario? Well, there would have to be a change in the ion channel of multiple cells by an agent that was not under the neurochemical control of sensory input. (Please remember from our earliest posts that lack of sensory input very quickly drives someone mad.) Ah, ha, you say! Isn’t this just what thought is? No, I rejoin. Thought can only be the firing of neurons. There is nothing else there. As for this being something transcendental, something “spiritual” that only occurs in humans, I say balderdash. No one likes to think that his or her thinking is just like that of a mudfish, but it is.
There is just no way that a supernatural force is influencing the firing of neurons. It would demand spirituality in a mudfish and violate all the laws of physics. Everything that we have discussed in this series goes against it and, furthermore, there is a perfectly cogent physical explanation for how one makes decisions.
Please try to convince me that something supernatural interacts with mere matter in the human brain to change the concentration of sodium ions.
If this is true, that there is no real free will, I should be discussing the ramifications for the rest of my life. What about Hitler? If he didn’t have free will, what then? What about George Bush and Iraq and a zillion other things? What about punishment? Can we justify (and justifying involves free will too) punishment on the grounds that it might change the underlying memory structure of the brain? Is that at all possible once someone is an adult? Isn’t it vitally important that babies receive the appropriate stimulation so that their prefrontal cortex will be a nice and healthy one? Is my advocating intense early education simply a response of my prefrontal cortex to the information that we have discussed here?
And what about guilt? It would seem to be an unnecessary burden under which many Irish Catholics live their whole lives. Of course, it produces brilliant literature!
Thanks for reading this if you got this far.