Saturday, May 19, 2018

The Problem of Free Will (And How to Solve It) [DRAFT]


Imagine yourself sitting on the jury for a courtroom trial of the infamous Neckbeard the Pirate. Looking over the list of charges, you see what appears to be quite the nefarious career: looting, pillaging, extortion, rape, kidnapping, murder… the list goes on and on. Not only that, but the defendant doesn’t even deny his actions. He confesses outright to every last crime, plus a few extra that didn’t even make the list. It seems to be a textbook open-and-shut case, except for an unusual plea made by the defense.

“Please don’t send me to prison!” says Captain Neckbeard. “I had no choice. You see, years ago, the Canadian government installed a microchip in my brain that forced me to do their evil bidding. It wasn’t really me who committed those horrible crimes, but the agency controlling me through the implant. I was just an unwilling puppet in the whole ordeal.”

“Golly,” you think. “That’s a pretty far-fetched claim. He can’t possibly expect us to believe this, can he?”

Sure enough, however, the story checks out. CT scans clearly show the presence of a device implanted in his prefrontal cortex, followed by video footage of the actual surgery that took place to remove it. The very chip itself is then submitted as evidence, followed by a series of neurologists, cognitive scientists, and electrical engineers all explaining exactly how it works. Representatives from the Canadian government even take the stand and admit openly to having abducted their own citizens and implanted them with mind-control devices as part of a top-secret spy project gone terribly awry.

Ask yourself. Given such testimony, how exactly would you reach your verdict? Would you vote guilty and throw the defendant in prison for the rest of his life? Or would you acquit him of all charges?

If you’re anything like most people, the evidence in this scenario would probably convince you to acquit. After all, it wasn’t really the defendant who was responsible for his crimes. It was the operator of the mind-control chip. It therefore seems perfectly reasonable to conclude that the defendant was not acting in accordance with his own free will and thus does not bear any moral responsibility for his crimes.

But what if we dialed things back a little? For example, suppose it wasn’t a microchip that caused Captain Neckbeard to do all those horrible things, but rather a brain tumor. And instead of directly forcing him to commit any specific act, it simply released a frenzy of antisocial desires while simultaneously blocking the part of his brain responsible for impulse control. Upon removal of the tumor, however, the defendant was then immediately restored to normal social behavior.

Sound far-fetched to you? Well, believe it or not, this sort of thing actually happens in real life all the time [1,2]. So ask yourself. How would you cast your vote if you were on his jury? Would you vote to acquit? Or would you vote to convict?

What other goofy scenarios do you think we can imagine? Like what if, for every act of piracy, a full one million dollars was donated to the Red Cross and then used to save 100 lives? Thus, being the perfectly moral agent that he is, Captain Neckbeard had no choice but to engage in a few acts of lesser evil for the sake of a much greater good. Would that change anything?

What if Neckbeard just really enjoys being a pirate so much, that absolutely nothing else in his life could possibly bring him happiness? Or what if Neckbeard just got into piracy one day because he felt bored on a Sunday afternoon? What then?

These simple thought experiments represent the foundation for an age-old principle known as free will. It’s a classic problem that philosophers have hotly debated for thousands of years, and it continues to spark debate to this very day. Yet despite the gallons of ink that have been spilled over this topic, it’s surprisingly rare to find anyone with a clear, comprehensive solution that actually works. That’s a real shame, too, because it’s not exactly difficult to provide functional answers to these kinds of questions. All it takes is a little willingness to explore the problem honestly, plus the intellectual discipline to apply rigorous standards of logical consistency. That’s why I find the issue so fascinating to investigate, and why I think you’ll enjoy following along as we finally settle the problems of free will once and for all.

Now before we begin, it’s important to understand that the fundamental problem with an idea like free will has very little to do with whether or not it really exists. Rather, the far more compelling problem is how best to define the term in the first place. It’s as if we all have this deep, intuitive sense of what free will ought to mean, but just can’t seem to pin it down into any hard, quantifiable terms. It’s a huge flaw that completely undermines nearly every dedicated treatment of the subject. After all, what’s the point of engaging in a public debate when no one has even agreed on what the debate is supposed to be about? So before we even touch on the problems of free will as a workable concept, it really helps to step back and ask ourselves what exactly those two little words mean. Thus, to help us get started with our analysis, I would like to propose the following thought experiment:

The Twins Problem

Imagine yourself sitting in a room behind a table. Across from you are what appear to be two identical twins. They look the same, they act the same, and in all physical respects, they seem to be as alike as two people can possibly be. There is, however, one key difference that sets them apart. One of these entities has free will, and the other does not. Your job is to figure out which one is which, and to do so with repeatable, reliable consistency.

Ask yourself, how exactly would you go about telling the difference? What observations do you make? What experiments do you perform? What empirically verifiable distinction must we look for in order to differentiate between a being that has free will and a being that does not?

Bear in mind now that whatever answer you give to this question is, effectively, your definition for free will. It’s a textbook application of a well-known principle called verificationism, and it represents the ultimate foundation on which all human language operates. It’s an amazing philosophical tool that works wonders at cutting through the pseudointellectual background noise and getting right to the heart of such difficult ideas.

To illustrate, suppose someone tries to tell you that free will is an “immaterial construct” and thus cannot be detected or measured using the empirical methods of science. Okay, that’s fine if you want to think that, but it immediately runs into a pretty glaring problem. When Captain Neckbeard says “I was not acting out of my own free will,” and the prosecution says “Yes, you totally were,” pray tell, how exactly are we supposed to tell who’s right? Do we just randomly guess? Should we assume one side is always telling the truth, no matter what? Because the moment we reject the application of any objectively verifiable criteria, the only way to settle such disputes is by pure, unfettered say-so. Microchip in your brain? Sorry, that’s an empirically verifiable distinction. How about a brain tumor? Nope, that’s verifiable as well. Or maybe there’s a gun pointed at your head? Sorry. Still verifiable and thus material---nothing whatsoever to do with free will!

Clearly, any attempt to side-step verificationism is little more than a philosophical dead end. Yet despite this obvious limitation, it can still be like pulling teeth just to get a clear definition out of people. That’s what makes free will such an infuriating subject to write about, because any attempt to pin it down into a hard definition will inevitably be met with angry accusations of “straw man” from every corner of the blog-o-sphere. Nevertheless, we have to start somewhere, and there are at least some popular definitions that do provide a workable framework for analysis.

One statement in particular that tends to occur over and over again is the famous expression that free will is inherently defined by “a capacity to have done otherwise.” What exactly that means is open to some interpretation, but it nearly always involves an implied rejection of predetermination. It’s a classic philosophical viewpoint known as libertarian free will, or metaphysical libertarianism, and it holds that free agents are not necessarily bound by the initial conditions of their circumstances when making decisions.

To demonstrate how this works from a verificationist perspective, simply imagine our twins being given a choice between chocolate and vanilla milkshakes. After long and careful deliberation, they both eventually conclude that chocolate is the preferred flavor, and so naturally pick that milkshake accordingly. But suppose for a moment that there existed a magic rewind button capable of reversing time itself. Every last subatomic particle in the universe, including those making up our very own brains, would be reset back to exactly where it was at some point in the past. If we were to then press this button and replicate our experiment between chocolate and vanilla, what outcomes should we expect to observe? According to most schools of thought, the twin without free will should consistently pick the chocolate milkshake every time. However, for the twin that does possess free will, there is the distinct possibility that, on occasion, he just might decide to take the vanilla.
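
To make the distinction concrete, here is a minimal sketch of that replay test in Python. Everything here is an illustrative assumption on my part---the names, the 0.9 probability, and the very idea of modeling a brain as a dictionary---but it captures the logic: the deterministic twin’s choice is a pure function of its initial conditions, while the only way to model a twin who “could have done otherwise” under identical conditions is to inject an element of chance.

```python
import random

def deterministic_twin(brain_state):
    # Same initial conditions in, same choice out: every replay looks identical.
    return "chocolate" if brain_state["prefers_chocolate"] else "vanilla"

def libertarian_twin(brain_state):
    # The choice is not fixed by the initial conditions; the only way to model
    # "could have done otherwise" under identical conditions is pure chance.
    if brain_state["prefers_chocolate"] and random.random() < 0.9:
        return "chocolate"
    return "vanilla"

state = {"prefers_chocolate": True}

# "Pressing the rewind button" = rerunning the choice under identical conditions.
print({deterministic_twin(state) for _ in range(1000)})  # {'chocolate'}
print({libertarian_twin(state) for _ in range(1000)})    # almost surely {'chocolate', 'vanilla'}
```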

That description may sound a little goofy, but it really is the basic train of thought provided by the overwhelming majority of thinkers on this subject. They make no effort whatsoever to tell you what free will really is, but only to tell you what free will isn’t---namely, that free will is not a thing that can logically coexist with a deterministic universe. The two ideas are incompatible.

Clearly, there are some pretty serious problems with this viewpoint that need to be addressed. For starters, there are no magic “rewind” buttons with which to reset the entire universe, which means the distinction between a being with free will and a being without is completely immeasurable in any practical sense. Again, when Captain Neckbeard claims that he was not acting in accordance with his own free will, how exactly are we supposed to verify such a claim? Do we really need to hop inside of a time machine and observe his actions over and over? And exactly how many times must we watch him repeat his crimes before we are convinced that his actions were predetermined?

Obviously, that’s not ever going to be an option, nor does it even make sense to try. The past is the past, and no being can ever possibly “do otherwise” on anything that has already been done. Free will in this specific sense is therefore completely dead in the water before the boat has even sailed. Nevertheless, we cannot simply ignore the immense popularity of such a view, which means the principle of charity demands that we at least try to find a way of expressing it in more meaningful terms. For example, rather than reset time itself, what if we instead replicate the initial conditions of some past experiment and then observe the corresponding outcome in the future?

This turns out to be a much more workable concept of free will because it represents something we can actually utilize in the real world. I don’t have to necessarily rewind the entire universe per se, but I can, in principle, replicate all of the relevant conditions that gave rise to a particular event. If our universe is indeed perfectly deterministic, then for any physical experiment you may ever hope to contrive, I can, in principle, predict and replicate the outcome of that experiment with perfect consistency. All I have to do is reproduce your initial conditions with sufficient precision and then carry out the experiment accordingly. Thus, for all practical purposes, your future might as well be set in stone because every outcome that occurs will always be causally predetermined by the conditions that came before it.

This viewpoint presents all kinds of philosophical problems for the concept of free will, with the most immediate concern being the simple fact that human behavior very strongly appears to be deterministic. That means if I were to completely reset the neural connections of your brain---right down to the very last molecule if need be---then there is every reason to believe that you would make the exact same decision every time under a given scenario. Or equivalently, if I knew the exact arrangement of every last neural connection in your brain, then I could, in principle, predict exactly how you will behave when presented with a choice.

One of the most dramatic demonstrations of this fact was presented by neuroscientists at the Max Planck Institute in Germany [3]. Human subjects had their brains scanned with functional magnetic resonance imaging (fMRI) while randomly pushing buttons with either their left or right index fingers. Upon post-analysis of the data, it was found that decisions could actually be predicted, with greater than 50% accuracy, up to a full 10 seconds before the subjects themselves were aware of which button they were about to press.

For many, this tends to have pretty devastating implications for the idea of free will altogether, and in particular for our entire concept of criminal justice. After all, if every decision we make is merely the result of physical interactions between atomic states in our brains, then how exactly is that any different from the microchip scenario? Captain Neckbeard didn’t “choose” to commit his crimes any more than a laptop “chooses” to follow its programming. And since we don’t go around tossing laptops into prison for misbehaving, then what’s the point of doing the same thing to criminals? If, however, we could hypothetically reset the initial conditions and give Neckbeard another chance to “do otherwise,” then it seems reasonable to conclude that he should be held morally responsible for making the wrong decisions.

Fortunately for the libertarians, there does seem to be at least one ray of hope lurking deep within the bowels of modern physics. To see how it works, simply imagine what would happen if a single neutron were freely tossed out into empty space. At first, all we would observe is a lone subatomic particle floating along at a constant velocity. If, however, we waited around long enough, then we would eventually observe a phenomenon called free neutron decay, wherein the neutron spontaneously bursts into a proton, an electron, and an antineutrino. If we then repeated this experiment many times over, we would find that half of the decays occur within about 10.3 minutes, while the other half take longer.

None of this is particularly compelling so far, except for one major detail. No matter how perfectly we replicate the initial conditions, we will never be able to consistently replicate the exact moment of individual neutron decay. Try as we might, it will always be a matter of pure, unfettered probability. Nothing in particular actually causes this event; it is a fundamental property of nature herself that subatomic particles should behave this way. It’s a famous principle called the Copenhagen interpretation of quantum mechanics, and it actually represents the dominant viewpoint among physicists today. The implication is that if our brains are fundamentally made of atoms, and atomic behavior is not predetermined, then it stands to reason that human behavior itself should likewise contain some trace elements of indeterminism for free will to hide in.

This may sound a little crazy at first, but it really is a popular argument getting promoted by scientists and philosophers today [4, 5]. It’s weird, too, because it doesn’t exactly take much effort to realize the absurdity that it represents. For example, the Copenhagen interpretation isn’t exactly well-liked among modern physicists, because there are tons of logical and philosophical problems with that idea unto itself. It’s just that no one really has anything better to go on right now, so we all just kind of begrudgingly accept it in the meantime for the sake of tradition.

Another weird implication of this view is that it seems to grant free will to completely pre-programmed machines. To see how, imagine a simple robot that picks out milkshakes in accordance with the decay of free neutrons. If the neutron decays in 10 minutes or less, then pick the chocolate milkshake. If it takes longer, then pick the vanilla. For all practical purposes, this robot will appear to behave exactly like the supposedly “free” twin, in that no replication of initial conditions will ever result in a perfect replication of outcomes. Yet we can also clearly see that there is nothing “free” about this configuration because the robot is still just following its programming.
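
Here is a toy sketch of that robot (simulating the decay times rather than measuring a real neutron, and using the 10.3-minute half-life quoted above as an illustrative constant):

```python
import math
import random

HALF_LIFE_MIN = 10.3  # free-neutron half-life quoted above, in minutes

def wait_for_neutron_decay():
    # Decay times follow an exponential distribution whose median equals the
    # half-life: identical setups, yet a different decay moment on every run.
    return random.expovariate(math.log(2) / HALF_LIFE_MIN)

def milkshake_robot():
    # Pre-programmed rule: decay within 10 minutes -> chocolate, else vanilla.
    return "chocolate" if wait_for_neutron_decay() <= 10.0 else "vanilla"

# Replaying the "same" experiment never reproduces the same sequence of picks,
# yet the robot is obviously just following its programming.
print([milkshake_robot() for _ in range(10)])
```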

The problem that people don’t seem to realize about libertarian free will is that the opposite of determinism is, by definition, randomness. Try as you might to replicate the initial conditions, the outcome will always jump back and forth between the possible alternatives. So pray tell, my dear libertarians, how exactly is that any better than the alternative? When I offer you a choice between chocolate and vanilla milkshakes, how exactly is it “freedom” to pick randomly with no predictability whatsoever? It’s like flipping a coin whenever you have to make a decision and then simply doing whatever the coin tells you to do. That’s still a form of “programmed” behavior whether you like it or not.

To see why this form of free will is truly bizarre, simply imagine being offered a choice between your favorite flavor of milkshake and a giant pile of dog feces. In principle, you’d think we should always want to go for the milkshake every time, because milkshakes are delicious and satisfying while dog feces are poisonous and grotesque. Yet if libertarian free will really were a thing, then we necessarily must expect that, on at least some rare occasions, you would arbitrarily feel yourself overcome with the inexplicable urge to literally eat shit. Then, when asked why on Earth you did that, the only explanation you could possibly give is that some mysterious compulsion overpowered your sensibility and made you do it anyway, against all reason. That’s hardly the action of a “free” agent, don’t you think?

So no matter which universe we happen to live in, be it perfectly determined, random, or somewhere in between, the idea of libertarian free will simply doesn’t work. Either way, you’re still just a machine following your programming. The entire debate between determinism and libertarian free will is clearly nothing but a gigantic red herring. Neither situation provides us with a satisfying description of how a morally responsible agent ought to behave, and neither situation gives us a compelling framework for the administration of criminal justice. It should therefore come as no surprise that libertarian free will is easily one of the most widely rejected ideas in the history of academic philosophy [6], and anyone still fixating on such distinctions has clearly not been paying attention to the arguments. But you know what? That’s okay, because there are still plenty of other conceptions of free will that don’t suffer from such glaring inconsistencies.

Remember now that all we’re really trying to do here is define a word, and there are still plenty of untapped definitions for free will we have yet to even consider. But it’s important to always keep in mind that there is no such thing as an objectively correct definition. There are only good definitions and bad definitions. Good definitions are clear, consistent, concise, and generally capture the intuitive understanding we typically associate with such terms. Bad definitions are either unclear, unverifiable, convoluted, or logically absurd in their implications. Metaphysical libertarianism is nothing more than a bad definition, but that doesn’t mean we should summarily give up on any conception of moral accountability.

To that end, philosophers have developed a huge variety of alternative definitions collectively known as compatibilism---the idea that, whatever free will happens to be, it is still logically “compatible” with a deterministic universe. It’s a broad family of competing ideas unto itself, and we could easily spend hours picking them apart on their merits.

For instance, take the famous philosopher Harry Frankfurt, who often described free will in terms of second-order desires []. That is to say, I may prefer to eat a chocolate milkshake over a bowl of cold broccoli because chocolate milkshakes are yummy and satisfying while cold broccoli is tasteless and boring. However, I may also be keenly aware that broccoli is far healthier than chocolate milkshakes, and so I may pick that option because I value my long-term health more than the immediate satisfaction. By exercising free will, I am basically taking control of my desires through a kind of metadesire, or a desire to change and modify my desires.
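
As a rough sketch of how a second-order preference might override a first-order desire, consider the snippet below. The numbers and the weighting scheme are purely illustrative assumptions of mine, not Frankfurt’s own formulation; the point is only that the “metadesire” acts on the desires themselves before the choice is made.

```python
# First-order desires, ranked by immediate appeal (numbers are arbitrary).
first_order = {"chocolate milkshake": 9, "cold broccoli": 2}

# Long-term value the agent reflectively endorses (also arbitrary).
long_term_value = {"chocolate milkshake": 1, "cold broccoli": 8}

def act_on_impulse(desires):
    # A purely first-order agent just takes the most tempting option.
    return max(desires, key=desires.get)

def act_on_second_order(desires, values, health_weight=0.7):
    # A second-order preference ("I want to want the healthy thing")
    # re-weights the first-order desires before the choice is made.
    score = {option: (1 - health_weight) * desires[option]
                     + health_weight * values[option]
             for option in desires}
    return max(score, key=score.get)

print(act_on_impulse(first_order))                        # chocolate milkshake
print(act_on_second_order(first_order, long_term_value))  # cold broccoli
```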

This general idea is closely related to another recurring theme in the free will discussion commonly known as the principle of delayed gratification, or simply willpower---the ability to overcome certain base desires through personal introspection and rational consideration of the consequences. It’s a simple idea that can easily be measured and quantified using laboratory experiments. For example, the ice bucket test simply asks subjects to place their arm inside a bucket of ice and then just hold it there for as long as they can stand it []. The marshmallow test is another famous experiment that offers subjects a choice between a single marshmallow now, or many marshmallows later [].

Simple as they are, notice how these ideas provide clear meaning and functional insight about free will without fixating on a bunch of obtuse metaphysics. Even so, we’re still not quite to the point of fully defining free will in terms of a complete theory of moral accountability. It’s as if philosophers are so fixated on the minute details of free will that they never stop to ask themselves what the point of all this is supposed to be. For example, when I sentence Captain Neckbeard to a life in prison, what am I trying to accomplish? What’s the goal here? What consequences am I trying to actualize through punishment that cannot be achieved by simply letting him go?

This is a perfectly basic question that almost no one in the philosophical community ever seems to ask. It’s a real shame, too, because the moment you actually address it, you’ve effectively solved every last problem with free will. Even more embarrassing for modern philosophy is the fact that this question has already been decisively answered by official legal doctrine across the Western world. According to any introductory reference on basic criminal law [7,8], there are exactly five, and only five, reasons for the punishment of criminal behavior.

Starting with Number 1, we have the doctrine of restitution---the idea that punishment exists to fix, or set right, any harm that was caused by a particular act. For example, when you’re backing out of your driveway and you happen to run over your neighbor’s mailbox, then your punishment would be to simply compensate them for any damages and inconvenience suffered.

At Number 2, we have the doctrine of deterrence---the tendency for people to refrain from certain behaviors if they believe that doing so will result in undesirable consequences. For example, everyone knows that speeding is generally dangerous, both for yourself and for the other drivers around you. Yet the temptation to speed can also be very intense, especially when you’re in a hurry and everyone around you is moving too slow for your liking. Thus, to reduce the likelihood of everyone driving too fast for their own good, we set limits on our top vehicle speeds and then impose a modest fine for anyone caught breaking the rules.

Moving on to Number 3, we have the doctrine of rehabilitation---the tendency for individuals to modify future behavior after personally suffering the effects of a punishment. This is very similar to the doctrine of deterrence, except that we are basically attempting to deter the future misbehavior of one that has already misbehaved. For example, maybe you didn’t believe that a certain stretch of highway was really being patrolled, and so you figured you could get away with speeding. If, however, you are caught and fined anyway, then the credibility of the punishment gets reestablished and you may become more likely to follow the rules in the future.

Proceeding to Number 4, we are given the doctrine of incapacitation---the need to deny certain individuals the means and opportunity of committing certain crimes altogether. For example, imagine your eyesight is going bad and you just can’t help but drive like a maniac every time you hit the road. Since no amount of fines is ever going to prevent you from violating the rules, it eventually becomes prudent to simply take away your license and completely revoke all driving privileges.

Finally, at Number 5, we have the doctrine of retribution---the visceral satisfaction granted to society by watching bad people suffer. For all practical purposes, this doctrine is essentially little more than institutionalized revenge, in that it admittedly serves no real purpose other than to make the rest of us feel good. When you inflict harm onto society, then society tends to get very angry, and the only way to quell that anger is to inflict some sort of proportionate harm back onto you.

Notice that out of all these doctrines, retribution is the only one that seems to have any meaningful connection to the idea of libertarian free will. It’s as if good and evil are treated as though they were tangible forces interwoven into the fabric of space and time, and that only the actions of metaphysically “free agents” could somehow offset that balance. That’s why retribution is incidentally the most controversial principle of criminal punishment today [9].

Notice also that if we completely ignore retribution altogether, the other four doctrines are all perfectly compatibilist ideas. In fact, they logically must assume determinism from the outset, because none of them can be achieved without the expectation that actions taken in the present will have predictable consequences in the future. We may therefore formulate the principle of free will in the following fashion:

Free Will Finally Settled

Imagine our two identical twins again and offer them a choice between chocolate and vanilla milkshakes. All other things being equal, we should naturally expect both twins to pick the chocolate over the vanilla, and to do so continuously upon repeated iterations of this experiment. But now imagine what would happen if we tried to coerce the twins into choosing vanilla. For example, we could try bribing them, begging, pleading, threatening; anything you like. It doesn’t matter. Just make it known that actions taken now will have positive or negative consequences in the future. For the twin that does not possess free will, no amount of reward or punishment will ever alter his behavior. You could offer him a million dollars, or you could physically beat him senseless every time, but he will always go for the chocolate, no matter what. For the twin that does possess free will, there will exist some distinct threshold of reward and/or punishment that will alter his behavior---he can be convinced to choose the vanilla rather than the chocolate.
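
Here is a minimal sketch of that operational test (the incentive values and the threshold are arbitrary placeholders of my own): a “free” agent is one whose default choice can be overridden once the offered reward or threatened punishment crosses some personal threshold, while an “unfree” agent keeps choosing chocolate no matter what is on the table.

```python
def choose(default_flavor, incentive_for_vanilla, threshold):
    # Operational "free will": once the offered reward (or threatened
    # punishment) crosses the agent's personal threshold, the default
    # preference gets overridden. threshold=None models an agent whose
    # behavior cannot be altered by any incentive at all.
    if threshold is not None and incentive_for_vanilla >= threshold:
        return "vanilla"
    return default_flavor

for incentive in (0, 10, 1_000_000):
    swayable = choose("chocolate", incentive, threshold=10)
    unswayable = choose("chocolate", incentive, threshold=None)
    print(f"incentive={incentive:>9}: swayable twin -> {swayable}, "
          f"unswayable twin -> {unswayable}")
```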

Implications:
  • We don’t need a magical rewind button on the entire freaking universe. If anything, the experiment depends on the existence of memory and repetitions in order to function.
  • We don't have to perfectly replicate initial conditions down to the last atom. We just need to place individuals in relatively similar situations and modify their behavior accordingly.
  • Free will assumes some sort of mental processing power capable of making decisions. Therefore, rocks definitely don't have free will.
  • The agent must be capable of rationally understanding the risk/reward in advance. Otherwise, deterrence could not work. You can punish your cat for going potty in the wrong places, but it probably won’t deter the other cats from doing the same. They would have to be trained individually. We therefore do not hold cats in the same moral category as humans. I can train a cat, but I cannot convince a cat or deter a cat.
  • Punishment has to be credible in order to function. Even if the subject is perfectly rehabilitated, the punishment may still be necessary to deter misbehavior in others.
  • Free will is not a binary thing we simply possess or lack. It is highly variable. Some of us may be swayed by the simple joy of variety.  Others may require physical beatings before we budge. See willpower.
  • Free will is contextual. Some of us may have tremendous willpower over milkshakes, but none at all with cigarettes. 
  • “Could have done otherwise”---Nothing at all to do with the past. It’s about the FUTURE. When I put you in a similar situation in the future, can I expect you to behave differently? Everyone is thinking about this issue backwards.
  • Moral culpability---All this means is whether or not I can change your behavior in the future. Moral agents can be swayed in their behavior. Amoral agents cannot.
  • Should we punish Captain Neckbeard or not? The answer is, will the punishment serve any of the doctrines? Will I be able to deter the possible behavior of other individuals placed in similar situations? Maybe Neckbeard is rehabilitated, but what about the next human to come along in a similar situation? 
  • People with tumors/microchips in their brains cannot be deterred through the use of reward and punishment. It will therefore serve no purpose to punish them. Hence, no "free will."

Concluding Remarks:

Even if the universe is truly deterministic, there’s not much you can do with that fact.
  • The only way to physically predict our future down to the last atom is through computational simulation. However, any machine capable of such a simulation must be at least as complex as, if not more complex than, our actual physical environment. Otherwise, the machine will not possess enough computational power to predict the future in time. The future will simply happen before we can predict it in advance.
  • Any information about the future is, itself, a physical component of the present that must be accounted for as part of the prediction of the future. This creates self-defeating feedback loops that are impossible to stabilize. It’s basically a direct application of the halting problem: any information we glean about the future can be used to physically destroy that future and prevent it from happening (see the sketch after this list).
  • So even if the universe is deterministic… it might as well not be. Unless you have a machine of greater complexity than the actual universe itself, the only way to “predict” the future is to sit around and wait for it to happen. And even if you did predict the future, the prediction itself alters the initial conditions that produced the future you supposedly predicted.
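
To illustrate that last point, here is a toy, diagonalization-style sketch of my own (an illustration, not a formal proof): once the prediction itself becomes part of the agent’s input, the agent can always falsify whatever is announced, even though every step of the system is perfectly deterministic.

```python
def contrarian_agent(prediction):
    # The agent is handed the prediction before acting and simply does the
    # opposite: information about the future destroys that very future.
    return "vanilla" if prediction == "chocolate" else "chocolate"

def predictor(agent):
    # Try every possible announcement; if the agent defeats each one,
    # then no announced prediction of this deterministic system can come true.
    for guess in ("chocolate", "vanilla"):
        if agent(guess) == guess:
            return guess
    return None

print(predictor(contrarian_agent))  # None: every announced prediction fails
```
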
Notes & References
  1. Burns, J. M. and Swerdlow, R. H., "Right orbitofrontal tumor with pedophilia symptom and constructional apraxia sign," Archives of Neurology, Vol. 60, pp. 437-440 (2003) [link]
  2. Darby, R. R., Horn, A., Cushman, F., and Fox, M. D., "Lesion network localization of criminal behavior," PNAS, Vol. 115, No. 3 (2017) [link]
  3. Soon, C. S., Brass, M., Heinze, H., and Haynes, J., "Unconscious determinants of free decisions in the human brain," Nature Neuroscience, Vol. 11, No. 5 (2008) [link]
  4. ...