Imagine yourself sitting in the jury box for the courtroom trial of the infamous Neckbeard the Pirate. Looking over the list of charges, you see what appears to be quite the nefarious career: looting, pillaging, extortion, rape, kidnapping, murder… the list goes on and on. Not only that, but the defendant doesn’t even deny his actions. He confesses outright to every last crime, plus a few extra that didn’t even make the list. It seems to be a textbook open-and-shut case, except for one unusual plea made by the defense.
“Please don’t send me to prison!” says Captain Neckbeard. “I had no choice. You see, years ago, the Canadian government installed a microchip in my brain that forced me to do their evil bidding. It wasn’t really me who committed those horrible crimes. It was the agency controlling me through the implant. I was just an unwilling puppet in the whole ordeal.”
“Golly,” you think. “That’s a pretty far-fetched claim. He can’t possibly expect us to believe this, can he?”
Sure enough, however, the story checks out. The defense submits a series of CT scans that clearly show the presence of a device implanted in his prefrontal cortex. The very chip itself is then submitted as evidence, followed by a series of testimonies from neurologists, cognitive scientists, and electrical engineers, all explaining exactly how it works. Representatives from the Canadian government even take the stand and openly admit to abducting their own citizens and implanting them with mind-control devices. It was all just part of a top-secret spy project gone terribly awry.
Ask yourself: Given such evidence, how exactly would you cast your verdict? Would you vote guilty and throw the defendant in prison for the rest of his life? Or would you acquit him of all charges?
If you’re anything like most people, the evidence in this scenario would probably convince you to acquit. After all, it wasn’t really the defendant who was doing all those horrible things. It was the operator of the mind-control chip. It therefore seems perfectly reasonable to conclude that the defendant was not acting in accordance with his own free will and thus does not bear any moral responsibility for his crimes.
Now to be fair, a full-on mind-control chip is a pretty far-fetched idea, but there really are actual courtroom cases that closely resemble this exact scenario. At least one classic example was the case of a Virginia man who developed severe pedophilic tendencies after sprouting a tumor in his right orbitofrontal cortex [1,2]. Rather than directly force his actions, however, the tumor released a frenzy of antisocial desires while simultaneously blocking the part of his brain responsible for impulse control. Fortunately, once the tumor was surgically removed, the defendant was eventually restored to normal social behavior. So ask yourself now: How would you cast your vote if you were on his jury? Would you vote to acquit? Or would you vote to convict?
Again, if you’re anything like most people, you would probably vote to acquit, and that's exactly what happened in this particular case. But how far can this kind of thinking reasonably extend? For example, what if for every act of piracy, a full one million dollars was donated to the Red Cross and then used to save 100 lives? Thus, being the perfectly moral agent that he is, Captain Neckbeard had no choice but to engage in a few acts of lesser evil for the sake of a much greater good. Or better yet, what if Captain Neckbeard just enjoys being a pirate so much that absolutely nothing else in his life could possibly bring him happiness? Or what if Neckbeard just got into piracy one day because he felt bored on a Sunday afternoon? At what point do we transition from complete, merciful forgiveness to the usual imposition of criminal justice?
These simple thought experiments represent the foundation of an age-old problem known as free will. It’s a classic question that philosophers have passionately discussed for thousands of years, and it still continues to spark debate to this very day. Yet despite the gallons of ink that have been spilled over this topic, it’s surprisingly rare to find anyone with a comprehensive solution that actually works. That’s a real shame, too, because it’s not exactly difficult to provide functional answers to these kinds of questions. All it takes is a little willingness to explore the problem honestly, plus the intellectual discipline to apply rigorous standards of logical consistency. That’s why I personally find the issue so fascinating, and why I think you’ll enjoy following along as we finally settle the problems of free will once and for all.
The Twins Problem
Now before we begin, it’s important to understand that the fundamental problem with an idea like free will has very little to do with whether or not it really exists. Rather, the far more compelling problem is how best to define that term in the first place. It’s as if we all have this deep, intuitive sense of what free will ought to mean, but just can’t seem to pin it down into any hard, quantifiable terms. It’s a giant gap that undermines nearly every dedicated treatment on the subject. After all, what’s the point of engaging in a public debate when no one has even agreed on what the debate is supposed to be about? So before we even touch on the practical problems of free will, it really helps to step back and ask ourselves what exactly those two little words really mean.
To help answer that question, simply imagine yourself sitting in a room behind a table. Across from you are what appear to be two identical twins. They look the same, they act the same, and in all physical respects, they seem to be as alike as two people can possibly be. There is, however, one key difference that sets them apart. One of these entities has free will, and the other does not. Your job is to figure out which one is which, and to do so with repeatable, reliable consistency.
Ask yourself: How exactly would you go about telling the difference? What observations do you make? What experiments do you perform? What empirically verifiable distinction must we look for in order to differentiate between a being that has free will and a being that does not?
Bear in mind now that whatever answer you give to this question is, effectively, your definition for free will. It’s a textbook application of a well-known principle called verificationism, and it represents the ultimate foundation on which all human language operates. It’s an amazing philosophical tool that works wonders at cutting through the pseudointellectual background noise and getting right to the heart of such difficult ideas.
To illustrate, suppose someone tries to tell you that free will is an “immaterial construct” and thus cannot be detected or measured using the empirical methods of science. Okay, that’s fine if you want to think that, but it immediately runs into a pretty glaring problem. When Captain Neckbeard says, “I was not acting out of my own free will,” and the prosecution says, “Yes, you totally were,” pray tell, how exactly are we supposed to figure out who’s right? Do we just randomly guess? Should we assume one side is always telling the truth, no matter what? Because the moment we reject the application of any objectively verifiable criteria, the only way to settle such disputes is by pure, unfettered say-so. Microchip in your brain? Sorry, that’s an empirically verifiable distinction. How about a brain tumor? Nope. Still verifiable and thus material---nothing whatsoever to do with free will!
Clearly, any attempt to side-step verificationism is little more than a philosophical dead end. Yet despite this obvious limitation, it can still be like pulling teeth just to get a clear definition out of people. It's infuriating, too, because it means that any attempt to pin free will down with a hard definition will inevitably be met with angry accusations of “straw man” from every corner of the blog-o-sphere. Nevertheless, we have to start somewhere, and there are at least some popular definitions that do provide a workable framework for verification and analysis.
One statement in particular that tends to occur over and over again is the famous expression that free will inherently represents “a capacity to have done otherwise.” What exactly that means is open to some interpretation, but it nearly always involves an explicit rejection of predetermination. It’s a classic philosophical viewpoint known as libertarian free will, or metaphysical libertarianism, and it holds that free agents are not necessarily bound by the initial conditions of their circumstances when making decisions.
To demonstrate how this works from a verificationist perspective, simply imagine our twins being given a choice between chocolate and vanilla milkshakes. After long and careful deliberation, they both eventually conclude that chocolate is the preferred flavor and so naturally pick that milkshake accordingly. But suppose for a moment that there existed a magic rewind button capable of reversing time itself. Every last subatomic particle in the universe, including the ones making up our very own brains, would be reset to exactly where it was at some point in the past. If we were to then press this button and replicate our experiment between chocolate and vanilla, what outcomes should we expect to observe? According to most schools of thought, the twin without free will should consistently pick the chocolate milkshake every time. However, for the twin that does possess free will, there is the distinct possibility that, on occasion, he just might decide to take the vanilla.
That description may sound a little goofy, but it really is the basic train of thought provided by the overwhelming majority of thinkers on this subject. They make no effort whatsoever to tell you what free will actually is, but only to tell you what free will isn’t---namely, that free will is not a thing that can logically coexist with a deterministic universe. The two ideas are thus incompatible.
Clearly, there are some pretty serious problems with this viewpoint that need to be addressed. For starters, there are no magic rewind buttons with which to reset the entire universe. Consequently, the distinction between a being with free will and a being without is completely immeasurable. Again, when Captain Neckbeard claims that he was not acting in accordance with his own free will, how exactly are we supposed to verify such a claim? Do we really need an honest-to-goodness time machine with which to observe his actions? And exactly how many times must we watch him repeat his crimes before we are convinced that his actions were predetermined?
Obviously, that’s not ever going to be an option, nor does it even make sense to try. The past is the past, and no being can ever possibly “do otherwise” on anything that has already been done. Free will in this specific sense is therefore completely dead in the water before the boat has even sailed. Nevertheless, we cannot simply acquit every criminal in existence because of some nuanced philosophical quirk, can we? Just because the popular conception of free will tends to make no sense, does that automatically mean everything about it is completely worthless and inapplicable?
Of course not! For instance, rather than reset time itself, what if we merely replicate the initial conditions of some past experiment and then observe a replication of outcomes in the future? This turns out to be a much more workable idea because it represents something we can actually utilize in the real world. It also just so happens to be the textbook definition of determinism according to every modern theory of probability. The implication is that I don’t have to necessarily rewind the entire universe per se, but I can, in principle, replicate all of the relevant conditions that gave rise to a particular event. If our universe is indeed deterministic, then for any physical experiment you may ever hope to contrive, I can predict and replicate the outcome of that experiment with perfect consistency. It also means that, for all practical purposes, your future might as well be set in stone because every outcome that occurs will always be causally predetermined by the conditions that came before it.
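To make that replication criterion concrete, here is a minimal Python sketch. Everything in it (the seed, the noise model, the 0.5 cutoff) is invented purely for illustration; the point is only that a seeded random generator stands in for "the complete initial conditions," so replicating the seed replicates the outcome with perfect consistency.

```python
import random

def milkshake_choice(seed: int) -> str:
    """A toy 'decision' whose initial conditions are fully captured by the seed.

    The agent weighs a noisy preference signal and picks a flavor. This is
    not a model of anything real, just the determinism criterion in code:
    replicate the initial conditions, and the outcome replicates too.
    """
    rng = random.Random(seed)             # the complete "initial conditions"
    preference = 0.7 + rng.gauss(0, 0.1)  # noisy taste for chocolate
    return "chocolate" if preference > 0.5 else "vanilla"

# Replicating the initial conditions replicates the outcome, every time.
first = milkshake_choice(seed=42)
assert all(milkshake_choice(seed=42) == first for _ in range(1000))
```

The same idea scales up in principle: if a brain's "seed" (its full physical state) could be captured, the determinist claims the decision would replay just as reliably.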
So given this slightly tweaked definition, is it safe to conclude that people have free will or what? In short: probably not. After all, if our decisions are merely the product of the neural connections within our brains, then in principle I could reset those conditions and watch you repeat the exact same decision under a given scenario. Or, equivalently, if I knew the exact arrangement of every last neural connection in your brain, then I could, in principle, predict exactly how you will behave when presented with a choice.
One of the most dramatic demonstrations of this principle was published by neuroscientists at the Max Planck Institute in Germany. Using functional magnetic resonance imaging (fMRI), researchers scanned the brains of human subjects while the subjects freely pushed buttons with either their left or right index fingers. Upon analysis of the data, it was found that these decisions could actually be predicted, with greater than 50% accuracy, a full 10 seconds in advance of the subjects’ own awareness. The uncomfortable implication is that, given enough computational power and scan resolution, even human behavior itself could be predicted with the same accuracy as any other natural phenomenon.
For many, this tends to have pretty devastating implications for the idea of free will and particularly for our entire concept of criminal justice. After all, if every decision we make is merely the result of physical interactions between atomic states in our brains, then how exactly is that any different from the microchip scenario? Captain Neckbeard didn’t “choose” to commit his crimes any more than a laptop “chooses” to follow its programming. And since we don’t go around tossing laptops into prison for misbehaving, then what’s the point of doing the same thing to criminals? If, however, we could hypothetically reset the initial conditions and observe Neckbeard “doing otherwise,” then most people would generally conclude that he ought to be held morally responsible for making the wrong decisions.
Fortunately for the libertarians, there does seem to be at least one ray of hope lurking deep within the bowels of modern physics. To see how it works, simply imagine what would happen if a single neutron were freely tossed out into empty space. At first, all we would observe is a lone subatomic particle floating along at a constant velocity. If, however, we waited around long enough, then we would eventually observe a phenomenon called free neutron decay, wherein the neutron spontaneously bursts into a proton, an electron, and a neutrino. If we then repeat this experiment many times over, we will eventually observe that exactly half of the decays seem to occur in about 10.3 minutes or less, while the other half take longer.
None of this is particularly compelling so far, except for one major detail. No matter how perfectly we replicate the initial conditions, we will never be able to consistently replicate the exact moment of individual neutron decay. Try as we might, it will always be a matter of pure, unfettered probability. Nothing specifically causes this event to occur; it is a fundamental property of nature herself that subatomic particles should behave this way. It’s a famous principle enshrined in the Copenhagen interpretation of quantum mechanics, which remains the standard textbook interpretation among physicists today. The implication is that if our brains are fundamentally made of atoms, and atomic behavior is not predetermined, then it stands to reason that human behavior itself should likewise contain some trace elements of indeterminism for free will to hide in.
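The half-of-all-decays claim is easy to check numerically. Below is a short simulation assuming the standard exponential model of radioactive decay and the 10.3-minute half-life figure quoted above; every trial starts from identical "initial conditions," yet only the statistics, never the individual decay times, are reproducible.

```python
import math
import random

HALF_LIFE_MIN = 10.3                      # free-neutron half-life (approx.)
decay_rate = math.log(2) / HALF_LIFE_MIN  # per-minute decay constant

rng = random.Random(0)
# Identical initial conditions for every trial; only the outcome varies.
decay_times = [rng.expovariate(decay_rate) for _ in range(100_000)]

early = sum(t <= HALF_LIFE_MIN for t in decay_times) / len(decay_times)
print(f"fraction decayed within one half-life: {early:.3f}")  # ~0.500
```

By construction of the exponential distribution, exactly half the probability mass falls at or below one half-life, which is precisely the "half decay in about 10.3 minutes or less" pattern described above.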
This may sound a little crazy at first, but it really is a popular argument promoted by scientists and philosophers today [5,6]. It’s weird, too, because the Copenhagen interpretation isn’t exactly well-liked among modern physicists. While it may be common practice to teach this interpretation in most schools, there is also a widespread admission among physicists and philosophers alike that it's both a logical and philosophical mess. There are plenty of alternative interpretations that arguably do better [7,8], and it may only be a matter of time before someone finally demonstrates a clear, empirical distinction between them. If anything, we really just begrudgingly accept Copenhagen out of respect for tradition rather than any strict adherence to philosophical parsimony.
Ignoring that, however, the real problem with this view is that it seems to grant free will to completely pre-programmed machines. To see how, imagine a simple robot that picks out milkshakes in accordance with the decay of free neutrons. If the neutron decays in 10 minutes or less, then pick the chocolate milkshake. If it takes longer, then pick the vanilla. For all practical purposes, this robot will appear to behave exactly like the supposedly “free” twin, in that no replication of initial conditions will ever result in a perfect replication of outcomes. Yet we can also clearly see that there is nothing “free” about this configuration because the robot is still just following its programming. And since your own brain states are fundamentally governed by similar atomic events, then even your own mind is arguably just a glorified realization of exactly such a machine.
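The robot's entire "decision procedure" fits in a few lines. This is a hypothetical sketch built from the thought experiment above (the flavors and the 10-minute cutoff come straight from it); note that the program is fully specified and deterministic in its rules, yet its output cannot be replicated from run to run.

```python
import math
import random

HALF_LIFE_MIN = 10.3  # free-neutron half-life in minutes (approx.)

def neutron_robot(rng: random.Random) -> str:
    """Pick a milkshake based on one simulated neutron decay.

    The robot is 100% pre-programmed, yet no replication of its initial
    conditions (short of fixing the quantum event itself) will ever
    replicate its choice. A hypothetical sketch, not anyone's real robot.
    """
    decay_time = rng.expovariate(math.log(2) / HALF_LIFE_MIN)
    return "chocolate" if decay_time <= 10.0 else "vanilla"

# Same program, same rules, divergent outcomes. The unseeded RNG stands
# in for the irreducibly random quantum event.
choices = {neutron_robot(random.Random()) for _ in range(100)}
print(choices)  # almost certainly both flavors appear
```

By the libertarian's "could have done otherwise" test, this trivially programmed machine passes with flying colors, which is exactly the problem.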
If that wasn't convincing enough, however, then just think of it like this: Imagine being offered a choice between your favorite flavor of milkshake and a giant pile of dog feces. In principle, you’d think that one should always want to go for the milkshake every time, because milkshakes are delicious and satisfying while dog feces are grotesque and poisonous. Yet if libertarian free will really were a thing, then we necessarily must expect that, on at least some rare occasions, you would arbitrarily feel yourself overcome with the inexplicable urge to literally eat shit. Then, when asked why on Earth you did that, the only explanation you could possibly give is that some mysterious compulsion overpowered your sensibility and made you do it anyway, against all reason. That’s hardly the action of a “free” agent, don’t you think? And in what logical sense do we accomplish anything by punishing someone for that kind of behavior?
So no matter how we look at it, the idea of libertarian free will simply doesn’t work. By definition, libertarianism cannot coexist with determinism, and by definition, the opposite of determinism is pure, freaking randomness. This whole stupid debate between determinism and free will is nothing but a gigantic red herring. Neither situation provides us with a satisfying description for how a morally responsible agent ought to behave, and neither situation provides a compelling framework for the administration of criminal justice. It should therefore come as no surprise that libertarian free will is probably one of the most widely rejected ideas in the history of academic philosophy. Yet for some strange reason, the overwhelming majority of debate on this subject is still fixated on a distinction that doesn’t matter either way. It’s as if everyone is so hell-bent on answering the question of determinism that they never stop to wonder what makes the alternative scenario any better.
To that end, philosophers have developed all kinds of alternatives to libertarianism collectively known as compatibilism---the idea that, whatever we decide free will happens to be, it is still logically “compatible” with a deterministic and/or random universe. It’s a huge variety of competing theories unto itself, and we could easily spend hours picking apart the more noteworthy contenders. But rather than get bogged down in an endless spiral of even more bad definitions, let’s just step back for a moment and ask ourselves why on Earth we care so much about free will in the first place. That is to say, when we sentence people like Captain Neckbeard to a life in prison, what exactly are we trying to accomplish? What’s the goal, here? What consequences are we trying to actualize through the act of punishment that cannot be achieved by simply letting him go? As it happens, criminal justice theory traditionally recognizes five such goals.
Starting with Number 1, we have the doctrine of restitution---the idea that punishment exists to fix, or set right, any harm that was caused by a particular act. For example, when you’re backing out of your driveway and you happen to run over your neighbor’s mailbox, then at least one form of punishment would be to simply compensate them for any damages and inconvenience suffered.
At Number 2, we have the doctrine of deterrence---the tendency for people to refrain from certain behaviors if they believe that doing so will prevent any undesirable consequences. For example, everyone knows that speeding is generally dangerous, yet the temptation to do so can also be very intense. Thus, to reduce the likelihood of everyone driving too fast for their own good, we set limits on our top vehicle speeds and then impose a modest fine for anyone caught breaking the rules.
Moving on to Number 3, we have the doctrine of rehabilitation---the tendency for individuals to modify future behavior after personally suffering the effects of a punishment. For example, maybe you didn’t believe that a certain stretch of highway was really being patrolled, and so you figured you could get away with speeding. If, however, you are caught and fined, then the credibility of the punishment gets reestablished, and you become much more likely to follow the rules in the future.
Proceeding to Number 4, we are given the doctrine of incapacitation---the need to deny certain individuals the means and opportunity of committing certain crimes altogether. For example, imagine your eyesight is going bad and you just can’t help but drive like a maniac every time you hit the road. Since no amount of fines is ever going to prevent you from violating the rules, it eventually becomes prudent to simply take away your license and completely revoke all driving privileges.
Finally, at Number 5, we have the doctrine of retribution---the visceral satisfaction granted to society by watching bad people suffer. When you go out and break the speed limit, then there must be something inherently evil about you that just deserves to be punished. We therefore impose speeding tickets on you for the pure sake of hurting you.
You might have noticed that the doctrine of retribution is conspicuously out of place on this list. While the other doctrines exist to serve clear, pragmatic goals, the last one is essentially just institutionalized revenge. Even the word itself literally means "payback" in Latin! When you inflict harm onto society, then society tends to get very angry. And the only way to quell that anger is, apparently, to inflict some sort of proportionate harm back onto you. It should therefore come as no surprise that retribution and libertarian free will nearly always go hand in hand. Both treat good and evil as ethereal forces interwoven into the fabric of space and time, as if only the actions of metaphysically “free” agents were somehow capable of offsetting this delicate cosmic balance. That’s also why retribution is the most controversial doctrine of criminal punishment by far. Philosophers and legal experts around the world have written heavily about the absurdity of this doctrine [10-12], and it is only by sheer institutional inertia that it still remains an official part of our criminal justice system today.
Notice also that if we simply disregard retribution altogether, then the other four doctrines are all perfectly compatible with deterministic presuppositions. They embrace the idea that actions taken today will necessarily result in predictable human outcomes tomorrow. It's a perfectly pragmatic system with tangible social benefits, which means we're going to continue punishing people anyway, whether or not libertarian free will is a real thing. So why not just bite the bullet already and use this as our foundation for defining free will, since apparently it's already the foundation for our entire system of criminal justice today? For example...
Free Will Finally Settled
Imagine our two identical twins again and offer them a choice between chocolate and vanilla milkshakes. All other things being equal, we should naturally expect both twins to pick the chocolate over the vanilla, and to do so consistently upon repeated iterations of this experiment. But now imagine what would happen if we tried to convince the twins to choose vanilla. For example, we could try bribing them, begging, pleading, threatening; anything we like. Just make it known that actions taken in the present will have positive or negative consequences in the future. For the twin that does not possess free will, no amount of reward or punishment will ever alter his behavior. You could offer him a million dollars, or you could physically beat him senseless, but he will always go for the chocolate, no matter what. For the twin that does possess free will, there will exist some distinct threshold of reward and/or punishment that will alter his behavior---he can be convinced to choose the vanilla rather than the chocolate.
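This operational definition is simple enough to write down directly. In the sketch below, every number is invented for illustration; the only structural difference between the twins is whether a persuasion threshold exists at all.

```python
def choose_flavor(incentive, threshold):
    """Operational twins test: pick vanilla iff the offered reward (or
    threatened punishment) crosses the agent's persuasion threshold.

    threshold=None models the twin whose behavior no incentive,
    however large, can ever alter. All numbers are illustrative.
    """
    if threshold is not None and incentive >= threshold:
        return "vanilla"
    return "chocolate"  # the default preference, all else being equal

def free_twin(incentive):
    return choose_flavor(incentive, threshold=100.0)

def unfree_twin(incentive):
    return choose_flavor(incentive, threshold=None)

assert free_twin(0) == "chocolate"            # left alone, picks the favorite
assert free_twin(1_000_000) == "vanilla"      # but can be bribed or coerced
assert all(unfree_twin(x) == "chocolate" for x in (0, 1e6, 1e12))
```

Notice that the test never requires rewinding time: it only requires varying the incentives across future trials and observing whether behavior responds.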
Now in all fairness, we don’t have to explicitly define free will in exactly these terms, and there are probably dozens of other formulations that could do it more rigorously. That’s not the point. The point is that free will is already so heavily intertwined with the ideas of punishment and moral culpability, that we might as well use those as the foundation for a functional definition. There's no need to invoke any magic rewind buttons for the entire stinking universe when we can easily achieve a perfectly satisfying result by just observing the natural consequences of reward and punishment.
For instance, take the classic courtroom scenario of Captain Neckbeard the Pirate. Should we convict, or should we acquit? To answer that, we simply ask ourselves whether or not the institution of punishment will deter future misbehavior under similar circumstances. That is to say, if I punish Captain Neckbeard for his crimes, can I expect that punishment to deter both himself, and other agents, from engaging in piracy in the future?
For the special case of mind-control devices, the answer is obviously no. No amount of reward or punishment will ever deter anyone from committing a crime when there’s a full-on microchip in their brain that's forcing them to do it anyway. If, however, Captain Neckbeard simply got into piracy one day because he was bored, then there is every reason in the world to suspect that the institution of severe punishments will greatly deter other bored individuals from following a similar path. We would therefore say that brain chips most definitely rob people of their free will, while casual boredom on a Sunday afternoon does not. It's a perfectly clear distinction that's both meaningful and practical, and it doesn't require the invocation of any obtuse metaphysical nonsense.
Notice that we can immediately answer all kinds of wacky philosophical questions through the adoption of this kind of framework. For instance, have you ever wondered why we don't hold our animals to the same degree of moral accountability as humans? Like, if my cat goes potty in the wrong place, how come we don’t punish him like we would a person who commits the same offense? Under compatibilism, the answer is quite simple. While I might be able to train my cat to use the litter box through careful application of reward and punishment, I cannot deter a cat by making an example out of his peers. That’s because a key ingredient for free will is a general capacity for complex rational thought within a broader social context. It makes no sense to use deterrence against those who are mentally incapable of projecting the example of others onto themselves. It is therefore perfectly consistent to speak of our animals as possessing perhaps some semblance of free will, to varying degrees, but not nearly to the same magnitude as us humans.
What about robots? What would it take to finally declare that some artificially-constructed robot has officially “passed the singularity” and developed a free will of its own? Again, it’s not a hard question to answer, so long as we consistently apply the definition. At what point will the institution of reward and punishment either deter or encourage certain behaviors?
To illustrate, imagine a possible world wherein every home comes preinstalled with its own robot butler. Now imagine that, for whatever reason, our butlers tend to act out in strange ways. For instance, maybe they smash up our dishes and then rearrange our furniture while we sleep. Under most circumstances, we would simply correct the malfunction by tracking down the faulty lines of code and then updating them accordingly. In the future, however, there might not be any code to fix. Most machine learning algorithms today are not based on pure, iterative logic, but on neural networks derived from fitness functions acting on the raw experience of the environment itself. Thus, if we ever want to correct our robots’ misbehaviors, we may actually have to train them through the institution of reward and punishment. And if, by some happenstance, our robots reach a point wherein they can learn from the experiences of each other, then we wouldn’t have to train them all individually to achieve the desired result. Instead, we could single out an individual robot and then make a very public spectacle out of its punishment. If doing so results in a marked deterrence of future misbehaviors, then we will have officially satisfied the definition of free will. And why not? For all practical purposes, that’s basically how we govern human social behaviors already, so it makes perfect sense to describe a hypothetical robot population in exactly the same terms.
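For a sense of what such training might look like, here is a bare-bones reward/punishment learner in the style of a simple bandit update. It is a toy sketch of the essay's point, not any real robotics stack; the action names, learning rate, and reward values are all invented.

```python
import math
import random

class ButlerPolicy:
    """A toy reward/punishment learner (a bare-bones bandit update).

    Feedback after each act nudges the preference for that act up or
    down, so punishment literally reshapes future behavior.
    """
    def __init__(self):
        self.pref = {"wash_dishes": 0.0, "smash_dishes": 0.0}

    def act(self, rng):
        # Choose probabilistically, favoring the higher-preference action.
        gap = self.pref["wash_dishes"] - self.pref["smash_dishes"]
        p_wash = 1.0 / (1.0 + math.exp(-gap))
        return "wash_dishes" if rng.random() < p_wash else "smash_dishes"

    def learn(self, action, reward, lr=0.5):
        self.pref[action] += lr * reward  # punishment = negative reward

rng = random.Random(1)
butler = ButlerPolicy()
for _ in range(200):
    action = butler.act(rng)
    # Reward good behavior, punish misbehavior.
    butler.learn(action, reward=+1.0 if action == "wash_dishes" else -1.0)

# After training, misbehavior is almost entirely deterred.
sample = [butler.act(rng) for _ in range(100)]
print(sample.count("smash_dishes"))  # prints 0
```

The social-deterrence step would simply mean copying one butler's learned preferences (or its punishment history) into its peers, so that a single public example updates the whole population.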
But hey. Maybe that’s not good enough for you. Maybe you think it’s either libertarian free will, or nothing at all. That’s fine if you want to think that, but it’s not going to change our presently accepted doctrines of criminal justice. Whether the universe is deterministic or not, we are still going to use reward and punishment as our philosophical basis for moral culpability. And since the notion of free will is already inextricably linked to that principle, then we might as well just call it by the same, established name. Thus, when libertarians speak of “a capacity to have done otherwise,” they literally have it backwards. It’s not about changing events that already took place in the past, but about steering events that might take place in the future. We don't have to replicate initial conditions down to the very last atom when we can easily achieve everything we want by merely altering the initial conditions of similar situations that have yet to pass.
So the next time you find yourself in a courtroom wondering whether or not to throw someone in jail, there now exists at least one philosophical foundation on which to guide that decision. It doesn’t even have to be perfect, either, but just "good enough" to provide a functional justification to the pragmatic satisfaction of society. I freely admit that there are probably dozens of gaps in my presently described compatibilism, and everyone is more than welcome to chip in and help refine it over time. But you can’t replace something that works with nothing that doesn’t. Metaphysical libertarianism cannot help you in this situation or any other. Compatibilism does.
Thank you for listening.
Notes & References
- Burns, J. M. and Swerdlow, R. H., "Right orbitofrontal tumor with pedophilia symptom and constructional apraxia sign," Archives of Neurology, Vol. 60, pp. 437-440 (2003) [link]
- Darby, R. R., Horn, A., Cushman, F., and Fox, M. D., "Lesion network localization of criminal behavior," PNAS, Vol. 115, No. 3 (2017) [link]
- See, for example, [link]
- Soon, C. S., Brass, M., Heinze, H., and Haynes, J., "Unconscious determinants of free decisions in the human brain," Nature Neuroscience, Vol. 11, No. 5 (2008) [link]
- Hartsfield, T. "Quantum mechanics supports free will," Real Clear Science (2013) [link]
- Bourget, D. and Chalmers, D. J., "What do philosophers believe?" Philosophical Studies, Vol. 170, No. 3 (2014) [link]---Less than 14% of philosophers accept or lean towards libertarianism.
- Barnett, R. E., "Restitution: A new paradigm of criminal justice," Ethics, Vol. 87, No. 4 (1977)