This is Part 1 of my conversation with Dr Jessica Wolfendale, who is an author, ethics professor and expert on torture and war crimes. She joined me to try to tackle the tension between one’s perception of oneself and one’s actions, within the context of atrocities carried out in war.
She does this by asking questions such as, ‘How do good people commit atrocities, like torture, and how are their views on killing impacted?’ and ‘Does the military training process make excessive violence acceptable and permissible in certain contexts, such as war?’.
Jessica is the author of ‘Torture and the Military Profession’, arguing that the prevalence of military torture is linked to military training methods that cultivate beliefs connected to crimes of obedience. She also co-authored ‘War Crimes: Causes, Excuses, and Blame’, with Matthew Talbert, where they unpack factors that can lead to war crimes as well as wrestle with the justness of responsibility and blame attributed to perpetrators.
Some of the topics we covered in Part 1 are:
- Capital punishment as detached, humane killing
- The normalisation of violence
- How do we live with what we’ve done?
- Forgiveness, self-forgiveness and atonement
- Obedience to authority
- The Milgram Experiments and the Good Samaritan Study
- Circumstances and the situational account
- The justification of torture in war
Part 2 will be released on the 8th of September, where we discuss topics such as:
- The dispositional account and cognitive-affective personality system
- Military culture and socialisation
- Freedom and resentment
- Moral ignorance is by no means an excuse
- Integrating the victim’s perspective into military training
- The power of reconciliation
- Jessica’s future work on depictions of war crimes
If you like what you’ve heard, please consider liking and reviewing the show wherever you get your pods. You can also support the show on our Patreon and Buy Me A Coffee page on the links below:
Patreon: https://www.patreon.com/thevoicesofwar
Buy Me A Coffee: https://www.buymeacoffee.com/thevoicesofwar
—
Listen to the podcast here
Dr Jessica Wolfendale – On Torture, War Crimes And Moral Responsibility – Part 1
In this episode, I bring you the first part of a two-part interview with Dr. Jessica Wolfendale. Jessica is a Professor of Philosophy at Case Western Reserve University. She’s the author of Torture and the Military Profession as well as the co-author, together with Matthew Talbert, of War Crimes: Causes, Excuses, and Blame, which is a book we’ll mention a lot. Jessica has published numerous articles and book chapters on topics including military ethics, terrorism, security, war crimes, and the ethics of torture. Jessica, thank you very much for joining me on the show.
Thanks for having me.
As I mentioned to you before we started, that book is amazing. I don’t know if it’s the right word to say that I’ve thoroughly enjoyed it given the dark topic, but if one can use that word to describe such a book, then it’s appropriate. I found it fascinating and look forward to chatting. Before we dive into the dark subject matter of war crimes, maybe we can find out a little bit about your own background. What motivated your entry into ethics in the first place, and then particularly into this study of torture and war crimes?
I had a slightly unusual path to doing a PhD in Philosophy. I did my undergraduate degree at the ANU in Canberra. I did a major in Philosophy, but it wasn’t my passion. It was when I was watching an Anzac Day parade that I recall starting to wonder about how soldiers are trained to kill. I was wondering about what that process was like for soldiers and how it affected their views about killing. That very specific question then led me to think, “I want to go back and do more studies.” That’s what led me to do a PhD at Monash University after doing a bit of Master’s qualifying work.
My Master’s qualifying thesis compared soldiers and executioners as two professions that require people to be trained to kill, and how that affected the justification and philosophical discourse around those practices. My PhD dissertation focused on the moral psychology of military torturers. My first book, Torture and the Military Profession, is based on my PhD dissertation.

I was always interested in how it was that people who viewed themselves as good people, who are in, say, military forces from democratic states committed to human rights, could end up committing atrocities like torture. That was always a question that motivated me and still does drive a lot of my research. It’s expanded since then, but ultimately, it is one of the questions: how is it that societal and institutional practices, as well as individual psychology, are shaped to make violence acceptable and permissible in certain contexts?
You mentioned that it’s not that you were fascinated with the meaning of life, but it seems to me, in almost a bizarre way, you were fascinated with the meaning of death, or the process of how we inculcate the sense of killing in soldiers. I find the comparison with executioners interesting. I haven’t read that book or the thesis, but what do you mean by executioners? How do we define executioners in this sense?
Looking specifically at capital punishment systems in the United States: who are the executioners? How are they trained to perform that role? What does the performance of that role tell us about the moral framework surrounding capital punishment in the US? It is similar to how we approach training to kill, with these tensions between viewing violence as necessary.
I don’t touch on the arguments for and against capital punishment, or how the training of executioners factors into that, but rather how that training shapes the way in which capital punishment is viewed as being almost professional, detached, humane killing. This happens through the use of specific mechanisms such as lethal injection, but also practices and ways of talking about it, ways of interacting between the guards and the prisoners, and the design of the room in which the execution takes place. All of this shapes a moral perception of what’s going on in a particular way, and in doing so masks the true violence of capital punishment. The state is destroying human life. Whether or not you think it’s justified, that’s still what the state is doing.

I agree. It’s completely relevant in this context.
In the military context, too, you see those contradictions between, on the one hand, a sense of, “If we are going to have a military force and we think that war is justified, then killing is necessary. Soldiers must be trained to kill.” How do we do that in a way that enables them to do so effectively when they need to but also, hopefully, enables them to recognise the distinction between lawful and unlawful killing and allows them to retain some sense of moral goodness?
In some ways, some of the tensions you find in that process play out in the ways in which war crimes can come to be normalised and justified as well. In a sense, a question about killing is a gateway to some of these larger questions about normalisation and the ways in which different forms of violence against others come to be normalised and justified. It comes to be reconciled with one’s conception of oneself as a good person.
One of the things I’m very fascinated with, both more broadly and specifically in relation to political violence, is how we live with what we’ve done. In some of the research I did on the training of torturers in my dissertation, and some of the work for the war crimes book, the question is what it means to be a person who’s committed an act like this. This is also true in relation to soldiers who have killed lawful combatants and don’t commit war crimes. How do you make sense within yourself of what you’ve done? How do you reconcile it within yourself? It raises questions about forgiveness, self-forgiveness, atonement, and how we make sense of who we are and what we’ve done.
Also, in relation to our social group. I’m thinking about the execution part. I’d imagine there would be fewer executioners who would have PTSD or moral injury than there would be soldiers who killed people in combat. One reason is that firstly, they know what they’re going to do. Secondly, there’s an entire ceremony. There’s the room. There are roles and embodiment of a particular role that you know exactly what you’re going to do. Certainly, the act has been vindicated by your society through the law. It has been made law that this person shall no longer live. You are merely the tool that presses the button, so to speak.
There is some evidence. There’s very little research on executions in the US. There was one book by a guy called Robert Johnson called Death Work. He wrote that maybe in the late ‘80s. He talked to execution teams in a number of states. As far as I’m aware, that’s the only book that talks to people in this role, which, in some ways, makes sense. It’s not something people want to volunteer that they’re part of.
There is some evidence that there were elements of PTSD, at least for some of the guards who were engaged in the execution role. The very process of being involved in a killing activity that’s curtailed and given this veneer of legitimacy and humaneness itself can cause internal tension. There are two levels of knowledge. On the one hand, you know you’re killing someone against their will. On the other hand, you are engaged in a pretence that you are a cog in the machine and that you’re not responsible for their death.
That speaks intuitively to me about the topic we’re going to talk about, and that’s soldiers. It’s what soldiers experience regardless of how much we prepare our soldiers for what they’re going to do. If we strip back the reality of war and combat, it is that, ultimately, a military exists for the purpose of killing another military or another social group that’s threatening yours, whether they’re a formal military or not. That has a price.
I interviewed Ned Dobos. He, quite credibly, argues that by joining the military, ever so slightly, you’re starting the process of desensitisation to the idea of killing, which in itself ends up being, for most of us, a mild form of what he called moral injury. For others who are either intimately involved in the act of killing, exposed to it, or think about it and haven’t reconciled the idea, it becomes something much greater.
It’s interesting that you’re talking about Ned because I was talking to him. I’m a partner investigator on an ARC grant that he’s put in on moral injury. Matthew Talbert and I have written a book chapter on moral injury. Our view differs somewhat from Ned’s in that we worry about the way the language of moral injury is used to encompass a whole range of different forms of moral distress without any clear understanding of why or under what conditions distress counts as injury. Ned does talk about this sometimes. Lack of distress is sometimes described as injury too.
In our view, you can’t even talk about injury without first having a conception of moral health. Physical injury implies a conception of physically healthy functioning. Not every kind of pain that you suffer is necessarily an injury. In some ways, I would probably push back against Ned a bit and say desensitisation is not necessarily a moral injury in itself. There are other professions that require desensitisation. Surgeons have to be desensitised to the sight and sound of blood and cutting up bodies. Doctors and nurses in general, in fact, have to be able to interact with extreme human distress. They sometimes cause distress, in a way that doesn’t mean they collapse in a heap every time they deal with a patient.

Desensitisation, even in cases where you’re doing something harmful, is not necessarily an injury unless it rises to the extent where it impairs someone’s ability to act as a moral agent or to engage in interpersonal relationships. In some ways, our threshold is probably a little higher than his. We do think that there are certain ways in which military culture and training could inflict moral injury in the way that we define it, as an impairment of moral health.
If I can ask a question on that, I wonder if we can then say that the desensitisation component is a necessary but not a sufficient condition for immoral behaviour. The reason I’m saying this is because we know that war, or the callousness of war, desensitises you to the act of killing or the ambience of war. I think of myself as a ten-year-old child in Bosnia. The first time I heard that somebody close to me had died, I cried. Within two months, after I’d heard of 30 people who had died, I acknowledged their deaths in a manner that was respectful, but I wasn’t as affected as I was by the first one.
I’ve interviewed Special Forces soldiers in particular who’ve been engaged in the act of killing time and time again. What struck me as powerful is that it’s interpreted as part of the job. That is a desensitisation that is, in my view, necessary for them to do their job. It’s also one that I would consider, and maybe I’m naive here, a necessary piece of the puzzle in order for you to then go and carry out something that, later, in the comfort of our homes, is described as a war crime.
That’s true.
I’m sorry. I’m putting you on a spot here.
I’m going to say no because I do think you can have war crimes that occur in the absence of desensitisation. I could imagine that happening.
What would be an example?
When someone commits a very personal war crime against someone who they specifically hate, that doesn’t seem like desensitisation to me. We might call it something else. The kinds of war crimes that I’m particularly interested in tend to be large-scale institutionalised war crimes. In that case, desensitisation is certainly part of that picture. It’s certainly not sufficient.
I agree. Otherwise, we’d all be doing it, right?
Yes. The anecdote that you gave was interesting because it also points to the question of prevention, which is getting ahead of things a little. When we say we should teach people to be more empathetic, it’s like, “Maybe.” Empathy is such a limited resource for human beings.
It’s dangerous.
It is a very limited resource that is depleted by all kinds of things that are not necessarily morally problematic. We can’t feasibly feel empathy for 100 people. Empathy is not a magic bullet that’s going to suddenly make war crimes disappear, because you’re talking about large-scale killing, particularly in institutionalised cases. Some of it comes through overexposure, and some of it is that our emotions are very fragile in that sense.
This is why I, for example, always thought that when acting out of emotion, you didn’t get moral credit for it even if what you did conformed to what duty required. Why? It is because emotions are unreliable. One day, I might feel all soft and fuzzy and give lots of money to the homeless. Another day, I might be exhausted and tired and not feel like it. It can’t be the case that doing the right thing is contingent on whether I feel soft and fuzzy.
That’s wonderful. We’ll come back to that because we know also from research that we can prime behaviour. I do have to ask more questions to reset us because this is fascinating. What is the main thesis of your book? This is so we know what we’re talking about quite clearly because I’m sure some of our audience are going, “Where are they going?” What is the thesis of War Crimes: Causes, Excuses, and Blame?

Matt and I had to come up with a 50-word summary, which I’ve completely forgotten. We were interested in two questions. Our focus or interest is more on institutionalised war crimes. Why do they occur? Why do people commit war crimes believing that what they’re doing is the right thing? Those are the kind of cases we were particularly interested in.
While we do discuss some cases of what we call heat of battle crimes, in those cases, typically, the perpetrators don’t believe that they’re acting in accordance with duty. They don’t typically claim to justify their behaviour by reference to military virtues, honour, or something like that. How do you get this case where people who view themselves as good people can come to believe that something like torture, widespread genocidal killing of civilians, or institutionalised rape is something that’s permissible or even morally good? How does that process occur? That’s one part of the question.
The other part of the question we were interested in is what this tells us about the moral responsibility of perpetrators. The two parts of the book intersect in that how we explain war crimes is going to have implications for how blameworthy we think perpetrators are. Some explanations for war crimes, such as the ones that we criticise, which we call the situationist accounts, if they are correct, do tend to lead to the view that war crimes perpetrators are not morally responsible. There is a strong connection between how you account for war crimes and your views about the responsibility of war crimes perpetrators.
Ultimately, our thesis is that we see war crimes as arising from an interaction, in a back-and-forth way rather than a straightforward way, between situational factors and the individual. The interaction between an individual person’s own values, their self-conception, how they see themselves, what matters to them, and how they construe the situation they’re in, as well as the situation that they’re placed in, plays an important role, we think, in understanding why an individual person commits war crimes.
At a broader level, the kinds of ways in which they make sense of their actions in a context are shaped very much by social, institutional, and political narratives that provide a framework of meaning over a particular conflict. As for our view about responsibility, I can go into that in more detail; you do have a question about that.
Maybe if we go into that, maybe we can explore a little bit more what we mean firstly by the situationist perspective and the case studies that you use, and then discuss some of the criticism of it.
Situationist is a term that can describe a range of different views. The views that we are describing by that term come from philosophical discussion of a set of experiments in social psychology that were conducted from the 1920s and then all the way through to the 1980s. The most famous would be Stanley Milgram’s experiments on obedience to authority. Milgram’s experiments, for those of you who don’t know about it, involve people being required to give what they believed to be increasing levels of electric shocks to someone else who was described as the learner. The idea was if the learner gave a wrong answer, the person had to give them an electric shock which would go up each time.
There were many different variations. In some, the learner, the person receiving the electric shocks, was visible but not audible to the subject of the experiment, the person giving the shocks. In others, the learner was only audible. In one case, the learner was right next to the person giving them the shock, who had to physically put the learner’s hand on the plate to give them a shock. The background is that Milgram was interested in the question of why it was that so many ordinary Germans seemed to participate in and facilitate the genocidal program of the Nazis. He was like, “How could all these ordinary people do this?”
Correct me if I’m wrong, but the collective brain trust thought that only 1% of the participants would go to the lethal or high end, which, as I’m sure you’ll let us know, wasn’t the case.
That’s right. He asked psychology students and his peers to say, “How many people will continue giving shocks to this person up to the highest level on the board?” They were like, “Maybe 1% or 2%.” In the version of the experiment in which the learner, the person receiving the shocks, was audible, 60% continued giving shocks until the very end of the board.
The learner is a confederate of the experimenter, so they’re in on the fix. The learner would say various things like, “Stop. I’ve got a heart condition. Let me out.” There were points along the steps of electricity, as they got higher, at which the learner would say, “Stop.” The idea was to see how many people would keep going after the learner had said, “I want to get out of here. Stop. I’ve got a heart condition,” and then eventually fallen silent. People kept giving shocks.
There are videos of this. It’s amazing to see a video of the human psyche of it.
What’s interesting, too, is that people weren’t blithely going ahead and giving shocks. You see them very distressed. They’re disturbed. Some people stop and ask for reassurance. There are lots of different kinds of dynamics going on. Interestingly, in the version of the experiment when the learner is sitting next to the subject, so the subject has to physically put the learner’s hand on a metal plate that shocks, the rate of obedience was still about 30%. This is when the person sitting next to them is going, “I don’t want to be here. Stop it.”
It’s great. It was shocking. It raised a whole lot of questions about how we account for this. People have all kinds of different theories. There are a couple of other experiments that are also commonly talked about. I’ll come back to Milgram in a moment, but I want to mention another one, which is often talked about: Darley and Batson’s Good Samaritan experiment. This is one where the subjects were all students at a theological seminary.
The irony is beautiful.
Some of them were told they had to give a talk about the Good Samaritan parable. Some of them were told they had to give a talk about occupations for clergy. Regardless, some of them were then told that they were going to be late for the talk and had to leave the room. Some were told that if they left, they’d be right on time. Some were told, “You’ve got plenty of time.” As they walk from the room where they’re interviewed to the place they’re supposed to give the talk, they pass a person who is supposedly in distress. That person is in on the experiment; it’s an actor. The question was: who’s going to stop and help this person in distress?
What made the difference wasn’t whether they were giving a talk on the Good Samaritan. It was whether or not they thought they were in a hurry. Of those in a hurry, maybe 10% stopped and helped the person. I can’t remember the exact rates, but the rate goes up if a person wasn’t in a hurry at the time; then they’ll stop and help. There are a bunch of other experiments which I won’t go into. The upshot, and the way these have been interpreted by some philosophers, is that we generally think that people’s behaviour is explainable by reference to their character.
A compassionate person, if they see someone who’s in distress in the street, they will stop and help them. Why did they do that? It is because they’re a compassionate person. We refer to character traits like that to explain and predict people’s behaviour. It turns out, according to this particular theory, that our behaviour seems to be influenced disproportionately by situational factors that we think shouldn’t have such a big influence.

Whether or not I stop and help someone in distress surely shouldn’t be influenced by whether or not I happened to be in a hurry. That doesn’t seem like a significant enough situational factor to make all the difference that it does. That’s the issue. We all accept that extreme situational factors will make a big difference to someone’s behaviour, but some of these involve things like scent: if you walk past a bakery, are you more likely to help someone who drops their papers in the street? It’s that kind of thing.
These are important findings in the sense that they, in a way, remove the idea that we are independent, autonomous, rational creatures. In economic terms, we had rational economic theory, with ideas like the trickle-down effect or that all boats rise equally, and that’s been disproven. We’re potentially realising that the same applies to our rational-actor thinking more broadly.
There were experiments showing that if you’re holding a warm drink, it will impact how warm you perceive others to be or, in other words, how warmly you act. There are stacks of these types of experiments, including one which I found fascinating. I won’t recall the exact details of it, but it involved the honesty box in a cafeteria, where you get a drink and put your dollar in a box that has a printed pair of eyes stuck above it. It’s not a person in any way, just a pair of eyes. The amount of money in the box went up by something like 60%. I’ll need to double-check, but it was something significant. This is the idea of being observed and how differently we react to it. Most of us wouldn’t think that.
We would like to think, “Whether or not I’m generous shouldn’t be dependent on whether there are googly eyes stuck to a box.” We’d like to think, “It’s because I’m a generous person.” It does seem to challenge both common conceptions of the relationship between character and behaviour and also individual self-conceptions; it casts doubt on my certainty that I would behave in a compassionate way. I would hope I would say no in Milgram’s experiment. I really hope so, but I don’t know. Maybe I wouldn’t.
Statistically speaking, let’s assume for a moment that the experiment is accurate.
That’s right. His experiment has been replicated a number of times with similar results in different countries. Some people think, “Maybe it’s just America.”
I read one on nurses as well; I’m not sure if this is in your book. It is one where a doctor who is not a resident doctor at that particular hospital calls a nurse, whichever nurse would pick up. It was staged, but it was real: it took place in an actual hospital, an uncontrolled setting rather than an experimental one. It was a doctor they had never spoken to and had never heard of. The doctor was an actor.
He asked the nurse to administer a drug, to a particular patient, that they knew would be problematic or beyond the recommended dosage. Something like 9 out of 10 did so, which is a huge indicator of this idea of, “This is a person in authority, somebody whom I should obey by my training and by the role that I’m assuming, the role of a nurse. There’s an inculcated hierarchy that exists within this role play.” When I say role play, I mean this subcultural behaviour that we have that allows us to negotiate who’s who in this particular zoo.
Some people think that the way to think about the Milgram experiments is not so much that the subjects were all terrible people but rather that maybe they shared a disposition towards obedience. This is a trait that is inculcated in our society in relation to the assumption of authority and certain kinds of role figures, doctors in particular. Some people theorise that it doesn’t disprove character so much as show that dispositions like obedience might be stronger than we would want them to be in a context like that.
It takes a lot to step outside of the norm.
There are also other factors in that experiment that are interesting. Milgram himself didn’t view his results as being evidence that there’s no such thing as character. He thought there were a lot of different factors that explained, or were part of the picture of explaining, why some people would continue. One of them was the gradual nature of the process. You start off giving a small shock, and then it’s incremental; each time, you go up one level. At what point do you say no? “We’ve gotten this far. I didn’t say no then, so how do I say no now?”
Another part of it, which was fascinating and something I saw when I looked at research on the training of torturers, is this focus on the logistics of the task required. This is the focus on pressing the levers carefully, doing the technical aspects carefully. They’re like, “My responsibility is doing this technical task correctly; whether or not it’s justified is someone else’s responsibility.” You see this in torturers, too.
To clarify, because I love how easily that rolls off the tongue like, “These torturers that I interviewed,” give us a context of who these people are. What kind of torturer are we talking about?
I looked at research on torturers from Greece and from a number of countries in Latin America, like Brazil and Uruguay. In the case of the Greek torturers, the woman who did that research interviewed a lot of them who were in prison at the time. The same patterns are referenced across these different contexts and different countries.
You see some similarities between how torturers describe how they were trained but also how they talk about themselves. There is an emphasis on professionalisation. Professionalisation is construed as, “My role is to carry out this particular kind of skill to the best of my ability.” That’s divorced, in their mindset, from larger questions of morality or legality. If I’m a torturer, I’m a little bit like the person in the Milgram experiments who says, “My job is to press the levers very carefully. If I do it correctly, it is the experimenter who’s responsible, whether it’s moral or not.”
There was one torturer discussed in another book, by an author called John Conroy, which recounts the story of a torture victim, maybe in Brazil. The victim recounted later that when he had been released, his torturer had said to him that when the revolution came, he, the torturer, would be available to torture whoever they wanted because he was a professional. It is like the professional executioner who works for whoever’s in charge. They’re like, “I have this skillset. I’m proud of my professional achievements.”
Another torturer spoke, with a sense of pride, of the fact that all his tortures and his murders only took place in the name of duty, so he never killed anyone off-duty. That was an important distinction for him and very important to his self-conception as a professional. That’s a slight sidebar. To circle back to situationism: a straight-up simplistic situationist account of war crimes will say something like, war crimes occur because of battlefield situational forces. We agree with that; why not? Extreme stress, fear, tiredness, and exhaustion undermine someone’s ability to regulate their actions or to know right from wrong.
What situationists also sometimes say, and John Doris is the person we talk about most, is that it’s not just battlefield forces. It’s also things like military training, group bonding, ideologies, and dehumanisation. All of these things are also external situational forces. They shape the soldier’s mindset such that the soldier will end up believing that certain things are okay when they’re not.
The situationist explanation of something like institutionalised torture would say something like, “These people come to believe that torture is okay. Why do they believe that?” It is because they’ve been exposed to these situational pressures. These are long-term institutional situational pressures that have, in a sense, corrupted and undermined their ability to know right from wrong. They come to have this belief that torture is okay. In a sense, that belief is put in them from outside by virtue of these institutional forces. They’re not able to then make the right kind of moral judgment. That’s a simplistic situationist account of war crimes.
You draw some criticisms of this particular point. What are some of that criticism?
There are two main criticisms. I’m going to leave aside the heat-of-battle cases. Take the institutionalised torture case, or a genocidal case as well. The problem with these accounts is that, first of all, they don’t explain the differences that you see in perpetrators. Perpetrators of war crimes who have been exposed to pretty much the same training and ideologies behave differently. Some people engage in war crimes almost enthusiastically. Others are extremely reluctant to participate; even if they do think that what they’re doing is the right thing, they might still feel reluctant. Some people refuse. Even in cases such as the Nazi genocide, there were some soldiers who refused to participate in mass shootings of Jewish civilians.
You have these differences, but how do you account for those differences if you're a situationist? They're all exposed to the same on-the-ground factors. They're all in the same unit, for example. They do almost the same training. It starts to get a little bit implausible to think, "There's got to be some kind of situational factor that we are missing that explains why this person refused, this person went ahead enthusiastically, and this person only killed reluctantly." It doesn't seem to give an account of these differences that we see. Some people call it the smile problem. You have the smiling, enthusiastic perpetrator, and you have the reluctant perpetrator.
It doesn't do justice to the reality of the ways in which soldiers, or indeed any perpetrators of violence, engage with, think about, and make sense of their own actions. We don't think it's a very plausible account of the differences that we see. As a thesis about human behaviour, we don't think it's very plausible either.
It also tends to assume that we can give an objective account of what the situation is, that there's a situation that has these features, and that's an objective fact. People are placed in this situation where they are all pushed around by these features in different ways and they all start behaving this way. But a situation is also something that is shaped by the individual person's own interpretation of what situation they're in.
Also, their personal histories that are interacting with that.
That's right. It's hard for a simplistic situationist account to make sense of that.
That is the strongest argument against this hard situationism: that nature will override nurture at a given point. The dispositional account then takes a slightly different view. We can't deny the experiments we discussed. We can't deny that the situation has an impact. Statistically speaking, some percentage of people will either do the good thing or do the bad thing.
In a military context, the way I see the role of military ethics training is this: it's very hard to name a war that hasn't involved some form of war crime. That is hard to admit. It's also hard to admit that, statistically speaking, given what we discussed, at some point some of our soldiers will cross the line of what we consider moral and ethical. The sooner we embrace that reality, the sooner we can realise that it's about reducing the likelihood of these things occurring.
It's also about understanding. One of the problems I often have with military ethics training is that it touches on only one very small part of thinking about war crimes. Military ethics training often treats war crimes as a bad apple problem or as a failure of virtue. But if you look at institutionalised war crimes, what constitutes a military virtue is not some objective known fact. It's shaped by social, political, and local narratives. Think about the virtue of honour. What does honour require? In some cultures, some contexts, and some wars, honour has been viewed as consistent with killing civilians. It sounds astonishing.
In the case studies we discussed in the book, the two German soldiers, Karl Kretschmer and Felix Landau, what’s fascinating about these two cases is that it’s rare that you get written reflective accounts of someone’s thought process from the time in which they’re involved in war crimes. Someone who’s in prison might give a very different account. Karl Kretschmer wrote letters to his family. Felix Landau kept a diary. At the time, they were involved in the Einsatzgruppen. They were involved in these groups that were shooting civilians.
On the one hand, you have Karl Kretschmer who agonises over the fact that he finds it disturbing and distressing to shoot unarmed people, but he firmly believes that they’re in an existential war against the Jews. He bought into that particular narrative that the Jews represent an existential threat to the German people. He sees being a good soldier and being a virtuous soldier as overcoming his distress. How is he going to do that? He writes to his daughter that the way to do it is to do it more often. That’s how you overcome it. He does reassure his daughter that he is not shooting immoderately.
Is he wrong? It sounds absurd, but he's not wrong. I'm thinking back to some of my own fears in military training, such as abseiling. I'm not a great fan of heights. With abseiling, it took 3, 4, or 5 times of doing it before I could say, "I'm not as uncomfortable with this anymore." It took nineteen jumps for me to feel comfortable with skydiving. As absurd and insane as he sounds, he's not wrong.
He's right. He is like, "I need to get more desensitised to this. Why do I need to get more desensitised? It is because it's my duty. It is because I need to serve this greater moral purpose of protecting the Germans from this existential threat." On the other hand, Felix Landau is not particularly morally troubled by this, but he doesn't like killing civilians. He doesn't like it because he views it as being not very war-like. It's not what he thought he was getting into.
When he signed up, he thought he would be on a battlefield shooting against soldiers. Here he is, rounding up and shooting civilians. It’s not what he thought war should be. It’s important to be a good soldier. For him, that means you’re obedient. You do what you’re asked to do and you do it well. He’s proud of how well he’s doing his job, which happens to be killing civilians.
He doesn't have that emotional distress, but you see this appeal to what we might think of in other contexts as morally commendable traits. For people who can appeal to moral values and virtues, what these virtues and values mean to them is going to be shaped, though not determined, by the available moral frameworks, which give meaning to a conflict in ways that can make something as abhorrent as killing civilians come to be viewed as what being a good soldier requires.
In the torture context, you often see torturers praise themselves for showing restraint and self-control. It is ironic because sometimes self-control is put forward as something that we need to train soldiers in to prevent war crimes. Like the other traits I've mentioned, self-control itself is not morally good. It's morally neutral. Consider a context where torture is being framed as a necessary evil. For example, during the Bush administration, in the US Torture Program, torture was framed as a necessary evil. Self-control becomes a virtue in that context as something that's necessary for good soldiers or good interrogators to have in order to do their job properly.
To focus on, "If we only get soldiers to be more insert virtue X, we'll solve the problem," neglects how the very concept of these virtues is shaped through these different kinds of social and political frameworks. As long as we continue to think of war crimes as failures of virtue or as failures to live up to military values, we are missing how, in fact, these crimes can occur while being consistent with military values. That's confronting because it means we have to say, "We've done this. We have to be honest about how that came about."
When we talk about the US Torture Program, the conversation ends with Abu Ghraib, but that misses the whole institutional structure of that program. There has to be a real reckoning: "We did this. One of the ways we did it was by accepting this framework which portrayed it as something good and necessary, required for national security." Without challenging those narratives, I don't think you're going to get very far.
I love that because that speaks so much to my own views and understanding of this topic. Part of the aim of this show, and why I discuss these topics, is to reconcile the fact that we are all capable of this. Statistically speaking, we are all capable. That doesn't mean we will do it, but statistically speaking, we are all capable of "evil."
—
I hope you have found the first part of this discussion insightful. Part two will be out on the 8th of September where Jessica explains an alternate model to understanding war crimes and, most importantly, what we can do to reduce the likelihood of them occurring. Until then, thank you for tuning in to another episode of the show. Since you got this far, please take a moment to like and review the show wherever you get your shows. Also, if you are able, please consider showing your support through our Patreon page. Thank you, and until next time.
Important Links
- Torture and the Military Profession
- War Crimes: Causes, Excuses, and Blame
- Death Work
- Ned Dobos – Previous Episode
- Patreon – Vedran Maslic
- https://www.BuyMeACoffee.com/thevoicesofwar