Max Latona

Moral Seeing and Moral Blindness: What Role Do Emotions Play?

We are all more or less acquainted with an emotionally induced form of moral blindness. One version of it is the often short-lived blindness that we experience in a fit of rage, the depth of grief, or the ravages of envy. This condition was long ago identified by Aristotle as a form of incontinence (akrasia). Such afflicted persons, though perhaps able to express proper moral precepts, “do not quite understand what they are saying” (1147a15-25). In other words, when we are in a state of overwhelming grief, anger, envy, jealousy, or fear, we cite good moral principles and rules without meaning them, and we do so because our strong feelings keep us from seeing the relevant moral realities in question. Later we may feel regret, once the emotions have subsided and the moral realities become apparent once more. A more extreme form of this same emotionally induced blindness is a kind of generic intemperance (akolasia). It differs from incontinence in that, because of the habitual nature of the affective condition, the blindness has become more deeply entrenched, and the agent’s consciousness of the relevant moral realities has been virtually destroyed. He is, as a consequence, no longer able to recognize moral failure in himself, even after the emotions have subsided.

Philosophers have long been vigilant against emotionally induced moral blindness. Indeed, the danger that unbridled emotions will cloud our judgment, cause us to forget moral norms, and blind us to what is good has led many thinkers to banish emotion from the realm of sound moral judgment altogether and to argue that moral action depends upon cool, dispassionate reasoning alone. Plato, for example, instructs us in the Phaedo to “keep away from all bodily passions, master them, and do not surrender …to them” (82c) on the grounds that the body “fills us with wants, desires, fears, all sorts of illusions and much nonsense” and prevents us from “seeing the truth” (66c-d). Kant puts the matter a bit differently when he derides those persons “who are unable to think [and] hope to help themselves out by feeling”; feeling, he explains, cannot hope to supply a uniform measure of good and evil, nor act as the basis for universal judgments. For Kant, those who undertake actions through any affective inclination such as emotion are altogether precluded from the category of the moral, in as much as they fail to act from duty and principle. In short, philosophers have warned us that emotions can only confuse reason, distract us from the truth, make us inconsistent in our decisions and actions, and render us no better than animals.

Now, in contrast to these and other arguments against emotion found in our philosophical tradition, various thinkers of late (e.g., Raymond Gaita, Jonathan Bennett, Antonio Damasio) have separately brought to our attention another variety of moral blindness, one which (they argue) can create the conditions for the very worst sorts of evils that humans have perpetrated upon one another. For his part, Gaita argues that this variety of blindness lies at the bottom of the genocidal behavior perpetrated against Jews in Nazi Germany and the forced deportation of Aboriginal children in Australia. What is striking about many such cases of evil, according to Gaita, is not that they were perpetrated in the midst of a blind rage, or in the haze of some other emotion; rather, more often than not they were accomplished without any apparent feeling whatsoever. Indeed, these actions appear to have been perpetrated by ordinary people carrying on their jobs in the course of what they saw as their ordinary work-a-day lives. Eichmann, after all, was just “doing his job,” as was, no doubt, the average SS guard shepherding Jews onto a train, and the truck driver carrying away the Aboriginal children. Now we might, in accordance with the philosophical tradition, search for hidden seeds of spite and envy that may have served to beget a rotten premise, a forgotten norm, or an ill-formed categorical imperative. In this we would remain true to the view that reason alone is our source of insight, the faculty that enables us to “see” the intelligibility in the world, including those morally significant realities that are fundamental to human goodness. However, the aforementioned thinkers suggest that preoccupation with reason and principles does nothing to stem the tide of this moral blindness, and may in fact contribute to it. Indeed, as Chesterton once claimed: “the madman is not someone who has lost his reason; he is someone who has lost everything except his reason.”

If not reason, then what discloses to us these moral realities? According to Gaita et al., the core moral realities to which we often become blind can only be disclosed by human feeling. Accordingly, affective states such as grief, shame, love, or pity are not (a) mere emotional responses to proper moral cognition, nor (b) mere causal conditions for such cognition. On the contrary, pathos is itself a form of understanding. For example, in the case of shame, it is not the case that I realize that I have done wrong and then feel shame; instead, my shame is itself a recognition that I have done wrong. Indeed, we might say I “see” I have done wrong only in so far as, and because, I feel shame for what I have done. By the same token, it is also not the case that my shame is merely a condition for the possibility of a cognitive capacity to grasp the truth of what was done to another person. If this were so, then once the so-called “objective” insight was obtained, we might kick away the ladder of emotion that enabled us to reach that insight, and express the insight without, as Gaita puts it, any “essential reference to the fact that we possess such affective and moral dispositions.” For instance, grief could be dispensed with at the death of a loved one as soon as one could say to oneself, “Gee, that other person was important to me.” But this is absurd in as much as the grief is itself a recognition of something significant, without which any subsequent actions lose their character as fundamentally good and human. Emotions, accordingly, are simply a way that we humans understand morally significant realities in our world, in so far as emotions disclose things otherwise invisible to reason. Just as love reveals the preciousness of another person, so too remorse reveals evil, grief reveals the value of another, and compassion reveals the suffering of another.

Recent work by the neurologist Antonio Damasio supports the view that emotions are essential to good moral decision-making. Damasio’s research on the human brain, as described in his works Descartes’ Error and The Feeling of What Happens, indicates that the ideal of the cool, dispassionate, rational being is flawed, in as much as the brain, without emotion, is impaired in its ability to make sound moral judgments. Damasio argues that selective damage to either the prefrontal cortices or the region of the brain known as the amygdala impairs one’s ability to feel, which has the cascading effect of impairing one’s ability to understand personally and socially significant realities and to exercise good judgment. According to his “Somatic Marker Hypothesis,” emotions are woven into the very fabric of consciousness, inducing us to act, or refrain from acting, by highlighting objects of consciousness as worthy of pursuit or avoidance. Emotion, therefore, is not divorced from reason: love, grief, joy—their very existence is partly constitutive of the deliberative process.

This claim about the revelatory nature of emotion is accompanied by an obvious problem: however true it may be that human feeling is a requirement for moral insight, or is itself a form of understanding, it is equally true, as we have seen, that human feeling can act as a cause of moral blindness, and can itself be a form of misunderstanding. Our anger at a slight done to us can disclose the wrong committed and reveal what is to be done, but it can also obscure what is right and just and lead us to overreact. Again, our love for our friends and children can make us aware of their preciousness as persons, and can disclose to us their goodness, but it can also blind us to their flaws. So, how can we tell at any moment of moral decision whether our feelings are enhancing or disabling our moral consciousness?

Traditionally, of course, we might appeal to reason as the independent arbiter that determines whether or not an emotion is appropriate to the reality of a given situation. One problem with this ratiocinative solution, however, is that even if reason could, on its own, adjudicate between appropriate and inappropriate emotional responses, it may well be that in healthy adults reason is never on its own, never unaccompanied by feeling, with the result that it simply does not have independent access to the moral realities in question in order to legitimize, or de-legitimize, our emotions. Damasio impresses this fact upon us when he argues that the core consciousness of a healthy adult is always accompanied by what he calls “background feelings.” That such feelings are at work in every waking moment is indicated by the fact that we can always answer the question “How are you feeling?” While it would make sense to respond that I am feeling quite happy, a trifle apprehensive, a bit melancholy, or even very peaceful, it would never make sense to say: “Why, I am feeling nothing at all!” Heidegger made a similar point when, in Being and Time (section 29), he argued that one fundamental characteristic of Dasein is “state-of-mind” (Befindlichkeit). He explains that we humans always find ourselves attuned to the world in some way or other, in one mood (Stimmung) or other. As Heidegger says, even “the pallid, evenly balanced lack of mood (Ungestimmtheit)…is not to be mistaken for a bad mood, [and] is far from nothing at all” (173). What Heidegger wishes to impress upon us here is that even in those “calm, peaceful, lucid” moments, ones which might be characterized by the lack of any dominant emotion, we are still attuned to the world and its realities in a certain way: we are still “in a mood.” As a result, there is never a time when, in ordinary consciousness, we are not affected by feelings.
Whether characterized as background emotions or as an ever-present mood, these feelings cannot be escaped by consciousness and cognition, and they affect the way the world (especially its morally relevant realities) is disclosed to us.

What this means, in practice, is that we can never be sure at a given moment whether or not reason, and accordingly our view of the morally significant realities of the world, is distorted by the feelings we have at that time. Thus, at one moment we might feel pity for someone, leading us to treat the person gently. We might even reason the matter through, working out a practical syllogism, citing principles like “willing the maxim of my action to be universal law” and “treating persons as ends in themselves, and not mere means.” Later, we might reflect on our actions and feel guilt for having given in so readily, wishing perhaps that we had acted more punitively. Here we might also reason the matter through with equal plausibility, work out another practical syllogism, and cite the same principles about universality, or persons as ends in themselves. Which reasoning, dominated by which emotive insight, is correct? In sum, the traditional response arguably fails because reason is always informed by feelings, whether acute feelings or the subtle background feelings that are always present in core consciousness.

To find a solution to this difficulty we must hearken back to the ancient Greeks, who were attentive to the aesthetic dimension of moral goodness. Indeed, Aristotle saw that dimension as the objective reference point for human morality. As we know from his Doctrine of the Mean, Aristotle was well aware of the fact that both excess and deficiency in our affective states can cause us to act badly. He was also aware, as we saw earlier, that emotional imbalance can threaten our very ability to see what is right and wrong in a given circumstance, both temporarily (as in incontinence) and more or less permanently (as in intemperance). “Vice,” he says, “corrupts the principle.” Given this fact, Aristotle recognized the impossibility of one’s own practical reasoning standing as the ultimate measure for human action, in so far as the agent in question is unable to tell whether his reasoning has been corrupted, i.e., whether he is suffering from moral blindness. As a result, Aristotle appeals to the reasoning of an already virtuous person as the model. This person can be clearly recognized in so far as, according to Aristotle, his actions and person possess a beauty that appears to all. Aristotle repeatedly uses the term kalon to describe the good action and person. Kalon is a term that for the Greeks refers to physical beauty, an aesthetic attractiveness that includes order, symmetry, and measure. Indeed, at one point in his Nicomachean Ethics, Aristotle tells us that when a person bears many and great misfortunes well, his beauty shines through (dialampei to kalon) (1100b30-32). The language of “shining through” suggests that moral goodness shows its objective character by means of its beauty, and thus can be visibly recognized for what it is by those who see it.

What this means is that I myself cannot act as the judge of my own clarity of moral sight, in so far as my very reasoning may have been negatively affected by my emotional habits. I must learn the right way to feel by turning to those whose emotions are oriented in the right way, whose goodness simply shines in their faces and gestures, their words and deeds. They show up the rest of us, for through their emotions are disclosed both the moral realities themselves and the way in which we should feel about those realities. This suggests that, so far as morality and human goodness are concerned, what is far more important than well-ordered principles is the presence of another whose emotive goodness can relieve us all of our moral blindness.

The invention and rapid expansion of the internet and World Wide Web over the last several decades stands as a watershed moment in human history, transforming human communication and, ipso facto, leaving its impress on virtually every area of culture in the industrialized world. There can be little question, then, that human knowledge itself (or more precisely, the way we popularly understand human knowledge) will change in profound and irreversible ways—many of which will not become apparent until long after they have occurred. We need only look back upon other such watershed moments in human communication to see that this is true.

One such moment in the West occurred in ancient Greece in the 8th century B.C., when writing was re-discovered. This event was obviously of monumental significance for Greek and Western culture. Among other things, the subsequent spread of writing throughout ancient Greece brought about a shift of prominence from one form of speech to another, from mythos to logos, thus sounding a death-knell for the oral culture in which the epic poets and their bards flourished. Gradually, knowledge became less and less a matter of what was collectively preserved and re-iterated through memory and oral recitation in a communal space. As a result, the legends, sagas, and myths that depended upon formulaic speech, poetic innovation, and public audiences for their life-blood no longer stood as living insights into the cosmic order, the nature of the gods, or the origins of society, but became artifacts of the past (preserved in textual form, to be sure). Since the writing of texts facilitated the careful pursuit of inquiry (historia) and the meticulous construction of accounts (logoi), knowledge became more and more a matter of theory, argumentation, and analysis, as well as the disciplines that developed from and depended on these forms of speech and thought. Thus, the Golden Age of Greece, an “age of reason” that saw tremendous intellectual achievements, arguably could not have taken place without the development of writing.

Another such watershed moment in human communication occurred with the invention of the movable type press by Gutenberg in the 15th century, an invention that Mark Twain once called the “greatest event in the history of the world.” The printing press enabled high-brow texts and ideas to circulate among the masses, with the result that the language of learning eventually shifted from Latin to the vernacular languages, the places of learning shifted from monasteries and scriptoria to universities, libraries, and presses, and the communities of learning shifted from the feudal aristocracies and clergy to scholars and even to ordinary folk. The Renaissance, the Reformation, the Scientific Revolution, and later the Enlightenment: these are but a few of the major developments in human culture and knowledge that have been traced to the printing press. In fact, the scientific method itself, as well as the philosophical schools of empiricism and rationalism, could not have developed without the emergence of new criteria for truth and knowledge, all of which can be plausibly tied to the legacy of the printing press. A new “age of reason,” we might say, was brought about by this second transformation in human communication.

And so we turn to the legacy of the internet, about which it would be foolhardy to make determinate proclamations at this early date (it may take hundreds of years to gain the necessary perspective). We can, however, pose some questions. For example, how will the development and proliferation of electronic communication change the way we conceive of knowledge? There is no question that it has enabled us to transmit, store, and retrieve knowledge more efficiently—but what kinds of knowledge? What kinds of knowledge flourish with the globalization of Google, Wikipedia, electronic databases, search engines, and web-logs? Is it merely “information”? If so, has the popular appreciation for reasoned argumentation and analysis been fundamentally diminished by the explosion of information and the proliferation of opinions in electronic media? Indeed, what forms of learning will be left behind, as printed monographs, books, and newspapers arguably fall by the way?

In a related fashion, we might inquire: who will be the new learned? Will those with prodigious memories become even less important now that Google is but a click away? Will those capable of reasoned argument or careful empirical observation become obsolete in the face of those who can “process” information more efficiently? Though it may seem inconceivable, the rapid proliferation of electronic communities of learning suggests that the silicon tower will one day supplant the ivory tower as the locus of intellectual discourse. If so, we can only wonder about the security of our knowledge, as its surety is guaranteed not by human memory, or the scroll or printed page, but by computer chips and bytes.

Of course, there is no question that these developments in human communication bring great blessings to humankind. However, one wonders whether all these drastic changes in the appearance of knowledge (what it is, how we learn it, who it is that knows, and where it gets transmitted) change the epistemic fundamentals regarding the mind’s relation to the world and the importance of face-to-face human contact in the transmission of knowledge. As for me, though I become increasingly dependent upon electronic media and on-line communities for the development of my own thought, I find that part of me cannot help longing for what has been left behind: for the days of archaic Greece when story-telling was a meaningful communal (and educational) experience, or the days of Medieval Europe when reasoned argumentation guided by faith was seen by all to be a worthy exercise of learning. And I wonder what good things we are in the process of leaving behind now.

The members of the Philosophy Department were asked which book they thought would be important to teach to students in an introductory philosophy course. Their answers are below.


-Professor Robert Anderson

David Hume, An Enquiry Concerning Human Understanding

-Professor Robert Augros

Professor Augros argued that the dialogue between teacher and student was more essential to the philosophical process than any book.

-Professor David Banach

Euclid’s Elements

Albert Camus, The Stranger

-Professor Montague Brown

Plato, The Republic


Saint Augustine, Confessions


-Professor Drew Dalton

Marcus Aurelius, The Meditations


Slavoj Zizek


-Father John Fortin

Saint Anselm, Monologium


-Professor Susan Gabriel

Bertrand Russell, The Problems of Philosophy


-Professor Sarah Glenn

Sophocles, Oedipus Rex

-Professor Matthew Konieczka

Leo Tolstoy, The Death of Ivan Ilyich

Plato, The Apology


-Professor Thomas Larson

Plato, The Republic


-Professor Max Latona

Josef Pieper, The Philosophical Act

-Professor James Mahoney

C.S. Peirce, “The Fixation of Belief”

Michael Novak, Belief and Unbelief

-Professor Joseph Spoerl

Plato, The Gorgias


Plato, The Republic


Plato, The Apology


-Professor Kevin Staley

Plato, The Euthyphro

Could Philosophers Have a Blind Spot?

Some Reflections on Philosophy and Leisure

We are all familiar with the notion of a blind spot in our peripheral vision. Ironically, the blind spot is caused precisely by that which enables the eye to see, namely, the optic nerve. The optic nerve sends the visual information from the retina to the brain, but in the one place where the optic nerve connects to the retina, there are no cones and rods, and thus a blind spot.

Could philosophy, or more precisely philosophers, have a blind spot in their range of understanding? Before we look into some possible reasons to think this, let us note that this particular blindness would be justifiably disturbing to all philosophers, if true. After all, the wisdom that philosophy cultivates has long been cherished as a knowledge of everything, in the way that such knowledge is possible for human beings. For instance, as Aristotle conceived it, philosophical wisdom is the science of the causes and principles of being qua being, for which reason it concerns itself with all that is, i.e., all things in so far as they are. Put differently, the universal scope of philosophy is apparent in the fact that philosophers can seemingly provide an analysis of nearly anything by laying out the principles and causes of that thing, its purpose, and its meaning. Indeed, upon reflection, it seems that there is nothing whatever that lies outside the visual field of philosophy, and that philosophy in principle could have no such blind spot.

However, we must remember that philosophy is never practiced in the abstract, but always by living human beings. In fact, philosophers have never actually claimed to possess the knowledge of all things, but only to be pursuing it (in fact, “philosophy” literally means the “love of wisdom,” not the “complete possession of wisdom”), so that while philosophy itself might not in principle be blind to anything, philosophers (either singly or as a whole) may develop certain myopias.

Indeed, a clue to a philosophical blind spot can be found in our optical analogy. As we saw in the case of the eye, it is the eye’s very dependence on the optic nerve that brings about its blindness to one area of its visual field. In a similar way, the philosopher has a very specific dependency to which he or she owes the ability to do philosophy. Long ago, Aristotle noted this dependency when he remarked that philosophical speculation began only when men had leisure time, only when the necessities of life had been adequately supplied. The reason for this is that philosophy begins in wonder, and it is only when humans have freedom from the daily concerns and worries of existence (ta pragmata) that they have the opportunity to look about them and wonder (Met. I 982b12-27). A more modern expression of this is found in the writings of Hannah Arendt, who describes the moment of philosophical provenance as the “stop-and-think,” and sees it illustrated in Socrates’ propensity to pause and become lost in thought in the midst of whatever he was doing. In short, one cannot be doing if one wishes properly to be thinking. Philosophy, essentially an activity of the mind, requires a stillness of the body and, therefore, a withdrawal from the labors and concerns that are attached to our bodily existence.

Does a blind spot perhaps exist here? Philosophers have long looked askance at the work-a-day world, relegating work to the category of the merely instrumental, i.e., as having little or no value in itself. Indeed, earning a living has been cast by philosophers as mere “moneymaking,” as “servile,” as befitting those who lack the intellectual light to pursue more contemplative activities. And yet those who wrote these sorts of things, more often than not, had no long acquaintance with labor and work. When surveying the annals of philosophy, one might become suspicious about the authority behind these claims, for those annals reveal that the history of western philosophy has been the history of the speculation not of plumbers, carpenters, farmers, midwives, and shop-owners, but of aristocrats, monks, and university professors—precisely those individuals who have never had to struggle 70-80 hours a week at labor in order to supply food, clothing, and shelter for themselves and their families. Again, it is no accident that this is so, since philosophy requires rest from labor, but it does raise a question: are there, perhaps, truths that could only be discovered in and through a life of labor? Would not such truths remain unknown to the aristocrat, the monk, and the university professor?

For as long as humankind has existed, the majority of people have spent their adult lives (and often their youth) engaged in toil. Whether in tilling the soil, caring for livestock, working in factories, building houses, or cooking meals and keeping house, men and women spend the overwhelming majority of their waking lives engaged in some form of work. It is a startling realization. And yet, strangely, very little is ever done by philosophers to explore the meaning and value of the activity that occupies so much of our lives. Again, one can only wonder: are there subtle lessons and truths about reality to be learned in the life of work that remain unknown to the metaphysician in his leisure? Are there great moral insights to be gained by struggling year round to make ends meet, as yet unknown to the moral philosopher in his contemplation?

Consider John the carpenter. He spends the morning helping prepare breakfast for the family, dressing the children and bundling them off to school. He then rushes off to his carpentry job, where he spends 8-10 hours cutting plywood and nailing it to the frame of a house. He returns home in the early evening (perhaps stopping at the market on the way), helps prepare dinner, clean the kitchen, and do laundry. He might even have a look at the bathroom faucet, which has been leaking for several weeks (after he has opened up the latest mind-numbing dental bills for his children). At the end of the day, after perhaps telling a bedtime story to his children, he collapses in bed, only to awaken the next day and do it all over again. What kind of meaning can be found in such a working life?

Here is one possibility, among a great many. John is learning a great lesson of self-sacrifice. The hourly, daily, monthly, yearly grind of caring for his family and his customers has slowly taught him to care for others as much as, or (in the case of his family) more than, himself. He has learned this lesson not merely as an idea, but in every fiber of his being, in the dark circles around his eyes and the aches in his knees: that life is only meaningful to the extent that we can serve others. John could not have learned this lesson in contemplation, no, not if he had all the leisure time in the world.

The Meet the Philosopher series consists of interviews with the members of the Saint Anselm College Philosophy Department. The interviews aim to introduce you to the members of the department along with their interests and ideas. Professor Max Latona is the tenth profile in the series.

In this interview, Professor Latona talks about his interests in Ancient Greek and Indian Philosophy and about the role that the problem of human mortality has played in his philosophical development.


What does the expression “to reason” mean?

Since classical antiquity, and likely earlier, human beings have been conscious of the fact that they possess a faculty that animals appear wholly to lack. That faculty, given the name ‘logos’ by the Greeks and ‘ratio’ by Roman and medieval thinkers, passed into French as ‘raison,’ thus giving rise to our term ‘reason.’ Just what is this faculty? What are we doing when we engage in the activity of reasoning? Is it true to say that man is a rational animal, and if so, in what sense?