February 2010

“If no one asks me, I know; if I want to explain it to someone who does ask me, I do not know” (Confessions 11.14). So Augustine presents the puzzle that is time. It seems obvious to us that time is real. We sing songs, we play games, we read blogs—all of which take time. If our timing is off on any of these, they suffer or cease to be. But when we try to define time, we run into trouble. The two traditional approaches to the problem—that from the many and that from the one—do not seem to get us very far.

From the perspective of the many, understanding time seems impossible. For atomism old and new (Democritus or Hume), time is quantized into discrete moments with no real relation to each other. Augustine tries out this method of analysis in his reflections. He says we speak as if there were three times—past, present, and future—but our words seem to have no real referent. The past is gone; the future is yet to come; and the present has no extension. Thus, none of the times we speak of really exists.

From the perspective of the one, time fares no better. As Parmenides presupposes logically and Plato proves from the many changing things we experience, the ultimate principle of explanation is one—unchanging and timeless. Intelligibility seems to transcend space and time, and so must its principle. Were the one in time, it would not be primary. The duality of the one and the temporal context in which it is found would need to be accounted for by some prior principle, and that principle would be the true one. It is the Forms which are intelligible (and ultimately the Form of Forms—the Good or One), not the changing and multifarious particulars. At times Augustine seems to buy into this Platonic (later Cartesian) answer. “That truly exists which endures unchangeably” (Confessions 7.11).

If time is fundamentally unintelligible, either because of a materialist disintegration or an idealist transcendence, then it does not seem to be worth the time to discuss it.

But Augustine cannot accept either answer to the puzzle that is time, nor give up trying to answer it himself. He cannot do either and remain true to his human intuition of the reality of time and, even more profoundly, to his Christian commitment. “When shall I suffice to proclaim by the tongue of my pen all your exhortations, and all your warnings, consolations, and acts of guidance, by which you have led me to preach your Word and dispense your sacrament to your people? If I am sufficient to declare all these in due order, the drops of time are precious to me” (Confessions 11.2). Faith comes through hearing, and our participation in salvation is sacramental: time is essential to both.

How, then, Augustine asks, can we explain time? It must somehow be simultaneously one and many, transcendent and experiential. Augustine ends up calling time a “distention” of the mind (Confessions 11.26) by which we simultaneously grasp the past in memory, the present by attention, and the future by expectation. The analogy he uses to illustrate his point is more helpful (if less analytical—perhaps because less analytical) than his idea of distention. Time is like the recitation of a psalm or the singing of a song, which requires the continuing presence in the mind of past and future, memory and expectation, for its accomplishment. Every part is related at every time to the other parts and to the whole. “What takes place in the whole psalm takes place also in each of its parts and in each of its syllables. The same thing holds for a longer action, of which perhaps the psalm is a small part. The same holds for a man’s entire life, the parts of which are all the man’s actions. The same thing holds throughout the whole age of the sons of men, the parts of which are the lives of all men” (Confessions 11.28).

All times exist simultaneously in the Creator, who is present as Creator to all moments of time. Our minds, too, grasp simultaneously past, present, and future. Intelligent conversation shows this to be true. Indeed, Augustine will conclude that the mind can know, choose, and communicate only as it participates in and is illuminated by God, who is “truly eternal, the creator of minds” (Confessions 11.31).

Abortion is more often debated than defined. But what exactly are people disagreeing about when they disagree about abortion? If one person says abortion is always wrong, and another denies this, they may merely mean different things by “abortion” and not really have a substantive moral disagreement. A clear definition is therefore desirable, if only to avoid merely verbal disputes.

Abortion cannot be defined as the intentional, premature termination of a pregnancy, because an early induced labor or Caesarean section issuing in a healthy, viable child is also the intentional, premature termination of a pregnancy. Is abortion the intentional, premature termination of a pregnancy with the (further) intention to kill the child (fetus, embryo)? Certainly some types of abortion involve the intention to kill the fetus. For example, a saline injection abortion requires a precisely calibrated saline solution strong enough to kill the fetus before labor is induced. A partial-birth (or “dilation and extraction”) abortion of a viable (third trimester) fetus requires the puncturing of the skull and evacuation of the brain before the head leaves the womb; since delivery of a living, viable fetus would entail the legal obligation to render life-saving medical care to the newborn, who would be considered a person under the law, there is clearly an intention to kill before completing delivery. A live-birth abortion involves the intentional inducing of labor before viability, issuing in a live but non-viable baby which is then set on a table and allowed to die. Is this an act of intentional killing? It would seem to be so, at least in most cases, since the procedure is intentionally initiated before viability to avoid the legal duty to render life-saving medical care to the born baby. The whole point is to deliver a live baby that one can then legally abandon and so cause to die. But can we imagine a woman choosing a live-birth abortion merely to get the baby out, without actually intending to cause its death? Perhaps. Imagine the victim of a rape who merely wants to get the rapist’s baby out of her body. She may foresee the death of the baby without intending it.

So there may be a problem with defining an abortion as the intentional, premature termination of a pregnancy with the further intention of causing the death of the fetus (embryo, baby). The most common abortion methods focus on evacuating the contents of the womb with suction-aspiration machines and/or loop-shaped knives or forceps, ensuring the killing of the embryo or fetus in the process; chemical abortions with RU-486 do the same without surgical intervention. In such abortions the woman’s intention may be not to kill the child but merely to render herself unpregnant. Again, imagine the victim of a rape thinking “I want it OUT of me!” The death of the child may be the unintended though foreseen side-effect of the only technically practical means of getting the embryo out of her at that point of the pregnancy. Even if she is not intending the death of the embryo, she is still getting an abortion. Or consider abortifacient means of birth control, like IUDs, or (sometimes) the pill or the morning-after pill. Abortifacients allow conception but prevent the implantation of a very early embryo in the womb, thus ensuring the death of the embryo. Does the woman necessarily intend the death of the embryo? She surely intends not to be pregnant. But her intention may merely be that – to end her pregnancy, accepting the death of the embryo as a foreseen yet unintended side-effect.

Perhaps we can define abortion as the intentional, premature termination of a pregnancy by a means that foreseeably causes the death of the unborn embryo or fetus. (If we can describe a woman as pregnant from the moment conception occurs in her Fallopian tube, then we can also describe at least some abortifacients, e.g. IUDs, as terminating very early pregnancies; the pill and the morning-after pill are admittedly problematic, since their effects are harder to foresee: they can prevent either ovulation or implantation.) If this definition is a good one, then it is neither too broad nor too narrow; it is ethically neutral, embodying no moral evaluation of abortion; and it avoids circularity. Ethical neutrality is especially important here, since opponents and defenders of abortion must be able to agree at least on what they are debating. Note that, in implying that some abortions may involve unintentional killing, the definition does not tacitly approve of such abortions. Unintentional killing can be morally permissible or not depending on a host of additional factors. The neutrality of the definition allows us to separate the moral evaluation of abortion from the definition of the term.

A short meditation this week on the nature, history, and status of our experience of ourselves: most philosophical historians will argue that the experience of ourselves from the first person perspective, that seemingly natural, primary identification we have with our ‘I’s, is anything but natural and is, in fact, a historically developed perspective. This is, of course, a very odd claim, and one, moreover, that is mind-bogglingly difficult to comprehend; but essentially the argument runs as follows: when you study the literature and philosophy of the West, you quickly discover that we have not always thought of ourselves in the way we do now; in fact, the ‘internal’ viewpoint granted in the first person perspective was developed relatively recently.

To argue this point, these historians will point in part to the nature of narrative (after all, how we talk about ourselves, as the analyst knows, reveals fundamentally how we think of ourselves), and to how, in the ancient world, there appears to be a distinct lack of first person narrative. Think of the epics or the earliest scriptures, which seem always to be narrated in the third person impersonal. What’s more, therein we find human beings portrayed as passive: exposed and subject to the whims of a cruel world, the impersonal forces of nature, or the wrath of the gods. Human action is thus not portrayed as the out-working of some interior life, but instead as the effect of seemingly arbitrary happenings in the cosmos. Moreover, the value and meaning of those actions is always interpreted from a third person perspective, that is, on the basis of how they are seen in the eyes of a particular community: if not the actual political community of the character, then at least the audience who may hear the tale. The actions of an individual thus appear to be evaluated in the ancient world on the basis of whether or not they bring shame or valor in the eyes of others. From this, the historians conclude, the most ancient concept of the self was one that was a) fundamentally mediated by the social, and b) not separated, distinct, or closed off from the world as some interior experience.

The modern experience of subjectivity, wherever one plots its origins, is distinct, then, in that it establishes the self as a primarily interior experience, something only the subject itself has access to, and, moreover, as a power in the world with its own sphere of influence. Our actions, we think, are the result of some happening within us, whether conscious or unconscious, and not the whim of any exterior force. The subject, we think, is primarily active, not passive; internally coherent, not exposed to the gaze or judgment of others. It is private. As such, we experience ourselves not in terms of how our actions are seen by a community, but in terms of how we think or feel about them ourselves. This transition in perspective is demonstrated in our employment of the first person perspective, both in literature and in philosophy (think of Descartes’ cogito for a relevant example), but also in how we talk about our own experience and history, making the ‘I’ primary.

But it seems to me, and this is the subject with which I’d like to provoke our discussions this week (if only virtually), that we are situated at a watershed moment in the history of the experience of ourselves: another cultural transition akin to the birth of subjectivity (that movement from the third person perspective to the first), one which is seemingly carrying us back to the third person perspective, but in a new and strange way. I witness this in the way my students think and talk about themselves and their interests, but I witness it most on Facebook, where status updates are narrated in the third person (e.g., “Drew is very ashamed of the quality of his blog post today”), where photos are taken from the third person perspective (either through the mirror or held at arm’s length) and always posed (as if those in the photo are already estimating how they will look to others, and what their best angle is for those others), and where it seems that something needs to be commented on by the community to have really happened. Isn’t this, at least to some extent, what we see in the culture of blogging: the conviction that for something to have really happened to me, it must be shared and validated by the community? Isn’t there a strange blurring online between what would traditionally have been deemed the interior realm of the private and the exterior realm of the public? Don’t I see in my students the suspicion that there is no point in doing something privately, only value in doing something which will be seen publicly, something for which they will be acknowledged and get some credit (like the student I had recently who told me that he keeps a personal journal for his imaginary future wife or progeny who may want to read what he was thinking or feeling today, and so carefully crafts each sentence, worried about how it might appear to them)? I wonder if anything is done privately by my students (any journals kept not in the hopes they will be read, but with precisely the opposite hopes).
Is this also what I see in their hyper-awareness, however justified, of how they are being judged by the outside world (by future employers, peers, or other professors), and in their attempt to sculpt themselves and their c.v. (and even their private extra-curricular activities) in terms of how they will appear to others? Isn’t this, further, what we see in the current cult of celebrity and reality-TV superstars, where someone is valued solely on their media-friendliness and not on the merits of their character? Is this, furthermore, what I witness in people who take a digital photo of something they are currently experiencing (say, the Grand Canyon or Niagara Falls) and then immediately turn their backs on the actual experience to look at the photo they just took (seemingly imagining how it will be perceived by others who will later comment on it online)?

Don’t get me wrong. I am not a chronological snob. I do not lament these changes, offering instead my own nostalgic portrait of the good ol’ days of my youth, when we knew what was private and what was public and had a distinct first person perspective. Frankly, I think there are some real problems with the modern perspective of my generation. It is not my intention, however, to praise what appears to be a weakening of the experience of ‘modern’ subjectivity in my students, either. Instead, it is simply my hope to raise the very undeveloped and tentative thesis that there is a difference between our two generations, a difference which I think can be explained by examining the history of the experience of ourselves (the history of subjectivity), and which I think may explain some recent social phenomena that are, at least to me, uncanny. I look forward to your own thoughts and commentary (though I ask you to keep in mind that, though I knew this was to be published and subject to public view, given my generational predilections it flowed from private thoughts and was composed without really concerning myself with your future judgment; it is therefore not as polished as most of my students’ Facebook pages. I’ll hope, then, for your generosity with my ‘modern’ limitations).