Mental illness and the Body of Christ

I spent the other day at the cathedral church for my diocese, going through the required training program to become a Eucharistic Visitor. (A Eucharistic Visitor—EV—is someone who brings fellowship and the Eucharist to members of the congregation who can’t make it to church for some reason.) It was…interesting. Some of it was new; some of it was useful; some of it was infuriating. For the most part, I felt a real camaraderie develop among the 16 or so of us trainees, who came from several different area churches. As the day progressed, I was impressed by the strength of faith, theology, empathy, and openness of my fellow trainees.

A large chunk of the training involved witnessing and performing role-played scenarios of the types of visits we might encounter. We were handed slips of paper with a brief description of the visitee’s age, situation in life, and temperament: Man, 79, is recovering from knee replacement surgery at home and is in generally good spirits but lonely and desirous of company. 66-year-old woman is dying of cancer in a hospital bed and has trouble speaking or swallowing. 89-year-old woman has recently moved to an assisted care facility; she is gregarious and invites several friends to participate in communion with her. We split into pairs and took turns playing the visitor and the visited; afterward we’d gather as a group to reflect on our encounters. Emphasis was placed on developing our empathy, both through practicing active listening and through creatively imagining ourselves into the situations we were given.

Great. Good. Until one pair of trainees turned out to have had a scenario involving a 21-year-old woman who was in a psychiatric ward for suicidality. And then—and I’m not sure exactly how to describe this—the atmosphere changed palpably. There was a discussion, punctuated by furrowed brows and wise nods, of how hard and unusual and strange this situation was, how difficult to reach the woman being visited, how glad everyone else was that they hadn’t drawn that slip of paper that would require them to pretend to be a young woman in a psych ward. Perhaps the most concrete example of what I mean is that one of the training leaders said, “Well, I just can’t imagine being 21 years old.”* Someone else immediately chimed in, “Let alone being suicidal!” It felt as though the discussion had abruptly shifted from exploring how to put ourselves imaginatively into someone else’s shoes to a relieved consensus that such empathy was obviously impossible.

This description is far more nuanced than I could have given at the time. In the moment, all I was aware of was the shock of going from feeling warm, welcomed, and safe to the opposite extremes. I could feel myself shaking with anger and struggling not to cry. I excused myself to the bathroom for a few minutes. I glared at my reflection in the mirror, balled and unballed my fists, wiped my eyes, took a breath, and went back out. The conversation had moved on, and no one had noticed that anything was wrong. Our day ended shortly thereafter.

What was wrong, exactly? I’ve spent some time pondering the situation, and here’s what I’ve come up with. We had been invited—directed—to put ourselves into the situations of the people we might be called upon to visit. The leaders reminded us repeatedly that one of the purposes of the exercise was to imagine what it was like for our visitees. But no one wanted to play the young woman in the psych ward. No one wanted to imagine what her life—my life—has been. And instead of acknowledging this reluctance as coming from discomfort, they said, “Oh, it’s obviously too hard. It’s impossible, really.” And all these lovely, empathetic, warm, thoughtful people pushed me away, without even realizing that they were doing it.

You know what? I can imagine what it might be like to be 89 and in an assisted living facility. To feel your body change and start to fail you, and to worry that your mind will do the same. To feel that others are beginning to see you as irrelevant, while you know that you have more to offer than ever. To lose the dignity of autonomy. I can imagine what it might be like to live with chronic physical pain, or to lose one’s spouse of many years to death or divorce. I’m sure that what I imagine is different from individual reality; and I don’t think that putting myself into someone else’s shoes gives me any kind of ownership over their situation. But I try to imagine these things, and (even if I don’t always succeed in this) I try to listen to the narratives I hear from others for whom these things are a reality. So why did it feel as though these people were unwilling to do the same for me? Why, when we talk about mental illness in community,** is it always “them,” never “us”?

A suggestion: people are scared. This seems reasonable to me. We don’t want to think about bad things happening to us; we don’t know how we’ll deal with changes that shatter our world. We do nonetheless share a cultural understanding that we might get cancer, however shocking it inevitably is when it happens. We know that our best-case scenario involves growing old and the hardships that come with that. We know that all marriages end, whether by death or divorce. (See [please!] Louis C. K. on the matter.) But it’s terrifying to imagine that the sadness and despair that we all experience at some point could balloon, could devour our lives until we actively seek death. We don’t want that to be part of the human experience. We don’t want to be able to empathize with this. Perhaps on some level we’re afraid that, if we put ourselves into a suicidal person’s shoes, we’ll never be able to take them off.

I have been there, and I understand it. I find that even among the narratives of those who have been hospitalized for depression, there’s a curious desire to distinguish between the ones who are “really” crazy and the ones who just, you know, happen to be there. Between them and us. But there is no them; there’s only us.

Afterward, I wondered why this small incident of alienation had stung so much. It’s not as though something similar doesn’t happen pretty much every time mental illness comes up in pretty much any group I happen to be in. It’s not as though this was in any way unique or drastic in the annals of people alienating one another. What I kept thinking of was 1 Corinthians 12:21: “The eye cannot say to the hand, ‘I have no need of you,’ nor the head to the feet, ‘I have no need of you.'” This was a Christian context, and I had felt safe as a member of the Body of Christ. Until I didn’t.

*I should note that I appeared to be the youngest person in the room by perhaps 15-20 years.

**”In community” is important. One-on-one, I’ve found people to be remarkably sympathetic and usually eager to share stories of their own encounters with mental illness, either personally or in someone close to them. When I tell one person about my hospitalizations, I actually often have the opposite problem (though I suspect it comes from the same emotional place): they want to assure me that they know exactly how I feel, and they often have trouble listening to me because they’re filling the space with their own stories of depression. This bugs me, but I’ve certainly done precisely the same thing to other people more than I’d care to admit.

You, too, are the Body of Christ (part one)

I’m sort of ashamed to admit it, but I grew up in a pretty serious Catholic bubble.  I went to Catholic schools my whole life and grew up in a heavily Irish neighborhood.  My Irish/Polish and Mexican families are teeming with Catholics.  I don’t think I knew anyone who wasn’t Catholic until I went to high school.  In college, I met a number of Muslims and Jews, thanks to an explicitly inter-religious campus ministry, but my exposure to non-Catholic Christianity was quite limited.  Before I met Mary, I didn’t know the difference between the terms “Episcopal” and “Episcopalian.”  This is all by way of introducing the slightly embarrassing fact that before this year, I had never attended a non-Catholic Christian worship service.

This limited exposure wasn’t by design; I chose to go to Catholic schools, but I didn’t realize that by choosing Catholic education I was also choosing an environment predominantly populated by Catholics and thus not by other Christians.  I didn’t really think about how myopic I was until my sister decided to get to know our neighborhood and began conducting what she called “theological field trips,” visiting a different Protestant* Christian worship service in the area each week.  As she rattled off the list of churches within a few miles of our house, I realized that I had passed those addresses a million times but never noticed them, because I never had a reason to go in.  I could name 15 or 20 Catholic churches within a few miles’ radius (like I said, really Irish neighborhood), but couldn’t list a single non-Catholic church.

As I’ve said, I am ashamed of this bubble. I’m ashamed because it means that by explicit choice or not, I have surrounded myself with Catholics and failed to experience and learn about the other half of the Christian church.  Such a Catholic-dominated environment is dangerous primarily because it can lead a person to see the Catholic perspective as the normative Christian perspective.  It reminds me, in a way, of Peggy McIntosh’s analysis of white privilege, where she lists “arranging to be in the company of people of [one’s] own race” as the first example of white privilege.  I’m certainly not saying that ecumenical relationships are nearly as complicated or oppressive as race relationships/racism, but like it or not, there is a power dynamic at play if I can choose to surround myself with Catholic friends, Catholic schools, Catholic churches, and Catholic theological perspectives with ease and rarely encounter the “other” voice of the Protestant Christian.

Coming to understand this “theological privilege” is difficult and surprising for me because I am someone who tries to constantly analyze the privilege and power at work in the world.  Racial and gendered privilege are especially poignant issues to me, and I would never accept such ignorance or lack of exposure in any other realm of my life.  So I decided a few months ago to simply attend a worship service at a church of a different denomination.  A few blocks from my apartment is an Episcopal church, so I attended a low mass at 6 p.m. on a Tuesday night.  (Imagine that!  A mass at a convenient time for people who work!  Ok.  End of snark.)  I tried my very best not to make it a “museum visit,” where I looked at the service through a detached, analytic lens, but to experience it as it was–a spiritual and religious service.  I’m happy to report that my overwhelming reaction was the feeling of being welcomed, by the pastor, the community, and the fellowship following the service.

I’ll use another post to reflect on the actual service itself, as this post is growing mammoth, but let me end with this point: not to make excuses, but I think, unfortunately, this Catho-centric experience is really common for Catholics.  Perhaps it’s the size of the Church, the extensive education system, or the Catholic pride some feel, but there are some undeniable power dynamics at work in the Christian Church.  I hope that both institutionally and individually, Catholics have the self-awareness to analyze these power dynamics, but also that our Protestant brethren participate actively in that discussion.

To end, I’ll note that the title of this post comes from a phrase that a cheeky Jesuit I know uses.  He says the Masses for a particular retreat I lead, a retreat that is populated by mostly Protestants.  When they approach him in the communion line, arms crossed for a blessing, instead of the usual “Bless you in the name of the Father…” or “May Jesus live in your heart,” he says “You, too, are the Body of Christ,” with particular emphasis on the “too.”  When I realized what he was saying, and how refreshing that blessing might sound to a person deliberately excluded from sharing the Eucharist, I was struck by its spirit of inclusion, and I hope to strike that same spirit throughout my studies and theological exchanges with all Christians.

*For lack of a better one, I’ll use the term “Protestant” to describe the half of the Christian Church that isn’t Catholic, even though it defines those Christians in terms of the Catholic Church, and I do so with the understanding that this term lumps about a billion Christians with a great diversity of beliefs into one word.  If others have a suggestion to describe what I’m getting at, I’d love to hear it.

How do you get Catholics to sing at Mass?

Ah, the age-old question.  I wish I had a punchier answer.

But the reality is that this is an extremely difficult question to consider.  In my Campus Ministry department, we are working on some evaluations and strategic planning for next year.  We are grappling with difficult questions like, “How does our programming contribute to the faith development of our students?” and “What leadership skills do we develop in our retreat leaders?” and even more pressing, “How much of our budget can go towards pizza parties next year?”  But in all seriousness, one of the questions that always comes up is how to get students to really connect with the Mass.

Discussion of school Masses always gets strangely tense in a Catholic school.  The reality is that most Catholic schools have significant non-Catholic populations among the students and the staff, so not only do school Masses have to engage disengaged Catholics, but another section of the population would rather not be there at all.  No matter how many arguments a campus minister might make on behalf of school Masses (“You get an hour to sit and reflect by yourself!”  “At least you’re not in class!”  “If you were at a Jewish school you’d have to go to Jewish services!”), there are always loud voices that argue we shouldn’t have Masses at all or that non-Catholics should be exempt from going.  Beyond that, the engagement and participation vary so much from person to person and Mass to Mass that campus ministers seize on anything that might maximize liturgical participation and joy.  Music is usually the first target.

As I participate in these discussions, I am reminded of a liturgy class I took in grad school.  One of the professor’s favorite lines was “the liturgy is not a plaything.”  He belittled the idea that the externalities of liturgy (i.e., the quality of the music, banners, programs, lighting, homilies, etc.) were what mattered and disparaged the attitudes of liturgists who “played around” with these things.

But these discussions invariably lead to a kind of chicken-and-egg reasoning: “Do Catholics sing because they’re engaged in the Mass, or do Catholics become engaged by singing?”  Should campus ministers focus on making music and lighting better, or should they argue that what brings people to Mass is out of the control of the liturgist?

I am comforted, somewhat, by the fact that this is not a problem our school alone faces.  Liturgists at schools and parishes throughout the Church deal with this problem.  Whenever I hear someone evaluate a parish or a Mass, s/he always begins by describing the music.  Fussy music directors and stagnant music abound in the Catholic Church and everyone has an opinion about it.  So it is hard to be the person on the front line, making the decisions about what 650 people are going to be doing for an hour, knowing many will simply disengage.

And it is this train of thought that leads me right to the siren song of self-importance.  I have to consciously remind myself that sacraments do not depend on me, that the Mass is not subject to what I think is important that year, or what I think students would enjoy singing.  And this is where I get stuck–believing I can’t do everything, but wanting to do something.  Knowing that music matters, but failing at fixing the entire problem.  I love to tinker and try to make what is good even better, and I have to remind myself that the Kingdom is beyond our efforts AND our vision, and that I am a worker, not a master builder.

But I have to disagree with my former professor.  Externalities do matter, a lot.  Anyone who has ever planned a Mass and endured the barrage of comments/opinions/nitpicking afterwards knows that.  And if the Mass is the front line–the place where the most people encounter Catholicism in motion–then I have to do everything in my power to plan a smooth and meaningful liturgy.  But that doesn’t mean I should start tinkering with everything.  Just maybe: solid songs that everyone can sing, a homily that is brief and to the point, and a Sign of Peace and Communion procedure that is smooth and effective.  Maybe liturgists can just focus on those things.

I really wish I had the answer to getting Catholics to sing.  Until someone figures it out, I’ll be poring over music books and planning for next week.

Maybe “tumor” is the term?

What do you call a post that starts out as a comment on another blog, probably doesn’t make sense without the original post, but is also so long that you feel weird about basically hijacking someone else’s topic?

Anyway, that’s what happened. E Lawrence wrote a thoughtful article over at WIT entitled “Do we care about mental illness?” and then I basically replied with a novel.

Here is what I said, expanded (depressingly little) and with links cleaned up:

I have a LOT of thoughts about this topic. Thanks for this post! It opens up some exciting (wc? whatever) areas for discussion.

1) I appreciate and agree with your calling out the “we” versus “them” language when it comes to mental illness. I have many friends and family members in the academy. I have many friends and family members in the church. Put simply, most of us deal with mental illness. It is “we”; it is not “them.” When I tell friends about my depression, I’ve learned to expect the, “Um, yeah, me, too” reaction, because that’s almost always the reaction I get.

2) “I believe that we in the academy are perhaps in a position to evaluate mental illness with a social, structural lens in place, especially because these issues affect society as a whole beyond the academy.”

You gesture toward the falseness of claiming any “objective” viewpoint later, but I think you could and should go much, much farther. In my experience and those of my nearest and dearest, the academy is itself deeply sick. If we want to call attention to the social, structural aspects of mental illness, what exactly do we call the phenomenon of the prelim? What do we call adjunct positions? What do we call the tenure review? Within psych research, how would you classify Diederik Stapel? To put it harshly (perhaps too harshly), I think the academy is far too busy fostering and exploiting mental illness to be in any position to evaluate its social and structural aspects.

3) And if you made it past that rant, here’s some embarrassing self-disclosure. I was struck by the repeated phrases “contemporary psychological approaches to the human person” and “psychological insight into the human person.” I’ve dealt with debilitating depression for literally as long as I can remember, but only in the last year have I had to deal with feeling as though I had lost myself. I cycled through more than a dozen psychoactive drugs, some of which affected my personality (as described by a previous commenter); I left a job (academia) that had given my life meaning; and I underwent ECT, which led to extensive memory loss.

Here is an example. During or slightly before the ECT, I heard a beautiful and moving sermon about suffering and the incarnation. It helped me to crystallize my thoughts about God’s role in my own unbearable suffering, and to feel, for the first time ever, that I could accept the incarnation into my personal theology. Through Jesus, I came to believe, God does not take away my burden of pain. I mean, I knew that God doesn’t take the pain away, because the pain was still there. It was a fact. I had, and have, no use for the “all the suffering will be worth it in heaven” line. Even when I get well, the pain will still have been real, and it will never have been worth it. So God doesn’t take it away; but God, in Jesus, might perhaps choose to share it with me, fully. And that’s something.

This is approximately what I thought. Then, two months later, it was gone completely, vanished with so much else from my memory. Four months after that, I came across a description of the sermon while re-reading my journal (looking for precisely such lost things), and I reconstructed it as best I could. But, dude, this was a pretty big idea, pretty central to my spirituality and my construction of myself. My relationship with God, my prayer life, was really really different before the ECT vs. after.

I would describe myself as a well-read amateur in theology, so I have no idea what work might be out there on the malleability of self in the face of trauma. But in the past few months, all talk of “the soul” has left me cold, empty, slightly contemptuous. The model of personhood taught within mainstream Christianity is no longer adequate for me.

4) Perhaps “exciting” is the right word choice, after all. When I think about all these questions right now, there is sadness, anger, confusion, hope; but there’s also that spark of excitement, the catch of the breath that I rely on to tell me: this is a problem worth working on. This is something that could be really, really cool. Theologians, I think, should concern themselves with psychology and with contemporary models and experiences of mental illness, but not (just) because it would be the useful or the compassionate thing to do. You should work on this because it would be awesome. Because it would be interesting. Because it would open up new ways of thinking about people and about God and about people with God. And if awesome, interesting, novel ideas don’t beat back the darkness, then I don’t know what will.

I don’t remember.

(TW: Descriptions of depression, suicidality, and ECT.)

The first thing to know is that I left my Ph.D. program. By last December, I’d been hospitalized for depression and suicidality three times. I was on a medical leave of absence, but I wasn’t getting better. It wasn’t clear whether or when I would improve enough to go back, but it was pretty clear that grad school made my symptoms worse. So I left. It was a devastating decision, though I think it was the right one.

In January I was hospitalized for a fourth time, for the by-now usual reason—I couldn’t be trusted not to try to kill myself. This brings me to the main thing I want to talk about right now: on the recommendation of my treatment team, I underwent a course of electroconvulsive therapy, or ECT. Much of the forthcoming description is based on conversations with my husband, therapist, and friends. The doctors who talked to me about the treatment and the informative booklet they gave me told me that “short-term memory loss” was the most common side effect. In other words, it would be more difficult for me to make new memories while I was doing ECT. More severe memory loss was, the booklet told me, infrequent. (Try this or this for a description of modern ECT. It’s pretty different from the days of Sylvia Plath.)

I have a few memories of the ECT itself. I remember the nurse, the anesthesiologist, and the ECT psychiatrist standing in a small room, mostly taken up by the wheeled bed I lay on. The anesthesiologist—a heavy-set middle-aged woman with dark curly hair—would say, I think, “You’re going to go to sleep now,” and the nurse, standing on the other side of the bed, would tell me, “I’ll see you when you wake up.” I remember the experience of succumbing to the anesthesia: the pull of exhaustion (when I would dutifully tell the nurse, “I’m getting sleepy”), an infinitesimal moment of panic where my brain would yell, stay awake! Don’t lose control!, and then the delicious sensation of relaxing into the darkness: It’s over now. I don’t have to worry any more. Sometimes I would dream. Then struggling up into the world again, being reminded where I was, drinking juice or tea or ginger ale in a waiting room with the other recovering patients for a monitoring period. If I was an inpatient, an MHS (Mental Health Specialist) from the Short-Term Unit would come down with a wheelchair to bring me back up to my room. Otherwise, eventually they’d call my name, and my husband would come take me home.

That’s about it. I remember hardly anything of the six-week period during which I was having ECT. I know Carmen came to visit me because my husband has told me that she did, but even when he describes the visit (it snowed a lot; we had coffee with a friend from our master’s program), it doesn’t evoke any internal recognition. I don’t remember the books I read or the things I did. I stopped writing in my journal. I had nothing to say.

They tell me that I was pretty happy during much of this period. I didn’t seem to want to kill myself any more. I didn’t seem to want anything any more. My doctors were pleased with my progress. They sent me home from the hospital, serene; and six days later I was bad enough to be readmitted. I seemed to be doing fine in the hospital, but I didn’t stay fine at home. The ECT seemed to be working, but it wasn’t working well enough. (In retrospect, I think the inpatient environment was helping more than the treatments.) The ECT team decided to up the dose a bit, from “unilateral ultrabrief pulse” to “unilateral brief pulse,” in the hope that a higher voltage, a larger or longer seizure, would solidify the effects. They were optimistic and reassuring.

Allie Brosh of Hyperbole and a Half recently published an account of her own depression. It’s funnier than I can possibly describe, and hits hard and truly. Her description of her symptoms is different from mine: she describes an overwhelming, vacuous apathy, the inability to feel anything or care about anything, a calm deadness that cut her off from everyone she knew. What struck me was that that’s pretty close to how I felt, not during the depression, but during the ECT—or, more accurately, how I didn’t feel. Everything was muffled. I would go to therapy and just sit, saying nothing, for ten minutes at a time. It wasn’t that I was keeping quiet; my head was simply empty. I had to be reminded to eat. My daily planner—usually full of scrawled and crossed-out appointment times and things to do—is virtually blank for these six weeks.

The worst, according to my husband, was the week when I had three treatments, on Tuesday, Thursday, and Friday. When the weekend rolled around, I was unable to carry on a conversation—by the time I came to the end of my sentence I would have forgotten what we were talking about. This didn’t seem to frustrate me. I would start to reply to a question, stop to think, and then trail off into silence and resume my usual activity of sitting quietly in an armchair. I could neither describe the book I was reading nor remember the main characters’ names, let alone summon up an opinion on it. My husband tried to engage me in making scones, but I couldn’t figure out how to follow the recipe. Though I could still read, I couldn’t remember what I read for long enough to carry out the recipe’s directions. As my husband put it, “You weren’t even a zombie. At least zombies want to eat something.”

Throughout all this, I was pliable, compliant. The ECT doctors and nurses were kind to me, and they didn’t seem concerned about the disappearance of my personality. They said I was doing just fine. All my usual suspicion of expert opinion, my native trust in my own experience, my natural cussedness, were gone or muffled or hidden away somewhere. As I was reminded at every turn, I was at the best psychiatric hospital in the country, maybe the best in the world. I was extraordinarily lucky to be here. They knew what they were doing. Nonetheless, gradually, through the fog, I began to listen to the voices—both internal and from the people who knew me—telling me that something was not right, that this was not what I signed up for. I was coming to the end of the course of treatments anyway—usually patients undergo around 12 treatments, and I was up to 15 or 17—and so, eventually, I stopped. I left a phone message saying that I thought the ECT was hurting more than it was helping and that I wouldn’t be making my upcoming appointment, and that was that. That was the end.

It wasn’t the end, of course. From where I stand now, it was only the beginning. I don’t remember those six weeks of being vacant and withdrawn, but I do remember coming back to myself and discovering that large chunks of me were missing. The discovery was gradual—who can remember what she doesn’t remember, after all? First there was the reawakening, the week or two during which every day I felt more alive, more myself. It was like coming home after a long trip and turning the lights on in one familiar room after another. I started applying for jobs again, and answering my phone and email. As I came back, I began to notice gaps. Someone would refer to a visit made or a joke told or a dinner eaten, and I would say, “I don’t remember that.” The things that had been lost expanded in scope: the visits with family last Christmas, books read over the last several years, a friend’s pregnancy. I didn’t remember working at Starbucks for two months in the fall, my husband’s best friends from his graduate program (who have become dear friends to me, as well), the crisis plans and other strategies we’d come up with to fight the depression. I couldn’t recall how many times I’d been hospitalized because I was suicidal, or when, or where. I didn’t remember reading and becoming annoyingly obsessed with Infinite Jest over a period of several weeks. I didn’t remember the therapist I had seen for over a year through my school. I didn’t remember taking an Inquirer’s Class at my church. I didn’t remember this blog.

Some of these things more or less returned to me once I was reminded of them. Some of them didn’t.

Eventually, quite recently, I realized that most of the time I spent in my Ph.D. program is just…gone. I’m sure that if I went back through my notes I’d get back some inkling of the classes I took, the papers I wrote, the people I knew, the part-time jobs I took on. I might even remember the steps that led to my withdrawal from the program. Given how fiercely I’ve loved this career, how much of myself I’ve put into being a scholar, and how many sacrifices I made for it in the past (hello, long-distance marriage! Hello, crushing student loans!), I suppose my reasons were good. I expect that, like many of my choices in the last two years, it boiled down to “I have to leave school because otherwise I might die, and people don’t seem to be okay with that.” But to be honest, I don’t remember.

I’ve always been forgetful. It took me years to make checking for phone, wallet, and keys part of my routine before I left the house. I forget people’s birthdays, holidays, library due dates. Somehow I thought “short-term memory loss” would just be more of the same. It wasn’t, in ways that I was totally unprepared for. In losing my memory, I lost myself. Who I was—who I am—feels discontinuous, sketchy, tentative. I rely on other people to remind me. Many days, I feel like a ghost.

Perhaps the most unsettling part was forgetting where my belongings came from. I hadn’t even noticed the constant internal narrative of recognition until it was gone. Maybe I should wear that red wrap dress that my sister got me that looks really good on me; it would work with the brown boots from Target. I’ll make a pot of tea in the blue teapot Carmen gave me. These are napkins I sewed from cloth that I bought on sale at Gather Here. The sweaters, the can opener and cheese grater, the knickknacks, the earrings, the mugs, the scissors—they looked familiar, but there was no history to them. They, like myself, had apparently materialized from thin air. I wondered if I was even the same person who had bought these jeans, made these napkins, picked out these earrings. It felt more like I was a vaguely unsuitable replacement. I drank Earl Grey from the same hand-thrown blue pottery mug and wore the same size 10 turquoise flats, but they didn’t belong to me in the same way as they did to that other Mary, that Mary who used to know where they came from and how they came to be in her hands.

I’ve taken to asking my husband endless questions to fill in gaps and verify what I think I remember. Where did we get this red Dutch oven? What about the silverware? Do you remember when I got these jeans? What did we do when Penelope visited? Did we visit Brian and Lori in New York? Sometimes his answers prod open unnoticed doors, and I remember what Brian and Lori’s house looked like, or the name of their dog. Sometimes, particularly for events from the period of the ECT itself, there is nothing. I learn his answers like I’m in a history class, imagining what it must have been like to live the life he’s telling me about. So gradually, between the things that are coming back and the things I have to let go of, I’m reconstructing my life. I’m rebuilding.

Ernest Hemingway had ECT, I found out recently. He wrote to his biographer, “It was a brilliant cure, but we lost the patient.” When I tried to describe to my psychiatrist what ECT had done to me, she said, “Well, it sounds as though you found something worse than being suicidal.” Which sounds about right.

Am I angry? Some days I am so filled with rage about all this that I can barely contain it. Today I’m less angry. On the one hand, the treatment that was supposed to help me ended up hurting me profoundly instead. (Incidentally, the main symptom ECT was supposed to deal with—the suicidality—came right back after I stopped.) On the other hand, the psychiatrists and nurses were very clearly trying to help, and they very clearly believed that ECT was something that helped people. I met a lot of people at McLean who were doing ECT, and (so far as I remember) they were uniformly enthusiastic and positive about it. They thought it was helping them. The fact that they also tended to be docile, placid, without discernible will, might have warned me; but I was desperate. Anyone who didn’t seem to be under the impression that she was being tortured, relentlessly and without mercy, was doing better than I was. I knew ECT posed a risk. But I wish someone had told me that I was risking myself.

The Ministry of Availability

I took a day off work today.  Yes, I am “sick.”  My minor medical condition could justify a day off.*  But more than “sick,” I am tired.  My work exhausts me in a way that it really hasn’t for the last two years.

This year, I have moved into a direct ministry role.  For the past two years I did some combination of teaching and service learning and saw my role as ministerial, as I firmly believe that teaching is a ministry.  But this year, I am The Campus Minister of the school.  I coordinate the retreats, I stock the Campus Ministry candy bowl, I am the supplier of tissues for those who come into my office crying.  I did not think this transition from ministerial to minister would be so challenging.  After all, it’s the same school, same students, same colleagues.  But what I’m finding at the end of each day is that I am exhausted mentally and emotionally, to a deeper level than I have been by any other work.

Beyond the retreats, liturgies, and service work, ministry taxes me so much because of how available I have to be.  What I didn’t know before I started is that being a minister means being available to whoever drops by my office to chat, discuss a problem, or find advice and encouragement.  Students and staff alike come into my office seeking something–they flop down on my couch and start talking, and I have to turn away from my computer and listen.

At first I was annoyed.  I thought, “I don’t have time for this!”  (Especially since it happens approximately 200 times a day.)  “This work is really important!  Do you think retreats plan themselves?” I thought self-righteously.  And I began to worry a lot about being able to get everything done–every time I had to stop working, I grew anxious or preoccupied and couldn’t focus on the person in front of me.

But somewhere after I directed my first major retreat, I realized that listening and being available every day in my office doesn’t take me away from my work as a minister–it is my work as a minister.  I can’t be a good minister unless I listen to my community, even in the most casual and mundane ways.  By stopping to chat with a student on her way to lunch, or a test, I became a little more attuned to what students worry about and how best to reach them spiritually.

And beyond my students and my work, as I listened more and more, theology came pouring out of me.  In the years since grad school, I have not picked up a theology book once; being in grad school just seemed so disconnected from the life of the Church and by the end, I mostly felt that I was done talking about theology and ready to start doing theology.  But as I listen to students’ questions and problems, I suddenly have so many ideas swirling around in my head.  With my ministerial experiences as my foundation, I see so many connections to what I’ve studied and want to develop those ideas into theology.  Being a minister has breathed life into those ideas I spent two years discussing in grad school and reinforced to me the importance of doing good theology.

So that is what I have been thinking about lately.  What if we made ourselves more available to each other?  What if the leadership and theologians of the Church made themselves more available to the faithful?  If listening makes us better ministers, and being ministers makes us better theologians, shouldn’t we intentionally seek out opportunities to listen?**   I understand specialization makes ministry and theology more sophisticated, but in the process, we also divorce theology from ministry and prevent the kind of good theology that flows from ministry and good ministry that is rooted in theology.

The importance of availability is not a novel idea, but I’m not referring to the kind of instant availability smart phones and the internet give us.  I can tweet at the Pope now, but I know he is not truly available to me.  I’m talking about availability on a person-to-person basis, built into the schedules and training of Church leaders and theologians.  To academics, this might seem outrageous; I know most academics would give me the standard answer–specialization gives academic theologians the freedom and time to produce good theology.  But I honestly think that specialization comes at the cost of theology rooted in the actual experience of the Church.  Given the rate at which Catholics are leaving or disengaging from the Church, it seems that one of the highest priorities of those interested in the Church’s future should be to understand and respond to the needs of the faithful.  Being available ministers is the first step in that process.


*Psst!  Don’t tattle on me!
**I won’t make the mistake of assuming my experience should apply to absolutely everyone, but I think in general, better connections between theologians, Church leadership, and the faithful are a good goal we ought to pursue.

Intellectual love

A classics teacher I have never met blew my mind today.  I was meandering around on the internet, and a few clicks into a mindless browse, I came across a teacher’s musings, where he asked, “How can I get students to love dead languages?”  On its face, it’s a simple question.  It’s what most teachers strive for.  But I have never framed my pedagogy this way.  How can I get a student to love theology?

I am showing the movie Romero in my classes next week.  Whenever I show this movie, I have to check myself a little bit, because it is a movie that I really value personally; now that I look back, I can see that the first time I watched it was one of my steps towards understanding and loving liberation theology.  So I have to remind myself that not everyone is going to have a LIFE CHANGING EXPERIENCE while watching this movie in my classroom.  But the question above is the perfect way to reframe the issue.  How can I get a student to love Oscar Romero?

In case you’re not familiar with Romero, let me give a little bit of background.  Oscar Romero was a priest in El Salvador in the 1970s, at the outset of the brutal Salvadoran Civil War.  As a minister, he initially opposed any kind of Church intervention in politics, explaining that his call was to serve his people’s spiritual needs and not to organize a revolution.  He felt that his fellow priests were misguided in their attempts to change the social order (like poverty or disenfranchisement) because it resulted in sympathizing with or embracing socialism/Marxism.  Because of this “non-involvement” stance, he was appointed Archbishop of San Salvador, a strategic move for a Church that, at the time, was desperate to stem the tide of communism in Latin America.  But soon after his appointment, Romero’s close friend Fr. Rutilio Grande was assassinated simply for urging his parish to organize against the conservative government and vote in democratic elections.  Inspired by his friend’s life and death, Romero began to understand the central concept of liberation theology—it is not enough just to serve a person’s spiritual needs if her/his physical needs are not being met.  He began to see that in order to serve God and his people, in the context of 1970s El Salvador, he had to care about the political situation, because Salvadorans were being kidnapped, tortured, raped, murdered, and oppressed by their own government.  Through his position as archbishop, he began to speak out against this oppression and came to embody a true Christian solidarity by struggling alongside the poor and oppressed of El Salvador.  This solidarity and opposition to oppression took him all the way to his death.  He was assassinated in 1980 while saying Mass.

I first learned about Oscar Romero in my religion class junior year of high school; we watched Romero and I was changed.  After learning about what Romero did and what El Salvador went through, I couldn’t think about anything in the same way anymore.  The movie got under my skin like a splinter, making me rethink how I thought about justice and what commitment I had made with my life towards living a life of Christian solidarity.

So how did I grow to love Oscar?  Well, what I loved learning about him was that he changed his mind.  He wasn’t born a martyr.  His conversion from “non-involvement” to fearless solidarity is what I find both heroic and understandable—Romero lived in a terrible situation and reevaluated what he believed in the pursuit of being a better Christian.  He wasn’t perfect, of course, but his life serves as a model of holiness.  He faced real “persecution for righteousness’ sake,” and his fearlessness and commitment to justice are inspiring.

If I had to answer the above question, I would say that I get students to love something by showing them why I love it.  I can show them why I love Oscar Romero, and I think that will go a long way for some students, particularly students who already like theology or are engaged in my class.  But what keeps me up at night is how to reach the student who doesn’t perceive any commonality between me and her, who refuses to even try to see why what I have to teach her might be cool.  How can I get her to love Oscar Romero?

Secularism and Biblical Studies

I think there’s a general perception that Biblical scholars have a secular worldview. I know that when I first began thinking about graduate programs, that was my expectation. My undergraduate experience was in a religious studies department at a public university, and while many or most of the students had a religious background that informed their studies, it was clear to me then that we were expected to check those at the door, so to speak, and approach a religion from the outside in. For me, with interests (at that time) in the formative periods of Judaism and Christianity, that wasn’t hard (perhaps surprisingly?). It was clear to me that even if we thought of the same texts as sacred, my religion was not the same as that of Second Temple Jews, or even of the early Christian communities. My little cousin looks astoundingly like his grandfather but is clearly a different person.

[Now that I think about it, that was kind of a strange situation to be in—many public universities don’t have a religious studies department at all. I’d like to talk with my professors there about how their teaching is influenced by the type of program it is, whether they would teach from a different perspective in a different situation, and whether they have to tread carefully with the constant spectre of the state breathing down their necks. But I digress.]

I think that this perception becomes a stigma, even. The Bible is a career to you, and you spend your whole life picking it apart and de-sacralizing it, so to speak. How can you possibly take it seriously as a religious text? (Your mileage may vary, however—I’d be interested to know whether others have the same sense.) The question of how such a thing is possible is a great topic for another day. The point is that it is the case; the more I get to know others in my field, the more I realize that most of us do have some sort of faith commitment.

And yet—and yet—we’re still expected to check those at the door. It’s not that we pretend they’re not there; we just don’t really talk about them. We talk as if they didn’t inform our every thought; as if they didn’t matter. This is less true in theology departments, as I’ve since learned—there’s more of a space, in classrooms and conversations, to be more than a brain with legs. One of my great memories from [mystery program] is of the last ten minutes of a seminar class dealing with canon formation. We’d spent the past two and a half hours taking the canon apart, looking at how canonical choices were made, asking ourselves, “What is a canon, anyway?” and coming exhaustively to the conclusion that really, we had no idea. (That also is another post for another day.) Finally one man—an Episcopal priest, and a very good one, as it happened—sat back and said, “Okay. So what do I tell my congregations about this?” And suddenly everyone started talking at once. This was a fascinating intellectual question, but also a serious challenge to faith. What did it mean to be part of a “religion of the book” if we couldn’t decide on the nature or the content of the “book” in question? And we all really wanted to talk about that; and in that place at that time, we could.

Those kinds of conversations are rare in the classroom; and my experience so far has been that people want to have them in private but need a real atmosphere of trust before they’re possible. For me, this blog is a place to make that possible—hence the insistence (however illusory) on privacy; I want it to be very clear that these are not the things I am publishing or teaching. This is a separate space.

So, now we get to the question: is this a good thing? Isn’t this fragmenting of ourselves completely artificial? Isn’t this insistence on an “outside-in” attitude toward religion just a holdover from the Modern period? Aren’t we just placating the atheistic god of Science, trying to be a science (which we’re clearly not—don’t get me started on the “social sciences”) in a vacuum-sealed world cut off from our essential humanity?

And I would say: “Yes, it’s a good thing,” but with footnotes. Yes, we should check our beliefs at the door (but we should also realize that that’s impossible). Yes, we should insist on critical distance from the text for ourselves and for our students (but we need to have so much patience with students for whom that comes hard). Yes, we should keep the “public” conversations—the papers and the conferences—on the “secular” level (even though secularism doesn’t exist in the way it was originally conceived). And here’s why: that is the only way (that I can think of, anyway) we can all have a place at the table. That’s the only way I, the then-nonbelieving child of a low-church Episcopal priest, could have fallen in love with this field with the help of a Jewish convert professor and a deeply committed Catholic friend (who, incidentally, ended up studying Hinduism, in large part because he didn’t feel able to maintain that critical distance). There needs to be a safe space to talk about how your work informs your faith; but we also need the space in which we talk about the work itself to be safe. It can’t be okay, for example, for Jewish scholars to be the targets of proselytization at conferences. As I see it: for now, at least, secularism is like a language that’s foreign to all of us; but it’s the only language we all speak.

(That’s what I think these days, anyway. I’d like to know what you think.)

Let’s not blame the teenagers

Yesterday, a colleague in the religion department sent along a link for David Brooks’ most recent New York Times column, “If It Feels Right…”  In it, Brooks discusses the results of a sociological study of American teenagers and moral decision making.  Brooks is pretty pessimistic about the future of American teenagers and their capacity to make moral decisions; he bluntly calls the study’s findings “depressing.”  But as someone who talks to teenagers every day about morality, I am more optimistic.  Frankly, I find the study a little unfair and an indictment of American adults, not teens.

It must be noted first that while I am surrounded by teenagers (a fact that I am reminded of daily as I hear Justin Bieber belted out in the hallways), I teach a small subset of that population: I teach at a private, college-prep high school whose students have had years of religion classes.  These facts alone distinguish my students from the average American teen.  But from what I see and hear from my students when they aren’t being careful to impress their religion teacher, their opinions and decisions are not all that atypical.  They still are teenagers, and are not exempt from the pressures of their age or culture, despite their educational background.

To begin, Brooks and the sociologists he cites are correct in their first assessment: students find it difficult to identify a moral issue.  I completely agree.  I assigned a morality research paper at the end of last year (after 8 months of morality class) that asked students simply to ask a moral question and answer it.  I can’t tell you how many papers I got back that questioned the legality of gay marriage, abortion, or the death penalty.  My students, even with all their privileges, could not write a moral question that did not primarily ask about law (though they could identify which hot moral issues were debated in the public sphere and formulate vague questions about them).  However, this skill can be taught–it is what we practiced in my morality class this week.  Parsing out the abstract legal, scientific, medical, religious, or personal issues present in a moral question is difficult, and I am not surprised that most American teens can’t do it–most American teens are not enrolled in a class that asks them to practice this skill.  Is it really fair to judge students as morally illiterate if we don’t teach them what morality is?

The second thing to say is: yes, the siren song of relativism is particularly compelling to youths subject to peer pressure.  It’s hard to be morally stringent at an age and maturity level that so values social standing.  And I suspect relativism will only become more compelling to them as they advance to higher education and learn about the cultural differences that form its cornerstone.  But again, this issue is one that can be taught.  My class is covering relativism today and tomorrow.  (Sidenote: we teach this lesson through the lens of female genital mutilation.  It’s a bit sensationalistic to go to one of the MOST EXTREME moral quandaries, but it’s also really interesting to gauge their reactions to it.)  And for the most part, students can see the intellectual inconsistencies of relativism.  They struggle with it, but they can see why relativism is impractical or unrealistic.

What I find over and over again is that students *know* what is moral.  They can give me the “religion teacher” answer they think I want to hear.  They are smart enough to know what is expected of them, or at least, how to please an authority figure.  What is less clear or compelling to them is *why* they should do the moral thing.  They haven’t been given a compelling enough reason not to always act out of (usually short-term) self interest.  The problem isn’t moral illiteracy; it’s moral laziness.  But this is because moral courage is harder to inculcate.  In this regard, they are not all that different from most adults I know.  And that is what I see as the particular challenge–not to show them what’s moral, necessarily, but to demonstrate what benefit there is in being moral, in an attempt to draw out that courage.  Some of the hardest questions I have received in the classroom have asked me why I personally subscribe to a particular belief whose benefit is not obvious to a teenager (example: how to explain my commitment to fair trade coffee).

My point is: let’s not blame the teenagers.  Let’s not get all depressed about the moral state of American youths before we really consider what we can do to teach moral decision making.  Brooks is right when he says that this study says “more about adult America than teen America”–if the teens are morally illiterate or lazy, it’s because we haven’t taught them any other way to be.  I’d say this study is a clear indication that a class on ethics is not beyond the jurisdiction of a public school education.  Of course, it can’t espouse a particular belief set, but understanding different approaches to ethics and exactly what goes into a decision is a skill that does not need a prescribed belief set.  My students love morality class if only for the opportunity to give their own opinions about “what would you do?” scenarios and argue with their classmates.  We should give all teenagers that opportunity to examine their own decisions and learn about ways others do the same.  Before we go lamenting the future of America, we should give teenagers the chance to develop their own moral sensibilities and understand why moral courage is a positive virtue to attain.