For those of us who teach and who love teaching, there are few things worse for a classroom than apathy. I even prefer anger and resentment to indifference — at least you know the students are engaged enough to be angry with you or your topic. As a new professor in the mid-Aughts, I took it as part of my job to inspire desire, to show students why a course or even just one day’s topic mattered. I came up with all kinds of dog and pony shows, games, and Socratic examinations to try to convince students that they should be interested in what I had to say. A decade later, I recently sat through a meeting where a senior colleague expounded on what an important part of our jobs it is, as university professors, to help students understand why what we are teaching is important. But I’ve begun to wonder if that’s really our “job.”
I began my life as a university professor with a strong feeling that teaching is a vocation, a calling of sorts, work worth doing. I also believed that I could change the world, one mind at a time. The naïve hubris of younger me makes me cringe just a bit now. I do love what I research and I love what I teach and in general (there are exceptions) I love my students. But a calling? Changing the world?
Max Weber attempted to explain the loss of cultural depth and meaningful lives in European society in the late 19th and early 20th centuries through a reexamination of the social structures of capitalism as he knew them. He famously argued that, contra Marx, capitalism had to be understood as something more than a social system imposed from the top down or even internalized ideology; for Weber, the meaninglessness of life under capitalism—what he called, roughly in English, the “disenchantment of everyday life”—came from a radical reduction of all human values to an efficiency calculus (which, ironically, he traced historically to values that arose from Protestantism, a movement that had tried, for its part, to re-enchant the world). Culture had been, for Weber, rationalized, that is, reduced to the rational cost-benefit analysis of maximizing profits through the quantification of time and money, with science as the primary tool (and justification) for that rationalization. This made the functioning not just of the capitalist economy, but of all of society’s bureaucracies, laws, and practices appear rational so long as they saved “time and money.” All other activities and values lose out. Thus have we built for ourselves, in Weber’s most famous of phrases, an iron cage of rationality, market rationality, from which we can’t escape. I have to wonder what he would’ve made of the intensity of rationalization in the neoliberal order of our time, orders of magnitude worse than what he was experiencing 100 years ago.
During that same period, education reformers in the U.S. were divided between those who thought that this kind of rationalization should be applied to schools, where it would increase the efficiency of education, and those who thought that such methods of organizing learning were actually a kind of miseducation, as John Dewey called it. As all of us who sat in rows doing exercises out of a pre-designed textbook to prepare for standardized tests know, the industrial model of education won out.
We now live in a world where every public dollar spent on education must be justified, rationalized as it were; where classroom content must be monitored from above through officious bureaucratic flows of power and control; where students are reduced to numbers. Our students come from a world of pinched opportunities, their goals narrowed to a kind of mobility that data reveal hasn’t existed in the U.S. since the early 1970s. K-12 education has become the prequel to college; college has become job training; and at all stops along the way, students are widgets and teachers are “deliverers of goods.”
And the apathy in the classroom is astounding. After twenty years of teaching (the last ten of those as a full-time professor), I find the change palpable. I’m not really saying anything new or earth-shattering here, especially to people in education. But I’m wondering at a world where the wonder is more or less gone, where the excited student is a rare endangered animal you’re lucky to spot a few times a semester, a world where my job is to convince students they should care. Is that even possible?
In the Jewish tradition, which has a couple thousand years of history of teaching and learning, there is an old idea of the sacredness (the enchantment) of the teacher-student relationship. To simplify a bit, the old Talmudic idea was that when a student learns something new, it comes as a revelation from God; so for the teacher, when the student learns something new, the student becomes the revelator, the conduit to the divine. A Rabbi Shapira,* who lived in Poland during the Shoah, struggled with the role of the teacher in this equation, especially given some of the contradictions within the Talmud about the role and holiness of the teacher. Whereas the Talmud taught that a true teacher was like a messenger from God, Shapira could not get past the fact that teachers are just humans like all others. He taught that students should seek out the teachers who are, from time to time, not always, maybe not even often, but from time to time, like an angel.**
The gulf between an ancient, traditional way of seeing the teacher-student interaction and the rationalized, industrialized, neoliberalized one is wide and profound beyond words, on both sides of the relationship. In this old Jewish tradition, I see another, perhaps more profound dynamic, one of desire. The student wants to learn, experiences the wonder of “revelation,” seeks out the teacher who can provide that experience, who is, from time to time, like a divine messenger despite being utterly ordinary and human most of the time. The teacher desires the revelation that can only come from the student, for it is the student who is learning, opening, changing.
I do not long for an old yeshiva, which was surely filled with as many bored and apathetic students as engaged ones. This isn’t, for me, about nostalgia for a halcyon learning environment (although I do think the Socratic life of wandering around the countryside in togas talking to friends about philosophy sounds pretty awesome). Rather, it’s to highlight how far from a revelation our industrialized education model is. Revelation still happens, of course. Every semester I still have a dwindling handful of students who are learning and want to learn, who have those enlightening moments of uncovering hidden knowledge, of newness in the mind. But I do see how both the larger culture of education in the U.S. (and perhaps the world), which treats education as a means to an end, a pecuniary end no less, and the actual structures of the university I teach in are reaching as far away from those revelatory moments as possible. I would even argue that the structure of the modern university in many ways prevents or forestalls even the possibility of a seeking student and a seeking, imperfect teacher finding each other to share a moment of revelation.
*His full name was Kalonymus Kalman Shapira; deported from the Warsaw Ghetto, he was killed at the Trawniki camp in late 1943. He and his writings are now known as the Aish Kodesh.
**See Avivah Gottlieb Zornberg, Bewilderments: Reflections on the Book of Numbers (New York: Schocken Books, 2015): 28-30.
One of the great gifts of being a teacher is that, when the school year ends, my attention turns from teaching to research, exploration, and stretching my brain a bit (a necessary refresh for all teachers). In neoliberal terms, I suppose it’s professional development. But I prefer the more 19th-century way of thinking about “the life of the mind” and just taking time to think and be. This can also be a challenge, however, when there are tasks that need doing—especially writing and publishing tasks, for those of us whose jobs require them—but the freedom of a lazy morning coffee often moves me in a different direction, a kind of much-needed procrastination that can be both fruitful and a distraction.
Before breakfast, I reached for my Pevear & Volokhonsky translation of War and Peace, which I’ve been picking at for the past few weeks. As an undergraduate, I had read Anna Karenina and adored it, so I was expecting to pick up Tolstoy’s supposed masterpiece and be drawn in and inspired. The Anna Karenina I had read as a 23-year-old was an edited edition of the Garnett translation (something I will admit I paid no attention to as an undergraduate). The Garnett translation is known for its beautiful prose style, but also for its inaccuracy—she apparently worked as quickly as possible, skipping over words and entire passages that she didn’t understand, to make deadlines. Kent and Berberova’s edit sought to maintain the beauty of Garnett’s translation while repairing her errors and filling in what she had cut. As my first experience with Russian literature, it was, to be a bit cliché about it, a revelation. But I can only describe War and Peace, so far, as a boring slog through wooden prose, instrumental at best, unreadable in its worst passages. I decided to give Tolstoy up and pick another novel for summer reading.
Coincidentally, as I ate my breakfast I stumbled on this fascinating comparison of Anna Karenina translations by Janet Malcolm, wherein she takes Pevear & Volokhonsky to task for their translation. And so I spent the morning wandering down an unexpected path, reading about the art of translation and comparing translations of War and Peace. I skimmed over Walter Benjamin’s essay “The Task of the Translator,” which I’d read in graduate school, and then Vladimir Nabokov’s scathing essay on translation, “The Art of Translation,” which I’d never seen before.
Because I stand in awe of Nabokov as a writer, I didn’t expect that I would disagree so strongly with some of his thoughts on translation. Judging from a few commentaries on his essay, he was more or less an unreconstructed Russophile (in the 19th-century sense of the word, as in an ethnic chauvinist) and believed that the beauty and complexity of Russian was untranslatable. He concluded, then, that the job of the translator was more or less to transliterate and be as “accurate” as possible to the original. He was against rendering a translated work into beautiful prose in the target language at all, preferring what might be thought of as more a transliteration than a translation. There is a grain of truth here: he felt that the aesthetic preferences of the translator’s own context would distort the historical or contextual meaning of the original. Yet this is why so many people love Garnett’s (admittedly problematic) translation: her prose in English is both contextually close to Tolstoy’s Russian (late 19th-century ruling class) and beautiful in English—which, of course, Nabokov despised. Because his own English prose is so shockingly gorgeous, and in more or less the American dialect no less, I had not realized how much Nabokov sneered at English as a language, viewing it as too simple (by comparison, the sheer number of cases and declensions in Russian is mind-boggling).
Perhaps hewing more closely to Nabokov’s notion of translation, Pevear & Volokhonsky’s War and Peace does an admirable job of attempting to reproduce Tolstoy’s explicit and purposefully “bad” Russian in English, but ends up, rather than conveying what Tolstoy was doing with his playful and purposeful stretching of the Russian, reading like a slavish transliteration without soul. Apparently, Tolstoy himself preferred the translation by the Maudes, with whom he’d become friends in the early 20th century. But the translation that people praise for its prose in English, even when they find fault with its accuracy, is the Garnett. Unlike her Anna Karenina, Garnett’s translation of War and Peace has never been edited and exists only in its original, flawed form. There is also a recent translation by Anthony Briggs, used now by Penguin Classics, which has won awards (a side-by-side look at some passages alongside the Pevear & Volokhonsky shows its skill with English). For the moment, I’m done with my attempt at summertime Russian literature, as I can’t decide which translation to pick—I’ll probably opt for some candy, like some epic sci-fi or a re-read of Game of Thrones.
This kind of intellectual wandering, “vagabondizing” as Melville might have said, is one of the reasons I love summer.
When I heard that a few of my students had asked that I be the professor to deliver the faculty convocation address at our department graduation, I was surprised by how deeply grateful I felt. Sometimes in life, it’s the small, incidental acts, when another person reaches out and says something kind, that can transform a moment or, if we’re lucky, the way we see the world. And so I wanted to honor and thank those students for esteeming my thoughts a worthy way to mark their graduation—a great accomplishment and moment of transition in their lives—by putting to paper some of what I might have said to them in a graduation speech. The graduation speech is a venerable art form in American life, and I cannot hope to match the great ones; so this is just from my heart and mind right now, thinking about what matters to me most as a man and as a professor. I sent it to my students and post it here with all its imperfections in the hopes that they might find a nugget of wisdom for themselves at this liminal time in their lives.
My Wish for You
from Prof. Ormsbee who has been honored to be your teacher
21 May 2016
In some ways, we have no choice but to speak from where we are at the moment we take a breath to speak. What I want to share with you today, then, comes unavoidably out of my current life moment as I face the other side of middle age, the disappointments of life, and my failures. Yet my thoughts also emerge from the past few years of seeing you nearly every week in the classroom, and watching you learn and change and grow as you struggle with new ideas, new perspectives, and facts unknown to you before—because in some ways, talking to students, reading their work, watching them transform gradually before my eyes is without a doubt one of my life’s greatest pleasures and among the accomplishments I am truly proud of. Above all, today I want you to know that I have no doubt of the potential of each of you to become Good, with a capital-G, in your lives and in the lives of others.
Of course, saying as much presumes that I know what being “good” even is, or in the old philosophical formulation, what it means to have a “good life,” or, as Socrates would have said, what “a life worth living” might be. Human values are always emergent; that is, they are always in the process of becoming as we argue and struggle with each other over what we actually value, what is worthy of our time and attention and strength. And so we have to work out together what it means to be Good. We must talk about it, reckon, try it out, fail, and try again. In this way, much like calculating limits in calculus, we might throughout our lives approach goodness, getting closer and closer, so that when we are preparing for death or have died, people will look at our lives and think, “She was a good woman” or “He was a good man.”
I cannot sugarcoat this for you—having a conversation about goodness in our world can feel almost ridiculous. Every generation has its own struggles and problems—I am not arguing that we are unique in that regard. But I am saying that we live in a global culture that makes being “good” and striving for “goodness” more and more difficult, if not laughable. We have hidden “goodness” away behind a fog of jobs and accumulation of consumer goods and celebrity, and within the clouds of always-on, always-connected iCulture. Indeed, there’s a way in which just talking about “being good” or having a “good life” evokes either mocking snickers or impassioned jeremiads about STEM education and job markets. Goodness seems “out of touch” with the real world, even hopelessly naïve. And yet, in the face of YouTube videos of beheadings, looming mass starvation in drought-stricken regions around the world, a melting polar ice cap, drone strikes and collateral damage, and staggering global poverty, how can we afford not to talk about what goodness might mean at this moment in history?
Any serious talk can be challenging in a world filled with constant noise and distraction demanding our attention. Some of these distractions are the very real, weighty matters of day-to-day life: How will I pay my rent this month? How do I feed my kids today after working a 10-hour shift? Will I be able to get enough sleep tonight? Other distractions are mere attention-seekers that want us not to think too deeply, and that indeed profit from our daily desire to escape into shallow entertainments and mouse clicks. These are the flashy advertisements for useless plastic objects and shoes we don’t need; the Facebook updates about someone’s ugly encounter in the dairy case at the grocer’s; the news headlines designed to eliminate complexity in favor of quick emotional payoff. In most ways, our human values have been narrowed and pruned down until all that is left is a quick efficiency calculation: how much money, and how much time. In this world of economic efficiency, who has the time or brain-power left to ask the questions that matter?
Even education has been reshaped to follow, above all, the efficiency calculus: it starts in preschool, where reading programs are pushed on children barely out of diapers, before their brains are developed enough for the task; continues through elementary schools driven by standardized testing that treats children like widgets in a factory; through high schools ranked on the appearance of success, shown through graduation statistics and college acceptance letters; and on to universities that have shrunk students down into quantifiable “learning objectives” and rubrics to justify budgets and demonstrate, you guessed it, efficiency. We have slowly created a university system aimed at producing what C. Wright Mills, a mid-twentieth-century social theorist, called the “happy robots” of the American labor force, who obey, acquiesce, and work without questioning.
In this moment in history, then, a bachelor’s degree in the Humanities can be a rare gift, as it affords us a few years out of life to ponder and ask the questions that really matter, the questions that help us determine what it means to be “Good” and what a life worth living would look like. Of course, using your bachelor’s degree as a time to think about Big Questions is not what the current economic system demands, and indeed it is what many of the bureaucrats, budgeteers, and reformers at SJSU usually think of as a waste of time and money, because it cannot be measured or quantified and does not “add value” to your “marketability” as a potential employee.
I’m not going to indulge here in a defense of the Humanities as a course of study—you can find those online if you search hard enough, wading through all of the anti-education noise. Rather, I want to ask you to consider your time in the Humanities department in what may be a different light. Your time here has not merely given you the skills you need to be teachers and workers; this was not merely a few years out of your life to check off requirement boxes on your way to jobs, credential programs, marriage, and children. No. In our time together, sometimes explicitly, sometimes in the background, we have been asking what it means to be Good and what it means to lead a Good life, a life worth living. These four or five years of your life have, I hope, introduced you to ideas, thoughts, and perspectives that will serve as a lifelong foundation upon which to build your life’s Goodness.
When the Puritans came to North America in the 17th century, one of their dreams was to found a Good society. The last five U.S. presidents have all misquoted and misused John Winthrop’s now-famous “city on a hill” sermon as a paean to American exceptionalism, the essential superiority of the United States as a nation. But Winthrop’s sermon actually hoped, from a particular Christian perspective, for a society of mutual care and responsibility to each other, a profoundly good society. About a hundred and fifty years later, Thomas Jefferson imagined a society of radical equality, where each person mattered, each person was in a way “chosen” and valued and endowed with rights. For Jefferson, the Good Life was “the pursuit of happiness.” In some ways, the Puritan focus on mutual obligation stands in contradiction to Jefferson’s individualism (and many of you know how I personally feel about Americans’ specialsnowflakery). But put the two together and they may offer a valuable rethinking of goodness, one that is not merely the pleasure or the Jeffersonian happiness of the individual, but includes social responsibility to the people around us. In this way, goodness can be considered an act of balancing individual happiness with social interdependence, and by extension an act of choosing ethical life with other people.
Whereas colonial Americans may offer us a possible ideal of social and individual goodness, ancient Chinese philosophy may contain some practical ways to actually become good people. Inspired by philosopher Martha Nussbaum’s insistence that a worthwhile Humanities education must include cultures beyond the traditional “West,” I started reading Chinese philosophy last fall in my free time. Although I’m new to Chinese philosophy, and we have an expert in Prof. Jochim in our midst, I will attempt to do two thinkers a small bit of justice. Confucius taught that one does not perform religious ritual because there are real supernatural beings or powers out there in the universe. Rather, we perform ritual because the ritual act itself ties us to the past, to each other, and to the future, such that we are attempting to enact a world that should be, but is not. From there, he taught that we can create small rituals that we perform in order to become what we should be. Humans, in this thinking, are works in progress, not innately good or essentially “complete,” but imperfect beings who can and should strive to be better. A later Chinese philosopher called Mencius saw the world as an unpredictable and possibly dangerous place that we, as humans, have no choice but to live in. For Mencius, goodness is a quality that must be cultivated within our hearts and minds, much as one would care for a garden. We prepare ourselves for goodness by becoming good ourselves. This can only happen through care and attention. Clearly, I have only begun my journey into ancient Chinese philosophy from the Axial Age, but just this bare introduction has transformed the way I think about how I live my life, and the kind of man I might become in creating life rituals and cultivating my mind and heart in order to be good.
Many students know some of my own religious history, as I sometimes share pieces of it in class: I was raised in a strict, very conservative minority Christian sect. That upbringing did much damage to me as a gay child and teenager, damage that I still must contend with to this day. Yet I have chosen to retain one habit of mind from my childhood religious experience, one that values holiness. As many of you may know from taking Dr. Rycenga’s or Dr. George’s religious studies courses, most traditional versions of monotheism teach that holiness is “out there,” transcendent, from God, an experience that happens to us or an outside presence that we encounter. In many mystical traditions, on the other hand, and in some religious traditions from India, holiness is an emanation that interpenetrates the entire world, and that we all participate in. Finally, in modern, post-Enlightenment Judaism, holiness has come to mean something that we do, something we create through our actions in the world. In this view, humans are themselves creators: through their work in the world they make the world holy—humans have the ability, the potential, the responsibility to repair what is broken in the world, tikkun olam, by making it holy. I realize that many of us are not religious at all—I myself am, at most, agnostic. And others of us have very strong commitments to specific religions. My hope is that wherever you fall along the religious-spiritual-secular spectrum, you can imagine and consider just for a moment this idea: that we are the sources of holiness, that we through our own actions make the world worth living in, beautiful, holy.
In each of these traditions—American, Chinese, Jewish—the history of the ideas presents us with profound contradictions and disturbing realities. The Puritans despised and excluded from their community all who were not of their specific Calvinist branch of Christianity. Thomas Jefferson had a decades-long “affair” with an enslaved woman who was his own wife’s half-sister, a relationship that can only be thought of as some form of rape, and he left his own children in slavery upon his death. America at large is the story of the displacement of Native Americans; the enslavement of Africans; the theft of Mexico, Puerto Rico, Hawaii, and the Philippines; 200 years of indentured servitude, misogyny, homophobia, and nativism. What use are American ideals of goodness? Chinese imperial history is not immune from this kind of critique either, as over time it expanded and contracted over vast geographical regions, imposing a particular form of Han culture on peoples throughout the continent. And modern Jewish ethics has at its core a struggle between the particularism of being Jewish and the necessity of thinking ethically about all humans in a more universalist way. No conversation about the value of goodness is beyond a reflexive critique of the society and people who created it. And yet, that is what humans do: they create value ideals and fail to live up to them.
But for me personally, this whole conversation can feel beside the point, either too historical or too philosophical and abstract. Sometimes life guts us without warning, and as Martha Nussbaum reminds us, at such moments the emotions can overtake our consciousness, creating profound “upheavals of thought.” These are the moments that make philosophy, poetry, art, history, and social science seem useless, while at the same time insisting on our deep need for philosophy, poetry, art, and history to help us make sense of it all. For me, I didn’t really understand what an upheaval of thought could be until one of my best friends was killed on Flight 93 on 9/11. I still don’t know how to describe the shock and breathless grief that took me that morning, as I frantically called people to make sure that he was not on one of those flights (I knew that he was scheduled to be back in San Francisco that day). I sat in a bar in a hotel in downtown San Francisco a few days later for his wake, and still in shock, all I could see was that everything he was, everything he could have been, everything he might have done was gone, done, cut out of the fabric of the universe. My Aunt Karen passed away yesterday morning from an aggressive cancer that had metastasized throughout her body, undiagnosed because her doctors thought her pain was from post-polio syndrome, a lifelong condition suffered by survivors of polio. When I was very small, Aunt Karen was one of the few people in my extended family from whom I always felt deep, unconditional, profound, and unquestioning love and acceptance. And her life is ended, over in five days of unimaginable, unavoidable pain.
What can goodness really be, then, in the face of widespread historical injustice and our everyday, personal, even common suffering and pain? This spring, some of us encountered together for the first time George Eliot’s profound contemplation of life and love, Middlemarch. Because our class’s focus was elsewhere, we didn’t explore in any depth the real heart of Eliot’s novel, her insistence on our connection and duty to each other as humans. “If we had a keen vision and feeling of all ordinary human life,” she writes, “it would be like hearing the grass grow and the squirrel’s heart beat, and we should die of that roar which lies on the other side of silence.” Eliot believed that part of being human is the unavoidable consequence of our actions, the ways that we, most often unintentionally, affect the people around us. Sometimes our deeds don’t fully manifest in other people’s lives for weeks or even years. But that intense interconnection, if we paid attention to it, might be overwhelming. Prof. Riley, whom many of you know, uses the word “openhearted” to talk about how to deal with this reality. We must not run away from nor ignore either historical injustices or the private suffering of the people around us. Goodness and a life worth living lie in somehow encountering the world and our fellows with an open heart and willing hands, all while minding our own well-being and happiness. Although we cannot right all the world’s wrongs, nor alleviate all the suffering, I must believe that it’s better to spend our lives working to make the world good, knowing the task’s enormity and potential impossibility, than to give in to greed, tribalism, consumerism, or endless swiping through and ranking headless bodies on our iPhones. We can commit to being present in our lives for each other, to make the world, and a life, worth living.
As each of you has touched my life in ways that I cannot identify or quantify using the crass, inhumane efficiency calculus, I want you to know that I am changed for having known you. I am honored to have been part of your lives for a short time in this limited way as your teacher. As you take this significant step, all bedecked in your bedazzled mortarboards, I cannot hope for a life without suffering for you, simply because that is not possible. And I will not hope for a life of wealth or success for you, as I’m no longer convinced that is what makes a life worth living. Rather, from this moment in my life and with an open heart, I thank you for your generosity in sharing a piece of yourselves with me these past few years by wishing for you the strength, wisdom, love, and commitment necessary to make goodness with your lives.
Some friends of mine have been talking about the recent upheaval in Tel Aviv. I suppose “upheaval” is a nice way to say race riots of the kind the U.S. used to have on a regular basis, in which white folks intimidated people of color by marching in the streets, destroying their property, burning torches (and crosses), and from time to time lynching someone. I’ll avoid comparisons of the Tel Aviv riots with the Nazis and Kristallnacht, as they are obvious and painful. On a good day, the difficulties of talking about Israel within a Jewish context are legion, and one always risks being labeled an antisemite for criticizing Israel or an imperialist for supporting it. The rhetoric is heated, divisive, and in my opinion counterproductive. That said, I’m going to dare to dip my toes into the turbulent waters to talk about a particular trend that I see in current conversations about Israel, especially among younger people: the desire to somehow split the difference between the “two sides” of the issue, the assumption being that there is an “extreme” right and an “extreme” left when it comes to Israel, and so the correct or best answer must lie somewhere between the two “extremes.”
Without an actual person or article to argue against, I want in this post simply to hold up for examination the (admittedly decontextualized) idea that taking a middle position between two political positions, i.e., the political “center,” is not only possible but indeed the most desirable political position. The language of the “center” that I have heard invoked often over the past several years (e.g., in analyses of Obama’s presidency, in discussions about the economic collapse, even about torture policies) poses some serious problems for me, both in terms of the ethics involved and in terms of its political efficacy to solve our collective problems. Just on the surface, I would observe that sometimes the center is the desirable position; but sometimes it is not. Sometimes the center is the best choice; sometimes it is the most dangerous. I remain unconvinced that the center position (whatever that might be) is going to be the best possible answer to the ethical problems of the state of Israel vis-à-vis the Palestinians, or to its internal inequalities, with its castes of partial and incomplete citizenship for Arabs, refugees, converts, the civilly married and those in mixed marriages, etc.
To begin, I disagree with the valorization of the center as such. Arguments for the value of the center per se fail, for me, in that they make of the center a political end-in-itself, somehow superior to any other political position. This makes the center into a kind of received or pre-approved morality or politics, excused from the burden of vetting its own values or policy positions. Empirically, the political “center” only exists relative to the political range of its social and historical context, so the “center” is a moving target as you jump from place to place and through history. Such a moving target cannot be said to be the best position, then, in any particular case, any more than a right or left position can be taken as the de facto best answer to any grounded political problem. The center, moreover, is in fact just as ideological as any other political position. But the ideology of the center is often more insidious, because the center more often than not favors the status quo and resists change; the status quo is often the “hidden” ideology, that is, the ideology invisible to itself because it is the ideology of the habitual, the already-is. Choosing the center, then, doesn’t free you from any of the pitfalls of the so-called “extremes”; rather, it places the onus of critique upon the center to justify itself in terms other than the status quo. This is not necessarily the case everywhere, but looking, for example, at the last three years of Obama’s presidency, it seems true of what currently counts as the center in the U.S.
The problem for me comes not in the center as a position, but when the center is valued as an end-in-itself, when the center is the default political position, valorized for its very character of being “between” or “in the middle”; here the center has become what is valued, rather than the contents of the specific political position. The center risks becoming in practice a means to avoid having the hard political and value conversations of a particular issue or within a specific context; it is seen as a position of wisdom, as if slicing the political baby in half to find the middle will automatically give one the best policy or value position.
While a careful consideration of a full range of political values and positions is the sine qua non of a healthy democracy, the normative claim that we should always end up in the center—a claim often facilely made in television “debates” about policy (or by my students in their papers)—in practice actually forestalls a careful evaluation of the range of values and positions, since we already know normatively that the center is the “correct” ending place of our evaluation. Such a normative claim for the center both forecloses the democratic process of deliberatively arriving at best policies—we need to be able to evaluate the range of possibilities and pick the best one, not simply the “center” one—and in effect often defaults to favoring as little disruption to the status quo as possible.
I think of middle-class white folks during the mid-century American Black Equality (Civil Rights) movement, for example, who as a group argued for blacks to stop protesting, use the courts, be patient and wait for change to happen. They weren’t against black folks having equal treatment under the law; but they also weren’t in favor of black folks demanding equality. So they, the sages of the middle, sought the answer somewhere between Jim Crow and Black Equality. The primary value of the center is to make transitions and policies the least discomfiting and disruptive for those whose lives are relatively stable and good in the status quo. We only have to look to the early 19th century “gradualists” to see how that middle way would have worked out.
Another problem with the valorization of the center is that it relies on positing a left and right that are somehow equivalent, both equally valuable and equally problematic. But empirically, they are not. Both left and right represent a range of values, practices, and policies that can be evaluated in terms of ethics and efficacy. The strategies and tactics of various political movements and ideologies (wherever they fall on an already problematic dichotomous left-right scale) differentiate them fundamentally, and they can and should be evaluated accordingly; both the real and possible outcomes of policies (a consequentialist perspective, I suppose) and the ethical implications of specific practices and policies must be the focus of our judgment. The most intellectually dubious norm is to judge a political position by how closely it hews to the center.
In the case of Israel, it is not the left clamoring for exclusion of non-Jews, gerim, and Reform American Jews (let alone refugees). It is not the left with a stranglehold on religious institutions ranging from marriage to adoption to education to military service. It is not the left building settlements while dissembling to the public and the world. I see a right wing that has dominated Israeli politics since at least the early 1980s. I do not see a left in power or a left dominating political discourse in Israel (let alone American discourses about Israel, which are dominated by a love-it-or-leave-it-or-be-called-an-anti-semite ideology). Nor do I even see a unified vision for the state of Israel on the left; rather I see a range of possible values and policies to the left of the status quo—and likewise to the right. So arguing that the two sides are equal and equally dangerous doesn’t make sense on the ground. And arguing for a “center” makes no sense in that context.
To be clear, I am not saying that the left is innocent or necessarily desirable. Far from it. Rather, I am saying that it is not the same as the right, in morality, outcomes, or practices. And I am saying the “middle way” wants to have its cake and eat it too, while pretending that it is itself non-ideological, a peacemaker (if you’ll excuse the King James Christian word), the “wise” who see Israel more clearly than the “extremes”—when in fact, as I said above, it is just as ideological as any other political position. In the case of Israel, the so-called moderate center has the dubious distinction of having enabled, for nearly 30 years, the increasingly far-right control of both politics and religion within Israel, the continued expansion of settlements, the continued exclusion of non-Jews from the state, etc.
In both the U.S. and Israel, the “center” position more or less amounts to, as I said above, favoring the status quo while refusing responsibility for the consequences of the “centrist” political positions. In the real world, that could amount to the end of the state of Israel, in my opinion, for continuing on its present course will inevitably result in the demise of the state, or at the very least its final decline into the unapologetic colonial oppressor and exploiter its Arab enemies have accused it of being for the past 60 years.
Recently, when discussing Allen Ginsberg’s “Howl” and “Footnote to Howl” (here’s Ginsberg reciting) in an American culture course, students got up and left the class when the conversation turned to how Ginsberg uses gay sex as metaphor and imagery. Contextually, we had just come out of a few days of discussing Cold War conformity and domestic “containment,” as well as C. Wright Mills’ concept of the “happy robot” from White Collar. [Earlier this semester, students had also gotten up and left class during a conversation about orgasm and female sexual agency in Kate Chopin’s The Awakening.] I’ve been teaching about sexuality in my classes since 1998, usually integrating the issues of power and normativity surrounding sexuality into other topics (I haven’t had the chance to teach a course on culture and sexuality since 1998). I’m not bothered by student discomfort in the classroom; I am, however, troubled by the willful refusal to engage with the bothersome facts and ideas.
Like many scholars in that bleed-over space between humanities and social sciences, I see my role in the classroom as being more than a dispenser of information. As a sociologist (trained in interdisciplinary culture studies, American studies, and history as well), I see one of my primary pedagogical goals as being to help students develop their “critical thinking skills.” And many university-level courses, ranging from biology to women’s studies, from physics to art history, can challenge students’ learned perceptions, bringing habitual patterns of thinking and doing to light. As anyone who has ever taught knows, it can be very uncomfortable. But in terms of learning and pedagogy, I’m okay with students being uncomfortable with course content.
The phrase “critical thinking” is so banal as to be meaningless at this point, so I feel it bears some further explanation from my personal pedagogical approach. I do not mean a bland ability to spot someone’s politics in a newspaper article; nor do I necessarily mean the ability to scan a poem. Rather, what I mean is the ability to step outside one’s own experience and habits and see the social structures, power relationships, ideologies, and statuses that produced them and limit them on a day to day basis. To step outside oneself and engage C. Wright Mills’ sociological imagination is no mean trick, and takes practice, exposure, and modeling to fully blossom. And even then, it requires students to be in what cognitive scientists term “conscious problem solving” mode, which involves effort, deliberate and purposeful thinking, abstraction, and will to execute. [Of course, we must admit that a perfect abstraction away from the self is not possible, in my opinion; but that it is a worthy end-in-view that enables worthwhile humanistic research.]
To this end, it often requires a shaking experience of some kind for people to think outside themselves. This can be tricky, both because jolting people out of their comfort raises ethical questions and because they can resist the process. It should go without saying that heteronormativity structures and pervades a classroom. The students (and teachers) bring it with them and enact and reproduce it in the room, including all the privileges and powers that it bestows upon its adherents (if you’ll excuse the religious metaphor). There are many ways that a teacher can pedagogically lay these structures bare in the classroom, but my personality and teaching style tend toward the frank and brash intrusion of queerness into a course design overall or into a particular discussion. Over the years I’ve had mixed responses, including students calling me faggot in class and writing homophobic comments in my teaching evaluations. But generally speaking, from a pedagogical standpoint, my students learn to point out the systems of power, privilege, subordination, and oppression around genitals, bodies, desires, and pleasure.
But with “Howl,” male-male sex—specifically and explicitly, butt fucking and blow jobs—is central to the thing itself, and not a pedagogical choice. It is a queer production by a queer man at a time when deliberate shaming, ostracism, jailing, financial ruin, institutionalization, and suicide were the public face of homosexuality. It is, in its writhing litany of the pain and foreclosure of American society in the 50s, among other things, a responsum, an apology, a hagiography of the queer. Ginsberg’s lost generation are those
who howled on their knees in the subway and were dragged off the roof waving genitals and manuscripts
who let themselves be fucked in the ass by saintly motorcyclists, and screamed with joy,
who blew and were blown by those human seraphim, the sailors, caresses of Atlantic and Caribbean love,
who balled in the morning in the evenings in rose gardens and the grass of public parks and cemeteries scattering their semen freely to whomever come who may
The transposition of public gay sex reveling in its abjection with Jewish symbols of holiness (seraphim) and with Whitmanian symbols of “spiritual democracy” (semen and grass) challenges the reader (the student?) to demand difficult answers from the domestic containment of the 1950s, and indeed, from the pro-gay marriage campaigns of today.
By broaching these sexual topics and throwing them out there for students, I challenge them to see the ways that assumptions about desires and bodies create imbalances of power in the society they live in. And it makes them squirm. But in a world where heteronormativity blocks Ginsberg’s sanctification of gay male sex—not to mention Chopin’s longed-for full female sexual agency—they can always simply refuse it, turn their backs on it, and slam the door behind them. And there was really nothing I could do to protect the handful of gay students in the room from that rejection.
Last Saturday (31 March 2012), I presented the second paper to come out of my ex-Mormon project (1), entitled “‘It Felt Like a Cord Snapping’: Mormon Emotionality and Emotional Reframing among ex-Mormons.” The more I have been immersed in this second round of coding and analysis, the more excited I am about the findings. Emotionality is the range of beliefs and practices surrounding emotions, giving them intelligibility and inciting expression and social behavior; Mormon emotionality thoroughly shapes adherents’ life experience, making it necessary for them to reframe and transform their emotionality as they leave Mormonism.
To understand this transformation, I must begin with an analysis of the Mormon context or “situation”; to wit, the specific ways that Mormonism structures emotionality by making bodily sensations intelligible, naming them, and giving them social significance, i.e., turning feelings into Mormon emotions (2). I argue that Mormonism overestimates (3) emotions, locating all evaluation of morality and knowledge within the purview of emotionality. This in turn creates a personal practice of constant self-scanning for emotions, an emotional vigilance among adherents, as well as a social practice of using emotions to judge and constrain others’ behavior and well-being.
During the Q&A session, several people in the audience objected to my characterization of emotional vigilance as something uniquely Mormon. Three specific objections were given: one person argued that we all constantly scan ourselves for emotions, or rather, that such emotional awareness is constitutive of modern life; another objected that this kind of emotional scanning is similar to that of patients or clients in a mental-health, doctor-patient relationship; and a third person objected that this might be similar to any highly observant religious group with tight social bonds, making a comparison to Hasidic Judaism. In the moment, I responded quite simply that my intuition is that Mormon emotionality is indeed different; that as a microsociologist, I tend to see the specificities of groups. I also reflected back to the commenters that what I was hearing was that, whereas I had been focusing on the specific case, I may want to step back and see connections and overlaps with other forms of emotional practice coming from other social spheres or groups.
Although the uniqueness of emotional vigilance is a minor part of my overall argument, I took the feedback seriously and have spent the past day mulling the objections over. Upon reflection, I think I must stand by my argument that Mormon emotionality produces a particular kind of emotional vigilance that is emphatically Mormon in its make-up and meaning. Taking each of the three possibilities posed in the Q&A, I see parallels and similarities; but I still see major differences.
Probably the easiest case to consider is that of other religious communities. Having studied religiosity off and on for the past 10 years, I can say with relative confidence that Hasidic (4) emotionality is quite different from Mormon emotionality. Given that Hasidism views embodiment, sin, forgiveness, truth, joy, revelation, etc., so differently from Mormonism, it is not that much of a stretch to see that, even if Hasidism produced an emotional vigilance (which I don’t think it does, frankly), it would be of a radically different character, and serve different social ends, than that of Mormonism. Within a religious context, I think that perhaps the closest you might get would be evangelical Christianity, with its belief in the gifts of the spirit. But there as well you find dramatic differences in the experience of conversion (born again) and communion with the spirit, as well as the means to discern truth and morality, that would make its emotionality and any kind of emotional vigilance quite different from that of Mormonism.
The second objection, however, poses a more difficult case for me. Although the therapeutic relationship itself is clearly different from Mormon emotionality, there is a transformation of emotionality in both the process of therapy and the process of leaving Mormonism. My informants reported a significant amount of self-awareness of emotions vis-a-vis Mormonism, specifically in the two areas of truth (knowledge) and morality (good/evil). All of the informants discussed one or both of these areas; and most of them talked about the experiences of constantly watching themselves for the feelings that would be “emotionalized” in an acceptable way within Mormonism. Indeed, this self-surveillance was a key part of Mormon adherents’ lives; and consequently awareness of it is a key part of leaving Mormonism (at least for those who become unbelievers). Having only a limited knowledge of the literature about mental health clients/patients (Goffman and Foucault), my educated surmise is that the mental health process would provoke an intense self-scanning of the individual, one that requires, by its nature, a self-reflexivity, a consciousness, and a reframing (especially in cognitive therapy, where reframing is an explicit project), that is, creating a new emotionality. It would seem, then, that in a general sense, there are some parallels between a therapeutic emotionality and ex-Mormon emotionality.
There are, however, some key differences. The therapeutic relationship doesn’t produce a full cultural emotionality of itself; rather, it seeks to lay bare the patient’s existing emotionality, create a new, therapeutic practice of emotional vigilance and surveillance (sometimes even requiring a rigorous recording of emotions in “feeling journals”), and transform it into something deemed more “normal” or “healthy” with the help of the expert guide. As an alternative/new religious movement, Mormonism is a fully developed culture unto itself, including its emotionality, which children learn as part of their development and which converts buy into from their very first discussion with missionaries (5). Secondly, the therapeutic relationship is, in many ways, teleological, that is, it is moving toward a known end; whereas the process of leaving Mormonism has no emotional end known in advance to the apostates. In some ways, it occurs to me as I write this, the emotional transformation is an effect of the leaving Mormonism, rather than its explicit goal (although many of my informants reported that when they adhered to Mormonism, they had a longing to “feel better”).
The third objection that all of us in modern societies engage in a kind of emotional vigilance poses a different kind of sociological problem: namely, that Mormons are always embedded in larger social groups. My informants are nearly all North American (from the U.S. and Canada); and in all cases, Mormons are always to varying degrees bi-cultural, working to be “in the world, but not of it,” circulating in “gentile” contexts (sometimes more easily than others) but always with the knowledge of their “chosenness” and their responsibility to be a “light unto the world.” This means that the ex-Mormons in my study are also culturally identifiable as American and Canadian (and, not insignificantly, white).
There is a kind of emotional vigilance in late (post?) modern society, what Christopher Lasch has called the “therapeutic culture of consumerism” (6). In the post-Fordist, late-capitalist, consumerist world, Lasch’s theory posits a focus on narcissistic pleasure, a hedonism, that can be enacted through consumerism. This is the oft-criticized, but structurally embedded, practice of constantly evaluating one’s state of happiness and judging the phenomenal world based on one’s level of happiness (7). That is indeed a kind of emotional vigilance, one tied deeply to structures of capital, I might add. Further, stepping back from Lasch’s theory and thinking about a broad critical historical literature on the self-help movement that began in the early 19th century and really took off during the 1960s, I would definitely argue that the dominant culture’s emotionality includes a kind of emotional scanning. Indeed, the literature on emotions generally points to the fact that watching one’s emotions is a pretty human thing to do.
So the question that remains is whether or not the Mormon version of emotional vigilance is shared with that of the dominant culture, or if it is a specifically Mormon practice. The easiest answer I can give to this is that, for my informants who spoke about emotions after Mormonism, all of them reported a different, new emotionality, that is, a different emotional practice, characterized by ease and relief at not having to always be on emotional guard. All of them reported a reduction in the salience of emotions generally (by rejecting the Mormon meaning of emotions), and therefore, in my analysis, a reduction in their emotional vigilance.
But I want to go a step further here, and think about why the Mormon emotional vigilance is different. The baseline human emotionality (which is only a theoretical construct, given that there is no such thing as a human without a social context that builds a particular, located emotionality), I would argue, is at a lower frequency than Mormon emotionality, precisely because Mormonism overestimates emotions; or to say it differently, Mormonism attaches a hyper-salience to emotions that provokes a higher intensity of scanning and surveillance. As far as modernity is concerned, Mormons in North America clearly take part in the narcissistic consumer culture. I see no evidence that Mormons are any less consumers, on average, than other North Americans. That means that Mormons are, in certain contexts, seeking their own happiness through consumerist means. I would argue, however, that these two emotionalities co-exist within an adherent and that they overlap and complement as well as contradict each other. But the Mormon emotionality is its own distinct thing, arising in a specific context, with a different structure, meaning, salience, and set of emotional practices attached to it. Whereas I absolutely see Mormons participating in consumer emotionality, and I can see in my informants how the consumer “happiness” can bleed over into their Mormon emotionality (e.g., I feel happy and excited about buying that house; therefore, it must be god’s will that I buy the house!), I cannot see that they are the same thing, that the Mormon practice of constant, intense self-scrutiny of emotions in order to be able to categorize the phenomenal world into its Good/Evil and True/False schemas is the same as the dominant culture’s. To emphasize the point, my informants all described dramatically different emotionalities post-Mormonism.
So yes, emotional reframing occurs in leaving other religions; and in therapy. But the emotionalities of the religions themselves are different; and therapeutic relationships create a practice of vigilance while trying to reframe the emotionality of the “patient.” And yes, Mormons are also “normal” consumers. But I believe my data demonstrate that, among ex-Mormon unbelievers, their experience of emotional vigilance when they were adherents of Mormonism was quite different from their emotionalities post-Mormonism. To step back from my own very narrow research project, I think that there are some interesting implications here, namely, that undergoing a major cultural shift—joining or leaving a religion, migration, education, interracial marriage, etc.—is always accompanied, to some extent, by a transformation of emotionality.
(1) This is a grounded theory project that studies people who leave the Brighamite branch of Mormonism—the Church of Jesus Christ of Latter-day Saints in Salt Lake City, Utah—and become unbelievers to some degree, describing themselves somewhere in the atheist or agnostic camp. I collected interview data through semi-structured interviews. A first round of coding, followed by axial coding, rendered a theory of the psycho-social process people go through when leaving Mormonism; I tend to err in the direction of parsimony, so I only claim that the model works for people leaving Mormonism toward some level of unbelief, but I suspect that very similar processes might obtain in cases of anyone leaving an alternative/new religious movement. Building from that first round of coding, I discovered the centrality of emotions in the process, and went back to the data to try to explain Mormon and ex-Mormon emotionality.
(2) I’m currently working through where I stand in the theoretical literature about emotions themselves. In the near future, I will post to this blog a discussion of the debates within the field and where I might land. For the moment, I agree with Jonathan Turner that a sociology of emotions cannot ignore the mounting evidence that there is a core evolutionary and biological emotionality among our species. I rely heavily on Martha Nussbaum’s 2001 book, Upheavals of Thought (especially the first seven chapters), for an excellent synthesis of the philosophical, neurological, social-scientific, and psychological literature on emotions. To give an over-simplified gloss of my current working theory of emotion: 1) there are embodied, physiological sensations that arise, which can be involuntary or incited feelings; 2) the accumulated knowledge and experience of the individual, including their social and cultural position, enable categorization of the feeling, rules for how, when, where, and to whom to express that emotion, and the significance or salience of the feeling: Emotion; and 3) practices and social behaviors arise in conjunction with emotions, which can in turn (re)incite feelings (go back to 1).
(3) In the Freudian sense of over-valuing, or holding something in higher esteem than it should or might be held.
(4) The Hasidic movements within Judaism are themselves a rather diverse bunch, with multiple schools and rebbes and movements under the label Hasid, so it is actually likely that the Lubavitcher emotionality, for example, may be different from a Satmar emotionality.
(5) Missionaries are actually trained to push people toward emotional vigilance by constantly asking possible converts to scan their feelings for positive emotions, which the missionaries then identify as “the Holy Spirit,” and which they assign the significance of “revelation of truth.” Thus, converts are inculcated into the Mormon emotionality from the beginning, and actively and consciously trained in emotional vigilance.
(6) It is important, here, to distinguish this notion of therapy from that of the medicalized therapy I discussed earlier. They are related, but not the same thing.
(7) Over the past 10 years or so, positive psychologists have argued that this consumer version of happiness has actually short-circuited our ability to know what contentment is and to discern when life is good, because we expect (inappropriately) that happiness be a constant high-level feeling of joy and pleasure. See Jonathan Haidt’s The Happiness Hypothesis for an example of this kind of work.