
Longing for Knowledge in the ‘Efficient’ University

For those of us who teach and who love teaching, there are few things worse for a classroom than apathy. I even prefer anger and resentment over indifference — at least you know the students are engaged enough to be angry with you or your topic. As a new professor in the mid-Aughts, I took it as part of my job to inspire desire, to show students why a course or even just one day’s topic mattered. I came up with all kinds of dog-and-pony shows, games, and Socratic examinations to try to convince students that they should be interested in what I had to say. A decade later, I sat through a meeting where a senior colleague expounded on what an important part of our jobs it is, as university professors, to help students understand why what we are teaching is important. But I’ve begun to wonder if that’s really our “job.”

I began my life as a university professor with a strong feeling that teaching is a vocation, a calling of sorts, work worth doing. I also believed that I could change the world, one mind at a time. The naïve hubris of younger me makes me cringe just a bit now. I do love what I research and I love what I teach and in general (there are exceptions) I love my students. But a calling? Changing the world?

Max Weber attempted to explain the loss of cultural depth and meaningful lives in European society in the late 19th and early 20th centuries through a reexamination of the social structures of capitalism as he knew them. He famously argued that, contra Marx, capitalism had to be understood as something more than a social system imposed from the top down or even internalized ideology; for Weber, the meaninglessness of life under capitalism—what he called, roughly in English, the “disenchantment of everyday life”—came from a radical reduction of all human values into an efficiency calculus (which, ironically, he traced historically to values that arose from Protestantism, which was trying for its part to re-enchant the world). Culture had been, for Weber, rationalized, that is, reduced to the rational cost-benefit analysis of maximizing profits through the quantification of time and money and the use of science as the primary tool (and justification) for that rationalization. This made the functioning of not just the capitalist economy, but all of society’s bureaucracies, laws, and practices appear rational when they saved “time and money.” All other activities and values lose out. Thus have we built for ourselves, in Weber’s most famous of phrases, an iron cage of rationality, market rationality, from which we can’t escape. I have to wonder what he would’ve made of the intensity of rationalization in the neoliberal order of our time, orders of magnitude worse than what he was experiencing 100 years ago.

During that same period, education reformers in the U.S. were divided between those who thought that this kind of rationalization should be applied to schools, where it would increase the efficiency of education; and those who thought that such methods of organizing learning were actually a kind of miseducation, as John Dewey called it. Of course, as all of us who sat in rows doing exercises out of a pre-designed textbook to prepare for standardized tests know, the industrial model of education won out.

We now live in a world where every public dollar spent on education must be justified, rationalized as it were; where classroom content must be monitored from above through officious bureaucratic flows of power and control; where students are reduced to numbers. Our students come from a world of pinched opportunities and narrow goals, chasing a kind of mobility that the data reveal hasn’t existed in the U.S. since the early 1970s. K-12 education has become the prequel to college; and college has become job training; and at all stops along the way, students are widgets and teachers are “deliverers of goods.”

And the apathy in the classroom is astounding. After twenty years of teaching (the last ten of those as a full-time professor), I find the change palpable. I’m not really saying anything new or earth-shattering here, especially to people in education. But I’m wondering at a world where the wonder is more or less gone, the excited student a rare endangered animal you’re lucky to spot a few times a semester, a world where my job is to convince students they should care. Is that even possible?

In the Jewish tradition, which has a couple thousand years of history of teaching and learning, there is an old idea of the sacredness (the enchantment) of the teacher-student relationship. To simplify a bit, the old Talmudic idea was that when a student learns something new, it is as a revelation from God; so for the teacher, when the student learns something new, the student becomes the revelator, the conduit to the divine for the teacher. A Rabbi Shapira,* who lived in Poland during the Shoah, struggled with the role of a teacher in this equation, especially given some of the contradictions within the Talmud about the role and holiness of the teacher. Whereas the Talmud taught that a true teacher was like a messenger from God, Shapira could not reconcile this with the fact that teachers are just humans like all others. He taught that students should seek out the teachers who are, from time to time, not always, maybe not even often, but from time to time, like an angel.**

The gulf between an ancient, traditional way of seeing the teacher-student interaction and the rationalized, industrialized, neoliberalized one is wide and profound beyond words, on both sides of the relationship. In this old Jewish tradition, I see another perhaps more profound dynamic, one of desire. The student wants to learn, experiences the wonder of “revelation,” seeks out the teacher who can provide that experience, who is, from time to time, like a divine messenger despite being utterly ordinary and human most of the time. The teacher desires the revelation that can only come from the student, for it is the student who is learning, opening, changing.

I do not long for an old yeshiva, which was surely filled with as many bored and apathetic students as engaged ones. This isn’t for me about a nostalgia for a halcyon learning environment (although I do think the Socratic life of wandering around the countryside in togas talking to friends about philosophy sounds pretty awesome). Rather, it’s to highlight how far from a revelation our industrialized education model is. It still happens, of course. Every semester I still have a dwindling handful of students who are learning and want to learn, who have those enlightening moments of uncovering hidden knowledge, of newness in the mind. But I do see how both the larger culture of education in the U.S. (and perhaps the world), which treats education as a means to an end, a pecuniary end no less, and the actual structures of the university I teach in are reaching as far away from those revelatory moments as possible. I would even argue that the structure of the modern university in many ways prevents or forestalls even the possibility of a seeking student and a seeking, imperfect teacher finding each other to share a moment of revelation.

*His full name was Kalonymus Kalman Shapira; he and his congregation were deported to Auschwitz in late 1944. He and his writings are now known as the Esh Kodesh.

**See Avivah Gottlieb Zornberg, Bewilderments: Reflections on the Book of Numbers (New York: Schocken Books, 2015): 28-30.

Summer Loafing, or Russian Literature in Translation


One of the great gifts of being a teacher is that, when the school year ends, my attention turns from teaching to research, exploration, and stretching my brain a bit (this is a necessary refresh for all teachers). In neoliberal terms, I suppose it’s professional development. But I like the more 19th century way of thinking about “the life of the mind” and just taking time to think and be. This can also be a challenge, however, when there are tasks that need doing—especially writing and publishing tasks, for those of us whose jobs require it—but the freedom of a lazy morning coffee often moves me in a different direction, a kind of much-needed procrastination that can be both fruitful and a distraction.

Before breakfast, I reached for my Pevear & Volokhonsky translation of War and Peace, which I’ve been picking at for the past few weeks. As an undergraduate, I had read Anna Karenina and had adored it, so I was expecting to pick up Tolstoy’s supposed masterpiece and be drawn in and inspired. The Anna Karenina I had read as a 23-year-old was an edited edition of the Garnett translation (something I will admit I didn’t pay any attention to as an undergraduate). The Garnett translation is known for its beautiful prose style, but also for its inaccuracy—she apparently worked as quickly as possible, skipping over words and entire passages that she didn’t understand, to make deadlines. Kent and Berberova’s edit sought to maintain the beauty of Garnett’s translation while repairing her errors and filling in what she had cut. As my first experience with Russian literature during my undergraduate years, it was, to be a bit cliché about it, a revelation. But I can only describe War and Peace, so far, as a boring slog through wooden prose, instrumental at best, unreadable in its worst passages. I decided to give Tolstoy up and pick another novel for summer reading.

Coincidentally, as I ate my breakfast I stumbled on this fascinating comparison of Anna Karenina translations by Janet Malcolm, wherein she takes Pevear & Volokhonsky to task for their translation. And so I spent the morning wandering down an unexpected path, reading about the art of translation and comparing translations of War and Peace. I skimmed over Walter Benjamin’s essay “The Task of the Translator,” which I’d read in graduate school, and then Vladimir Nabokov’s scathing essay on translation, “The Art of Translation,” which I’d never seen before.

Because I stand in awe of Nabokov as a writer, I didn’t expect to disagree so strongly with some of his thoughts on translation. Reading a few people’s commentaries on his essay, it would seem he was more or less an unreconstructed Russophile (in the 19th century sense of the word, as in an ethnic chauvinist) and believed that the beauty and complexity of Russian was untranslatable. He concluded, then, that the job of the translator was more or less to transliterate and be as “accurate” as possible to the original. He was against rendering a translated work into beautiful prose in the target language at all, preferring what might be thought of as more a transliteration than a translation. There is a grain of truth here, in that he felt that the aesthetic preferences of the translator’s own context would distort the historical or contextual meaning of the original. Yet this is why so many people love Garnett’s (admittedly problematic) translation, in that her prose in English is both contextually close to Tolstoy’s Russian (late 19th century ruling class) and beautiful in English—which, of course, Nabokov despised. Because his own English prose is so shockingly gorgeous, and written more or less in the American dialect, I had not realized how much Nabokov sneered at English as a language, which he viewed as overly simple (the sheer number of cases and declensions in Russian is mind-boggling).

Perhaps hewing more closely to Nabokov’s notion of translation, Pevear & Volokhonsky’s War and Peace does an admirable job of attempting to reproduce Tolstoy’s explicit and purposefully “bad” Russian in English, but ends up, rather than conveying what Tolstoy was doing with his playful and purposeful stretching of the Russian, reading like a slavish transliteration without soul. Apparently, Tolstoy himself preferred the translation by the Maudes, with whom he’d actually become friends in the early 20th century. But the translation that people praise for its prose in English, even when they find fault with its translations, is the Garnett. Unlike Anna Karenina, Garnett’s translation of War and Peace has not been edited, and only exists in its original, flawed form. There is also a recent translation by Anthony Briggs, used now by Penguin Classics, which has won awards (a side-by-side look at some passages with the Pevear & Volokhonsky shows its skill with English). For the moment, I’m done with my attempt at summertime Russian literature, as I can’t decide which translation to pick—I’ll probably opt for some candy, like some epic sci-fi or a re-read of Game of Thrones.

This kind of intellectual wandering, “vagabondizing” as Melville might have said, is one of the reasons I love summer.

The Promise and Limitations of Culture [Book Review]


Terry Eagleton. Culture (New Haven, Conn.: Yale University Press, 2016).

This is a great little introduction to the history of the “cultural concept,” anchored in the 18th century effort to understand and critique rising modernity. As usual, Eagleton is witty and sharp-tongued, glosses things he doesn’t like or agree with a bit too facilely, and is at turns infuriatingly snarky. But that is one of the reasons we read Eagleton, no? Also as usual, Eagleton is a bull-headed critic of postmodernism, here especially in its conception of what he calls “relativism.” I had mixed feelings about this chapter. On one hand, my own thinking about culture has been heavily shaped by American Pragmatism (especially the classics, Peirce, James, Dewey, and Mead) and I simply do not accept the radical idealism (that reality is only ideas, to put it crassly) of poststructural and postmodern thought, which separates human knowledge and action from the pushback of the obdurate material world. So although I’m very much with postmodernism (a sweeping term if ever there was one) in its insistence on the contextualization of human knowledge, I’m more often than not unwilling to go all the way to the conclusions it draws about what that contextualism means. Nor am I willing to go with the postmodern phenomenologists all the way to human beings being more or less “black boxes” unknowable to each other. On the other hand, I’m a social scientist, and so a certain kind of relativism is a necessary methodological stance to be able to understand, as accurately as possible, people who are different from oneself in the most ethical way possible. Indeed, Eagleton later in the book builds this back into his critique of culture as Romantic nationalism and culture as colonialism—but he doesn’t call it relativism.

All that aside, three things make this short book worth reading. First is the way that Eagleton employs Irish post-colonial thinking throughout the book to illustrate the complicated, fraught, and violent relationships between culture and power (he sets up the 19th century concept of “civilization” as culture’s foil, which he defines as modernity & imperialism). This was my first sustained exposure to Irish critiques of the British empire and I found it refreshing and fascinating; honestly, it opened my eyes a bit to what might be some missing threads in Americanists’ histories & critiques of the Irish love and adoption of African American culture in the United States—the Irish seeing themselves as akin to African Americans, fellows ground under the heel of ongoing British colonial culture (albeit in the context of an independent, post-British, evolving American empire).

Secondly, Eagleton turns toward a set of scholars who are normally dismissed in the humanities and social sciences as conservatives — Burke, Herder, and Eliot, especially — and suggests that there might be more to their conceptions of culture than meets the eye, and there might be some things there that are necessary in our current, fragmented world. I have a value that I don’t exercise much, which is that all ideas can and should be confronted and considered seriously, including ideas I find repulsive, dangerous, or wrong-headed. But in practice, I have very little time or desire to actually do so. Here, Eagleton forced me to consider that what reads like deep conservatism in Burke might actually be radical anti-colonialism coming from an Irish thinker. I now feel like I have to revisit Burke more carefully. In all of these political thinkers, Eagleton rejects the potential for “romantic nationalism” in their theories of culture, but finds something important in the fact that they already in the late 18th century see cultural diversity as a normal and good thing, and that they believe that culture can be both a way to create solidarity and a means to talk about values. In other words, Eagleton does not merely accept these so-called conservative thinkers at face value, but critically engages them to suss out what is useful and true, while separating and rejecting what is potentially dangerous and even fascist. For Eagleton, ultimately the more common idea that culture is an entire way of life ends up being more or less useless on any level because such an all-encompassing concept offers no clear ground upon which to base either a scholarly inquiry or a necessary criticism. It is, simply, too damn big. On the other hand, the 19th century (imperialist) notion of culture as “elevating” and as the purview of the elite has its obvious inherent problems. Although it’s not exactly explicit in the book, it seems that Eagleton is arguing that we should “right-size” culture, bringing it down from its overly capacious “way of life” definition to an ongoing debate about values and individual and social development, while maintaining the warning that even Burke and Herder gave, which is that such debates are contextual and vary from group to group.

In the final chapter, Eagleton leads us to his conclusion on the “hubris of culture” (meaning the hubris and folly of the academic focus on ‘culture’), and my third reason why I think this book is worth reading. Here he makes an oft-repeated materialist critique of postmodern politics, one which I happen to agree with, so I found myself nodding along. Culture cannot save us, and has indeed already been coopted both by neoliberalism and modern empire. We live in a world where culture has been massified, and massified culture is available in a pseudo-democratic way to everyone, which gives the veneer of “freedom” and “democracy” to systems of power and exploitation that remain largely unchallenged and unchanged and, since the 1970s, largely unfettered by national regulations and checks on their power.

In a way, Eagleton is making an argument that we academics have been hoisted by our own petard, as we have fetishized “culture” for nearly 60 years in a way that has served the interests of global capitalism. From an American perspective, it would seem that our constant pattering about culture has led us to misidentify oppression as residing in the hearts of individual oppressors, in their feelings toward oppressed groups. The willingness to look at structures, even where it exists, seems to always get derailed by discussions of “hatred” and cultural difference. I’m in an odd position of having been educated in the 1980s and 90s, and so not able to completely unwind myself from the focus on culture (indeed, my own Ph.D. and professorship are sort of predicated on that entire movement within academia). However, I’ve also been deeply dissatisfied with the ability of “culture” to offer any real kind of hope for liberation or the end of oppression, or even more modestly, the ability of culture to even begin the fight. Eagleton argues that culture has the potential, but that its mass production since the late 19th century has transformed culture into a pseudo-democratic cover for those very systems of oppression, and that the humanities and social sciences have swallowed it whole.

Toward an ‘Unglorious Whiteness’ [Book Review]


Linda Martín Alcoff. The Future of Whiteness (Chelmsford, Mass.: Polity Press, 2015).

Whiteness, white identity, and therefore white people are in a historical moment of turmoil, according to Linda Martín Alcoff, brought on by various civil rights movements, demographic shifts in the U.S., and a broadening understanding of the unearned privileges of whiteness in American (and European) society. This moment of turmoil, a historical consequence of social and material changes on the ground of white people’s experiences, may provide an opportunity to reformulate and transform whiteness for a future of, to summarize a bit facilely, inter-racial egalitarianism. To support her normative claims, Alcoff engages social scientists, historians, and fellow philosophers to think her way forward for anti-racist politics in the U.S. (and, I would add, her arguments would be equally germane in post-colonial Europe). This interweaving of her philosophical argument with mounds of empirical evidence from across various scholarly disciplines empowers her bold ultimate conclusion:

“What would it mean for whites to become more positively embodied as white within a multipolar social landscape? Perhaps the critical element will involve coming to understand whiteness as a mere particular among other particulars, rather than as a universal that stands as the exemplar of civilization. … [L]iving whiteness mindfully as a particular would have a deflationary effect, and produce an opening to the possibility of learning more than leading. It would also place whiteness within a complex multiplier history involving varied relations with others, some of which became constitutive of whiteness itself.” Alcoff, The Future of Whiteness, 176-7.

To get to this point, Alcoff had to first break down what whiteness actually is, proposing a tripartite way to conceive of white identity specifically and social identities more broadly. Academic and political identity discourses often slide into a kind of consumer individualism, where personal experiences of subjectivity are paramount; and no matter how much the scholar or activist may protest otherwise, identity as a category of analysis seems to nearly always end up ignoring the social origins of identities and their fundamental sharedness. Alcoff, however, presents an account of identity that insists on its sociality and sharedness, while spanning the breadth between the collective and the individual by breaking it down into three interrelated but distinct parts: the empirical identity, which locates it within a group, with a particular social position, at a specific time and in a specific place; the imaginary identity, which includes the symbolic and representational formations, myths, narratives, images, and sounds that make up the imaginary surrounding our identities; and finally, the subjective identity, which begins with individual experiences, but delves deep into the ways that social identities constitute and shape subjectivity itself, including perception and response to the environment (see 74-90).

With this tripartite account of whiteness expounded, Alcoff is free to critique the historical formations of white exceptionalism in detail (Ch. 2). Whiteness has seen itself as exceptional in both senses of the word, as being better than all other kinds of humans and as not being subjected to the same rules or restrictions as all other kinds of humans. White exceptionalism consists, for Alcoff, of two related ideas: white supremacy and the myth of white vanguardism. White supremacy is relatively obvious at this point, in terms of the ways social power and privilege have flowed to whiteness over the past several hundred years of European domination of the globe. White vanguardism is an interesting break-out of one piece of white supremacy, the myth that white people are at the forefront of the production of culture, knowledge, art, ideas, of human life itself. This idea, often held unconsciously, gives support and confidence to white folks across class and ethnic lines in multiple kinds of situations. It’s the presumption of white cultural superiority that forms the habits of mind of whiteness, undergirding its social supremacy.

The book as a whole, though, is making a more substantial political and cultural claim, which I will boil down to two related sub-claims. First, she attempts a critical political corrective to the desire among whites to eliminate race generally, and whiteness specifically, as a social category; and second is her proposal for the transformation of whiteness into the future. The political goal to eliminate whiteness — which she terms eliminativism — has both left-wing and right-wing versions, with the right-wing version anchored in denialism and the left-wing version anchored in laudable anti-racist values but motivated unconsciously by a desire to escape the pain of the history and effects of whiteness. Alcoff justly dismisses and ignores right-wing denialism and focuses her attention throughout the book on anti-eliminativism — in short, the position that the goal should not be to eliminate, erase, eradicate, or destroy whiteness, but rather to transform it into something else.

Anti-eliminativism is based on, as far as I can tell, two somewhat problematic foundations. First are her somewhat willful oversimplifications of various scholars and disciplines she’s arguing against — especially Roediger, a labor historian, and Appiah, a philosopher of identity, race, and cosmopolitanism — including several awkward mischaracterizations of the massive body of social scientific work about whiteness and the motivations of anti-racist social scientists. That said, although these problems of scholarship crop up throughout the book, they really don’t detract from the strong throughline of Alcoff’s claims. The real problem with her anti-eliminativism comes toward the end of the book, where she lays out her three arguments against eliminating whiteness, arguments she aims at left scholars and activists. Her first two reasons are solid social scientific observations: 1) eliminativism misapprehends what social identities are, assuming they are merely discursive or mythological overlays, when in fact they are material conditions and accumulated habits of mind and behavior and perception; 2) the history of whiteness marks specific experiences of a variety of kinds of white people and impacts the way they respond emotionally to the world. But Alcoff goes a step further. She concludes from the social science that whiteness is itself a positive identity, constituting subjectivities at deep levels. Indeed, whiteness is baked into not just institutions, but people, and is not going away any time soon. This first line of support for anti-eliminativism, then, is more or less a pragmatic one: whiteness is here to stay, so we need a different approach to dealing with it.

Alcoff aims her third anti-eliminativist reason against the argument I would make, that whiteness as a social formation must be destroyed (or significantly reduced) in order to eliminate white supremacy and racism. Whiteness has been historically the center-pole of the global, colonial, western racist tent. It is, therefore, hard to see why it should be preserved in any form. That it is entrenched, that it forms subjectivities, that it channels emotional interactions is merely to state that whiteness is part of the way things are; but the status quo must never be its own justification. That it’s hard to eliminate is not convincing enough of an argument for why it should be preserved, even in a dramatically modified form. Unfortunately, Alcoff’s response to my (and others’) objection is to impute motivations to those who make it, that such white folks are simply suffering from white guilt and the desire to escape or transcend historical bads. This may or may not be true, but to attack the motivation for a claim is not the same thing as addressing the claim itself. I found her argument about white guilt insightful and moving, actually—and that she makes it through a critical reading of James Cameron’s movie Avatar is kinda brilliant. In fact, I don’t doubt that many of us (Alcoff included) who work or teach in anti-racist areas feel heavily the burden of whiteness’s violent and exploitive history. And with Alcoff herself, I would argue that such guilt can be turned to fruitful ends in fighting the anti-racist fight. And yet, blue aliens notwithstanding, Alcoff never provides a satisfying reason to abandon elimination of whiteness as a goal, other than its pragmatic impossibility.

Fortunately, what follows from anti-eliminativism is her second major sub-claim, a call for white people to develop their own kind of double consciousness. Whereas racism and white supremacy foisted double consciousness upon African Americans, as brilliantly theorized by W.E.B. Du Bois in The Souls of Black Folk (1903), Alcoff contends that white people must embrace the real turmoil of whiteness in our present historical moment and make of it a kind of white double consciousness. White double consciousness would be fundamentally different from the double consciousness Du Bois described, inasmuch as it comes from the history of white supremacy and the myth of white vanguardism. Alcoff’s white double consciousness would consist in, on one hand, an honest, open-hearted, detailed accounting for the history of white identity and white supremacy; that history must be held in consciousness in order for white folks to understand and be accountable for their unearned social privileges and to ensure racial justice in the future. The second consciousness must be a full-throttle effort to reimagine whiteness, to reconstitute it in a different, anti-racist way, to transform it into one among many identities rather than the controlling or universal identity. I found this normative prescription for a transformation of whiteness compelling, and was further swayed by some of her ideas, sprinkled throughout the book, for how to think about whiteness as an umbrella term for a wide diversity and breadth of American experiences (class, regional, religious, sexual, etc.) and to include and foreground within it centuries of white activist struggles for justice, racial and otherwise (e.g., Thomas Paine, abolitionism, Progressive-era social reforms, etc.).

Working from several sociological studies, Alcoff demonstrates that whiteness is already changing significantly and that the transformations she’s calling for may already be underway.

“In some cases, the turmoil in white subjectivity and embodied existence, and the incoherence of an alienated consciousness, produces a genuine disaffection from white supremacy, even if occurring in confused, inchoate form. But there is without a doubt a growing awareness about how whites are viewed by others as well as a significant decrease in white cultural domination.” (171)

Summarizing the work of Nell Painter, Alcoff lays out four trends in the current formation of whiteness in America: 1) Whiteness is not as salient nor as powerful as it used to be; 2) White privilege is more than ever mediated by other variables, such as class, immigration, sexuality, etc.; 3) White folks themselves no longer expect a white-exclusive living space or workplace; 4) Americans are creating many new kinds of racial categories, both official and popular, which undermine the coherence of whiteness (171-4). “I would note here,” she adds, “that these are trends and not, alas, consolidated achievements with universal scope, but they are significant nonetheless.”

This all gives some hope and optimism to her argument, making room for white people who are seeking to escape the history of whiteness to maybe unclench their fists from color-blindness and embrace a different way forward. This is but one of many ways that Alcoff takes whiteness seriously, but as a potential social good rather than a permanent marker of the past. Perhaps in ways that I’m not (yet) willing to do, she respects people for whom whiteness is a salient and meaningful identity category while also insisting that they must change its contents. “The solution will not be found,” Alcoff writes, “in a flaccid universal humanism, nor in a pursuit of white redemption, nor in a call to a race-transcendent vision of class struggle” (204). Creating for Euro-Americans an unglorious whiteness out of the history of slavery, Indian removal, Chinese exclusion, and the invasion of the Philippines holds out a kind of promise to help us white folks, as Alcoff concludes, “fac[e] the truth about who we are, how we got here, and then developing an offensive strategy for achieving a future in which we can all find a place.”

Musical Sounds, Meaning, & Streaming Media

Beyoncé advises me to “hold up” as I write this, her rap-singing above a series of spare staccato chords and a pointed bassline. I’m acutely aware of the politics of a middle-aged gay white man sitting in his underwear feeling the thrill of Miss B deep down, the interactions of gay black men’s—especially gay black men’s drag—cultures, black women’s culture, southern white men, and gay men of all colors. The arguments and accusations of appropriation swirl around in my head even as I’m pulled emotionally and, importantly, meaningfully into Lemonade for the umpteenth time over the last few months. I have no conclusions to draw at this point, other than to point out the intense interweaving of interactions and meaning, histories and identifications involved in my adoration of Beyoncé and Lemonade as well as its visual album.

My big writing project this summer is an attempt to theorize the relationship between music, sexual and gender difference, and queer communities—or more broadly, between musical sounds, social interaction, and meaning formation. (My thinking on the matter has been most profoundly shaped by Barry Shank’s The Political Force of Musical Beauty.) The relationships and interactions among meaning, community, habits of mind, cultural practices, politics, inequality, and power are hard enough to make sense of — add to that the ephemerality of musical sound, where the experience of beauty and pleasure is tied to a relatively short, time-bound experience of purposefully arranged sounds, and you have a maelstrom of complexity that is difficult to describe in any kind of orderly way. Much like a more permanent physical object — say, a rock — sounds, patterns of sound, arrangements of qualities of sounds have meanings only inasmuch as they are ascribed to them by a community of listeners. Also much like a rock, the meanings of those sounds change over time and, in an era of mass reproduction of recorded sound, will often have different meanings within different communities of participators (listeners, hearers, dancers). During my morning news read, I stumbled upon Dan Chiasson’s “All the Songs Are Now Yours,” his review of Every Song Ever by Ben Ratliff, where Chiasson considers the power of streaming technology for the musical connoisseur, raising intense questions about how meaning is ascribed in a context of virtually unlimited, decontextualized streaming of musical sound.

In his review, Chiasson describes the listener “freed from the anguish of choosing,” where search engines and “smart lists” choose for her what she will “like” or be interested in. “It is like an open-format video game, where you make the world by advancing through it,” he observes. And here is the rub. Although problematic in some ways, Marshall McLuhan’s observation that the medium is the message asks us to consider streaming services from the perspective of how the technology (the medium) changes us. Whereas Chiasson (and, I presume, Ratliff) are somewhat awestruck by the scope and scale of musical availability and seem to genuinely derive pleasure from endless “discovery” of music, I notice that streaming technology is changing (has already changed?) the social processes of ascribing musical sounds with meaning. In other words, as McLuhan urges us to take note, our own interactions and habits of listening have been modified to accommodate the technology. The medium has changed us.

Chiasson begins to get at this when he observes that streaming eliminates (or transforms) the experience of nostalgia for music of our childhoods, making all the songs and sounds available to us at the click of a link. Indeed, the former cultural process of winnowing through the less (culturally) important music to rest upon a handful that survive in collective memory has been eliminated, as now all musical sounds are continually available, perhaps forever. “[N]othing in the past is lost, which means temporal sequence itself … gets lost.”

Yet from my perspective as an interactionist, we must go further. The practice of streaming has broadened the impact of our always-on, always-available enwebbed culture — extolled as radically democratic, but increasingly revealed as radically consumerist — a process that began nearly 150 years ago, when recording technology was developed, enabling the mass production and distribution of musical sound and the private, home experience of listening to those sounds. Then radio emerged as a means to broadcast sound to a mass audience; and eventually individual listening devices (the Sony Walkman) enabled a further privatized listening practice, intensified by smartphones that put the entire opus of human musical sound potentially at our fingertips on our evening commutes. In other words, streaming can be seen as another stage in the movement of musical sound toward an individual experience rather than a communal one.

I do not wish to make a normative argument here, nor an evaluative one (although my suspicions of both will probably be discernible in my words). That is, I don’t know if this individualization of musical beauty and its reduction to personal taste is good or bad — but I think it safe to say that such a reduction has happened (although some people might view what I see as a “reduction” as a kind of freedom). And I am sure that it is changing (has already changed?) the ways that musical sounds can accrue meanings at all. Of course musical sounds still have meaning, created and ascribed interactionally and socially; but it seems in the context of streaming that the meanings of the sounds are greatly reduced to communities of taste (perhaps exclusively so) rather than political communities or cultural affinities. This is potentially and paradoxically a flattening of culture through its infinite diversification into a billion individual taste conglomerations.

Rather than musical sound gathering layer upon layer of meaning as various communities use, engage, and participate with it together, through live performance (or recordings at a rave or dancehall), dancing, drag shows, political rallies — streaming technology intensifies the possible dislocation of sound from both communities of origin and communities of participation, where meaning now coalesces and adheres to a single individual and her iPhone. (And what do we then make of silent discos?) This is musical sound potentially bereft of contexts of participation in communities, a musical beauty stripped of history and memory, left with the bare sounds and their associations within a single listener.

At best, such a reduced meaning seems to me a brutal formalism, where at base, the forms of the sounds themselves—tempo, timbre, pitch, etc.—are the only meanings left. What would Beyoncé’s “Freedom” mean to me without all of the gay black men I have known and known of, all the drag shows I’ve seen, RuPaul, or the gender queers of 1980s balls; or the history of black popular music, late 1960s rock, Jimi Hendrix, and black power? Would it still be a musical sound worth listening to?

My Wish for You, Graduates

When I heard that a few of my students had asked that I be the professor to deliver the faculty convocation address for our department graduation, I was surprised by how deeply grateful I felt. Sometimes in life, it’s the small, incidental acts, when another person reaches out and says something kind, that can transform a moment or, if we’re lucky, the way we see the world. And so I wanted to honor and thank those students for esteeming my thoughts a worthy way to mark their graduation—a great accomplishment and moment of transition in their lives—by putting to paper some of what I might have said to them in a graduation speech. The graduation speech is a venerable art form in American life, and I cannot hope to match the great ones; so this is just from my heart and mind right now, thinking about what matters to me most as a man and as a professor. I sent it to my students and post it here with all its imperfections in the hopes that they might find a nugget of wisdom for themselves at this liminal time in their lives.

My Wish for You

from Prof. Ormsbee, who has been honored to be your teacher

21 May 2016

In some ways, we have no choice but to speak from where we are at the moment we take a breath to speak. What I want to share with you today, then, comes unavoidably out of my current life moment as I face the other side of middle age, the disappointments of life, and my failures. Yet my thoughts also emerge from the past few years of seeing you nearly every week in the classroom, and watching you learn and change and grow as you struggle with new ideas, new perspectives, and facts unknown to you before—because in some ways, talking to students, reading their work, watching them transform gradually before my eyes is without a doubt one of my life’s greatest pleasures and among the accomplishments I am truly proud of. Above all, today I want you to know that I have no doubt of the potential of each of you to become Good, with a capital-G, in your lives and in the lives of others.

Of course, saying as much presumes that I know what being “good” even is, or in the old philosophical formulation, what it means to have a “good life,” or, as Socrates would have said, what “a life worth living” might be. Human values are always emergent; that is, they are always in the process of becoming as we argue and struggle with each other over what we actually value, what is worthy of our time and attention and strength. And so we have to work out together what it means to be Good. We must talk about it, reckon, try it out, fail and try again. In this way, much like calculating limits in calculus, we might throughout our lives approach goodness, maybe get closer and closer, so that when we are preparing for death or have died, people will look at our lives and think, “She was a good woman” or “He was a good man.”

I cannot sugarcoat this for you—having a conversation about goodness in our world can be almost ridiculous. Every generation has its own struggles and problems—I am not arguing that we are unique in that regard. But I am saying that we live in a global culture that makes being “good” and striving for “goodness” more and more difficult, if not laughable. We have hidden “goodness” away behind a fog of jobs and accumulation of consumer goods and celebrity; and within the clouds of always-on, always connected, iCulture. Indeed, there’s a way in which just talking about “being good” or having a “good life” evokes either mocking snickers or impassioned jeremiads about STEM education and job markets. Goodness seems to be “out of touch” with the real world, even hopelessly naïve. And yet, in the face of YouTube videos of beheadings, looming mass starvation in drought-stricken regions around the world, a melting polar ice cap, drone strikes and collateral damage, and staggering global poverty, how can we afford not to talk about what goodness might mean at this moment in history?

Any serious talk can be challenging in a world filled with constant noise and distraction demanding our attention. Some of these distractions are the very real, weighty matters of day-to-day life: How will I pay my rent this month? How do I feed my kids today after working a 10-hour shift? Will I be able to get enough sleep tonight? Other distractions are mere attention-seekers that want us not to think too deeply and indeed make profit from our daily desire to escape into shallow entertainments and mouse clicks. These are the flashy advertisements for useless plastic objects and shoes we don’t need; the Facebook updates about someone’s encounter with an ugly person in the dairy case at the grocer’s; the news headlines designed to eliminate complexity in favor of quick emotional payoff. In most ways, our human values have been narrowed and pruned down until all that is left is a quick efficiency calculation: how much money, and how much time. In this world of economic efficiency, who has the time or brain-power left to ask the questions that matter?

Even education has been reshaped to follow above all the efficiency calculus: starting in preschool, with reading programs pushed on children barely out of diapers, before their brains are developed enough for the task; elementary schools driven by standardized testing that treats children like widgets in a factory; high schools that are ranked based on the appearance of success, shown through graduation statistics and college acceptance letters; and universities that have shrunk students down into quantifiable “learning objectives” and rubrics to justify budgets and demonstrate, you guessed it, efficiency. We have slowly created a university system aimed at producing what C. Wright Mills, a mid-twentieth century social theorist, called the “happy robots” of the American labor force, who obey, acquiesce, and work without questioning.

In this moment in history, then, a bachelor’s degree in the Humanities can be a rare gift, as it affords us a few years out of life to ponder and ask the questions that really matter, the questions that help us determine what it means to be “Good” and what a life worth living would look like. Of course, using your bachelor’s degree as a time to think about Big Questions is not what the current economic system demands, and indeed is what many of the bureaucrats, budgeteers, and reformers at SJSU usually think of as a waste of time and money because it cannot be measured or quantified and does not “add value” to your “marketability” as a potential employee.

I’m not going to indulge here in a defense of the Humanities as a course of study—you can find those online if you search hard enough, wading through all of the anti-education noise. Rather, I want to ask you to consider your time in the Humanities department in what may be a different light. Your time here has not merely given you skills you need to be teachers and workers; this was not merely a few years out of your life to check off requirement boxes on your way to jobs, credential programs, marriage, and children. No. In our time together, sometimes explicitly, sometimes in the background, we have been asking what it means to be Good and what it means to lead a Good life, a life worth living. These four or five years of your life have hopefully introduced you to ideas, thoughts, and perspectives that will serve for you as a life-long foundation to build your life’s Goodness upon.

When the Puritans came to North America in the 17th century, one of their dreams was to found a Good society. The last five U.S. presidents have all misquoted and misused John Winthrop’s now-famous “city on a hill” sermon as a paean to American exceptionalism, the essential superiority of the United States as a nation. But Winthrop’s speech actually hoped, from a particular Christian perspective, for a society of mutual care and responsibility to each other, a profoundly good society. About a hundred and fifty years later, Thomas Jefferson imagined a society of radical equality, where each person mattered, each person was in a way “chosen” and valued and endowed with rights. For Jefferson, the Good Life was “the pursuit of happiness.” In some ways, the Puritan focus on mutual obligation stands in contradiction to Jefferson’s individualism (and many of you know how I personally feel about Americans’ specialsnowflakery). But when you put the two together, they may offer a valuable rethinking of goodness that is not merely the pleasure or Jefferson’s happiness of the individual, but includes social responsibility to the people around us. In this way, goodness can be considered an act of balancing individual happiness with society’s interdependence, and by extension an act of choosing ethical life with other people.

Whereas colonial Americans may offer us a possible ideal of social and individual goodness, ancient Chinese philosophy may contain some practical ways to actually become good people. Inspired by philosopher Martha Nussbaum’s insistence that a worthwhile Humanities education must include cultures beyond the traditional “West,” I started reading Chinese philosophy last fall in my free time. Although I’m new to Chinese philosophy, and we have an expert in Prof. Jochim in our midst, I will attempt to do two thinkers a small bit of justice. Confucius taught that one does not perform religious ritual because there are real supernatural beings or powers out there in the universe. Rather, we perform ritual because the ritual act itself ties us to the past, to each other, and to the future, such that we are attempting to enact a world that should be, but is not. From there, he taught that we can create small rituals that we perform in order to become what we should be. Humans, in this thinking, are works in progress, not innately good or essentially “complete,” but imperfect beings who can and should strive to be better. A later Chinese philosopher called Mencius saw the world as an unpredictable and possibly dangerous place that we, as humans, have no choice but to live in. For Mencius, goodness is a quality that must be cultivated within our hearts and minds, much as one would care for a garden. We prepare ourselves for goodness by becoming good ourselves. This can only happen through care and attention. Clearly, I have only begun my journey into ancient Chinese philosophy from the Axial Age, but just this bare introduction has transformed the way I think about how I live my life, and the kind of man I might become in creating life rituals and cultivating my mind and heart in order to be good.

Many students know some of my own religious history, as sometimes I share pieces of it in class, how I was raised in a strict, very conservative minority Christian sect. That upbringing did much damage to me as a gay child and teenager, damage that I still must contend with to this day. Yet I have chosen to retain one habit of mind from my childhood religious experience, one that values holiness. As many of you may know from taking Dr. Rycenga’s or Dr. George’s religious studies courses, most traditional versions of monotheism teach that holiness is “out there,” transcendent, from God, an experience that happens to us, or an outside presence that we encounter. On the other hand, in many mystical traditions and in some religious traditions from India, holiness is an emanation that interpenetrates the entire world, and that we all participate in. Finally, in modern, post-Enlightenment Judaism, holiness has come to mean something that we do, something we create through our actions in the world. In this view, humans are themselves creators, as through their work in the world they make the world holy—humans have the ability, the potential, the responsibility to repair what is broken in the world, tikkun olam, by making it holy. I realize that many of us are not religious at all—I myself am, at most, agnostic. And others of us have very strong commitments to specific religions. My hope is that wherever you fall along the religious-spiritual-secular spectrum, you can imagine and consider just for a moment this idea, that we are the sources of holiness, that we through our own actions make the world worth living in, beautiful, holy.

In each of these traditions—American, Chinese, Jewish—the history of the ideas presents us with profound contradictions and disturbing realities. The Puritans despised and excluded all who were not of their specific Calvinist branch of Christianity from their community. Thomas Jefferson had a decades-long “affair” with a slave who was his own wife’s half-sister, a relationship that can only be thought of as some form of rape, and left his own children in slavery upon his death. America at large is the story of the displacement of Native Americans; the enslavement of Africans; the theft of Mexico, Puerto Rico, Hawaii, and the Philippines; 200 years of indentured servitude, misogyny, homophobia, and nativism. What use are American ideals of goodness? Chinese imperial history is not immune from this kind of critique either, as over time it expanded and contracted over vast geographical regions, imposing a particular form of Han culture on peoples throughout the continent. And modern Jewish ethics has at its core a struggle between the particularism of being Jewish and the necessity to think ethically about all humans in a more universalist way. No conversation about the value of goodness is beyond a reflexive critique of the society and people who created it. And yet, that is what humans do: They create value ideals and fail to live up to them.

But for me personally, this whole conversation can feel beside the point, either too historical or too philosophical and abstract. Sometimes life guts us without warning, and as Martha Nussbaum reminds us, at such moments in our lives, the emotions can overtake our consciousness, creating profound “upheavals of thought.” These are the moments that make philosophy, poetry, art, history, social science seem useless, while at the same time insisting on our deep need for philosophy, poetry, art, and history to help us make sense of all this. For me, I didn’t really understand what an upheaval of thought could be until one of my best friends was killed on Flight 93 on 9/11. I still don’t know how exactly to describe the shock and breathless grief that took me that morning, as I was frantically calling people to make sure that he was not on one of those flights (I knew that he was scheduled to be back in San Francisco that day). I sat in a bar in a hotel in downtown San Francisco a few days later for his wake, and still in shock, all I could see was that everything he was, everything he could have been, everything he might have done was gone, done, cut out of the fabric of the universe. My Aunt Karen passed away yesterday morning from an aggressive cancer that had already metastasized throughout her body, undiagnosed because they thought her pain was from post-polio syndrome, a lifelong condition suffered by survivors of polio. When I was very small, Aunt Karen was one of the few people in my extended family from whom I always felt deep, unconditional, profound, and unquestioning love and acceptance. And now her life has ended, over in five days of unimaginable, unavoidable pain.

What can goodness then really be in the face of widespread historical injustice and our everyday, personal, even common suffering and pain? This spring, some of us encountered together for the first time George Eliot’s profound contemplation on life and love, Middlemarch. Because our class’s focus was elsewhere, we didn’t really explore in any depth the real heart of Eliot’s novel, her insistence on our connection and duty to each other as humans. “If we had a keen vision and feeling of all ordinary human life,” she writes, “it would be like hearing the grass grow and the squirrel’s heart beat, and we should die of that roar which lies on the other side of silence.” Eliot believed that part of being human lies in the unavoidable consequences of our actions, the ways that we, most often unintentionally, affect the people around us. Sometimes our deeds don’t fully manifest in other people’s lives for weeks or even years. But that intense interconnection, if we paid attention to it, might be overwhelming. Prof. Riley, whom many of you know, uses the phrase “openhearted” to talk about how to deal with this reality. We must not run away from nor ignore either historical injustices or the private suffering of the people around us. Goodness and a life worth living lie in somehow encountering the world and our fellows with an open heart and willing hands, all while minding our own well-being and happiness. Although we cannot right all the world’s wrongs, nor alleviate all the suffering, I must believe that it’s better to spend our lives working to make the world good, knowing the task’s enormity and potential impossibility, than to give in to greed, tribalism, consumerism, or endless swiping through and ranking headless bodies on our iPhones. We can commit to being present in our lives for each other, to make the world and a life worth living.

As each of you has touched my life in ways that I cannot identify or quantify using the crass, inhumane efficiency calculus, I want you to know that I am changed for having known you. I am honored to have been part of your lives for a short time in this limited way as your teacher. As you take this significant step, all bedecked in your bedazzled mortar boards, I cannot hope for a life without suffering for you, simply because that is not possible. And I will not hope for a life of wealth or success for you, as I’m no longer convinced that is what makes a life worth living. Rather, from this moment in my life and with an open heart, I thank you for your generosity in sharing a piece of yourselves with me for the past few years by wishing for you the strength, wisdom, love, and commitment necessary to make goodness with your lives.

Some thoughts on Israel and the Political Center

Some friends of mine have been talking about the recent upheaval in Tel Aviv. I suppose upheaval is a nice way to say race riots of the kind the U.S. used to have on a regular basis, in which white folks intimidated people of color by marching in the streets, destroying their property, burning torches (and crosses), and from time to time lynching someone. I’ll avoid the comparisons of the Tel Aviv riots with the Nazis and Kristallnacht, as they are obvious and painful. On a good day, the difficulties of talking about Israel within a Jewish context are legion, and one always risks being labeled an anti-semite for criticizing Israel or being labeled an imperialist for supporting Israel. The rhetoric is heated, divisive, and in my opinion counter-productive. That said, I’m going to dare to dip my toes into the turbulent waters to talk about a particular trend that I see in current conversations about Israel, especially among younger people: the desire to somehow split the difference between the “two sides” of the issue, the assumption that there is an “extreme” right and an “extreme” left when it comes to Israel, and so the correct or best answer must lie somewhere in between the two “extremes.”

Without having an actual person or article to argue against, I want in this blog just to hold up for examination the (admittedly decontextualized) idea that taking a middle position between two political positions, i.e., the political “center,” is not only possible, but indeed the most desirable political position. The language of the “center” that I have heard invoked often over the past several years (e.g., in analyses of Obama’s presidency, in discussions about the economic collapse, even about torture policies) poses some serious problems for me, both in terms of the ethics involved and in terms of its political efficacy to solve our collective problems. Just on the surface, I would observe that sometimes the center is the desirable position; but sometimes it is not. Sometimes the center is the best choice; sometimes it is the most dangerous. I remain unconvinced that the center position (whatever that might be) is going to be the best possible answer to the ethical problems of the state of Israel vis-a-vis the Palestinians or to its internal inequalities, with its castes of partial and incomplete citizenship for Arabs, refugees, converts, the civilly married and mixed marriages, etc.

To begin, I disagree with the valorization of the center as such. Arguments for the value of the center per se fail, for me, in that they make of the center a political end-in-itself, as somehow superior to any other political position. This makes the center into a kind of received or pre-approved morality or politics, excused from the burden of vetting its own values or policy positions. Empirically, the political “center” only exists relative to the political range of its social and historical context, so the “center” is a moving target as you jump from place to place and through history. Such a moving target cannot be said to be the best position, then, in any particular case, any more than a right or left position can be taken as the de facto best answer to any grounded political problem. The center, moreover, is in fact just as ideological as any other political position. But the ideology of the center is often more insidious, because the center more often than not favors the status quo and resists change; the status quo is often the “hidden” ideology, that is, the ideology invisible to itself because it is the ideology of the habitual, the already-is. Choosing the center, then, doesn’t free you from any of the pitfalls of the so-called “extremes”; rather, it places the onus of critique upon the center to justify itself in terms other than the status quo. This tendency toward the status quo is not inevitable, but looking, for example, at the last three years of Obama’s presidency, it seems true of what currently counts as the center in the U.S.

The problem for me comes not in the center as a position, but when the center is valued as an end-in-itself, when the center is the default political position, valorized for its very character of being “between” or “in the middle”; here the center itself has become what is valued, rather than the contents of the specific political position. The center risks becoming in practice a means to avoid having the hard political and value conversations of a particular issue or within a specific context; it is seen as a position of wisdom, as if slicing the political baby in half to find the middle will automatically give one the best policy or value position.

While a careful consideration of a full range of political values and positions is the sine qua non of a healthy democracy, the normative claim that we should always end up in the center—a claim often facilely made in television “debates” about policy (or by my students in their papers)—in practice actually forestalls a careful evaluation of the range of values and positions, since we already know normatively that the center is the “correct” ending place of our evaluation. Such a normative claim for the center both forecloses the democratic process of deliberatively arriving at best policies—we need to be able to evaluate the range of possibilities and pick the best one, not simply the “center” one—and in effect often defaults to favoring as little disruption to the status quo as possible.

I think of middle-class white folks during the mid-century American Black Equality (Civil Rights) movement, for example, who as a group argued for blacks to stop protesting, use the courts, be patient and wait for change to happen. The primary value of the center is to make transitions and policies the least discomfiting and disruptive for those whose lives are relatively stable and good in the status quo. They weren’t against black folks having equal treatment under the law; but they also weren’t in favor of black folks demanding equality. So they, the sages of the middle, sought the answer somewhere between Jim Crow and Black Equality. We only have to look to the early 19th century “gradualists” to see how that middle way would have worked out.

Another problem with the valorization of the center is that it relies on positing a left and right that are somehow equivalent, both equally valuable and equally problematic. But empirically, they are not. Both left and right represent a range of values, practices, and policies that can be evaluated in terms of ethics and efficacy. The strategies and tactics of various political movements and ideologies (wherever they fall on an already problematic dichotomous left-right scale) differentiate them fundamentally, and they can and should be evaluated accordingly; both the real and possible outcomes of policies (a consequentialist perspective, I suppose) as well as the ethical implications of specific practices and policies must be the focus of our judgment. The most intellectually dubious normative move is to judge a political position based on how closely it aligns with the center.

In the case of Israel, it is not the left clamoring for exclusion of non-Jews, gerim, and Reform American Jews (let alone refugees). It is not the left with a stranglehold on religious institutions ranging from marriage to adoption to education to military service. It is not the left building settlements while dissembling to the public and the world. I see a right wing that has dominated Israeli politics since at least the early 1980s. I do not see a left in power or a left dominating political discourse in Israel (let alone American discourses about Israel, which are dominated by a “love it or leave it, or be called an anti-semite” ideology). Nor do I even see a unified vision for the state of Israel on the left; rather I see a range of possible values and policies to the left of the status quo—and likewise to the right. So arguing that the two sides are equal and equally dangerous doesn’t make sense on the ground. And arguing for a “center” makes no sense in that context.

To be clear, I am not saying that the left is innocent or necessarily desirable. Far from it. Rather, I am saying that it is not the same as the right, in morality, outcomes, or practices. And I am saying the “middle way” wants to have its cake and eat it too, while pretending that it is itself non-ideological, a peacemaker (if you’ll excuse the King James Christian word), the “wise” position that sees Israel more clearly than the “extremes”—when in fact, as I said above, it is just as ideological as any other political position. In the case of Israel, the so-called moderate center has the dubious distinction of having enabled, for nearly 30 years, increasingly far-right control of both politics and religion within Israel, the continued expansion of settlements, the continued exclusion of non-Jews from the state, etc.

In both the U.S. and Israel, the “center” position more or less amounts to, as I said above, favoring the status quo while refusing responsibility for the consequences of the “centrist” political positions. In the real world, that could amount to the potential end of the state of Israel, in my opinion, for continuing on its present course will inevitably result in the demise of the state, or at the very least its final decline into the unapologetic colonial oppressor and exploiter that its Arab enemies have been accusing it of being for the past 60 years.

Fear of Queer Sex in the Classroom

Recently when discussing Allen Ginsberg’s “Howl” and “Footnote to Howl” (here’s Ginsberg reciting) in an American culture course, students got up and left the class when the conversation turned to how Ginsberg uses gay sex as metaphor and imagery. Contextually, we had just come out of a few days of discussing Cold War conformity and domestic “containment,” as well as C. Wright Mills’ concept of the “happy robot” from White Collar. [Earlier this semester, students had also gotten up and left class during a conversation about orgasm and female sexual agency in Kate Chopin’s The Awakening.] I’ve been teaching about sexuality in my classes since 1998, usually integrating the issues of power and normativity surrounding sexuality into other topics (I haven’t had the chance to teach a course on culture and sexuality since 1998). I’m not bothered by student discomfort in the classroom; I am, however, troubled by the willful refusal to engage with bothersome facts and ideas.

Like many scholars in that bleed-over space between the humanities and social sciences, I see my role in the classroom as being more than a dispenser of information. As a sociologist (trained in interdisciplinary culture studies, American studies, and history as well), I see one of my primary pedagogical goals as being to help students develop their “critical thinking skills.” And many university-level courses, ranging from biology to women’s studies, from physics to art history, can challenge students’ learned perceptions, bringing habitual patterns of thinking and doing to light. As anyone who has ever taught knows, it can be very uncomfortable. But in terms of learning and pedagogy, I’m okay with students being uncomfortable with course content.

The phrase “critical thinking” is so banal as to be meaningless at this point, so I feel it bears some further explanation from my personal pedagogical approach. I do not mean a bland ability to spot someone’s politics in a newspaper article; nor do I necessarily mean the ability to scan a poem. Rather, what I mean is the ability to step outside one’s own experience and habits and see the social structures, power relationships, ideologies, and statuses that produced them and limit them on a day to day basis. To step outside oneself and engage C. Wright Mills’ sociological imagination is no mean trick, and takes practice, exposure, and modeling to fully blossom. And even then, it requires students to be in what cognitive scientists term “conscious problem solving” mode, which involves effort, deliberate and purposeful thinking, abstraction, and will to execute. [Of course, we must admit that a perfect abstraction away from the self is not possible, in my opinion; but that it is a worthy end-in-view that enables worthwhile humanistic research.]

To this end, it often requires a shaking experience of some kind for people to think outside themselves. This can be tricky, both because jolting people out of their comfort can raise ethical questions and because they can resist the process. It should go without saying that heteronormativity structures and pervades a classroom. The students (and teachers) bring it with them and enact and reproduce it in the room, including all the privileges and powers that it bestows upon its adherents (if you’ll excuse the religious metaphor). There are many ways that a teacher can pedagogically lay these structures bare in the classroom, but my personality and teaching style tend toward the frank and the brash intrusion of queerness into a course design overall or into a particular discussion. Over the years I’ve had mixed responses, including students calling me faggot in class and writing homophobic comments in my teaching evaluations. But generally speaking, from a pedagogical standpoint, my students learn to point out the systems of power, privilege, subordination, and oppression around genitals, bodies, desires, and pleasure.

But with “Howl,” male-male sex—specifically and explicitly, butt fucking and blow jobs—is central to the thing itself, and not a pedagogical choice. It is a queer production by a queer man at a time when deliberate shaming, ostracism, jailing, financial ruin, institutionalization, and suicide were the public face of homosexuality. It is in its writhing litany of the pain and foreclosure of American society in the 50s, among other things, a responsa, an apology, a hagiography to the queer. Ginsberg’s lost generation are those

who howled on their knees in the subway and were dragged off the roof waving genitals and manuscripts

who let themselves be fucked in the ass by saintly motorcyclists, and screamed with joy,

who blew and were blown by those human seraphim, the sailors, caresses of Atlantic and Caribbean love,

who balled in the morning in the evenings in rose gardens and the grass of public parks and cemeteries scattering their semen freely to whomever come who may

The transposition of public gay sex reveling in its abjection with Jewish symbols of holiness (seraphim) and with Whitmanian symbols of “spiritual democracy” (semen and grass) challenges the reader (the student?) to demand difficult answers from the domestic containment of the 1950s, and indeed, from the pro-gay marriage campaigns of today.

By broaching these sexual topics and throwing them out there for students, I challenge them to see the ways that assumptions about desires and bodies create imbalances of power in the society they live in. And it makes them squirm. But in a world where heteronormativity blocks Ginsberg’s sanctification of gay male sex—not to mention Chopin’s longed-for full female sexual agency—they can always simply refuse it, turn their backs on it, and slam the door behind them. And there was really nothing I could do to protect the handful of gay students in the room from that rejection.

Mormon Emotionality Compared to Other Constructions of Emotions

Last Saturday (31 March 2012), I presented the second paper to come out of my ex-Mormon project (1), entitled “‘It Felt Like a Cord Snapping’: Mormon Emotionality and Emotional Reframing among ex-Mormons.” The more I have been immersed in this second round of coding and analysis, the more excited I am about the findings. Emotionality is the range of beliefs and practices surrounding emotions, which gives them intelligibility and incites their expression and attendant social behavior; Mormon emotionality thoroughly shapes adherents’ life experience, making it necessary for them to reframe and transform their emotionality as they leave Mormonism.

To understand this transformation, I must begin with an analysis of the Mormon context or “situation”; to wit, the specific ways that Mormonism structures emotionality by making bodily sensations intelligible, naming them, and giving them social significance, i.e., turning feelings into Mormon emotions (2). I argue that Mormonism overestimates (3) emotions, locating all evaluation of morality and knowledge within the purview of emotionality. This in turn creates a personal practice of constant self-scanning for emotions, an emotional vigilance among adherents, as well as a social practice of using emotions to judge and constrain others’ behavior and well-being.

During the Q&A session, several people in the audience objected to my characterization of emotional vigilance as something uniquely Mormon. Three specific objections were given: one person argued that we all constantly scan ourselves for emotions, or rather, that such emotional awareness is constitutive of modern life; another objected that this kind of emotional scanning is similar to that of patients or clients in a mental-health or doctor-patient relationship; and a third person objected that this might be similar to any highly observant religious group with tight social bonds, making a comparison to Hasidic Judaism. In the moment, I responded quite simply that my intuition is that Mormon emotionality is indeed different; that as a microsociologist, I tend to see the specificities of groups. I also reflected back to the commenters that what I was hearing is that, whereas I had been focusing on the specific case, I may want to step back and see connections and overlaps with other forms of emotional practice coming from other social spheres or groups.

Although the uniqueness of emotional vigilance is a minor part of my overall argument, I took the feedback seriously and have spent the past day mulling the objections over. Upon reflection, I think I must stand by my argument that Mormon emotionality produces a particular kind of emotional vigilance that is emphatically Mormon in its make-up and meaning. Taking each of the three possibilities posed in the Q&A, I see parallels and similarities; but I still see major differences.

Probably the easiest case to consider is that of other religious communities. Having studied religiosity off and on for the past 10 years, I can say with relative confidence that Hasidic (4) emotionality is quite different from Mormon emotionality. Given that Hasidism views embodiment, sin, forgiveness, truth, joy, revelation, etc., so differently from Mormonism, it is not that much of a stretch to see that, even if Hasidism produced an emotional vigilance (which I don’t think it does, frankly), it would be of a radically different character, and serve different social ends, than that of Mormonism. Within a religious context, I think that perhaps the closest you might get would be evangelical Christianity, with its belief in the gifts of the spirit. But there as well you find dramatic differences in the experience of conversion (being born again) and communion with the spirit, as well as in the means to discern truth and morality, that would make its emotionality and any kind of emotional vigilance quite different from that of Mormonism.

The second objection, however, poses a more difficult case for me. Although the therapeutic relationship itself is clearly different from Mormon emotionality, there is a transformation of emotionality in both the process of therapy and the process of leaving Mormonism. My informants reported a significant amount of self-awareness of emotions vis-a-vis Mormonism, specifically in the two areas of truth (knowledge) and morality (good/evil). All of the informants discussed one or both of these areas; and most of them talked about the experience of constantly watching themselves for the feelings that would be “emotionalized” in an acceptable way within Mormonism. Indeed, this self-surveillance was a key part of Mormon adherents’ lives; and consequently awareness of it is a key part of leaving Mormonism (at least for those who become unbelievers). Having only a limited knowledge of the literature about mental health clients/patients (Goffman and Foucault), my educated surmise is that the mental health process would provoke an intense self-scanning of the individual, one that requires, by its nature, a self-reflexivity, a consciousness, and a reframing (especially in cognitive therapy, where reframing is an explicit project), that is, the creation of a new emotionality. It would seem, then, that in a general sense, there are some parallels between a therapeutic emotionality and ex-Mormon emotionality.

There are, however, some key differences. The therapeutic relationship doesn’t produce a full cultural emotionality of itself; rather, it seeks to lay bare the patient’s existing emotionality, create a new, therapeutic practice of emotional vigilance and surveillance (sometimes even requiring a rigorous recording of emotions in “feeling journals”), and transform it into something deemed more “normal” or “healthy” with the help of the expert guide. As an alternative/new religious movement, Mormonism is a fully developed culture unto itself, including its emotionality, which children learn as part of their development and which converts buy into from their very first discussion with missionaries (5). Secondly, the therapeutic relationship is, in many ways, teleological, that is, it is moving toward a known end; whereas the process of leaving Mormonism has no emotional end known in advance to the apostates. In some ways, it occurs to me as I write this, the emotional transformation is an effect of the leaving Mormonism, rather than its explicit goal (although many of my informants reported that when they adhered to Mormonism, they had a longing to “feel better”).

The third objection, that all of us in modern societies engage in a kind of emotional vigilance, poses a different kind of sociological problem: namely, that Mormons are always embedded in larger social groups. My informants are nearly all North American (from the U.S. and Canada); and in all cases, Mormons are always to varying degrees bi-cultural, working to be “in the world, but not of it,” circulating in “gentile” contexts (sometimes more easily than others) but always with the knowledge of their “chosenness” and their responsibility to be a “light unto the world.” This means that the ex-Mormons in my study are also culturally identifiable as American and Canadian (and, not insignificantly, white).

There is a kind of emotional vigilance of late (post?) modern society, what Christopher Lasch has called the “therapeutic culture of consumerism” (6). In the post-Fordist, late-capitalist, consumerist world, Lasch’s theory posits a focus on narcissistic pleasure, a hedonism that can be enacted through consumerism. This is the oft-criticized, but structurally embedded, practice of constantly evaluating one’s state of happiness and judging the phenomenal world based on one’s level of happiness (7). That is indeed a kind of emotional vigilance, one tied deeply to structures of capital, I might add. Further, stepping back from Lasch’s theory and thinking about a broad critical historical literature on the self-help movement that began in the early 19th century and really took off during the 1960s, I definitely would argue that the dominant culture’s emotionality includes a kind of emotional scanning. Indeed, the literature on emotions generally points to the fact that it’s a pretty human thing to do to watch one’s emotions.

So the question that remains is whether or not the Mormon version of emotional vigilance is shared with that of the dominant culture, or if it is a specifically Mormon practice. The easiest answer I can give is that, for my informants who spoke about emotions after Mormonism, all of them reported a different, new emotionality, that is, a different emotional practice, characterized by ease and relief at not having to always be on emotional guard. All of them reported a reduction in the salience of emotions generally (by rejecting the Mormon meaning of emotions), and therefore, in my analysis, a reduction in their emotional vigilance.

But I want to go a step further here, and think about why the Mormon emotional vigilance is different. The baseline human emotionality (which is only a theoretical construct, given that there is no such thing as a human without a social context that builds a particular, located emotionality), I would argue, is at a lower frequency than Mormon emotionality, precisely because Mormonism overestimates emotions; or to say it differently, Mormonism attaches a hyper-salience to emotions that provokes a higher intensity of scanning and surveillance. As far as modernity is concerned, Mormons in North America clearly take part in the narcissistic consumer culture. I see no evidence that Mormons are any less consumers, on average, than other North Americans. That means that Mormons are, in certain contexts, seeking their own happiness through consumerist means. I would argue, however, that these two emotionalities co-exist within an adherent and that they overlap and complement as well as contradict each other. But the Mormon emotionality is its own distinct thing, arising in a specific context, with a different structure, meaning, salience, and set of emotional practices attached to it. Whereas I absolutely see Mormons participating in consumer emotionality, and I can see in my informants how the consumer “happiness” can bleed over into their Mormon emotionality (e.g., I feel happy and excited about buying that house; therefore, it must be god’s will that I buy the house!), I cannot see that they are the same thing, that the Mormon practice of constant, intense self-scrutiny of emotions in order to be able to categorize the phenomenal world into its Good/Evil and True/False schemas is the same as the dominant culture’s. To emphasize the point, my informants all described dramatically different emotionalities post-Mormonism.

So yes, emotional reframing occurs in leaving other religions, and in therapy. But the emotionalities of the religions themselves are different; and therapeutic relationships create a practice of vigilance while trying to reframe the emotionality of the “patient.” And yes, Mormons are also “normal” consumers. But I believe my data demonstrate that, among ex-Mormon unbelievers, their experience of emotional vigilance when they were adherents of Mormonism was quite different from their emotionalities post-Mormonism. To step back from my own very narrow research project, I think that there are some interesting implications here, namely, that undergoing a major cultural shift—joining or leaving a religion, migration, education, interracial marriage, etc.—is always accompanied, to some extent, by a transformation of emotionality.

Notes

(1) This is a grounded theory project that studies people who leave the Brighamite branch of Mormonism—the Church of Jesus Christ of Latter-day Saints in Salt Lake City, Utah—and become unbelievers to some degree, describing themselves somewhere in the atheist or agnostic camp. I collected data through semi-structured interviews. A first round of coding, followed by axial coding, rendered a theory of the psycho-social process people go through when leaving Mormonism; I tend to err in the direction of parsimony, so I only claim that the model works for people leaving Mormonism toward some level of unbelief, but I suspect that very similar processes might obtain for anyone leaving an alternative/new religious movement. Building from that first round of coding, I discovered the centrality of emotions in the process, and went back to the data to try to explain Mormon and ex-Mormon emotionality.

(2) I’m currently working through where I stand in the theoretical literature about emotions themselves. In the near future, I will post to this blog a discussion of the debates within the field and where I might land. For the moment, I agree with Jonathan Turner that a sociology of emotions cannot ignore the mounting evidence that there is a core evolutionary and biological emotionality among our species. I rely heavily on Martha Nussbaum’s 2001 book, Upheavals of Thought (especially the first seven chapters), for an excellent synthesis of the philosophical, neurological, social-scientific, and psychological literature on emotions. To give an over-simplified gloss of my current working theory of emotion: 1) embodied, physiological sensations arise, which can be involuntary or incited: feelings; 2) the accumulated knowledge and experience of the individual, including their social and cultural position, enable the categorization of the feeling, rules for how, when, where, and to whom to express it, and the significance or salience of the feeling: Emotion; and 3) practices and social behaviors arise in conjunction with emotions, which can in turn (re)incite feelings (go back to 1).

(3) In the Freudian sense of over-valuing, or holding something in higher esteem than it should or might be held.

(4) The Hasidic movements within Judaism are themselves a rather diverse bunch, with multiple schools and rebbes and movements under the label Hasid, so it is actually likely that, for example, a Lubavitcher emotionality may be different from a Satmar emotionality.

(5) Missionaries are actually trained to push people toward emotional vigilance by constantly asking possible converts to scan their feelings for positive emotions, which the missionaries then identify as “the Holy Spirit,” and which they assign the significance of “revelation of truth.” Thus, converts are inculcated into the Mormon emotionality from the beginning, and actively and consciously trained in emotional vigilance.

(6) It is important, here, to distinguish this notion of therapy from the medicalized therapy I discussed earlier. They are related, but not the same thing.

(7) Over the past 10 years or so, positive psychologists have argued that this consumer version of happiness has actually short-circuited our ability to know what contentment is and to discern when life is good, because we expect (inappropriately) that happiness be a constant high-level feeling of joy and pleasure. See Jonathan Haidt’s The Happiness Hypothesis for an example of this kind of work.

Growing Up Black and Male in America

In the wake of Trayvon Martin’s murder, Melissa Harris-Perry hosted some young black men to talk about what it’s like to live in a society that sees them as criminals and threatens them with harassment and violence. Normally I would have an extensive commentary to offer. But in this case, I think it best to let these young black men speak for themselves.

Growing Up Black [and Male] in America
