Terence Halliday
Why it is vitally needed despite its flaws.
Few global organizations generate more controversy than the World Bank. For several weeks in April and May of this year it was impossible to open the world’s marquee newspapers without confronting headlines and editorials on Paul Wolfowitz’s fight to remain at the helm of the Bank. Why should The China Daily or Kenya’s Daily Nation feature stories about palace wars on H Street in Washington? What warranted intensive politicking by finance ministers in Africa and Continental Europe, or veiled warnings from heads of state about the consequences if Wolfowitz hung on to office? Why the delight in remote corners of the world when Wolfowitz surrendered to the clamor for his removal?
Imperial Nature: The World Bank and Struggles for Social Justice in the Age of Globalization (Yale Agrarian Studies Series)
Michael Goldman (Author)
Yale University Press
384 pages
$55.00
The World's Banker: A Story of Failed States, Financial Crises, and the Wealth and Poverty of Nations
Sebastian Mallaby (Author)
Penguin Press
480 pages
$17.94
The Bank matters to rich countries because they disproportionately finance its programs. It commands the attention of leaders in developing countries because its interventions are variously hated and loved for their intrusion into domestic politics and policy priorities. Situated a few blocks from the White House, the Bank extended loans in excess of $22 billion to more than 100 poor countries in 2004-2005. Its investments in the private sector of developing countries exceeded an additional $5 billion.
The Wolfowitz controversy personified the debate over a global institution with such massive stakes in the developing world. In Wolfowitz critics saw much of what they abhor about the institution itself. An architect of the Iraq War seemingly exemplified the bullying ways of an international organization widely resented for heavy-handedness. A man who made anti-corruption the watchword for his draconian measures was allegedly corruptible himself. A leader who sought to export democracy to the Middle East failed to heed the opinions of his own experienced experts.
Wolfowitz thus became a lightning rod for bigger issues. World leaders critical of U.S. billions spent on an ill-conceived invasion of Iraq found it difficult to accept Wolfowitz’s credentials as a peacemaker. Europeans saw his vulnerability to criticism as an occasion to weaken the U.S. grip on World Bank leadership. Developing nations grumbled more loudly about the lack of transparency and democracy in Bank decisions, not least the appointment of its leader.
All this contrasts sharply with the Bank’s reading of itself. By its own account, the Bank’s billions all flow to the same cause—working for a world where hundreds of millions of people might aspire to “a future without poverty, disease, and illiteracy.” The Bank sees itself as an instrument of hope in a world beset by tragedy, inequality, and disaster. “Every week,” states its 2005 Annual Report, “10,000 women in the developing world die giving birth, and 200,000 children under age five die of disease. More than 8,000 people die every day from AIDS-related conditions, and 2 million people will die of AIDS this year in Africa alone. As many as 115 million children in developing countries are not in school.”
To arrest and reverse these gross injustices, the Bank commits itself to the Millennium Development Goals of “eradicating extreme poverty and hunger, achieving universal primary education,” promoting gender equality, and empowering women. It seeks to reduce mortality among children under five by two-thirds by 2015. In addition to drastically reducing maternal death rates, the Bank supports combating diseases such as malaria and HIV/AIDS and sharply increasing access to safe drinking water for millions. Not least, it affirms the Millennium goal of a reformed global trading system, especially in its impact on the world’s poorest countries.
A scan of the Bank’s reports and its programs reveals an extraordinarily eclectic range of activity in every corner of the globe: getting girls into school in Egypt and poor children to school in the Kyrgyz Republic; combating TB in Africa and malaria in Eritrea; managing forests in Southeast Asia and fisheries along rural coastlines; rushing emergency teams to Indonesia, the Maldives, and Sri Lanka after the tsunami of 2004 and rebuilding strife-torn Central Africa; killing agricultural pests in Central Africa and developing the garment industry in Cambodia; pushing for court reform in the Philippines and building the power grid in the Dominican Republic; reforming housing in Mexico and building roads in Poland.
Not least, the Bank now combats the money laundering that could fuel terrorism. It demands transparency in governments and offers transparency for itself. Millions of dollars are committed to reducing government corruption and to building civil society groups. The World Bank Institute trains aid workers. The Bank teams up not only with governments but with the World Wildlife Fund and the Scout Movement for children, with Conservation International and the UN’s Programme on HIV/AIDS.
How can this panoply of good works be read as anything other than a commitment to justice or a panacea for disease, disaster, and despair? Michael Goldman’s Imperial Nature and Sebastian Mallaby’s The World’s Banker present alternative readings, neither concordant with the Bank’s self-portrait but both resonant of tones in the Wolfowitz controversy.
Dissenting most strongly from the Bank’s view of itself, sociologist Goldman subtitles his book “The World Bank and Struggles for Social Justice in the Age of Globalization,” but he quickly avows that justice, if it is to be found, will not come from the Bank. For decades, major infrastructure projects were at the heart of Bank programs. Dams, electrical grids, ports, and highways have a tangible character that appeals to Wall Street bankers, national leaders, and Bank economists alike. Managing water has a special appeal. If water can be contained and manipulated, nature’s “waste” can be harnessed so that farmers can irrigate fields, manufacturers obtain electricity, urban dwellers enjoy clean water, and residents along river banks are protected from flooding. Investment, agricultural production, environmental control, health benefits—water projects promise them all.
Goldman had some reason to doubt these promises before he began his principal case study, the Nam Theun 2 Dam project in the Mekong River basin in Laos. In the Thar desert of northwest India, the World Bank had invested in a massive irrigation canal system to pour Himalayan water into a harsh, arid environment. Wealthy landowners turned land along the main arteries of the canals into export-producing farms. But along the minor arteries water disappeared. Systematic theft of cement left canals too porous to deliver water. In other places channels were sand-clogged, and adjacent land was waterlogged and salinated. Driven into debt, poor farmers were forced off their land and into part-time labor, indentured servitude, or sharecropping.
Further south, in 1990, thousands of villagers set off on a “long march” to protest their forced resettlement to make way for a dam in the Narmada Valley. Local authorities were not impressed. Protesters did, however, disturb the Bank, which faced disconcerting international publicity about the refusal of its officials to heed hydrological and engineering reports warning that drinking water would not get to where it was promised, entire fisheries could be lost, and irrigation schemes would fail. As a result, the Bank pulled out. Shortly after he became President of the Bank in 1995, James Wolfensohn canceled another large dam project in Nepal, fearful that growing protests would spur another “Narmada effect.”
Heeding these setbacks, the Bank learned a lesson: solicit environmental impact statements before committing to any new infrastructure project. The massive Mekong River projects became a litmus test of the Bank’s response to its critics. Situated on a tributary of the Mekong in Laos, the Nam Theun 2 Dam project promised far more than electricity alone. The dam would generate hydroelectricity for energy-hungry Thailand while earning hard currency for Laotians. Around this centerpiece the Bank planned linked projects—experimental farms, sustainable logging, eco-tourism, harvesting the forest, and protecting wildlife. More expansively, the Nam Theun project and its cluster of other “green” initiatives would require “new regimes of law, regulation and management,” the restructuring of government agencies, and the readjustment of the national budget, all enabled by the Bank as the lead lender.
No fewer than three waves of environmental studies followed. The first, by the Australian consulting firm Snowy Mountains Engineering, offered a positive appraisal, but critical NGOs pointed to serious flaws in the report and to a conflict of interest the firm had with the contractors for the dam. A second evaluation, by a Thai consulting firm long connected to the Bank, met a similar fate when international activists mobilized to contest its findings and auspices. In 1995, the Bank tried again, this time with two long-standing consulting firms, one German, one American, but also, unusually, with two international NGOs, the World Conservation Union and CARE International. Consultants headed into the jungle to interview remote tribes. Ichthyologists carried out fishery impact studies on the Mekong. Anthropologists evaluated the impact of resettling ethnic peoples.
On closer examination, Goldman finds that the illusion of neutral evaluation belies reality. The quality of appraisals fell far below scientific standards. Consultants were given impossibly tight timetables to elicit complex information about the impact on the Nakai Plateau tribespeople or on fisheries below the dam. Terms of reference excluded pertinent issues. Inconvenient data about resettlement were suppressed by the World Conservation Union, and overly independent consultants were fired. More significantly, consultants regularly yielded to pressure from the government of Laos, contracting firms, and the Bank to give them what they wanted. Meanwhile, as the reports accumulated, forests were cleared for the dam reservoir, project financing was put in place, and government agencies were retooled for resettlement and construction.
In constructing markets and reshaping the environment, the Bank has become arguably the most potent agent of state reconstruction in the world. In Laos, says Goldman, the “greening of Laos” required the building of an “environmental state.” Ministries thrived if they had a hand in the giant infrastructure projects; they shrank if they served only health, education, or welfare. The Bank gave the ruling Pathet Lao a superb alibi to continue its “ethnocidal ‘Laocization policy,’” in which some 900,000 non-Lao-speaking peoples would be forcibly resettled and “Laocized.” At least fifty foreign bilateral and multilateral agencies, governments, and banks hover over every decision of government, influencing its laws, investments, and policies. The Laotian state, Goldman implies, is more responsive to “transnational capital” and global experts than to its own citizens.
The Mekong projects signify far more than the deep intrusiveness of the Bank into a country’s affairs. For Goldman, the big story concerns not the Bank’s financial capital, which is massive enough, but its intellectual capital. The Bank’s fundamental power is not of the purse but of the pen. With its unparalleled staff of professional experts, overwhelmingly economists, and its hydra-like capacity to absorb consultants and experts the world over, the Bank has crafted an ideology that not only controls the prevailing worldview of development but limits the questions that can legitimately be asked. In the environmental field the Bank has created a “green neo-liberalism” that fuses the financial neoliberalism of privatization, downsizing the state, and free enterprise with a “liberal society agenda of social justice and environmentally sustainable social justice.” This fusion produces “a frame of mind, a cultural dynamic, an entrepreneurial personality type, and a rule of law” that is virtually impossible to escape. The power of the pen controls minds, forecloses options, excludes questions, predisposes answers. In green neoliberalism, “neoliberalism” eclipses “green.”
To the argument that international civil society presents a counterpoint to the Bank, Goldman warns us to look again. On the much-contested issue of water privatization, where the United States and the Bank have urged, even pressured, governments to cede everyday responsibility for water supply to private corporations, a global consensus emerged quickly. Investigation reveals that the consensus arose not from below, from concerned citizens and environmental groups, but from a combination of the Bank’s interventions from above and, predominantly, from private sector groups closely aligned with the companies that offer these services. Not surprisingly, their diagnosis was that the public sector had failed, privatization was the only answer, and the Bank and the IMF could ensure this by making privatization a condition of their loans to poor countries. The results? When the poor did not get the better services they were promised, or costs rose beyond their ability to pay, a strong backlash broke out in many countries. Eventually public protests led to the redrawing of contracts and, in some cases, the withdrawal of private firms.
Ultimately, says Goldman, it is naïve to expect the Bank to be environmentally responsible, let alone an agent of social justice. The Bank answers to the institutions of global finance, to the interests of small consulting firms, major contractors, cash-starved political élites, international bankers, Wall Street, and its largest stockholders—the world’s richest nations. Look not to the Bank for sympathy or understanding for the poor, the voiceless, the excluded and marginal. Look through the Bank’s rhetoric to its actions. Weigh them against its predations—and that will constitute the measure of the Bank’s commitment to fighting poverty and advancing social justice and democracy.
For Sebastian Mallaby, this reading, through the lens of water politics, is much too monochromatic. The World’s Banker does not disagree that the Bank has fallen short on any number of criteria, not least former Bank president Robert McNamara’s goal of defeating global poverty by the end of the 20th century. The Bank’s structural adjustment policies, whereby it has demanded stringent financial belt-tightening in already impoverished countries, have triggered riots across Africa and Latin America, eliciting almost universal loathing. The earlier refusal of the Bank to consider debt relief for desperately poor countries meant that vastly more of their national budgets were spent on loan repayments than on social services. Oxfam pointed out to Wolfensohn in the mid-1990s that Uganda spent $2.50 per citizen on health annually, compared to $30 per citizen on debt repayment. Like many other multilateral organizations, the Bank was woefully slow to recognize and combat the catastrophic effects of the AIDS pandemic.
Bank officials often had themselves to blame for negative publicity. A hubris born of the power to alter the fates of nations breeds arrogance. Bank recipes for success came and went—in one period, infrastructure, in another, investment in people, and then back to infrastructure again. Theories of development seemed like one passing fad after another—physical capital in the 1940s and 1950s, human capital in the 1960s, social capital in the 1990s. One moment the priority might be poverty, then macroeconomic structural adjustment. Too often the allegations of critics had been correct: the Bank forged pacts between its own bureaucrats and national autocrats.
Bank policy and decision-making were dominated by the nations that paid in the most capital—its richest investors. Their views of global priorities—and national interests—were imposed on hapless countries on the periphery. The processes of approving projects had too often been too slow or too fast, too ready to ignore reports that didn’t agree with Bank priorities. Evaluation as well was difficult at best and controversial at worst. The best-laid schemes went awry. “Clinics got built, but there were no medicines to put in them. Roads were constructed, but were later not maintained.” Corruption siphoned off 10 percent or 20 percent or even 30 percent of loan moneys, as was alleged in Indonesia. No wonder NGOs mobilized outside the Bank’s annual meeting in 1994, chanting “Fifty Years is Enough.” The “overweening confidence” of these global mandarins was matched only by their “manifest incompetence.”
Despite all this, Mallaby does not settle for simplistic, moralistic judgments. In fact, the Bank could rightly point to notable achievements. When a U.S. specialist on Indonesia alleged that 20-30 percent of Bank moneys were going astray there, the Bank could retort that, thanks to its interventions, Indonesian poverty rates had dropped from 60 percent in 1966 to 11 percent in the mid-1990s. When a hostile Bush Treasury Secretary asserted that the Bank hadn’t made any impact on world poverty, it fired back with a powerful rejoinder: since 1960, life expectancy in poor countries had risen from 45 to 64; since 1970, world illiteracy had fallen from 47 percent to 25 percent; since 1980, the world population had increased by 1.6 billion but poverty had fallen by a net 200 million. For all these advances the Bank could reasonably claim some credit, although parsing out its precise contribution would always be contested.
It is true that the Bank—not alone among institutions or countries—took a long time to respond to the AIDS pandemic. But when it awoke, around 1999, its influence was decisive in several places, not least India, where a pact with the government saved millions of lives. Until Wolfensohn’s presidency, the Bank steadfastly resisted calls for relief from the crippling debt held by the most impoverished countries. But once Wolfensohn heard and heeded the cry, the force of Bank leadership propelled international financial institutions and the G-7 into one and then another wave of debt forgiveness—although even some IFI officials question whether the programs were well conceived or executed.
Critics have no difficulty finding cases up and down Africa where the Bank’s loans have disappeared into a sinkhole of waste, corruption, and monument building. But look at the story of Uganda. Ravaged by Idi Amin’s brutal dictatorship in the 1970s, and later by Obote’s one-party rule, Uganda fell towards the bottom of the world’s poverty league. In 1986 a coup brought Yoweri Museveni to power. Museveni joined forces with the Bank to stimulate economic development, and together they lifted average income by 40 percent in a decade. In the eight years from 1992 to 2000, poverty dropped from 56 percent of the population to 35 percent. Thanks in large part to Museveni’s sophisticated finance minister, Uganda created a poverty eradication action plan across the broad front of economic, health, and education policies. In a campaign against corruption, the Bank pressed the government to rationalize a budget that would be transparent to the public, allowing even civil society groups to monitor expenditures on social services. A $155 million Bank loan for universal free primary school education doubled enrollment in elementary schools.
The Bank could also be a peacemaker. The Bank’s willingness to move fast and effectively to rebuild Bosnia helped to complete a peace agreement in an atmosphere of recrimination and suspicion. At the Dayton negotiations it showed Serbs, Croats, and Bosnians that concessions on their part would capture the imagination of a world community ready to springload reconstruction. As a multilateral institution, the Bank was able to coordinate donor nations and international organizations in a comprehensive effort to reconstruct the financial system, write a new constitution, and rebuild schools and medical clinics, among much else. The Bank could act as a “neutral” broker, not obviously allied with any of the adversaries.
Similar refrains can be heard in other domains. Yes, many infrastructure projects—roads, dams, electricity grids—had delivered far less than they promised. But the Bank could learn. Building up river banks in Bangladesh flood zones protected villagers. Building roads brought markets closer. Building an oil pipeline in Chad offered a sliver of hope for state-building revenue. Fighting poverty, said Wolfensohn after 9/11, fights terror.
Goldman and Mallaby agree that Wall Street and the U.S. administration of the moment have a tremendous influence on World Bank priorities and resources. But beyond that their accounts diverge. If there is an immense disjunction between the Bank’s mission and what it delivers, how is this to be explained? Goldman blames an amalgam of global capital, a triumphalist U.S. attitude riding on the back of an ideology created from neoclassical economics, and the coincident interests of Bank development specialists, contractors, consultants, and some Third World élites. Eliminating poverty is less the real goal than a rhetoric to justify rivers of money and to maintain a hegemony of experts in Washington and the Bank’s field offices. Ironically, Mallaby, the financial journalist, looks instead to the sociology of the Bank for an explanation. For Mallaby the Bank concentrates enormous resources that can be mobilized against “the greatest outrage of our times—the persistence of extreme poverty.” Yet even if its leadership were willing—and this has not always been the case—the Bank itself is riven with contradictions that would subvert the best-intentioned organization.
On any project at any moment, the Bank can find itself caught in a vise of contradictory expectations. Compare three often mutually colliding constituencies. Shareholders, most notably the rich nations of the North, advance their own agendas, which twist and turn as their administrations change (cf. the Clinton Administration’s priorities on education and the Bush Administration’s initial skepticism of the Bank) and as global realpolitik dictates. 9/11 redirected substantial new resources to combat terrorism through anti-money laundering campaigns.
Clients, developing countries, too often present no easy options to the Bank. The price of Indonesia’s sharp reduction in poverty seemed to be tolerance of rampant corruption, a siphoning off of World Bank moneys into bottomless private pockets. Behind the Uganda success story lies an authoritarian ruler, Museveni, who tolerated an election in 2006 by throwing his primary opponent into jail. And who are the Bank’s true clients? The government officials and their retinues of consultants, the regional administrators who “tax” the World Bank disbursements to line their pockets, or the rural poor and slum-dwellers, struggling for a daily meal or a modicum of dignity? And if the last, how to find out what they truly need? And once that need is identified, how to get money into the hands of those who most need it?
And then there are the activists. On this Goldman’s and Mallaby’s readings of the Bank concur. There is a naïve view that NGOs are the parties of virtue in the face of Bank and donor vice. Goldman maintains that some of the most reputable environmental groups—the Wildlife Conservation Society and the Worldwide Fund for Nature, for example—were co-opted by the Bank in the Mekong projects. Mallaby ranges farther and wider, on the one hand applauding Wolfensohn for reaching out to NGOs, but on the other castigating the “Stalinist” fringe that smashed windows and burnt vehicles at the Seattle annual meeting of the Bank and Fund. The Bank story is replete with episodes where tiny groups representing unknown constituencies were given more credence than governments, where NGOs reacted in knee-jerk opposition to Bank projects without doing their homework. Goldman shows that international NGOs with innocuous names are often fronts for economically self-interested industries that scramble for lucrative Bank contracts. Heeding civil society is a welcome practice for the Bank, but without a prudent understanding of who NGOs represent and how much credence they should be given, Bank decision-makers do well to be wary. And putting international NGOs before the interests of citizens in borrowing countries requires special caution.
If this three-way squeeze were not enough, the Bank itself is a tribal society where those at headquarters clash with those in the field, where long-term technocrats resent the transient politicians brought in to lead them, where the prevailing ideology of a certain economics clashes with the perspectives of anthropologists and other social scientists. Managing directors from Bank investors breathe down the necks of managers. And highly paid officials who travel business class, stay in five-star hotels, and obtain benefits seldom available in their countries of origin will fight fiercely to stay in their offices and protect their small fiefdoms.
Add to this mass of contradictions the impossibility of its mandate, as politicians and NGOs unload onto the Bank every intractable problem, it seems, that the world doesn’t know how—or lacks the resolve—to solve. We can affirm with former World Bank President Robert McNamara that “the extremes of privilege and deprivation [in our world] are simply no longer acceptable.” Who will disagree with recent World Bank President Wolfensohn that “poverty alleviation is the single most important problem” for the world? But as we saw in Lyndon Johnson’s bold vision to eliminate poverty in American cities, a noble mission and political will are no guarantee that money and expertise can deliver.
Everything must be resolved at once since everything is interdependent. But comprehensive plans look too much like lack of focus. Too focused, and programs die from lack of context; too comprehensive, and energies and resources dissipate. In short, the complexity of the Bank’s mission demands that critics and champions alike be modest enough to admit that economic or development theory—or, for that matter, any social science theory—is inadequate at present to provide definitive programs. And unwitting adoption of prevalent ideologies surely is no answer.
In the Bank’s case Goldman and Mallaby adduce enough evidence to remind us that institutions, like individuals, may be corrupted—by major powers using the institution for national benefit, by local political leaders using it to maintain their grip on power, by NGOs that make shrill claims to rise over the clamor of competing organizations, by bureaucrats more concerned with personal ambition than mission attainment, by the organizational imperatives for staff to show “results,” however flawed their outcomes, by scholars intent on preserving their disciplinary ascendancy.
The answer is not despair. Dismantling the Bank hardly seems sensible, although constantly asking how its money might be better channeled seems wise, if nothing else to keep it accountable to citizens as well as to our leaders. Wolfensohn’s tenure shows the Bank can be changed, if not easily. Even so, it behooves us to subject this enormously powerful institution constantly to precisely the kinds of searching critique offered by Goldman and Mallaby, to listen to discordant voices, to confront directly the contrasting visions sketched by Mallaby: a World Bank that partners primarily with Northern NGOs and governments, versus a Bank that keeps its mission focused on “the least of these,” the poor countries it is charged to help. Let the Bank continue with its experimentation, but demand that it listen to alternatives, engage in self-critique, and most of all, show evidence that indeed it is creating a world “free of poverty.” We would all do well to heed the still-bracing words of World Bank President George D. Woods (1963-1968): “the plight of developing peoples—two-thirds of humanity who are striving to cross the threshold of modernization—is the central drama of our times.”
Terence Halliday is Co-Director, Center on Law and Globalization, American Bar Foundation and University of Illinois College of Law. He is currently completing books on connections between globalization, law, markets, and political freedom. He has consulted with the World Bank on China.
David Hempton
A fresh look at faith and doubt in Victorian England.
More churches were built or restored and more yards of religious print were published in 19th-century Britain than in any other century of British history before or since. Similarly, various estimates of church attendance, whether based on the snapshot Religious Census of 1851 or on other data, are remarkably high by recent British standards, and are probably also high by 18th-century standards. In 19th-century Britain, religious voluntary associations were ubiquitous and religious issues often dominated social and political discourse. Unsurprisingly therefore, it was taken as axiomatic among scholars of George Kitson Clark’s generation that to begin to understand Victorian civilization one had to understand its religion. That is the intellectual tradition I grew up in as a college student, and there was no shortage of superior literature upon which to draw, including Owen Chadwick’s magisterial, if now rather dated and smugly Anglican, two-volume history of the Victorian Church. Although a combination of the Oxford Movement and Evangelicalism probably occasioned the most distinguished historiographies of Victorian religion, even new-fangled social historians like Hugh McLeod and James Obelkevich treated religion seriously back in the 1970s. Religion and Victorian culture, it seemed, were inextricably yoked.
Crisis of Doubt: Honest Faith in Nineteenth-Century England
Timothy Larsen (Author)
Oxford University Press
330 pages
$32.99
Timothy Larsen’s book, on the other hand, has been provoked by a different discourse altogether, namely that of the Victorian crisis of faith. Larsen shows that, beginning with some eminent Victorians themselves and continuing with scholars such as Basil Willey and A. N. Wilson, the loss of faith has become a dominant motif in 19th-century British studies, one that has seeped its way into textbooks, general histories, and encyclopedias as the chief characteristic of Victorian religion. As British intellectual life has become more secular, and as religion has diminished in social salience, the intelligentsia has looked increasingly to the Victorian period for the roots of its secularity. Larsen’s aim is to attack that view of 19th-century England by showing that a “crisis of doubt” makes at least as much sense in characterizing the period as a “crisis of faith.”
Larsen’s angle of attack is to look at the plebeian leaders of 19th-century secularism who reconverted to Christianity after having made their mark as popular leaders of the Secular Movement. His book consists of seven biographical portraits topped and tailed by a helpful introduction and conclusion. His chosen figures are William Hone, Frederic Rowland Young, Thomas Cooper, John Henry Gordon, Joseph Barker, John Bagnall Bebbington, and George Sexton. These are scarcely household names outside the scholarly cognoscenti of Victorian experts, but they were all important leaders of Victorian secularism, and they all eventually reconverted to a more-or-less orthodox Christianity. Larsen is at pains to point out that their conversions to and from secularism were serious intellectual affairs and were not undertaken for mere pecuniary or positional advantage, and that all seven reconverted long before their deathbeds. The point of such an emphasis is to insist that these seven figures looked seriously into the gaping mouth of secularism yet returned to Christianity via a serious, honest, and careful evaluation of the respective merits and demerits of faith and infidelity.
Although each faith journey is obviously unique, the common pattern among Larsen’s chosen subjects is that they were brought up in Christian homes, embraced populist forms of evangelical Nonconformity early in their lives, or both. Many were educated in Sunday Schools or became Sunday School teachers or preachers, often of the Methodist or Baptist variety, before falling by the wayside. The most common reasons for their apostasy included doubts about the inspiration and moral content of the Bible, the growth of a radical political consciousness, disillusionment with established forms of religion, and acquaintance with a wide range of English and French infidel literature such as Thomas Paine’s Rights of Man and Baron d’Holbach’s The System of Nature. They also had temperaments that did not easily bow the knee to any kind of secular or religious authority. According to George Sexton, “the so called Secular societies were made up of young men, for whom skeptical views have an attraction, as being calculated to allow a sort of reckless independence, freedom from control, and a kind of intellectual audacity which fascinate for a time.”
Larsen’s cohort of secularists became leading orators, writers, organizers, and debaters for the secularist cause. Why then did they reconvert to Christianity? Once again, the narrative of every life is unique, but Larsen helpfully identifies a number of common factors at play in reconversion. These erstwhile militant secularists came to see that secularism was better at tearing down Christianity than building a replacement, left little solid basis for the construction of a satisfying morality, and was based on an oppressively narrow definition of reason that left little room for intuition and emotion. In addition, they remained haunted by the compelling figure of Jesus of Nazareth, became intrigued afresh by the grandeur of the Scriptures, repudiated naked materialism by flirting with spiritualism, came to see that they could be radical politically without abandoning Christianity, and became intellectually persuaded of the truth of Christianity from their consumption of a wide range of books, sermons, and letters.
Crisis of Doubt is an impressively researched, clearly written, and forcefully, even polemically, argued work of scholarship. Moreover, Larsen is careful not to overplay his hand. Despite supplying an appendix of some thirty additional names of erstwhile secularists who found some sort of religion, he acknowledges that reconversion from secularism was not exactly rampant in Victorian Britain. He is also careful to show that his seven converts did not necessarily return to an impeccably conservative form of evangelical Protestantism. In fact most embraced fairly conservative positions on important Christian doctrines, but many held a more flexible view of biblical inspiration, and most remained radical in their social and political orientations. Reconversion did not mean capitulation to the religious or political status quo, and old radicals lived on in new Christian clothes.
By suggesting that the “crisis of doubt” within Victorian secularism was a more common and powerful reality than was the “crisis of faith” among the Victorian intelligentsia, Larsen is hoping not only to correct an exaggerated emphasis on the Victorian crisis of faith but also to show the intellectual robustness of Christianity in the 19th century. Challenging the notion that there was an inevitable and inexorable slide towards Matthew Arnold’s “Dover Beach,” Crisis of Doubt argues that the tide of faith could come in as well as go out. In that sense the book also acts as an important counterpoint to intellectually sloppy versions of secularization theory.
Although Crisis of Doubt is an impressive achievement, it is not without its flaws. So keen is Larsen to demonstrate that reconversion is primarily an intellectual transaction, which ought not to be “reduced” to the effects of other more personal factors such as relationships, illness, bereavement, or penury, that his seven portraits are at times rather bloodless creations. The reader is left with a great deal of head knowledge about the intellectual reasons for conversion and reconversion, and the kind of Christian apologetics written by the reconverted, but beyond the bare essentials demanded by the narrative we learn little of the book’s main characters as human beings. Moreover, in order to sustain his main argument that reconversion was based primarily on ideas, Larsen has to work against some of his own evidence, as many of his chosen figures drew attention to the emotional sterility of secularism, and their eager longing for a return to a more compelling alternative.
A second quibble one might have with the book is that a biographical approach, albeit with a succinct and helpful introduction and conclusion, does not allow much space for treating shared and repeated characteristics seriously. For example, it is clear that these seven figures all experienced dissatisfaction with the moral content of the Bible, including the doctrines of hell and substitutionary atonement, but the negotiation of these moral difficulties, both in conversion and reconversion, is not drawn out as thoroughly as one would wish. Howard Murphy’s old claim in the American Historical Review in 1955, that the revolt against Christian orthodoxy in Victorian Britain was essentially an ethical one, is largely confirmed by this volume, but it is not brought into sharp focus. Similarly, most of the figures in the book were thoroughly disenchanted with the state of established religion and political culture in Britain, but the extent to which that disenchantment was moderated or recalibrated after reconversion is only hinted at.
Timothy Larsen has produced an admirable study of a group of plebeian radicals who once dared to convert to secularism and then dared to convert back again to orthodox Christianity. The word dare is appropriate, for these sturdy individuals had to cope with the opprobrium of the friends and families they had intellectually deserted—not once, but twice. In retrospect what is remarkable about them is their willingness to continue their lives as public figures, lecturing on platforms, writing in periodicals, and publishing pamphlets and books even after they had made a 180-degree turn for the second time. These were anything but shrinking violets coping with their reconversions by shunning the limelight in embarrassed silence.
Moreover, Larsen suggestively argues that working-class intellectuals, because they owed no deference to churches, colleges, and establishments, were earlier embracers of modern views about science, biblical criticism, and theology than their middle- and upper-class counterparts. Indeed, they were dealing with new ideas about nature, evolution, and biblical reliability a generation in advance of their more celebrated and better known countrymen. In that sense Larsen’s book contains a noble plea for taking seriously the intellectual culture of the 19th-century working class. (Not before time, one might add.)
Whether Larsen has succeeded in his broader aim of reversing the dominant trajectory of secularist scholarship on Victorian religion and irreligion is quite another matter. Larsen’s subjects are far from negligible figures, especially William Hone, Joseph Barker, and Thomas Cooper, but they somehow lack the intellectual éclat of Victorian doubters such as George Eliot, Francis Newman, Leslie Stephen, and John Ruskin. Still, at least we now know they existed, and that their reconversion to orthodox Christianity was as important for making sense of their lives as their more highly publicized contribution to 19th-century secular societies. Larsen has amply demonstrated that at least among the plebeian leaders of 19th-century secularism there was indeed a “crisis of doubt” to go alongside the more familiar meta-narrative of a “crisis of faith.”
David Hempton is Alonzo L. McDonald Family Professor of Evangelical Theological Studies at Harvard Divinity School. He is the author most recently of Methodism: Empire of the Spirit (Yale Univ. Press).
Randall L. Bytwerk
Nazi propaganda and the Holocaust.
Two books, published almost simultaneously in 2006, add significantly to our knowledge of the public face of the Holocaust. Peter Longerich’s “Davon haben wir nichts gewusst!” Die Deutschen und die Judenverfolgung 1933-1945 [“We Didn’t Know Anything About That!” The Germans and the Persecution of the Jews] is the more ambitious of the two. Longerich tracks Nazi public rhetoric on the Jews for the twelve years of Hitler’s rule, and attempts to reveal the German public’s thinking about what it heard and saw. Jeffrey Herf’s The Jewish Enemy: Nazi Propaganda during World War II and the Holocaust focuses on Nazi anti-Semitic propaganda during the war, and makes no determined attempt to analyze how it was received. Neither book presents startling news, but both provide an astonishing amount of carefully considered evidence from the period. Most readers of Books & Culture will prefer Herf’s cogent analysis to 448 pages of reasonably clear German, but despite their areas of common focus, the books are worth reading together.
"Davon haben wir nichts gewusst!": Judische Schicksale aus Hochneukirch / Rheinland 1933-1945
Rudiger Rottger (Author)
190 pages
The Jewish Enemy: Nazi Propaganda during World War II and the Holocaust
Jeffrey Herf (Author)
400 pages
$27.97
The two books present parallel, and largely consistent, chronological surveys of what Germans saw and heard during the war. Besides the familiar public statements by Hitler, Goebbels, and other Nazi leaders (e.g., Hitler’s threat: “If Jewry imagines itself to be able to lead an international world war to exterminate the European races, then the result will not be the extermination of the European races but rather the extermination of Jewry in Europe!”), both books trace the propaganda found in newspapers and posters. Longerich provides a greater sampling of newspapers, and also considers newsreels, radio, and Allied broadcasting and leaflets, but the wider range of sources does not lead to significantly different conclusions.
Both books give only limited attention to the comprehensive system of Nazi speakers and propagandists at the local level, who regularly received guidelines on what they were to say about Jews in speeches and conversations. The Nazis saw such word-of-mouth propaganda as more effective than magazines and newspaper articles (a widely circulated print during the Nazi era showed Hitler speaking to his early followers, with the caption: “In the beginning was the word”); considering such material would have strengthened both analyses, even if the conclusions would not have changed greatly.1
The focus of Nazi anti-Semitism varied. After the signing of the German-Soviet pact in August 1939, public anti-Semitism diminished considerably, to be renewed suddenly after the June 22, 1941 attack on the Soviet Union. As enormous numbers of Jews were killed in 1942 and 1943, Nazi propaganda attacked the Jews through every imaginable channel. After mid-1943, with the bulk of the killing done, the intensity declined somewhat, but anti-Semitic propaganda hardly vanished.
The Nazis presented World War II as a defensive struggle against Jewish plans for world domination. The Jews were out to destroy Germany, in a literal, biological sense. The only response was to exterminate the Jews first. The war was a matter of life or death. Either Germany and its people would survive and the Jews would perish, or the Jews would triumph over the bodies of murdered Germans.
As both books point out, “the Nazis combined blunt speech about their general intentions with suppression of any facts or details regarding the Final Solution” (Herf, p. 268). German media completely ignored the death camps, such as Auschwitz, and mass shootings. Specific guidelines ordered propagandists to avoid such details. There were many rumors and reports from soldiers home on leave—but it was surely difficult for an average German to use such information to know what actual horrors were occurring (though some succeeded in so doing). Germans, in short, knew that bad things were happening, but had no clear idea just how bad those things were. In this, Nazi propaganda was following a successful strategy. If Germans had known for certain what was going on, there is little doubt that even many of those who were anti-Semitic would have been horrified. Just as it is possible for people to ignore the AIDS catastrophe in Africa while still being moved by the tragedy of individuals, so Germans, in J. P. Stern’s words, knew enough to know that they did not want to know any more.
The record of what Germans did hear and read is stunning, and both books bring more of it together than is available anywhere else. Longerich and Herf convincingly demonstrate that Germans could not claim that they did not know anything about the Holocaust. In Herf’s words: “Claims of ignorance regarding the murderous intentions and assertions of making good on such threats defy the evidence, logic, and common sense” (p. 267). Longerich draws a similar conclusion: “The German public was generally aware of the mass murder of the Jews” (p. 240).
Although Nazi propaganda was more than clear about the general intention to murder Jews, what Germans thought about the Jews, as opposed to what they knew about the Holocaust, is a vehemently disputed question, and one probably impossible, as Herf suggests, to resolve. The positions range from self-exculpatory claims of ignorance (as in Longerich’s title) to Daniel Goldhagen’s assertion in his 1996 book Hitler’s Willing Executioners that Germans in general were possessed by an “eliminationist anti-Semitism” that led them to welcome Nazi persecution of the Jews.2
Herf does not try to untangle what Germans actually thought after this barrage of anti-Semitism. Having done one thing well, he chooses not to do a second less well: “The beginning of wisdom in these matters is a certain restraint and much less certainty regarding what ‘ordinary Germans’ made of the Nazi propaganda,” he writes. He is reluctant to claim that Germans in general were eager to kill Jews. The Holocaust, he suggests, depended on a significant minority of fanatic anti-Semites who were surrounded by a society in which anti-Semitism was commonplace.
Longerich, on the other hand, uses a great range of sources to deduce what was going on in German minds: government and party morale reports, legal records, exile publications, Allied intelligence reports, diaries, and oral histories. Along the way, he provides a careful analysis of the strengths and weaknesses (mostly the latter) of the various sources. The problem is that, in a closed totalitarian system, there is nothing resembling a Gallup Poll one can use to determine with reasonable confidence what people think. To report too much public unhappiness was bad for a Nazi bureaucrat’s career, since his job was to keep morale high. After extensive analysis, Longerich comes to an unsatisfying conclusion. Germans in general, he thinks, knew enough about what was going on to put on a face of indifference and passivity with regard to the Holocaust, but their motivation “must be seen as an attempt to escape any responsibility for events by ostentatious ignorance.”
Surely that was true of some, perhaps even many. But the nature of human evil is more complicated than Longerich’s conclusion suggests. A significant number of Germans did hate Jews, and were happy to suspect that they were being killed. Others were concerned about what they thought might be happening, but through cowardice, the more immediate pressures of the war, the social isolation and removal of Jews from German life, or other reasons did not think about the Jews very much.
To what extent are we who enjoy what former German Chancellor Helmut Kohl called “the blessing of a late birth” justified in judging Germans who, for whatever reason, did nothing about the persecution of the Jews? Christians, it should be noted, were not much better in this respect than anyone else. Although Longerich frequently notes Nazi concern about opposition to anti-Semitic policies from both Catholics and Protestants, the number whom we can hold up as role models is discouragingly low. Is ignorance, pretended or real, an excuse?
Herf touches on an answer in noting that Nazism was more than a political movement. It claimed to be a worldview, offering an explanation for all aspects of life. Radical anti-Semitism provided a structure in which “all riddles were solved, all historical contingency was eliminated, and everything became explicable” (p. 6). Later, he concludes that Nazi leaders really believed most of what they said about the Jews; they “pushed to the extreme the widespread human capacity for delusion and belief in illusions” and “supplied a narrative of events that seemed to offer an iron-clad explanation of them as well as justification for uniting ideology and practice in war and mass murder” (pp. 269-270).
Nazism (and its totalitarian cousin Marxism-Leninism) can perhaps be seen as an example of what Christ meant with the parable of the empty rooms: “When an evil spirit comes out of a man, it goes through arid places seeking rest, and does not find it. Then it says, ‘I will return to the house I left.’ When it arrives, it finds the house swept clean and put in order. Then it goes and takes seven other spirits more wicked than itself, and they go in and live there. And the final condition of that man is worse than the first.” The Holocaust was not done in secret. It followed from a worldview that the Nazis proclaimed loudly and clearly, one that filled a spiritual vacuum for many Germans, one that people chose to accept. In light of that worldview’s plain speaking, the excuse “We did not know anything about that!” cannot withstand scrutiny.
Randall L. Bytwerk is Professor of Communication Arts and Sciences at Calvin College. His most recent book is Bending Spines: The Propagandas of Nazi Germany and the German Democratic Republic(Michigan State Univ. Press). His German Propaganda Archive (www.calvin.edu/cas/gpa) provides English translations of much Nazi anti-Semitic propaganda.
1. This is an area I consider in “The Argument for Genocide in Nazi Propaganda,” Quarterly Journal of Speech, Vol. 91 (2005), pp. 37-62.
2. Goldhagen, by the way, had a predictably negative review of Longerich’s book in the Hamburg daily newspaper Die Welt (May 6, 2006).
Stephen H. Webb
Re-framing the questions.
In the first chapter of his wide-ranging and well-written book, He Came Down From Heaven: The Preexistence of Christ and the Christian Faith, Douglas McCready confesses that his topic is something of an oxymoron. He is right. The pre and exist of preexistence, often pinned together by a hyphen like siblings stuck in the back seat on a long car ride, add nothing in their combination to our understanding of Jesus Christ. Jesus exists, and he exists prior to everything, so talking about his preexistence is incoherent. He certainly does not exist prior to his own existence, which the term seems to imply.
He Came Down from Heaven: The Preexistence of Christ and the Christian Faith
Douglas McCready (Author)
IVP Academic
349 pages
$37.09
The Preexistent Son: Recovering the Christologies of Matthew, Mark, and Luke
Simon J. Gathercole (Author)
Eerdmans
356 pages
$28.81
Of course, theologians often retool ordinary words with technical meaning. This word, however, is neither ordinary nor precise. In fact, preexistence does not actually apply to anything, because nothing exists prior to Christ’s saying so, and he exists like nothing else. It would be better to speak of his eternal existence than of his preexistence.
If preexistence were merely confusing, it might be worth salvaging, but its damage extends beyond the rules of grammar. Although this word has a long history of theological use, it actually drives a wedge into the life of Jesus Christ. The pre of preexistence suggests, in an insidious fashion, that Christians worship a split person if not a split personality: Jesus of Nazareth the miracle worker who had a prior career as the Son of God.
The truth is that Jesus exists in a manner that befuddles the way we are cursed to divide time into before, now, and after. We preexist ourselves, to coin a variant of this term, because we are always looking to the receding past to discover who we are. We have a problem with time, not Jesus. Rather than view the existence of Jesus Christ through the prism of our fragmented sense of time, we should let the coherent wholeness of his life judge our own. The Son of God mixes together time and eternity as if they were as easily interchangeable as mayonnaise and Miracle Whip. That is why we can hope that the rushing blurs of our lives will one day find their rest in him.
Rather than give this misleading term a decent burial, McCready tries to give it new life. He explains that there are at least three interpretations of preexistence. The first is real or personal preexistence. It is one of the awkward features of this term that even heretics such as Arius, who denied the full deity of Christ, could affirm his preexistence as a lesser deity created before the rest of the world. The second is ideal preexistence, which means that Jesus existed in God’s mind prior to the incarnation. McCready shows how trivial this position is, because, given God’s omniscience, everything preexists in the divine mind. The third interpretation, eschatological preexistence, argues that the experience of the resurrection led Jesus’ disciples to create the myth of his preexistence. Post-existence gave rise to pre-existence in order to provide balance to the story of Jesus, as if the prolonged ending of the Gospels in his being raised from death into glory required an equally elongated beginning in his coming down from heaven.
In order to combat eschatological preexistence, McCready conducts a common-sense survey of the New Testament, concluding that the earliest writings affirmed personal or real preexistence. McCready’s method stays as close to the ground as Simon Gathercole’s soars into higher criticism, but they both reach the same conclusion. With The Preexistent Son: Recovering the Christologies of Matthew, Mark, and Luke, Gathercole—who has previously written a fascinating book on the role of boasting in Paul’s letters—has produced what should become the standard scholarly treatment of preexistence in the Synoptic Gospels. Like much of even the best New Testament scholarship, however, Gathercole’s book strikes me as an arduous exercise in belaboring the obvious. Gathercole is writing for the skeptics, who make the Bible more complex in order to make it less believable. Yet what the skeptics do with the numerous “I have come” sayings is hardly worth refuting. Jesus did not mean he had come from Nazareth, and his use of the first-person pronoun did not refer to somebody else.
Both McCready and Gathercole seem to regret that the New Testament does not do a better job of defining preexistence, and thus they set out to show how this concept can be pieced together from the broken bits of its biblical expression. The New Testament does not provide the level of clarity that these scholars seek because it presupposes the preexistence of Jesus Christ as a given that makes sense of everything else. Preexistence is the forest that makes sense of the trees.
The title Jesus most frequently gives himself in the Gospels is the somewhat mysterious Son of Man, which evades scholarly efforts at a precise definition. McCready and Gathercole agree that this designation does not offer much help in thinking about preexistence, but to me, it is utterly decisive. By calling himself the Son of Man, Jesus was alluding to the brotherhood of all humanity in his personal identity. Just as God the Father is the Father of all men, so God the Son is the Son of Man. By being the son of both God and man, Jesus demonstrates that God chose to be with us from the very beginning of time. Jesus Christ is not a man but the man, which is a crucial distinction.
If McCready and Gathercole are right that the earliest Scriptures affirm personal preexistence, then why do so many modern theologians deny it? According to McCready, there are two main reasons. The first is that preexistence can appear to downplay Christ’s humanity. Docetism was an ancient heresy that taught that Jesus only appeared to be human. He was really a divine being who used a human body like a costume, discarding it at will. Liberal theologians in the 20th century liked to argue that docetism had returned as a hallmark of evangelical Christianity. They alleged that evangelical Christians overemphasize Christ’s divinity at the expense of his humanity. That charge, which was rarely substantiated, provided liberals with the cover they needed to sacrifice Christ’s divinity to his humanity. For liberal theology, preexistence is incompatible with a fully human Jesus.
What liberal theologians miss is how the eternity of Christ is the only guarantee of the reality and perfection of his human form. The Son of God became incarnate; he did not fill somebody else’s body with the invisible spiritual fluid of divinity. He took a role in his own production. The Word does not put on flesh like a man who gets dressed in the morning, although, if we were to use this unsuitable metaphor, we would have to say that God’s clothing is a perfect fit. The highest honor we can give to the humanity of Jesus is to recognize that his body is not unconnected to his identity as the Son of God. Otherwise, he would not have been resurrected in his human form.
Modern liberal theology wants to portray Jesus as just like us in order to establish his credentials as a great teacher and moral role-model who inaugurated a process of revolutionary social change. The New Testament, however, does not distinguish between the power and the person of Jesus Christ. He himself is the future of the world. When the point of the world comes into final focus, we will recognize his personal features. We will be at home in the end because God made his home in him.
The second reason liberal theology tends to deny preexistence is that this concept smacks of metaphysical speculation. Liberalism focuses on morality more than salvation, and moralistic theologians define the divinity of Jesus by what he does rather than who he is. This so-called functional Christology is predicated on the assumption that metaphysics is foreign to Hebraic thought. From this perspective, speculation about preexistence arises only when Christianity becomes mired in Hellenistic culture.
McCready is right to insist that being and doing are intimately connected in every life, let alone in the harmonious actions of Jesus. Jesus’ mission makes no sense apart from his relationship with God. McCready is on shakier ground when he argues that the being of Jesus is an ontological question. Ontology is the philosophical study of existence as such. Christ’s preexistence is the presupposition for ontology, not an aspect of it. We can trust that being has a discernible structure not because our minds correspond to matter but because our minds correspond to Jesus. Because the Son knew the world with the mind of Jesus, we can trust that our minds know the world too.
The only way liberal theologians can coherently deny the doctrine of preexistence is to embrace some form of adoptionism, which portrays Jesus as a very good man upon whom the spirit of God descended at his baptism. Adoptionism was so soundly rejected by the early church that it is something of a straw figure, though McCready demonstrates, in a helpful survey of modern religious thought, how various liberal theologians have adopted adoptionism, often in a cagey manner, so that the heretical genealogy of their positions cannot be easily traced.
But adoptionism is not a temptation to which only liberals are vulnerable. Any Christology that goes too far in separating the fully human person of Jesus from the Son of God risks adoptionism by depicting the flesh of Jesus as a mere appendage to the divine. McCready himself risks this temptation in his anxiety to distance himself from any notion of the preexistence of Jesus’ human nature. “Jesus is the name we normally associate with the incarnate One, and it is incorrect to refer to Jesus’ existence at any time before the annunciation to Mary,” he writes, attributing any talk of the incarnation as “the manifestation of a humanity ever in the heart of God” to Platonic metaphysics. But if metaphysics is defined as the study of eternal truths, then preexistence and metaphysics go hand in hand. The Church Fathers legitimately adopted Plato as a providential gift. Christianity and Western philosophy are inextricably linked, regardless of attempts, from across the spectrum of modern theology, to sever their relationship.
If McCready barely avoids some form of adoptionism, he falls right into the trap of liberal theology when he worries that “teaching Jesus’ preexistent humanity would violate one of the major concerns of modern theologians, making his humanity different from everyone else’s.” The liberal insistence that Jesus is “just like us” has been the cause of much confusion in contemporary theology. If Jesus’ humanity were not different from our own, we would have no hope of salvation. We should try to be just like him, but he had no need to be just like us, because he is just like the Father, even in his fleshly form.
Indeed, for all of his efforts to demonstrate the consistency of the doctrine of preexistence, McCready ends up cleaving Jesus Christ into two: the eternal Christ and the Jesus bound by time. The problem comes down to the idea of personhood. Jesus does not just reveal the identity of the Son. Jesus is the proper name of the Son of God. He is the Son. That means that nothing in the incarnation that manifests Jesus’ identity is alien to or an alteration of the eternal Son. Even his very flesh is not an afterthought to God’s triune nature.
To avoid such confusions, theologians should undertake Christology with a simple principle. Do not begin with the characteristics of human flesh that are incompatible with the divine attributes and then subtract them from Jesus in order to obtain what it was about him that preexisted his human form. Instead, begin with the Father begetting the Son, and think of the Son as the furthest reach of God into the space and time of creation. In other words, do not use the concept of preexistence to divide the person of Jesus into two. Rather, begin with the unity of his person, and marvel at the complexity of God.
The physical life of Jesus is more than an illustration of God’s purposes. His body is more than a visual aid. Modern Christians have gone too far in purging the spiritual realm of all analogies with physical matter, while scientists, in the meantime, have been busy discovering just how mysterious matter really is. Many in the scientific community have been influenced by the feminist idea that the world is God’s body, which veers into pantheism. The more startling truth is that Jesus is God’s body, and the world is what was needed to make creatures like us, whom Jesus could call friend (John 15:14).
McCready would probably accuse me of downplaying the newness—the Good News—of the incarnation. That news is good because Jesus Christ is God for us, which makes sense only if we realize that we were created for Jesus. The human form is the Father’s gift to the Son for his glory, which was established before the foundation of the world (John 17:24). The incarnation can be understood as the fullest expression of the Son only if the entire cosmos was created by, through, and for him. The theory of evolution will never be able to explain the origin of human nature in natural processes because humanity lies at the beginning, not the end, of nature. We are unique because we are copies of him.
Stephen H. Webb is professor of religion and philosophy at Wabash College. His most recent book, Dylan Redeemed: From Highway 61 to Saved (Continuum), focuses on Bob Dylan’s midlife conversion to Christianity. He is currently working on a book entitled Christianity and Its Enemies.
Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.
Robert Whaples
An economist’s view of baseball.
Economists are not a timid lot. We like to take our game to other people’s playgrounds and see if we can beat them. We think our game plan—building models based on the assumption that rational people are responding to incentives, crunching a lot of numbers, and ruthlessly sharpening our spikes—can be applied just about everywhere, so we’re not afraid to take road trips. Many find this adventurous bent laudable, although occasionally over-eager economists take on competition that’s pretty tough and the home crowd boos them off the field.
The Baseball Economist: The Real Game Exposed
J.C. Bradbury (Author)
Dutton Adult
288 pages
$2.38
A recent article in the Wall Street Journal told of one such case. When Cornell economist Michael Waldman used statistical evidence to argue that autism is linked to television watching, autism experts and parents of autistic children roundly criticized him and tried to eject him from the ballpark. Or take the case of The Marketplace of Christianity, in which economists Robert B. Ekelund, Jr., Robert F. Hebert, and Robert D. Tollison mangled the religious history of Christianity from the Reformation to the present in a ham-fisted attempt to impose public choice theory and neoclassical economics on spiritual matters. There were a few valuable insights, but overall a lot more misses than hits. And of course there’s the case of Steven D. Levitt, co-author of the surprise bestseller Freakonomics. He’s made news by applying economic arguments and statistical evidence to topics ranging from test taking to baby naming to crime. His argument that the Roe v. Wade decision precipitated a substantial drop in the crime rate a couple of decades later is far from what most people once considered to be economics. Serious questions have been raised about the validity of this finding, but the umpiring crew hasn’t yet reached an official ruling.
And now we have economists sizing up sports. Though this may seem trivial to some, perhaps yet another sign of academic decline, the economics of sport is a booming business. My bet is that economists are attracted to analyzing sports primarily because many are enduring sports fans—plus most economists are “stat heads,” and a good many of them spent their youths mentally recalculating their favorite player’s batting average after each of his at bats.
But there’s another reason baseball and economics fit like hand and glove: sports can provide an empirical testing ground for economic theories that may otherwise go untested. Perhaps the biggest advantage of sports markets is that they make available detailed performance measures for individuals (and their coworkers and managers). Appropriately, undergraduate courses in the economics of sports have popped up at leading research universities like Cornell, Harvard, and Vanderbilt. The Journal of Sports Economics was launched in 2000 and has published nearly 200 articles on sports of all varieties: football, basketball, hockey, golf, racing (foot, horse, bicycle, and auto), boxing, rugby, and even cricket. Topics include player compensation and incentives, league and tournament structures, racial discrimination, the value that consumers put on having a professional team in town, the efficiency of betting markets, the effects of competitive balance on attendance, and the economic impacts of stadium and sports spending. But regardless of what economic issue is being studied, baseball—perhaps because of its sublime essence—is king. More than twice as many studies examine it as examine the runner-up (soccer).
It’s not surprising, then, that while the sports economics field has sprouted a wide array of monographs (Stephen Shmanske’s Golfonomics, to take just one example), the best of these—such as Andrew Zimbalist’s May the Best Team Win: Baseball Economics and Public Policy, which I’ve discussed with eager students in my Current Economic Issues course—deal with good old American baseball. Out of the dugout and into the lineup steps The Baseball Economist. J. C. Bradbury of Kennesaw State University pitches a slider he calls Sabernomics (coined after Sabermetrics, whose root word comes from SABR—the Society for American Baseball Research—which sprang largely out of the research of the justly revered number-crunching guru Bill James).
What sets Bradbury apart from his competitors is his willingness to apply the tools of economics to the actual strategies of players and managers as the innings unfold. The book opens on the baseball diamond itself. In Chapter 1, for example, Bradbury argues that the price of hitting a batter is not merely that the player gets a free pass to first base, but that there’s likely to be payback. If you bean an opponent, the other team’s pitcher is likelier to bean you or your teammates. This is a conventional interpretation, but Bradbury finds a tractable way to actually test the magnitude of the effect. He draws upon play-by-play data from eight complete seasons (available at www.retrosheet.org) and crunches the numbers, reaching several findings that are consistent with the simple economic insight that hitting batsmen is more likely to occur when the costs are lower.
For example, lousy batters are less likely to be hit—why put on base a guy who will have a tough time earning his way there? Teams that are losing are more likely to plunk the other team, and the larger the run deficit, the greater the likelihood that the pitcher will hit a batter. As the chance of winning a game falls, in other words, the price of plunking, in terms of contributing to a loss, also falls. Finally, pitchers who hit batters in the previous inning are more likely to be hit than those who didn’t hit anyone. The retaliation effect is borne out by the data. Bradbury points out that, ironically, the adoption of the “double warning” rule in 1994—which metes out expulsions, fines, and suspensions for subsequent beanings after an umpire determines that a pitcher has intentionally thrown at an opposing player—actually cuts the price of the first beanball in a game and may lead to more hit batsmen, since it ups the price of retaliation. Other chapters that bring economics onto the field provide a cost-benefit test for left-handed catchers, explaining their near extinction, and examine the incentives and payoffs for managers to argue balls and strikes.
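For readers curious about the flavor of this number-crunching, here is a minimal sketch in Python. It is my own illustration, not Bradbury’s model: the records and field names are invented, and a real test would draw on the Retrosheet event files and control for batter quality, inning, retaliation history, and much else. The sketch only shows the basic shape of the tabulation, namely whether hit-by-pitch rates rise as the pitching team falls further behind.

from collections import defaultdict

# Toy play-by-play records (invented for this sketch). Each entry notes
# the pitching team's run deficit when the plate appearance began and
# whether the batter was hit by a pitch.
plate_appearances = [
    {"pitching_team_deficit": 0, "hit_by_pitch": False},
    {"pitching_team_deficit": 0, "hit_by_pitch": False},
    {"pitching_team_deficit": 3, "hit_by_pitch": False},
    {"pitching_team_deficit": 3, "hit_by_pitch": True},
    {"pitching_team_deficit": 5, "hit_by_pitch": True},
    {"pitching_team_deficit": 5, "hit_by_pitch": True},
]

# Tabulate hit-by-pitch rates by run deficit. If plunking gets cheaper
# as the chance of winning falls, the rate should rise with the deficit.
totals = defaultdict(int)
plunks = defaultdict(int)
for pa in plate_appearances:
    deficit = pa["pitching_team_deficit"]
    totals[deficit] += 1
    plunks[deficit] += pa["hit_by_pitch"]  # True counts as 1

for deficit in sorted(totals):
    rate = plunks[deficit] / totals[deficit]
    print(f"deficit {deficit}: HBP rate {rate:.2f} over {totals[deficit]} PAs")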
Perhaps the most insightful chapter brings us into the locker room, examining the impact of steroids. Bradbury argues that steroids may enhance the quality of the overall baseball product, boosting fans’ willingness to pay and teams’ profits—by leading to more spectacular plays. Yet steroids have well-known long-term negative side effects on players’ health. Why, then, has their union resisted testing? Bradbury argues that players fear that their urine samples will yield other “health related information,” for example, evidence of marijuana use, which he argues is likely to be relatively high among MLB players. He proposes that the players’ union should adopt its own testing policy. The union itself would fine players who use steroids and distribute the fines among the other players—very sensible, since steroid users are imposing costs on other members of the union, who must otherwise endanger their own health to keep up with the competition. Keeping testing in-house would also protect secrecy about other chemicals deposited by players.
The heart of the book follows the main sports econ basepath in attempting to measure the productivity of baseball pros. But Bradbury reaches for extra bases by examining managerial productivity. One thoughtful chapter looks at former Braves pitching coach Leo Mazzone’s record in boosting his pitchers’ performance, concluding that the effect was substantial.
Bradbury ends where many economists begin, assessing the competitiveness of the professional baseball market and the impact of its anomalous, much-discussed antitrust exemption. He argues, contrary to many other economists, that Major League Baseball doesn’t possess meaningful monopoly power. Rather, MLB is a “contestable market” with no effective barriers to entry. If there were room enough for a second elite professional circuit, it could easily sign players, line up a broadcasting contract, and rent stadiums. However, MLB has learned the lesson of history and has expanded the number of teams whenever such potential entry becomes likely. Unlike a true monopoly, therefore, Major League Baseball has not been able to restrict output—and Bradbury shows that its prices aren’t out of line with other professional sports. Although he overstates his case a bit, he supports it fairly well. MLB’s behavior is strikingly similar to that of the NBA, NHL, and NFL on a host of fronts, so any argument that its antitrust exemption has a significant impact seems dubious. Moreover, for most fans the substitutes for a baseball game are immensely varied. Baseball is essentially entertainment, and much of its revenue comes from broadcasting, so if teams act as monopolists and charge higher and higher prices, only their die-hard fans will be considerably worse off. Those of us who don’t worship the game can—and often do—easily tune out.
Not all of Bradbury’s arguments are as successful. He is too quick to equate maximizing profits with maximizing wins. He essentially assumes that fans pay to see good statistics rather than charismatic stars. And his argument that small-city teams don’t face much of a disadvantage relative to big-city teams seems dubious, as it ignores the immense cable TV revenue gap.
All in all, though, Bradbury’s book is a major league effort. If it’s not Freakonomics, the Albert Pujols of recent popular economics books, it’s certainly a dependable role player.
Robert Whaples, professor of economics at Wake Forest University, is director and book review editor for EH.Net, which provides electronic services for economic historians.
Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.
Dana L. Robert
Christian mission, along with the Christian faith, has often advanced on the strength of its rediscoveries—retrieving neglected stories and insights from the past and putting them to fresh use. In 2007 we are asking one big question: What must we learn, and unlearn, to be agents of God’s mission in the world? For any endeavor of learning and unlearning, historians are indispensable allies. Dana L. Robert, co-director of the Center for Global Christianity and Mission at the Boston University School of Theology, is an especially keen observer of the history of mission, and here she recalls a saint whose mission is annually celebrated and perennially relevant.
On March 17, people of Irish descent around the world celebrate “St. Patrick’s Day.” Nearly a million people stream into Dublin, Ireland, to enjoy the fireworks, concerts, parades, and street theater. St. Patrick’s Day parades began in 1762, when Irish soldiers serving in the colonial British Army marched through the streets of New York City accompanied by Irish music. By the early 20th century, St. Patrick’s Day parades in major American cities had become triumphant celebrations of Irish “arrival” in the hallowed halls of city government—victors over the old guard Protestant Yankees. The importance of St. Patrick to growing Irish self-confidence was expressed in 1921 by Seumas MacManus, author of the sentimental favorite Story of the Irish Race: “What Confucius was to the Oriental, Moses to the Israelite, Mohammed to the Arab, Patrick was to the Gaelic race. And the name and power of those other great ones will not outlive the name and the power of our Apostle.”1
The irony of MacManus’ paean to Patrick as the emblematic Irish religio-political race warrior is that Patrick himself was a “Brit,” born into a Christian family in the Roman colony of Britannia. Even though the Britons and the Irish shared a Celtic cultural heritage, they were historical enemies who raided each other’s territories and enslaved the vanquished. Young Patrick was such a slave. He escaped from an Irish master after six years of harsh servitude. Later in life, as a Christian priest, he returned to Ireland to share his faith as a missionary.
Why did a former slave risk his life to teach his captors what he believed about God? How did he become the beloved St. Patrick, the “Apostle of Ireland”? Why would the Irish—or any other group of people, for that matter—accept a former slave in their midst and then be willing to be transformed by his message? These questions uncover an essential, and paradoxical, lesson about the practice of Christian mission. The more deeply Patrick engaged the particularities of Irish culture and identified himself as Irish, the more authentic and believable was his expression of the ideals of a universal community in which there is no longer “Jew or Greek,” “slave or free,” “male and female” (Gal 3:28). This creative tension between cultural identification and universal ideals has made Christianity the largest religion in the world today. As the ideas, beliefs, and traditions of Christianity spread from one people to another, they are shaped—and reshaped—by the culture of each new group. So the paradox of St. Patrick’s Day is that in celebrating the creation of Irish identity, it also commemorates the incorporation of a particular people into a vision of universal and multi-cultural community.
The life of the 5th-century saint is shrouded in tradition and myth, as appropriate to a heroic figure in Irish epic poetry, whose stories were passed down through the generations. According to legend, St. Patrick was a miracle worker and healer who drove the snakes from Ireland. He explained the Christian Trinity by pointing to the leaves of a shamrock—three “persons” and yet united into one. Patrick is also credited with ordering the written preservation of oral Irish lore. But direct historical evidence about Patrick is slim.2
What is known with certainty is that a Christian bishop named Patrick left two documents—his Confession and Letter to Coroticus—written in the clumsy Latin of someone whose formal schooling was limited. They are the first known documents written in Ireland, and the only known “contemporary narrative of the conversion of Ireland to Christianity.”3 Although they are difficult to decipher in places, they also bear consideration as the first surviving substantial Latin texts from “outside the frontiers of the Roman world.”4
The better known of the two, Patrick’s Confession, narrates how he was seized from his father’s land by Irish invaders. Roman troops had withdrawn from Britannia in 410, leaving their northernmost province vulnerable to slave-raiding by the seaborne Picts and Irish. The 16-year-old was taken by boat to Ireland, where he worked as a shepherd and suffered nakedness and hunger. Although he had paid little attention to religious matters before he was captured, he began praying many times a day:
And my spirit was moved so that in a single day I would say as many as a hundred prayers, and almost as many in the night, and this even when I was staying in the woods and on the mountains; and I used to get up for prayer before daylight, through snow, through frost, through rain, and I felt no harm, and there was no sloth in me—as I now see, because the spirit within me was then fervent.5
On the strength of a dream, he escaped from slavery, eventually managing to cross the sea and return home.
Many years later, after Patrick had become a priest, he dreamed of a man bringing him letters from Ireland, one of which began, “The voice of the Irish.” Patrick dreamed of voices crying, “We ask thee, boy, come and walk among us once more.”6 Following his vision of Irish voices, Patrick had another experience of mystical union with God, in which he felt God praying within his body, and also beside and above him. At the end of the prayer, the mysterious force revealed himself as the Holy Spirit, and Patrick awakened. Through such mystical experiences, dreams and visions, Patrick knew that God was calling him to return to Ireland as his “ambassador.”7
Patrick’s self-understanding as a wanderer under God’s protection, operating on the margins of society, was essential to his calling as a missionary. He stated repeatedly in his Confession that he was “rustic, exiled, unlearned,” and “the outcast of this world.” He was a “stranger and sojourner,”8 “a stranger and exile for the love of God.”9 Not only were Christians followers of one who had a wandering ministry on earth and “nowhere to lay his head,” but God’s spiritual kingdom was universal and not confined to a particular location on earth. Followers of Christ were “in the world but not of it” because their community transcended time and space, and crossed all human boundaries.
Patrick’s calling was strongly motivated by his belief in the nearness of a day of judgment. Just as the sack of Rome in 410 caused St. Augustine to recognize that it was a theological mistake to conflate earthly empire with God’s heavenly city, so did the collapse of Roman Britannia help Patrick to sever God’s call from his own homeland. As have many missionaries down through history, he believed he was chosen to be God’s ambassador not because he was educated or sophisticated, but because Jesus had prophesied that the gospel would be preached throughout the world before the end came. Citing passages from the Bible (Isaiah) that referred to what would happen in the end times, Patrick noted that “To Thee [God] the gentiles shall come from the ends of the earth”; and “I have set Thee as a light among the gentiles, that Thou mayest be for salvation unto the utmost part of the earth.”10
The theology of mission that Patrick cited as a rationale for his own calling is known today as the “Great Commission.” Jesus’ final words to his disciples after his resurrection have been one of the major scriptural justifications for cross-cultural mission since the times of Patrick. As the world of Roman Britain crumbled around him, he felt himself burdened with Jesus’ final command to go into “all the world,” in anticipation of the end of time as he knew it. To Patrick, Ireland represented the “ends of the earth” beyond the boundaries of the known Romanized Christian world—and therefore the object of his own obedience.
According to Patrick’s Confession, many Irish were “reborn” through his mission work. But it is the lesser-known Letter to Coroticus that contains vital hints as to why the Irish may have embraced Patrick as their own. The Letter reveals his profound solidarity with the Irish, in an attitude that missiologists today call “missionary identification” or an “incarnational” approach.
It seems that a British chieftain named Coroticus had attacked new Irish Christians, fresh from their baptisms by Patrick. Many were slain in their baptismal robes, and others captured as slaves. In Celtic fury, Patrick cursed Coroticus and his men, calling them not Britons, but wicked “fellow citizens of the demons.” With all the power of his office, Patrick rendered divine judgment on Christians who murdered the innocent and enslaved their brothers and sisters in Christ:
Wherefore let every God-fearing man know that they are enemies of me and of Christ my God, for whom I am an ambassador. Parricide! fratricide! Ravening wolves that “eat the people of the Lord as they eat bread”! As is said, “the wicked, O Lord, have destroyed thy law,” which but recently He had excellently and kindly planted in Ireland, and which had established itself by the grace of God.11
To be a Christian meant following the laws of Christ. It meant giving up the violence of murder, and of enslaving one’s brothers and sisters in the Lord. Coroticus had even sold them as slaves to the non-Christian Scots and Picts! For these violations of Christian law, Coroticus would suffer the penalty according to the Scriptures: ” ‘The riches, it is written, which he has gathered unjustly, shall be vomited up from his belly; the angel of death drags him away, by the fury of dragons he shall be tormented, the viper’s tongue shall kill him, unquenchable fire devours him.'”12
In contrast to the actions of Coroticus, said Patrick, the Christians of Gaul send men with money to ransom Christian slaves from the Franks and other heathen tribes. But Coroticus had taken new Christians, including women who had taken vows of chastity, and betrayed “the members of Christ as it were into a brothel. What hope have you in God, or anyone who thinks as you do, or converses with you in words of flattery? God will judge … . Hence the church mourns and laments her sons and daughters whom the sword has not yet slain, but who were removed and carried off to faraway lands, where sin abounds openly, grossly, impudently.”13 As Patrick cried out in “sadness and grief” for the killed and stolen Irish Christians, he showed he considered them as members of his own family of believers: “The wickedness of the wicked hath prevailed over us … Perhaps they do not believe that we have received one and the same baptism, or have one and the same God as Father. For them it is a disgrace that we are Irish. Have ye not, as is written, one God?”14 After predicting that the slain Christians would “reign with the apostles, and prophets, and martyrs” in the “kingdom of heaven,” Patrick concluded the letter by asking that it be read aloud “before all the people” and to Coroticus himself.
In this powerful letter, Patrick showed his identification with the Irish in his phrase “we are Irish.” As the bishop of the Irish Christians, he defended them with every ounce of his spiritual power, even if it meant defying a powerful military leader of his own ethnic background. To be a Christian was to identify with a new “reference group”—the Christian family. Fellow baptized believers from whatever tribe or nation became one’s new family and should be treated as such. Racial and ethnic differences melted away in light of the common relationship in Christ.
Patrick’s Letter to Coroticus demonstrated that the Christian ideals of brotherly love and identification in Christ could overcome tribalism. Not only did the missionary Patrick become Irish in solidarity with their suffering, but he was brother to all baptized Christians. The demands of Christian mission included denouncing sin and injustice at grave risk to himself. In its ideal form, incorporation into the “body of Christ” meant choosing a way of peace and reconciliation that overcame ethnic boundaries, and renouncing the killing, violence, and slavery of a warrior culture.
The response of Coroticus and his men to Patrick’s countercultural vision was to laugh uproariously and to reject the letter out of hand. In their tribal understanding, there could be no brotherhood with those who stood outside one’s own ethnic group. Patrick was caught in a classic missionary dilemma. To the Irish he may have represented Romano-British imperial culture. After all, he was coming from the direction of the old “empire,” even if as an ascetic he had personally renounced the trappings of power and prosperity. Yet when he took a prophetic stand for justice on behalf of his converts, his own countrymen accused him of materially profiting from his mission work among the Irish. No doubt conservative authorities thought him a nonconformist troublemaker. This scenario has frequently characterized the self-imposed marginality of missionaries—caught between cultures, and subject to abuse and misunderstandings from both sides. Could opposition from fellow clerics explain why the rustic Patrick was forced to struggle so hard in bad Latin to justify his ministry?
In the 21st century, Christianity increasingly resides beyond the protection of empires, in a globalized context abounding with the movement of peoples—both willingly and unwillingly—across cultural and political borders. As in Patrick’s day, those who choose to go “to the ends of the earth” for the sake of the gospel do so in contexts of war, poverty, violence, disease, and even modern-day slavery. Patrick’s calling as “stranger and sojourner,” as “ambassador” for God, has striking relevance for mission practice in the 21st century. The paradox of Patrick is that to demonstrate the universal ethic of a loving God who transcends human divisions of tribe and race, Patrick took on the particularity of Irish identity. His defiant cry “we are Irish” was proclaimed in solidarity with those who, having enslaved him in the past, were now being killed and abused by his own countrymen. Because Patrick risked becoming Irish, the Irish became Christians.
Those who seek to witness to God’s mission in our time must also cast aside their own ethnic prejudices, cultural particularities, political loyalties, and memories of past injustices, in radical identification with the “other.” On St. Patrick’s Day, it is common to be asked if one is Irish. Perhaps next year, when asked that question, those who know that radical inculturation is essential for Christian witness might recall a certain British missionary, and say “yes.”
Dana L. Robert is Truman Collins Professor of World Christianity and History of Mission at Boston University School of Theology.
1. Seumas MacManus, The Story of the Irish Race, rev. ed. (Devin-Adair, 1921), pp. 124-125. Part of this article is adapted from the forthcoming volume, Dana L. Robert, A Brief History of Christian Mission (Blackwell, 2008). See also Dana L. Robert, “St. Patrick and Bernard Mizeki: Missionary Saints and the Creation of Christian Communities,” Yale Divinity School Library Occasional Publication No. 19 (Yale Divinity School, 2005).
2. Some scholars believe that communal memories of him were an amalgamation of two different missionaries.
3. Tomas Cardinal O Fiaich, “The Beginnings of Christianity,” in T.W. Moody and F.X. Martin, eds., The Course of Irish History, rev. ed. (Mercier Press, 1994), p. 61.
4. Peter Brown, The Rise of Western Christendom: Triumph and Diversity, 200-1000 A.D. (Blackwell, 1996), p. 83.
5. The Confession of St. Patrick, translated from Latin by Ludwig Bieler, p. 3.
6. Ibid., p. 4.
7. Ibid., p. 11.
8. Ibid., pp. 2, 4.
9. Patrick, Letter to Coroticus, translated from Latin by Ludwig Bieler, par. 1. http://www.irishchristian.com/stpatrick/CoroticusFrame.htm
10. Confession, p. 6.
11. Letter to Coroticus, par. 5.
12. Ibid., par. 8.
13. Ibid., par. 15, 16.
14. Ibid., par. 17.
Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.
E. John Walford
In 1963, shortly after graduating from an elite English boarding school, I went to Florence, Italy, where I stayed for about six months. I enrolled in the then-well-known drawing school, Studio Simi, which was a gathering place for those interested in the visual arts, as well as those seeking professional art education. Our daily regimen was to draw in the mornings—from both classical casts and live models—visit museums and churches in the afternoons, and travel to other cities on weekends. We were lodged in an old-fashioned but elegantly furnished pensione, called the Mona Lisa. Today, we might categorize this interlude as a “gap year” experience, or scrape up some credit for it in the guise of a study-abroad program. But on the terms that I did it, it could also be seen as occurring on the tail end of a much older English tradition, that of the young gentleman on his Grand Tour, rounding out his education.
This is not an incidental matter, because it goes to the heart of why one was there, what one sought to acquire, and most of all, how one perceived the art one was exposed to. I believe that the only book packed in my suitcase, before leaving England, was a copy of Jacob Burckhardt’s The Civilization of the Renaissance in Italy (first published in 1860, and then still a “must read”). To this I soon added a Random House edition of G.F. Young’s The Medici, first published in 1930. It is worth pausing to consider just what kind of induction to adult life this was providing. The funding for this trip had come from my maternal grandfather, a prominent Anglo-Greek merchant banker, whose grand London house was modeled on the Florentine Palazzo Strozzi, built by the Medici’s rivals. At the time, I did not fully realize the weight of symbolism embedded in these circumstances. Yet, here I was in Florence, with Burckhardt as my guide and the Medici—and my grandfather—as role models! What I did understand was that here before me were the approved models of “good taste” and inspiration for cultivating the patrician lifestyle. That we were looking at a Madonna and Child by Fra Filippo Lippi, a Venus by Sandro Botticelli, and a John the Baptist by Andrea del Sarto was paramount—that is, in terms of their power to instill “good taste.” That one was made for a church altar, the other for a private bedroom, and that the third was ambiguous enough to pass in either context, was incidental. For many, even in our less elitist society, it still is.
Thus, scholars and laypersons alike typically consider such works under the rubric of Italian Renaissance art, admire them for their associations with the emergence of a secularized humanistic culture, from which derives our own, and study them predominantly in terms of stylistic evolution. Art historical writings and university courses have long presented Italian Renaissance art in this way. For instance, James Beck, in one standard survey,1 constructs a three-generational model by which to understand the evolution of style in Italian Renaissance art, dividing each generation into a lyric and a monumental current. In the twelve pages dedicated to Fra Filippo Lippi, Lippi is placed in the monumental current of the first generation, after Masaccio, and no mention is made of the religious significance of any of his works, although most were made to serve religious functions. Beck attends exclusively to formal matters, such as figure construction and the treatment of space, light, and draperies, as well as consideration of influences on the artist and the evolution of his style.
Frederick Hartt, in his widely acclaimed survey of Italian Renaissance art,2 sought to present individual works of art in the context of contemporary history, show how they fulfilled specific needs, and identify their intended meaning. Nevertheless, in the seven pages dedicated to Fra Filippo Lippi, the prime foci of interest are the artist’s influences, stylistic development, and influence on later artists. Only one painting, the Madonna Adoring Her Child, from the late 1450s, painted for the Medici Palace chapel, is discussed in any detail in terms of its iconography: the sources and significance of its specific penitential imagery. Hartt, tacitly acknowledging Lippi’s reputation for immorality, also comments on the odd choice of artist for a penitential subject. This, he supposes, Lippi accepted out of necessity. Hartt thus hints at, but does not expand on, a pragmatic element in the exchange between a Renaissance artist and his worldly patron.
The discourse exemplified by Beck, Hartt, and others like them employs a methodology of stylistic analysis for which a detached objectivity has been tacitly claimed. It has, however, also been driven by humanist assumptions about Renaissance art and culture. Admired as the fountainhead from which modernity springs, Renaissance art is seen as manifesting the best of human endeavor when first liberated from the grip of medieval religion. It is at once more rational and less mystical, because no longer corrupted by the hocus pocus of religious superstition. It is not difficult, therefore, to see how and why a Marxist critique has exposed the alleged objectivity of stylistic analysis masking ideological agendas. After all, who—other than privileged white males and their decadent offspring—has either the time or money to bother themselves with the study of form and the contemplation of beauty? Besides, it takes but a second look to realize that such art was inevitably determined in myriad ways by its context—social, economic, political, and religious alike. Thus, over the last decades, revisionist art historians have been looking more closely at the context, function, and meaning of art—in the process often downplaying formal analysis and matters of style, but ideally integrating these concerns with the newer ones.
As soon as context, function, and meaning are taken seriously with respect to Italian Renaissance art, one is obliged to consider the relationship of art and religion, since most Renaissance art served religious functions of one sort or another. In the case of Fra Filippo Lippi, the need is all the more trenchant, since he was a Carmelite friar and a beneficed priest as well as a maker of religious images. These are the assumptions underlying Megan Holmes’ scrupulously researched study, Fra Filippo Lippi: The Carmelite Painter. Her focus is the religious context of Lippi’s life and art, and the range of meanings communicated through it. She notes especially that Lippi—as a friar-painter working in Florence at a time (the 1430s–1450s) when Florentine society was undergoing significant political, religious, and economic transition—was well-placed to explore critical issues within religious representation. Indeed, as she points out, at precisely that time, “religious art was the principal site where new artistic conceptions and technologies came into collision with traditional practices and values,” with two monastic artists—Fra Filippo and Fra Angelico (an Observant Dominican)—the leading protagonists of innovation.
The collision she refers to is that between the new, “Albertian” conception of pictorial space—the picture as window on the world, as seen beyond a rectangular frame—and the traditional gold-ground, pinnacled, polyptych altarpiece. In traditional altarpieces, Holmes explains, the mode of presenting the sacred was conceived in terms that were grounded, in the words of one commentator, Georges Didi-Huberman,3 in a “pictorial practice of nonverisimilitude, in opposition to every poetics or rhetoric of verisimilitude, … a pictorial practice of dissemblance.” Considered as material aids to contemplation of the sacred, the compartmentalized elements of a polyptych, while together constituting a formal and symbolic entity, were not presented as a continuous whole, but rather as segregated parts, divided up according to a sacred hierarchy of meanings and values. These images existed as signs of the sacred, visible within a mundane setting yet not continuous with it.
By contrast, early in the quattrocento (the 15th century), a new mode of vision and representation developed in the artistic practice of Florentine artists such as Brunelleschi, Donatello, Masaccio, and Ghiberti. Alberti, in De pictura, written in 1434, articulated the conceptual assumptions and practical means for realizing this new mode of vision and representation. His basic premise was that a painting should be conceived as being like a window, looking onto a tangible space. Alberti also conceived of a painting as a cross-section through the base of a “visual pyramid” formed by the visual rays traveling between the eye (at the apex of the pyramid) and the objects of vision (framed at the base). As Holmes concludes, “A painting could thus display a proportioned view of objective reality equivalent to the image transmitted by the visual rays to the eye.” Such a system of representation presupposes unity in the field of vision, verisimilitude, and continuity—absolutely contrary values to those embedded in a medieval, polyptych altarpiece. To engage the new visual paradigm, in this context, was necessarily to disrupt existing religious assumptions and practices.
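For readers who like to see the geometry spelled out, Alberti’s window can be put in modern notation (my gloss, assuming the standard modern formalization of central projection rather than Alberti’s own terms). Place the eye at the origin, let the picture plane stand at distance $d$ along the line of sight, and let a point in the scene sit at coordinates $(x, y, z)$, with $z$ its depth. The visual ray through that point meets the plane at

\[
x' = \frac{d\,x}{z}, \qquad y' = \frac{d\,y}{z},
\]

so painted size falls off in proportion to distance. That inverse relation is what underwrites the “proportioned view of objective reality” Holmes describes, and it presupposes exactly the unified, continuous visual field that the gold-ground polyptych refused.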
Much has been written that pertains to this transition. Holmes, though, disregards these resources and proceeds directly to analyze how Lippi worked through the representational problems implicit in the new pictorial form. She tracks his exploration of the paradox underlying the conceptualization of religious representations in terms that were coextensive with actual profane settings, and how he renders Christian mysteries and sacred figures with a greater degree of naturalism. In so doing, she claims, Lippi—and Fra Angelico—found “ingenious means of potentially intensifying the religious experience of the devotee.” However, she does not attempt to explore what this new visual language might imply in theological terms, and thus what type of religious sensibility or understanding was being substituted for the older, medieval one. Rather, she concludes, with respect to his artistic practice: “Lippi’s paintings embodied artistic values that had currency within progressive cultural circles, influenced by humanists’ redefinition of the arts and fostered by a patrician elite with a vested interest in fashioning a new image of themselves and their city through artistic patronage.”
Does this mean that Holmes lost sight of her stated focus on the religious context of Lippi’s life and art? Not at all. What she explores are the institutional structures and social practices of Florentine religious life, in which various parties have a vested interest in commissioning a range of artistic commodities. These demands comprise the artist’s performative context. In Lippi’s case, she also seeks to uncover the bearing that his institutional religious affiliations as a friar-artist had on his artistic practice, regardless of his personal convictions, or lack thereof. In this respect, it should be borne in mind that Lippi was committed to the Florentine Carmelite monastery as an orphaned child, aged eight, long before he was in a position to make a deliberate choice for himself.
While Holmes is not so convincing in adducing the specifically Carmelite influence in either his pictorial style or his iconography, which can be demonstrated as having a currency far beyond this specific order, she does nevertheless shed considerable light on how Lippi negotiated the dual worlds of religion and art. Most of all, she demonstrates how art and artists, even within the domain of Florentine religious art, were implicated in the wider negotiation of social and political power. Thus, in one of the most lucid chapters of what is a well-constructed, carefully researched, and beautifully illustrated book, Holmes focuses on a single work of Lippi’s, his Madonna and Child with Sts. Francis, Damian, Cosmas, and Anthony of Padua (Florence, Uffizi). Commissioned by Cosimo de’ Medici, it was painted about 1440–45 for the novitiate chapel of the monastery of Santa Croce, Florence. Holmes carefully considers this seminal work from four successive points of view: that of the patron, Cosimo de’ Medici; that of the Conventual Franciscan order, for whose monastery it was made; that of the novices, who would have it as the visual focus of their daily devotions; and that of the artist. Within this context, Lippi had to negotiate the multifarious and conflicting demands of these other parties. Yet he also capitalized on this opportunity to advance his own vision of how to construct and present the diverse symbolic meanings called for, in ways consistent with a new premium placed on naturalism within quattrocento art.
Through her chosen methodology, Holmes clearly offers the reader far more than stylistic analysis by providing substantial contextual material on matters of religious practice, art patronage, and the working conditions of a Renaissance artist such as Lippi. Indeed, she integrates the analysis of artistic form into the fabric of these contexts, extrapolating the bearing of the context, content, and form of Lippi’s art on one another. This makes for both engaging and illuminating reading in terms of understanding art in relation to the institutional functioning of Florentine religious life. Holmes, however, does not allow herself to speculate too far on what the new forms of art, as used by Lippi and others, offered in place of the older medieval practice, in theological terms, as visual expressions of contemporary beliefs. Nevertheless, the type of carefully grounded historical research provided by Holmes, as extended across the spectrum of the religious art of the Italian Renaissance, would provide the necessary groundwork for a further study, such as that of Hans Belting.4 Especially with respect to the medieval period, Belting has shown how to take account of the reception of religious imagery, effectively demonstrating how art mediates belief. In this respect, the images themselves, if their communicative power is to be taken seriously, provide the most potent historical documents of all. We need therefore to learn, over again, how to approach them and view them in context.
E. John Walford is professor of art history at Wheaton College and author, most recently, of Great Themes in Art (Prentice-Hall, 2002).
1. James Beck, Italian Renaissance Painting (Harper & Row, 1981).
2. Frederick Hartt, History of Italian Renaissance Art: Painting—Sculpture—Architecture, 1969, 4th ed., revised by David G. Wilkins (Abrams, 1994).
3. Georges Didi-Huberman, Fra Angelico: Dissemblance et Figuration (Paris, 1990); English edition, Dissemblance and Figuration (Univ. of Chicago Press, 1995), p. 45, as cited by Holmes, p. 121.
4. Hans Belting, Likeness and Presence: A History of the Image before the Era of Art (Univ. of Chicago Press, 1994).
Copyright © 2007 by the author or Christianity Today/Books & Culture magazine.
C. Stephen Evans
The hermeneutics of suspicion.
Kierkegaard has not been well served by his English-language biographers. Walter Lowrie wrote two early biographies, one immense and full of long quotations from Kierkegaard’s as yet untranslated works, and later the much-read and much-loved A Short Life of Kierkegaard. Though Lowrie was not completely uncritical (as a cleric he could hardly fully endorse Kierkegaard’s later attack on the church), his works are today often dismissed, too hastily in my view, as hagiography, since he certainly loved Kierkegaard and generally puts the best face possible on the famous episodes in Kierkegaard’s life. Josiah Thompson swung to the other extreme in his biography The Lonely Labyrinth, debunking many of Lowrie’s views and generally viewing with suspicion almost every claim Kierkegaard made about himself and his own work. (Thompson’s suspicious nature ran deep; after writing his work on Kierkegaard he left academe and became a private investigator, author of Gumshoe: Reflections in a Private Eye, and a prominent controversialist about the assassination of John F. Kennedy.)
It is therefore noteworthy that two new biographies have appeared in the last several years: Alastair Hannay’s Kierkegaard: A Biography, and Joakim Garff’s much-praised Søren Kierkegaard: A Biography, translated from the Danish. Both books are the result of many years of work on Kierkegaard, and both have much to offer the reader. Neither, however, will supplant Walter Lowrie as my first recommendation for someone interested in Kierkegaard’s life. Kierkegaard is still waiting for his ideal biographer.
Hannay, a British philosopher born to Scottish parents, spent most of his career teaching in Norway. He was a pioneer in studying Kierkegaard using the tools of analytical philosophy, author or editor of several important works, and has done a series of readable translations of Kierkegaard for Penguin Books. The current book aspires to be, in Hannay’s own words, an “intellectual biography,” one that looks to the life to help us understand the works and to the works for help in understanding the life. The twin focus gives Hannay’s work a lot of its strength; each work of Kierkegaard that is discussed appears in the context of Kierkegaard’s own personal struggles, and Kierkegaard’s life does offer new angles for understanding those works. Hannay is generally careful to avoid the fallacy of assuming that the biographical context of a work exhausts its meaning, and he is certainly knowledgeable about both the works and the life.
Nevertheless, I found Hannay’s book unsatisfying, for two odd reasons. The first stems from one of Hannay’s virtues as a philosopher: his ability to see complexity and nuance. This philosophical strength, however, leads to a weakness in the book: Hannay poses multiple possibilities for understanding the episodes of Kierkegaard’s life but often finds himself paralyzed when he considers them. Too often the reader is left guessing where Hannay stands himself. One might think that this is a virtue, since Hannay is granting the reader the freedom to make his or her own decisions about the subject, but given that the reader is unlikely to know as much as Hannay himself, the indecisiveness of the author tends to be conveyed to the reader. Frequently the reader discovers that “it is possible that Kierkegaard thought such and such” or that “Kierkegaard might have been motivated by this or that” or simply that “in the end it is unclear” what Kierkegaard was up to. Perhaps Hannay here simply reflects the undecidability that inheres in actual human beings, but it leads to frustration for the reader who longs to know Kierkegaard better, or who at least longs for a vigorous portrait from Hannay with which to interact.
The second problem is that, despite Hannay’s philosophical gifts, and the inordinate amount of space given (in a biography) to the interpretation of Kierkegaard’s works, I found Hannay not always reliable in his judgments about those works. The unreliability sometimes shows itself in simple factual mistakes, such as his claim (p. 174) that Kierkegaard “never once owned up publicly or even privately” to having written the pseudonymous Either/Or. (I find it unfathomable how a writer as knowledgeable as Hannay could ignore the “First and Last Declaration” that Kierkegaard appended to Concluding Unscientific Postscript, in which he takes legal and literary responsibility for all his pseudonymous works, even while asking his reader to recognize the distinction between the views of the pseudonymous “character authors” he has created and his own personal views.) At times Hannay gives what I would call serious misreadings of Kierkegaard’s texts, and here, in contrast to the restraint he often shows as historian, he tends to make over-confident judgments about matters that are at best controversial.
I made a long list of such instances, but here I can only cite a couple of examples. On p. 387, Hannay considers Kierkegaard’s own claim that his writings served the purposes of “Governance” or divine providence, and asks whether it might be true that Kierkegaard was serving God’s purposes: “Surely not. The very idea of God transcends purpose, and thus prudence and imprudence too.” This mysterious theological edict is offered without justification or explanation, even though if taken seriously it makes the idea of divine providence impossible. A second example can be found on p. 361, where Hannay discusses Kierkegaard’s penetrating discussion of love in Works of Love, and, much to my astonishment, judges that Kierkegaard, like Nietzsche, places little value on pity and compassion: “in the struggle out of which Kierkegaard’s individual emerges there is, as we saw, a hardening against the pity one is disposed to feel for human suffering … .” From my perspective, this is the exact opposite of the truth.
My own hunch is that many of Hannay’s misjudgments arise from a fundamental lack of sympathy for Kierkegaard’s Christian faith. Hannay himself was a signatory to the 1980 “A Secular Humanist Declaration,” and though, unlike some commentators, he is aware of Kierkegaard’s Christian faith and its importance, I find he often views issues connected to that faith through the wrong end of the telescope.
Joakim Garff’s Søren Kierkegaard: A Biography is an altogether different kind of book. While even longer than Hannay’s work, Garff’s biography sparkles from a literary perspective. (This may be partly to the credit of Bruce Kirmmse’s excellent translation.) Though Garff spends almost as much time discussing Kierkegaard’s works as Hannay does, Garff never loses sight of the story. He knows how to tell a tale, and while certainly long, the book is always a good read. Indeed, much of Garff’s biography reads like a novel.
According to Danish historian Peter Tudvad, Garff’s work is too much like a novel; that is, it plays fast and loose with the facts. Although Garff’s work has received numerous awards, including the prestigious Brandes Award in Denmark, and though the English-language version has been extravagantly praised by such notables as John Updike, Tudvad, author of Kierkegaards København (Kierkegaard’s Copenhagen) and an expert on the period, has shown that Garff’s book is riddled with mistakes. Many of Tudvad’s findings have been communicated to English-speaking readers by the philosopher M. G. Piety.1 (Everything I say about this below is derived from Piety’s articles, though I first heard about the controversy when a Danish friend sent me a newspaper article from Copenhagen. Moreover, I am referring to the first English edition of Garff’s book, published in 2005; a revised paperback edition, not yet available at the time of this writing, is promised.)
Many of the mistakes are unimportant, but overall the errors show a clear bias: Garff consistently interprets Kierkegaard in a suspicious manner, putting a negative spin on most of the crucial episodes. Here is one example. Garff wants to portray Kierkegaard as a self-indulgent man who lived luxuriously; thus, in a section called “A Dandy on a Pilgrimage,” he claims that during Kierkegaard’s trip to the ancestral family home in Jutland he was accompanied by “his servant, Anders Westergaard” (p. 154). The problem is that Westergaard, who later was employed by Kierkegaard, was actually a soldier during this period and could not have accompanied Kierkegaard on the journey.
A similar mistake occurs later in the book when Garff transforms one Frederik Christian Strube into another one of Kierkegaard’s “servants.” The problem, according to Tudvad, is that Strube was actually a journeyman carpenter, and would have been required by law to work at his job 12 hours a day for six days a week, leaving little time for domestic service. Kierkegaard allowed Strube, who was mentally disturbed, along with his family, to live with him for three and a half years, professing concern for Strube’s mental health, a concern which Garff ridicules. In the same vein Garff, relying on a previous author who painted Kierkegaard as a man who lived extravagantly while giving little to charity, describes Kierkegaard as having little concern for the poor. In reality there are no reliable records to tell us what Kierkegaard gave to charity. (Tudvad reveals that a figure often cited as the total amount Kierkegaard gave to charity in a particular year is in reality the amount given by Kierkegaard’s servant.)
Tudvad argues that the mistakes point to a more fundamental problem: Although Garff clearly knows Kierkegaard’s writings and the secondary literature about Kierkegaard extremely well, he relies on that secondary literature in a quite uncritical way, thus perpetuating many of the myths that have developed around Kierkegaard over the years. Even worse, according to Tudvad, Garff’s reliance on secondary sources sometimes descends to the level of plagiarism: he quotes other authors virtually verbatim without attribution and borrows original theories and ideas (such as Carl Saggau’s theory that Kierkegaard’s father believed himself to be afflicted with syphilis), again without citing his sources.
An amusing example occurs when Garff, copying without attribution from Jørgen Bukdahl, claims that there were rumors that a Danish religious figure, J. C. Lindberg, “was to be incarcerated and executed (Danish henrettes) on Christiansø, a notorious prison island” (p. 33). What Bukdahl actually wrote was that Lindberg was to be incarcerated and “exiled” (Danish hensættes). Evidently, Garff miscopied or misread his own notes. Sadly, for reasons I will speculate about later, Tudvad’s exposure of Garff’s errors has created more difficulties for Tudvad than for Garff, leading eventually to Tudvad’s resignation from his position at the Kierkegaard Research Centre. (Still, the promise of a revised edition with substantial corrections suggests that Tudvad’s work has not been in vain.)
Serious as the problems Tudvad has raised are, they by no means exhaust the flaws in Garff’s work. As a general rule, Kierkegaard commentators can be divided into those who regard Kierkegaard’s own The Point of View for My Work as an Author as reliable testimony and those who, like Garff, regard The Point of View as self-serving fiction (p. 562). In The Point of View Kierkegaard claims that his central purpose has been to “reintroduce Christianity into Christendom” and that he was “from first to last a religious author.” It is true, and Kierkegaard himself affirms this clearly, that he did not have a clear plan for his whole “authorship” when he began writing, but as he wrote, he himself was “educated” by “divine governance.” This admission, however, by no means implies that his account is untrue; every author knows that one’s intentions in a writing project change as the project unfolds, and it is thus quite conceivable that Kierkegaard’s account is true.
Garff pours sophisticated scorn on Kierkegaard’s account, offering in its place a psychologized version: here Kierkegaard is seen as crippled by an abusive father, incapable of normal relations with women, and hungry for literary fame. Garff offers us a Kierkegaard whose writings served as a kind of therapy, or perhaps psychological defense, against his pathological guilt and depression.
There is truth in all of these charges. Kierkegaard, like his father and like all the rest of us, was a flawed human being. (Though not nearly so flawed as Garff makes him out to be.) The question is whether this all-too-human story is the whole story, or even the most important part of the story.
Throughout my career I have written about Kierkegaard as a prophetic figure who had something important to say both to the secular world and to the Church. To the secular, intellectual world Kierkegaard presents a powerful case that the decline of faith among European intellectuals is not rooted in intellectual problems or the growth of scientific knowledge but in diminished imaginative power and the loss of an emotional grasp of what it means to exist as a human being. To the Church, Kierkegaard presents a powerful protest against “Christendom,” the domestication of Christian faith by the equation of faith with human culture. The power of these two messages is evidenced by the continued fascination with Kierkegaard’s writings among Christians and non-Christians alike.
However, the power is also evidenced by the lengths to which both of Kierkegaard’s polemical targets sometimes go to obscure or eviscerate his message. Sadly, Hannay and Garff, though both have spent much of their lives studying and writing about Kierkegaard, may be examples. As a secular philosopher, Hannay really does not take Kierkegaard’s Christian challenge seriously. Perhaps this is what one should expect, given Kierkegaard’s own claim that people who are not themselves gripped by the passion of faith will find Christianity offensive.
Garff, on the other hand, could be seen as a representative of the Danish establishment, ensconced in the Christendom Kierkegaard attacked so vigorously. Although his writings about Kierkegaard might suggest that Garff is a professor of literary criticism, he is in fact a trained theologian, having graduated from the pastoral seminary in Denmark; he is also a product of a distinguished Danish family. It is not surprising, then, that Garff finds ways of making Kierkegaard’s protest against the Danish establishment the expression of a sick mind and sickly personality. I do not claim this actually explains Garff’s motivations; I of course cannot really know what his motives are. But it is a possibility that will at least occur to anyone who knows the history of Kierkegaard’s reception in his native Denmark. Nor, sadly, is it surprising that Peter Tudvad, in daring to challenge Garff, has suffered for the same kinds of reasons that Kierkegaard himself suffered, both during his lifetime and posthumously.
If someone takes Kierkegaard’s testimony in The Point of View as credible, is not that person in danger of being duped, if Kierkegaard is, as Garff claims, fictionalizing his life and works? Is it not safer to take the critical, suspicious road that Garff himself travels? Kierkegaard himself addresses this question in Works of Love in some reflections on the Pauline claim in 1 Corinthians 13 that “love believes all things.” In this section of the book, he argues that a loving person and a mistrustful person may have the same knowledge about a given individual, but they draw different conclusions from what they know, the loving person always choosing to interpret the individual in the best possible light. The mistrustful person regards this as gullible foolishness, an invitation to be deceived. Yet there are many ways of being deceived. To allow one’s suspicion and mistrust to cheat one out of love is to be deceived in the most terrible way about the most important thing in life. The lover who believes in another may be deceived about some finite, temporal event, but has a sure grasp on the most fundamental truth.
Those who are unashamed to be described as lovers of Kierkegaard may take some comfort from these thoughts. Of course they forfeit the status of being shrewd, superior beings, who have seen through Kierkegaard’s web of deception. But perhaps they partly escape the fate of those people that Johannes de Silentio, the pseudonymous author of Kierkegaard’s Fear and Trembling, calls “associate professors,” whose “task in life is to judge the great men.” The lives of these judges display a “curious mixture of arrogance and wretchedness—arrogance because they feel called upon to pass judgment, wretchedness because they do not feel their lives are even remotely related to those of the great.”2
I confess that—as a professor—I feel the sting in those Kierkegaardian words. I have my disagreements with Kierkegaard, and there are episodes in his life—particularly the broken engagement to Regine—that I find distressing. I realize that Kierkegaard’s motives were doubtless mixed. Still I believe that Kierkegaard struggled hard to be honest with himself, with God, and with his readers. His claims in The Point of View that his authorship centers around his vocation as a Christian seem right to me, not just because he makes them, but because they make sense of the writings in a way that no other view does. And this leads me to humbly confess my love and appreciation for a man whose greatness will withstand the work of biographers and commentators alike.
C. Stephen Evans is University Professor of Philosophy and the Humanities at Baylor University. Among his recent publications are Kierkegaard’s Ethic of Love: Divine Commands and Moral Obligations (Oxford Univ. Press), Kierkegaard on Faith and the Self: Collected Essays (Baylor Univ. Press), and an edition of Kierkegaard’s Fear and Trembling, coedited with Sylvia Walsh (Cambridge Univ. Press).
1. A summary of some of Tudvad’s findings can be found in the Danish publication Faklen: http://www.faklen.dk/artikler/tudvad04-01.php. An English summary/translation by M. G. Piety is available at www.faklen.dk/english/eng-tudvad07-01.php. Piety has written several articles recounting and defending Tudvad’s findings. The examples I provide are taken from “Some Reflections on Academic Ethics,” ASK, The Journal of the College of Arts and Sciences at Drexel University, September 2005, and “Who’s Søren Now?” in The Philosophers’ Magazine, Vol. 31, 2005. (The latter is available online but only by subscription.)
2. Søren Kierkegaard, Fear and Trembling, ed. by C. Stephen Evans and Sylvia Walsh, trans. Sylvia Walsh (Cambridge Univ. Press, 2006), p. 55.
Allen C. Guelzo
The real Ben Franklin.
At the outbreak of the Revolution, no American name was better known than Benjamin Franklin’s. The remarkable thing is that, three hundred years after Franklin’s birth, this is still very close to being true. Although George Washington and Thomas Jefferson have, since 1776, nudged Franklin to the side of America’s 18th-century pedestal, they have not pushed him off. Not by any means. Washington is still respected, but respected only in the studied way one acknowledges the noblest of the noble Romans; Jefferson is still the wizard of revolutionary words, but his gangly nerdiness and gigantic moral lapses have also reduced our affection for him to politeness. Franklin, however, is loved in the bubbly and uninhibited manner one loves a rascally and doting favorite uncle. In my own city of Philadelphia, Franklin easily eclipses the city’s earnest and pious founder, William Penn (to the point where the statue of Penn atop City Hall is frequently mis-identified as Franklin), even though Franklin was born in Boston, had precious little in the way of piety, and actually spent most of his life after 1764 in London and Paris rather than Philadelphia.
Benjamin Franklin's Printing Network: Disseminating Virtue in Early America (Volume 1)
Ralph Frasca (Author)
University of Missouri Press
312 pages
$69.90
Not that this is undeserved. Franklin arrived in Philadelphia in 1723 as a penniless printer’s apprentice—or, more accurately, as a fugitive from an apprenticeship agreement with his older brother—and rose to an unprecedented level of wealth and celebrity when rising beyond one’s class was still considered an abnormality in nature. He established what became the best-read newspaper in British North America (the Pennsylvania Gazette), dominated the almanac market (which was no small market in an overwhelmingly agricultural society) with his hilariously irreverent Poor Richard’s Almanac, and by 1748 was able to retire from active management of his printing business and turn his attention to the two subjects which most interested him: establishing his social credentials as an English gentleman, and gaining a foothold in the world of Enlightenment science through an ingenious series of experiments with electricity. He could afford it, too. In the 1750s, Franklin’s annual income from the print-shop partnership, rental properties, patronage appointments, and sales brokering (including the sale of slaves) amounted to nearly $2,000, when George Washington’s per annum income from Mt. Vernon was only $300 and the governor of Pennsylvania had to go cap-in-hand to the provincial Assembly for his annual salary of $1,000.1 Anyone who can imagine Rupert Murdoch with a Nobel laureate’s passion for nuclear physics has a fair idea of Franklin’s profile in the mid-18th century.
Electricity was no humdrum subject in the 18th century. The scientific revolution of the 1600s had begun by locating the movement of objects in forces exerted on objects, rather than in the moral qualities of the objects themselves, and it proceeded from there to itemize such forces as could be identified and harnessed for human enjoyment and profit. But electricity remained one of the most baffling and random of these forces until the mid-18th century, when controlled experiments with Leyden jars and conducting materials made the production of electricity less of a mystery. Franklin’s great contribution, in his Experiments and Observations on Electricity (1751), was to demonstrate that lightning is, in effect, simply a gigantic electrical spark, and it made him famous and honored (he was awarded an M.A. from Harvard and Yale and an LL.D. from St. Andrews). “Nothing,” wrote Joseph Priestley, “was ever written upon the subject of electricity which was more generally read, and admired in all parts of Europe.”2
Franklin was inevitably pulled into colonial Pennsylvania politics, and then into the administrative apparatus of the British empire through his appointment as deputy postmaster-general for the North American colonies, before finally moving to London as agent for the interests of Pennsylvania (and, in time, three other colonies) before Parliament. In 1771, he began composing an autobiography which, with its rollicking, pragmatic account of how the son of a tallow chandler had out-foxed the colonial stuffed-shirts and laughed his way to success and personal triumph, became the premier document of American individualism. In October, 1776, he was sent by the American revolutionary government to France to secure French support. Shrewdly setting aside his reputation as the premier American scientist, he cast himself instead as a charming practical philosopher in homespun and a coonskin cap rather than a periwig. The French court, far from being put off by this refreshing display of American simplicity, swooned over Franklin in delight, and an alliance with the Americans was soon in the offing. He lived long enough to sit in the Constitutional Convention, and, only a few months before he died in 1790, submitted a petition on behalf of the Pennsylvania Anti-Slavery Society to Congress, calling for the abolition of slavery. He could hardly have lived a more charmed life than if he had written the script himself.
In one sense, he did write the script, since his posthumously-published Autobiography is the means by which successive generations of Americans have most often come into contact with Franklin. But alongside the bubbling stream of the Autobiography, there has long run a darker current of unease with the Franklin which Franklin portrayed there. Max Weber and D. H. Lawrence turned on Franklin as the original Babbitt: a shallow, self-promoting bourgeois merchant with an eye forever cocked for the main chance, “a wonderful little snuff-colored figure, so admirable, so clever, a little pathetic, and somewhere, ridiculous and detestable.” The late Francis Jennings, as acerbic a historian of early America as any who ever wrote, dismissed the Autobiography for being “about as valid as a campaign speech.” In 1975, Melvin Buxbaum published a study of Franklin’s relationships with the churches of the Pennsylvania colony, and found, not the beaming, tolerant Deist of the Autobiography, but a partisan heretic with a sharp knife for “injuring the Calvinist Establishment” of the middle colonies. And in 1987, a special issue of the Pennsylvania Magazine of History and Biography offered a “re-assessment” of Franklin which emphasized his “dark side”—his pessimism about human nature, his ethnic intolerance, and his Hobbesian notion of society.3
And yet, whatever the damage revelations like these have done to Washington and Jefferson, Franklin seems impervious. John Adams, who cordially hated Franklin, wailed that “The History of our Revolution” would soon become “one continued Lye from one end to the other” in which “Dr. Franklins electrical rod, smote the Earth and out sprung General Washington … and thence forward these two conducted all the Policy, Negotiations, Legislatures and War.” And so it was, to Adams’ unrequited fury. Adoring biographies—by H.W. Brands, Walter Isaacson, Edmund S. Morgan, and Gordon Wood, to which may be added Leo Lemay’s two-volume The Life of Benjamin Franklin (2005) and Joyce Chaplin’s The First Scientific American: Benjamin Franklin and the Pursuit of Genius (2006), to mention only the most recent—are but the tip of the celebratory iceberg. Franklin has become part of the dual personae of our culture, the secular, rational alter ego to Jonathan Edwards and the Puritan soul of America. Bruce Kuklick once remarked that Franklin was important to studies of the American mind, not so much for his intellectual value, as for the opportunity he affords modern scholars, possessed by the spirit of pragmatism and by a “presentist bias” toward “thought that foreshadows the non-religious values” of the professoriate, to ignore the larger currents of religious and speculative thought in early America.4 No wonder the Benjamin Franklin tercentenary website asks “Do You See Yourself in Franklin?” It is the dream of every secularist that we will.
Curiously, what few people have bothered to ask is how Franklin climbed his way up to the niche he occupied in British North America in the first place. The answer is simple and straightforward: Franklin had a good head for business. For a long time, our knowledge of Franklin’s printing business was picked up in incidental pieces from larger biographies, as the authors hurried to get to the excitement of the electricity experiments or Franklin’s role in the Revolution. In so doing, they neglected to notice how very revolutionary Franklin’s entrepreneurial innovations were. There was at least a glimmering of recognition of Franklin-as-entrepreneur a decade ago in Frank Lambert’s “Pedlar in Divinity”: George Whitefield and the Transatlantic Revivals, in which Lambert accounted for the success of Whitefield as a revivalist in terms of Whitefield’s use of Franklin’s printing network up and down the colonial seaboard to provide advance publicity for the Grand Itinerant.5 Ralph Frasca, however, has undertaken a detailed analysis of Franklin’s business strategy as a printer, and the result is a marvel, both of diligent research and of Franklin’s fertile and opportunistic imagination.
As a trade, printing in colonial America was a shop affair, in which a master printer, together with his journeymen and apprentices, published and sold (under one roof) a variety of printed fare: newspapers, books, sermons, pamphlets, broadsides, circulars, almanacs. It was back-breaking, eye-straining, dirty work. But it put printers at the nexus of every intellectual wind that blew in the 18th century, as printers competed with each other to print material that would sell. It was also a constricted sort of trade—until 1695 a licensing law regulating printers imposed both censorship and economic controls on printing, and even after the law was allowed to expire, colonial governors and legislatures frequently shut down printers who annoyed them. Even colonial governments which kept a looser rein on printers could still shut down an offending press, since government printing contracts were often the principal source of a printer’s income, and what the colonial governments gave, they could take away.
Three things made Franklin a success in Philadelphia. The first was his effervescent writing, punctuated with hoaxes and satires as well as news. The second was his determination to elevate printing to the level of moral instruction, as a vehicle for promoting “virtue.” Franklin’s rule for contributors was that “no Piece can properly be called good, and well written, which is void of any Tendency to benefit the Reader, either by improving his Virtue or his Knowledge,” and in his last years, he bitterly criticized his grandson, Benjamin Franklin Bache, for turning Bache’s newspaper, the Philadelphia Aurora, into an engine of anti-Federalist “Rancour, malice, and Hatred.” The third key to success was, virtue notwithstanding, the patronage he secured from the colonial assembly. And tenuous as patronage was, it was actually another assembly, with another patronage contract, that opened the way for Franklin to outflank his dependence on the Pennsylvania Assembly and all the other legislatures in the colonies.
In 1731, the South Carolina assembly, lacking a reliable printer in the colony to print its official records, invited Franklin to take up the South Carolina printing contract. Franklin was disinclined to leave Philadelphia. But it occurred to him that he might just as easily send his journeyman compositor, Thomas Whitmarsh, to South Carolina in his stead, complete with a press and type fonts. Effectively, this meant that Franklin would underwrite the start-up costs for a print shop in Charleston; in return, he would become Whitmarsh’s silent partner and receive one-third of the profits. Whitmarsh arrived in Charleston, only to find that two rival printers, newly arrived from England, had snatched the contract from him. But Franklin’s sponsorship turned out to be better patronage than the colony’s. Whitmarsh quickly set up a flourishing print-shop, selling and binding almanacs and sheets sent to him from Franklin, and commencing a sister publication to Franklin’s newspaper, the South-Carolina Gazette. When Whitmarsh suddenly died in 1733, Franklin rushed a new journeyman to Charleston, Lewis Timothy. For the next three decades, Timothy, his wife, and his son Peter ran the Charleston shop as the jewel in Franklin’s entrepreneurial crown. Franchise marketing had arrived in America.
From there, Franklin expanded his franchising operations to New York in 1742, to Newport, New Haven, and even Antigua in 1748. (In fact, by the 1750s, eight of the fifteen colonial newspapers came from shops part-owned by Franklin.) The pattern for these partnerships remained the same in each case: Franklin provided the up-front money and equipment; the on-site partner did the work, published a local Gazette, and sent a percentage of the profits to Franklin. Not all of these experiments turned out well. Franklin tried to start a German-language newspaper in Philadelphia and in Lancaster, but his paper was beaten out of the market by the “Palatine” immigrant, Christopher Sauer. In a moment of weakness, he put his nephew, Benjamin Mecom, in charge of the Antigua operation, only to discover that Mecom was an erratic and disturbed young man, and a flop as a businessman. Despite renewed efforts to set up print shops in New York, New Haven, and Boston—all of them funded by loans from Franklin—Mecom slowly spiraled downward toward a mental crack-up, and was eventually locked up in a New Jersey asylum, from which he disappeared in 1776.
On the other hand, having spread his business eggs into a number of different baskets, Franklin insured himself against the calamity that would otherwise have followed when one or another of the franchises failed. Moreover, his influence over the franchise shops came to his political rescue in the 1760s. Franklin, enjoying the good imperial life in London as a colonial agent, failed utterly to notice the explosive potential of the Stamp Act. Indeed, not only did he assure Parliament that the tax would meet with little discontent, he secured appointments as stamp agents for a number of friends. The frenzied outcry in the colonies against the Stamp Act caught Franklin with one foot in some very hot water; but he was able to extricate it, and repair the damage to his reputation in the colonies, by rallying his franchisees to his cause and publishing defensive essays in their newspapers.
One reason why Franklin’s entrepreneurship has remained so little-appreciated is that Franklin himself whited it out of the story of his life in the Autobiography. Franklin was born in an age when, as John Locke put it, “trade” was considered “wholly inconsistent with a gentleman’s calling,” and much as he enjoyed having trumped the manor-born, Franklin remained apprehensive of drawing too much attention to how dramatically he had stepped out of place. Certainly, in England, he was never allowed to forget that everything he was had been built on “trade”—his first letters on electricity to the Royal Society were ignored and then plagiarized by one of the Society’s officers, largely because Franklin had no aristocratic standing or influential relatives to punish the Society for its neglect. In 1774, as a further reminder that he would never be anything better than a provincial in the eyes of the Crown, Solicitor-General Alexander Wedderburn publicly humiliated Franklin in Whitehall as an “ungrateful, cunning upstart thing,” and had him fired from his postmaster’s job. (This event, more than any other, convinced Franklin to turn his back on George III’s government, a mistake for which that government would pay dearly.)
There was less penalty for jumping the social boundaries between classes in America, and after the Revolution, no penalty whatsoever. There was also less penalty for social climbing in the intellectual world of the Enlightenment. The scientific revolution inaugurated by Newton and Galileo and matured by Priestley and Lavoisier was dedicated to the abolition of artificial hierarchies, starting with the Great Chain of Being. All movement, without exception or privileged exemption, obeyed Nature and Nature’s laws. Starting from that premise, it was not at all difficult for the Enlightenment to develop a corresponding hostility to the artificial hierarchies of society and politics with which European aristocracies had paralyzed their economies. The Enlightenment’s riposte to aristocracy in politics was Lockean liberalism; in economics, it was Adam Smith’s capitalism; and both are the warp and woof of Franklin’s life.
Commercial capitalism has been so routinely disparaged for so long by American intellectuals that we have some difficulty crediting how very happily the Enlightenment embraced commercial capitalism as Nature’s own system of merit over against unearned aristocratic title. Gary Nash’s recent attempt to re-cast the American Revolution as a proletarian uprising, more concerned with “elementary political rights and social justice, rather than the protection of property and constitutional liberties,” misses utterly how genuinely revolutionary the protection of property and constitutional liberty was in a world of absolute autocrats and talentless courtiers.6 Precisely because the self-made man of commerce appeared to the philosophes as a manifestation of the operation of reason and nature, Voltaire sang an unashamed song of admiration for the calculating, dispassionate self-promotion of the bourgeoisie:
I don’t know which is the more useful to the state, a well-powdered lord who knows precisely what time the king gets up in the morning and what time he goes to bed, and who gives himself airs of grandeur while playing the role of slave in a minister’s antechamber, or a great merchant who enriches his country, sends orders from his office to Surat and to Cairo, and contributes to the well-being of the world.
No one in revolutionary America lived up to this reputation more than Franklin. It represents a fatal inversion of Franklin’s own expectations that no reputation today has lesser standing among the Revolution’s scholars than that of the “great merchant.” And none plays a smaller role in the modern-day marketing of Benjamin Franklin.
Allen C. Guelzo is Henry R. Luce Professor of the Civil War Era and director of the Civil War Era Studies program at Gettysburg College. He is at work on a book about the Lincoln-Douglas debates of 1858.
1. Gordon S. Wood, The Americanization of Benjamin Franklin (Knopf, 2004), p. 54.
2. Tom Tucker, Bolt of Fate: Benjamin Franklin and His Electric Kite Hoax (Public Affairs, 2003), p. 106.
3. Lawrence, “Benjamin Franklin,” English Review, Vol. 27 (December 1918), p. 405; Jennings, Benjamin Franklin, Politician: The Mask and the Man (Norton, 1996), p. 18; Ronald A. Bosco, “‘He That Best Understands the World, Least Likes It’: The Dark Side of Benjamin Franklin,” PMHB, Vol. 111 (October 1987), pp. 525-554; Buxbaum, Benjamin Franklin and the Zealous Presbyterians (Penn State Press, 1975), pp. 112-113.
4. Kuklick, Churchmen and Philosophers: From Jonathan Edwards to John Dewey (Yale Univ. Press, 1985), pp. xix-xx.
5. Lambert, “Pedlar in Divinity”: George Whitefield and the Transatlantic Revivals (Princeton Univ. Press, 1993), pp. 118-129.
6. Nash, The Unknown American Revolution: The Unruly Birth of Democracy and the Struggle to Create America (Viking, 2005), p. 94.
John H. McWhorter
Was Hitchcock a master in his use of music?
It’s no surprise that Jack Sullivan’s Hitchcock’s Music has gotten so much press. The title alone gets anyone thinking about the most searingly memorable wedding of image and music ever filmed. Do I need to specify? The violin shrieks as Janet Leigh is knifed to death in the shower in Psycho.
However, one could reasonably ask what Sullivan was going to fill out the pages of his book with besides that scene. Hitchcock fans may think of the eerie theremin on the soundtrack of Spellbound; the creeping triplet figure under the opening credits of Vertigo seems to have made a certain impression as well. But Sullivan is interested in more than these easy scores, as it were. Hitchcock’s Music argues that Alfred Hitchcock was, for a director, especially sensitive to music, and that throughout his oeuvre Hitchcock applied music in so studious a way as to render it a kind of character in itself.
Despite having always enjoyed Hitchcock, I had never been aware of this as a defining trait of his, and so I took Sullivan’s book as an occasion to watch no fewer than 30 of Hitchcock’s 50-odd films—that is, all of his films considered to have passed the test of time. (With the exception of The 39 Steps and The Lady Vanishes, I left out the twenty-and-change films he made in England before coming to America, most of which were forgettable programmers, with rather sparse music due to budgetary constraints, as Sullivan acknowledges.) And after approximately 72 hours—no, not continuous!—of soaking in Hitchcock’s fusions of story and sound, I judge Sullivan’s thesis to succeed only partly.
His argument reminds me of the historiography of Broadway musicals, in which a guiding theme is the emergence of the “integrated” musical: the songs propel the plot instead of stopping it short with snappy but narratively irrelevant “acts.” Often, the biographer of a Broadway composer or composer/lyricist team labors under the notion that his subject was uniquely committed to this trend toward dramatic integration: the Gershwin brothers with Strike Up the Band and Of Thee I Sing, Rodgers and Hart with early efforts like Chee-Chee, or later ones like On Your Toes with its narratively pertinent “Slaughter on Tenth Avenue” ballet, Jerome Kern and his collaborators with the Princess Theatre shows and Show Boat, and so on. Yet the truth is that between the wars, various composers converged upon the integrated musical idea, to such an extent that it became the default mode by the early Forties. Theatergoers during this period did not experience any one production as unprecedentedly integrated, regardless of modern chroniclers designating their favorite artists’ shows as transformational keystones.
In this vein, Sullivan’s descriptions of Hitchcock’s music often imply something uniquely insightful on Hitchcock’s part in techniques that were standard industry-wide. The techniques of timing, limning the ambiguity of characters’ inner thoughts, interweaving themes, and so on that Sullivan describes were less conceptions driven by Hitchcock than the results of a communal development of the art of cinematic music from a period when at first there was none.
Yes, none. In early talkies, there was a conventional sense that music had to be realistic within the bounds of the narrative, such that if a sentimental tone was required for a scene, someone would turn on a radio, or an orchestra playing at a nightclub would suddenly fall into a pretty ballad. The idea of disembodied music just playing in the background seemed odd at first—and it still is, if you think about it. One can imagine an alternate universe in which music in American films was still sparse and “literal,” just as in Europe, there never arose a tradition of animated film shorts featuring talking animals wearing gloves.
The Psycho scene is, to be sure, a quintessential culmination of the art of film music as it has become. Seeing it without music, as one can in a Bonus Features segment on the DVD, points this up exquisitely. But over the whole span of Hitchcock’s career, the music in his films, while always and utterly professional, would hardly have motivated a book-length study on its own merits.
Make no mistake, there are wonderful musical touches throughout Hitchcock’s output. The music for the titles of The Wrong Man is, on the surface, an anodyne bossa nova—but with a quiet touch of discord in one recurring stretch that tells you glum things are in store. In North by Northwest, the trenchant chords that stab the soundtrack whenever one of the pursued looks at Mount Rushmore, knowing that he or she can escape death only by rappelling down, are a perfect musical summation of acrophobia. In Suspicion, the “real” waltz music of an orchestra at a party propels Joan Fontaine and Cary Grant outside, gradually transforming into “abstract” ominous musings—certainly not what the orchestra inside is playing—as the characters come into conflict. Much of the soundtrack music for Marnie is plangently lush, which contrasts quizzically and, ultimately, bracingly with Tippi Hedren’s contained, icy protagonist. Similarly in Frenzy, set in London, the music for the opening and for many early exterior shots is a stately Elgaresque promenade, which clashes ironically with the grubby low-class goings-on that the film will depict, and thus becomes in its way a mocking narrator.
As it happens, three of those five examples, which I chose at random, turn out to be from scores that Bernard Herrmann wrote. What truly motivates a study of “Hitchcock’s music” is Herrmann’s work, which struck me half of my life ago when I first saw about a dozen of the films, even though I was then unaware of Herrmann’s reputation. Psycho, then, is naturally Herrmann; another useful exercise is to watch, on DVD, the opening of Torn Curtain with Herrmann’s scoring, and then with the music of John Addison, who replaced Herrmann after he and Hitchcock had a permanent falling out. With Herrmann’s music you know you’re in good hands: for Torn Curtain he had a clutch of twelve flutes twittering fiercely in dense, ominous harmonies over an orchestra that included eight double basses and nine trombones. With Addison’s syrupy score—when Paul Newman lets Julie Andrews know that he is not, as she supposed, a traitor to the United States, the orchestra points this up with purple surges from the string section—we might as well be watching Magnificent Obsession.
But Herrmann actually wrote only seven scores for Hitchcock, all in the late Fifties and early Sixties. That leaves a lot of films, and one problem Sullivan has to dance around is that, however professional or even artistic the scores of many of the earlier American ones were, Hitchcock himself often found them too schmaltzy. Rebecca was an example: the obsessive producer David Selznick insisted on almost wall-to-wall music, including rising strings to signify passion and so on. Hitchcock’s Forties scores are full of this kind of thing: as a kindly blind old man delivers a sentimental soliloquy in Saboteur, the orchestra creeps in with noble-sounding music; the strings surge and shimmer as Ingrid Bergman and Gregory Peck embrace in Spellbound. Such scoring was par for the course at the time, and hence no blot on Hitchcock’s genius, but neither can these films be adduced as meaningful evidence of a distinctive gift for marrying music to story.
Sullivan, one senses, intends an argument that, even if bounded by the limitations of fashion, Hitchcock was especially masterful in getting the most out of the scores for his films. But this is persuasive only if we willfully forget what the state of the art in film scoring was at the time of each film Sullivan discusses. From Sullivan’s presentation one might almost forget that Max Steiner and Erich Korngold existed, for example. It is also unclear to me that composers who worked with Hitchcock came up with scores distinctly more artful than their normal standard. For example, is Alfred Newman’s work on All About Eve really small potatoes compared to his work on Hitchcock’s also-ran Foreign Correspondent?
Another questionable argument is that the singing and playing of music had an unusual pride of place in Hitchcock’s films. Okay, Hitchcock was clearly a more musical soul than John Ford. But in an era before high-fidelity recordings, more people played instruments and sang, period. And what about Frank Capra: Jimmy Stewart and Donna Reed singing “Buffalo Girls” in It’s a Wonderful Life, Essie dancing to Ed’s xylophone in You Can’t Take It With You, Frank Sinatra doing “High Hopes” in A Hole in the Head (though it is not a musical)?
Ultimately, Sullivan is driven into forced argumentation by his format, which is to describe the scoring in every single Hitchcock film, year by year. The truth is that more than a few of the films—the wan screwballer Mr. and Mrs. Smith, for example—simply do not merit musical discussion; The Birds has no music at all, while Rope has music only under the opening credits, plus one character playing that theme on the piano now and then. Yes, the scores that Hitchcock got out of Bernard Herrmann are indeed art for the ages. And some Hitchcock fans may well enjoy listening to the Late Romantic chocolate-box scores of Rebecca, Spellbound, and Notorious. Still, as to whether all of Hitchcock’s scores taken together are a cut above normal Hollywood scoring, my verdict was neatly illustrated at a bookstore appearance by Sullivan that I attended.
Someone in the back asked Sullivan to discuss the score of Bell, Book and Candle, and had to be gently reminded that Hitchcock didn’t direct the film. (The confusion was understandable: Bell, Book and Candle starred Jimmy Stewart and Kim Novak, the very same year they starred in Vertigo.) Next question.
Meanwhile, however, I was thinking about how much I’d always enjoyed the scoring in Bell, Book and Candle: sneaky, jazzy but contained, revolving around a neat, Pink Panther-like motif, pointing up the action without overwhelming it, and at times nicely interlaced with “literal” music by a jazz combo. One could pen a perfectly legitimate chapter analyzing the artistry of the score’s composer.
Who was, get ready, one George Duning, a house composer for the meat-and-potatoes Columbia studio, whose status in the composer firmament is indicated by the fact that while the celebrated Franz Waxman scored the classic Mister Roberts, it was left to Duning to do the honors for the unmemorable sequel Ensign Pulver. Yet workaday, unsung Duning managed the utterly deft and amiable score for Bell, Book and Candle. Often, what Sullivan praises Hitchco*ck for in lengthy detail was otherwise known as professional competence.
Especially having made my way through 30 Hitchcock films in six weeks, I salute Sullivan’s having viewed all 50-plus multiple times. It must also be mentioned that Sullivan makes ample reference to original score materials and the story behind the composition of each score, which often essentially means a description of the making of the film, material that is in itself almost always interesting.
I will value Hitchcock’s Music as a neat reference book on each and every Hitchcock score. Yet it will continue to be the seven scores by Herrmann—who conveyed Henry Fonda’s terror of imprisonment in The Wrong Man by putting a mike on someone plucking on the lowest strings of a grand piano—that draw my rapt attention. In that very selective preference, I suspect, I have plenty of company.
John H. McWhorter is a senior fellow at the Manhattan Institute. He is the author most recently of Winning the Race: Beyond the Crisis in Black America (Gotham Books). Among his other books is Defining Creole (Oxford Univ. Press).