
  • Pillars of Sand

    The fundamental shift in Western legitimacy

    A simple theory could explain the fundamental shift in Western politics in recent years. I propose that we are witnessing a shift in the way governments acquire and maintain legitimacy in the eyes of their populace. Judgements about whether governments and regulatory bodies are “legitimate” fall primarily into one of two beliefs: process legitimacy or outcome legitimacy. Until recently, Western polities overwhelmingly believed in “process legitimacy”: democratically elected governments were inherently legitimate because they followed the process, i.e. they obeyed the laws and came and went with elections. Whether they passed good or bad policies would make them more or less popular and would affect their chances at the next election, but rarely did their basic legitimacy to govern depend on whether their policies were good or bad.

    In recent decades, though, this has begun to shift, and the populaces of Western polities increasingly believe in “outcome legitimacy”: governments are legitimate or illegitimate as a function of how well they respond to the socioeconomic and sociocultural insecurities of their constituents. Many polls and studies reveal this deteriorating belief in democracy: belief in the legitimacy of the Supreme Court or Congress is abysmal. We can see this in stark relief not only in Trumpist claims that pluralistic Democratic victories are illegitimate, but also in France, where Macron is decried as illegitimate for his alleged abandonment of the working class. For the former, consider this excerpt:

    “Even if they don’t subscribe to the more outlandish conspiracies propagated by Trumpists, many Republicans agree that the Democratic party is a fundamentally illegitimate political faction – and that any election outcome that would lead to Democratic governance must be rejected as illegitimate as well. Republicans didn’t start from an assessment of how the 2020 election went down and come away from that exercise with sincerely held doubts. The rationalization worked backwards: They looked at the outcome and decided it must not stand.”

    And for the latter example of Macron, France Inter could not have put the difference in claims to legitimacy better:

    Sur le plan institutionnel, les règles de la démocratie sont simples et claires, le président de la République est celui des candidats qui a obtenu la majorité des suffrages exprimés. Sur ce plan, la légitimité d’Emmanuel Macron est donc incontestable.  

    Pour autant, au soir du premier tour, sur les plateaux de télévision, il y avait quelque chose d’indécent dans la suffisance et l’auto-satisfaction des « dignitaires du régime », souvent passés par des gouvernements de gauche et de droite, avant d’échouer en Macronie… Comme un manque de gravité qui ne correspondait pas aux circonstances et aux enjeux…  

    Pourquoi cela ? Parce que si la victoire d’Emmanuel Macron est indiscutable, la crise de la démocratie est, elle aussi, indéniable. Le président de la République a été réélu sur fond d’abstention massive, en particulier des actifs, face à une candidate qui continue à être délégitimée et même vilipendée par presque tous.

    On an institutional level, the rules of democracy are simple and clear: the President of the Republic is the candidate who has obtained the majority of the votes cast. In this respect, Emmanuel Macron’s legitimacy is therefore unquestionable.

    And yet, on the evening of the first round, there was something indecent about the smugness and self-satisfaction of the “dignitaries of the regime” on television panels—figures who had often passed through both left- and right-wing governments before ending up in Macron’s camp. There was a certain lack of gravity that did not match the circumstances or the stakes at hand.  

    Why is that? Because while Emmanuel Macron’s victory is indisputable, the crisis of democracy is equally undeniable. The President of the Republic was re-elected amid massive voter abstention, particularly among the working population, against a candidate who continues to be delegitimized and even vilified by almost everyone.  

    It is not the first time that outcome legitimacy has been significant in the West: as Jürgen Habermas claims in his 2013 The Lure of Technocracy, “The [European] Union legitimized itself in the eyes of its citizens primarily through the results it produced rather than by fulfilling the citizens’ political will.” But in general this belief about legitimacy is new to the modern West. It is, however, perfectly valid in other cultural and political systems around the world: Middle Eastern monarchies make no pretense to democracy (in many, the denizens are deemed subjects, not citizens, implying they have no role to play in the political life of the state); in China, the Communist Party has historically relied heavily on its ability to buoy material prosperity and defend China’s image abroad as its primary claims to legitimacy, rather than on claims to democratic processes or popular election (though China does maintain some nominally democratic institutions).

    A fair follow-up question to ask here is why this process has occurred. I am not entirely sure, but I have some hypotheses. The first is that the one-two punch of terrorism and recession in the 2000s created a climate of increased material insecurity and a need to ensure that governments were actually producing results that protected people physically and economically. A second, non-exclusive reason would be the Gurri hypothesis that distributed network technologies are making people more skeptical of governments and institutions, and leading them to want more explicit proof that these are working in the public interest. Other explanations surely abound and I would love to hear them in the comments.

    One could conclude that if this trend toward valuing outcome legitimacy continues, Westerners will become increasingly tolerant of undemocratic and unlawful acts on the part of their governments, so long as those governments are able to deliver desired results. The stunned tolerance of Elon Musk’s activities in the US Federal Government, to the extent that it holds, may be due in part to a sense of awe that he is able to move so rapidly and effectively and produce the kind of results that Trump campaigned on.

  • The Life and Death of Honor

    Obituary of one of the oldest human values

    I was recently reading the book version of Disney’s Mulan to my four-year-old son when he asked me what “honor” was. Although I usually pride myself on concocting kid-friendly analogies and simplifications, I truly struggled with his question and muttered something like “people thinking you’re a good person” before moving on. The question stuck in the back of my mind, however, and I have been continuously mulling over how to mentally model “honor” in a concise way. After days of struggle, I began to read, research, and think critically about the idea, and what follows is the digest of that process.

    The concept of honor was a staple of human society from the dawn of recorded history, and yet somehow in the past 300 years it has gone the way of steamboats and horse-drawn carriages. Honor today is at best a quaint vestige and at worst pathologized, coming up most often in the context of “honor killings” or the “honor culture” of the American South. Outside the limited scope of military service, “honor” is nearly synonymous with “machismo” or “bravado”: a puerile attachment to one’s own ego and reputation (or that of one’s kith and kin).

    A comparison with a random selection of other broad but unrelated terms demonstrates that the fall of honor is not just absolute but relative – “freedom”, for example, was a minor concern in the 16th century but has since dwarfed “honor”.

    Interestingly, “honor” was more prevalent than even “love” in the 16th century, but the opposite has held true ever since.
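    As a crude illustration of the kind of comparison these n-grams perform, here is a minimal Python sketch. The frequency values below are invented placeholders chosen only to mirror the trends described in the text – they are not real Google Books Ngram data:

```python
# Sketch: compare relative word frequencies across periods.
# The numbers below are illustrative placeholders, NOT real
# Google Books Ngram data.

freq = {
    # term -> {year: share of all words printed that year}
    "honor":   {1600: 120e-6, 1700: 60e-6, 1800: 25e-6, 2000: 8e-6},
    "love":    {1600: 100e-6, 1700: 110e-6, 1800: 130e-6, 2000: 150e-6},
    "freedom": {1600: 10e-6,  1700: 20e-6,  1800: 45e-6,  2000: 90e-6},
}

def dominant_term(year):
    """Return the term with the highest relative frequency in a given year."""
    return max(freq, key=lambda term: freq[term][year])

for year in (1600, 1700, 1800, 2000):
    print(year, dominant_term(year))
```

    With these placeholder values, “honor” dominates in 1600 and “love” thereafter, matching the crossover the n-grams show.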

    Wikipedia implies that honor served as a proxy for morality prior to the rise of individuality and inner consciousness: in our earlier, tribal and clannish stages of moral development, others’ perceptions of our actions, and how those actions reflected on our kith and kin, mattered far more than any inherent deontological moral value – hence honor.

    And yet there is a part of us that knows that this idea is missing something critical. When we read or watch stories about “honorable” characters like Ned Stark or Aragorn, we don’t think of them as quaint, macho, and motivated by superficial public perceptions of their clans and relatives. We know that when they talk about their honor, they are talking about being morally upstanding figures who do the right thing regardless of the material and reputational cost (to quote Aragorn, “I do not fear death nor pain, Éomer. Only dishonor.” – J.R.R. Tolkien, Return of the King). When we read this, we know that it is not socially minded showmanship but rather bravery and altruism, and a reader is supposed to like Aragorn for it.

    Upon contemplating this and reading further, it became obvious that “honor” was a catch-all term for many different qualities. It refers to personal moral righteousness and familial reputation, but it also refers to one’s conduct in warfare, to one’s fealty to one’s liege, and to one’s formal recognition by society. Given the ubiquitous and multifarious uses of the term, and the fact that pre-modern peoples seem to have been absolutely obsessed with it (prior to 1630 it was as important as love and vastly more important than freedom), it stands to reason that it was useful and good. So how exactly can we explain the benefits of honor, and what did it mean?

    The Benefits of Honor

    I came to the following categorizations of how honor worked and why it was useful in pre-modern society, from shortest to longest:

    1. A solution to the game-theory of pre-modern warfare

    In the modern world there is an International Criminal Court to enforce international law and prevent war crimes, and an international press to publicize violations of treaties and ceasefires. In the premodern world, these institutions did not exist. What prevented pre-modern peoples from regularly engaging in such acts? To some extent the answer is “nothing”, and indeed rape, pillage, and general atrocities were a constant feature of premodern warfare: the ancient Roman statesman Cato the Elder (234-149 BC) coined the phrase “the war will feed itself” (bellum se ipsum alet) to explain that armies would sustain themselves by pillaging and starving occupied territories, and it is telling of the longevity of that mode of warfare that the phrase is most heavily associated with a war nearly two millennia after Cato: the Thirty Years’ War of 1618-1648 AD. One institution that may have held these atrocities in check was the papacy, and later the state churches, though it stands to reason that a commander willing to commit such acts might not be dissuaded simply by threats of additional eschatological penalties. But one additional force holding back the worst of human nature during premodern war may indeed have been the concept of honor. A society that places a high value on honor means that individuals pay a high reputational cost for such actions as attacking civilians, violating treaties, or encouraging mass atrocities. This societal expectation discourages individuals from engaging in such behavior because they would lose honor – and as honor was transmitted familially, their families would be marked for generations. In a society where legacy is important, staining that legacy in perpetuity for a one-time military benefit may have made some commanders think twice.

    2. A heuristic to encourage the efficacy of pre-modern society and administration

    It is difficult for us moderns to understand the extent to which the pre-modern world was personal. Ferdinand Tönnies, one of the founders of modern sociology, viewed modernity as a process of transitioning from a Gemeinschaft (community) into a Gesellschaft (society). In the former, looking out for one’s friends and family, using influence for their benefit, and helping them get into positions of power is considered good and dutiful; in the latter, being dutiful means impersonally discharging the role of one’s office without regard to personal relationships, and too much favoritism is considered corruption or nepotism. Indonesia’s former president Suharto once neatly encapsulated the difference (and revealed that Indonesia was still a Gemeinschaft) with the quote “what you call corruption, we call family values”.

    Most pre-modern societies, particularly feudal ones, had almost non-existent states, the governing apparatus being almost completely dissolved and reconstituted upon a monarchical succession. 14th-century England had only a few hundred people in the direct employ of the crown, most of them tax collectors, royal estate and forest managers, and keepers of the peace. A monarchical succession meant that a new ruler, with his or her own network of dependents, friends, and trustees, would need to pick his or her own councilors and appointees to go out and do things. How was a monarch to choose people for all of these roles? The work of state administration was built on personal reputation. If a monarch needed something done well, they needed a quick metric by which to assess the efficacy of an appointee. To that end, they could simply use someone’s reputation for honor.

    Thus, to the extent that honor encompasses such qualities as honesty and forthrightness, it would encourage the enforcement of contracts and the upholding of laws. If it encodes diligence, willingness to abide by oaths, and overtness in one’s declarations of allegiance, then it would help to encourage stable politics and relations amongst peers of the realm (an above-board network of alliances allows a rational calculus of when and whether to initiate a conflict; a clandestine alliance system gets us the tangle of World War One). If honor encompasses honesty and charity, it would entail dependability in collecting and remitting taxes and in making the investments necessary to prevent or curtail social unrest among the lower classes. And most importantly, honor was a stand-in for loyalty to one’s liege and the crown. If you’re assigning a general to lead an army or a governor to rule a wealthy province, you want to be sure that they’re not going to betray you. Honor serves both as a metric for that likelihood and, failing that, as a reputational deterrent falling on the future generations of a traitor.

    3. A general shorthand for morality that is responsive to changing moral frameworks

    Western civilization has spoken of “honor” for thousands of years, and what it means in terms of personal virtues has changed radically over that time. One of the most important ideas of Friedrich Nietzsche is the distinction between Slave Morality and Master Morality. In Nietzsche’s conception, the Master Morality of the societies that used to be slave masters (Mesopotamians, Greeks, Romans) was later overturned by an inverted Slave Morality of those who used to be their slaves, i.e. Judeo-Christian moral values.

    Let us first examine how honor worked in the Master Morality of the ancient world. In this conception, what was morally good was to judiciously utilize the benefits of being at the top of the social pyramid, to become the most powerful and beautiful version of oneself, to surpass one’s rivals, to fully self-actualize. We see this fully laid out in epics such as the Iliad, wherein what is exalted is martial, mental, and interpersonal prowess. As Hector explains in the Iliad (Book VI), “I have long understood, deep in my heart, to defend the Trojans under Hector’s spear, and to win noble glory for myself and my forebears”. We see this carried over into Roman society, exemplified by the pursuit of glory and the acquisition of familial “honors” – which is how the word (honor) is used when it first enters the Western lexicon. Ancient Romans, particularly in the late Republican period, were absolutely obsessed with acquiring honors, in the plural. In this sense, honors meant public recognition via titles, offices, and displays such as the all-important triumph, in which the celebrated man would have to be reminded that he was mortal lest he follow the path of Bellerophon and think he had acquired divinity. By the late Republic the quest for honors had become an obsession, and their pursuit was fuel for the civil wars and lust for power that ended the Roman Republic. To wit, Cicero comments in his De Officiis (On Duties), “honor is the reward of virtue, and the pursuit of honor is the very aim of every great man. It is the highest obligation to seek the recognition of those who are worthy, not for personal gain, but for the service of the state.” And as later Roman commentators such as Livy observed in looking back on that period, “No man is truly great unless he has acquired honor through the strength of his own actions. It is the pursuit of honor that drives men to greatness” (History of Rome, Book 2).
In other words, honor in the Master Morality framework was exogenous, not endogenous. It was about getting others to recognize your greatness.

    The rise of the former slave populations with their Slave Morality truly inverted things. In the Gospels Jesus intones repeatedly against the pursuit of public recognition: “When you give to the needy, sound no trumpet before you, as the hypocrites do in the synagogues and in the streets, that they may be praised by others” (Matthew 6:2); “you know that the rulers of the Gentiles lord it over them, and their great ones exercise authority over them. It shall not be so among you. But whoever would be great among you must be your servant, and whoever would be first among you must be your slave” (Matthew 20:25-27); and further, “for they loved the glory that comes from man more than the glory that comes from God” (John 12:43). The message of humility naturally was taken up by those who were already materially humbled by the existing socioeconomic order (the slaves and urban poor) and resisted by those who had the most to lose – the practitioners of the Master Morality of public honor and glory. Over the following three hundred years Christianity came to be the dominant religion of the Mediterranean world. What followed in the West was a literal moral revolution, the overthrow of the Masters by the Slaves, and the creation of a new order in which honor was still a goal, but the means of its attainment shifted radically. Thus by the 5th century St. Augustine echoed the same message: “Let us then approach with reverence, and seek our honor in humility, not in pride” (City of God, published 426). Through the Middle Ages we hear from Dante Alighieri that “The greatest honor is not that which comes from men, but from God. And the greatest humility is knowing that, without His grace, we are nothing” (Divine Comedy, 1321). And at the end of the Medieval period, Sir Thomas Malory repeats the same message: “He who is humble in heart, his honour shall be pure and his glory eternal; but pride is the enemy of honor and virtue” (Le Morte d’Arthur, 1470).

    The Death of Honor

    As we saw from the n-grams above, honor died around 1700 in Western Europe, as “the old aristocratic code of honor was gradually replaced by a new middle-class ethic of self-discipline, hard work, and social respectability” (Lawrence Stone, The Crisis of the Aristocracy, 1965). But following the trend line, its subsequent exhumation in the 19th century was as a pastiche to reminisce about or poke fun at, not a genuine revival of the cultural value. For example, in Sir Walter Scott’s magnum opus Rob Roy, the word “honour” appears 152 times in a little over 500 pages. Jane Austen used the term quite often – 256 times in her collected works – but anyone who has read Austen will know that she came to bury honor, not praise it.

    The exact reasons for the decline of honor are difficult to pinpoint, as there were myriad processes unfolding at the time. The Enlightenment subjected many cultural values to rational scrutiny, and Enlightenment thinkers like Voltaire had no mercy for a concept they urgently wished to be rid of: “The idea of honor has caused more bloodshed than any other human folly” (Voltaire, Candide, 1759). If one looks back on the typology of the benefits of honor above, we can see that many of the reasons for honor’s existence had become moot by the 18th and 19th centuries. For example, the growth of centralized bureaucratic states allowed expanded recordkeeping and objective evaluations of merit, eliminating the need for reputational heuristics. Increased law, order, communication, and infrastructure meant greater movement and atomization of the individual as the Gemeinschaft gave way to the Gesellschaft; familial reputation gave way to licenses, certifications, degrees, and other formalized signals of social status. And as “honor” had been a vacuous stand-in for any number of human virtues and moral qualities, it could with little difficulty simply be replaced by more precise terms for those qualities, e.g. “honesty” or “bravery”. Thus “honor” came to be a term of critique for those areas of the world most resistant to modernizing and Enlightenment influences, where “honor culture” and “honor killings” persist as remnants of what was once the dominant mode of human thought and moral reasoning.

    Epilogue: Another Resurrection?

    Looking back at the n-grams, we can see one last fillip on the trend line beginning in the late 20th century. While no clear explanation has been put forth for this, one obvious suspect would be the rise of historical fantasy. With Lord of the Rings, Dungeons and Dragons, Game of Thrones, and countless other fantasy worlds attracting millions of fans on screens, pages, and tabletops, it is little wonder that concomitant historical concepts such as “honor” should rise in popularity as well. Growth in this direction is evidenced by the fact that historical-fantasy tropes such as dragons, empires, knights, and castles have seen large gains in popularity since the 1990s, while historical terms with less resonance in fantasy, such as “crusade”, “dowry”, “abbot”, and “chastity”, demonstrate no such gains.

    Nevertheless, the usage in the English lexicon of “honor” remains a small fraction of what it once was. Honor continues to interest us academically and fictionally, but there is little chance of it returning to guide our choices and moral values in the here-and-now.

  • How Important is the “Scientific Method”?

    Tyler Cowen recently posted on Marginal Revolution the question “How Important is the ‘scientific method’?” This called to mind the following paper I wrote several years ago, in which I analyzed how many of the most important scientific discoveries of all time came about from scientists who eschewed “good science” and followed their hearts, biases, convictions – whatever you want to call them. Since writing this analysis, my opinion on the matter has ebbed and flowed – I think we increasingly need to turn to established institutions and procedures to help navigate the rising tide of disinformation, fake news, and fake science, but at the same time I think we must be more skeptical and critical of such institutions, as they retain the ability to shut down dissenting voices and heterodox research. The recent Washington Post article on heterodox research in anthropology/archaeology is a great case in point. Open-sourcing the publication and peer review of research, as the article illustrates, is one possible avenue, but does that solution prevent or facilitate the capture of those review processes by bad actors (botnets, special interests, internet vigilantes)? Without question the world is heading toward a fundamental restructuring of core processes and institutions that have served us well for centuries, and Western Civilization is facing its greatest epistemic upheaval since the Protestant Reformation. For those of you seeing this blog for the first time, that is a core theme of my writings. For other articles touching on this theme, check out this, this, or this.


    Without further ado, my paper on the subject:

                Scientific progress comes in a variety of forms, be it via flashes of inspiration, methodical research and investigation, or simply the products of fortunate accidents. Nevertheless, through the history of western science, certain standards have arisen regarding the way in which “true” science should be done. The scientific method remains the centerpiece of these standards, stressing repeated observation, testing of hypotheses both new and old, and checking our assumptions against the realities of the physical world. In general, it is also considered standard to report all findings, whether they support current theories or not, and thus manipulation and selection of data to support preconceived notions is frowned upon, and generally considered unscientific. This stance, however, is the ideal; in reality, manipulation and staging of data has given us some of the greatest scientific breakthroughs in history – and perpetuated some of the worst misconceptions. The question, therefore, is not whether manipulation takes place in good science, but why it takes place and how science can be good despite it. Ultimately, we find that selection and staging of data has occurred throughout modern science, and that the reasons for it are based not on failures of the methods and standards of scientific practice, but rather on external social and personal influences which take advantage of the fact that science is, at its core, a human endeavor.

                In their critical analysis of the history of modern science, The Golem, authors Collins and Pinch discuss at length the 19th century debate over spontaneous generation versus biogenesis, and the role that Louis Pasteur played in the battle of scientific viewpoints. The account provides an excellent illustration of how good science can still be wrong, while “impure” practices can still illustrate the truth. Pasteur’s rival, Felix Pouchet, a staunch proponent of spontaneous generation, conducted a series of experiments to prove that life could arise from non-living material. His results were fully documented, and his experiments always showed the rise of life from his supposedly sterilized materials (Collins and Pinch 84-86). On the other hand, Pasteur conducted many experiments, of which some also seemed to show abiogenesis. Pasteur conveniently disregarded those experiments, only publicly reporting those outcomes which supported his hypothesis; “he did not accept these results as evidence of the spontaneous generation hypothesis” (C&P 85). Ultimately, Pasteur was able to align the scientific community in his favor, and biogenesis became the accepted theory of the propagation of life (C&P 87-88). The question this requires us to ask, though, is how selected and staged data ultimately came to be proven correct. Clearly, Pasteur’s methods broke with “standard” scientific practice even in the 19th century and even more so today, so there appears to be no direct connection between factual accuracy and adherence to the scientific method. Yet the scientific method remains the cornerstone of scientific research and investigation, so perhaps there is more to the nature of science than the example of Pasteur can adequately illuminate.

                Perhaps a more blatant example of staging and manipulation is that of Arthur Stanley Eddington and his astronomical expeditions in 1919. Determined to prove Einstein’s theory of relativity correct, Eddington and his fellow researchers set sail for the equator to observe a solar eclipse, a unique opportunity to test relativity’s predictions of gravitational lensing. The expedition’s observations, however, were ambiguous, with some observations supporting Einstein’s predictions and others Newton’s (C&P 48-49). Eddington chose to publicize only those data which supported Einstein. His reasons for this are mixed, and The Golem and Matthew Stanley’s article explain his staging in different ways. The account from The Golem portrays Eddington’s thought process as simply ignoring systematic errors and focusing on those results which seemed devoid of irregularities; as a result he was conveniently left with those results which confirmed relativity. By that account, Eddington was doing “good” science, because he knew well the objective realities of his work and was able to determine what data should and should not have been taken into consideration.

                The Stanley article, on the other hand, paints a very different picture of Eddington and his motives. The article notes that prior to the time of Eddington’s observations, international science had grown petty and nationalistic, in many ways tied to the bellicose technological advances of the First World War. The horrors which destructive technologies such as the battle tank, advanced artillery, and poison gas had wrought by the end of the war led to public apprehension and disdain toward scientific achievement (Dr. Ralph Hamerla, Lecture). According to Stanley, Eddington sought to change that sentiment. Raised a Quaker, and thus with a more humanistic and internationalist outlook, Eddington sought a transnational approach to solving the problems of mankind (Stanley 59). He believed that a British expedition into Africa and South America to confirm a German’s theory would speak volumes for the international approach necessary for beneficial scientific advancement (Stanley 59). In light of Stanley’s article, then, Eddington had a personal motive and religious background which may have biased his observations and his decisions regarding the staging of his data.

    This brings us to an important question about not only Eddington, but manipulation and staging of data in general. Are data and observations manipulated intentionally and consciously by those who present them, staged because of subconscious influences, or is there more to the matter than that?  Returning to the example of Pasteur, The Golem reveals that “Pasteur was so committed in his opposition to spontaneous generation that he preferred to believe there was some unknown flaw in his work than to publish the results…He defined experiments that seemed to confirm spontaneous generation as unsuccessful, and vice versa” (C&P 85). Here, then, is another example of a scientist who does manipulate and disregard data in a conscious way, but not for any conscious reasons.  Rather, his preconceived notions about what was to be expected influenced his interpretation of the data. Eddington’s actions likely followed a similar path.  His Quaker, internationalist attitudes and desires may have subconsciously caused him to see systematic error in the observations he made, and those observations which confirmed relativity appeared relatively flawless to him. It is always possible, however, that he made his interpretation purely out of scientific objectivity, but Pasteur’s example seems to make the first possibility more likely.

    These examples cannot be taken to imply that manipulation is always done without conscious intent, however, nor that such staging of data always contributes to a better understanding of natural realities – quite the contrary on both counts.

    Though one could imagine that a conscious manipulation of data and figures would be intended to change the perceived outcome of an experiment, Charles Darwin used staging to the opposite effect: to better explain the argument he was already making, from the conclusions he had already drawn. Philip Prodger’s “Inscribing Science” addresses Darwin’s intentions behind the manipulation of photography. In his The Expression of the Emotions in Man and Animals (hereafter “Emotions”), Darwin makes great use of the then fairly new medium of photography to provide actual images of expressed emotions in people. As Prodger asserts, “His photographic illustrations were carefully contrived to present evidence Darwin considered important to his work” (Prodger 144). Given that the medium was relatively new at the time, it had its limits in terms of both detail and exposure time, “in the order of tens of seconds” (Prodger 156). As a result, some tweaking, staging, and manipulation were necessary to accurately convey Darwin’s selected evidence. He collaborated with both Oscar Rejlander and Duchenne de Boulogne in generating images for Emotions, with the former providing photographs of posed emotions and the latter photos of electrode-induced facial expressions. Darwin manipulated both for his book. In the case of Duchenne, Darwin removed electrodes and scientific equipment from the photos, leaving only the induced emotion visible in the book (Prodger 166-169). One of Rejlander’s photographs, known as Ginx’s Baby, was so important for the book that Darwin created a photograph of a drawing of the photo, so as to ensure that all details of the image were captured perfectly (Prodger 173-5). At the same time, for the photographs produced by both men, Darwin changed the settings of the subjects, going so far as to place Ginx’s Baby into a comfortable chair.

    Ginx’s Baby (image not reproduced)

    Darwin’s reasons for his manipulations seem obvious enough. The photography of the day, with its long exposure times and imperfect detail, was incapable of capturing the split-second nuances of human emotional expression. It was thus difficult to communicate, via photography, the scientifically important intricacies Darwin needed to support his claims. He could, however, observe these emotions and draw his conclusions from them as they happened. Thus Darwin was not truly manipulating his data, merely the means by which he passed it on to others, casting serious doubt on the idea that he had deceptive motives behind his alterations.

    Innocence may be questionable, however, in the case of the alteration and manipulation of data relating to racism and biological determinism during the 19th and early 20th centuries. In The Mismeasure of Man, Stephen Jay Gould analyzes the manipulative means – intentional or otherwise – that racial scientists and craniologists employed in the dissemination of data relating to innate racial differences and phylogeny. His analysis of Samuel George Morton gives keen insight into the thought process of a blatant manipulator of data. Regarding Morton’s presentation of data on average brain size among races, Gould states that he “often chose to include or delete large subsamples in order to match group averages with prior expectations,” that in his reports “degrees of discrepancy match a priori assumptions,” and that “he never considered alternate hypotheses, though his own data cried out for a different interpretation” (Gould 100). All of these cases seem to demand the inference that Morton was consciously and actively manipulating data to match his own preconceived notions about racial characteristics. Yet Gould himself takes the other side, stating that he “detect[s] no sign of fraud or conscious manipulation…Morton made no attempt to cover his tracks…he explained all of his procedures and published all of his data” (Gould 101). He concludes that the only motivation behind Morton’s warping of data was “an a priori conviction about racial ranking” (Gould 101). Yet despite such flawed data, “Morton was widely hailed as the objectivist of his day” (Gould 101). That he was hailed as such clearly demonstrates the degree to which bias and misconceptions permeated society. Based on his and others’ studies, craniometry and the racial sciences perpetuated ideas of white racial superiority well into the twentieth century.

    We are therefore left with an indecipherable mixture of outcomes arising from manipulations of scientific data that depart from the purity of the scientific method. With Pasteur and Eddington, assumptions about the “correct” outcome of their experiments allowed them to “know” which data to exclude and which to accept. Both were ultimately proven correct, but whether their correctness was due to superior scientific understanding or pure luck is not an answerable question. With Darwin, the manipulation was clearly intentional, but the purpose benign: to better communicate technologically limited evidence and proof. His conclusions regarding related emotions in humans and other animals are now supported by overwhelming scientific evidence – his was thus a case of superior scientific understanding. In the case of Morton and the racial scientists of the 19th and 20th centuries, it is clear that preconceived notions can lead science down the wrong path as well as the right one. In their quest to prove assumed facts, these scientists ignored alternate interpretations and instead caused the stagnation of “objective” scientific perspective in the study of human physiology and evolution, while perpetuating social ills for a century or more.

    It can be seen that assumptions and a priori conceptions about an area of science can utterly change the scientist’s perception of the outcome. Our initial question about the cause of these preconceptions, however, admits many possible answers. Social and religious goals are plausible explanations, as the examples of Morton and Eddington respectively suggest, but pride, arrogance, or simple variances in scientific understanding are equally valid conclusions. It seems foolish, however, to assume that humans, who carry opinions and preconceptions in every area of their personal lives, could be capable of completely ridding themselves of those same opinions in the pursuit of science. One can conclude that as long as humans engage in scientific inquiry, they shall bring with them their imaginations, opinions, and cultural biases; whether those variables ultimately add to or detract from the quality of human scientific achievement is a purely subjective matter that we cannot hope to settle through prattling verbosity.

  • Pride and Prosperity:

    The Pillars of CCP Legitimacy

    The Chinese Communist Party has ruled over the People’s Republic of China ever since its victory over Nationalist forces in 1949. For the past six decades it has maintained a monopoly on political power despite wars, crises, and political dissent, and today it is the largest and one of the longest-ruling political parties in the world. Nevertheless, the party’s rule does not come easily, and the CCP has turned to various strategies over the years to end dissent and lend legitimacy to the one-party system. To solidify its rule, the CCP has turned consistently to two main ways of satisfying the 1.3 billion people of China: first, it taps into strong sentiments of nationalistic pride and identity and binds them to the CCP, and second, it seeks to improve the material quality of life of its citizens and endeavors to present itself as the best means of doing so. It is through these two avenues that the CCP has maintained its power in China both historically and today. Yet even these time-tested methods of legitimation have their intricacies and drawbacks, leaving the CCP in a precarious position of tentative rule.

    The Chinese sense of nationalism has been one of the grassroots bases of support for the CCP since the 1930s and the struggle against Japanese invasion and occupation. By the end of the war in 1945, the Communists had seized on nationalism as a powerful recruitment mechanism and articulated it even more effectively than the so-called Nationalists. This gave them the momentum necessary to win the ensuing civil war, but with the side effect of internalizing nationalist sentiment as a source of identity and legitimacy as a ruling party. On proclaiming the People’s Republic in 1949, Mao declared that “the Chinese people have stood up,” a humble yet blatantly nationalist remark that broke with the universalist Marxist ideology (e.g. “workers of the world, unite!”) which the CCP at least nominally followed.

    Even as other claims to legitimacy have fallen by the wayside, nationalism still forms part of the backbone of CCP ideology, outlasting even the namesake communism. Indeed, “the Communist Party stirs patriotic feelings to underpin its legitimacy at a time when few, even in its own ranks, put much faith in Marxism” (Kahn). Various issues today embolden those Chinese citizens who take a hard line on nationalism while giving the CCP occasions to articulate its nationalist identity, ensuring that nationalism plays an important role in both the domestic and foreign policy of the CCP.

    “Japan, which China says killed or wounded 35 million Chinese from 1937 through 1945, gets the most attention” on the Chinese nationalist front (Kahn). Of course, as the primary opponent against which the CCP built its nationalist identity, Japan naturally bears the brunt of Chinese nationalism. Anti-Japanese sentiment has surged in recent years, with events such as the outrage over the Zhao Wei dress incident and the glorification of Feng Jinhua’s desecration of the Yasukuni Shrine perhaps only the first in a long line of anti-Japanese outbursts in China (Gries “New Thinking” 832, 5).

    Though Japan may be the most prominent target on the CCP’s nationalist agenda, it is certainly not alone, as “official propaganda and the national education system stress the indignities suffered at the hands of foreign powers from the mid-19th century through World War II,” known as China’s “Century of Humiliation” (Kahn). During that period, European, American, and Japanese interests subjugated China, inflicting wounds which still scar the Chinese consciousness. By contrast, the re-acquisition of Hong Kong and Macau in the 1990s served as a rallying point for Chinese nationalists, who depicted China as a lion roused from slumber – all of which played perfectly into the age-old narrative that the CCP was leading China to national might.

    Amidst the pride of power, however, comes apprehension about weakness relative to the United States, as illustrated by the 1999 bombing of the Chinese embassy in Belgrade. The incident, despite American assurances that it was purely accidental, stoked a firestorm of anti-western and, more specifically, anti-American sentiment. Through Chinese eyes, the bombing “was not an isolated event; rather, it was the latest in a long series of western aggressions against China” (Gries “CNN” 17). The Chinese public was outraged, and the CCP, reflecting such strong national sentiment, echoed the people’s outcries of intentionality and called for apologies from the US (Gries “CNN” 19-20).

    This illustrates an important change: nationalism can at times be unwieldy and has become a force beyond the CCP’s control. “The 1990s witnessed the emergence of a genuinely popular nationalism in China that should not be conflated with state or official nationalism…the party’s legitimacy now depends on accommodating popular opinion,” and as such the CCP has in recent years become subject to the force it once commanded (Gries “CNN” 20). This has dire ramifications for the future of Chinese power, as the CCP can no longer direct its own course in foreign affairs but must placate the hardliners in populist fashion. The issue of Japan may play out worst for China’s future. The CCP is unable to pursue any conciliatory policies with Japan for fear of appearing “weak before nationalists at home” and thus cannot use Japan to counterbalance the influence of the United States – an outcome that runs against the long-term interests of a China with global ambitions (Gries “New Thinking” 848).

    Thus nationalism, long one of the CCP’s reins of power, may have lost its power to effect loyalty and fully legitimize CCP rule. No longer a tool the party wields to its advantage, nationalism has become a necessity for party survival, even if its application works to the long-term detriment of Chinese power. Unfortunately for the CCP, nationalism and foreign policy may not be the only areas in which necessity, not desire, has become the call to action.

    The other traditional area in which the CCP has articulated its legitimacy has been its stewardship of the people, bettering their lives materially and reducing inequality and exploitation by the ruling class. At the outset of the People’s Republic this was the nominal goal of the communists (and indeed communism in general) but over the course of the next thirty years, disastrous policies in socialist and communist endeavors resulted in the Reform and Opening of the post-Mao era (Thornton). Since that time China has transitioned to a full market economy, shedding almost all vestiges of its communist origin and namesake, “gambling that people would overlook the failure of communism as an ideology if Communists could make them richer” (Pan 117). What has resulted, ironically, is the exact exploitation that the CCP came into power to eradicate: authorities systematically repress peasants, income inequality is on the rise, and corruption and graft deprive the people of wealth and opportunity. Yet the CCP attempts to address all of these issues and others, managing to cleave tentatively to power by supposedly bettering the lives of its people.

    Pension reform represents a major way in which the CCP has averted unrest in recent years while simultaneously addressing issues of inequality, with efforts to provide a basic social safety net materializing in the Social Insurance Law passed in 2010. Representing “a major step in the CCP’s efforts to tackle problems of income inequality and inadequate welfare,” the legislation aims to unify and codify many of China’s disparate and inadequate welfare systems (Frazier 386). By increasing welfare and pension benefits, the CCP sought to avoid the “often dramatic urban protests” which “posed multiple challenges to the CCP’s legitimacy”; the party thus shored up support amongst the urban poor and re-affirmed its narrative of working for the betterment of the people.

    One of the longest-standing and best-known of the CCP’s solutions has been the One Child Policy, aimed at controlling a once-unsustainable birthrate by limiting most families to one child. Through the policy, “officials have sought to curb the excesses and inequities and have argued that the policy has prevented roughly 400 million births and allowed the country to prosper and better live within its resources,” allowing material wealth and opportunity to be distributed amongst fewer people overall and helping to secure their welfare in the long term (Yardley). However, the policy threatens a future drawback: demographic crisis. “China’s fertility rate is now extremely low, and the country’s population is aging rapidly,” indicating that in the near future young workers may be too few to sustain the more populous elderly (Yardley). In response to this looming issue, CCP policymakers have flirted with altering or ending the longstanding policy, demonstrating that working for the good of the people has been the goal all along: first reducing birthrates to prevent overpopulation, then relaxing restrictions to prevent demographic collapse, both in the interest of national stability and, with it, government legitimacy.

    Perhaps most importantly, the struggle against corruption has been one in which the CCP aims to garner loyalty by casting itself as a staunch defender of the people against abusive local officials. In recent years, corruption cases such as that of party officials in Shenyang and the “shoddy construction” of earthquake-felled buildings in Sichuan have revealed enormous corruption at the local level (Pan 131, Alpert). Without fail, however, the “state media [present] the case[s] as an example of the party’s resolve to keep its cadres honest,” unerringly portraying corruption as a purely local issue and anathema to the CCP’s national practices and ideals (Pan 131).

    Yet despite all its toil and propaganda, the CCP’s decades-old narrative of working for the good of the people may finally be beginning to wither away, for “China’s propaganda machine…is sometimes hamstrung in the age of the Internet, especially when it tries to manipulate a pithy narrative about the abuse of power” (Wines). As news such as the Li Gang case spreads around the country and the national populace becomes aware of the “scale of malfeasance” transpiring around them, it may not be long before the legitimacy of one-party rule is irreparably damaged (Pan 131).

    So how does the Chinese Communist Party maintain its power today? It does so in the same ways it always has. By taking up the banner of national pride and strength, the CCP earns the support and loyalty of nationalist elements. And by portraying itself as the supporter and benefactor of the people it gains the trust of the common man. But as the tides of history turn and the people learn to contest the monopoly of Communist power, the CCP may find its twin pillars of legitimacy looking remarkably fragile in the coming years.

    Works Cited

    • Alpert, Jon, and Matthew O’Neill, dirs. China’s Unnatural Disaster: The Tears of Sichuan Province. 2010.
    • Frazier, Mark. “From Status to Citizenship in China’s Emerging Welfare State.” Gries, Peter Hays and Stanley Rosen. Chinese Politics: State, Society and the Market. RoutledgeCurzon, 2010. 386-404.
    • Gries, Peter Hays. “China’s “New Thinking” on Japan.” The China Quarterly (2005): 831-850.
    • —. China’s New Nationalism. Berkeley: University of California Press, 2004.
    • Kahn, Joseph. “Beijing Finds Anti-Japan Propaganda a 2-Edged Sword.” The New York Times 3 May 2005.
    • Pan, Philip. Out of Mao’s Shadow. New York: Simon & Schuster, 2008.
    • Thornton, Patricia. “Comrades and Collectives in Arms: Tax Resistance, Evasion, and Avoidance Strategies in Post-Mao China.” Gries, Peter Hays and Stanley Rosen. State and Society in 21st Century China. RoutledgeCurzon, 2004.
    • Wines, Michael. “China’s Censors Misfire in Abuse-of-Power Case.” The New York Times 17 November 2010.
    • Yardley, Jim. “China Wants Gradual Shift Away from Its One-Child Policy.” The New York Times 8 December 2008.