Author: Scott Mauldin

  • The War That Wasn’t: Christianity, Science, and the Making of the Western World


    Examining the History between Science and Christianity

    During my early adulthood I was a zealous New Atheist, and as such believed wholeheartedly in a message central to the movement: that Christianity had been a parasite on Western civilization, dragging humanity into the Dark Ages and smothering science until skeptics and Enlightenment thinkers finally pulled us back into the light. While studying European history in depth, though, I began to see cracks in that story. The relationship between Christianity and science wasn’t as clear-cut as New Atheists made it out to be, and in some respects was rather constructive. But the question remained in my mind, and it grew into a larger curiosity about what made the West different—one that would eventually drive my MA studies in international economics and development.

    Recently, though, something surprising happened: I saw the old narrative resurface. If you were active in New Atheism circles in the 2000s (or honestly if you were active on the internet at all; to quote Scott Alexander, “I imagine the same travelers visiting 2005, logging on to the Internet, and holy @#$! that’s a lot of atheism-related discourse”) you probably saw a chart that looks something like this:

    While many of those who were active New Atheists back in the early 2000s have mellowed out and found other ideological motivations (or disenchantments), it seems there is a new generation of zealous Atheists engaging with these ideas for the first time, and in one secular group I saw the above graphic posted unironically, even with the ridiculous attribution of a “Christian” dark ages. The resurfacing of that old certainty, combined with a provocative new scholarly article offering an economic perspective on Christianity’s role in scientific progress, prompted me to revisit the question. How exactly did Christianity interact with science throughout history? The answer is messier, and far more interesting, than the stories I once took for granted. I will tackle this in a way that is at once thematic and chronological: 1) the Dark Ages (or Early Middle Ages), 2) the (High) Middle Ages, and 3) the modern (post-Renaissance) world.

    I. Did Christianity Cause the Dark Ages?

      First, I want to address the easiest part of the question: did Christianity somehow cause the Dark Ages?

      I think this can be answered very briefly with an unqualified “no”, and I will go even further and say “quite the opposite”. I don’t know of any reputable modern historians who would say otherwise. Historically there have of course been literally hundreds of different causes blamed for the fall of the Western Roman Empire and the ensuing “Dark Ages”, including such obscure culprits as infertility, trade imbalances, loss of martial virtues, and wealth inequality. Yes, contemporaries in late antiquity did blame Christianity and the abandonment of traditional paganism for the fall of Rome. For example, one of the last notable pagan scholars, the Eastern Roman historian Zosimus, put it plainly: “It was because of the neglect of the traditional rites that the Roman Empire began to lose its strength and to be overwhelmed by the barbarians” (Historia Nova, Book 4.59). Most famously, (Saint) Augustine of Hippo wrote his City of God to refute such perspectives (though he wrote before Zosimus): “They say that the calamities of the Roman Empire are to be ascribed to our religion; that all such evils have come upon them since the preaching of the Gospel and the rejection of their old worship” (The City of God, Book 1, Chapter 1). Needless to say, these were not modern academic historians and were clearly making biased assertions based on the vibes of the day.

      Today the historical consensus seems to be some combination of climate change-induced plague (see Harper, Kyle, “The Fate of Rome”), mismanagement by later emperors of the resulting chaos and malaise, and most importantly the Völkerwanderung, “The wandering of the peoples,” i.e. the migration of massive numbers of people from central and western Eurasia into Europe during the 4th and 5th centuries. To summarize the contemporary consensus into one sentence: the Western Roman Empire fell because decades of plague and civil war made it too weak to repulse or assimilate the entire nations of people migrating into Western Europe, and thus had to cut deals, cede territory, and slowly “delegate itself out of existence” (Collins, “Early Medieval Europe”). In direct refutation of the claim that Christianity caused the Dark Ages, the Christianization of these Germanic and other peoples was a vital channel for the transmission of culture and values and an important step toward (in Rome’s conception) civilizing and settling them in the Mediterranean world (see https://en.wikipedia.org/wiki/Ulfilas as one example).

      As further refutation, the brightest spark of European Civilization during the Western European Dark Ages (roughly 500-800) was the Eastern Roman Empire, which was unquestionably *more* thoroughly Christian than the “Darkening” West. The Eastern empire boasted a stronger institutional religious structure, with the emperor himself dictating much of its theological policy, and strongarm enforcement of official positions (e.g. during the Christological debates of late antiquity) was common: “The Byzantine emperor, always the head of the imperial hierarchy, automatically evolved into the head of this Christian hierarchy. The various bishops were subservient to him as the head of the Church, just as the governors had been (and were still) subservient to him as the head of the empire. The doctrines of the Christian religion were formulated by bishops at councils convened by the emperor and updated periodically at similar councils, with the emperor always having the final say” (Ansary, Tamim, “Destiny Disrupted”).

      The Western empire, by contrast, struggled for centuries with institutionalization and conversion, with the Catholic church wrestling not just with latent paganism and heretical syncretism among rural populations but also with an existential battle against Arian Christianity (a nontrinitarian form of Christianity that asserts that Jesus was not an incarnation of God but merely a lesser creation of God), common for centuries among the ruling strata of Vandals and Goths in the early middle ages; “the Vandals were Arian Christians, and they regarded the beliefs of the Roman majority as sufficiently incorrect that they needed to be expunged… Arianism continued as the Christianity of ‘barbarian’ groups, notably Goths, Vandals and eventually Lombards, into the seventh century” (Chris Wickham, “The Inheritance of Rome”). Though I will risk overextending my argument here, I will say that the Church in fact prevented the Dark Ages from being even worse: “the Church emerged as the single source of cultural coherence and unity in western Europe, the cultural medium through which people who spoke different languages and served different sovereigns could still interact or travel through one another’s realms.” (Ansary).

      There is a caveat to all this, though. Christianity did seem to have a deleterious effect on the logical philosophy of the late Empire. I have been able to find at least three separate early Christian philosophers who all deliver variations on the same idea that faith should triumph over logic and reason: “The nature of the Trinity surpasses all human comprehension and speech” (Origen, First Principles, Preface, Ch. 2); “If you comprehend it, it is not God” (Augustine of Hippo); and “I believe because it is absurd”, “Credo quia absurdum est” (traditionally attributed to Tertullian, De Carne Christi). But it is important to contextualize these perspectives in a general trend towards mysticism in late antiquity (Christianity was not alone, as Mithraism, Manicheism, Sun worship, and other prescriptive revealed religions swirled in the ideological whirlpool of the era), and also to see the rise of all of the above as reactions to the declining state of the political economy: as we see evidenced today, material insecurity pushes people toward the comfort of religion (see e.g. Norris and Inglehart, Sacred and Secular).

      II. Did Christianity Hinder or Help the Birth of Modern Science?

      This question is somewhat more difficult to answer, and I originally had drafted much more, but decided to cut it down to prevent losing anyone in an overly academic morass. To summarize what I see as the answer to this question, there are two necessary components to the rise of modern science, the ideological and the structural. Ideologically, the quest to understand the divine plan through exploration of the natural world was common to both the Christian and Islamic proto-scientists, but when authorities decided this ideological quest was becoming threatening, structural changes that had taken place in Christendom (in part as a result of religious motivations) but not in the Caliphate saved proto-science in the former from the same fate as the latter. Thus Christianity initially helped, then became antagonistic toward emerging proto-science, but by the point that things got antagonistic, structural changes prevented the Church from effectively stamping out the proto-scientific sparks. Let’s expand a bit, but first a note about the Twelfth Century Renaissance.

      In the west, the critical window here was the 12th Century Renaissance and the resulting changes that took place in the 13th century. The 12th Century Renaissance is less well-known than “The” Renaissance of the 15th century, but arguably had more far-reaching consequences in terms of laying the foundations of Western civilization and culture. “Although some scholars prefer to trace Europe’s defining moments back to the so-called Axial Age between 800 and 300 b.c., the really defining transformative period took place during the Renaissance of the twelfth and thirteenth centuries. That is when the extraordinary fusion of Greek philosophy, Roman law, and Christian theology gave Europe a new and powerful civilizational coherence.” (Huff)

      The Twelfth Century Renaissance witnessed two unrelated trends that came together at the end of the 13th century in one seminal decade, which I will unpack in later paragraphs. The first trend is the reintroduction (via translation schools in Toledo, Constantinople and the Papal States) of most of the works of Aristotle, giving birth to a “new intellectual discipline [that] came to be known as ‘dialectic.’ In its fully developed form it proceeded from questions (quaestio), to the views pro (videtur quod) and con (sed contra) of traditional authorities on a particular subject, to the author’s own conclusion (responsio). Because they subjected the articles of faith to tight logical analysis, the exponents of the new rational methods of study became highly suspect in the eyes of church authorities” (Ozment). The second trend was the rediscovery of Roman Law which triggered an immense restructuring of legal rights and entities in the entire Western world: “An examination of the great revolutionary reconstruction of Western Europe in the twelfth and thirteenth centuries shows that it witnessed sweeping legal reforms, indeed, a revolutionary reconstruction, of all the realms and divisions of law […] It is this great legal transformation that laid the foundations for the rise and autonomous development of modern science […]” (Huff).

      Let us now examine how the novelties of the Twelfth Century Renaissance interacted to create the preconditions for science as we know it, through the confluence of the two requirements: the ideological and the structural.

      Ideologically, what was required for the creation of science was the attempt to use existing knowledge to understand the underlying structure of the world, i.e. the codification of the scaffold of natural laws that explains the way the world works. The belief in a knowable creator god seems to have given rise to this concept in the Abrahamic world. In his “history of the world through Muslim eyes,” Destiny Disrupted, Tamim Ansary encapsulates that both Christian and Muslim proto-scientists shared the same goals of understanding God through natural observation: “As in the West, where science was long called natural philosophy, they [Abbasid-era Muslim philosophers] saw no need to sort some of their speculations into a separate category and call it by a new name[…]science as such did not exist to be disentangled from religion. The philosophers were giving birth to it without quite realizing it. They thought of religion as their field of inquiry and theology as their intellectual specialty; they were on a quest to understand the ultimate nature of reality. That (they said) was what both religion and philosophy were about at the highest level. Anything they discovered about botany or optics or disease was a by-product of this core quest[…]”. Islamic and European civilization both shared Greek intellectual roots: “Greek logic and its various modes were adopted among the religious scholars.” (Huff)

      China, in contrast, along with the pre-Christian Mediterranean world, had admirable command of engineering principles and keen natural observation, as exemplified by the likes of Zhang Heng, Archimedes, or Heron of Alexandria. But while visionary voices likely existed in each, neither Classical nor Chinese civilization generally adopted an ideological outlook that sought to comprehend what a set of discrete natural and engineering phenomena, such as the flow of water, the motion of planets, or architectural physics, might all say about the fundamental structure of the world or the will of the divine. To nail the issue even more tightly shut, “traditional Chinese mathematics was not abstract because the Chinese did not see mathematics in any philosophical sense or as a means to comprehend the universe. When mathematical patterns were established, they ‘were quite in accord with the tendency towards organic thinking’ and equations always ‘retained their connection with concrete problems, so no general theory could emerge’” (Olerich 22). In the West, in contrast, the Twelfth Century Renaissance added jet fuel to the existing ideological quest to create general theories and comprehend the universe: “In a word, by importing and digesting the corpus of the ‘new Aristotle’ and its methods of argumentation and inquiry, the intellectual elite of medieval Europe established an impersonal intellectual agenda whose ultimate purpose was to describe and explain the world in its entirety in terms of causal processes and mechanisms” (Huff 152).

      Structurally, what was required for the creation of modern science was the institutional independence of proto-universities to explore questions that ran contrary to social and religious dogmas. As historian Toby Huff explains, the new legal world created by the rediscovery of Roman legal codes was a veritable Cambrian explosion for European institutions and ideas:

      “For example, the theory of corporate existence, as understood by Roman civil law and refashioned by the Canonists and Romanists of the twelfth and thirteenth centuries, granted legal autonomy to a variety of corporate entities such as cities and towns, charitable organizations, and merchant guilds as well as professional groups represented by surgeons and physicians. Not least of all, it granted legal autonomy to universities. All of these entities were thus enabled to create their own rules and regulations and, in the case of cities and towns, to mint their own currency and establish their own courts of law. Nothing like this kind of legal autonomy existed in Islamic law or Chinese law or Hindu law of an earlier era.”  (Huff)

      To expand: up to the 12th century, the legal forms in Western Europe were almost wholly those that had been imported from Germanic feudal law, a highly personalist structure of fealty and dependence – land, businesses, countries, churches were the responsibility of individual lords, artisans, kings, bishops, what have you – who depended on their superiors for the mere right to exist. The idea of corporate personhood, that “corporations are people” (putting aside all of the exploitative and oligarchic connotations it has taken on in the context of the American political scene in the 21st century), was a fascinating, powerful, liberating idea in the 12th century, and one that proved critical to the rise of modern science. Quickly towns, cities, mercantile ventures, and most critically cathedral schools, seminaries, and proto-universities strove to incorporate (literally, “to make into a body”) their existence in the Roman legal mold – no longer were they merely collections of people; they argued their way into being as legal entities distinct from their members and adherents. Further, the Catholic church enriched its canon law with Roman borrowings and promoted the creation of formal legal studies, for example at the University of Bologna. Compounding with the ideological ferment after the reintroduction of Aristotle and other new texts, “European scholars began gravitating to monasteries that had libraries because the books were there[…] Learning communities formed around the monasteries and these ripened into Europe’s first universities” (Ansary), which could then as independent corporate entities survive political or theological pressure on or from any individual member.

      We can quite clearly examine the benefit of this arrangement by counterfactual comparison with Islamic society. Despite the fact that Islamic society also had a quest to understand the divine structure of the physical world and thus shared the same ideological perspectives that gave rise to proto-science, the very different institutional structure of the Islamic world resulted in a very different outcome for Islamic science. As philosophers began to question fundamental religious dogmas such as the necessity of revelation or the infallibility of the Quran, “the ulama were in a good position to fight off such challenges. They controlled the laws, education of the young, social institutions such as marriage, and so on. Most importantly, they had the fealty of the masses” (Ansary). The intellectual institutions such as the Islamic awqaf (plural of waqf, “pious endowment”) that did house these nascent intellectual pursuits were not legally independent but were the dependencies of individual sponsors who could apply pressure – or have pressure applied to them – and their very nature as pious endowments meant that “they had to conform to the spirit and letter of Islamic law” (Huff). Reaching higher into the political structure, in the 10th century most of the Islamic world was united under the Abbasid Caliphate, and consequently a reactionary shift by the government could result in a persecution that reached most of the Islamic world. That is precisely what happened, for the ulama used their influence to force a change in direction of the Caliphate. After a high tide of intellectual ferment, subsequent persecution of the scientist-philosophers under the next Caliph “signaled the rising status of the scholars who maintained the edifice of orthodox doctrine, an edifice that eventually choked off the ability of Muslim intellectuals to pursue inquiries without any reference to revelation” (Ansary). And just to once again contrast with the Far East, “In China, the cultural and legal setting was entirely different, though it too lacked the vital idea of legal autonomy” (Huff). Most importantly in China, the dominance of the Civil Service Examinations served as a gravity well attracting all intellectual talent to a centralized, conservative endeavor, stifling other intellectual pursuits: “This was not a system that instilled or encouraged scientific curiosity […] The official Civil Service Examination system created a structure of rewards and incentives that over time diverted almost all attention away from disinterested learning into the narrow mastery of the Confucian classics” (Huff 164).

      Bringing this all together, in the West the twin fuses of ideological ferment and corporate independence intertwined, and who else would light the spark but the Catholic church? As noted already, the Church realized the threat posed by the rising tide of Aristotelianism and its promotion of rigorous logical examination of the Church’s teachings. Whereas earlier in the 1200s the tendency was to try to find common ground between Aristotle and Christianity, or even to use them to reinforce each other as exemplified by Thomas Aquinas, by the latter part of the century conservative elements in the church saw Aristotelianism as an inherently hostile cancer, and in 1270 and again in 1277 they declared war, issuing (and reinforcing) a blanket condemnation of Aristotle. Historian Steven Ozment explains that “In the place of a careful rational refutation of error, like those earlier attempted by Albert the Great and Thomas Aquinas, Bishop Tempier and Pope John XXI simply issued a blanket condemnation. The church did not challenge bad logic with good logic or meet bad reasoning with sound; it simply pronounced Anathema sit.”


      The momentousness of this decision for the course of western thought cannot be overstated, for it represented an end to the attempt to reconcile theology and philosophy, science and religion. “Theological speculation, and with it the medieval church itself, henceforth increasingly confined itself to the incontestable sphere of revelation and faith[…] rational demonstration and argument in theology became progressively unimportant to religious people, while faith and revelation held increasingly little insight into reality for secular people.” (Ozment, Steven “The Age of Reform 1250-1550”). In short, from 1277 onward the religious got more religious, and the rational became more rational.

      We see in action the importance of corporate independence and decentralized governance, because there were attempts to stamp out Aristotelianism: In England, there were attempts at Oxford in the late 13th century to restrict the teaching of certain Aristotelian texts. In 1282, the Franciscan Minister General Bonagratia issued statutes attempting to limit the study of “pagan” philosophy (mainly Aristotle) among Franciscan students. In the Dominican Order, after Aquinas’s death, there were some attempts by conservative members to restrict the teaching of his Aristotelian-influenced theology. The Dominican General Chapter of 1278 sent visitors to investigate teachers suspected of promoting dangerous philosophical doctrines. But these efforts failed, and universities proudly asserted their newfound legal independence: the University of Toulouse, which had been incorporated only in 1229, declared that “those who wish to scrutinize the bosom of nature to the inmost can hear the books of Aristotle which were forbidden at Paris” (Thorndike). The University of Padua became particularly known as a center for “secular Aristotelianism” in the late 13th and 14th centuries, and maintained a strong tradition of studying Averroes’ commentaries on Aristotle even when these were controversial elsewhere (Conti, Stanford).

      But for the thinkers during and just after this time period, the intellectual whiplash stimulated new thought that truly began the rebirth of scientific thinking in the western world. Instead of blindly taking either the Church or Aristotle at face value, the idea that they could be in conflict gave rise to the idea that either or both could be wrong. Scholars such as Jean Buridan or Nicole Oresme began their studies in religious matters (the former was a cleric and the latter a bishop) before turning to “scientific” studies, but their questioning of both religious and Aristotelian dogmas led them to pierce through accepted certainties, making unique contributions to a wide variety of fields, and they are generally considered to have laid the foundations for the coming scientific and Copernican revolutions.

      III. How Have Science and Religion Interacted Post-Renaissance?

      In a recent post on MarginalRevolution, economist Tyler Cowen linked a new article which reopens this ancient quarrel, at least for the modern era. The opening statement concisely encapsulates the picture painted above: “Today’s leading historians of science have ‘debunked’ the notion that religious dogmatism and science were largely in conflict in Western history: conflict was rare and inconsequential, the relationship between religion and science was constructive overall”, and Cowen adds his commentary that “Christianity was a necessary institutional background”, as I believe the preceding section has shown. But the article by Matías Cabello picks up the story where I left off, and looks at the relationship after the Renaissance. Cabello sees the modern period as unfolding in three stages: an increasingly secular perspective from the late Middle Ages until the Reformation, then a new period of increased religious fervor during the Reformation and Wars of Religion (16th-17th centuries), finally relenting with the dawn of the Enlightenment in the early 18th century.

      Cabello’s chronology lines up closely with my own knowledge of the topic, though I admit that after the Reformation my knowledge of this period is inferior to that of the previous eras. I draw primarily from Carlos Eire’s monumental and renowned book Reformations for knowledge of this period. But in general, there’s a lot of data showing that the Reformation was a much more violent, zealous, and unscientific time than the periods straddling it. A useful theory for understanding the dynamics of religion during this period is the Religious Market theory as formulated by Stark and Bainbridge (1987). In this theory, religions compete for adherents on a sort of market (or playing field, if you will), and in areas of intense competition, religions must improve and hone their “products” to stay competitive against other religions, but in areas where one religion monopolizes the market it becomes less competitive, vital, and active in the minds of adherents. This phenomenon is visible most clearly in the secularization of Scandinavian countries once Lutheranism enjoyed near complete monopoly for 400 years, and is often employed to explain why the pluralistic US is more religious than European countries which usually have one hegemonic church, but I would argue it was also clearly at play in the Middle Ages. By the late Middle Ages, Catholicism enjoyed complete dominance in Western Europe against all rivals, allowing cynicism, political infighting (e.g. the Western Schism which at one point saw three popes competing for recognition over the church), and, most critically, corruption to creep into the Church’s edifice. But when the Protestant Reformation broke out (in large part for the reasons just enumerated), suddenly there were several competing “vendors” who had to knuckle down and compete with each other and with different strands within themselves, leading to increased fanaticism and internecine violence for more than a century. There’s a lot of evidence that corroborates this general trend, for example Witch Hunts, which despite being portrayed in popular culture as a medieval phenomenon were decidedly a Reformation-era thing, as shown in the below chart (to wit, many of our popular ideas of the middle ages come from Enlightenment/Whig writers looking back on the 17th century and erroneously extrapolating from there).

      If I can contribute my own assembled quasi “data set”: a few years ago I put together a western musical history playlist featuring the most notable composers from each time period, and one thing that clearly jumped out at me, before I was aware of this historical topography, was that the music before the Reformation was much more joyous and open (and to my ears, just more enjoyable to listen to) than the rather conservative and solemn music that would come just after. To sum up, a lot of indicators tell us that the period of roughly 1500-1700 would have been a much less creative, open-minded, and probably less fun time to live in than the periods just before or after.

      Getting back to Cabello, one of the novelties of his work is its quantitative approach to what has traditionally been a very non-quantitative area of inquiry, scraping and analyzing Wikipedia articles to see how the distribution and article length of science-related figures shifted over time. His perspective is most concisely presented by his figure B2, reproduced here:

      To quote the author, “This article provides quantitative evidence—from the continental level down to the personal one—suggesting that religious dogmatism has been indeed detrimental to science on balance. Beginning with Europe as a whole, it shows that the religious revival associated with the Reformations coincides with scientific deceleration, while the secularization of science during the Enlightenment coincides with scientific re-acceleration. It then discusses how regional- and city-level dynamics further support a causal interpretation running from religious dogmatism to diminished science. Finally, it presents person-level statistical evidence suggesting that—throughout modern Western history, and within a given city and time period—scientists who doubted God and the scriptures have been considerably more productive than those with dogmatic beliefs.”
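      As a rough illustration of the kind of measurement Cabello describes (and emphatically not his actual code, sample, or metric), here is a minimal Python sketch: it takes a small, hand-picked set of natural philosophers with approximate birth years, pulls each figure’s English Wikipedia article length through the MediaWiki API, and averages by 50-year birth cohort as a crude proxy for scientific footprint. The names, cohort width, and use of raw article length are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of a Cabello-style measurement (not his code or data):
# estimate the "scientific footprint" of a few eras by averaging the English Wikipedia
# article length of hand-picked natural philosophers, grouped by 50-year birth cohort.
import requests
from collections import defaultdict

API = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "history-of-science-sketch/0.1 (personal experiment)"}

# Illustrative sample only; a real analysis would need a systematic, much larger list.
FIGURES = {
    "Jean Buridan": 1301, "Nicole Oresme": 1325, "Nicolaus Copernicus": 1473,
    "Tycho Brahe": 1546, "Galileo Galilei": 1564, "Johannes Kepler": 1571,
    "Isaac Newton": 1643, "Leonhard Euler": 1707, "Antoine Lavoisier": 1743,
}

def article_length(title: str) -> int:
    """Return the byte length of an English Wikipedia article via the MediaWiki API."""
    params = {"action": "query", "titles": title, "prop": "info",
              "format": "json", "formatversion": 2}
    resp = requests.get(API, params=params, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["query"]["pages"][0].get("length", 0)

cohorts = defaultdict(list)
for name, born in FIGURES.items():
    cohorts[(born // 50) * 50].append(article_length(name))

for start in sorted(cohorts):
    lengths = cohorts[start]
    mean = sum(lengths) / len(lengths)
    print(f"{start}-{start + 49}: n={len(lengths)}, mean article length = {mean:,.0f} bytes")
```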

      It is no coincidence, then, that the single most famous skirmish in history between science and religion, the trial and condemnation of Galileo Galilei, came squarely at the nadir of this fanatical period (1633).

      And yet even in this context, science did, of course, continue to progress, and religious beliefs often lit the way. Historian of Science Thomas Kuhn produced what is likely the best analysis to date of how science progresses throughout the ages and how it is embedded in sociocultural assumptions. In his magnum opus, The Structure of Scientific Revolutions, it is clear that the paradigm shifts that create scientific revolutions do not require secularization but rather the communion of different assumptions and worldviews. Consider the different worldviews of the Danish astronomer Tycho Brahe and the German Johannes Kepler, who, as master and apprentice respectively, were looking at the same data but, with different worldviews, came to very different conclusions. Brahe believed that in a biblically and divinely structured universe, the earth must be at the center, and so he rejected the new Copernican heliocentrism. His apprentice Kepler, however, also employed religious motivation, seeing the new heliocentric model as an equally beautiful expression of divine design, and one which squared more elegantly with the existing data and mathematics. Science, thus, is not about accumulating different facts, but looking at them through different worldviews. In one of my posts a few months ago, I mentioned that religious convictions and other reasons can push scientists to bend or break the scientific method, sometimes leading to scientific breakthroughs. One of the clearest examples was the scientific expedition of Sir Arthur Eddington, whose Quaker convictions likely made him see scientific discovery as a route towards global brotherhood and peace. In short, the scientific method is an indispensable tool for verifying discoveries and we neglect it at our peril, but we must not let it become a dogma, for the initial spark of discovery often emanates from the deeply personal, irrational, cultural, or religious ideas of individuals.

      In a recent blog post, Harvard professor of AI and Data Science Colin Plewis posted the following praise of Kuhn: “Far from the smooth and steady accumulation of knowledge, scientific advancement, as Kuhn demonstrates, often comes in fits and starts, driven by paradigm shifts that challenge and ultimately overthrow established norms. His concept of ‘normal science’ followed by ‘revolutionary science’ underscores the tension between tradition and innovation, a dynamic that resonates far beyond science itself. Kuhn’s insights have helped me see innovation as a fundamentally disruptive and courageous act, one that forces society to confront its entrenched beliefs and adapt to new ways of understanding.”

      Conclusion

      Hopefully this post has given a strong argument for the limited claim that Christianity is not a perennial enemy of science and civilizational progress. And perhaps it has also given some evidence for the idea that scientific advancement benefits from the contact and communication of different worldviews, assumptions, and frameworks of belief, and that Christianity, or religious belief in general, is not necessarily harmful for this broader project. Without question there can be places and times in which dogma, oppression, and fanaticism inhibit freedom of thought and impede the scientific project – but these can be found not only in the fanatical religious periods of the Wars of Religion or the fall of the Caliphates, but also in the fire of secular fanaticism such as Lysenkoism or the Cultural Revolution, or even the simple oppressive weight of institutional gravity, as was the case of the imperial exam system in China.

      What can we take away from this historical investigation to inform the present and future?

      Normally, we consider universities and affiliated research labs to be the wellsprings of scientific advancement in the modern West. But higher education in the United States demonstrates an increasing ideological conformity (a 2016 study found that “Democratic-to-Republican ratios are even higher than we had thought (particularly in Economics and in History), and that an awful lot of departments have zero Republicans”), and Americans are increasingly sorting themselves into like-minded bubbles, including politically homogeneous universities, preventing the confrontation with alternative worldviews that is the very stuff of free thought, creativity, and scientific progress. And since popular perceptions imply that this trend has only been exacerbated in the intervening years, it may be that the “courageous act” of “revolutionary science” that “challenges and overthrows existing norms” may have to come from an ideological perspective outside the secular, liberal worldview of modern American academia. It may be that overtly religious institutions like a new Catholic Polytechnic Institute, or explicitly free-speech schools like the University of Austin, no matter how retrograde or reactionary they may appear to many rationalists, are the future of heterodox thinking necessary to effect the next scientific revolution.

      Of course, the future of science will be inextricably linked to Artificial Intelligence, and it remains to be seen exactly what kind of role AI (or AGI or ASI) will play in the future of scientific discovery, leaving me with nothing but questions: (when) will AI have a curious and creative spark? Will it have a model of the universe, preconceptions and biases that determine how it deals with anomalous data and limit what and how it is able to think? Or will it have the computational power and liberty to explore all possible hypotheses and models at once, an entire quantum universe of expanding and collapsing statistical possibilities that ebb and flow with each new data point? And if, or when, it reaches that point, will scientific discovery cease to be a human endeavor? Or will human interpreters still need to pull the printed teletype from Multivac and read it out to the masses like the Moses of a new age?

    1. The Parable of the Day-old Bread


      This post is based on a true story from my wife’s family. She is from southern France, and several of her family members, including a great aunt and uncle, were refugees from Spain in the broad context of the Spanish Civil War and early Francoist regime. Because of this background, they carried with them certain habits and values shaped by hardship, what can only be described as a “culture of poverty.” One of the most poignant and interesting of these legacies was that my wife’s great aunt and uncle voluntarily ate stale, hard bread for most of their lives, even when they had fresh bread in the house.

      To understand this, it helps to know something about Southern European bread culture. In France, Spain, and Italy, bread doesn’t typically mean the pre-sliced, plastic-wrapped loaves common in the U.S. and Northern Europe. Bread in Southern Europe generally means a preservative-free loaf, baked every day at the local bakery (the exact composition for typical loaves is even codified in French law). In these cultures, the texture of the bread is everything: a good loaf is hard and crackly on the outside but soft on the inside. If these loaves are sealed in plastic bags like American sandwich bread, the moisture equalizes between the interior and crust, and the entire thing becomes unappetizingly spongy. So this bread is always sold in paper, which keeps the crust crunchy, with the downside that within a day or so the bread will get dry and hard.

      Some days my wife’s great aunt and uncle would do their shopping and pick up their usual loaf of bread, only to find upon returning home that for whatever reason they hadn’t finished their loaf from the previous day. And like many who live through poverty and hunger, they obeyed one cardinal commandment above all others: thou shalt not waste food. So they would eat the previous day’s loaf, lest it go to waste, before beginning the next one. There would inevitably be some of the new loaf left over, and the cycle would repeat the following day. As a result, most of the bread they ate in their lives was stale, hard, day-old bread. It is easy to imagine that through a slight tweaking of their preferences and choices, they could have foregone eating the old bread and found another use for it, and eaten the fresher, better bread every day. But their preferences were deeply ingrained, hard-coded by their early lives and lived experiences, as unchangeable as the color of their eyes.

      There is perhaps a lesson in this for all of us. We may not eat old, hard bread, but most of us are probably doing irrational things that result from cultures and habits that we have unthinkingly inherited. Many of them are rational, logical choices given certain initial conditions, but these may be material conditions that no longer exist. This is true not just of individual behaviors, but entire cultures and ways of being. Some, like those of my wife’s family, are hard-coded, and we will never be able to change them or even think about changing them. (Sam Harris once very insightfully pointed out that by all rational analysis, no one with an alternative heating system should be lighting a fire in their fireplace, and used this to explain to rationalist-atheists why some people still believe in God.) Others may be preferences, heuristics, or automatic choices that with rational reflection we may realize are inefficient or wrong for the modern world, for example daylight savings time, office work, or the 40-hour workweek. As we go about our days, let us be critical in asking ourselves: what aspects of our cultures, what habits in our lives, are merely stale bread?

    2. Pillars of Sand


      The fundamental shift in Western legitimacy

      A simple theory could explain the fundamental shift in Western politics in recent years. I propose that we are witnessing a shift in the way governments acquire and maintain legitimacy in the eyes of their populace. In determining whether governments and regulatory bodies are “legitimate”, judgements fall primarily into one of two beliefs: process legitimacy, or outcome legitimacy. Until recently, Western polities overwhelmingly believed in “process legitimacy”: democratically elected governments were inherently legitimate because they followed the process, i.e. they obeyed the laws, and came and went with elections. Whether they passed good or bad policies would make them more or less popular, and would help them to win or lose the next elections, but rarely did their basic legitimacy to govern depend on whether their policies were good or bad.

      In recent decades, though, this has begun to shift, and the populaces of western polities increasingly believe in “outcome legitimacy”: governments are legitimate or illegitimate as a function of how well they respond to the socioeconomic and sociocultural insecurities of their constituents. Many polls and studies reveal this deteriorating belief in democracy. Belief in the legitimacy of the Supreme Court or Congress is abysmal. We can see this in stark relief not only in Trumpist claims about the illegitimacy of pluralistic Democratic victories, but also in France, where Macron is decried as illegitimate for his supposed abandonment of the working class. For the former, consider this excerpt:

      “Even if they don’t subscribe to the more outlandish conspiracies propagated by Trumpists, many Republicans agree that the Democratic party is a fundamentally illegitimate political faction – and that any election outcome that would lead to Democratic governance must be rejected as illegitimate as well. Republicans didn’t start from an assessment of how the 2020 election went down and come away from that exercise with sincerely held doubts. The rationalization worked backwards: They looked at the outcome and decided it must not stand.”

      And for the latter example of Macron, FranceInter could not have put the differences in claims to legitimacy better:

      Sur le plan institutionnel, les règles de la démocratie sont simples et claires, le président de la République est celui des candidats qui a obtenu la majorité des suffrages exprimés. Sur ce plan, la légitimité d’Emmanuel Macron est donc incontestable.  

      Pour autant, au soir du premier tour, sur les plateaux de télévision, il y avait quelque chose d’indécent dans la suffisance et l’auto-satisfaction des « dignitaires du régime », souvent passés par des gouvernements de gauche et de droite, avant d’échouer en Macronie… Comme un manque de gravité qui ne correspondait pas aux circonstances et aux enjeux…  

      Pourquoi cela ? Parce que si la victoire d’Emmanuel Macron est indiscutable, la crise de la démocratie est, elle aussi, indéniable. Le président de la République a été réélu sur fond d’abstention massive, en particulier des actifs, face à une candidate qui continue à être délégitimée et même vilipendée par presque tous.

      On an institutional level, the rules of democracy are simple and clear: the President of the Republic is the candidate who has obtained the majority of the votes cast. In this respect, Emmanuel Macron’s legitimacy is therefore unquestionable.

      And yet, on the evening of the first round, there was something indecent about the smugness and self-satisfaction of the “dignitaries of the regime” on television panels—figures who had often passed through both left- and right-wing governments before ending up in Macron’s camp. There was a certain lack of gravity that did not match the circumstances or the stakes at hand.  

      Why is that? Because while Emmanuel Macron’s victory is indisputable, the crisis of democracy is equally undeniable. The President of the Republic was re-elected amid massive voter abstention, particularly among the working population, against a candidate who continues to be delegitimized and even vilified by almost everyone.  

      It is not the first time that outcome legitimacy has been significant in the West: as Jürgen Habermas claims in his 2013 book The Lure of Technocracy, “The [European] Union legitimized itself in the eyes of its citizens primarily through the results it produced rather than by fulfilling the citizens’ political will.” But in general this belief about legitimacy is new to the modern West. It is, however, perfectly valid in other cultural and political systems around the world: Middle Eastern monarchies make no pretense to democracy (in many, the denizens are deemed subjects, not citizens, implying no role to play in the political life of the state); in China, the Communist Party historically has relied heavily on its ability to buoy material prosperity and defend China’s image abroad as its primary claims to legitimacy, rather than on claims to democratic processes or popular election (though China does maintain some nominally democratic institutions).

      A fair follow-up question to ask here is why this process has occurred. I am not entirely sure, but I have some hypotheses. The first is that the one-two punch of terrorism and the recession in the early 2000s created a climate of increased material insecurity and a need to ensure that governments were actually producing results that protected people physically and economically. A second, non-exclusive reason would be the Gurri hypothesis that distributed network technologies are making people more skeptical of governments and institutions, leading them to want more explicit proof that these are working in the public interest. Other explanations surely abound and I would love to hear them in the comments.

      One could conclude that if this trend towards valuing outcome legitimacy continues, Westerners will become increasingly tolerant of undemocratic and unlawful acts on the parts of their governments, so long as they are able to deliver desired results. The stunned tolerance of Elon Musk’s activities in the US Federal Government, to the extent that it holds, may be due in part to a sense of awe that he is able to move so rapidly and effectively and produce the kind of results that Trump campaigned on.

    3. On Aesthetic Progress


      And where it is leading

      I have recently been contemplating the question of whether there can be said to be aesthetic (or artistic) “progress”, or whether changes from era to era are a kind of “random walk” in response to changing tastes and attitudes from generation to generation [in this post I refer primarily to visual art, but we can imagine similar discussions on the matters of literature or music]. Would someone from the 12th century walking through an art museum today see some things and say “wow, this is clearly better than anything from my time”, or would they think “hum, I don’t really see anything that appeals to my 12th century artistic sensibilities”? At some level we can say that there is unarguably technical progress, in that the invention of perspective drawing and the chemistry and economics of modern paint colors (or digital screens) enable modern artists to depict things they never would have been able to prior to the advent of these mechanisms. But how does that square with the question of tastes and sensibilities?

      To some extent this is a false binary, and a hybrid view might bring us closer to understanding the way things really work: while stylistic trends often evolve unpredictably, there is a discernible trajectory toward optimizing aesthetic effects for specific goals, an expansion of the possibility-space of art. Art is not a one-sided enterprise of the consumer, but a dualistic relationship between the audience and the artist. The artist seeks to respond to the desires of the audience, and as techniques and technology advance the artist has a broader palette of potential tools to meet (or shape) the demand of the audiences. The aesthetic preferences themselves may be a random walk, but the ability of artists to meet them undoubtedly progresses.

      If we look at this trend throughout history, artists have always sought to influence their audiences—this is intrinsic to artistic practice, whether trying to induce a noble to give his patronage or to elicit a religious experience by a biblical scene. What has changed in the modern world is the level of precision with which aesthetic choices can now be tailored to achieve intended effects. Previously guided by the artist’s intuition and cultural precedent, aesthetic decisions are increasingly informed by empirical research in the realms of graphic design, marketing, advertising, and Hollywood productions. The integration of psychology, neuroscience, social network data and data analytics suggests that we are advancing toward a model of aesthetic engineering capable of systematically eliciting specific emotions, or reactions from average audiences (or specific audiences). Rather than relying solely on subjective artistic instinct, creators can now leverage measurable data on human perception and cognition to optimize engagement and emotional impact.

      But where is this leading as technological progress begins to outrun human creativity? This leads to an important and dystopian question: does the increasing precision of aesthetic manipulation enhance artistic expression, or does it bleed into manipulation or outright control? It seems evident that different personality types exhibit varying levels of susceptibility to algorithmic predictability (many people follow mass entertainment while others gravitate to niche ventures), suggesting that while the majority may be influenced by data-driven design strategies, there will always be outliers who resist standardized aesthetic appeals. However, the scope of algorithmic influence continues to expand. Psychology and neurology are very likely “solvable” problems that AGI or ASI may be able to use to decode human perception and cognition as systematically as current AI systems are beginning to do with protein folding. If human psychological responses become fully mapped, aesthetic design may transcend broad statistical targeting and instead achieve personalized precision—where an artwork, advertisement, or political message is dynamically adapted in real time to maximize its impact on an individual’s neurological profile, or perhaps to force particular thoughts or actions. Can a human brain be subliminally “hacked” by extension of the same channels by which flashing lights can trigger an epileptic seizure in some individuals?

      I have no answers to these questions, other than suspecting that the answer is probably “yes” to all of them. If we’re going to live in a world with a machine god, we should prepare for the numinous, miraculous, and infernal.

    4. The Life and Death of Honor


      Obituary of one of the oldest human values

      I was recently reading the book version of Disney’s Mulan to my four-year-old son when he asked me what “honor” was. Although I usually pride myself on concocting kid-friendly analogies and simplifications, I truly struggled with his question and muttered something like “people thinking you’re a good person” before moving on. The question stuck in the back of my mind, however,  and I have been continuously mulling how to mentally model “honor” in a concise way. After days of struggle, I began to read, research, and think critically about the idea, and what follows is the digest of that process.

      The concept of honor has been a staple of human society since the dawn of recorded history, and yet somehow in the past 300 years it has gone the way of steamboats and horse-drawn carriages. Honor, today, is a quaint vestige at best and pathologized at worst, coming up most often in the context of “honor killings” or the “honor culture” of the American South. Outside the limited scope of military service, “honor” is nearly synonymous with “machismo” or “bravado”, a puerile attachment to one’s own ego and reputation (or that of one’s kith and kin).

      A comparison of a random selection of some other broad but unrelated terms demonstrated that the fall of honor is not just absolute but relative – freedom, for example, was a minor concern in the 16th century but has since dwarfed honor.

      Interestingly, honor was more prevalent than even “love” in the 16th century, but the opposite holds true since then.
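      For readers who want to poke at this themselves, here is a rough sketch of how one might reproduce such a word-frequency comparison. It queries the unofficial JSON endpoint behind the Google Books Ngram Viewer; that endpoint, its parameters, and the corpus name are assumptions rather than a documented, stable API, so they may need adjusting.

```python
# A rough sketch of reproducing the word-frequency comparison above. It calls the
# unofficial JSON endpoint behind the Google Books Ngram Viewer; the endpoint URL,
# its parameters, and the corpus name are assumptions (Google does not document
# this as a stable API), so they may need adjusting.
import requests

TERMS = ["honor", "love", "freedom"]

resp = requests.get(
    "https://books.google.com/ngrams/json",
    params={
        "content": ",".join(TERMS),
        "year_start": 1500,
        "year_end": 2019,
        "corpus": "en-2019",   # assumed corpus identifier
        "smoothing": 3,
    },
    timeout=30,
)
resp.raise_for_status()

def mean_between(ts, y0, y1, start_year=1500):
    """Average relative frequency over [y0, y1), given one value per year from start_year."""
    return sum(ts[y0 - start_year:y1 - start_year]) / (y1 - y0)

for series in resp.json():
    ts = series["timeseries"]
    print(f"{series['ngram']}: 1500s avg = {mean_between(ts, 1500, 1600):.2e}, "
          f"1900s avg = {mean_between(ts, 1900, 2000):.2e}")
```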

      Wikipedia implies that honor is a proxy for morality prior to the rise of individuality and inner consciousness: in our earlier, tribal and clannish stages of moral development, the perceptions of actions by others and how they reflected on our kith and kin were much more important than any inherent deontological moral value, hence honor.

      And yet there is a part of us that knows that this idea is missing something critical. When we read or watch stories about “honorable” characters like Ned Stark or Aragorn talking about honor, we don’t think of them as being quaint, macho, and motivated by superficial public perceptions of their clans and relatives. We know that when they are talking about their honor, they are talking about being morally upstanding figures who do the right thing regardless of the material and reputational cost (to quote Aragorn, “I do not fear death nor pain, Éomer. Only dishonor.” -JRR Tolkien, Return of the King). When we read this, we know that that is not socially-minded showmanship but rather bravery and altruism, and a reader is supposed to like Aragorn for it.

      Upon contemplating this and reading further, it became obvious that “honor” was a catch-all term for many different qualities. It refers to personal moral righteousness and familial reputation, but it also refers to one’s conduct in warfare, or one’s fealty to one’s liege, and to one’s formal recognition by society. Given the ubiquitous and multifarious uses of the term, and the fact that pre-modern peoples seem to have absolutely obsessed over it (prior to 1630 it was as important as love and vastly more important than freedom), it stands to reason that it was useful and good. So how exactly can we explain the benefits of honor and what it meant?

      The Benefits of Honor

      I came to the following categorizations of how honor worked and why it was useful in pre-modern society, from shortest to longest:

      1. A solution to the game-theory of pre-modern warfare

      In the modern world there is the International Criminal Court to enforce international law and prevent war crimes, and an international press to publicize the violation of treaties and ceasefires. In the premodern world, these institutions did not exist. What prevented pre-modern peoples from regularly engaging in such acts? To some extent the answer is “nothing”, and indeed rape, pillage, and general atrocities were a constant feature of premodern warfare: the ancient Roman statesman Cato the Elder (234-149 BC) coined the phrase “the war feeds itself” (bellum se ipsum alet) to explain that armies would sustain themselves by pillaging and starving occupied territories, and it is telling of the longevity of that mode of warfare that the phrase is most heavily associated with a war nearly two millennia after Cato, the Thirty Years’ War of 1618-1648 AD. One institution that may have held these atrocities in check was the papacy (and later the state churches), though it stands to reason that a commander willing to commit such acts might not be dissuaded simply by threats of additional eschatological penalties. But one additional force holding back the worst of human nature during premodern war may indeed have been the concept of honor. A society that places a high value on honor means that individuals will pay a high reputational cost for such actions as attacking civilians, violating treaties, or encouraging mass atrocities. This societal expectation discourages individuals from engaging in such behavior because they would lose honor – and as honor was transmitted familially, their families would be marked for generations. In a society where legacy is important, staining that legacy in perpetuity for a one-time military benefit may have made some commanders think twice.
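      To make that incentive argument concrete, here is a toy calculation (with entirely invented numbers, not drawn from any source): a one-time military gain from an atrocity is set against a reputational cost that an honor culture imposes on each subsequent generation of the commander’s family, weighted by how heavily the society values familial legacy.

```python
# A toy model (all numbers invented) of the honor-culture deterrent described above:
# compare a one-time military gain from an atrocity against a reputational cost paid
# by each later generation of the commander's family, weighted by how much the
# society values familial legacy.

ONE_TIME_GAIN = 10.0        # immediate benefit of the atrocity (arbitrary units)
COST_PER_GENERATION = 3.0   # dishonor borne by each subsequent generation
GENERATIONS = 10            # how long the stain on the family name is assumed to last

def net_payoff(legacy_weight: float) -> float:
    """One-time gain minus the discounted stream of dishonor across generations."""
    dishonor = sum(COST_PER_GENERATION * legacy_weight**g for g in range(1, GENERATIONS + 1))
    return ONE_TIME_GAIN - dishonor

for weight in (0.3, 0.6, 0.9):
    print(f"legacy weight {weight}: net payoff = {net_payoff(weight):+.1f}")
# With a low legacy weight the atrocity still "pays"; as the society's emphasis on
# familial legacy rises, the lasting dishonor outweighs the immediate gain.
```

      No medieval commander ran this arithmetic, of course; the point is only that the more a society weighs familial legacy, the worse the one-time gain looks.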

      2. A heuristic to encourage the efficacy of pre-modern society and administration

      It is difficult for us moderns to understand the extent to which the pre-modern world was personal. Ferdinand Tönnies, one of the founders of modern sociology, viewed modernity as a process of transitioning from a Gemeinschaft (community) into a Gesellschaft (society). In the former, looking out for one’s friends and family, using influence for their benefit and helping them get into positions of power is considered good and dutiful; in the latter, being dutiful means impersonally discharging the role of one’s office without regard to personal relationships, and doing too much favoritism is considered corruption or nepotism. Indonesia’s former president Suharto once neatly encapsulated the difference (and revealed that Indonesia was still a Gemeinschaft) with the quote “what you call corruption, we call family values”.

Most pre-modern societies, particularly feudal ones, had almost non-existent states; the governing apparatus was largely dissolved and reconstituted upon each monarchical succession. Fourteenth-century England had only a few hundred people in the direct employ of the crown, most of them tax collectors, royal estate and forest managers, and keepers of the peace. A succession meant that a new monarch, with his or her own network of dependents, friends, and trustees, would need to pick and choose councilors and appointees to go out and get things done. How was a monarch to choose people for all of these roles? The work of state administration was built on personal reputation: if a monarch needed something done well, they needed a quick metric by which to assess a prospective appointee. To that end, they could simply use someone’s reputation for honor.

Thus, to the extent that honor encompasses such qualities as honesty and forthrightness, it would encourage the enforcement of contracts and the upholding of laws. If it encodes diligence and a willingness to abide by oaths and be overt in one’s declarations of allegiance, it would help encourage stable politics and relations amongst peers of the realm (an above-board network of alliances allows a rational calculus of when and whether to initiate a conflict; a clandestine alliance system gets us the tangle of World War One). If honor encompasses honesty and charity, it would entail dependability in collecting and remitting taxes and in making the investments necessary to prevent or curtail social unrest among the lower classes. And most importantly, honor was a stand-in for loyalty to one’s liege and the crown. If you’re assigning a general to lead an army or a governor to run a wealthy province, you want to be sure they’re not going to betray you. Honor serves both as a metric for that likelihood and, failing that, as a reputational deterrent falling on a traitor’s descendants.

3. A general shorthand for morality that is responsive to changing moral frameworks

Western civilization has spoken of “honor” over thousands of years, and what it means in terms of personal virtues has changed radically over that time. One of the most important ideas of Friedrich Nietzsche is the distinction between Master Morality and Slave Morality. In Nietzsche’s conception, the Master Morality of the societies that had been slave masters (Mesopotamians, Greeks, Romans) was later overcome by an inverted Slave Morality of those who had been their slaves, i.e. Judeo-Christian moral values.

Let us first examine how honor worked in the Master Morality of the ancient world. In this conception, what was morally good was to judiciously utilize the benefits of being at the top of the social pyramid, to become the most powerful and beautiful version of oneself, to surpass one’s rivals, to fully self-actualize. We see this fully laid out in epics such as the Iliad, wherein what is exalted is martial, mental, and interpersonal prowess. As Hector explains in the Iliad (Book VI), “I have long understood, deep in my heart, to defend the Trojans under Hector’s spear, and to win noble glory for myself and my forebears”. We see this carried over into Roman society, exemplified by the pursuit of glory and the acquisition of familial “honors” – which is how the word (honor) is used when it first enters the Western lexicon. Ancient Romans, particularly in the late Republican period, were absolutely obsessed with acquiring honors, in the plural. In this sense, honors means public recognition via titles, offices, and displays such as the all-important triumph, in which the celebrated man had to be reminded that he was mortal lest he follow the path of Bellerophon and think he had acquired divinity. By the late Republic the quest for honors had become an obsession, and their pursuit was fuel for the civil wars and lust for power that ended the Roman Republic. To wit, Cicero comments in his De Officiis (On Duties), “honor is the reward of virtue, and the pursuit of honor is the very aim of every great man. It is the highest obligation to seek the recognition of those who are worthy, not for personal gain, but for the service of the state.” And as later Roman commentators looked back on that period, Livy observed, “No man is truly great unless he has acquired honor through the strength of his own actions. It is the pursuit of honor that drives men to greatness” (History of Rome, Book 2). In other words, honor in the Master Morality framework was exogenous, not endogenous. It was about getting others to recognize your greatness.

The rise of the former slave populations with their Slave Morality truly inverted things. In the Gospels Jesus inveighs repeatedly against the pursuit of public recognition: “When you give to the needy, sound no trumpet before you, as the hypocrites do in the synagogues and in the streets, that they may be praised by others” (Matthew 6:2); “you know that the rulers of the Gentiles lord it over them, and their great ones exercise authority over them. It shall not be so among you. But whoever would be great among you must be your servant, and whoever would be first among you must be your slave” (Matthew 20:25-27); and further, “for they loved the glory that comes from man more than the glory that comes from God” (John 12:43). The message of humility was naturally taken up by those who were already materially humbled by the existing socioeconomic order (the slaves and urban poor) and resisted by those who had the most to lose – the practitioners of the Master Morality of public honor and glory. Over the following three hundred years Christianity came to be the dominant religion of the Mediterranean world. What followed in the West was a literal moral revolution, the overthrow of the Masters by the Slaves, and the creation of a new order in which honor was still a goal, but the means of its attainment shifted radically. Thus by the 5th century St. Augustine echoed the same message: “Let us then approach with reverence, and seek our honor in humility, not in pride” (City of God, published 426). Through the Middle Ages we hear from Dante Alighieri that “The greatest honor is not that which comes from men, but from God. And the greatest humility is knowing that, without His grace, we are nothing” (Divine Comedy, 1321). And at the end of the Medieval period, Sir Thomas Malory repeats the same message, that “He who is humble in heart, his honour shall be pure and his glory eternal; but pride is the enemy of honor and virtue” (Le Morte d’Arthur, c. 1470).

      The Death of Honor

As we saw from the n-grams above, honor died around 1700 in Western Europe, as “the old aristocratic code of honor was gradually replaced by a new middle-class ethic of self-discipline, hard work, and social respectability” (Lawrence Stone, The Crisis of the Aristocracy, 1965). But following the trend line, its subsequent exhumation in the 19th century was as a pastiche to reminisce about or poke fun at, not as a genuine revival of the cultural value. For example, in Sir Walter Scott’s magnum opus Rob Roy, the word “honour” appears 152 times in just a little over 500 pages. Jane Austen used the term quite often, 256 times across her collected works, but anyone who has read Austen will know that she came to bury honor, not to praise it.

The exact reasons for the decline of honor are difficult to pinpoint, as there were myriad processes unfolding at the time. The Enlightenment subjected many cultural values to rational scrutiny, and Enlightenment thinkers like Voltaire had no mercy for a concept they urgently wished to be rid of: “The idea of honor has caused more bloodshed than any other human folly” (Voltaire, Candide, 1759). Looking back at the typology of honor’s benefits above, we can see that many of the reasons for honor’s existence had become moot by the 18th and 19th centuries. The growth of centralized bureaucratic states allowed expanded recordkeeping and objective evaluations of merit, eliminating the need for reputational heuristics. Increased law, order, communication, and infrastructure meant greater movement and atomization of the individual as the Gemeinschaft gave way to the Gesellschaft; familial reputation gave way to licenses, certifications, degrees, and other formal signals of social status. And since “honor” had been a catch-all stand-in for any number of human virtues and moral qualities, it could with little difficulty be replaced by more precise terms for those qualities, e.g. “honesty” or “bravery”. Thus “honor” came to be a term of critique for those areas of the world most resistant to modernizing and Enlightenment influences, where “honor culture” and “honor killings” persist as remnants of what was once the dominant mode of human thought and moral reasoning.

      Epilogue: Another Resurrection?

Looking back at the n-grams, we can see one last uptick in the trend line beginning in the late 20th century. While no clear explanation has been put forth for this, one obvious suspect is the rise of historical fantasy. With Lord of the Rings, Dungeons and Dragons, Game of Thrones, and countless other fantasy worlds attracting millions of fans on screens, pages, and tabletops, it is little wonder that concomitant historical concepts such as “honor” should rise in popularity as well. This explanation is supported by the fact that historical-fantasy tropes such as dragons, empires, knights, and castles have seen large gains in popularity since the 1990s, while historical terms with less resonance in fantasy, such as “crusade”, “dowry”, “abbot”, and “chastity”, show no such gains.

      Nevertheless, the usage in the English lexicon of “honor” remains a small fraction of what it once was. Honor continues to interest us academically and fictionally, but there is little chance of it returning to guide our choices and moral values in the here-and-now.

    5. The Murder of Brian Thompson: an applied lesson in deontology versus consequentialism

      The Murder of Brian Thompson: an applied lesson in deontology versus consequentialism

      The murder of Brian Thompson is a morally and emotionally challenging event. Many people feel that some sort of justice was administered, even though the matter concerns premeditated murder. What is justice in this situation? Why has this event provoked such strange and passionate reactions?

We return to a topic I wrote about recently, the difference between deontological and consequentialist moralities. In the deontological sense, murder is generally considered wrong (precisely how depends on which deontological system we are referring to, but most hold that murder is simply impermissible). In the consequentialist sense, murder can be moral if the results are sufficiently beneficial. The idea that this murder was justice derives from a consequentialist understanding of justice: a sufficiently ostentatious display of punishment for a behavior, even a brutal and disproportionate punishment (think cutting off the hand of a thief and hanging it above the city gate), can prevent the offending behavior from recurring and thus, in the long term, improve the aggregate quality of life.

      When I posted on social media explaining the consequentialist justice argument, a wise acquaintance responded asking if it was not exactly the same as pro-lifers advocating the killing of a doctor who performs abortions. He also posted Meditation 17 by John Donne:

No man is an island, entire of itself; every man is a piece of the continent, a part of the main; if a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend’s or of thine own were; any man’s death diminishes me, because I am involved in mankind, and therefore never send to know for whom the bell tolls; it tolls for thee.

I appreciated the critique and understood the invocation against murder. I agree that consequentialist reasoning can potentially be used to excuse anything, and a world in which more people felt emboldened to murder those they had sociopolitical disagreements with would be a worse one for everyone. But consider the following: what if Thompson had been shot with a bullet that, instead of inflicting physical damage, inflicted bankruptcy and years of heartache and misery, as his choices had done to his clients?

What I mean to say is this: the consequentialist excuse for murder is a game anyone can play at, progressives and pro-lifers alike. But those who revel in the murder see Brian Thompson’s own actions and executive choices as symptomatic of another brand of legal and moral decay, one that allows the wealthy and powerful to prey on the weak with complete impunity so long as they follow the byzantine prescriptions of laws written by the same ilk for their own benefit. Should the penalty for this immorality be murder? Deontologically, clearly not. Even consequentially, as I said, it would be bad if everyone started murdering everyone they disagreed with. But I think even deontologists would agree that there should be severe punishment for the actions of Brian Thompson and those who do similar things – enough to force those in his position to think twice about the welfare of the clients whose lives and livelihoods depend on their companies delivering certain services. This is a legal and institutional failure. The solution for the pro-lifers was to take control of institutions and effect the legal changes that stopped abortion in the polities they control.

      The answer is institutional and legal, then: if the clients of a company could all get together and vote to dispense bankruptcy and misery on CEOs who did this kind of thing to them, then that would probably be a better world from both consequentialist and deontological perspectives.

      Until then, the question is: which is closer to justice for the actions of Brian Thompson: murder, or impunity?

    6. Countering Chinese Nationalist Talking Points

      Countering Chinese Nationalist Talking Points

      Update: please see the update note after the guide image for some additional arguments and refutations.

I compiled a handy guide to some of the most common strategies and talking points used by Chinese nationalists online (on forums like Twitter and Reddit). [Sharable image first, copy-able text follows.] This list is far from exhaustive, but it should be a good base for combating most arguments. Please share additional talking points or strategies in the comments.

One overriding thing to note: anyone in China has to use a VPN – and thereby violate Chinese law – in order to engage on these forums in the first place. So don’t hesitate to draw attention to their hypocrisy and disrespect of Chinese law.

Update: This was posted on Reddit, and the discussion there generated many more arguments and responses. Consider the following exchange:

      These are really low hanging fruit. What about the more difficult points to combat that nationalists often make? How do we counter misinformation like this:

      “It’s easy to criticize the CCP, but don’t the people have a right to say they want a government and society that is different from what Americans have? How do you promote freedom and human rights without also weakening the institutions that maintain China’s independence and uniqueness we value which many other countries have lost to globalization and westernization?”

      “I think that the integration of China’s economy with the US has promoted the values we all want to see adopted by our government: free trade, freedom of movement, freedom of expression, etc. But now, the US is severing ties with China by imposing tariffs (even on goods like solar panels and EVs which are desperately needed to combat climate change), sanctioning and banning Chinese companies, and regressing to unfair trade practices like subsidizing domestic industry — practices it has criticized China for. How can the CCP in its current form be opposed when the good actors on the global stage like the US can’t be relied on to help in this fight and demonstrate correct behavior? How can we pressure the CCP when the US wants to punish China rather than shape China for the better?”

      “Whenever the extremely high incarceration rate in the US is brought up, the disproportionate imprisonment of minorities there, or the forced labor practices the US and its state governments engage in, people always do whataboutism and say hush, you have no room to talk when the CCP is doing the same and worse in Xinjiang and Tibet. I think we should oppose human rights violations no matter where they happen in the world, but the conversation always gets turned to sanctions against China and opposing the CCP. In contrast, you’ve never heard someone say ‘it’s time for regime change in the US’ or ‘why not have sanctions against the US for its crimes’, and that’s because the US is still the global policeman, judge, jury, and executioner. It’s above reproach, above the law, and unaccountable to anyone. The US should be expected to be a state party to the Rome Statute; it should be expected to support and comply with the WTO; it should be a state party in the Paris Climate Accords all of the time, not just when it feels like it. If not for its military power, the US would be considered a rogue state.”

      A (self-described) Chinese commenter replied to these points (my posting them here is not an endorsement):

      As a Chinese person to answer these questions:

      The Chinese people certainly have the right to choose a government that is different from that of the United States, but the Chinese Communist Party (CCP) has not given the Chinese people the power to choose a government that is different from that of the CCP

      The CCP has frantically suppressed civil society, from rights lawyers to investigative journalists to ordinary citizens. The CCP has used every means to crack down and persecute them. More than a decade ago, an old man took the initiative to monitor the misuse of public vehicles by officials. The CCP secret police lured him into prostitution with a scam and made it public. An attempt was made to ruin his reputation.

      The CCP does not practice free trade. Take the communications industry for example. The CCP pretended to open up the communications industry when it joined the WTO, and after it joined the WTO, it opened up only a very small number of proliferating businesses. The same thing happened to the insurance industry. The CCP has formulated a series of “documents” to create a glass ceiling for foreign investment. Foreign investors are not allowed to participate in the most important insurance business at all. By contrast, it was not until the Trump era that the US government began to restrict Chinese telecoms operators from doing business in the US.

      Liberalism itself encourages independence and uniqueness. Holding independence and uniqueness against Western civilisation, Hong Kong, the most liberal city in China, retains the most traditional culture. Under the rule of the Chinese Communist Party, people had been forced to destroy countless traditional cultures. They even destroyed the tomb of the legendary “Yellow Emperor”, the ancestor of the Chinese people. The independence that the CCP tries to retain is in fact their uninterrupted rule over the Chinese people.

      Every country violates international law to a greater or lesser extent. But the United States remains the foremost defender of the international order. On the question of the US supporting Ukraine with tens of billions of dollars against the Russian invaders, China is supporting Russia on a massive scale. Including, but not limited to, massive prepaid energy orders, drones, industrial equipment.

      Without further ado, the guide:

Strategy: Whataboutism
Definition: AKA the “tu quoque” fallacy – turning an accusation around without actually addressing it.
Examples: Criticizing the CCP ➔ “Oh, America is perfect?”; criticizing Xi ➔ “But Trump did…”; criticizing Xinjiang ➔ Native Americans, slavery; “You don’t have freedom or democracy in the US, everything is controlled by corporations.”
How to counter: Agree that these things are all bad and that it’s important to oppose them anywhere in the world.

Strategy: Jingoism
Definition: An overt assertion of national strength.
Examples: “You can gloat now, but pretty soon we’ll own your countries”; “You’re just angry that China has managed Covid better than you and you’re left with a failed government that’s getting you killed.”
How to counter: The West laments its imperialist past. Why does China want to make the same mistakes the West did? Point out that most people around the world don’t tie their pride to their national strength; what matters is whether people are having happy lives. How does international power make them happy?

Strategy: Economic Essentialism
Definition: Using China’s economic growth to excuse unrelated things.
Example: “Sure the government wanted to put down the rebels in Tiananmen in 1989, but clearly it was justified considering how much economic growth China has achieved.”
How to counter: Why can’t China figure out how to have economic growth with freedom? Point out that countries like Japan, Singapore, Korea, and Taiwan have done so. It’s not one or the other. Why does the CCP fear its own people?

Strategy: Hansplaining
Definition: Resorting to the “mystery” that is China, which foreigners will never understand.
Example: “It’s easy for you to criticize something you don’t understand. Only real Chinese who grew up in China would understand why this is necessary.”
How to counter: It’s fine for a culture to be complicated and difficult to understand. But how can such a culture become globally competitive?

Strategy: Nation-Government Conflation
Definition: Interpreting an attack on the CCP/government as an attack on the Chinese people.
Examples: “Me and my country can never be separated”; attack on the CCP ➔ “Why are you racist against Chinese people? What have we done to you?”
How to counter: Breaking the government/nation conflation is the key to fighting Wumaos. CCP propaganda has indoctrinated people to believe that an attack on the CCP is an attack on the Chinese people. We need to be clear that the world would love to see a prosperous, happy, and free Chinese nation.

Strategy: Outright distraction
Definition: Taking a conversation that is going against China and making inflammatory (usually political) comments to distract.
Example: “Do you think Biden or Trump is the bigger tool of China?”
How to counter: Call out the blatant CCP distraction, downvote, and move on. Do not feed the trolls.

Strategy: Praise of China
Definition: Posting articles or comments that explain how good something is in China.
Example: “China has built the world’s fastest supercomputer…”
How to counter: “It’s so cool what humans are capable of. Who cares that it’s Chinese?” Agree that it’s great. Every country has great things. That doesn’t confer greatness on the other 1.4 billion Chinese any more than it confers greatness on non-Chinese.
    7. Arguments for Natalism on the Left

      Arguments for Natalism on the Left

Natalism, the belief in the need for higher birthrates, is increasingly a topic of concern for various thinkers and prognosticators (Robin Hanson, Tyler Cowen, Zvi Mowshowitz, and Elon Musk, among many others). However, the calls for natalist policies come almost unanimously from the political right. I would like to argue that it would behoove the political left to take up this banner as well.

The reasons the left has been reluctant to promote natalism are somewhat obvious. One of the core ideological constituencies of the political left in many developed countries is young educated professionals, many of whom are child-free: some simply by the vicissitudes of professional life, and some by ecological or personal choice. For the child-free members of this group, to embrace natalism would be hypocrisy, and for a leftist group or party to embrace natalism would be to risk alienating this important source of votes, funds, and political energy. Natalism is, moreover, closely associated with the “traditional family” and “family values”, typically conservative calling cards.

That said, there are two strong arguments for the left embracing natalism, one of them Machiavellian and the other Darwinian.

The Machiavellian argument is simply that natalism could be a powerful argument and political tool for advancing many leftist causes. I will take the American example here; the US is out of step with most Western countries on these issues, but the example should be illustrative for other political systems nonetheless. Some of the dreams of the American left include expanding public healthcare, instituting paid medical and parental leave, and funding public schooling, including higher education. A powerful natalist argument is that the cost and burden of having, raising, and educating a child is prohibitive, and that this is a significant reason many adults in developed countries choose not to have children. By putting these policies in place, the cost of having, raising, and educating a child is distributed to society as a whole, just as the benefit of having that additional participant in the economy is distributed – public goods should have public funding. Should the American political left embrace natalism, it could seek common cause with natalists on the right and find compromises on these policies for the sake of boosting the birthrate.

On the Darwinian side, leftists should consider embracing natalism to ensure their ideological and demographic sustainability. On the short-term, national scale, if left-leaning individuals and groups continue to have lower birth rates than their right-leaning counterparts, the political landscape could shift significantly over a few decades; higher birth rates on the right could lead to a future where conservative values and policies dominate simply through numerical superiority and intra-familial transmission. As Robin Hanson argues, over time this could mean a far future populated by the descendants of high-fertility subcultures like the Amish and Ultra-Orthodox Jews, who are of course very religious and conservative. When Hanson first promulgated this idea, I was resistant and argued that

      “The idea that society will be dominated by the high-fertility subcultures is reductionist and assumes that the part of society one is born into is nearly perfectly correlated with the part of society one affiliates with as an adult, which is not the case. Conservative religious groups have higher fertility, but many people raised in those environments convert to more secular or liberal worldviews as adults. Parts of society that don’t have high fertility compete with high-fertility parts by being more alluring. Equilibrium can continue indefinitely.”

However, I did the math, positing a scenario in which a dominant culture D has a fertility rate of 1.5 and a subculture S, only 5% of the population, has a fertility rate of 4. To ensure that S never becomes dominant, the conversion rate from S to D needs to be approximately 29.33% per generation – that is, for every 100 S individuals, roughly 29 need to convert to D each generation to prevent S from ever becoming the majority. That is a high barrier, considering that fewer than 10% of Amish leave their communities. It would be much easier to simply increase the fertility rate of mainstream society.[1] By promoting and supporting family-friendly policies that encourage higher birth rates within their communities, leftists can help ensure the demographic vitality of the coalition.
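For readers who want to play with the numbers, here is a minimal Python sketch of that scenario. The assumptions are mine and not necessarily those behind the 29.33% figure: each group’s population is multiplied by fertility/2 per generation, and a fixed share of each new S generation converts to D. Under these particular assumptions the break-even conversion rate comes out a bit over 31%, in the same ballpark as the figure above; different modeling choices (when conversion happens, whose fertility converts adopt) would shift it somewhat.

```python
# Toy model of the D-vs-S scenario (my own simplifying assumptions, purely
# illustrative): each generation a group's population is multiplied by
# fertility / 2, and a fixed fraction of the new S generation converts to D.

def share_of_S(conv, s0=0.05, fert_D=1.5, fert_S=4.0, generations=40):
    """Population share of subculture S after `generations` generations."""
    d, s = 1.0 - s0, s0
    for _ in range(generations):
        s_born = s * fert_S / 2          # children born into S
        d_born = d * fert_D / 2          # children born into D
        converts = s_born * conv         # share of S's children who leave for D
        s, d = s_born - converts, d_born + converts
    return s / (s + d)

for conv in (0.20, 0.30, 0.35):
    print(f"conversion rate {conv:.0%} -> long-run S share ≈ {share_of_S(conv):.0%}")
```

Running this prints roughly 68%, 52%, and 44% for conversion rates of 20%, 30%, and 35% respectively, so under these assumptions the tipping point sits just above 30%: below it, S eventually becomes the majority; above it, D holds on.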

From the long-term, global perspective, falling birth rates in secular, developed countries can lead to a significant population imbalance relative to developing countries, which, without stereotyping, are on average less secular and egalitarian than Western countries. This will put secular liberal values at a disadvantage globally in bodies such as the UN or its successors. Further, countries experiencing starkly declining populations may increasingly rely on immigration to sustain their economies and address labor shortages (NB: I am pro-immigration and this is on the whole a good thing!). However, as shown in the previous link, this immigration will increasingly have to come from nations with more conservative cultures, placing increasingly difficult demands on systems of integration and assimilation, which may over time threaten the influence of liberal and secular ideals (we don’t have to go full Houellebecq and imagine some abrupt takeover). This process can be slowed and eased by boosting domestic fertility.


      [1] Note that this sort of scenario only really plays out in a peaceful world; in a more belligerent time like in most of human history, dynamism in social organization and scientific and technological advance allowed the dominance of countries with small populations over larger ones; see, for example, the Mongol, British or Japanese victories over China, or Prusso-German successes over Russia, or for the most extreme examples the incursions of Pizarro and Cortez in the Americas.

    8. How Important is the “Scientific Method”?

      How Important is the “Scientific Method”?

Tyler Cowen recently posted on Marginal Revolution the question “How important is the ‘scientific method’?” This called to mind the following paper I wrote several years ago, in which I analyzed how many of the most important scientific discoveries of all time came about from scientists who eschewed “good science” and followed their hearts, biases, convictions – whatever you want to call them. Since writing this analysis, my opinion on the matter has ebbed and flowed – I think increasingly we need to turn to established institutions and procedures to help navigate the rising tide of disinformation, fake news, and fake science, but at the same time I think we must be more skeptical and critical of such institutions, as they do maintain the ability to shut down dissenting voices or heterodox research. The recent Washington Post article on heterodox research in anthropology/archaeology is a great case in point. Open-sourcing the publication and peer review of research, as the article illustrates, is one possible avenue, but does that solution prevent or facilitate the capture of those review processes by bad actors (botnets, special interests, internet vigilantes)? Without question the world is heading toward a fundamental restructuring of core processes and institutions that have served us well for centuries, and Western civilization is facing its greatest epistemic upheaval since the Protestant Reformation. For those of you seeing this blog for the first time, that is a core theme of my writings. For other articles touching on this theme, check out this, this, or this.


Without further ado, the paper itself:

                  Scientific progress comes in a variety of forms, be it via flashes of inspiration, methodical research and investigation, or simply the products of fortunate accidents. Nevertheless, through the history of western science, certain standards have arisen regarding the way in which “true” science should be done. The scientific method remains the centerpiece of these standards, stressing repeated observation, testing of hypotheses both new and old, and checking our assumptions against the realities of the physical world. In general, it is also considered standard to report all findings, whether they support current theories or not, and thus manipulation and selection of data to support preconceived notions is frowned upon, and generally considered unscientific. This stance, however, is the ideal; in reality, manipulation and staging of data has given us some of the greatest scientific breakthroughs in history – and perpetuated some of the worst misconceptions. The question, therefore, is not whether manipulation takes place in good science, but why it takes place and how science can be good despite it. Ultimately, we find that selection and staging of data has occurred throughout modern science, and that the reasons for it are based not on failures of the methods and standards of scientific practice, but rather on external social and personal influences which take advantage of the fact that science is, at its core, a human endeavor.

                  In their critical analysis of the history of modern science, The Golem, authors Collins and Pinch discuss at length the 19th century debate over spontaneous generation versus biogenesis, and the role that Louis Pasteur played in the battle of scientific viewpoints. The account provides an excellent illustration of how good science can still be wrong, while “impure” practices can still illustrate the truth. Pasteur’s rival, Felix Pouchet, a staunch proponent of spontaneous generation, conducted a series of experiments to prove that life could arise from non-living material. His results were fully documented, and his experiments always showed the rise of life from his supposedly sterilized materials (Collins and Pinch 84-86). On the other hand, Pasteur conducted many experiments, of which some also seemed to show abiogenesis. Pasteur conveniently disregarded those experiments, only publicly reporting those outcomes which supported his hypothesis; “he did not accept these results as evidence of the spontaneous generation hypothesis” (C&P 85). Ultimately, Pasteur was able to align the scientific community in his favor, and biogenesis became the accepted theory of the propagation of life (C&P 87-88). The question this requires us to ask, though, is how selected and staged data ultimately came to be proven correct. Clearly, Pasteur’s methods broke with “standard” scientific practice even in the 19th century and even more so today, so there appears to be no direct connection between factual accuracy and adherence to the scientific method. Yet the scientific method remains the cornerstone of scientific research and investigation, so perhaps there is more to the nature of science than the example of Pasteur can adequately illuminate.

Perhaps a more blatant example of staging and manipulation is that of Arthur Stanley Eddington and his astronomical expeditions of 1919. Determined to prove Einstein’s theory of relativity correct, Eddington and his fellow researchers set sail for the equator to make observations of a solar eclipse, a unique opportunity to test relativity’s predictions of gravitational lensing. The expedition’s observations, however, were ambiguous, with some supporting Einstein’s predictions and others Newton’s (C&P 48-49). Eddington chose to publicize only those data which supported Einstein. His reasons for this are mixed, and The Golem and Matthew Stanley’s article explain his staging in different ways. The account from The Golem portrays Eddington’s thought process as simply ignoring systematic errors and focusing on those results which seemed devoid of irregularities; as a result he was conveniently left with those results which confirmed relativity. By that account, Eddington was doing “good” science, because he knew well the objective realities of his work and was able to determine what data should and should not have been taken into consideration.

The Stanley article, on the other hand, paints a very different picture of Eddington and his motives. The article notes that prior to the time of Eddington’s observations, international science had grown petty and nationalistic, in many ways tied to the bellicose technological advances of the First World War. The horrors wrought by destructive technologies such as the battle tank, advanced artillery, and poison gas by the end of the war led to public apprehension and disdain toward scientific achievement (Dr. Ralph Hamerla, Lecture). According to Stanley, Eddington sought to change that sentiment. Raised a Quaker, and thus with a more humanistic and non-nationalistic outlook, Eddington sought a transnational approach to solving the problems of mankind (Stanley 59). He believed that a British expedition into Africa and South America to confirm a German’s theory would speak volumes for the international approach necessary for beneficial scientific advancement (Stanley 59). In light of Stanley’s article, then, Eddington had a personal motive and religious background which may have biased his observations and his decisions regarding the staging of his data.

      This brings us to an important question about not only Eddington, but manipulation and staging of data in general. Are data and observations manipulated intentionally and consciously by those who present them, staged because of subconscious influences, or is there more to the matter than that?  Returning to the example of Pasteur, The Golem reveals that “Pasteur was so committed in his opposition to spontaneous generation that he preferred to believe there was some unknown flaw in his work than to publish the results…He defined experiments that seemed to confirm spontaneous generation as unsuccessful, and vice versa” (C&P 85). Here, then, is another example of a scientist who does manipulate and disregard data in a conscious way, but not for any conscious reasons.  Rather, his preconceived notions about what was to be expected influenced his interpretation of the data. Eddington’s actions likely followed a similar path.  His Quaker, internationalist attitudes and desires may have subconsciously caused him to see systematic error in the observations he made, and those observations which confirmed relativity appeared relatively flawless to him. It is always possible, however, that he made his interpretation purely out of scientific objectivity, but Pasteur’s example seems to make the first possibility more likely.

      These examples cannot be taken to imply that manipulation is always done without conscious intent, however, nor that such staging of data always contributes to a better understanding of natural realities – quite the contrary on both counts.

Though one could imagine that a conscious manipulation of data and figures would be intended to change the perceived outcome of an experiment, Charles Darwin used staging to the opposite effect: to better explain the argument he was already making, from conclusions he had already drawn. Philip Prodger’s essay in “Inscribing Science” addresses Darwin’s intentions behind his manipulation of photography. In The Expression of the Emotions in Man and Animals (hereafter “Emotions”), Darwin makes great use of the then-fairly-new medium of photography to provide actual images of expressed emotions in people. As Prodger asserts, “His photographic illustrations were carefully contrived to present evidence Darwin considered important to his work” (Prodger 144). Given that the medium was relatively new at the time, it had its limits in terms of both detail and exposure time, “in the order of tens of seconds” (Prodger 156). As a result, some tweaking, staging, and manipulation were necessary to accurately convey Darwin’s selected evidence. He collaborated with both Oscar Rejlander and Duchenne de Boulogne in generating images for Emotions, with the former providing photographs of posed emotions and the latter photos of electrode-induced facial expressions. Darwin manipulated both for his book. In the case of Duchenne, Darwin removed the electrodes and scientific equipment from the photos, leaving only the induced emotion visible in the book (Prodger 166-169). One of Rejlander’s photographs, known as Ginx’s Baby, was so important for the book that Darwin created a photograph of a drawing of the photo, so as to ensure that all details of the image were captured perfectly (Prodger 173-5). At the same time, for the photographs produced by both men, Darwin changed the settings of the subjects, going so far as to place Ginx’s Baby into a comfortable chair.

      Ginx’s Baby:

Darwin’s reasons for his manipulations seem obvious enough. The photography of the day, with its long exposure times and imperfect detail, was incapable of capturing the split-second nuances of human emotional expression. It was thus difficult to communicate, via photography, the scientifically important intricacies Darwin needed to support his claims. However, he could observe these emotions and draw his conclusions from them as they happened. Thus, Darwin was not truly manipulating his data, merely the means by which he passed it on to others, which casts serious doubt on the idea that he had deceptive motives behind his alterations.

      Innocence may be questionable, however, in the case of the alteration and manipulation of data relating to racism and biological determinism during the 19th and early 20th centuries. In The Mismeasure of Man, Stephen Jay Gould analyzes the manipulative means – intentional or otherwise – that racial scientists and craniologists employed in the dissemination of data relating to innate racial differences and phylogeny. His analysis of Samuel George Morton gives keen insight into the thought process of a blatant manipulator of data. In Morton’s presentation of data about average brain size among races, Gould states that he “often chose to include or delete large subsamples in order to match group averages with prior expectations,” that in his reports “degrees of discrepancy  match a priori assumptions,” and that “he never considered alternate hypotheses, though his own data cried out for a different interpretation” (Gould 100). All of these cases seem to demand the inference that Morton was consciously and actively manipulating data to match his own preconceived notions about racial characteristics. Yet Gould himself takes the other side, stating that he “detect[s] no sign of fraud or conscious manipulation…Morton made no attempt to cover his tracks…he explained all of his procedures and published all of his data” (Gould 101). He comes to the conclusion that the only motivation behind Morton’s warping of data was “an a priori conviction about racial ranking” (Gould 101). Yet despite such flawed data, “Morton was widely hailed as the objectivist of his day” (Gould 101).  The fact that he was hailed as such clearly demonstrates the degree to which bias and misconceptions permeated society. Based on his and others’ studies, craniometry and racial sciences perpetuated the ideas of white racial superiority well into the twentieth century.

We are therefore left with an indecipherable mixture of outcomes arising from the manipulation of scientific data in departures from the purity of the scientific method. With Pasteur and Eddington, their assumptions about the “correct” outcome of their experiments allowed them to “know” which data to exclude and which to accept. Both were ultimately proven correct, but whether their correctness was due to superior scientific understanding or pure luck is not an answerable question. With Darwin, the manipulation was clearly intentional but the purpose benign: to better communicate technologically limited evidence. His conclusions regarding the related emotions in humans and other animals are now supported by overwhelming scientific evidence – thus his case was one of superior scientific understanding. In the case of Morton and the racial scientists of the 19th and 20th centuries, it is clearly visible that preconceived notions can lead science down the wrong path as well as the right one. In their quest to prove assumed facts, these scientists ignored alternate interpretations and instead caused the stagnation of “objective” scientific perspective on human physiology and evolution, while perpetuating social ills for a century or more.

It can be seen that assumptions and a priori conceptions about an area of science can utterly change the scientist’s perception of the outcome. However, our initial question about the cause of these preconceptions has many possible answers. Social and religious goals are candidates, as the examples of Morton and Eddington respectively show, but pride, arrogance, or simple variances in scientific understanding are equally valid conclusions. It seems foolish, however, to assume that humans, who carry opinions and preconceptions in every area of their personal lives, could be capable of completely ridding themselves of those same opinions when it comes to the pursuit of science. One can conclude that as long as humans engage in the endeavor of scientific inquiry, they will bring with them their imaginations, opinions, and cultural biases; whether bringing those variables into scientific pursuits ultimately adds to or detracts from the quality of human scientific achievement is a subjective matter that we cannot hope to settle through prattling verbosity.

    9. China’s Coral Reef Economic Stimulus

      China’s Coral Reef Economic Stimulus

      Chinese manufacturing policies are unsustainable. That doesn’t mean they won’t accomplish China’s goals.

      The Chinese economy has been drawing contradictory comments in recent months. Amidst the gloom and doom of prognosticators declaring that the Chinese economic engine may finally be stalling, there is the new and sudden alarum about the flood of cheap Chinese exported goods that are now overwhelming global markets. While these opposing narratives may seem incompatible – how could a stalling economy be so productive and competitive? – they are actually very closely related. China may be pursuing a stimulus strategy that I liken to a coral reef: though many subsidized companies will fail, their skeletons will scaffold the success of China’s future industrial titans.

      The Disease

On the one hand, it is incontrovertible that the Chinese economy is not what it once was. Property giants are imploding, Chinese outbound tourist numbers have not recovered to pre-pandemic levels, and the deflationary cycle of low consumer confidence threatens a long malaise. Chinese economic growth, even according to the official numbers, is clearly in a new low-growth mode, one the Economist has dubbed “economic long Covid”.

      The Uniquely Chinese Cure

But the way in which China is choosing to address this crisis is showing some signs of success, and it is the result of Xi Jinping’s unique ideological outlook. Under Xi, the Chinese Communist Party has begun a slow return to its socialist ideological roots and sought a different form of stimulus than the standard prescription other economies would employ. In most of the world, the textbook response to a slowing economy would be a Keynesian, demand-side stimulus meant to put money into consumers’ pockets and jumpstart spending, keeping the economic engine moving – think of the American “stimulus checks” cut under Obama in 2009 or under Trump and Biden during the pandemic in 2020-21. Xi Jinping and his tongzhi (comrades), however, view that kind of stimulus as capitalist decadence, fearing that direct payments to individuals from the government would precipitate the kind of needy indolence that western conservatives love to lambaste (just one of the many ways in which Chinese governance is actually quite right-wing on the western spectrum). They refuse to pursue that textbook route. In seeking a resolution to this policy dilemma, the PRC has decided to use a variation on the stimulus strategy it used during the 2008-9 crisis, which injected money into local governments, construction programs, and large industrial corporations. The hope was then, as now, that by tying access to stimulus funds to jobs and industry, individual citizens would be compelled to go out and be productive, stimulating the old-school Maoist spirit of nationalist industriousness. At the same time the government could make long-term investments in critical areas like infrastructure and industrial technology.

      This time around, instead of injecting money into bloated and debt-ridden local governments and construction sectors, China is focusing on what it sees as the future: high-tech export-oriented manufacturing, with a clear emphasis on electric vehicles.

      “In June last year, China introduced a 520 billion yuan ($71.8bn) package of sales tax breaks, to be rolled out over four years. Sales tax will be exempted for EVS up to a maximum of 30,000 yuan ($4,144) this year with a maximum tax exemption of 15,000 yuan ($2,072) in 2026 and 2027.

      According to the Kiel Institute, a German think tank that offers consultation to China, the Chinese government has also granted subsidies to BYD worth at least $3.7bn to give the company, which recently reported a 42 percent decrease in EV deliveries compared with the fourth quarter of 2023, a much-needed boost.”

      https://www.aljazeera.com/economy/2024/4/20/are-chinese-evs-taking-over-the-car-market

Beyond these significant numbers, EU and US policymakers suspect even larger, undisclosed support from the Chinese government (particularly debt-driven incentives from local governments), prompting official investigations and declarations, and even an official Chinese acknowledgement that industrial overcapacity was real – an acknowledgement that Premier Li Qiang later walked back.

Finger-pointing and blame games aside, the policy is not sustainable. Whether the subsidies are paid for by local-government debt or by spending down China’s cash reserves, and whether or not these industries are truly competitive, maintaining large, tax-exempt industries is not sustainable for China fiscally, and in any event is not acceptable to the world market: “China is now simply too large for the rest of the world to absorb this enormous capacity,” stated US Treasury Secretary Janet Yellen. In short, the world was unprepared to prevent the first China Shock, but it will not accept a second one. Eventually there will be a reckoning with China’s industrial overcapacity, many unprofitable zombie firms will close, and global markets will react as they deem necessary.

      The Coral Reef Economy

There are two ways to understand the current dilemma. The first is to assume that Chinese policymaking is a shortsighted reaction to a slowing economy and that policymakers did not anticipate the global backlash. The second is to think that policymakers took these steps despite the obstacles because they had a longer-term goal in mind. What, then, might that longer-term goal be? I find the analogy of a coral reef to be helpful here. Though coral reefs are huge rocky structures, corals themselves are small living animals. When they die, the skeletons they leave behind make up the structure of the reef itself and remain useful to their successors, serving as scaffolds, the reef as a whole growing fractally upwards and outwards on the bones of the corals’ ancestors. Likewise, the Chinese EV push may be hoping for a similar outcome: although most of the current EV manufacturers will not survive once debts are called in, stimuli are removed, and global markets harden, their skeletal infrastructure – skilled workers, upstream supply chains, downstream market and aftersales contracts, distribution networks, and most importantly technical innovations – will remain in place and can be bought out and more efficiently utilized by the (as China perhaps hopes) handful of surviving EV manufacturers, who can, like corals, use the skeletons of their comrades to grow upwards and outwards. Furthermore, growing corals compete with other coral species for space, and an EV sale by a Chinese company, even a company destined for failure, is one fewer sale for a non-Chinese EV company. The Chinese “surge” in EV exports is thus not just directly beneficial for China but, in China’s zero-sum vision of global competition, indirectly beneficial as well, depriving rivals of the same sale and suffocating the competitiveness of the Teslas and Volkswagens of the world. The reef after the stimulus-fueled surge will be one in which the surviving Chinese companies can reign supreme.

I will not argue that the second analysis is in fact the perspective of PRC policymakers, or, even if it is, that the “coral reef” scenario will play out as outlined here. Many would argue that China’s policy responses are indeed short-sighted and reactive, and that only the long-promised shift to higher consumer spending will guarantee China’s long-term financial stability and comfortable integration into the global political economy. But it is difficult to deny that Chinese EV manufacturing has made impressive leaps in both technology and capacity in recent years, and regardless of how the current market situation resolves, it seems likely that at least a few such manufacturers will remain globally competitive in the long term.