Tag: culture

  • The War That Wasn’t: Christianity, Science, and the Making of the Western World


    Examining the Historical Relationship between Science and Christianity

    During my early adulthood I was a zealous New Atheist, and as such believed wholeheartedly in a message that was central to the NA movement: that Christianity had been a parasite on Western civilization, dragging humanity into the Dark Ages and smothering science until skeptics and Enlightenment thinkers finally pulled us back into the light. While studying European history in depth, though, I began to see cracks in that story. The relationship between Christianity and science wasn’t as clear-cut as New Atheists made it out to be, and in some respects was rather constructive. But the question remained in my mind, and it grew into a larger curiosity about what made the West different—one that would eventually drive my MA studies in international economics and development.

    Recently, though, something surprising happened: I saw the old narrative resurface. If you were active in New Atheist circles in the 2000s (or honestly if you were active on the internet at all; to quote Scott Alexander, “I imagine the same travelers visiting 2005, logging on to the Internet, and holy @#$! that’s a lot of atheism-related discourse”) you probably saw a chart that looks something like this:

    While many of those who were active New Atheists back in the early 2000s have mellowed out and found other ideological motivations (or disenchantments), it seems there is a new generation of zealous atheists engaging with these ideas for the first time, and in one secular group I saw the above graphic posted unironically, even with the ridiculous attribution of “Christian” dark ages. The resurfacing of that old certainty, combined with a provocative new scholarly article offering an economic perspective on Christianity’s role in scientific progress, prompted me to revisit the question. How exactly did Christianity interact with science throughout history? The answer is messier, and far more interesting, than the stories I once took for granted. I will tackle this in a way that is at once thematic and chronological: 1) the Dark Ages (or Early Middle Ages), 2) the (High) Middle Ages, and 3) the modern (post-Renaissance) world.

    I. Did Christianity Cause the Dark Ages?

      First, I want to address the easiest part of the question: did Christianity somehow cause the Dark Ages?

      I think this can be answered very briefly with an unqualified “no”, and I will go even further and say “quite the opposite”. I don’t know of any reputable modern historians who would say otherwise. Historically there have of course been literally hundreds of different causes blamed for the fall of the Western Roman Empire and the ensuing “Dark Ages”, including such obscure culprits as infertility, trade imbalances, loss of martial virtues, and wealth inequality. Yes, contemporaries in late antiquity did blame Christianity and the abandonment of traditional paganism for the fall of Rome. For example, one of the last notable pagan scholars, the Eastern Roman historian Zosimus, put it plainly that “It was because of the neglect of the traditional rites that the Roman Empire began to lose its strength and to be overwhelmed by the barbarians.” (Historia Nova, Book 4.59), and most famously (Saint) Augustine of Hippo wrote his City of God to refute such perspectives (though he wrote before Zosimus): “They say that the calamities of the Roman Empire are to be ascribed to our religion; that all such evils have come upon them since the preaching of the Gospel and the rejection of their old worship.” (The City of God, Book 1, Chapter 1). Needless to say, these were not modern academic historians, and they were clearly making biased assertions based on the vibes of the day.

      Today the historical consensus seems to be some combination of climate change-induced plague (see Kyle Harper, The Fate of Rome), mismanagement by later emperors of the resulting chaos and malaise, and most importantly the Völkerwanderung, “the wandering of the peoples,” i.e. the migration of massive numbers of people from central and western Eurasia into Europe during the 4th and 5th centuries. To summarize the contemporary consensus in one sentence: the Western Roman Empire fell because decades of plague and civil war made it too weak to repulse or assimilate the entire nations of people migrating into Western Europe, and thus it had to cut deals, cede territory, and slowly “delegate itself out of existence” (Collins, Early Medieval Europe). In direct refutation of the claim that Christianity caused the Dark Ages, the Christianization of these Germanic and other peoples was a vital channel for the transmission of culture and values and an important step toward (in Rome’s conception) civilizing and settling them in the Mediterranean world (see https://en.wikipedia.org/wiki/Ulfilas as one example).

      As further refutation, the brightest spark of European civilization during the Western European Dark Ages (roughly 500-800) was the Eastern Roman Empire, which was unquestionably *more* thoroughly Christian than the “Darkening” West. The Eastern empire boasted a stronger institutional religious structure, with the emperor himself dictating much of its theological policy, and strong-arm enforcement of official positions (e.g. during the Christological debates of late antiquity) was common: “The Byzantine emperor, always the head of the imperial hierarchy, automatically evolved into the head of this Christian hierarchy. The various bishops were subservient to him as the head of the Church, just as the governors had been (and were still) subservient to him as the head of the empire. The doctrines of the Christian religion were formulated by bishops at councils convened by the emperor and updated periodically at similar councils, with the emperor always having the final say” (Ansary, Tamim, Destiny Disrupted).

      The Western empire, by contrast, struggled for centuries with institutionalization and conversion, with the Catholic church wrestling not just with latent paganism and heretical syncretism among rural populations but also fighting an existential battle with Arian Christianity (a nontrinitarian form of Christianity that asserts that Jesus was not an incarnation of God but merely a lesser creation of God), common for centuries among the ruling strata of Vandals and Goths in the early middle ages; “the Vandals were Arian Christians, and they regarded the beliefs of the Roman majority as sufficiently incorrect that they needed to be expunged… Arianism continued as the Christianity of ‘barbarian’ groups, notably Goths, Vandals and eventually Lombards, into the seventh century” (Chris Wickham, The Inheritance of Rome). Though I will risk overextending my argument here, I will say that the Church in fact prevented the Dark Ages from being even worse: “the Church emerged as the single source of cultural coherence and unity in western Europe, the cultural medium through which people who spoke different languages and served different sovereigns could still interact or travel through one another’s realms.” (Ansary).

      There is a caveat to all this, though. Christianity did seem to have a deleterious effect on the logical philosophy of the late Empire. I have been able to find at least three separate early Christian philosophers who all deliver variations on the same idea that faith should triumph over logic and reason: “The nature of the Trinity surpasses all human comprehension and speech” (Origen, First Principles, Preface, Ch. 2); “If you comprehend it, it is not God” (Augustine of Hippo, Sermo 117); and “I believe because it is absurd” (“credo quia absurdum”), traditionally attributed to Tertullian’s De Carne Christi. But it is important to contextualize these perspectives within a general trend towards mysticism in late antiquity. Christianity was not alone: Mithraism, Manichaeism, Sun worship, and other prescriptive revealed religions swirled in the ideological whirlpool of the age, and the rise of all of the above can be seen as a reaction to the declining state of the political economy. As we see evidenced today, material insecurity pushes people toward the comfort of religion (see e.g. Norris and Inglehart, Sacred and Secular).

      II. Did Christianity Hinder or Help the Birth of Modern Science?

      This question is somewhat more difficult to answer, and I had originally drafted much more, but decided to cut it down to avoid losing anyone in an overly academic morass. To summarize what I see as the answer: there are two necessary components to the rise of modern science, the ideological and the structural. Ideologically, the quest to understand the divine plan through exploration of the natural world was common to both the Christian and Islamic proto-scientists; but when authorities decided this quest was becoming threatening, structural changes that had taken place in Christendom (in part as a result of religious motivations) but not in the Caliphate saved proto-science in the former from the fate of the latter. Thus Christianity initially helped, then became antagonistic toward emerging proto-science, but by the time things turned antagonistic, structural changes prevented the Church from effectively stamping out the proto-scientific sparks. Let’s expand a bit, but first a note about the Twelfth Century Renaissance.

      In the west, the critical window here was the 12th Century Renaissance and the resulting changes that took place in the 13th century. The 12th Century Renaissance is less well-known than “The” Renaissance of the 15th century, but arguably had more far-reaching consequences in terms of laying the foundations of Western civilization and culture. “Although some scholars prefer to trace Europe’s defining moments back to the so-called Axial Age between 800 and 300 b.c., the really defining transformative period took place during the Renaissance of the twelfth and thirteenth centuries. That is when the extraordinary fusion of Greek philosophy, Roman law, and Christian theology gave Europe a new and powerful civilizational coherence.” (Huff)

      The Twelfth Century Renaissance witnessed two unrelated trends that came together at the end of the 13th century in one seminal decade, which I will unpack in later paragraphs. The first trend is the reintroduction (via translation schools in Toledo, Constantinople and the Papal States) of most of the works of Aristotle, giving birth to a “new intellectual discipline [that] came to be known as ‘dialectic.’ In its fully developed form it proceeded from questions (quaestio), to the views pro (videtur quod) and con (sed contra) of traditional authorities on a particular subject, to the author’s own conclusion (responsio). Because they subjected the articles of faith to tight logical analysis, the exponents of the new rational methods of study became highly suspect in the eyes of church authorities” (Ozment). The second trend was the rediscovery of Roman Law which triggered an immense restructuring of legal rights and entities in the entire Western world: “An examination of the great revolutionary reconstruction of Western Europe in the twelfth and thirteenth centuries shows that it witnessed sweeping legal reforms, indeed, a revolutionary reconstruction, of all the realms and divisions of law […] It is this great legal transformation that laid the foundations for the rise and autonomous development of modern science […]” (Huff).

      Let us examine how the novelties of the Twelfth Century Renaissance combined to create the preconditions for science as we know it, through the confluence of two requirements: the ideological and the structural.

      Ideologically, what was required for the creation of science was the attempt to use existing knowledge to understand the underlying structure of the world, i.e. the codification and understanding of the scaffold of natural laws that govern the way the world works. The belief in a knowable creator god seems to have given rise to this concept in the Abrahamic world. In his “history of the world through Muslim eyes,” Destiny Disrupted, Tamim Ansary shows that Christian and Muslim proto-scientists shared the same goal of understanding God through natural observation: “As in the West, where science was long called natural philosophy, they [Abbasid-era Muslim philosophers] saw no need to sort some of their speculations into a separate category and call it by a new name[…]science as such did not exist to be disentangled from religion. The philosophers were giving birth to it without quite realizing it. They thought of religion as their field of inquiry and theology as their intellectual specialty; they were on a quest to understand the ultimate nature of reality. That (they said) was what both religion and philosophy were about at the highest level. Anything they discovered about botany or optics or disease was a by-product of this core quest[…]”. Islamic and European civilization also shared the same Greek intellectual roots: “Greek logic and its various modes were adopted among the religious scholars.” (Huff)

      China, in contrast, along with the pre-Christian Mediterranean world, had admirable command of engineering principles and keen natural observation, as exemplified by the likes of Zhang Heng, Archimedes, or Heron of Alexandria. But while visionary voices likely existed in each, neither Classical nor Chinese civilization generally adopted an ideological outlook that asked what a collection of discrete natural and engineering phenomena, such as the flow of water, the motion of planets, or architectural physics, might all say about the fundamental structure of the world or the will of the divine. To drive the point home: “traditional Chinese mathematics was not abstract because the Chinese did not see mathematics in any philosophical sense or as a means to comprehend the universe. When mathematical patterns were established, they ‘were quite in accord with the tendency towards organic thinking’ and equations always ‘retained their connection with concrete problems, so no general theory could emerge’” (Olerich 22). In the West, by contrast, the Twelfth Century Renaissance added jet fuel to the existing ideological quest to create general theories and comprehend the universe: “In a word, by importing and digesting the corpus of the “new Aristotle” and its methods of argumentation and inquiry, the intellectual elite of medieval Europe established an impersonal intellectual agenda whose ultimate purpose was to describe and explain the world in its entirety in terms of causal processes and mechanisms” (Huff 152).

      Structurally, what was required for the creation of modern science was the institutional independence of proto-universities to explore questions that ran contrary to social and religious dogmas. As historian Toby Huff explains, the new legal world created by the rediscovery of Roman legal codes was a veritable Cambrian explosion for European institutions and ideas:

      “For example, the theory of corporate existence, as understood by Roman civil law and refashioned by the Canonists and Romanists of the twelfth and thirteenth centuries, granted legal autonomy to a variety of corporate entities such as cities and towns, charitable organizations, and merchant guilds as well as professional groups represented by surgeons and physicians. Not least of all, it granted legal autonomy to universities. All of these entities were thus enabled to create their own rules and regulations and, in the case of cities and towns, to mint their own currency and establish their own courts of law. Nothing like this kind of legal autonomy existed in Islamic law or Chinese law or Hindu law of an earlier era.”  (Huff)

      To expand: up to the 12th century, the legal forms in Western Europe were almost wholly those inherited from Germanic feudal law, a highly personalist structure of fealty and dependence. Land, businesses, countries, and churches were the responsibility of individual lords, artisans, kings, and bishops, who depended on their superiors for the mere right to exist. The idea of corporate personhood, that “corporations are people” (putting aside all of the exploitative and oligarchic connotations the phrase has taken on in 21st-century American politics), was a fascinating, powerful, liberating idea in the 12th century, and one that proved critical to the rise of modern science. Quickly, towns, cities, mercantile ventures, and most critically cathedral schools, seminaries, and proto-universities strove to incorporate (literally, “to make into a body”) in the Roman legal mold: no longer were they merely collections of people; they argued their way into being as legal entities distinct from their members and adherents. Further, the Catholic church enriched its canon law with Roman borrowings and promoted the creation of formal legal studies, for example at the University of Bologna. Compounding with the ideological ferment that followed the reintroduction of Aristotle and other new texts, “European scholars began gravitating to monasteries that had libraries because the books were there[…] Learning communities formed around the monasteries and these ripened into Europe’s first universities” (Ansary), which could then, as independent corporate entities, survive political or theological pressure on or from any individual member.

      We can quite clearly see the benefit of this arrangement by counterfactual comparison with Islamic society. Although Islamic society shared the same quest to understand the divine structure of the physical world, and thus the same ideological perspective that gave rise to proto-science, its very different institutional structure produced a very different outcome for Islamic science. As philosophers began to question fundamental religious dogmas such as the necessity of revelation or the infallibility of the Quran, “the ulama were in a good position to fight off such challenges. They controlled the laws, education of the young, social institutions such as marriage, and so on. Most importantly, they had the fealty of the masses” (Ansary). The intellectual institutions such as the Islamic awqaf (plural of waqf, “pious endowment”) that did house these nascent intellectual pursuits were not legally independent but were the dependencies of individual sponsors who could apply pressure, or have pressure applied to them, and their very nature as pious endowments meant that “they had to conform to the spirit and letter of Islamic law” (Huff). Reaching higher into the political structure: in the 9th century most of the Islamic world was united under the Abbasid Caliphate, so a reactionary shift by the government could result in a persecution reaching most of the Islamic world. That is precisely what happened, for the ulama used their influence to force a change in the direction of the Caliphate. After a high tide of intellectual ferment, the subsequent persecution of the scientist-philosophers under the next Caliph “signaled the rising status of the scholars who maintained the edifice of orthodox doctrine, an edifice that eventually choked off the ability of Muslim intellectuals to pursue inquiries without any reference to revelation.” (Ansary). And just to once again contrast with the Far East: “In China, the cultural and legal setting was entirely different, though it too lacked the vital idea of legal autonomy” (Huff). Most importantly, in China the dominance of the Civil Service Examinations served as a gravity well attracting all intellectual talent to a centralized, conservative endeavor, stifling other intellectual pursuits: “This was not a system that instilled or encouraged scientific curiosity […] The official Civil Service Examination system created a structure of rewards and incentives that over time diverted almost all attention away from disinterested learning into the narrow mastery of the Confucian classics.” (Huff 164).

      Bringing this all together: in the West, the twin fuses of ideological ferment and corporate independence intertwined, and it was the Catholic church itself that lit the spark. As noted already, the Church realized the threat posed by the rising tide of Aristotelianism and its promotion of rigorous logical examination of the Church’s teachings. Whereas earlier in the 1200s the tendency was to try to find common ground between Aristotle and Christianity, or even to use them to reinforce each other, as exemplified by Thomas Aquinas, by the latter part of the century conservative elements in the church saw Aristotelianism as an inherently hostile cancer, and in 1270 and again in 1277 they declared war, issuing (and then reinforcing) a blanket condemnation of Aristotelian teachings. Historian Steven Ozment explains that “In the place of a careful rational refutation of error, like those earlier attempted by Albert the Great and Thomas Aquinas, Bishop Tempier and Pope John XXI simply issued a blanket condemnation. The church did not challenge bad logic with good logic or meet bad reasoning with sound; it simply pronounced Anathema sit.”


      The momentousness of this decision for the course of Western thought cannot be overstated, for it represented an end to the attempt to reconcile theology and philosophy, science and religion. “Theological speculation, and with it the medieval church itself, henceforth increasingly confined itself to the incontestable sphere of revelation and faith[…] rational demonstration and argument in theology became progressively unimportant to religious people, while faith and revelation held increasingly little insight into reality for secular people.” (Ozment, The Age of Reform 1250-1550). In short, from 1277 onward, the religious became more religious, and the rational became more rational.

      Here we see the importance of corporate independence and decentralized governance in action, for there were further attempts to stamp out Aristotelianism. In England, there were attempts at Oxford in the late 13th century to restrict the teaching of certain Aristotelian texts. In 1282, the Franciscan Minister General Bonagratia issued statutes attempting to limit the study of “pagan” philosophy (mainly Aristotle) among Franciscan students. In the Dominican Order, after Aquinas’s death, there were attempts by conservative members to restrict the teaching of his Aristotelian-influenced theology, and the Dominican General Chapter of 1278 sent visitors to investigate teachers suspected of promoting dangerous philosophical doctrines. But these efforts failed, and universities proudly asserted their newfound legal independence: the University of Toulouse, incorporated only in 1229, declared that “those who wish to scrutinize the bosom of nature to the inmost can hear the books of Aristotle which were forbidden at Paris” (Thorndike). The University of Padua became particularly known as a center for “secular Aristotelianism” in the late 13th and 14th centuries, and maintained a strong tradition of studying Averroes’ commentaries on Aristotle even when these were controversial elsewhere (Conti, Stanford Encyclopedia of Philosophy).

      But for the thinkers during and just after this time period, the intellectual whiplash stimulated new thought that truly began the rebirth of scientific thinking in the Western world. Instead of blindly taking either the Church or Aristotle at face value, the realization that the two could be in conflict gave rise to the idea that either or both could be wrong. Scholars such as Jean Buridan and Nicole Oresme began their careers in religious matters (the former was a cleric and the latter a bishop) before turning to “scientific” studies, but their questioning of both religious and Aristotelian dogmas led them to pierce through accepted certainties, making unique contributions to a wide variety of fields; they are generally considered to have laid the foundations for the coming scientific and Copernican revolutions.

      III. How Have Science and Religion Interacted Post-Renaissance?

      In a recent post on MarginalRevolution, economist Tyler Cowen linked a new article which reopens this ancient quarrel, at least for the modern era. The opening statement concisely encapsulates the picture painted above: “Today’s leading historians of science have ‘debunked’ the notion that religious dogmatism and science were largely in conflict in Western history: conflict was rare and inconsequential, the relationship between religion and science was constructive overall”, and Cowen adds his commentary that “Christianity was a necessary institutional background”, as I believe the preceding section has shown. But the article, by Matías Cabello, picks up the story where I left off and looks at the relationship after the Renaissance. Cabello sees the modern period as unfolding in three stages: an increasingly secular perspective from the late Middle Ages until the Reformation, then a new period of heightened religious fervor during the Reformation and the Wars of Religion (16th-17th centuries), finally relenting with the dawn of the Enlightenment in the early 18th century.

      Cabello’s chronology lines up closely with my own understanding, though I admit that my knowledge of the period after the Reformation is weaker than my knowledge of the earlier eras; I draw primarily on Carlos Eire’s monumental and well-regarded book Reformations for this period. In general, there is a lot of data showing that the Reformation was a much more violent, zealous, and unscientific time than the periods straddling it. A useful theory for understanding the dynamics of religion during this period is the religious market theory as formulated by Stark and Bainbridge (1987). In this theory, religions compete for adherents on a sort of market (or playing field, if you will): in areas of intense competition, religions must improve and hone their “products” to stay competitive against other religions, but where one religion monopolizes the market it becomes less competitive, vital, and active in the minds of adherents. This phenomenon is visible most clearly in the secularization of the Scandinavian countries, where Lutheranism enjoyed a near-complete monopoly for 400 years, and it is often employed to explain why the pluralistic US is more religious than European countries with one hegemonic church, but I would argue it was also clearly at play in the Middle Ages. By the late Middle Ages, Catholicism enjoyed complete dominance in Western Europe against all rivals, allowing cynicism, political infighting (e.g. the Western Schism, which at one point saw three popes competing for recognition over the church), and, most critically, corruption to creep into the Church’s edifice. But when the Protestant Reformation broke out (in large part for the reasons just enumerated), suddenly there were several competing “vendors” who had to knuckle down and compete with each other, and with different strands within themselves, leading to increased fanaticism and internecine violence for more than a century. A lot of evidence corroborates this general trend, for example witch hunts, which despite being portrayed in popular culture as a medieval phenomenon were decidedly a Reformation-era thing, as shown in the chart below (to wit, many of our popular ideas of the Middle Ages come from Enlightenment/Whig writers looking back on the 17th century and erroneously extrapolating from there).

      If I may contribute my own quasi “data set”: a few years ago I put together a Western musical history playlist featuring the most notable composers from each time period, and one thing that clearly jumped out at me, before I was even aware of this historical topography, was that the music before the Reformation was much more joyous and open (and, to my ears, simply more enjoyable to listen to) than the rather conservative and solemn music that came just after. To sum up, a lot of indicators tell us that the period of roughly 1500-1700 would have been a much less creative, open-minded, and probably less fun time to live in than the periods just before or after.

      Getting back to Cabello, one of the novelties of his work is its quantitative approach to what has traditionally been a very non-quantitative area of inquiry: scraping and analyzing Wikipedia to see how the distribution and length of articles on science-related figures shifted over time. His perspective is most concisely presented by his figure B2, reproduced here:

      To quote the author, “This article provides quantitative evidence—from the continental level down to the personal one—suggesting that religious dogmatism has been indeed detrimental to science on balance. Beginning with Europe as a whole, it shows that the religious revival associated with the Reformations coincides with scientific deceleration, while the secularization of science during the Enlightenment coincides with scientific re-acceleration. It then discusses how regional- and city-level dynamics further support a causal interpretation running from religious dogmatism to diminished science. Finally, it presents person-level statistical evidence suggesting that—throughout modern Western history, and within a given city and time period—scientists who doubted God and the scriptures have been considerably more productive than those with dogmatic beliefs.”
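      To make the flavor of this approach concrete, here is a minimal sketch of the kind of measurement Cabello describes, using Wikipedia article length as a crude proxy for a figure’s scientific footprint. To be clear, this is not his actual pipeline: the figure list is an illustrative sample I chose myself, and the only assumptions beyond the standard MediaWiki API are the names in that list.

      ```python
      # A minimal sketch (not Cabello's actual pipeline): use Wikipedia
      # article length as a crude proxy for a figure's scientific
      # "footprint" and aggregate by birth century. The figure list is an
      # illustrative sample, not a real dataset.
      import requests

      API = "https://en.wikipedia.org/w/api.php"

      FIGURES = {  # name -> birth year (illustrative sample)
          "Jean Buridan": 1301,
          "Nicole Oresme": 1325,
          "Galileo Galilei": 1564,
          "Johannes Kepler": 1571,
          "Leonhard Euler": 1707,
      }

      def article_length(title: str) -> int:
          """Return the article's length in bytes via the MediaWiki API."""
          params = {"action": "query", "titles": title, "prop": "info",
                    "format": "json"}
          pages = requests.get(API, params=params).json()["query"]["pages"]
          return next(iter(pages.values())).get("length", 0)

      by_century: dict[int, list[int]] = {}
      for name, born in FIGURES.items():
          by_century.setdefault(born // 100 + 1, []).append(article_length(name))

      for century, lengths in sorted(by_century.items()):
          print(f"{century}th century: mean article length "
                f"= {sum(lengths) / len(lengths):,.0f} bytes")
      ```

      Cabello’s real dataset and statistical controls are of course far more elaborate; the point is only that questions like this are now tractable with public data.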

      It is no coincidence, then, that the single most famous skirmish in history between science and religion, the trial and condemnation of Galileo Galilei, came squarely at the nadir of this fanatical period (1633).

      And yet even in this context science did, of course, continue to progress, and religious beliefs often lit the way. The historian of science Thomas Kuhn produced what is likely the best analysis to date of how science progresses through the ages and how it is embedded in sociocultural assumptions. In his magnum opus, The Structure of Scientific Revolutions, it is clear that the paradigm shifts that create scientific revolutions do not require secularization but rather the collision of different assumptions and worldviews. Consider Tycho Brahe and Johannes Kepler, a Danish astronomer and his German apprentice, who were looking at the same data but, with different worldviews, came to very different conclusions. Brahe believed that in a biblically and divinely structured universe the earth must be at the center, and as such rejected the new Copernican heliocentrism. His apprentice Kepler, however, was equally religiously motivated, seeing the new heliocentric model as an equally beautiful expression of divine design, and one which squared more elegantly with the existing data and mathematics. Science, thus, is not about accumulating facts, but about looking at them through different worldviews. In one of my posts a few months ago, I mentioned that religious convictions and other motivations can push scientists to bend or break the scientific method, sometimes leading to scientific breakthroughs. One of the clearest examples was the eclipse expedition of Sir Arthur Eddington, whose Quaker convictions likely made him see scientific discovery as a route toward global brotherhood and peace. In short, the scientific method is an indispensable tool for verifying discoveries and we neglect it at our peril, but we must not let it become a dogma, for the initial spark of discovery often emanates from the deeply personal, irrational, cultural, or religious ideas of individuals.

      In a recent blog post, Harvard professor of AI and Data Science Colin Plewis posted the following praise of Kuhn: “Far from the smooth and steady accumulation of knowledge, scientific advancement, as Kuhn demonstrates, often comes in fits and starts, driven by paradigm shifts that challenge and ultimately overthrow established norms. His concept of ‘normal science’ followed by ‘revolutionary science’ underscores the tension between tradition and innovation, a dynamic that resonates far beyond science itself. Kuhn’s insights have helped me see innovation as a fundamentally disruptive and courageous act, one that forces society to confront its entrenched beliefs and adapt to new ways of understanding.”

      Conclusion

      Hopefully this post has given a strong argument for the limited claim that Christianity is not a perennial enemy of science and civilizational progress. And perhaps it has also given some evidence for the idea that scientific advancement benefits from the contact and communication of different worldviews, assumptions, and frameworks of belief, and that Christianity, or religious belief in general, is not necessarily harmful to this broader project. Without question there can be places and times in which dogma, oppression, and fanaticism inhibit freedom of thought and impede the scientific project, but these can be found not only in the fanatical religious periods of the Wars of Religion or the fall of the Caliphates, but also in the fires of secular fanaticism such as Lysenkoism or the Cultural Revolution, or even in the simple oppressive weight of institutional gravity, as was the case with the imperial exam system in China.

      What can we take away from this historical investigation to inform the present and future?

      Normally, we consider universities and affiliated research labs to be the wellsprings of scientific advancement in the modern West. But higher education in the United States demonstrates an increasing ideological conformity (a 2016 study found that “Democratic-to-Republican ratios are even higher than we had thought (particularly in Economics and in History), and that an awful lot of departments have zero Republicans”), and Americans are increasingly sorting themselves into likeminded bubbles, including politically homogeneous universities, preventing the confrontation with alternative worldviews that is the very stuff of free thought, creativity, and scientific progress. Since popular perceptions imply that this trend has only been exacerbated in the intervening years, the “courageous act” of “revolutionary science” that “challenges and overthrows existing norms” may have to come from an ideological perspective outside the secular, liberal worldview of modern American academia. It may be that overtly religious institutions like a new Catholic Polytechnic Institute, or explicitly free-speech schools like the University of Austin, no matter how retrograde or reactionary they may appear to many rationalists, are the future of the heterodox thinking necessary to effect the next scientific revolution.

      Of course, the future of science will be inextricably linked to Artificial Intelligence, and it remains to be seen exactly what kind of role AI (or AGI or ASI) will play in the future of scientific discovery, leaving me with nothing but questions: (when) will AI have a curious and creative spark? Will it have a model of the universe, preconceptions and biases that determine how it deals with anomalous data and limit what and how it is able to think? Or will it have the computational power and liberty to explore all possible hypotheses and models at once, an entire quantum universe of expanding and collapsing statistical possibilities that ebb and flow with each new data point? And if, or when, it reaches that point, will scientific discovery cease to be a human endeavor? Or will human interpreters still need to pull the printed teletype from Multivac and read it out to the masses like the Moses of a new age?

    1. The Parable of the Day-old Bread


      This post is based on a true story from my wife’s family. She is from southern France, and several of her family members, including a great aunt and uncle, were refugees from Spain in the broad context of the Spanish Civil War and the early Francoist regime. Because of this background, they carried with them certain habits and values shaped by hardship, what can only be described as a “culture of poverty.” One of the most poignant and interesting of these legacies was that my wife’s great aunt and uncle voluntarily ate stale, hard bread for most of their lives, even when they had fresh bread in the house.

      To understand this, it helps to know something about Southern European bread culture. In France, Spain, and Italy, bread doesn’t typically mean the pre-sliced, plastic-wrapped loaves common in the U.S. and Northern Europe. Bread in Southern Europe generally means a preservative-free loaf, baked every day at the local bakery (the exact composition for typical loaves is even codified in French law). In these cultures, the texture of the bread is everything: a good loaf is hard and crackly on the outside but soft on the inside. If these loaves are sealed in plastic bags like American sandwich bread, the moisture equalizes between the interior and crust, and the entire thing becomes unappetizingly spongy. So this bread is always sold in paper, which keeps the crust crunchy, with the downside that within a day or so the bread will get dry and hard.

      Some days my wife’s great aunt and uncle would do their shopping and pick up their usual loaf of bread, only to find upon returning home that for whatever reason they hadn’t finished their loaf from the previous day. And like many who live through poverty and hunger, they obeyed one cardinal commandment above all others: thou shalt not waste food. So they would eat the previous day’s loaf, lest it go to waste, before beginning the next one. There would inevitably be some of the new loaf left over, and the cycle would repeat the following day. As a result, most of the bread they ate in their lives was stale, hard, day-old bread. It is easy to imagine that, with a slight tweaking of their preferences and choices, they could have forgone eating the old bread, found another use for it, and eaten the fresher, better bread every day. But their preferences were deeply ingrained, hard-coded by their early lives and lived experiences, as unchangeable as the color of their eyes.

      There is perhaps a lesson in this for all of us. We may not eat old, hard bread, but most of us are probably doing irrational things that result from cultures and habits we have unthinkingly inherited. Many of them were rational, logical choices given certain initial conditions, but those material conditions may no longer exist. This is true not just of individual behaviors, but of entire cultures and ways of being. Some, like those of my wife’s family, are hard-coded, and we will never be able to change them or even think about changing them. (Sam Harris once very insightfully pointed out that, by all rational analysis, no one with an alternative heating system should be lighting a fire in their fireplace, and used this to explain to rationalist-atheists why some people still believe in God.) Others may be preferences, heuristics, or automatic choices that, with rational reflection, we may realize are inefficient or wrong for the modern world: daylight savings time, office work, or the 40-hour workweek, for example. As we go about our days, let us be critical in asking ourselves: what aspects of our cultures, what habits in our lives, are merely stale bread?

    2. The Life and Death of Honor


      Obituary of one of the oldest human values

      I was recently reading the book version of Disney’s Mulan to my four-year-old son when he asked me what “honor” was. Although I usually pride myself on concocting kid-friendly analogies and simplifications, I truly struggled with his question and muttered something like “people thinking you’re a good person” before moving on. The question stuck in the back of my mind, however, and I kept mulling over how to mentally model “honor” in a concise way. After days of struggle, I began to read, research, and think critically about the idea, and what follows is the digest of that process.

      The concept of honor has been a staple of human society since the dawn of recorded history, and yet somehow in the past 300 years it has gone the way of steamboats and horse-drawn carriages. Honor, today, is a quaint vestige at best and pathologized at worst, coming up most often in the context of “honor killings” or the “honor culture” of the American South. Outside the limited scope of military service, “honor” is nearly synonymous with “machismo” or “bravado”, a puerile attachment to one’s own ego and reputation (or that of one’s kith and kin).

      A comparison with a random selection of other broad but unrelated terms demonstrates that the fall of honor is not just absolute but relative: “freedom”, for example, was a minor concern in the 16th century but has since dwarfed honor.

      Interestingly, honor was more prevalent than even “love” in the 16th century, but the opposite has held true since then.
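      (For the curious: these charts come from the Google Books Ngram Viewer, and the underlying numbers can be pulled programmatically. Below is a hedged sketch using the viewer’s unofficial JSON endpoint; the endpoint is undocumented, so treat the URL and parameter names as assumptions that may change.)

      ```python
      # Sketch: fetch the relative frequencies behind the "honor" vs.
      # "love" vs. "freedom" comparison from Google Books Ngram Viewer's
      # unofficial JSON endpoint (undocumented; may change without notice).
      import requests

      resp = requests.get(
          "https://books.google.com/ngrams/json",
          params={
              "content": "honor,love,freedom",
              "year_start": 1500,
              "year_end": 2019,
              "corpus": "en-2019",   # assumed label for the English 2019 corpus
              "smoothing": 3,
          },
      )

      for series in resp.json():
          freqs = series["timeseries"]   # one value per year, smoothed
          peak = max(range(len(freqs)), key=freqs.__getitem__)
          print(f'{series["ngram"]}: peak (smoothed) around {1500 + peak}, '
                f'now at {freqs[-1] / freqs[peak]:.0%} of that peak')
      ```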

      Wikipedia implies that honor served as a proxy for morality prior to the rise of individuality and inner consciousness: in our earlier, tribal and clannish stages of moral development, others’ perceptions of our actions, and how those actions reflected on our kith and kin, were much more important than any inherent deontological moral value; hence honor.

      And yet there is a part of us that knows this idea is missing something critical. When we read or watch stories about “honorable” characters like Ned Stark or Aragorn, we don’t think of them as quaint, macho, and motivated by superficial public perceptions of their clans and relatives. We know that when they talk about their honor, they are talking about being morally upstanding figures who do the right thing regardless of the material and reputational cost (to quote Aragorn, “I do not fear death nor pain, Éomer. Only dishonor.” -JRR Tolkien, Return of the King). When we read this, we know that it is not socially-minded showmanship but rather bravery and altruism, and the reader is supposed to like Aragorn for it.

      Upon contemplating this and reading further, I realized that “honor” was a catch-all term for many different qualities. It refers to personal moral righteousness and familial reputation, but it also refers to one’s conduct in warfare, one’s fealty to one’s liege, and one’s formal recognition by society. Given the ubiquitous and multifarious uses of the term, and the fact that pre-modern peoples seem to have absolutely obsessed over it (prior to 1630 it was as important as love and vastly more important than freedom), it stands to reason that it was useful and good. So how exactly can we explain the benefits of honor and what it meant?

      The Benefits of Honor

      I came to the following categorizations of how honor worked and why it was useful in pre-modern society, from shortest to longest:

      1. A solution to the game theory of pre-modern warfare

      In the modern world there is the International Criminal Court to enforce international law and prevent war crimes, and an international press to publicize the violation of treaties and ceasefires. In the premodern world, these institutions did not exist. What prevented pre-modern peoples from regularly engaging in such acts? To some extent the answer is “nothing”, and indeed rape, pillage, and general atrocities were a constant feature of premodern warfare: the ancient Roman statesman Cato the Elder (234-149 BC) coined the phrase “the war feeds itself” (bellum se ipsum alet) to explain that armies would sustain themselves by pillaging and starving occupied territories, and it is telling of the longevity of that mode of warfare that the phrase is most heavily associated with a war nearly two millennia after Cato, the Thirty Years’ War of 1618-1648. One institution that may have held these atrocities in check was the papacy, and later the state churches, though it stands to reason that a commander willing to commit such acts might not be dissuaded simply by threats of additional eschatological penalties. But one additional force holding back the worst of human nature during premodern war may indeed have been the concept of honor. A society that places a high value on honor ensures that individuals pay a high reputational cost for such actions as attacking civilians, violating treaties, or encouraging mass atrocities. This societal expectation discourages individuals from engaging in such behavior because they would lose honor, and as honor was transmitted familially, their families would be marked for generations. In a society where legacy is important, staining that legacy in perpetuity for a one-time military benefit may have made some commanders think twice.

      2. A heuristic to encourage the efficacy of pre-modern society and administration

      It is difficult for us moderns to understand the extent to which the pre-modern world was personal. Ferdinand Tönnies, one of the founders of modern sociology, viewed modernity as a process of transitioning from Gemeinschaft (community) to Gesellschaft (society). In the former, looking out for one’s friends and family, using influence for their benefit, and helping them get into positions of power is considered good and dutiful; in the latter, being dutiful means impersonally discharging the role of one’s office without regard to personal relationships, and too much favoritism is considered corruption or nepotism. Indonesia’s former president Suharto once neatly encapsulated the difference (and revealed that Indonesia was still a Gemeinschaft) with the quote “what you call corruption, we call family values”.

      Most pre-modern societies, particularly feudal ones, had almost non-existent states, the governing apparatus being almost completely dissolved and reconstituted upon each monarchical succession. 14th-century England had only a few hundred people in the direct employ of the crown, most of them tax collectors, royal estate and forest managers, and keepers of the peace. A monarchical succession meant that a new person, with his or her own network of dependents, friends, and trustees, would need to pick and choose his or her own councilors and appointees to go out and do things. How was a monarch to choose people for all of these roles? The work of state administration was built on personal reputation: if monarchs needed something done well, they needed a quick metric by which to assess the efficacy of an appointee. To that end, they could simply use someone’s reputation for honor.

      Thus, to the extent that honor encompasses such qualities as honesty and forthrightness, it would encourage the enforcement of contracts and the upholding of laws. If it encodes diligence, willingness to abide by oaths, and overtness in one’s declarations of allegiance, then it would help encourage stable politics and relations amongst the peers of the realm (an above-board network of alliances allows a rational calculus of when and whether to initiate a conflict; a clandestine alliance system gets us the tangle of World War One). If honor encompasses honesty and charity, it would entail dependability in collecting and remitting taxes and in making the investments necessary to prevent or curtail social unrest among the lower classes. And most importantly, honor was a stand-in for loyalty to one’s liege and the crown. If you’re assigning a general to lead an army or a governor to rule a wealthy province, you want to be sure that they’re not going to betray you. Honor serves both as a metric for that likelihood and, failing that, as a reputational deterrent upon a traitor’s future generations.

      3. A general shorthand for morality that is responsive to changing moral frameworks

      Western civilization has spoken of “honor” for thousands of years, and what it means in terms of personal virtues has changed radically over that time. One of the most important ideas of Friedrich Nietzsche is the conceptualization of Slave Morality versus Master Morality. In Nietzsche’s conception, the Master Morality of the societies that used to be slave masters (Mesopotamians, Greeks, Romans) was later overcome by an inverted Slave Morality of those who used to be their slaves, i.e. Judeo-Christian moral values.

      Let us first examine how honor worked in the Master Morality of the ancient world. In this conception, what was morally good was to judiciously utilize the benefits of being at the top of the social pyramid, to become the most powerful and beautiful version of oneself, to surpass one’s rivals, to fully self-actualize. We see this fully laid out in epics such as the Iliad, wherein what is exalted is martial, mental, and interpersonal prowess. As Hector explains in the Iliad (Book VI), “I have long understood, deep in my heart, to defend the Trojans under Hector’s spear, and to win noble glory for myself and my forebears”. We see this carried over into Roman society, exemplified by the pursuit of glory and the acquisition of familial “honors”, which is how the word (honor) is used when it first enters the Western lexicon. Ancient Romans, particularly in the late Republican period, were absolutely obsessed with acquiring honors, in the plural. In this sense, honors meant public recognitions via titles, offices, and displays such as the all-important triumph, in which the celebrated man would have to be reminded that he was mortal, lest he follow the path of Bellerophon and think he had acquired divinity. By the late Republic the quest for honors had become an obsession, and their pursuit was fuel for the civil wars and lust for power that ended the Roman Republic. To wit, Cicero comments in his De Officiis (On Duties), “honor is the reward of virtue, and the pursuit of honor is the very aim of every great man. It is the highest obligation to seek the recognition of those who are worthy, not for personal gain, but for the service of the state.” And as later Roman commentators observed in looking back on that period, “No man is truly great unless he has acquired honor through the strength of his own actions. It is the pursuit of honor that drives men to greatness” (Livy, History of Rome, Book 2). In other words, honor in the Master Morality framework was exogenous, not endogenous. It was about getting others to recognize your greatness.

      The rise of the former slave populations with their Slave Morality truly inverted things. In the Gospels Jesus inveighs repeatedly against the pursuit of public recognition: “When you give to the needy, sound no trumpet before you, as the hypocrites do in the synagogues and in the streets, that they may be praised by others” (Matthew 6:2); “you know that the rulers of the Gentiles lord it over them, and their great ones exercise authority over them. It shall not be so among you. But whoever would be great among you must be your servant, and whoever would be first among you must be your slave” (Matthew 20:25-27); and further, “for they loved the glory that comes from man more than the glory that comes from God” (John 12:43). The message of humility was naturally taken up by those who were already materially humbled by the existing socioeconomic order (the slaves and urban poor) and resisted by those who had the most to lose: the practitioners of the Master Morality of public honor and glory. Over the following three hundred years Christianity came to be the dominant religion of the Mediterranean world. What followed in the West was a literal moral revolution, the overthrow of the Masters by the Slaves, and the creation of a new order in which honor was still a goal, but the means of its attainment shifted radically. Thus by the 5th century St. Augustine echoed the same message: “Let us then approach with reverence, and seek our honor in humility, not in pride” (City of God, published 426). Through the Middle Ages we hear from Dante Alighieri that “The greatest honor is not that which comes from men, but from God. And the greatest humility is knowing that, without His grace, we are nothing” (Divine Comedy, 1321). And at the end of the Medieval period, Sir Thomas Malory repeats the same message, that “He who is humble in heart, his honour shall be pure and his glory eternal; but pride is the enemy of honor and virtue” (Le Morte d’Arthur, 1470).

      The Death of Honor

      As we saw from the n-grams above, honor died around 1700 in Western Europe, as “the old aristocratic code of honor was gradually replaced by a new middle-class ethic of self-discipline, hard work, and social respectability” (Lawrence Stone, The Crisis of the Aristocracy, 1965). But following the trend line, its subsequent exhumation in the 19th century was as a pastiche to reminisce about or poke fun at, not as a genuine revival of the cultural value. In Sir Walter Scott’s magnum opus Rob Roy, for example, the word “honour” appears 152 times in a little over 500 pages. Jane Austen used the term quite often as well, 256 times in her collected works, but anyone who has read Austen will know that she came to bury honor, not to praise it.

      The exact reasons for the decline of honor are difficult to pinpoint, as there were myriad processes unfolding at the time. The Enlightenment subjected many cultural values to rational scrutiny, and Enlightenment thinkers like Voltaire had no mercy for a concept they urgently wished to be rid of: “The idea of honor has caused more bloodshed than any other human folly” (Voltaire, Candide, 1759). Looking back at the typology of honor’s benefits above, we can see that many of the reasons for its existence had become moot by the 18th and 19th centuries. The growth of centralized bureaucratic states allowed expanded recordkeeping and objective evaluations of merit, eliminating the need for reputational heuristics. Increased law, order, communication, and infrastructure meant greater movement and atomization of the individual as the Gemeinschaft gave way to the Gesellschaft; familial reputation gave way to licenses, certifications, degrees, and other formalized signals of social status. And since “honor” had been a catch-all stand-in for any number of human virtues and moral qualities, it could with little difficulty be replaced by more precise terms for those qualities, e.g. “honesty” or “bravery”. Thus “honor” came to be a term of critique for those areas of the world most resistant to modernizing and Enlightenment influences, where “honor culture” and “honor killings” persist as remnants of what was once the dominant mode of human thought and moral reasoning.

      Epilogue: Another Resurrection?

      Looking back at the n-grams, we can see one last fillip in the trend line beginning in the late 20th century. While no clear explanation has been put forth for this, one obvious suspect would be the rise of historical fantasy. With Lord of the Rings, Dungeons and Dragons, Game of Thrones, and countless other fantasy worlds attracting millions of fans on screens, pages, and tabletops, it is little wonder that concomitant historical concepts such as “honor” should rise in popularity as well. Growth in this direction is evidenced by the fact that historical fantasy tropes such as dragons, empires, knights, and castles have seen large gains in popularity since the 1990s, while historical terms with less resonance in historical fantasy, such as “crusade”, “dowry”, “abbot”, and “chastity”, demonstrate no such gains.

      Nevertheless, the usage in the English lexicon of “honor” remains a small fraction of what it once was. Honor continues to interest us academically and fictionally, but there is little chance of it returning to guide our choices and moral values in the here-and-now.

    3. Arguments for Natalism on the Left


      Natalism, the belief in the need for higher birthrates, is increasingly a topic of concern for various thinkers and prognosticators (Robin Hanson, Tyler Cowen, Zvi Mowshowitz, and Elon Musk, among many others). However, the calls for natalist policies come almost unanimously from the political right. I would like to argue that it would behoove the political left to take up this banner as well.

      The reasons that the left has been reluctant to promote natalism are somewhat obvious. One of the core ideological constituencies of the political left in many developed countries is young educated professionals, many of whom are child-free: some simply by the vicissitudes of professional life, and some by ecological or personal choice. For the child-free members of this group, to embrace natalism would be hypocrisy, and for a leftist group or party to embrace it would be to risk alienating an important source of votes, funds, and political energy. Natalism is, moreover, closely associated with the “traditional family” and “family values”, typically conservative calling cards.

      That said, there are two strong arguments for the left embracing natalism, one of them Machiavellian and the other Darwinian.

      The Machiavellian argument is simply that natalism could be a powerful argument and political tool for advancing many leftist causes. I will take the American example here; though the US is out of step with most Western countries on these issues, the example should still be illustrative for other political systems. Some of the dreams of the American left include expanding public healthcare, instituting paid medical and parental leave, and funding public schooling, including higher education. A powerful political argument from the natalist perspective is that the cost and burden of having, raising, and educating a child is prohibitive, and that this is a significant reason many adults in developed countries choose not to have children. By putting these policies in place, the cost of having, raising, and educating a child is distributed to society as a whole, just as the benefit of having that additional participant in the economy is distributed – public goods should have public funding. Should the American political left embrace natalism, it could seek common cause with natalists on the right to find compromises on these policies for the sake of boosting the birthrate.

      On the Darwinian side, leftists should consider embracing natalism to ensure their ideological and demographic sustainability. On the short-term, national scale, if left-leaning individuals and groups continue to have lower birth rates than their right-leaning counterparts, the political landscape could shift significantly over a few decades; higher birth rates on the right could lead to a future where conservative values and policies dominate simply through numerical superiority and intra-familial transmission. As Robin Hanson argues, over time this could mean a far future populated by the descendants of high-fertility subcultures like the Amish and Ultra-Orthodox Jews, who are of course very religious and conservative. When Hanson first promulgated this idea, I was resistant and argued that

      “The idea that society will be dominated by the high-fertility subcultures is reductionist and assumes that the part of society one is born into is nearly perfectly correlated with the part of society one affiliates with as an adult, which is not the case. Conservative religious groups have higher fertility, but many people raised in those environments convert to more secular or liberal worldviews as adults. Parts of society that don’t have high fertility compete with high-fertility parts by being more alluring. Equilibrium can continue indefinitely.”

      However, I did the math, positing a scenario in which there is a dominant culture D with fertility rate 1.5 and a subculture S which is only 5% of the population but has a fertility rate of 4. To ensure that S never becomes dominant, the conversion rate from S to D needs to be approximately 29.33% per generation – that is, for every 100 S individuals, roughly 29 need to convert to D each generation to prevent S from ever becoming the majority (a toy simulation of this kind of model follows below). 29% is a high barrier, considering that fewer than 10% of Amish leave their communities. It would be much easier to simply increase the fertility rate of mainstream society.[1] By promoting and supporting family-friendly policies that encourage higher birth rates within their communities, leftists can help ensure the demographic vitality of the coalition.
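      For anyone who wants to poke at this result, below is a minimal Python sketch of this kind of two-culture model. To be clear, the modeling choices here – discrete generations, a per-generation growth factor of fertility divided by two, conversion applied to each generation of S children – are my own illustrative simplifications, not necessarily the exact calculation above; changing them (e.g. when conversion happens relative to reproduction, or the time horizon) shifts the threshold somewhat.

      def s_stays_minority(c, fert_d=1.5, fert_s=4.0, s0=0.05, generations=100):
          """True if subculture S never exceeds half the population."""
          d, s = 1.0 - s0, s0
          for _ in range(generations):
              births_s = s * (fert_s / 2)  # S children this generation (two parents per child)
              converts = births_s * c      # fraction of each S cohort defecting to D
              s = births_s - converts
              d = d * (fert_d / 2) + converts
              if s > d:
                  return False
          return True

      def minimum_conversion_rate(step=0.0005, **kwargs):
          """Smallest per-generation S -> D conversion rate that keeps S a minority."""
          c = 0.0
          while c <= 1.0:
              if s_stays_minority(c, **kwargs):
                  return c
              c += step
          return 1.0

      print(f"threshold conversion rate: {minimum_conversion_rate():.1%}")

      Under these particular assumptions the search lands at roughly 31% – in the same ballpark as the figure above, and in any case well beyond observed Amish attrition.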

      From the long-term, global perspective, falling birth rates in secular, developed countries can lead to a significant population imbalance relative to developing countries, which, without stereotyping, are on average less secular and egalitarian than Western countries. This will put secular liberal values at a disadvantage globally in bodies such as the UN or its successors. Further, countries experiencing starkly declining populations may increasingly rely on immigration to sustain their economies and address labor shortages (NB: I am pro-immigration and this is overall a good thing!). However, as shown in the previous link, this immigration will increasingly have to come from nations with more conservative cultures, posing ever more difficult demands on systems of integration and assimilation, which may over time threaten the influence of liberal and secular ideals (we don’t have to go full Houellebecq and imagine some abrupt takeover). This process can be slowed and eased by boosting domestic fertility.


      [1] Note that this sort of scenario only really plays out in a peaceful world; in a more belligerent era, like most of human history, dynamism in social organization and scientific and technological advances allowed countries with small populations to dominate larger ones; see, for example, the Mongol, British, or Japanese victories over China, the Prusso-German successes over Russia, or, for the most extreme examples, the incursions of Pizarro and Cortés in the Americas.

    4. The Heterosexual Elect

      Navigating the Predestination of Sexual Orientation in the Bible Belt

      Christianity has long held a view called “predestination” – the idea that God plans out the entire moral course of human lives and determines before birth whether a person is destined for salvation or damnation. The early church had detractors from this view – the Pelagian heresy of the late fourth and early fifth centuries, for example, espoused a more maximalist view of human self-determination, believing that it was principally human acts, not divine planning, that destined someone for heaven or hell – but from Saint Augustine of Hippo onward, most brands of Christianity accepted the idea of predestination.

      A natural critique of the position that one’s fate is unalterable is to give in to anarchy and nihilism: why not be an immoral rake, whoring, drinking, and stealing to one’s heart’s content, if none of it matters anyway for one’s outcome? Because of this natural critique, many theologians moderated the position in different ways throughout the ages, allowing some small role for human will to “desire” or “deserve” the salvation that was already chosen for them. With the renewed theological intensity of the Reformation, however, some reformist hardliners went back to the stark belief, none more so than the Calvinists. The early Calvinists promoted the idea of Unconditional Election (i.e. election not dependent on any human will) of a select few, chosen by God to be saved, while most of humanity was condemned to hell. To avoid the loophole of amorality and nihilism, in the Calvinist view the elect would naturally embody the values that had commended them to salvation, and as such would be morally irreproachable. For the zealous Calvinist, then, social interaction became a theater of moral one-upmanship, everyone going out of their way to demonstrate their moral superiority, even though such moral acts had no eschatological or soteriological consequences.

      This belief has mostly faded from contemporary mainstream Christianity (though never underestimate the extent to which antiquated theological extremes can persist in isolated communities for centuries or more), but the model of “immutable predetermined status begetting moral competition” remains one that we can see in other places and times. In my personal experience this model perfectly describes perceptions of sexual orientation during my adolescence.

      I grew up in the American Bible Belt and was an adolescent in the early 2000s. I first learned from my peers, when I was about 10 or 11, of categories called “gay” and “straight”, and that one should definitely try to be “straight” and that essentially nothing was more ostracizing for an adolescent male than to be “gay”. As my adolescence went on, I learned increasingly that there were signs of one’s gayness or straightness, and that if one were incapable of properly replicating and affecting the signs of straightness, it would indicate one’s true gay nature. Growing up in this environment, every action, every gesture, every preference seemed to be scrutinized for clues about one’s “true” sexual orientation. I remember the subtle ways in which we policed ourselves and each other, seeking to conform to the expectations of an immutable reality. Among the list of signs I had to police myself for were the following:

      • When carrying schoolbooks in the hallway, they had to be held at my waist by a straight arm; carrying them in the crook of my elbow at chest level was gay.
      • When crossing my legs while sitting, the raised shin had to be held horizontally across the other leg; having the leg folded over with the back of my knee resting on the lower thigh was gay.
      • My socks had to be short or pushed down to my ankles; wearing my socks up too high was gay.
      • I had to demonstrate knowledge of appropriate musicians and sports stars; not liking sports, or listening to “gay” music like classical, was a sure sign that one was gay.

      It should be noted that I was a bit more delicate, nerdy, and “unmasculine” than many of my peers. I preferred listening to classical music and drinking tea from a formal British tea set, and would have rather watched Lord of the Rings than sportsball any day. As such, I was in a state of constant torment through most of my teen years, horrified by the possibility that I might be, unbeknownst even to me, gay – even though I felt attracted exclusively to women. But the battle to Not Be Gay was so utterly consuming that it kept me from considering the issue with any rational thought, and indeed I remained stridently anti-gay even as many of my friends began supporting gay rights and signing petitions to launch a GSA (Gay-Straight Alliance) chapter at my high school. I recall with shame that a friend slid the petition to me at the lunch table, and I ostentatiously slid it immediately onward.

      It was in this context that I first encountered an openly gay person. As one did, I was bantering with a friend and, unable to think of a witty comeback, called out “oh yeah, well I think you’re gay!”. He responded with an accepting, almost bemused “yeah, so?”. I was rendered, for I believe the first and only time in my life, a speechless, mouth-agape cartoon character, unable to process what I had just heard. I was the modern incarnation of a 16th-century Calvinist whose neighbor had just told him he worshipped the devil. Someone had openly declared himself anathema in what was then the most salient identitarian issue of my life.

      It was only with great hesitation, delay, and reserve that I shed the arms and armor of that identitarian struggle. It took several people in my close circle of friends coming out, and I still look back with remorse on my initial incredulous, mocking reactions. In one sense my perspective was not truly my fault, for I was a product of my environment. But I would be a poor rationalist if I did not admit that I was at least somewhat at fault for not being sufficiently critical of that environment.

      The model of the Unconditional Elect still holds, and can likely be seen in other places and times. I would welcome any input on where else we can see it at work.

    5. Culture as a Trade Barrier

      Or one way illiberal states get the better deal on trade agreements

      A concept that I would have imagined was thoroughly discussed, but which I somehow cannot find discussed anywhere, is culture as a trade barrier. Now, the idea that culture affects trade is nothing new – no one ever claimed that every country should buy all the products of the world equally; culture is a normal and expected part of the global marketing and trade landscape. But what I have never seen discussed is the extent to which culture can act as a hard barrier, one that can operate more strongly in one direction than the other, or that is malleable for the purposes of statecraft – particularly in the hands of totalitarian societies, which can shape public opinion and craft cultural trade preferences more easily than democracies can.

      What I mean when I say that culture can be a trade barrier, and often should be studied and analyzed as one, is this: different peoples in different countries tend to buy different things. Sounds simple, right? But it’s not. Some cultures can be very fussy about the products they consume coming in particular forms or from particular places, and these preferences can force foreign producers of ostensibly similar products (substitute goods, to use the formal term) to fight uphill battles to get into those markets, even when there is no equivalent barrier in the other direction (I list several examples below). These preferences can take many forms: sometimes people tend to buy things from their own country, or tend not to buy things from a specific country, for completely irrational reasons or even for no particular reason at all, just by default cultural programming. Or sometimes, because of a country’s cultural traditions and preferences, it may be extremely difficult to get its citizens to buy certain things from anywhere else. Critically, these preferences are not fixed: they are susceptible to commercial marketing campaigns, but equally susceptible to state programs of marketing or propaganda (depending on your perspective).

      Nationalized Preferences

      For an example of “national preference” trade barriers, we need only think of “buy American” campaigns. In the context of the World Trade Organization or other free trade agreements (e.g. the European Union or USMCA), national governments have their hands tied on providing direct subsidies, protections, and benefits to the industries covered by the agreement. For example, if it is agreed that countries should trade bicycles without trade barriers, it would be a violation for a party to the agreement to subsidize its domestic bicycle industry or restrict imports of bicycles, creating an unfair advantage over trade partners; the WTO has mechanisms for punishing violations by members. However, countries have a possible workaround: shifting national preferences. A campaign encouraging people to “buy American” can have small effects that shift buying preferences and make it somewhat harder for non-American products to compete in certain contexts – a slight raising of the cultural trade barrier. Though in practice these campaigns don’t have much effect in the US, in other countries waves of national sentiment can constitute huge trade barriers: the Chinese government has long fanned the flames of anti-Japanese sentiment, causing Japanese shops and factories to be damaged and closed amid Chinese protests, and even prompting the rebranding of Chinese brands accused of being “too Japanese”; when this happens, Japanese sales of many goods to China predictably fall. Critics may argue that national-origin preferences are often “signals” of quality (i.e. with no further information about products that appear identical, most Western consumers would likely judge “made in China” to be lower quality than “made in Germany”), but signaling does not map one-to-one onto these preferences – people may buy from their own country even when it means neither cheaper nor better goods, or buy from “friendly” countries over “unfriendly” ones, as seen in the American boycotts of French-sounding products at the outset of the Iraq War. So clearly there is something going on aside from signaling.

      Denationalized Preferences

      For the denationalized “cultural preference” barrier, take milk. In country A, people may be perfectly willing to buy UHT (ultra-high-temperature pasteurized, i.e. shelf-stable) milk like any other milk, while in a neighboring country B, people may overwhelmingly prefer fresh, refrigerated milk. As a result, country B can UHT-pasteurize and export all of its excess milk production into country A, but country A will have a much harder time shipping fresh milk to country B at affordable prices, since such shipments would require refrigerated trucks and much more demanding logistics to move the milk across international borders. Thus the culture of country B constitutes a form of trade barrier relative to that of country A. For a data-backed real-world example, consider bread consumption in France versus the UK. In the UK, bread is often consumed, as in the US, in a soft, pre-sliced form, easy to pop in the toaster for breakfast and just as easy to keep fresh on the shelf for days on end; in France, bread is by and large consumed fresh, with a crackly-crusty exterior and a soft interior, a juxtaposition that breaks down within hours if the loaf is wrapped in plastic, or turns dry and hard if it is left unwrapped – in short, impossible to pack and ship internationally. As a result, we got the following (before Brexit):

      French exports of bread to the UK dwarfed the inverse – France could produce and ship the kind of bread that Britons wanted to eat, but the UK couldn’t produce and ship the kind of bread that the French wanted to eat. Thus French bread exports to the UK were, from about 2005 on, three to six times UK bread exports to France. There are certainly other possible explanations for this phenomenon, but I imagine the cultural barrier is a significant one.

      Another notable real-world example, though slightly more abstract, is salmon. Prior to the 1990s, Japan consumed very little salmon, and almost exclusively in cooked form, viewing salmon as a fish prone to parasites that should never be eaten raw, whereas in Norway raw or lightly smoked salmon is a staple of the national cuisine. In the late 1980s, Norwegian fishermen found themselves with a surplus of salmon and insufficient markets to offload it into, so they set out to change the culture of Japan through a fierce marketing campaign that transformed the culinary culture of the land of the rising sun – salmon sushi is now arguably one of the most iconic emblems of Japanese cuisine. The culture of Japan constituted a trade barrier, and clever Norwegian marketing lowered, or even reversed, it.

      The Illiberal Advantage

      As I mentioned, one aspect of this discussion – the impact of culture on trade – is nothing new. But what is often missed from these analyses is that it does not operate equally for all countries – some have much stronger cultural “walls” than others. It stands to reason that authoritarian regimes with tight media controls (e.g. China) have much more power to shift culture in directions that bring economic benefit – for example, encouraging Traditional Chinese Medicine as a way of stimulating the domestic market and raising a trade barrier against foreign pharmaceuticals, or perhaps working behind the scenes to discourage state-affiliated firms (increasingly all major Chinese firms) from buying from geostrategic competitors. As such, liberal democracies have a strong incentive to understand this greater power of their non-democratic rivals and trade competitors to shape trade flows and effectively circumvent and nullify aspects of free trade agreements. One solution would be to create monitoring offices at the WTO, or embedded in trade-agreement arbitration mechanisms, to set limits on the scale or intensity of marketing campaigns or state manipulation of cultural preferences that affect trade.

    6. Peculiarities of China

      I have now spent nearly a year and a half in China. I thought it fitting to take some time to try to remember the things that were shocking to me when I first arrived, before everything becomes normalized.

      You Can Turn Around Wherever the F*** You Want

      [photo]

      Without question one of the most shocking things about China is the driving culture. It is simultaneously terrifying and amazing. There are two simple rules that everyone follows to the letter. Rule 1: do whatever the f*** you want. Obviously I exaggerate a bit for effect, but compared with the US, it certainly is a laissez-faire driving environment. You can change lanes whenever you want, pull a three-point turn in the middle of a busy thoroughfare, make a right turn whenever other people are going that way, or turn left against traffic on green. This works because of Rule 2: be prepared to stop on a f***ing dime. In this regard, Chinese drivers are surely among the best in the world. Everyone is sublimely excellent at watching their own asses, and incredibly alert. Every single time I take a car anywhere, I witness behavior that would without question cause an accident in the US. But in China it doesn’t, because the drivers are just that good.

      Pollution is Serious

      [photo]

      Despite declaring a war on pollution and having had a lot of success in fighting it, China’s pollution is still really bad. I live in one of the least-polluted areas of the country, but still experience days rated “very unhealthy” according to World Health Organization standards. And unfortunately, despite constant moves toward green energy, China is still building an enormous number of new coal-fired plants – equal, in fact, to the total capacity the rest of the world has taken offline in recent years – meaning that much of the progress Western environmental movements have made in reducing carbon emissions will essentially be neutralized in the next few years.

      The Old China is Still Around

      [photo]

      Despite proclamations of the rise of a New China, even here in the heart of Shanghai you find tiny little shops filled with hand tools and artisans. Nevertheless…

      The New China is Big and Beautiful

      [photo]

      China is heaven for fans of modern monumental architecture. Interior spaces are utterly massive, and many feature incredible lighting and exterior design. Every time I look at a Chinese skyline at night, I feel that the cityscapes – even in smaller towns – have overshot sci-fi worlds like Blade Runner 2049 in their sense of superhuman grandeur. But these imposing edifices are thrown into even starker relief in places where…

      Old and New Sit Side-by-Side

      [photo]

      One hears this truism far too often, but it is far too true. Much of China’s development has been haphazard, and high-tech commercial areas sometimes spring up right next to ancient monuments. Shanghai is one such example, where dozens of temples sit nestled among skyscrapers and Buddhist monks bump elbows with CEOs on the sidewalk. In fact it’s hard not to, considering that

      Chinese Crowds are Next F***ing Level

      [photo]

      One knows that China is heavily populated, but the extent to which that is true eludes the imagination of those who have never been there. If you dumped the entire population of Europe into the United States, the total would still be hundreds of millions shy of the population of China. And it’s mostly concentrated in cities: 6 of the 10 most populous cities in the world are in China, including spots 1, 2, 3, and 4. You could triple the downtown population of New York City and it would only be about that of Shanghai, and still far under Chongqing. You do not know crowded if you don’t know China crowded. Which can get extremely unpleasant when you factor in the fact that:

      Smoking is Ubiquitous

      What’s that guy doing?

      [photo]

      Picking out fruit in the fruit store. While smoking a cigarette.

      What’s this guy doing?

      [photo]

      Standing next to a no-smoking sign. While smoking a cigarette (I asked this person if he could read, and he just glowered at me). More than a third of the Chinese population smokes (though this statistic exhibits strong sexual dimorphism, with the rate for men being over 50% and that for women under 5%). I have heard anecdotally that one reason for the high smoking rate is that cigarette sales taxes are a huge source of revenue for local governments, but I do not understand the structure of Chinese civic finance well enough to verify or refute that assertion. I have, however, recently noticed a sharp uptick in the number of anti-tobacco messages through various channels. No-smoking signs exist in most of the places you would expect to find them in the West, but they are routinely ignored as a matter of principle, to the extent that I have taken to using the simile “as useless as a Chinese no-smoking sign”. It is particularly accepted – to the extent that it is essentially the rule – to smoke in bathrooms, and every train station bathroom I have ever been in has reeked of cigarette smoke. Despite the signs.

      Squatty Potties

      [photos]

      One hears about there being squatting toilets in China. A typical reaction is to assume that they are the old-style toilets of poverty, and that modern toilets are new and sitting-style. This is absolutely not the case: the squatting toilet featured here is on a new-model high-speed train. Many people simply prefer the squatty potties, because they can actually help with defecation. The problem, however, is that many people hold that preference so strongly that they insist on squatting on Western-style sitting toilets, such as the one at the Starbucks where this sign was posted. Starbucks felt the need to respond to that proclivity with the second point on the sign shown here.

      Cherry Tomato is a Fruit

      [photo]

      There is an aphorism in English that “intelligence is knowing the tomato is a fruit; wisdom is not putting it in a fruit salad”. The Chinese would take great offense to that, as cherry tomatoes are regular features of fruit salad – in fact one of the most common ingredients. Cherry tomatoes are featured atop fruit pizza, in yogurt, and even candied to sprinkle atop ice cream. In case you’re wondering, they’re no sweeter than American varieties; in fact, I’ve had many varieties in the US that were far sweeter and less tart.

      Eating on the job in professional settings

      [photo]

      What you see in this photo is a pharmacist, in her lab coat, in a store that is open for business, eating dinner with a companion in the middle of the store. This behavior is extremely common. There is no shame or embarrassment, or even an attempt to hide it behind a counter. Nope – plonk a table down in the middle of the store and chow down.

      White people for advertising purposes

      [photo]

      It may be somewhat hard to see, but there are three white people used in advertisements in this photo – one at the bottom right and two at the top left. Regardless of the product, white people are often used to lend a luster of quality and classiness, particularly older white men who look like they could be professors. Though the official line is that China must surpass the West, in practice a lot of Western things are still celebrated as ideals.

      Atypical food combinations

      [photo]

      What, you don’t put cheese and mustard on your waffles? What about mayonnaise and corn on your pizza? How about espresso lemonade, beer-flavored lattes, or yogurt and green tea? For me, the things that are completely foreign in China are not shocking; it’s the complete re-appropriation and recombination of Western foods that makes me do a double-take. And although it’s usually shocking, I’m constantly appreciative of the willingness to completely reimagine the artificial boundaries we place on food in our own cultures.

      Very strong opinions about borders

      [photo]

      China does not see eye-to-eye with its neighbors regarding where international boundaries lie, and makes sure to defend its position at every opportunity. By law, all maps and globes printed in China must display the government’s official position on borders, including the famous nine-dash line of maximalist claims in the South China Sea (reaching all the way to the coast of Borneo). And by all maps, I mean all maps – even novelty items or a children’s movie. In fact, these kinds of things are perhaps the most important of all from the government’s perspective: it’s important that kids be raised from birth always seeing the maximalist territorial claims, always believing such positions as “Taiwan is an inseparable part of China”.

      A lot of dress-up

      [photo]

      Many people in China like dressing up in, let’s just say, “atypical” clothing. The two most common kinds are the Victorian-doll getup seen here and, more commonly, “Hanfu” (“Han clothing”), an anachronistic mishmash of historical attire from virtually any pre-modern period in China – as long as it looks historical and Chinese-y. This movement is often, but not always, associated with a Chinese nationalist rejection of Western-influenced attire.

      Lots of thermoses

      [photo]

      This photo depicts a thermos store. A store… entirely of thermoses. It was not even the only one in this particular mall. Many Chinese people carry thermoses everywhere they go, usually filled with tea leaves, and no airport, train, or waiting room is complete without a complimentary hot-water dispenser so that people can top up their tea bottles. In literally every taxi I have ever been in in China, the driver has had a thermos full of tea (for some reason this is not always the case with Didi, the Chinese Uber clone).

    7. On the Relative Longevity of Chinese and Roman Civilization

      Ask yourself this question: which survived longer, China or Rome? The conventional answer is China, of course. But why is that the conventional answer? Is it not just a story we tell ourselves?

      Why do we say that China is 2,000 years old, but that the Roman Empire fell 1,500 years ago? China was conquered and divided numerous times in its history; its dominant languages have changed drastically (though the writing system was maintained, so physical evidence of those changes is scant); and its dominant religions, customs, and institutions have oscillated and varied immensely.

      For comparison (note, this is in very broad strokes):

      • All European languages with the exception of Greek use the Roman alphabet – or Cyrillic, which grew out of the mission of the Eastern Roman scholars Cyril and Methodius.
      • The leaders of the Roman churches (in Rome and Constantinople) were the unquestioned religious leaders of Europe until the 1400s in the East, until the 1500s in Northern Europe, and still today in most of Southern Europe.
      • All European legal systems with the exception of the British ones derive in large part from the Roman/Justinian code.
      • The claimed successor to the Roman Empire in the West, the Holy Roman Empire, existed from 800 until the early 1800s; if you count Byzantium, there was never a gap in the continuity of claimed successor empires until only 200 years ago. China, in comparison, had the Three Kingdoms, the Sixteen Kingdoms, the Five Dynasties and Ten Kingdoms, etc.
      • That empire was brought to an end by a French emperor presiding over a government substantially modeled on the Roman Republic, including consuls and eagle-adorned legion standards.
      • A German empire was later re-founded under a Kaiser, the title being derived from Caesar.
      • Latin was the dominant academic, diplomatic, and scientific language of Europe until the 18th century.

      This list could go on, but I’ll leave it here for now.

      I’m not attempting to make an argument for the survival of Rome per se, but merely in comparison to what is the generally accepted continuity of China, for example. If we accept the legitimacy of Chinese successor kingdoms after periods of imperial collapse and chaos, then I fail to see why the Holy Roman Empire doesn’t count as a legitimate successor kingdom to the Roman Empire by the same criteria. The HRE arguably has even more legitimacy, given that it had the sanction of an actual continuing institution of the Roman Empire, i.e. the Catholic Church, and all the while a very real Eastern Roman Empire saw themselves as every bit as Roman as the Western empire. They referred to themselves as Romaioi, for example.