What is Technology

One of the main themes of W. Brian Arthur’s book The Nature of Technology: What It Is and How It Evolves is simple: technology evolves.

As I reflected on the book, I found myself thinking about the opening sequence from Stanley Kubrick’s film 2001: A Space Odyssey. The film opens with a sequence called “Dawn of Man.” A group of apes learns how to use animal bones as tools and weapons. One ape throws the bone into the air. The camera follows the bone through the air and then, suddenly, we’re millions of years in the future seeing a spaceship with a similar shape.

The underlying message of the film sequence and cut is similar to the book’s core message: Humans have evolved along with our technology — and both will continue to evolve.

***

Context

It’s worth understanding a bit about the author and the research institute with which he’s chiefly affiliated, the Santa Fe Institute.

W. Brian Arthur is an economist. He was Morrison Professor of Economics and Population Studies at Stanford University from 1983 to 1996, and one of his key contributions was the concept of increasing returns. He was also one of the early pioneers of complexity theory, particularly as it applied to economics. In contrast to conventional neoclassical economics, which assumes rational agents, equilibrium, and elegant, balanced equations, complexity economics assumes a world in which agents differ from each other, have imperfect information, and are constantly changing and reacting. The world it models is more vibrant, changing, and organic, displaying emergent phenomena.

Technology plays a key role in that worldview, hence the book’s goal: to understand rigorously what technology is and how it changes (evolves). After laying that intellectual foundation, W. Brian Arthur makes the link between technology and economics explicit.

***

The core building blocks of the argument

“…technology—the collection of artifacts and methods available to a society—creates new elements from those that already exist and thereby builds out.”

W. Brian Arthur starts by providing three definitions of technology—a singular, a plural, and a general:

  1. A technology is a means to fulfill a human purpose.

  2. Technology is an assemblage of practices and components.

  3. Technology is the entire collection of devices and engineering practices available to a culture.

He offers three foundational principles to think about technology:

  1. All technologies are combinations.

  2. Each component of technology is itself in miniature a technology.

  3. All technologies harness and exploit some effect or phenomenon, usually several.

He explains these principles as follows:

  1. All technologies are combinations:

    • Technology consists of parts organized around a central concept or principle: “the method of the thing…”

    • The primary structure of a technology consists of a main assembly that carries out its base function plus a set of subassemblies that support this.

    • The assemblies, subassemblies, and individual parts are all executables; they are all technologies, which brings us to the second principle…

  2. Each component of technology is itself in miniature a technology:

    • Technologies are built from a hierarchy of technologies. For example:

    • The F-35 fighter aircraft’s main purpose is to provide close air support, intercept enemy aircraft, suppress enemy radar defenses, and take out ground targets. It does so by combining sub-technologies that include: the wings; the engine; the avionics suite (or aircraft electronic systems); the landing gears; the flight control systems; the hydraulic system; and so forth.

    • And you can go further down the hierarchy, breaking down the engine: air inlet system, compressor system, combustion system, turbine system, nozzle system.

    • And further, breaking down the air inlet system, which Arthur does, and so on

    • He also extends the technology hierarchy upward, placing the F-35 into a larger system, the carrier wing, all the way up into a grand technology: a theater-of-war grouping, which consists of a carrier group supported by land-based air units, airborne refueling tankers, National Reconnaissance Office satellites, ground surveillance units, and marine aviation units.
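
    • The hierarchy Arthur describes maps naturally onto a recursive data structure. Below is a minimal sketch (my own illustration in Python, not from the book) that models a technology as a tree of sub-technologies, using the simplified F-35 breakdown above:

```python
from dataclasses import dataclass, field

@dataclass
class Technology:
    """A technology: a purpose plus the sub-technologies it combines."""
    name: str
    components: list["Technology"] = field(default_factory=list)

    def walk(self, depth: int = 0) -> None:
        """Print the hierarchy, one level of indentation per level of assembly."""
        print("  " * depth + self.name)
        for part in self.components:
            part.walk(depth + 1)

# Simplified from the F-35 breakdown above (not an exhaustive parts list).
engine = Technology("engine", [
    Technology("air inlet system"),
    Technology("compressor system"),
    Technology("combustion system"),
    Technology("turbine system"),
    Technology("nozzle system"),
])

f35 = Technology("F-35", [
    Technology("wings"),
    engine,
    Technology("avionics suite"),
    Technology("landing gear"),
    Technology("flight control systems"),
    Technology("hydraulic system"),
])

f35.walk()  # each line printed is itself a technology, per principle 2
```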

  3. All technologies harness and exploit some effect or phenomenon, usually several:

    • e.g., oil refining exploits vaporization/condensation; hammer exploits momentum; radiocarbon dating exploits carbon-14 decay; trucks exploit energy from fossil fuels and friction

    • Phenomena have to be harnessed, coaxed, and tuned to work effectively; hence, the need for supporting technologies (the subassemblies) that bring them together in a usable way

    • This allows W. Brian Arthur to provide an incredibly powerful way to characterize technology: A technology is a programming of phenomena to our purposes.

    • Another way to think about this is: Phenomena are the “genes” of technology

    • Here, W. Brian Arthur makes a fascinating detour to address an important implication of his principles:

      I defined a technology as a means to a purpose. But there are many means to purposes and some of them do not feel like technologies at all. Business organizations, legal systems, monetary systems, and contracts are all means to purposes and so my definition makes them technologies. Moreover, their subparts—such as subdivisions and departments of organizations, say—are also means to purposes, so they seem to share the properties of technology. But they lack in some sense the “technology-ness” of technologies. So, what do we do about them? If we disallow them as technologies, then my definition of technologies as means to purposes is not valid. But if we accept them, where do we draw the line? Is a Mahler symphony a technology? It also is a means to fulfill a purpose—providing a sensual experience, say—and it too has component parts that in turn fulfill purposes. Is Mahler then an engineer? Is his second symphony—if you can pardon the pun—an orchestration of phenomena to a purpose?

    • He proposes that, yes, “…all means—monetary systems, contracts, symphonies, and legal codes, as well as physical methods and devices—are technologies.”

    • He dives into a discussion about money and the monetary system as technologies that is very relevant to Bitcoin, though he doesn’t mention Bitcoin

    • The reason, he suggests, that some technologies feel more technology-like than others is the following:

      Conventional technologies, such as radar and electricity generation, feel like “technologies” because they are based upon physical phenomena. Nonconventional ones, such as contracts and legal systems, do not feel like technologies because they are based upon nonphysical “effects”—organizational or behavioral effects, or even logical or mathematical ones in the case of algorithms.

These are the core building blocks of his argument. The rest are implications and observations.

***

Deepening the argument

From there, W. Brian Arthur makes other fascinating observations:

  • Science and technology share a symbiotic relationship. Science uncovers phenomena, technology exploits them. But then technology helps science advance, uncovering more usable phenomena. And so on.

  • Standard engineering is the mechanism by which technologies come into being and evolve.

    • The principles above allow technologies to be viewed not from the outside, as stand-alone objects, but from the inside:

      …when we look from the inside, we see that a technology’s interior components are changing all the time, as better parts are substituted, materials improve, methods for construction change, the phenomena the technology is based on are better understood, and new elements become available as its parent domain develops. So a technology is not a fixed thing that produces a few variations or updates from time to time. It is a fluid thing, dynamic, alive, highly configurable, and highly changeable over time.

    • It’s here that he offers the main theme of the book: “…technology evolves by combining existing technologies to yield further technologies and by using existing technologies to harness effects that become technologies.”

    • W. Brian Arthur goes on to argue that a key method by which this happens is standard engineering. Standard engineering is problem-solving, and problem-solving is about combining technologies.

    • There’s a beautiful passage about engineering and creativity:

      [Engineering] is a form of composition, of expression, and as such it is open to all the creativity we associate with [architecture, fashion, or music]” … [But] the reason engineering is held in less esteem than other creative fields is that unlike music or architecture, the public has not been trained to appreciate a particularly well-executed piece of technology. The computer scientist C. A. R. (Tony) Hoare created the Quicksort algorithm in 1960, a creation of real beauty, but there is no Carnegie Hall for the performance of algorithms to applaud his composition.

    • Creativity is key to inventing new, novel technologies…

  • Novel technologies are truly new technologies—inventions

    • He defines a novel technology as: “one that uses a principle new or different to the purpose in hand.”

    • It’s worth calling out here that this sounds a lot like Clayton Christensen’s definition of a disruptive technology

    • W. Brian Arthur dives deep into how these new principles are discovered. Ultimately, it’s about creativity:

      At the creative heart of invention lies appropriation, some sort of mental borrowing that comes in the form of a half-conscious suggestion. …

      At [the heart of invention] lies the act of seeing a suitable solution in action—seeing a suitable principle that will do the job, [which] in most cases arrives by conscious deliberation; it arises through a process of association—mental association.

  • When a novel technology emerges, it is crude. It works, but barely, and then begins to improve, develop.

    • Once the novel technology emerges, standard engineering kicks in, and the process of problem-solving and recombination takes over

    • As technologies improve, as they get stretched to solve more and more difficult problems, they become more complex—more subassemblies, more parts, etc.

    • But this has a cost:

      Over time it becomes encrusted with systems and subassemblies hung onto it to make it work properly, handle exceptions, extend its range of application, and provide redundancy in the event of failure.

    • Note, again, how all this sounds very similar to Clayton Christensen’s theories: an existing solution “overshoots” the market; a cheaper, simpler, more convenient disruptive innovation with an entirely new approach emerges; it is initially inferior to the existing high end solutions so it targets the low end of the market; but then it improves much faster, ultimately overtaking the existing high end solutions.

  • Revolutions in the economy happen through domains of technology

    • Now, perhaps this is obvious, but technologies organize themselves into domains of similar technologies—and W. Brian Arthur admits as much

    • But then he points out that the domains of technology have a different and important character from individual technologies, particularly in how they impact the economy:

      [Domains] are not invented; they emerge, crystallizing around a set of phenomena or a novel enabling technology, and building organically from these. They develop not on a time scale measured in years, but on one measured in decades—the digital domain emerged in the 1940s and is still building out. And they are developed not by a single practitioner or a small group of these, but by a wide number of interested parties.

      Domains also affect the economy more deeply than do individual technologies. The economy does not react to the coming of the railway locomotive in 1829 or its improvements in the 1830s—not much at any rate. But it does react and change significantly when the domain of technologies that comprise the railways comes along. In fact, one thing I will argue is that the economy does not adopt a new body of technology; it encounters it. The economy reacts to the new body’s presence, and in doing so changes its activities, its industries, its organizational arrangements—its structures. And if the resulting change in the economy is important enough, we call that a revolution.

    • Domains have a life cycle, both of the technology and the interest/investment around them (as articulated by Carlota Perez)

    • Some domains, however, avoid the life cycle of youth, adulthood, and old age because they morph—one of the key technologies changes (vacuum tubes to transistors to integrated circuits) or the main application changes (computation: scientific calculation in wartime, to accounting, to personal computing, etc.)—or the domain throws off new subdomains (e.g., computing and telecommunications birthed the internet, and it looks like the internet and cryptography are now birthing a new subdomain: crypto networks)

    • W. Brian Arthur then goes into detail of how technology domains and industries interact:

      How does this happen? Think of the new body of technology as its methods, devices, understandings, and practices. And think of a particular industry as comprised of its organizations and business processes, its production methods and physical equipment. All these are technologies in the wide sense I talked of earlier. These two collections of individual technologies—one from the new domain, and the other from the particular industry—come together and encounter each other. And new combinations form as a result.

    • This is fascinating to consider and explains a lot. Barnes & Noble didn’t ride the internet to dominance; Amazon emerged. Taxi companies weren’t remade by mobile; Uber emerged. Industries tend not to simply adopt technologies; they’re transformed by them.

    • W. Brian Arthur defines redomaining to mean that…

      …industries adapt themselves to a new body of technology, but they do not do this merely by adopting it. They draw from the new body, select what they want, and combine some of their parts with some of the new domain’s, sometimes creating subindustries as a result. As this happens the domain of course adapts too. It adds new functionalities that better fit it to the industries that use it.

  • Technology evolves. All of the above sets the stage for W. Brian Arthur’s central argument.

    • Example: In the early 1900s, a few engineers succeeded in adding a third electrode to the diode vacuum tube, making it easier to transmit and receive radio signals and thus birthing radio broadcasting; incidentally, the same technology in a different configuration could serve as a logic element, thus birthing modern computation

    • W. Brian Arthur makes a few central statements:

      • “Any solution to a human need—any novel means to a purpose—can only be made manifest in the physical world using methods and components that already exist in that world.”

      • “All technologies are birthed from existing technologies in the sense that these in combination directly made them possible.”

      • “…novel elements are directly made possible by existing ones. But more loosely we can say they arise from a set of existing technologies, from a combination of existing technologies. It is in this sense that novel elements in the collective of technology are brought into being—made possible—from existing ones, and that technology creates itself out of itself.”

      • In other words, he says, technology is autopoietic—“self-creating”—meaning…

      • “…every technology stands upon a pyramid of others that made it possible in a succession that goes back to the earliest phenomena that humans captured…[and] all future technologies will derive from those that now exist (perhaps in no obvious way) because these are the elements that will form further elements that will eventually make these future technologies possible.”

    • And from here it gets grand, truly mind-blowing:

      Autopoiesis gives us a sense of technology expanding into the future. It also gives us a way to think of technology in human history. Usually that history is presented as a set of discrete inventions that happened at different times, with some cross influences from one technology to another. What would this history look like if we were to recount it Genesis-style from this self-creating point of view? Here is a thumbnail version.

      In the beginning, the first phenomena to be harnessed were available directly in nature. Certain materials flake when chipped: whence bladed tools from flint or obsidian. Heavy objects crush materials when pounded against hard surfaces: whence the grinding of herbs and seeds. Flexible materials when bent store energy: whence bows from deer’s antler or saplings. These phenomena, lying on the floor of nature as it were, made possible primitive tools and techniques. These in turn made possible yet others. Fire made possible cooking, the hollowing out of logs for primitive canoes, the firing of pottery. And it opened up other phenomena—that certain ores yield formable metals under high heat: whence weapons, chisels, hoes, and nails. Combinations of elements began to occur: thongs or cords of braided fibers were used to haft metal to wood for axes. Clusters of technology and crafts of practice—dyeing, potting, weaving, mining, metal smithing, boat-building—began to emerge. Wind and water energy were harnessed for power. Combinations of levers, pulleys, cranks, ropes, and toothed gears appeared—early machines—and were used for milling grains, irrigation, construction, and timekeeping. Crafts of practice grew around these technologies; some benefited from experimentation and yielded crude understandings of phenomena and their uses.

      In time, these understandings gave way to close observation of phenomena, and the use of these became systematized—here the modern era begins—as the method of science. The chemical, optical, thermodynamic, and electrical phenomena began to be understood and captured using instruments—the thermometer, calorimeter, torsion balance—constructed for precise observation. The large domains of technology came on line: heat engines, industrial chemistry, electricity, electronics. And with these still finer phenomena were captured: X-radiation, radio-wave transmission, coherent light. And with laser optics, radio transmission, and logic circuit elements in a vast array of different combinations, modern telecommunications

      In this way, the few became many, and the many became specialized, and the specialized uncovered still further phenomena and made possible the finer and finer use of nature’s principles. So that now, with the coming of nanotechnology, captured phenomena can direct captured phenomena to move and place single atoms in materials for further specific uses. All this has issued from the use of natural earthly phenomena. Had we lived in a universe with different phenomena we would have had different technologies. In this way, over a time long-drawn-out by human measures but short by evolutionary ones, the collective that is technology has built out, deepened, specialized, and complicated itself.

    • ☝️ STOP — read that again and savor it. It’s a masterpiece of writing, compressing all of technology and human history into four paragraphs and the articulation of an incredible idea: a Genesis-style reconstruction of technology evolution!

    • There are two forces that drive technology evolution

      • One force is combination, which is the ability of the existing collective technology to “supply” new technologies, whether by putting together existing parts and assemblies, or by using them to capture phenomena.

      • The other force is the “demand” for means to fulfill purposes, the need for novel technologies.

    • “Existing technologies used in combination provide the possibilities of novel technologies: the potential supply of them. And human and technical needs create opportunity niches: the demand for them. As new technologies are brought in, new opportunities appear for further harnessings and further combinings. The whole bootstraps its way upward.”

    • W. Brian Arthur describes an experiment that he and his colleague Wolfgang Polak conducted, using a computer to simulate this evolution with simple logic circuits. I won’t go into it here because it’s complex (though a toy sketch of the idea follows these excerpts), but the implication is pretty amazing (emphasis mine):

      “We found that after sufficient time, the system evolved quite complicated circuits.”…

      “In several of the runs the system evolved an 8-bit adder, the basis of a simple calculator. This may seem not particularly remarkable, but actually it is striking.”…

      “…the chances of such a circuit being discovered by random combination in 250,000 steps is negligible. If you did not know the process by which this evolution worked, and opened up the computer at the end of the experiment to find it had evolved a correctly functioning 8-bit adder against such extremely long odds, you would be rightly surprised that anything so complicated had appeared. You might have to assume an intelligent designer within the machine.”

    • Meaning: evolution is powerful

    • W. Brian Arthur clarifies that, while this is evolution, it is not the same mechanism as Darwinian evolution, which is based on mutation and selection; this is combinatorial evolution
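
    • This is not Arthur and Polak’s actual experiment, but a toy sketch of the idea under deliberately crude assumptions (Python, with made-up parameters): start from the raw inputs and a single primitive combinator (NAND), repeatedly combine existing “technologies” at random, and note whenever a combination satisfies a waiting “opportunity niche” (a needed truth table). Even this cartoon shows useful building blocks accumulating from combination alone:

```python
import random

random.seed(0)

# Each "technology" is a truth table over two inputs a, b, listed in the
# order (a, b) = (0,0), (0,1), (1,0), (1,1).
A = (0, 0, 1, 1)   # the raw input a
B = (0, 1, 0, 1)   # the raw input b

def nand(x, y):
    """Combine two existing circuits by feeding their outputs into a NAND gate."""
    return tuple(1 - (p & q) for p, q in zip(x, y))

# "Opportunity niches": functions that count as useful new technologies.
targets = {
    (0, 0, 0, 1): "AND",
    (0, 1, 1, 1): "OR",
    (0, 1, 1, 0): "XOR",
    (1, 1, 0, 0): "NOT a",
}

repertoire = {A, B}  # the existing collective of technologies
for step in range(1, 201):
    x, y = random.choice(list(repertoire)), random.choice(list(repertoire))
    new = nand(x, y)              # a combination of two existing technologies
    if new not in repertoire:
        repertoire.add(new)
        if new in targets:
            print(f"step {step:3d}: discovered {targets[new]} by combination")
```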

  • The economy is technology, and it evolves with technology.

    • W. Brian Arthur makes a subtle and powerful point that I had to read a number of times before I grasped it, particularly its significance

    • Traditionally, the economy is defined as a “system of production and distribution and consumption” of goods and services, as if the economy is a giant container for technology

    • W. Brian Arthur defines it differently: the economy is the set of arrangements and activities by which a society satisfies its needs; meaning…

      The set of arrangements that form the economy include all the myriad devices and methods and all the purposed systems we call technologies. They include hospitals and surgical procedures. And markets and pricing systems. And trading arrangements, distribution systems, organizations, and businesses. And financial systems, banks, regulatory systems, and legal systems. All these are arrangements by which we fulfill our needs, all are means to fulfill human purposes. All are therefore by my earlier criterion “technologies,” or purposed systems. I talked about this in Chapter 3, so the idea should not be too unfamiliar. It means that the New York Stock Exchange and the specialized provisions of contract law are every bit as much means to human purposes as are steel mills and textile machinery. They too are in a wide sense technologies.

      If we include all these “arrangements” in the collective of technology, we begin to see the economy not as container for its technologies, but as something constructed from its technologies. The economy is a set of activities and behaviors and flows of goods and services mediated by—draped over—its technologies. It follows that the methods, processes, and organizational forms I have been talking about form the economy.

      The economy is an expression of its technologies.

      The shift in thinking I am putting forward here is not large; it is subtle. It is like seeing the mind not as a container for its concepts and habitual thought processes but as something that emerges from these. Or seeing an ecology not as containing a collection of biological species, but as forming from its collection of species. So it is with the economy.

    • The implication:

      Because the economy is an expression of its technologies, it is a set of arrangements that forms from the processes, organizations, devices, and institutional provisions that comprise the evolving collective; and it evolves as its technologies do. And because economy arises out of its technologies, it inherits from them self-creation, perpetual openness, and perpetual novelty. The economy therefore arises ultimately out of the phenomena that create technology; it is nature organized to serve our needs.

    • W. Brian Arthur paints a picture of a very complex thing, this economy, and the meta point is that it is very difficult to understand how it works:

      The economy is not a simple system; it is an evolving, complex one, and the structures it forms change constantly over time. This means our interpretations of the economy must change constantly over time. I sometimes think of the economy as a World War I battlefield at night. It is dark, and not much can be seen over the parapets. From a half mile or so away, across in enemy territory, rumblings are heard and a sense develops that emplacements are shifting and troops are being redeployed. But the best guesses of the new configuration are extrapolations of the old. Then someone puts up a flare and it illuminates a whole pattern of emplacements and disposals and troops and trenches in the observers’ minds, and all goes dark again. So it is with the economy. The great flares in economics are those of theorists like Smith or Ricardo or Marx or Keynes. Or indeed Schumpeter himself. They light for a time, but the rumblings and redeployments continue in the dark. We can indeed observe the economy, but our language for it, our labels for it, and our understanding of it are all frozen by the great flares that have lit up the scene, and in particular by the last great set of flares.

  • Technology is becoming biological and vice versa

    • Technology is becoming biological: “…technologies are acquiring properties we associate with living organisms. As they sense and react to their environment, as they become self-assembling, self-configuring, self-healing, and “cognitive,” they more and more resemble living organisms. The more sophisticated and “high-tech” technologies become, the more they become biological.”

    • And biology is becoming technology: “As biology is better understood, we are steadily seeing it as more mechanistic.” … “…organisms and organelles [are] highly elaborate technologies. In fact, living things give us a glimpse of how far technology has yet to go. No engineering technology is remotely as complicated in its workings as the cell.”

    • Which means, per the above logic, that the economy is becoming biological—generative.

  • Our economy is not a system at equilibrium but rather an evolving, complex system

    • All of this means the economy is much more vibrant than we think, a thing of “messy vitality”

      Economics itself is beginning to respond to these changes and reflect that the object it studies is not a system at equilibrium, but an evolving, complex system whose elements—consumers, investors, firms, governing authorities—react to the patterns these elements create.

      Not only is our understanding of the economy changing to reflect a more open, organic view. Our interpretation of the world is also becoming more open and organic; and again technology has a part in this shift. In the time of Descartes we began to interpret the world in terms of the perceived qualities of technology: its mechanical linkages, its formal order, its motive power, its simple geometry, its clean surfaces, its beautiful clockwork exactness. These qualities have projected themselves on culture and thought as ideals to be used for explanation and emulation—a tendency that was greatly boosted by the discoveries of simple order and clockwork exactness in Galilean and Newtonian science. These gave us a view of the world as consisting of parts, as rational, as governed by Reason (capitalized in the eighteenth century, and feminine) and by simplicity. They engendered, to borrow a phrase from architect Robert Venturi, prim dreams of pure order.

      And so the three centuries since Newton became a long period of fascination with technique, with machines, and with dreams of the pure order of things. The twentieth century saw the high expression of this as this mechanistic view began to dominate. In many academic areas—psychology and economics, for example—the mechanistic interpretation subjugated insightful thought to the fascination of technique. In philosophy, it brought hopes that rational philosophy could be founded on—constructed from—the elements of logic and later of language. In politics, it brought ideals of controlled, engineered societies; and with these the managed, controlled structures of socialism, communism, and various forms of fascism. In architecture, it brought the austere geometry and clean surfaces of Le Corbusier and the Bauhaus. But in time all these domains sprawled beyond any system built to contain them, and all through the twentieth century movements based on the mechanistic dreams of pure order broke down.

      In its place is growing an appreciation that the world reflects more than the sum of its mechanisms. Mechanism, to be sure, is still central. But we are now aware that as mechanisms become interconnected and complicated, the worlds they reveal are complex. They are open, evolving, and yield emergent properties that are not predictable from their parts. The view we are moving to is no longer one of pure order. It is one of wholeness, an organic wholeness, and imperfection.

  • We fear technology that destroys our nature; we want technology that extends our nature

    • There are two seemingly contradictory views: “…that technology is a thing directing our lives, and simultaneously a thing blessedly serving our lives…”

    • There’s a tension: if, as W. Brian Arthur proposes, all technology emerges from phenomena in nature, then technology is based in nature. “We trust nature—we feel at home in it—but we hope in technology”:

      There is an irony here. Technology, as I have said, is the programming of nature, the orchestration and use of nature’s phenomena. So in its deepest essence it is natural, profoundly natural. But it does not feel natural.

      …we react to this deep unease in many ways. We turn to tradition. We turn to environmentalism. We hearken to family values. We turn to fundamentalism. We protest. Behind these reactions, justified or not, lie fears. We fear that technology separates us from nature, destroys nature, destroys our nature.

      [But] to have no technology is to be not-human; technology is a very large part of what makes us human.

    • Robert Pirsig said:

      The Buddha, the Godhead, resides quite as comfortably in the circuits of a digital computer or the gears of a cycle transmission as he does at the top of a mountain or in the petals of a flower.

    • “…our unconscious makes a distinction between technology as enslaving our nature versus technology as extending our nature. This is the correct distinction.”

Reversals

In 1614, Sir Thomas Roe, London’s first emissary to India, visited the Mughal emperor Jahangir. But as it turned out, Jahangir was not very interested in the meeting. At the time, the Mughals ruled the greatest and richest empire in the world, and England was deemed hardly worthy of their attention.

And no wonder: The Mughal empire spanned most of present day India, Pakistan, Bangladesh, and Afghanistan, about 1.5 million square miles, more than 15 times that of England. The empire covered a fifth of humanity, about 150 million people. Its standing army alone of 4 million was bigger than the population of England.

But, of course, by 1858, India was a colony of the British empire.

And that was the peak of the British empire. According to Ray Dalio, the empire began to decline shortly thereafter.

That decline is really only clear in hindsight, of course. For the first half of the twentieth century, the British empire certainly felt mighty to many observers. Consider the observations of Singapore’s founding father and first prime minister, Lee Kuan Yew, as he reflected back on the 1930s, when Singapore was a British colony:

…By the time I went to school in 1930, I was aware that the Englishman was the big boss, and those who were white like him were also bosses – some big, others not so big, but all bosses. They had superior lifestyles and lived separately from the Asiatics, as we were then called. They ate superior food with plenty of meat and milk products. Every three years they went “home” to England for three to six months at a time to recuperate from the enervating climate of equatorial Singapore. Their children also went “home” to be educated, not to Singapore schools. They, too, led superior lives. At Raffles College, the teaching staff were all white. Two of the best local graduates with class one diplomas for physics and chemistry were appointed “demonstrators”, but at much lower salaries, and they had to get London external BSc degrees to gain this status. One of the best arts graduates of his time with a class one diploma for economics, Goh Keng Swee (later to be deputy prime minister), was a tutor, not a lecturer.

There was no question of any resentment. The superior status of the British in government and society was simply a fact of life. After all, they were the greatest people in the world. They had the biggest empire that history had ever known, stretching over all time zones, across all four oceans and five continents.

But soon Lee Kuan Yew would realize that his perception of the British empire and the reality were quite different. On February 15, 1942, British forces surrendered Singapore to Japan. It was a shocking and humiliating defeat. The Japanese captured the island, along with 80,000 British, Australian, and Indian troops, with relative ease. Winston Churchill called it “the worst disaster and largest capitulation in British history."

The following four years of occupation changed Lee Kuan Yew. He called the experience the turning point of his life, his primary education. After Japan’s surrender following the bombings of Hiroshima and Nagasaki, the British returned, but things were not the same. Lee Kuan Yew attended Cambridge and returned to Singapore with a firm belief that Singapore’s future did not include the British.

On August 9, 1965, Singapore gained independence, with Lee Kuan Yew as Prime Minister. It was actually an unexpected and unwelcome event, because independence was not Lee Kuan Yew’s goal. In fact, he had pursued the merger of Singapore with the Federation of Malaya. The union fell apart when it became clear that there would be significant issues incorporating Singapore’s Chinese-majority population into the Malay-dominant federation.

Independence should have been a joyful and optimistic event, but the country’s birth was more like that of a prematurely born baby with little chance of survival. Singapore was a tiny island with no fresh water, no independent military, limited natural resources, and a largely uneducated, impoverished population, not to mention simmering tensions between its Chinese and Malay populations.

But what followed over the course of one generation, largely due to Lee Kuan Yew’s leadership, perseverance, and sheer force of will, is one of the greatest success stories in history.

Singapore’s GDP per capita in 1968 was $500; in 2022, it was $83,000, the sixth highest in the world (just higher than that of the US and meaningfully higher than that of the UK, which it surpassed long ago). The country is a role model for competent government, fiscal responsibility, and a stable, business-friendly political environment with little corruption. It consistently runs a fiscal surplus, despite investing heavily in its population, particularly in education.

Britain’s trajectory since 1968 has been quite different, best summarized in a recent article with the pretty direct title “Britain is Dead”:

[The United Kingdom’s] overall trajectory becomes obvious when you look at outcomes in productivity, investment, capacity, research and development, growth, quality of life, GDP per capita, wealth distribution, and real wage growth measured by unit labor cost. All are either falling or stagnant. Reporting from the Financial Times has claimed that at current levels, the UK will be poorer than Poland in a decade, and will have a lower median real income than Slovenia by 2024. Many provincial areas already have lower GDPs than Eastern Europe.

Huw Pill, Chief Economist of the Bank of England, recently said that British businesses and households need to “accept that they’re worse off” due to recent inflationary pressures.

The grand sweep of events is both humbling and inspiring: In the seventeenth century, the Mughal empire reigned supreme, and England was a backwater. By the nineteenth century, the Mughals faded into history, and the British Empire ruled the world. In the early twentieth century, a young boy perceived his white masters’ place at the top of the world as so firm and complete that “there was no question of any resentment…it was simply a fact of life.” After all, “they were the greatest people in the world.” The awe they inspired was well-deserved: “…they had the biggest empire that history had ever known, stretching over all time zones, across all four oceans and five continents.” But within one generation, that young man raised his country to incredible heights, and the British Empire crumbled.

Obviously, there’s much more nuance to all of these stories, but the main takeaway for me from these reversals is one of humility (success is hard to maintain) and perseverance (humble beginnings can lead to remarkable success).

Sources:

  • “Courting India — the unpromising sources of British power” by William Dalrymple in The Financial Times, March 16, 2023 (link)

  • The Changing World Order by Ray Dalio

  • The Singapore Story by Lee Kuan Yew

  • “Britain is Dead” by Samuel McIlhagga in Palladium Magazine, April 27, 2023 (link)

Our Complex Origins

We typically think of technological advance as moving us into the future. But technology also allows us to see our past more clearly. The Hubble and James Webb telescopes (and the lesser known COBE, WMAP, and Planck) allowed us to see deep into the universe’s past, closer to the Big Bang than we’d ever seen before.

DNA sequencing technology allows us to do something similar: we can now see the ancient past of humans more clearly than ever before. Svante Pääbo, winner of the 2022 Nobel Prize in Medicine, and David Reich, author of the book Who We Are and How We Got Here, are among the geneticists that have pioneered this science.

Below is a quick overview of the science and some of the more fascinating insights I took from the book (and from David Reich’s appearances on various podcasts, linked below).

The science and the technology

  • Human nuclear DNA sequences are about 3 billion base pairs long

  • Only 1 to 2 percent of that sequence serves a direct, clear purpose: coding proteins; the rest is (arguably) useless, non-coding sections of DNA called “junk” DNA

  • Mutations occur regularly from one human generation to the next—about 30 base pair mutations (these come mostly from the father)

  • So that means the vast majority of mutations occur in the non-coding sections of DNA

  • These mutations serve as a sort of population signature: populations of humans that breed with one another will have more similarities in these stretches of DNA than those that do not

  • By applying statistical analysis tools to the DNA of various present-day human populations, as well as to DNA recovered from ancient human remains, geneticists can create a “map” of how populations have mixed and diverged over time
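
  • To make the “signature” idea concrete, here is a toy simulation (my own sketch, not from the book): two populations each carry their own shared set of mutations, so genomes drawn from the same population match at more positions than genomes drawn from different populations; that is the signal the statistical tools exploit:

```python
import random

random.seed(1)
SITES = 2_000  # positions in (mostly non-coding) DNA where mutations can occur

def population(n, founder_rate=0.10, noise=0.02):
    """Simulate n genomes that share a population-specific set of mutations."""
    signature = [random.random() < founder_rate for _ in range(SITES)]
    genomes = []
    for _ in range(n):
        # each individual carries the shared signature plus a little private noise
        genomes.append([site ^ (random.random() < noise) for site in signature])
    return genomes

pop1 = population(3)
pop2 = population(3)

def similarity(g1, g2):
    """Fraction of positions at which two genomes carry the same state."""
    return sum(a == b for a, b in zip(g1, g2)) / SITES

print("within-population similarity: ", round(similarity(pop1[0], pop1[1]), 3))
print("between-population similarity:", round(similarity(pop1[0], pop2[0]), 3))
```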

Some key ancient human milestones

mya = million years ago
kya = thousand years ago

  • 7 mya - 5 mya: human ancestors split from chimpanzees

  • 1.8 mya: humans (genus Homo) first appeared outside of Africa

  • 770 kya - 550 kya: ancestors of modern humans split from Neanderthals

  • 320 kya: most recent shared ancestor of all present-day humans

  • 300 kya - 200 kya: modern humans appear in Africa

  • 50 kya: modern humans spread out of Africa

Humans were much more mobile than thought

  • People used to believe that the present-day inhabitants of a region descend largely from the ancient populations that first migrated there

  • The reality is far more complex

  • There have been layers and layers of mass population movement and replacement

  • This means that the people in a region are descended only a little, if at all, from the people that lived there 10 to 20 kya

Ancient Northern Eurasians (ANEs)

  • ANEs are a ghost population, or a population that doesn’t exist today but must have existed based on the statistical traces it left in the genes of present day populations

  • David Reich and his team used a statistical test for population mixture (the Three Population Test), which compares allele frequencies across roughly 600,000 positions in the genome to detect whether one population is a mixture of two others (a rough sketch of the statistic appears after this list)

  • For each population in their dataset, they tested it against pairs of other populations to find which combinations gave the strongest signal of mixture

  • The results were surprising; e.g., the French showed the strongest signal of being a mixture of populations related to Sardinians and Native Americans

  • In fact, Siberians and East Asians were not related much at all to the French

  • So the team theorized that Native Americans didn’t migrate across the Atlantic into Europe; rather, an ancient population in northern Eurasia, sometime before 15 kya, moved through Siberia and contributed to the population that became the Native Americans—and that same population also moved west and contributed to the population that became the French

  • This was the ghost population the team called the Ancient North Eurasians (ANEs)

  • The team predicted this in 2012 and, amazingly, in late 2013, a separate team of scientists sequenced DNA from the remains of a 24 kya human; the DNA matched that of the predicted ghost population
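
  • For the curious, here is a rough sketch of the statistic behind the Three Population Test (schematic only; the real analysis normalizes the statistic and assesses its statistical significance). The idea: f3(Target; A, B) averages (target − A) × (target − B) over allele frequencies at many positions, and a clearly negative value signals that the target is a mixture of populations related to A and B:

```python
import numpy as np

rng = np.random.default_rng(0)
n_snps = 600_000  # same order of magnitude as the real analyses

# Toy allele frequencies at each position for two "source" populations.
src_a = rng.uniform(0.05, 0.95, n_snps)   # e.g., a Sardinian-related source
src_b = rng.uniform(0.05, 0.95, n_snps)   # e.g., a Native American-related source

# One target formed as a 50/50 mixture of the two sources (plus its own drift),
# and one target that simply drifted away from source A without mixing.
drift = rng.normal(0, 0.02, n_snps)
target_mixed   = np.clip(0.5 * src_a + 0.5 * src_b + drift, 0, 1)
target_unmixed = np.clip(src_a + drift, 0, 1)

def f3(c, a, b):
    """Schematic three-population statistic: the average of (c - a) * (c - b).
    Clearly negative values indicate that population c is a mixture of
    populations related to a and b."""
    return np.mean((c - a) * (c - b))

print("f3(mixed target;   A, B) =", round(float(f3(target_mixed, src_a, src_b)), 4))
print("f3(unmixed target; A, B) =", round(float(f3(target_unmixed, src_a, src_b)), 4))
```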

The Yamnaya

  • The Yamnaya are an archaeological culture defined as such based on their tools and artifacts

  • They started in the steppe region north of the Black and Caspian Sea sometime a little before 5 kya

  • Prior to this, there were many isolated populations in the area, but after the emergence of the Yamnaya, those groups were replaced by the Yamnaya

  • The Yamnaya spread over a vast area: from Hungary in the west to the Altai mountains of central Siberia in the east; their descendants reached India

  • The Yamnaya lasted hundreds of years and then faded out into groups that had a similar lifestyle

  • They left a strong genetic record and linguistic traces: the Indo-European languages

  • The cultures that preceded the Yamnaya lived in villages; but with the rise of the Yamnaya, the villages disappeared

  • The Yamnaya innovated with horses, wheels, and wagons, allowing them to cover vast areas (they were highly mobile)

  • Within 500 to 800 years, their descendants spread even further (one of these descendant groups is the Corded Ware culture)

  • The Yamnaya and their descendants were remarkably effective: they replaced about 70 percent of the gene pool in the region of present-day Germany and about 90 percent in present-day Britain

Signs of power

  • There are groups of Y chromosomes in the world that indicate a common male ancestor

  • One example is in Ireland, where a good fraction of the population shares a Y chromosome type tracing back to a common ancestor about 1,500 years ago, indicating one powerful male who had preferential access to women and left many offspring (as did his male descendants)

  • Another example is from the Mongol empire about 800 years ago in East Asia, where a single Y chromosome type is dominant, presumably descending from Genghis Khan

  • There are other, deeper implications of these Y chromosome “star clusters”: in Europe, East Asia, and South Asia, each region shares common regional male ancestors (different from those of other regions) dating to 5 to 8 kya, but this isn’t the case before that period, indicating that this was the first time individuals accumulated great wealth and power

  • All the Indo-European languages (except ancient Hittite) share common vocabulary for wheel, axle, and horse, so the language family must have spread after the development of those technologies

  • The language tree shows a very peculiar pattern, which the genetic record explains: Indo-Iranian languages (e.g., Iranian and Indian languages) have clear relationships with Balto-Slavic languages (e.g., Lithuanian), which is strange considering the distance between these regions; but between 4,500 and 3,500 years ago, a chain of archaeological cultures traces a movement from the steppe into Europe, then back east through Eastern Europe, and finally on to India

Yamnaya and India

  • In India, there is diligent practice of something called endogamy

  • There are a minimum of 5,000 or so well-defined endogamous groups, which are groups of people that will only marry individuals within the group

  • India is not one large population like the Han Chinese; rather, it is composed of many small populations

  • These groups descend from a relatively small group of founders

  • This is remarkable because some groups, such as the Vysya, have kept their genetics clearly differentiated over thousands of years, despite being in close geographic proximity to other groups (similar to the Jews in Europe)

Mixing with Neanderthals

  • It had previously been believed that all modern humans spread out from Africa about 50 kya

  • But the reality of migrations (and separations between populations) was, in fact, more complicated

  • Beginning in 2006, the Neanderthal genome was sequenced (a draft genome was published in 2010)

  • Using statistical tests, David Reich and his team compared the Neanderthal genome to African and non-African genomes

  • If African and non-African genomes differed equally from the Neanderthal genome, then that would have been consistent with the theory that Africans and non-Africans descended from a common ancestor that separated earlier from Neanderthals

  • However, the data showed something different: non-African genomes matched the Neanderthal genome more closely than African genomes did (2 to 4 percent more); a toy cartoon of this comparison appears at the end of this list

  • The implication was that Neanderthals and modern humans interbred with one another

  • The group that interbred would have been the ancestors of East Asians, Europeans, South Asians, and New Guineans (all of whom carry this ancestry)

  • There is a mystery: the Neanderthal ancestry is diminishing (something is selecting against it)
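
  • The comparison behind this finding can be cartooned as follows (my own toy simulation, not Reich’s method; the real analysis uses a more careful statistic, the D or “ABBA-BABA” test, with an outgroup such as chimpanzee). Give one modern population a small fraction of Neanderthal-derived segments and it ends up matching the Neanderthal genome slightly more often:

```python
import random

random.seed(2)
SITES = 100_000

# Toy 0/1 genomes. Start from a shared modern-human background...
human_background = [random.randint(0, 1) for _ in range(SITES)]
# ...and give Neanderthals their own distinct variants at roughly 30% of positions.
neanderthal = [allele if random.random() < 0.7 else 1 - allele
               for allele in human_background]

def descendant(source, introgression_from=None, rate=0.0):
    """Copy a genome with a little new mutation; optionally copy a fraction
    of positions from an introgressing population (e.g., Neanderthals)."""
    genome = []
    for i, allele in enumerate(source):
        if introgression_from is not None and random.random() < rate:
            genome.append(introgression_from[i])   # inherited Neanderthal segment
        elif random.random() < 0.01:               # small amount of new mutation
            genome.append(1 - allele)
        else:
            genome.append(allele)
    return genome

african     = descendant(human_background)
non_african = descendant(human_background, introgression_from=neanderthal, rate=0.02)

def match(g, h):
    """Fraction of positions at which two genomes carry the same allele."""
    return sum(a == b for a, b in zip(g, h)) / SITES

print("African vs Neanderthal match:    ", round(match(african, neanderthal), 4))
print("non-African vs Neanderthal match:", round(match(non_african, neanderthal), 4))
```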

Denisovans

  • There was another distinct archaic population: Denisovans

  • They were discovered in 2010 in Siberia (Denisova Cave)

  • They also interbred with ancestors of modern humans

  • Denisovan ancestry is found in large amounts in people from New Guinea and in indigenous people from the Philippines (and more broadly in East Asians in small proportions)

South Asia

  • South Asia’s history is, unsurprisingly, complex

  • While Reich admits that there is much detail yet to be discovered (e.g., what were the genetic origins of the Indus Valley Civilization), the genetic tools have illuminated a rich history

  • Until 2016, much of the focus was on two distinct populations that had existed in South Asia: the Ancestral North Indians (ANIs) and the Ancestral South Indians (ASIs):

    • Before mixing, these two groups were as different from one another as Europeans and East Asians are today

    • The ANI are related to Europeans, Central Asians, Near Easterners, and the people of the Caucasus

    • The ASI descend from a population not related to any present-day population outside India

    • The people of India today are mixtures of these two populations, albeit in different proportions

  • But in 2016, some laboratories published the genomes of the world’s earliest farmers—people that lived in present-day Israel, Jordan, Anatolia (Asian peninsula of Turkey), and Iran.

  • After much study, analysis, and discussion, a much more complex picture emerged:

    • South Asia and Europe have parallel genetic histories

    • About 9 kya, there was a first wave of migration and mixing that originated from the earliest farmers in the Near East and mixed with the local hunter-gatherer populations of Europe and South Asia: from Anatolia to Europe and from Iran to South Asia (1 on the map below)

    • About 5 kya, there was a second wave that brought the Yamnaya pastoralists from the steppe who spoke the origins of the Indo-European languages to mix with the local farmers in the northern regions of Europe and South Asia (2 on the map below)

    • Because the second wave concentrated in the northern regions of Europe and South Asia, it created a north-to-south gradient of ancestry in both regions (the European and Indian clines shown on the map)

    • More specifically, the ANI were roughly a 50/50 mix of Yamnaya steppe-pastoralist and Iranian farmer-related ancestry, and the ASI were a mix of about 25 percent Iranian farmer-related ancestry and 75 percent local hunter-gatherer ancestry
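
    • Taking those proportions at face value, the deep ancestry of any group on the Indian cline follows from simple mixing arithmetic. A quick sketch (my own illustration; the 40 percent ANI fraction is made up for the example):

```python
# Deep-ancestry proportions as stated above (simplified).
ANI = {"steppe": 0.50, "iranian_farmer": 0.50, "hunter_gatherer": 0.00}
ASI = {"steppe": 0.00, "iranian_farmer": 0.25, "hunter_gatherer": 0.75}

def cline_ancestry(ani_fraction):
    """Deep ancestry of a group modeled as a simple ANI/ASI mixture."""
    return {k: ani_fraction * ANI[k] + (1 - ani_fraction) * ASI[k] for k in ANI}

# e.g., a hypothetical group on the cline that is 40% ANI and 60% ASI:
for component, share in cline_ancestry(0.40).items():
    print(f"{component:16s} {share:.0%}")
```

    • In these terms, the anomaly discussed in the next section is a set of groups whose steppe share is too high, relative to their Iranian farmer-related share, to fit any single ANI fraction in this simple model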

Origins of caste in India

  • There was an anomaly in the South Asian genetic data, however

  • The model of the Indian cline was based on a simple mixing of ANI and ASI populations

  • Six groups did not fit the model; they had a higher ratio of steppe-related to Iranian farmer-related ancestry than the model predicted

  • All six of these groups were Brahmins, with a traditional role as priests and custodians of the sacred texts written in Sanskrit, an Indo-European language

  • The theory was that the populations did not mix evenly; rather, there were sub-populations that were socially distinct

  • The people who were custodians of the Indo-European language and culture were the ones with relatively higher steppe ancestry, and because they married one another preferentially (if not exclusively) their ancient ANI genetic structure is still intact after thousands of years

Controversy about human diversity

  • David Reich wrote an op-ed in The New York Times: “How Genetics is Changing our Understanding of Race”

  • The piece re-ignited longstanding controversy about science related to race.

  • David Reich’s perspective is that racial categories as we discuss them are social categorizations (e.g., the notion of “black” has varied over time and varies by geography), and he notes that geneticists don’t use the word race (in the op-ed piece he puts the word in quotation marks)

  • Rather, geneticists deal with groupings based on various objective definitions. What has emerged from the genome revolution is the understanding that the human species contains lineages that have been largely separated from each other for many tens of thousands of years, and in some cases hundreds of thousands of years, which is enough time for natural selection to cause shifts in the average frequencies of mutations that matter to traits

  • The implication is that the human species is quite varied; and the correlation to races as described socially is also quite varied (i.e., not perfect)

  • The orthodoxy has been that there is “no meaningful differences between human populations”

  • But this isn’t quite accurate, as it gives the impression that there is no space for there to be average biological differences between groups of people (which is contradicted by the science)

  • We know, for example, that different genetic populations have different susceptibility to disease

  • What is clear is that, from a purely statistical perspective, human genomes today can be grouped into five broad clusters: West Eurasians, Africans, East Asians, Native Americans, and New Guineans

  • The difference on average between populations is about a sixth of the average difference between individuals

  • Jim Watson (co-discoverer of the structure of DNA) and Nicholas Wade, author of the panned book A Troublesome Inheritance, have argued that these biological differences between populations correspond to the old racial stereotypes; Reich doesn’t agree (“there is no evidence in favor of that…[and] those are racist statements to make”)

Podcasts:





The (mis)Behavior of Markets

One goal of this blog is to find new ideas that challenge orthodoxy. It’s easy to look back at discarded ideas like geocentrism and laugh at how obviously wrong they were. But it certainly wasn’t obvious at the time. And it doesn’t take a dramatic leap of imagination to realize that many ideas we hold today, and believe accurately reflect reality, will later be proven wrong.

Modern finance is fertile ground for this dynamic. In fact, many highly paid and respected practitioners of modern finance continue to hold on to ideas long past the point when their flaws became obvious.

To highlight this point, Mandelbrot recounts a joke Warren Buffett once made—that he’d like to fund university chairs in the Efficient Market Hypothesis in order to train more misguided investors and ultimately increase his own advantage in the marketplace.

I was drawn to Benoit Mandelbrot’s book The (mis)Behavior of Markets because Mandelbrot was known throughout his career to challenge core assumptions in modern finance.

I was also drawn to the book because fractals have long interested me. As a child with a certain nerdy, bookish disposition, I discovered James Gleick’s book Chaos at a young age and learned about fractals, patterns that look the same at different scales.

The idea of repeating patterns at different levels of scale stuck with me, and it was one mental model that led me to believe Bitcoin is on a path to be even more valuable and impactful than it is today.

The core message of Mandelbrot’s book is as follows:

Just as there are three states of matter—solid, liquid, and gas—imagine three states of randomness: mild, medium, and wild.

Conventional financial theory assumes that randomness only occurs in the “mild” state. From the perspective of this theory, real prices “misbehave” often.

A more accurate view of randomness in financial theory can be described with “fractally wild randomness”—a model that describes diverse natural phenomena, from fluid dynamics to electrical “flicker” noise.

In short: markets are riskier than you think.

The book is at its best with pictures, which isn’t surprising, considering that’s how Mandelbrot thinks.

For example, the core of modern finance rests on the Bachelier Brownian motion model (after Louis Bachelier), which assumes that each day’s price change is independent of the last and follows the mildly random pattern predicted by the bell curve.

However, a simple analysis of the Dow Jones Industrial Average between 1916 and 2002 shows that this isn’t true.

Here’s the DJIA in its familiar form between 1916 and 2002:

Here’s that same chart in logarithmic scale:

Here’s the daily change in the DJIA in log scale:

You can immediately see that, one, there are some very extreme events and, two, they tend to concentrate together, contradicting both assumptions of the conventional model.

To further illustrate the point, Mandelbrot goes a step further: he manufactures a chart using the assumptions in the conventional model.

Here’s the manufactured chart:

It looks somewhat realistic. But when you look at the daily change, you immediately realize it doesn’t reflect reality.

Here’s what that manufactured chart looks like in daily changes in standard deviations:

Here’s what the DJIA actually looks like in daily changes in standard deviations:

Mandelbrot points out that the conventional model, by definition, sees changes that are “small” (less than one standard deviation) 68 percent of the time; 95 percent of the time, it will see changes of less than two standard deviations (2σ); and 98 percent of the time, it will see changes of less than three standard deviations (3σ).

In reality, however, the DJIA with some regularity sees spikes that are 10σ. And the 1987 crash was a 22σ event, the odds of which according to the conventional model are one in 10 to the 50th power. Mandelbrot kindly places that number into perspective:

“It is a number outside the scale of nature. You could span the powers of ten from the smallest subatomic particle to the breadth of the measurable universe—and still never meet such a number.”

In other words, it’s impossible according to the assumptions in our standard models of finance. And yet there it is.
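
To see how starkly the bell-curve assumption understates extremes, here is a small simulation (my own sketch, not Mandelbrot’s; it uses numpy and a Student-t distribution as a stand-in for fat-tailed, “wild” randomness):

```python
import numpy as np

rng = np.random.default_rng(42)
days = 21_000  # roughly the number of trading days between 1916 and 2002

# Daily "returns" under the mild-randomness (bell curve) assumption...
bell_curve = rng.standard_normal(days)

# ...and under a fat-tailed alternative (Student-t with 3 degrees of freedom),
# rescaled so that both series have unit standard deviation.
fat_tailed = rng.standard_t(df=3, size=days) / np.sqrt(3.0)  # variance of t(3) is 3

for sigmas in (3, 5, 10):
    bell = int(np.sum(np.abs(bell_curve) > sigmas))
    fat = int(np.sum(np.abs(fat_tailed) > sigmas))
    print(f"days beyond {sigmas:2d} sigma: bell curve {bell:4d}, fat-tailed {fat:4d}")
```

Over roughly the same number of trading days as the 1916 to 2002 DJIA record, the bell-curve series essentially never produces a 5σ day, while the fat-tailed series produces dozens, which is much closer to what the real data show.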

It’s maddening to think that despite reality repeatedly showing the model isn’t true, practitioners cling to it.

The rest of the book is well worth reading, too. Mandelbrot admits that the alternative theories need a lot more work; his goal is simply to encourage others to pursue them so that we may emerge with better models.

Here are my two main takeaways from the book:

  • Downside risk. Markets are riskier than we believe, and we should act accordingly. (Insurance in the form of options protecting against extreme events may be a good value if it’s systematically underpriced.)

  • Upside opportunity. I believe the inverse is true, too—that we aren’t thinking properly about the upside benefit of positive extreme events, and I believe this is at the core of investing in technology and innovation.

Know what game you’re playing

In investing, you have to know what game you’re playing. Consider the following two very different approaches:

APPROACH 1: Don’t lose money

The first is from Steve Schwarzman, co-founder of the private equity firm The Blackstone Group. Here’s Schwarzman from his book What It Takes:

People often smile whenever they hear my number one rule for investing: Don’t. Lose. Money. I never understand the smirks, because it is just that simple. At Blackstone we have established, and over time refined, an investment process to accomplish that basic concept. We have created a framework for assessing risk that has been incredibly reliable. We train our professionals to distill every individual investment opportunity down to the two or three major variables that will define the success of our investment case and create value. At Blackstone, the decision to invest is all about disciplined, dispassionate, and robust risk assessment. It’s not only a process but a mind-set and an integral part of our culture.

APPROACH 2: Take the bet — lose 1x or make 1,000x

A different philosophy comes from Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz. Here’s Marc Andreessen from a fireside chat at Stanford Graduate School of Business (summary, video):

In venture capital, there's two kinds of mistakes you can make. There's a mistake of commission. I make a decision. I invest in a company. I lose all my money. That's the mistake everybody thinks about. And then there's the mistake of omission. Mark Zuckerberg walks in the door at venture capital firm XYZ in 2004. They think: “What is this little kid doing? This idea is crazy. Friendster proved that this could never work. This is ridiculous.” That is, by the way, what he got told by a lot of people. In the venture capital business, every highly successful VC has made mistakes of omission—really big ones—of companies that they had the chance to invest in, that they should have invested in, and that they didn't invest in. It turns out the mistakes of commission don't really matter. They don't scar you for life. They just go and fade into history. The mistakes of omission are much worse. There’s an asymmetric payoff. When these companies work, they work at 1,000x. If you lose, you lose 1x. If you win, you win 1,000x or 10,000x. … We call this mentality the “slugging percentage” mentality, which is basically: take the bet, lose 1x; don't take the bet, possibly miss on 1,000x.

***

Now, given that both firms are tremendously successful, clearly both approaches can lead to great outcomes. The mistake is in mixing the approaches and not following either to its logical conclusion.

The “don’t lose money” approach requires extreme vigilance on the downside. You can’t afford to be wrong.

The “take the bet” approach requires a “swing for the fences” mentality with every single investment. You still understand the downside, but you spend more time understanding whether and how each investment can pay off on the upside with a 1,000x return, if not more. Each investment has to fit this profile because you can’t place just one or two of those bets; you need to place enough of them that at least one in your portfolio pays off, more than compensating for the many that won’t.
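A rough expected-value comparison makes the difference between the two games concrete. The probabilities and multiples below are my own illustrative assumptions, not figures from Schwarzman or Andreessen.

```python
def expected_multiple(p_win: float, win_multiple: float, loss_multiple: float = 0.0) -> float:
    """Expected return multiple on a single investment."""
    return p_win * win_multiple + (1 - p_win) * loss_multiple

# "Don't lose money": high hit rate, modest multiples, little tolerance for zeros.
pe = expected_multiple(p_win=0.90, win_multiple=2.5, loss_multiple=0.8)

# "Take the bet": most investments go to zero, a rare one returns 1,000x.
vc = expected_multiple(p_win=0.01, win_multiple=1000.0)

print(f"Private-equity-style expected multiple: {pe:.2f}x")  # ~2.33x
print(f"Venture-style expected multiple: {vc:.2f}x")         # 10.00x
```

Under these made-up numbers, the venture game has the higher expected value per investment, but only if the portfolio is large enough to actually contain the 1,000x outcome, which is exactly why you can’t place just one or two of those bets.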

The key is to know — with clarity and conviction — which game you’re playing. You must not mix the two approaches. I’ve seen firsthand how easy it is to get this wrong.

A private equity investor I know had a large investment go to zero because they missed the downside risk. The investment had significant financial and operating leverage, so even a relatively small miss in projections resulted in a dramatically bad financial outcome. The investment had high downside risk, but on the upside it promised “only” 3x to 8x returns. This was a private equity investment on the upside and a venture investment on the downside. That doesn’t work.
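A stylized example (my own numbers, not those of the actual deal) shows how financial leverage turns a modest operating miss into a severe equity loss:

```python
# Illustrative buyout math: equity value = EBITDA x multiple - debt.
ebitda_plan, ebitda_actual = 100.0, 80.0  # a 20% miss versus plan
multiple = 10.0                           # enterprise value as a multiple of EBITDA
debt = 700.0                              # purchase funded largely with debt

equity_plan = ebitda_plan * multiple - debt      # 1,000 - 700 = 300
equity_actual = ebitda_actual * multiple - debt  #   800 - 700 = 100

print(f"EBITDA miss: {1 - ebitda_actual / ebitda_plan:.0%}")          # 20%
print(f"Equity value change: {equity_actual / equity_plan - 1:.0%}")  # -67%
```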

On the flip side, I’ve seen venture investors make the mirror-image mistake. Here, I made the mistake myself. I met Brian Armstrong at YC Demo Day in 2012, and he visited the offices of the venture capital firm I worked with at the time, Rho Ventures. He painted a truly breathtaking story, but one that was so hard to grasp that we ultimately didn’t pursue an investment. Brian went on to raise from Andreessen Horowitz and Union Square Ventures. Certainly, hindsight is 20/20, but leaning on hindsight would be “resulting”: judging a decision by its outcome rather than its process. The real issue is that our process was flawed: we focused heavily on the downside and not enough on the upside. Yes, Coinbase could have gone to zero, but if it worked (and it has), it was well north of a 1,000x return (possibly orders of magnitude more, especially if you include the value of Bitcoin at the time compared to today). The expected value of the investment, even with a high likelihood of failure, was positive. We should have taken the bet. (More precisely: we should have been willing to take the bet. In all likelihood, Brian would have ended up partnering with A16Z and USV regardless, given their reputations, their deeper knowledge of the sector, and their more proactive stance on it.)

***

Fat Tails

Relatedly, I’ve been reading a lot lately on “fat tails”—the idea that investment returns are riskier than typical risk models would have you believe.

I started spending more time on the topic after reading The Dao of Capital by Mark Spitznagel and The (Mis)behavior of Markets by Benoit Mandelbrot.

After reading the books, it occurred to me that there have been a surprisingly large number of “extreme” events:

  • The Black Monday crash of 1987

  • The Asian economic crisis of 1997

  • The Russian default in 1998 that led to the collapse of the hedge fund LTCM

  • The financial crisis of 2008 that brought the global financial system to the brink of collapse

When extreme events happen often, you have to start wondering whether they’re really extreme events or normal events. Is the world defying our models, or are our models of the world wrong?

Spitznagel and Mandelbrot, along with Nassim Taleb, Vineer Bhansali of LongTail Alpha, and others, believe our models are wrong.
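One way to see what “our models are wrong” means in practice is to simulate it. The sketch below (my own illustration) contrasts how often “extreme” daily moves show up under a thin-tailed Gaussian model versus a simple fat-tailed alternative (a Student-t with 3 degrees of freedom, rescaled to the same variance), over roughly the same number of trading days as the 1916–2002 DJIA sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 250 * 87  # roughly 87 years of trading days

# Thin tails: the conventional Gaussian assumption.
gaussian = rng.standard_normal(n_days)

# Fat tails: Student-t with 3 degrees of freedom, rescaled to unit variance.
df = 3
student_t = rng.standard_t(df, n_days) / np.sqrt(df / (df - 2))

for name, returns in [("Gaussian", gaussian), ("Student-t (df=3)", student_t)]:
    extreme = int((np.abs(returns) > 5).sum())  # days beyond 5 standard deviations
    print(f"{name}: {extreme} days beyond 5 sigma out of {n_days}")
```

With the same overall variance, the Gaussian series will typically show zero days beyond 5σ, while the fat-tailed series shows dozens; that is the same qualitative pattern Mandelbrot documents in the real DJIA record.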

Among the compelling arguments in Mandelbrot’s book is that economists have twisted themselves into knots for decades trying to solve the “equity premium puzzle”—the “excess” reward stocks have provided above risk-free investments, even when adjusted for risk.

As Mandelbrot points out (emphasis mine):

…these papers miss the point. They assume that the “average” stock-market profit means something to a real person; in fact, it is the extremes of profit or loss that matter most. Just one out-of-the-average year of losing more than a third of capital—as happened with many stocks in 2002—would justifiably scare even the boldest investors away for a long while. The problem also assumes wrongly that the bell curve is a realistic yardstick for measuring the risk. As I have said often, real prices gyrate much more wildly than the Gaussian standards assume. In this light, there is no puzzle to the equity premium. Real investors know better than the economists. They instinctively realize that the market is very, very risky, riskier than the standard models say. So, to compensate them for taking that risk, they naturally demand and often get a higher return.

I’m convinced, and over time I’ve increasingly found myself forming an investment philosophy along these lines, in both public and private markets. More to come, but for now here’s a simple visual of “fat tails” investing: