Question Assumptions

The economist Ronald Coase died today. I was saddened by the news because he touched my life in various ways.

I studied economics in college, but I discovered the field pretty late—my junior year. I had taken a class on economic development mainly because it satisfied two requirements and sounded kind of interesting. I loved it, and shortly thereafter changed my major. In one of my classes, we learned about Ronald Coase, and the simple elegance and power of his theories drew me further to economics. 

Later, I went on to participate in auctions of wireless spectrum by the U.S. and Canadian governments. In the process, I learned that Coase had advocated allocating spectrum in exactly this way, providing the U.S. and other governments with the intellectual underpinnings for spectrum auctions.

Most of all, though, I remember being inspired because he wrote the essay “The Nature of the Firm”—the first of his two essays that would go on to change the field of economics and together earn him the Nobel Prize—while he was an undergraduate student. The paper reconciled a key theory of economics with the observed reality.

The contradiction it addressed is as follows. On the one hand, Adam Smith articulated the idea of the invisible hand—that the machinery of production can function without central coordination. Prices would adjust, supply and demand would fall into balance, and exchange would take place. On the other hand, we see that firms exist. We see what Coase, quoting D. H. Robertson, called “islands of conscious power in this ocean of unconscious co-operation.”

Coase reconciled this conflict by describing the role of transaction costs. A firm exists because certain transactions have such high transaction costs that it makes sense for there to be a higher degree of coordination than can exist in market transactions. Firms are a collection of these transactions.

This was a tremendous insight. According to the Royal Swedish Academy of Sciences in announcing his Nobel Prize, “Coase may be said to have identified a new set of ‘elementary particles’ in the economic system.”

What I found so inspiring about this is that Coase must have been sitting in class, learning the principles behind the idea of the invisible hand, supply and demand, and so forth, thinking through the implications of it all, and at some point, it must have occurred to him that it just doesn’t make sense. 

He may have asked some tough questions of his professors and his peers. His professors, in a knowing, confident voice, may have talked down to him, thinking he’s just being difficult or just isn’t wrapping his head around the concepts properly. His peers may just not have cared. And, yet, the question must have bothered him. He must have really enjoyed what he was learning, and yet, here was a contradiction he couldn’t resolve. 

Rather than give up, convincing himself he must just not be getting it, or file the question away, thinking it’s just not worth pursuing, he dug into it. And eventually—through no small amount of tremendously hard work and after many painful conversations, I’m sure—he produced a groundbreaking work. And it wasn’t even groundbreaking immediately. It took a long time for the idea to catch on. But eventually it did. 

And this wasn’t the only time this happened. The New York Times, in his obituary, recounts a fascinating story:

While teaching at Virginia, Professor Coase submitted his essay about the F.C.C. to The Journal of Law and Economics, a new periodical at the University of Chicago. The astonished faculty there wondered, according to one of their number, George J. Stigler, “how so fine an economist could make such an obvious mistake.” They invited Professor Coase to dine at the home of Aaron Director, the founder of the journal, and explain his views to a group that included Milton Friedman and several other Nobel laureates-to-be.

“In the course of two hours of argument, the vote went from 20 against and one for Coase, to 21 for Coase,” Professor Stigler later wrote. “What an exhilarating event! I lamented afterward that we had not had the clairvoyance to tape it.” Professor Coase was asked to expand on the ideas in that essay for the journal. The result was “The Problem of Social Cost.”

When I first learned about Coase, I remember being inspired by his tenacity and confidence, his unwillingness to compromise on the questions he had, his unwillingness to let people hold on to a conventional wisdom that just didn’t make sense. 

Remembering this as I read about his death, I thought I’d write this to remind myself and others: question assumptions—and have the courage to follow where those questions take you, particularly if it’s scary.

Lessons from IBM

You don’t hear much about IBM in Silicon Valley. Oracle and Microsoft are talked about, typically with disdain or anger. But IBM—well, it’s just not really mentioned. It’s almost like it’s not a technology company. 

And yet, lately, I’ve found myself thinking about them for a number of reasons:

  • Communication. I was really impressed by the fact that in 2010 they published a five-year earnings-per-share projection that they largely appear on track to hit. I wrote about this in an earlier post about the power of writing and committing oneself or one’s organization to a goal. The IBM projection implied an incredible degree of strategic planning and alignment. 
  • Research. IBM’s Director of Research, John Kelly III, was on NPR about a month ago, and it was an incredible talk. He talked a lot about their supercomputer Watson, which won a Jeopardy match, and Deep Blue, which beat Garry Kasparov in chess. In addition, the labs have created a dizzying array of incredible technology, ranging from barcodes to the relational database to the technology underlying laser eye surgery. Everyone loves Google’s innovations: maps, Gmail, self-driving cars, Glass, etc. But IBM historically has been just as prolific. So IBM is most definitely a technology company. 
  • Strategy. When Marissa Mayer took over as CEO of Yahoo! last year, I became curious about what it takes for a technology company to sustain itself over the long-term. Consumer products create long-lasting assets in brands and distribution. They can be disrupted, but it tends to be hard. Think Coca-Cola. With technology, however, it’s almost a foregone conclusion that today’s winner will be tomorrow’s also-ran. Microsoft is struggling with this, and now with Ballmer out as CEO, they’ll be trying more explicitly to clarify their strategy as well. Asking this question led me to read Lou Gerstner’s book on IBM’s turnaround, Who Says Elephants Can’t Dance? And it was a worthwhile read. 

__

So, what did Gerstner, having joined IBM in February of 1993, do to turn the company around?

First, he stabilized the immediate problem—the mainframe business:

  • Drop the price of the mainframe. Mainframe revenue had dropped from $13 billion in 1990 to a projected $7 billion in 1993. Dropping the price wasn’t straightforward, as it in fact made the cash position worse. But it was a customer-friendly move: IBM’s mainframe customers were its most important customers, and mainframe pricing was almost 50 percent higher than competitors’. Mainframe pricing per unit of processing decreased from $63,000 to $2,500 over seven years. 
  • Commit to mainframe research. Rather than exit, he doubled down. Gerstner lent support to the mainframe technical team’s efforts to migrate from bipolar to CMOS, which would lower the mainframe’s cost dramatically. IBM ended up spending $1 billion over the next four years, but it saved the mainframe business. Gerstner points out that, if not for that investment, IBM would have been out of mainframes by 1997. Staying in mainframe generated $19 billion of revenue from 1997 to 2001. 

Second, he came up with a near-term plan to stabilize the company:

  • Keep the company together. The book goes into great depth about this decision, and it’s fascinating reading. Essentially, however, Gerstner initially felt intuitively, and later confirmed, that IBM had a unique ability to satisfy a significant customer need by becoming a technology integrator—the partner that could help companies create value from technology by integrating all the pieces and delivering a working solution. This was huge because everyone assumed that the solution was for IBM to break itself apart. This decision drove every other decision. It took enormous vision and confidence. 
  • Bring the expense base in line. Having made the decision to keep the company together and to stay in the mainframe business by dropping price, Gerstner had no other choice but to bring expenses in line. Gross margins on mainframes had dropped dramatically, which only made the problem worse. Gerstner had his CFO benchmark costs and found that IBM was spending 42 cents to generate $1 of revenue versus competitors that were spending 31 cents. That 11-cent gap on every dollar of revenue equated to $7 billion of expense that needed to be cut!
  • Improve every business process. The early expense cutting was unfortunately and predictably headcount. IBM cut 35,000 jobs in addition to the 45,000 the previous CEO had cut in 1992. Beyond that, however, it was necessary to improve every basic process, all of which had become cumbersome and bloated. Just one example was that IBM had 128 CIOs across 24 independent business units, each running their own systems and applications. This was unglamorous, grinding, and painful work. But necessary. And it worked. From 1994 to 1998, the expense reengineering effort saved $9.5 billion. 
  • Sell unproductive assets to raise cash. IBM was almost out of cash in 1993. Bankruptcy was a possible yet unmentioned scenario. More likely was a painful restructuring with creditors that would have limited options. So Gerstner cut the dividend and moved quickly to sell unproductive assets like the Federal Systems Company for $1.5 billion. 

Underlying all of this was improved customer and market awareness. A key aspect of changing the culture of the company was making everyone, from leadership down, more aware of customer needs and competitor actions. 

Another amusing aspect of these early changes was Gerstner’s statement at an early press conference that “the last thing IBM needs right now is a vision.”

Gerstner’s point was:

IBM had drawers full of vision statements. We had never missed predicting correctly a major technological trend in the industry. In fact, we were still inventing most of the technology that created those changes.

Basically,"fixing IBM was all about execution."

Underlying these tactics was a broader strategy:

  • Keep the company together.
  • Reinvest in the mainframe.
  • Remain in the core semiconductor technology business. 
  • Protect the fundamental R&D business.
  • Drive all we did from the customer back and turn IBM into a market-driven rather than an internally-focused, process-driven enterprise.

Gerstner makes the point that “a lot of these decisions represented a return to Watson’s roots.”

I find this to be amazing. Many CEOs taking over a troubled enterprise would make dramatic, visible changes in strategy. Gerstner’s bold insight was that the basics of the business were, in fact, right. The issues were in execution and structure. The solution wasn’t dramatic and glamorous. It was dramatic yet behind the scenes.

I’m only scratching the surface here. These were just Gerstner’s first steps. He went on to address the leadership team, the reporting structure, the culture, the brand, marketing, the product and services lineup, geographic organization and reporting, etc.

The entire book is filled with insight. Gerstner had no coauthor or ghostwriter. He wrote it himself. 

Payments on Twitter

I sent a friend some money via Twitter today using Dwolla.

Overall, it was a very slick experience. It took me the time it takes to write a tweet (and I did it on my mobile so conceivably we could do it real time, say, as we settle a check at a restaurant). 

The drawbacks/issues: 

  • Funds transfer. I don’t keep lots of money in my Dwolla account so, first, I had to transfer funds. That took four days. Perhaps as I do this more often, I’ll just keep a few hundred dollars there. 
  • Receiver sign up. The receiver, of course, has to sign up for Dwolla. The service is well-architected in that the receiver doesn’t have to be a Dwolla user at the outset. I can send funds, and they can then go through the sign-up process to receive them (or at least in theory, as I haven’t yet confirmed with my friend whether he received the funds). But, nonetheless, from the receiver’s perspective it’s somewhat of a pain, as they first have to sign up. (Admittedly, Dwolla’s sign-up process is very well-designed, so it’s as easy as it can be.) Then, the receiver has to validate their bank account in order to transfer the funds to it. That requires them to check their bank account and tell Dwolla the amounts of two small deposits Dwolla made. It can take a few days. That’s frustrating and takes time. 
  • Transfer fee. I had wanted to pay the $0.25 transfer fee (rather than have the receiver pay it, which is the default). When you transfer on the Dwolla site, you can check a box to do that. I couldn’t do that on Twitter. (So, Ahad, I owe you a quarter.)
  • Pending? Immediately after I made the transfer, I checked my balance in Dwolla, and it still hadn’t reflected the transfer. I also couldn’t see the transfer in the Pending section. That seems unusual because I could conceivably send more money than I have in my account. I did see the following tweet sent from my Twitter account immediately after I tweeted so it seems to have worked:

So there’s some overhead in the process and some tweaks that I’m sure the team will iron out.

Tinkering = Optionality

I’ve been spending more time with the quality books that I’ve read. As opposed to reading them once and moving on to another, I’ve been going back to sections and re-reading them, thinking about them more, reconciling them with my own experiences and pre-conceptions, and generally trying to find a way to get them to stick in my mind with more conviction.

After all, the point isn’t reading. It’s reading so that you do something differently in the future. Or better yet, reading so that you can convince a group of people to behave together towards some meaningful end. 

A post on Farnam Street Blog quoting the philosopher Seneca stuck with me:

The primary indication, to my thinking, of a well-ordered mind is a man’s ability to remain in one place and linger in his own company. Be careful, however, lest this reading of many authors and books of every sort may tend to make you discursive and unsteady. You must linger among a limited number of master-thinkers, and digest their works, if you would derive ideas which shall win firm hold in your mind. Everywhere means nowhere. 

Antifragile and optionality

To that end, Nassim Taleb’s book Antifragile has stuck with me, not so much because I actively made the decision to go back and spend more time with it, but because very often I’ll see, read, or hear something that relates to his ideas. It’s had a dramatic impact on how I see the world.

Most recently, I find myself recalling Taleb’s idea that great advances tend to result more from random tinkering than systematic problem-solving.

The essential ideas in Book IV of Antifragile, ‘Optionality, Technology, and the Intelligence of Antifragility,’ are as follows:

  • Optionality exists all over the place. An option is anything that has limited and small downside, with the potential for extreme upside. There are, of course, the explicit options that exist in financial markets, but hidden options exist in life everywhere. Example: show up to a party. If it’s horrible, you can leave. If it’s great, you stay. (This is probably the underlying idea in Woody Allen’s quote “Eighty percent of success is showing up.”) Limited downside, huge potential upside.

Taleb illustrates with the graph below, quoting Steve Jobs: “stay hungry, stay foolish.” He interprets that to mean “Be crazy but retain the rationality of choosing the upper bound when you see it.”

  • You have to know it when you see it. Taleb describes an option as follows: Option = asymmetry + rationality. Taleb points out that almost always explicit options are overpriced “…much like insurance contracts…[and] because of the domain dependence of our minds, we don’t recognize [optionality] in other places, where these options tend to remain underpriced or not priced at all.”
  • Don’t outsmart yourself. A key aspect of these ideas is that optionality trumps intelligence. The subtlety is that bringing too much linear intelligence is counterproductive: you take a linear approach, come up with a theory, implement an idea, and maybe meet with success. Better to try a number of things, place yourself in situations with low downside and huge upside, and then have enough intelligence to recognize an upside when it is taking place. Taleb contrasts the linear model ‘Academia → Applied Science and Technology → Practice’ with the tinkering model ‘Random Tinkering → Heuristics → Practice and Apprenticeship → Random Tinkering …’ as two different routes to a breakthrough, with the latter being far more effective.

The story of Tide

This was all interesting to me but just a vague idea until I read about the story of Tide in a New York Magazine piece titled “Suds for Drugs” that highlighted the incredible market position of Tide detergent.

Shoppers have surprisingly strong feelings about laundry detergent. In a 2009 survey, Tide ranked in the top three brand names that consumers at all income levels were least likely to give up regardless of the recession, alongside Kraft and Coca-Cola. That loyalty has enabled its manufacturer, Procter & Gamble, to position the product in a way that defies economic trends. At upwards of $20 per 150-ounce bottle, Tide costs about 50 percent more than the average liquid detergent yet outsells Gain, the closest competitor by market share (and another P&G product), by more than two to one. According to research firm SymphonyIRI Group, Tide is now a $1.7 billion business representing more than 30 percent of the liquid-detergent market.

I thought that was mind-blowing and was very curious about how something that huge comes about. And the article talked about it a bit:

Before the advent of liquid detergent, the average American by one estimate owned fewer than ten outfits, wearing items multiple times (to keep them from getting threadbare too fast) before scrubbing them by hand using bars of soap or ground-up flakes. To come up with a less laborious way to do the laundry, executives at Procter & Gamble began tinkering with compounds called surfactants that penetrate dirt and unbond it from a garment while keeping a spot on a shirt elbow from resettling on the leg of a pant. When the company released Tide in 1946, it was greeted as revolutionary. “It took something that had been an age-old drudgery job and transformed it into something that was way easier and got better results,” says Davis Dyer, co-author of Rising Tide, which charts the origins of the brand. “It was cool, kind of like the iPod of the day.” Procter & Gamble, naturally, patented its formula, forcing competitors to develop their own surfactants. It took years for other companies to come up with effective alternatives.

I became more curious about Procter & Gamble and the deeper story, so I checked out the definitive book about P&G that’s referenced in the article, Rising Tide: Lessons From 165 Years of Brand Building at Procter & Gamble.

The article already mentioned that P&G was tinkering with compounds, which made me recall Taleb’s ideas. But the book, in Chapter Four, “Science in the Washing Machine,” went into more detail.

At the time, P&G was a good business, “profitable, but by no means comfortable.” It was “unable to achieve anything like breakout success against Colgate or Lever Brothers.”

The chapter echoes Taleb’s ideas right from the start, pointing out that “developing Tide was not a linear process.” In fact, at the heart of it was a renegade engineer, Dick Byerly. 

P&G had an inkling that the synthetic detergent market was interesting, but they weren’t able to crack the code on a compound that would work across the United States, with geographic regions that had different types of water, and would be strong enough to wash heavily soiled clothes without leaving a residue.

By 1939, after a few unsuccessful attempts, P&G had backed away from the area, but Dick Byerly, a researcher in the Product Research Department, just wouldn’t give up.

One of his supervisors described him as follows: “You’ve got to understand the man to understand what he did. He was moody at times, and obstinate as all get out. [His supervisors] had horrible times with Dick on occasion. Just tenacious as all hell.”

"We had a system at P&G that every week you wrote a weekly report," one of Byerly’s colleagues related years later. "Byerly had long since given up on putting this in his weekly report regularly since he had comments to the effect, "What in the hell is he working on that for?"

The whole story is fascinating, including details of how Byerly cautiously looped his new boss into his research, how his boss was wary but intrigued, how they were pressured to stop diluting resources in an already-strapped research organization, how that pressure increased during World War II, how they stopped and then restarted, how they had to beg for expensive plant resources (and sometimes fly under the radar) to make the granules, and so on.

And then in the early 1940s, they had a breakthrough when Byerly reversed the typical ratio of the key components. All of a sudden, it worked. They didn’t know why (echoing Taleb’s ideas about tinkering trumping intelligence), but it worked. 

The chapter goes into incredible detail about what happened thereafter, but essentially, here’s what happened:

Voice

I’ve been spending time recently getting to know a technology company that helps companies succeed by letting them listen to their customers more effectively. Among many innovations, it does so by giving its client companies tools that give their customers a voice. 

Coincidentally, today I was reading Malcolm Gladwell’s excellent essay, "The Gift of Doubt," in the most recent issue of The New Yorker about the economist Albert O. Hirschman, and the idea of voice presented itself there as well. 

This happens often in my reading. I’m not sure it’s coincidental so much as me picking up on subtle aspects of what I’m reading that match what I’m thinking about. In this case, I’ve been thinking about organizations and dissatisfied constituents. 

In the broad sense, I’ve always been interested in the topic, particularly since, about a decade ago, I decided I wanted to focus on building great companies. I believe in the ability of a well-run company to improve this world—by hiring and training people, by delighting customers with great products and service, and by generating returns for investors. At its core, the idea is powerful: groups of people come together to create something that creates value for sellers and buyers. There’s an honesty in the market, and (generally speaking) the better offerings and the better companies win.

As part of that exploration of what makes a company great, the idea of being connected to customers is a constant theme. It saddens me when a once-great company becomes obviously disconnected. I certainly acknowledge the difficulty of the problem. Large organizations are hard to manage and tougher to lead. Listening to customers is difficult. Reacting to that feedback is even tougher. The ability to give companies tools to solve this problem is why this company’s mission resonates with me so strongly. 

More recently and on a similar note, I’ve been affected by disenchantment with governments. As I wrote, the unrest bubbling to the surface with varying intensity in almost every major country is too significant to ignore. 

I hadn’t realized these ideas were connected until I read Gladwell’s essay. Gladwell mentions that Hirschman’s most famous book is Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States. As Gladwell describes it:

The closest Hirschman ever came to explaining his motives [for fighting in the Spanish Civil War and World War II] was in his most famous work, “Exit, Voice, and Loyalty,” and even then it was only by implication. Hirschman was interested in contrasting the two strategies that people have for dealing with badly performing organizations and institutions. “Exit” is voting with your feet, expressing your displeasure by taking your business elsewhere. “Voice” is staying put and speaking up, choosing to fight for reform from within. There is no denying where his heart lay.

I’m going to read that book after I finish the (admittedly daunting) one I just started: The Idea of Justice by Amartya Sen. I ordered Sen’s book and What Money Can’t Buy: The Moral Limits of Markets by Michael Sandel to try to come up with a framework for the role business, and in particular those of us in the technology industry, might play in the global events I’ve mentioned. 

In his book The Idea of Justice, Sen echoes some of Hirschman’s ideas. The central idea he puts forth is that we don’t need to agree on the exact shape of a perfectly just world. Rather, he argues, reasoned discussion and debate among parties with different views and philosophies can still lead to enough agreement about what sorts of actions and institutions increase or decrease justice that we can move forward.

It’s a subtle and beautifully argued idea within which is the same idea of debate…voice. People have to participate. They have to speak up. 

I’m still developing this idea, but at its core, the idea of voice resonates deeply within me, as does the related idea of participation, as opposed to exit. I’m going to write more about this later because I'm observing that some of the ideals emerging in the Valley eschew participation. There seems to be a belief that government is so broken—so “90s-era enterprise software company”—that it can’t be fixed, and that the best strategy is to avoid or marginalize it. 

I disagree. I argue we in the Valley have an incredible opportunity to improve government and, through it, the lot of many who are suffering. 

Discontent

This video, an amazing on-the-ground point of view compilation of the protests in Rio de Janeiro—one of many throughout Brazil—is making the rounds.

This footage affected me: these protests are taking place right alongside those in Turkey, another significant country, and both come shortly after the Arab Spring and the various Occupy movements. Taken together, the trend is too big to ignore. 

When I introduced this blog, I wrote about how we in the Valley can’t ignore the broader aspects of the society we live in—issues like inequality, justice, employment, and education. 

As I stated then as well, Steve Jobs was able to create Apple and its iconic products because he appreciated aspects of the arts and humanity in general beyond what was immediately applicable to technology. 

In his own words when he introduced the iPad 2:

It is in Apple’s DNA that technology alone is not enough—it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.

And in his commencement speech at Stanford about dropping out of Reed College and auditing calligraphy classes:

I learned about serif and sans-serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can’t capture, and I found it fascinating.

None of this had even a hope of practical application in my life. But ten years later, when we were designing the first Macintosh computer, it all came back to me. And we designed it all into the Mac. It was the first computer with beautiful typography. If I had never dropped in on that single course in college, the Mac would never have had multiple typefaces or proportionally spaced fonts.

I don’t have a solution to offer, but at least we can start asking the questions and having the conversations that may ultimately lead to a solution. If we set the goal to improve the world as we build great technology and great companies, we’ll be able to do it. That’s the amazing thing about creativity—you don’t know when and where the solution will emerge. 

Daft Punk

In honor of Kanye releasing his latest (and very much anticipated) album Yeezus today, I thought I’d try out Squarespace's SoundCloud music feature for another album that made a huge splash in 2013: Daft Punk’s Random Access Memories. Plus, Daft Punk and Kanye are frequent collaborators, and they make an appearance on Yeezus. The richness and creativity of this music is just amazing. 

A Tech Subculture

This weekend’s Financial Times had a great cover story on Bitcoin titled "The Bitcoin Believers."

I’ve been following Bitcoin’s developments closely, and the article’s insights into the development (and continued struggles) of the currency were very interesting. What I found most fascinating, though, was a description of the “Bitcoin believers” that very elegantly captured a cultural theme running through a large and growing subculture of tech generally:

Around the world, a generation is growing up whose intellectual framework was forged in an economic conflagration which destroyed the reputations of government, finance and central banks alike. The only heroes in this landscape are the hoodie-wearing tech entrepreneurs with their billion-dollar businesses.

This is one of the topics touched upon in George Packer’s recent article in The New Yorker, "Can Silicon Valley embrace politics?".

The sentiment is certainly a driver of the outrage in the Valley (and certainly more broadly) over the NSA’s activities exposed by Edward Snowden. For periods of time in the days following Glenn Greenwald’s article on the matter in The Guardian, every single link on Hacker News was related to the issue. I’ve never seen anything like that happen.

I’ve heard similar sentiments in conversations with friends in the developer community related to issues like Bradley Manning, bank bailouts, taxi regulations affecting Uber, internet sales tax, and so forth. 

For these reasons, I do believe this is a very significant aspect of Silicon Valley. I’m going to weigh in with a more thoughtful post on whether or not I agree with the sentiments embodied in this subculture, but for now I just wanted to highlight it and the fact that people more broadly seem to be noticing it as well. 

Annual Letters

I’ve been reading Warren Buffett’s Annual Letters. You can access them back to 1977 at Berkshire Hathaway’s website, or you can buy a book with the full unedited collection going back to the start (1965).

They’re excellent. I’ve learned a lot, not just about investing but financial and economic history as well. It’s one thing to read about stagflation in the ’70s in an economics book and another to hear someone trying to generate returns in a business talk about it as it was happening.

I was curious who else writes thoughtful annual letters and learned from an excellent post by Ben Horowitz about CEOs that Jeff Bezos does. 

Horowitz referenced Bezos’s 1997 letter (the first one after Amazon’s IPO) as an excellent example of how a CEO gives a company a strategy and a story. All of Bezos’s letters are well done and worth reading. You can find them here.

I’m going to write more about what I learn from all of this reading, but for now I just wanted to note the power of writing these letters. They’re an excellent way to organize your thinking, both retrospectively, allowing you to share what happened and why, and prospectively, describing what you plan to do. 

In Buffett’s own words (from a lecture he gave at Notre Dame):

I proposed this to the stock exchange some years ago: that everybody be able to write out “I am buying 100 shares of Coca Cola Company, market value $32 billion, because…” and they wouldn’t take your order until you filled that thing out. 

I find this very useful when I write my annual report. I learn while I think when I write it out. Some of the things I think I think, I find don’t make any sense when I start trying to write them down and explain them to people. You ought to be able to explain why you’re taking the job you’re taking, why you’re making the investment you’re making, or whatever it may be. And if it can’t stand applying pencil to paper, you’d better think it through some more.

Bezos has clearly taken this idea of writing’s ability to help you think even further. At the beginning of every meeting of senior executives, the entire team sits in silence for thirty minutes reading a six-page memo discussing various aspects of the issue at hand. 

From Fortune:

Amazon executives call these documents “narratives,” and even Bezos realizes that for the uninitiated—and fans of the PowerPoint presentation—the process is a bit odd. “For new employees, it’s a strange initial experience,” he tells Fortune. “They’re just not accustomed to sitting silently in a room and doing study hall with a bunch of executives.” Bezos says the act of communal reading guarantees the group’s undivided attention. Writing a memo is an even more important skill to master. “Full sentences are harder to write,” he says. “They have verbs. The paragraphs have topic sentences. There is no way to write a six-page, narratively structured memo and not have clear thinking.”

Mobile

The mythology of Amazon’s founding is that it all started in 1994 when Jeff Bezos read that the internet was growing 2,300 percent. It’s not clear what that metric was exactly. Some say it was annually, some say monthly, and I haven’t been able to figure out exactly what the metric was measuring, whether pages, page views, etc. Another story talked about user growth—from 16 million in 1995 to 31 million in 1996. 

It doesn’t matter really. It was a huge shift. He perceived it. And he acted on it. 

The equivalent, if not more significant, shift today is mobile. And by far the best articulation of those trends that I’ve seen is here (h/t to Fred Wilson):

Slide 6 had the most impact on me. The world in 2017 will have about 7.5 billion people, of whom 3.2 billion will have smartphones (i.e., small internet-connected computers). And that’s from a base of about 1.2 billion in 2012. 

That compares to a PC base of about 1.5 billion, with sales collapsing.

Consider the following from Pew Internet for another perspective:

Recalling the doubling in internet users that Jeff Bezos acted on, the percentage-point increases in the above are mind-boggling. In April 2012, 18 percent of Americans over the age of eighteen had a tablet. By May 2013, that had increased to 34 percent. The right-hand column is misleading in that it gives point increases rather than percentage increases. The percentage growth is tremendous: 80 percent and even 100 percent or higher in many cases.

The uniformity of adoption is amazing, too. 

Low income, minority, elderly, rural—those are the fastest growing percentages. Granted, they’re from smaller bases, but they’re converging to the average adoption.

Paul Graham wrote the most thoughtful post I’ve seen on the imprecise nature of describing these devices. He pointed out that if the iPad had come first, we wouldn’t think of the iPhone as something very different. We would think of it as a small iPad that you can hold to your ear to double as a phone. It’s just slightly more tailored to one specific app: the phone.

So smartphones and tablets are really the same thing—a new internet-connected device with touch capability that can run many types of applications and that also has special characteristics like knowing where it is in the world, how it’s physically moving through space, and how it’s oriented. That’s all in addition to its ability to deliver all the pre-tablet consumer services that are more valuable with increased mobility: music, video, email, etc.

Oh, and they can be pretty cheap. Sure, Apple is extracting margin for the time being. But given the dynamics of supply chains, contract manufacturing, and scale, prices will drop dramatically. I already know of one entrepreneur that is building an incredible business leveraging the fact that he can buy Android phones built to last year’s specs in bulk for about $50 per unit.

Returning to my earlier post on prophecy, this is one of those shifts that may be prone to both of the failures Clarke identified: failure of nerve and failure of imagination.

On nerve: there’s no doubt the world is moving in this direction. We have to accept it and act accordingly (which means boldly).

On imagination: there’s more about what will be possible that we can’t imagine than we can, so entrepreneurs have to be willing to experiment, and investors have to be willing to dig to the core of an idea or offering, suspend disbelief, and rely on more fundamental signals—the quality of the entrepreneur and the product. The rest will emerge over time.

It’s an exciting time.

SaaS Valuations: Part 1

Long story short: The revenue multiples we typically use to describe SaaS company valuations obscure a lot of information, particularly growth. Replicating those multiples using simple discounted cash flow valuations shows that growth rates have a large impact on valuation. So let’s not react to multiples without giving equal, if not more, weight to the assumptions about growth rates and their persistence. If you believe the high growth rates, you believe the multiple—that’s mechanical. 

Longer version:

The market isn’t crazy—at least not in the long-term. It tracks fundamentals (revenue, cash flows, capital invested, etc.).

Consider the following three perspectives:

  • Since taking over Berkshire Hathaway in 1965, Warren Buffett has focused consistently on growing book value per share in the belief that the company’s market value per share will track accordingly. That belief has proven to be pretty true:

[Chart: Berkshire Hathaway book value per share versus market value per share]

  • In their popular book Valuation: Measuring and Managing the Value of Companies, McKinsey & Company described an analysis in which they estimated the earnings multiple of the stock market at specific points in time over a forty-year period based on a forward-looking cash flow model. They compared the predicted price-to-earnings (P/E) multiple to the actual P/E multiple, drawing the chart below. They concluded: “Over the long term, the stock market as a whole appears to follow the simple, fundamental economic laws described in Chapter 3: Value is driven by returns on capital, growth, and—via the cost of capital—interest rates.”

[Chart: predicted versus actual P/E multiple of the stock market over time]

  • A more recent study by McKinsey & Company tied different economic eras to stock returns. A nice (and deliberate) aspect of the descriptions is that they include eras long enough so that major drops in the stock market (e.g., the dot-com bubble and housing crash) are placed in proper context. Their conclusion was the same: 

Unlike the market for fine art or exotic cars, where value is determined by changing investor tastes and fads, the stock market is underpinned by companies that generate real profits and cash flows. Most of the time, its performance can be explained by those profits, cash flows, and the behavior of inflation and interest rates. Deviations from those linkages, as in the tech bubble in 1999–2000 or the panic in 2009, tend to be short-lived.

[Chart: stock market returns across economic eras]

So for SaaS companies…?

I liked McKinsey’s approach of replicating the multiples with discounted cash flows. It squared the circle for me, so to speak, by explicitly tying the shorthand language of multiples to the more meaningful underlying assumptions about fundamentals. 

I wondered: could you do the same for SaaS multiples? Could you describe a large part of the difference in multiples with simple assumptions?

Below are SaaS multiples for the most significant public SaaS companies:

[Chart: enterprise value as a multiple of 2014 projected revenue versus projected 2013 to 2014 revenue growth for public SaaS companies]

These are all the SaaS companies from Pacific Crest’s weekly Software Company Valuations distribution as of May 3, 2013. (I’ve included the specific distribution here. See page 2 for the SaaS companies and their details.) 

The projected revenue growth rate from 2013 to 2014 is on the x-axis, and the corresponding current enterprise value as a multiple of 2014 projected revenue is on the y-axis. 

(Those outliers at the top are NetSuite, to the left, and Workday, to the right—I’ll dive into what’s going on there in a later post.)

There’s a pretty clear relationship:

[Chart: the same scatter with a trend line]

The red point is RealPage (RP), which is projected in the Pacific Crest document to grow at 16 percent and is trading at 3.3x 2014 projected revenue. Specifically, it has an enterprise value of $1.5 billion and projected 2014 revenue of $454 million.

I valued RealPage using a simple discounted cash flow, with the following assumptions (see the sketch after this list):

  • Revenue decay rate of 10 percent (i.e., revenue growth of 30 percent this year, 27 percent next year)
  • Unlevered free cash flow margins of 20 percent
  • Forecast period of 10 years
  • Share count increase of about 2 percent each year
  • Discount rate of about 12 percent
  • Perpetual growth rate at the end of the ten-year period of 2.5 percent
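
To make the mechanics concrete, here's a minimal sketch in Python of one way to back an enterprise-value-to-forward-revenue multiple out of a DCF like this. It's my own reconstruction under the assumptions above, not the exact model behind the charts: the function name and the Gordon-growth terminal value are my choices, and the 2 percent share-count increase is left out because it affects per-share value rather than enterprise value.

```python
# A rough sketch (my reconstruction, not the exact model behind the charts):
# derive an EV / forward-revenue multiple from a DCF with a decaying growth rate.

def implied_forward_revenue_multiple(
    growth_next_year,        # projected revenue growth in the first forecast year, e.g. 0.16
    decay=0.10,              # growth rate decays 10% per year (30% -> 27% -> ...)
    fcf_margin=0.20,         # unlevered free cash flow margin
    years=10,                # explicit forecast period
    discount_rate=0.12,
    terminal_growth=0.025,
):
    revenue = 1.0            # normalize current-year revenue to 1.0
    growth = growth_next_year
    pv_fcf = 0.0
    forward_revenue = 0.0
    for year in range(1, years + 1):
        revenue *= 1.0 + growth
        if year == 1:
            forward_revenue = revenue                 # next year's projected revenue
        pv_fcf += revenue * fcf_margin / (1.0 + discount_rate) ** year
        growth *= 1.0 - decay                         # decay the growth rate each year
    # Gordon-growth terminal value on the cash flow just beyond the forecast period
    terminal_fcf = revenue * fcf_margin * (1.0 + terminal_growth)
    terminal_value = terminal_fcf / (discount_rate - terminal_growth)
    pv_terminal = terminal_value / (1.0 + discount_rate) ** years
    return (pv_fcf + pv_terminal) / forward_revenue

# RealPage-like inputs: 16% forward growth, 10% decay, 20% margins -> roughly 3.3x
print(round(implied_forward_revenue_multiple(0.16), 1))
```

Sweeping growth_next_year from 0.10 to 0.50 (and, later, dropping decay to 0.05) is how curves like the orange and red trend lines below could be traced out, though the exact heights depend on details I haven't spelled out here, like the terminal-value and dilution treatment, so this sketch reproduces the shape more reliably than the precise numbers.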

The resulting valuation is pretty close (RP*):

[Chart: the scatter with the RealPage DCF valuation added (RP*)]

If you take that discounted cash flow analysis and just change the growth rates to reflect assumed ‘13 to ‘14 revenue growth rates of 10 to 50 percent and keep everything else the same, you get the following points and trend line (orange):

[Chart: DCF-implied multiples for 10 to 50 percent growth, with trend line (orange)]

In other words, if you’re reasonably confident a company will grow its revenue 50 percent in 2014 and that it will see no more than 10 percent decay in that growth rate for the next ten years while sustaining 20 percent unlevered free cash flow margins, you’d be willing to pay 7.5 times 2014 projected revenue for that company. 

And you’d only pay 2.8 times 2014 projected revenue for a similar company that would only grow revenue 20 percent in 2014. 

That’s mechanical. Those assumptions are certainly arguable, but that’s not the point. The point is that the multiples tell you nothing. They just incorporate deeper—more meaningful—assumptions that do tell you something. Those assumptions at least you can debate, test, and research. Those assumptions and their robustness drive the valuation. Not the multiples. The multiples are just a summary. 

In other words, a 7.5 times revenue valuation by itself doesn’t indicate a valuation that is expensive or cheap.

Now, let’s take those points and draw them again changing one assumption: revenue decay rate.

Let’s say, hypothetically, there were SaaS companies with very talented management, large revenue bases, high historic growth rates, great products, and proven development and sales capabilities attacking tremendously large markets with incumbents that were constrained by technology and legacy deployments from responding. And let’s say that for those reasons you believe those companies can sustain high growth rates for a long time horizon.

So let’s say instead of a 10 percent revenue decay rate you forecast a 5 percent revenue decay rate and keep the other assumptions above the same.

You get the red points and trend line:

[Chart: DCF-implied multiples with a 5 percent revenue decay rate (red)]

Remove NetSuite and Workday, and you get this:

[Chart: the same, with NetSuite and Workday removed]

That’s a pretty good match. 

Conclusion

My goal with the above is to make clearer the language we should use to think about SaaS valuations. Revenue multiples obscure too many assumptions.

At the highest level, they obscure the impact growth has on valuations. If you’re an investor for the long-term, buying these companies for the future cash flows they give you in return for your investment, the higher valuations as a function of higher growth rates are completely justified. If you believe the growth rates and other assumptions, you believe the multiple—that’s mechanical.

At deeper levels—operating margins, capex, discount rates, perpetual growth rates—this gives you a framework to work with. We know roughly the assumptions the market is pricing into the valuation of these companies. We can then determine where we differ.

SaaS Valuations: Intro

I’ve had a few conversations recently that led me to dig deeper into SaaS company valuations. The conversations were along the lines of: 

  • “Workday is trading at 15 times 2014 revenue—that’s crazy!”
  • “Let’s hope SaaS company valuations hold up.”
  • “I need to value [early stage SaaS company]—what are SaaS company multiples looking like?”

These conversations bother me because embedded in them are a number of incorrect ideas.

In the first, it’s the idea that a 15 times revenue multiple is too high. It ignores the fact that Workday is forecast to grow revenue north of 50 percent in 2014—and that’s on 2013 projected revenue of $425 million. That’s tremendous growth by any standard, but on $425 million it’s incredible. So the number of 15 times revenue tells you nothing.

The second implies the valuations are unreasonable and that we are at the whim of the market in selling shares to the public. The reality is that, one, the valuations are reasonably supported by SaaS company fundamentals and that, two, the market overall is pretty good about aligning prices with fundamentals. At least, it is in the long-term. Not so much in the short-term. 

The third statement is, in part, a version of the first. Most private SaaS companies are growing revenue at much faster rates than the public companies. Applying the same multiple ignores that fact.

It’s also flawed because the market for private SaaS companies is very different than that for public SaaS companies. Public SaaS companies have pretty definitively reached escape velocity. They are clear going concerns, whose near-term revenue growth is relatively known. Private SaaS companies are riskier affairs. I argue actually that they reach escape velocity earlier than people seem to think, justifying what many believe are unreasonable valuations. But given the risk at various stages of development, the right way to value them in my opinion is the probability-weighted forecast of an IPO at any given stage.
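
By way of illustration only, here is a toy sketch of that kind of probability-weighted valuation. The function and every number in it are hypothetical placeholders, not figures from any actual company, and not the formalization I plan to lay out in Part 2.

```python
# A toy sketch of a probability-weighted valuation; all inputs are hypothetical.

def probability_weighted_value(
    p_ipo,              # probability the company eventually reaches an IPO
    value_at_ipo,       # estimated value at IPO (e.g., from a DCF or comparable multiples)
    years_to_ipo,       # expected time to the IPO
    discount_rate,      # rate reflecting the risk at this stage
    downside_value=0.0, # recovery value if the IPO never happens
):
    upside = value_at_ipo / (1.0 + discount_rate) ** years_to_ipo
    return p_ipo * upside + (1.0 - p_ipo) * downside_value

# Hypothetical: a 25% chance of a $1.5 billion IPO in five years, discounted at 30%
print(probability_weighted_value(0.25, 1.5e9, 5, 0.30))   # ~ $101 million
```

As a company matures, p_ipo rises and the discount rate falls, which is one way to rationalize the valuation step-ups between stages.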

So I’m going to write three posts to address these and related points. Those posts are:

  • Part 1: The stock market isn’t crazy. Stock market values in general track fundamentals, and SaaS company valuations do so as well. 
  • Part 2: What are the chances? An approach I’ve been toying with builds on that idea. If the IPO values have a dependable logic to them, then the right approach to valuing earlier stage private SaaS companies is a valuation based on the probability that the company goes public. Having seen these valuations done, I know this is what investors in the private market do anyway. I’d just like to formalize this a bit. 
  • Part 3: Embedded options in public SaaS companies. Finally, I’m going to explore a bit whether public SaaS valuations might be missing some key elements of SaaS companies: (1) low downside risk, given recurring revenues and high steady state cash flow margins and (2) dramatic upside potential given (i) their strong competitive positions in large markets and (ii) their R&D and sales/marketing capabilities that allow them to create or acquire complementary products with high growth potential. The low downside risk with upside potential is an option embedded in public SaaS companies that tends to be overlooked. 

Performance

I’m reading Creative Capital by Spencer Ante. It’s a biography of Georges Doriot, who in 1946 founded American Research and Development Corporation (ARD). It was one of the first venture capital firms, and it was the first to have an institutional base and to focus on technical ventures.

ARD’s biggest success was Digital Equipment Corporation in which it invested $70,000 in 1957. After the company’s IPO in 1968, ARD’s stake was worth $355 million.

But Doriot was a fascinating person in other ways. From 1946 to 1966, he was widely considered the most popular professor at Harvard Business School, not because his class was fun but because it was hard and taught students how to think. 

In a predecessor to the case method at HBS, he had students write two reports: a topic report and a company report.

The topic report asked students to write a report on “a subject of your own choosing which will be a contribution to the future of American business,” but there was a twist: they had to imagine the impact of the problem, product, or technology ten years in the future.

The company report made students learn the nuts and bolts of a business by actually working with local manufacturing companies. For half a year they’d study companies up close. 

Doriot on why he had them do that:

Up until that time when the students read that the price of copper went up, to them it was just statistical information which they might feed back to the teacher. In this case, I want them to say, “What does it mean?” And also very important, “What do I do about it?” I want them to learn to have pains in the stomach, you see what I mean?

This idea of really challenging people to help them grow is a theme throughout the book.

Georges Doriot was a thirteen-year-old boy in his home country of France when he placed second in his class. He was excited and raced home to show his parents the certificate he received. His mother, Camille, gave him a hug and baked him cookies. But when his father, Auguste, came home and saw the certificate, Ante describes a very different reaction:

In stark contrast to Camille, Auguste seemed unimpressed with his son’s award. He acknowledged the certificate with only a cursory glance, nodded perfunctorily, and then fixed his son with one of those chilling stares of appraisal. “And why not first?” asked Auguste.

The story goes on to describe how upset Georges was at this. He ran to his room bewildered and humiliated. 

Ante describes how Georges recounted the experience to a friend many years later:

His father…was not concerned that Georges had failed to achieve first place honors in his class at Ecole Communale. No, he was concerned that Georges was happy placing second. To Auguste, a famous automobile engineer who had raised his children to strive for excellence in everything they did, celebrating anything less than the best possible result smacked of contentment. And contentment, Auguste believed, is a state of mind that recognizes no need for improvement.

Georges apparently told that story often because many who knew him felt the experience played a large role in what he achieved later in life. And the key point is worth repeating: 

Contentment is a state of mind that recognizes no need for improvement. 

This resonated with me for many reasons. My father was like this as well. It had a big impact on me, and I believe it’s the right approach. Yet, I’m seeing many indications that cultural norms are moving in the other direction. In both the parenting and professional realms, I see people giving less feedback, particularly negative feedback, and being less clear about what they expect. 

On parenting, I’ll make the perhaps controversial statement that I thought Battle Hymn of the Tiger Mother was a great book. It was well-written and entertaining. But more importantly, I respected Amy Chua’s approach. She expected the best from her children and pushed them to achieve it. She approached her interactions with her children assuming strength on their part, not weakness. She openly acknowledged throughout those years that the cultural norms around her were different. She actively and thoughtfully dismissed them, and she was happy to tell you why. (If you missed this controversy, read the WSJ article.)

In the professional world, I’m surprised how little I see people giving direct, timely, and thoughtful feedback, both good and bad. Early in Creative Capital, the book describes the career of Georges Doriot’s father, Auguste. He was a brilliant automotive engineer at Peugeot, and he was actively mentored by Armand Peugeot on all aspects of the business, ultimately starting his own company.

Apprenticeship is the most natural way to learn, and yet it seems people are becoming loath to give feedback. I just don’t hear it very often. I believe the right model has two interrelated pieces:

1. Frequent, even daily, feedback. These are immediate observations on what went well and what could have been better. For example: “You responded poorly to a question in that presentation. It sounded like you were ill-prepared, but I think if you had just paused a beat or two, you would have made a bigger impact.”

2. Periodic reviews. I think yearly reviews are too infrequent. In a quarterly review, you highlight common positive and negative themes (i.e., connect the dots from what should by then be a large amount of feedback) and review milestones and achievements (or the lack of them). The quarterly review could still be less involved than a more significant annual review, but quarterly seems to be about the right frequency to tie together trends, give higher-level feedback, and evaluate (and potentially redirect) progress toward goals. 

In both of these worlds—parenting and professional—I believe these behaviors can have immense impact if there’s a genuine desire to see the person succeed. With parenting, I’d like to believe that’s obvious. You want to see your kids succeed. In professional settings, I see that vary much more. 

The other related element is expectations. People have to believe that mistakes are a necessary step to growth. Negative feedback shouldn’t be perceived as a knock. It isn’t personal. Every single person that has achieved something significant made missteps along the way. Those missteps are a necessary ingredient for growth. You need to at least learn from them on your own. But if someone else can help you do so with even more effectiveness, all the better. So if you can defuse the inherently defensive, painful nature of feedback, there’s tremendous opportunity for growth.

No silver bullets

I caught up with a good friend who is also in venture capital. We were chatting about a company that he had backed early and that is on track to be quite successful. 

When I asked him about some of the key decision points in the company’s history, he said, “You know, there really were no silver bullets.”

His point as he elaborated was that the company did nothing that isn’t commonly accepted wisdom in building successful technology companies. The only difference, he mused, was that many companies try to skip steps.

He mentioned that it’s a lot like doing well in school. You put in your time. You do the work. You get a base of knowledge. You build on that knowledge. You’ll get some things quickly. Other things you might need to work on more or ask for help. You progress forward doing all the right things.

That really resonated with me. Partly, and this is coincidental, it was an interesting analogy given Peter Thiel’s recent efforts to encourage bright students to skip college and pursue entrepreneurship (the 20 Under 20 Thiel Fellowship). 

More so, however, his perspective jibed with something I’d been wondering about lately: there are no secrets. Wisdom is there for the taking. There’s more good writing on the web than one can digest.

Most people in and around the Valley ecosystem of entrepreneurship can articulate the key lessons of building a successful company: target a big market, build a great product, get a product in front of customers, learn what customers want, build a great team, find product-market fit, articulate benefits, establish a strong culture, work well with your teammates, find a revenue model, raise capital, scale distribution, scale marketing, scale recruiting, raise more capital, make the product better, etc. 

But it’s amazing how many companies I see that are quite far along (in terms of capital raised more so than revenue) that, for example, haven’t found a compelling product-market fit. 

There are no secrets. Just the conviction that you’re doing what needs to be done and the grit to do it day in and day out. 

Basketball

I read today about Muthu Alagappan. Two years ago, Alagappan was an undergraduate intern at the Palo Alto data analysis startup Ayasdi and, using their innovative technology, showed that basketball actually has thirteen positions—not five.

Ayasdi is a leader in topological data analysis, which uses a branch of mathematics called topology to yield insights from very large data sets.

The vast majority of tools for data analysis today allow you to ask questions of the data and get answers. Yet, as data sets grow ever larger, the number of possible questions you can ask grows exponentially larger, making it impossible to ask all the questions that matter.

Topology is a branch of mathematics that deals with shapes and spaces. Ayasdi’s insight was that complex data sets are about relationships and that those relationships can be represented mathematically and, through topology, spatially. The spatial representation can then suggest areas that may be interesting to explore. This allows you to discover relationships you may have never thought to explore. 

As the CEO Gurjeet Singh said to me: "The entire history of BI, despite dramatic advances in visualization and the ability to handle larger and larger sets, has seen the same fundamental query-answer structure. But as the size of data sets increases, there are more interesting questions than you can even ask. Ayasdi allows you to get answers to the questions you would never even think to ask."

Alagappan applied Ayasdi’s technology to basketball statistics. He entered seven statistics (points, rebounds, assists, steals, turnovers, fouls, and blocks) for every NBA player, adjusted for playing time. 

This is what he ultimately came up with:

Instead of seeing five clusters, he saw thirteen. (Later work narrowed the thirteen down to the ten you see above.)
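Ayasdi’s actual pipeline is topological (and proprietary), but the basic idea, clustering players on playing-time-adjusted statistics and letting the data rather than convention decide how many “positions” there are, can be sketched with ordinary tools. Here is a minimal, hypothetical version in Python; the player_stats.csv file and its column names are assumptions, not Alagappan’s actual data:

    # A simplified stand-in for Alagappan's analysis: cluster NBA players on
    # per-36-minute box-score statistics and let a model-selection criterion,
    # not convention, pick the number of "positions."
    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    STATS = ["points", "rebounds", "assists", "steals", "turnovers", "fouls", "blocks"]

    df = pd.read_csv("player_stats.csv")                # one row per player (hypothetical file)
    per36 = df[STATS].div(df["minutes"], axis=0) * 36   # adjust for playing time
    X = StandardScaler().fit_transform(per36)           # put all seven stats on one scale

    # Try a range of cluster counts and keep the one with the best silhouette score.
    best_k, best_score = 2, -1.0
    for k in range(2, 16):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_score = k, score

    print(f"best number of 'positions': {best_k} (silhouette {best_score:.2f})")

This is plain k-means, not topological data analysis; a mapper-style approach would instead build a graph of overlapping clusters, which is what gives Ayasdi its spatial maps. The point of the sketch is only that nothing in the data forces the answer to be five.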

According to the Mercury News article, two teams have a formal partnership with Alagappan: the Portland Trail Blazers and the Miami Heat. And apparently, throughout the current playoffs, a Miami Heat representative has been in touch with Alagappan before each series to create a data-driven scouting report. 

Prophecy

"Any sufficiently advanced technology is indistinguishable from magic."

That aphorism, attributed to Arthur C. Clarke, came to mind as I was reflecting on a vision for the future an entrepreneur shared with me, so I looked it up. 

Turns out it’s from a fascinating essay Clarke wrote in 1962, “Hazards of Prophecy.” It’s a biting piece on how bad “experts” are at forecasting the future, similar in spirit to some of Nassim Taleb’s writing. You can find the essay here.

Clarke starts off:

With monotonous regularity, apparently competent men have laid down the law about what is technically possible or impossible—and have been proved utterly wrong, sometimes while the ink was scarcely dry from their pens. On careful analysis, it appears that these debacles fall into two classes, which I will call “failure of nerve” and “failure of imagination.”

Failure of nerve

Failure of nerve is the more common failure, which occurs “when even given all the relevant facts the would-be prophet cannot see that they point to an inescapable conclusion” (emphasis Clarke’s). 

He goes on to list a series of developments and the common misgivings of naysayers: locomotives—suffocation at thirty miles per hour; electric light bulb—impractical; flying—physically impossible; space travel—fuel would be too heavy; etc. 

He makes an interesting point that many of the naysayers were credible, competent people:

The lesson to be learned from these examples is one that can never be repeated too often, and is one that is seldom understood by laymen—who have an almost superstitious awe of mathematics. But mathematics is only a tool, though an immensely powerful one. No equations, however impressive and complex, can arrive at the truth if the initial assumptions are incorrect.

Failure of imagination

The second part of the essay focuses on what he calls the less blameworthy, and more interesting, failure—the failure of imagination:

It arises when all the available facts are appreciated and marshaled correctly—but when the really vital facts are still undiscovered, and the possibility of their existence is not admitted.

Clarke gives the example of Lord Rutherford, who discovered a great deal about the structure of the atom and yet frequently mocked those who believed that one day the energy locked up in it could be harnessed.

Clarke points out that it was only five years after Lord Rutherford’s death that the first nuclear chain reaction was started and warns:

The example of Lord Rutherford demonstrates that it is not the man who knows most about a subject, and is the acknowledged master of his field, who can give the most reliable pointers to its future. Too great a burden of knowledge can clog the wheels of imagination…

He embodies that idea in what he calls Clarke’s Law:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

Clarke gives some advice on how to prevent this second failure:

One can only prepare for the unthinkable by trying to keep an open and unprejudiced mind—a feat which is extremely difficult to achieve, even with the best will in the world. Indeed, a completely open mind would be an empty one, and freedom from all prejudices and preconceptions is an unattainable ideal. Yet there is one form of mental exercise that can provide good basic training for would-be prophets: Anyone who wishes to cope with the future should travel back in imagination a single lifetime—say to 1900—and ask himself just how much of today’s technology would be, not merely incredible, but incomprehensible to the keenest scientific brains of that time. 

Predicting technology

The thought experiment Clarke suggests is pretty neat. Since I’m an investor in technology, I’m more interested in the five- to fifteen-year window, so I’ll modify his thought experiment slightly:

To understand the sort of technology companies that will win in the next ten years, go back ten years and ask yourself which of the winners of today would be completely unforeseeable by the keenest investing minds of that time.

Markets are nice because expectations are baked into prices. Valuations reflect beliefs about the future so if the value of a company increases (or decreases) dramatically in the intervening time, it’s obvious that something unexpected happened. 

You can’t predict everything, but to Clarke’s point, much that could have been predicted wasn’t, because of either a failure of nerve (failing to take the facts to their logical conclusion) or a failure of imagination (failing to admit the possibility of significant, in some cases likely, developments).

Ten years ago was 2003. Some brief anecdotes of what was happening then and what happened since:

  • VMware was five years old. In 2004, it would be acquired by EMC for $625 million. In 2007, it would IPO at an $11 billion market cap. And by 2013 it would have a $30 billion market cap.
  • Salesforce was four years old. In 2003, it would have close to $100 million of revenue, having grown 90 percent that year. It would go public in 2004 and be valued at about $1.3 billion. Analysts valued the company using multiples of revenue from “comparable” companies and by projecting its future cash flows. For a sense of expectations at the time, Credit Suisse was neutral, believing the $13 per share price would increase to $15 in 12 months. It valued the company on projected cash flows and projected FY ‘13 (ending 1/31/13) revenue of $812 million. Actual FYE 1/31/13 revenue: $3.1 billion. Enterprise value: $29 billion. That’s a 22x appreciation, 41 percent CAGR—through the worst economic downturn since the Great Depression. (The CAGR arithmetic is sketched just after this list.)
  • SuccessFactors was two years old. Revenue in 2002 was $2.5 million, and it would be $4.1 million in 2003. Good, not great. I don’t have the financing details directly, but backing into them from the S-1 (happy to share details), it looks like the $5 million Series B it raised in 2002 had an $11 million post-money valuation. That was certainly a bargain given the outcome, but it was a bargain even on those revenue figures. 
  • Amazon was nine years old and had been a public company for six years. The stock was trading around $35 at mid-year for a $14.6 billion market cap on projections for 2003 and 2004 of revenue increasing from $5.1 billion to about $6.2 billion. All of the public analyst discussion was centered around merchandising categories, fulfillment costs, free shipping promotions, and margins. The stock crested at $91 just prior to the 2008 housing/banking crisis, shrinking to $38 at the low. Today, it’s at $272 for a market cap of $127 billion and projected 2013 revenue of $74 billion. That’s a 23 percent CAGR on the share price and a 31 percent CAGR on revenue. Now all of the discussion is focused on AWS, digital offerings, Kindle, Amazon Prime, original content, and China. 
  • Facebook hadn’t launched yet. That would be January 2004. The seeds were sown, though. Zuckerberg had created Facemash and gotten in trouble for it. Friendster was one year old, and Myspace would launch that year and be acquired two years later for $580 million. 
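The CAGR figures in the bullets above are just the compound-growth formula applied to the endpoints. A few lines of Python make the arithmetic explicit; the inputs are the approximate figures quoted in the bullets:

    # Compound annual growth rate: (end / start) ** (1 / years) - 1
    def cagr(start, end, years):
        return (end / start) ** (1 / years) - 1

    # Salesforce enterprise value, ~$1.3B at the 2004 IPO to ~$29B in FY '13: ~41%
    print(f"{cagr(1.3, 29, 9):.0%}")
    # Amazon share price, ~$35 in mid-2003 to ~$272 today: ~23%
    print(f"{cagr(35, 272, 10):.0%}")
    # Amazon revenue, ~$5.1B in 2003 to a projected ~$74B in 2013: ~31%
    print(f"{cagr(5.1, 74, 10):.0%}")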

Just some quick observations. I’m going to write about each of these in more detail later. But they give you a flavor of what the technology world looked like ten years ago. 

The reverse thought experiment—which high flyers of the time eventually stumbled—is informative as well. I’d love to dig into Yahoo, RIM, and others. 

Decision notebook

Farnam Street Blog is a daily reading destination. Shane Parrish writes mainly about decision making, and today’s post linked back to one about a decision notebook. As with almost everything related to thinking these days, the idea originated with Daniel Kahneman.

The below is from Michael Mauboussin, author of The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing, speaking about advice from Daniel Kahneman, the psychologist who won the Nobel Prize for economics in 2002:

When I pose [Kahneman] the question, what is a single thing an investor can do to improve his or her performance, he said almost without hesitation, go down to a local drugstore and buy a very cheap notebook and start keeping track of your decisions. And the specific idea is whenever you’re making a consequential decision, something going in or out of the portfolio, just take a moment to think, write down what you expect to happen, why you expect it to happen and then actually, and this is optional, but probably a great idea, is write down how you feel about the situation, both physically and even emotionally. Just, how do you feel? I feel tired. I feel good, or this stock is really draining me. Whatever you think.

The key to doing this is that it prevents something called hindsight bias, which is no matter what happens in the world. We tend to look back on our decision-making process, and we tilt it in a way that looks more favorable to us, right? So we have a bias to explain what has happened.

When you’ve got a decision-making journal, it gives you accurate and honest feedback of what you were thinking at that time. And so there can be situations, by the way, you buy a stock and it goes up, but it goes up for reasons very different than what you thought was going to happen. And having that feedback in a way to almost check yourself periodically is extremely valuable. So that’s, I think, a very inexpensive; it’s actually not super time consuming, but a very, very valuable way of giving yourself essential feedback because our minds won’t do it normally.

He’s writing above from the perspective of public securities, but I’ve often thought about something similar from the perspective of venture capital. We obviously spend a lot of time with our portfolio companies so it’s natural to think of what went wrong with companies that didn’t succeed—though even then I don’t see many people do that systematically.

But far less common is thinking about the companies that did succeed yet aren’t in the portfolio. Bessemer Venture Partners has a pretty amusing anti-portfolio on their website listing some of their major misses. I can only assume they do this sort of analysis systematically, which, if true, would certainly explain at least a part of their dramatic success. 

Kahneman makes the point above about individual investors, but I believe there’s a broader implication: a firm could institutionalize this sort of transparent record-keeping where all investors openly keep track of their decisions. Again, the companies that end up in the portfolio would likely be well-documented and a natural focal point so the key would be documenting the companies that didn’t make it into the portfolio. 

Then, periodically, perhaps quarterly, the firm would review a list of companies that weren’t in the portfolio and had successful outcomes (or were tracking to successful outcomes). It would ask itself why it had passed on those companies and what part of the process could be improved.
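Here is a minimal sketch of what one entry in such a shared log might look like, following Kahneman’s prescription of recording the expectation, the reasoning, and the feeling at decision time; the field names and the example are hypothetical, not any firm’s actual system:

    # A bare-bones decision-log entry for the periodic review described above.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DecisionEntry:
        company: str
        decision: str          # "invest" or "pass"
        expectation: str       # what you expect to happen, and why
        feeling: str           # physical and emotional state at the time
        decided_on: date
        outcome: str = ""      # filled in later, at the review

    log = [
        DecisionEntry(
            company="ExampleCo",               # hypothetical
            decision="pass",
            expectation="Market looks too small; expect slow growth.",
            feeling="Rushed; end of a long day.",
            decided_on=date(2013, 1, 15),
        ),
    ]

    # Periodic review: revisit the passes once their outcomes are known.
    for entry in log:
        if entry.decision == "pass" and entry.outcome:
            print(entry.company, "|", entry.expectation, "->", entry.outcome)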

And that’s assuming it had the chance to invest in those companies. In stark contrast to public investing, access is key in private markets. So for the successful companies the firm never even had the chance to invest in, the process improvement would focus on why the firm never got a look. 

By regularly asking, one, what high-quality companies it never saw and, two, why it passed on winners, a firm could identify gaps in its capabilities. Together with an institutional culture of closing those gaps (a separate topic), such a firm would have a tremendous advantage. 

The persistence of insight

I’m reading James Gleick’s The Information: A History, A Theory, A Flood, which I highly recommend. Gleick takes a philosophical, historical, and theoretical view on data in all its forms, from messages communicated through drum beats in 18th century Africa to the development of the written word to the telegraph and so on. It’s a fascinating, fun read that makes your mind spin with ideas about what’s coming next.

A recurring theme, as indicated by the title, is the flood of information that exists today. I was pleasantly surprised to find that Gleick evokes that idea by drawing on one of my favorite short stories, “The Library of Babel” by Jorge Luis Borges, which describes a universe in the form of a library.

The Library, Borges describes, is composed of a vast (perhaps infinite) number of hexagonal rooms. Each room is connected to other rooms that are identical, and there is a spiral staircase that winds upward and downward, seemingly forever, connecting to other levels, each with identical rooms. In each room, on four of the six walls are bookshelves filled with books.

The story is told by one of the inhabitants of this universe who doesn’t know precisely what the Library is, yet repeats what he has been told:

The Library is a sphere whose exact center is any hexagon and whose circumference is unattainable.

Each wall of each hexagon has five bookshelves, and each bookshelf holds thirty-two books of the same exact format: 410 pages, 40 lines on each page, and about 80 black letters on each line. But the characters seem to be random. Most books are complete gibberish. One contains the letters M C V repeated from start to finish. Another “is a mere labyrinth of letters whose penultimate page contains the phrase O Time thy pyramids.” The narrator laments: “For every rational line or forthright statement there are leagues of senseless cacophony, verbal nonsense, and incoherency.”

It’s a beautiful and haunting story, like all of Borges’s stories, and a very apt description of the world we live in today. There’s more data than we know what to do with. In Nassim Taleb’s book Antifragile, he shows how many spurious correlations can result from very large data sets, yielding apparent signal, when there’s really just noise [1]. It’s enough to drive you mad—and that’s precisely what happens to many of the inhabitants of the Library.
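Taleb’s point is easy to see in a small simulation: every series below is pure noise, yet the strongest pairwise correlation you can find grows as the number of variables grows. A rough sketch:

    # Spurious correlation from sheer size: all variables are independent noise,
    # yet the best-looking pairwise correlation climbs as more variables are added.
    import numpy as np

    rng = np.random.default_rng(0)
    n_observations = 100

    for n_variables in (10, 100, 1000):
        data = rng.standard_normal((n_observations, n_variables))
        corr = np.corrcoef(data, rowvar=False)   # variables-by-variables correlation matrix
        np.fill_diagonal(corr, 0.0)              # ignore each variable's correlation with itself
        print(n_variables, "variables -> max |correlation|:",
              round(float(np.abs(corr).max()), 2))

With enough noise series, some pair will always look meaningfully correlated, which is exactly the apparent signal Taleb warns about.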

But other parts of Gleick’s book offer hope. My favorite chapter so far was Chapter 2, “The Persistence of the Word,” in which he describes the impact of the written word on humanity. Gleick describes how, in the Paleolithic age at least 30,000 years ago, people scratched and painted pictures to represent horses, fish, and hunters. Over time, the symbols became more abstract, and, eventually, writing developed. But most importantly, there was a shift in writing, from the representation of things to the representation of the spoken word.

There is a progression from pictographic, writing the picture; to ideographic, writing the idea; and then logographic, writing the word.

When an alphabet arose—which, Gleick points out, happened only once: all alphabets are descendants of the same ancestor—it allowed everything to be abstracted. Homer’s poems, which had been passed on for generations in solely spoken form, were finally written down, an event one scholar called a “thunder-clap in human history, which the bias of familiarity has converted into the rustle of papers on the desk.”

This act allowed something that had been experienced as a momentary event to become permanent. The words themselves, once set down on something lasting, became something new, leading to a whole new level of abstraction.

Gleick takes this line of thought to an amazing idea. Once Aristotle and other philosophers could grapple with words, it gave rise to entirely new language and ways of thinking.

The persistence of writing made it possible to impose structure on what was known about the world and, then, on what was known about knowing.

Gleick continues:

In our world of ingrained literacy, thinking and writing seem scarcely related activities. We can imagine the latter depending on the former, but surely not the other way around: everyone thinks, whether or not they write. But [the British scholar Eric] Havelock was right. The written word—the persistent word—was a prerequisite for conscious thought as we understand it. It was the trigger for wholesale, irreversible change in the human psyche—psyche being the word favored by Socrates/Plato as they struggled to understand it.

At this point, I had to stop reading and absorb that idea.

Again:

The written word—the persistent word—was a prerequisite for conscious thought as we understand it. 

A technology fundamentally changed us as a species. It made me wonder, What’s next? The technology of writing fundamentally changed how we thought. Will technologies of today have a similar impact over time?

Tableau Software went public last week. Its remarkable financial performance underscored what many already knew—that it offers an incredible product that allows people and organizations to interact with data more efficiently and effectively. In addition to public companies like Tableau and Splunk, there are Domo, Palantir, Ayasdi, and scores of young startups combining the latest database, visualization, and machine learning technologies to push forward our ability to interact with data. The goal of these companies is to yield better insights that allow organizations to make better decisions.

When I look at some of the tools emerging and how people are using them, I wonder if we’re just at the very start. There will be other types of innovation, second order problems and solutions to those problems: new academic research on how to think about insight from data, new ways to train young people just entering the work force to work with data, new business models that give companies an advantage through data, new ways to store and share data, new concepts on how to build insights on top of other insights, and so on. Each innovation will build on earlier innovations and, possibly, exponentially increase the impact.

I’m really excited about where all of this takes us and wonder which technology, company, or event we’ll look back at and see as another “thunder-clap in human history.”

__

[1] See Antifragile, Chapter 24, ‘Fitting Ethics to a Profession,’ under ‘Big Data and the Researcher’s Option,’ which includes a chart illustrating the point.

AVC on Diligence

I was heartened to read Fred Wilson’s post on diligence ("You Can Do Too Much Due Diligence") after I wrote my last post because it echoes exactly what I have been thinking. 

In it, he talks about his 2004 evaluation of FeedBurner, which provided services around blogs’ RSS feeds. When he did his research (“due diligence” in industry lingo), a dozen or so major publishers with blogs said they wouldn’t use the service. So he and his firm passed.

Six months later, Fred caught up with FeedBurner’s CEO, a nice fella named Dick Costolo (now CEO of Twitter), and asked how things were going. It turned out that everyone Fred had spoken with was a FeedBurner customer. 

"So what did I learn from this lesson?" asks Fred:

…First, trust your gut. I was using Feedburner and knew it was a very useful service. I felt that others would see that too. They did, but it took some time. Second, I learned that a service can get traction with the little guys and in time, the big guys will come along. I have seen that happen quite a bit since then. And finally, I learned that you can do too much due diligence. It’s important to talk to the market and hear what it is saying. But you have to balance that with other things; the quality of the team, the product, the user experience, etc. You cannot rely alone on due diligence, particularly early on in the development of a company and a market.