April 2007
There are two different ways people judge you. Sometimes judging you correctly
is the end goal. But there's a second, much more common type of judgement where
it isn't. We tend to regard all judgements of us as the first type. We'd
probably be happier if we realized which are and which aren't.
The first type of judgement, the type where judging you is the end goal,
includes court cases, grades in classes, and most competitions. Such judgements
can of course be mistaken, but because the goal is to judge you correctly,
there's usually some kind of appeals process. If you feel you've been
misjudged, you can protest that you've been treated unfairly.
Nearly all the judgements made on children are of this type, so we get into
the habit early in life of thinking that all judgements are.
But in fact there is a second, much larger class of judgements where judging
you is only a means to something else. These include college admissions,
hiring and investment decisions, and of course the judgements made in dating.
This kind of judgement is not really about you.
Put yourself in the position of someone selecting players for a national team.
Suppose for the sake of simplicity that this is a game with no positions, and
that you have to select 20 players. There will be a few stars who clearly
should make the team, and many players who clearly shouldn't. The only place
your judgement makes a difference is in the borderline cases. Suppose you
screw up and underestimate the 20th best player, causing him not to make the
team, and his place to be taken by the 21st best. You've still picked a good
team. If the players have the usual distribution of ability, the 21st best
player will be only slightly worse than the 20th best. Probably the difference
between them will be less than the measurement error.
The 20th best player may feel he has been misjudged. But your goal here wasn't
to provide a service estimating people's ability. It was to pick a team, and
if the difference between the 20th and 21st best players is less than the
measurement error, you've still done that optimally.
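A minimal simulation bears this out. The numbers below are invented (200
candidates, ability drawn from a normal distribution, a selector whose
measurements are off by about five points), but the result is robust: the true
gap between the 20th and 21st best is typically a small fraction of the
measurement error.

```python
import random

random.seed(0)

# Invented, purely illustrative numbers: 200 candidates with ability
# ~ N(100, 15), judged by a selector whose measurement error has a
# standard deviation of 5 points.
NOISE_SD = 5.0
gaps = []
for _ in range(1000):
    abilities = sorted((random.gauss(100, 15) for _ in range(200)), reverse=True)
    gaps.append(abilities[19] - abilities[20])  # 20th best minus 21st best

print(f"average gap between 20th and 21st best: {sum(gaps) / len(gaps):.2f}")
print(f"assumed measurement error (sd):         {NOISE_SD:.2f}")
# The gap averages well under half a point, roughly a tenth of the
# noise, so swapping the two players barely changes the team.
```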
It's a false analogy even to use the word unfair to describe this kind of
misjudgement. It's not aimed at producing a correct estimate of any given
individual, but at selecting a reasonably optimal set.
One thing that leads us astray here is that the selector seems to be in a
position of power. That makes him seem like a judge. If you regard someone
judging you as a customer instead of a judge, the expectation of fairness goes
away. The author of a good novel wouldn't complain that readers were _unfair_
for preferring a potboiler with a racy cover. Stupid, perhaps, but not unfair.
Our early training and our self-centeredness combine to make us believe that
every judgement of us is about us. In fact most aren't. This is a rare case
where being less self-centered will make people more confident. Once you
realize how little most people judging you care about judging you
accurately—once you realize that because of the normal distribution of most
applicant pools, it matters least to judge accurately in precisely the cases
where judgement has the most effect—you won't take rejection so personally.
And curiously enough, taking rejection less personally may help you to get
rejected less often. If you think someone judging you will work hard to judge
you correctly, you can afford to be passive. But the more you realize that
most judgements are greatly influenced by random, extraneous factors—that most
people judging you are more like a fickle novel buyer than a wise and
perceptive magistrate—the more you realize you can do things to influence the
outcome.
One good place to apply this principle is in college applications. Most high
school students applying to college do it with the usual child's mix of
inferiority and self-centeredness: inferiority in that they assume that
admissions committees must be all-seeing; self-centeredness in that they
assume admissions committees care enough about them to dig down into their
application and figure out whether they're good or not. These combine to make
applicants passive in applying and hurt when they're rejected. If college
applicants realized how quick and impersonal most selection processes are,
they'd make more effort to sell themselves, and take the outcome less
personally.
October 2006
In the Q & A period after a recent talk, someone asked what made startups
fail. After standing there gaping for a few seconds I realized this was kind
of a trick question. It's equivalent to asking how to make a startup succeed —
if you avoid every cause of failure, you succeed — and that's too big a
question to answer on the fly.
Afterwards I realized it could be helpful to look at the problem from this
direction. If you have a list of all the things you shouldn't do, you can turn
that into a recipe for succeeding just by negating. And this form of list may
be more useful in practice. It's easier to catch yourself doing something you
shouldn't than always to remember to do something you should. [1]
In a sense there's just one mistake that kills startups: not making something
users want. If you make something users want, you'll probably be fine,
whatever else you do or don't do. And if you don't make something users want,
then you're dead, whatever else you do or don't do. So really this is a list
of 18 things that cause startups not to make something users want. Nearly all
failure funnels through that.
**1\. Single Founder**
Have you ever noticed how few successful startups were founded by just one
person? Even companies you think of as having one founder, like Oracle,
usually turn out to have more. It seems unlikely this is a coincidence.
What's wrong with having one founder? To start with, it's a vote of no
confidence. It probably means the founder couldn't talk any of his friends
into starting the company with him. That's pretty alarming, because his
friends are the ones who know him best.
But even if the founder's friends were all wrong and the company is a good
bet, he's still at a disadvantage. Starting a startup is too hard for one
person. Even if you could do all the work yourself, you need colleagues to
brainstorm with, to talk you out of stupid decisions, and to cheer you up when
things go wrong.
The last one might be the most important. The low points in a startup are so
low that few could bear them alone. When you have multiple founders, esprit de
corps binds them together in a way that seems to violate conservation laws.
Each thinks "I can't let my friends down." This is one of the most powerful
forces in human nature, and it's missing when there's just one founder.
**2\. Bad Location**
Startups prosper in some places and not others. Silicon Valley dominates, then
Boston, then Seattle, Austin, Denver, and New York. After that there's not
much. Even in New York the number of startups per capita is probably a 20th of
what it is in Silicon Valley. In towns like Houston and Chicago and Detroit
it's too small to measure.
Why is the falloff so sharp? Probably for the same reason it is in other
industries. What's the sixth largest fashion center in the US? The sixth
largest center for oil, or finance, or publishing? Whatever they are they're
probably so far from the top that it would be misleading even to call them
centers.
It's an interesting question why cities [become](siliconvalley.html) startup
hubs, but the reason startups prosper in them is probably the same as it is
for any industry: that's where the experts are. Standards are higher; people
are more sympathetic to what you're doing; the kind of people you want to hire
want to live there; supporting industries are there; the people you run into
in chance meetings are in the same business. Who knows exactly how these
factors combine to boost startups in Silicon Valley and squish them in
Detroit, but it's clear they do from the number of startups per capita in
each.
**3\. Marginal Niche**
Most of the groups that apply to Y Combinator suffer from a common problem:
choosing a small, obscure niche in the hope of avoiding competition.
If you watch little kids playing sports, you notice that below a certain age
they're afraid of the ball. When the ball comes near them their instinct is to
avoid it. I didn't make a lot of catches as an eight-year-old outfielder,
because whenever a fly ball came my way, I used to close my eyes and hold my
glove up more for protection than in the hope of catching it.
Choosing a marginal project is the startup equivalent of my eight-year-old
strategy for dealing with fly balls. If you make anything good, you're going
to have competitors, so you may as well face that. You can only avoid
competition by avoiding good ideas.
I think this shrinking from big problems is mostly unconscious. It's not that
people think of grand ideas but decide to pursue smaller ones because they
seem safer. Your unconscious won't even let you think of grand ideas. So the
solution may be to think about ideas without involving yourself. What would be
a great idea for _someone else_ to do as a startup?
**4\. Derivative Idea**
Many of the applications we get are imitations of some existing company.
That's one source of ideas, but not the best. If you look at the origins of
successful startups, few were started in imitation of some other startup.
Where did they get their ideas? Usually from some specific, unsolved problem
the founders identified.
Our startup made software for making online stores. When we started it, there
wasn't any; the few sites you could order from were hand-made at great expense
by web consultants. We knew that if online shopping ever took off, these sites
would have to be generated by software, so we wrote some. Pretty
straightforward.
It seems like the best problems to solve are ones that affect you personally.
Apple happened because Steve Wozniak wanted a computer, Google because Larry
and Sergey couldn't find stuff online, Hotmail because Sabeer Bhatia and Jack
Smith couldn't exchange email at work.
So instead of copying the Facebook, with some variation that the Facebook
rightly ignored, look for ideas from the other direction. Instead of starting
from companies and working back to the problems they solved, look for problems
and imagine the company that might solve them. [2] What do people complain
about? What do you wish there was?
**5\. Obstinacy**
In some fields the way to succeed is to have a vision of what you want to
achieve, and to hold true to it no matter what setbacks you encounter.
Starting startups is not one of them. The stick-to-your-vision approach works
for something like winning an Olympic gold medal, where the problem is well-
defined. Startups are more like science, where you need to follow the trail
wherever it leads.
So don't get too attached to your original plan, because it's probably wrong.
Most successful startups end up doing something different than they originally
intended — often so different that it doesn't even seem like the same company.
You have to be prepared to see the better idea when it arrives. And the
hardest part of that is often discarding your old idea.
But openness to new ideas has to be tuned just right. Switching to a new idea
every week will be equally fatal. Is there some kind of external test you can
use? One is to ask whether the ideas represent some kind of progression. If in
each new idea you're able to re-use most of what you built for the previous
ones, then you're probably in a process that converges. Whereas if you keep
restarting from scratch, that's a bad sign.
Fortunately there's someone you can ask for advice: your users. If you're
thinking about turning in some new direction and your users seem excited about
it, it's probably a good bet.
**6\. Hiring Bad Programmers**
I forgot to include this in the early versions of the list, because nearly all
the founders I know are programmers. This is not a serious problem for them.
They might accidentally hire someone bad, but it's not going to kill the
company. In a pinch they can do whatever's required themselves.
But when I think about what killed most of the startups in the e-commerce
business back in the 90s, it was bad programmers. A lot of those companies
were started by business guys who thought the way startups worked was that you
had some clever idea and then hired programmers to implement it. That's
actually much harder than it sounds — almost impossibly hard in fact — because
business guys can't tell which are the good programmers. They don't even get a
shot at the best ones, because no one really good wants a job implementing the
vision of a business guy.
In practice what happens is that the business guys choose people they think
are good programmers (it says here on his resume that he's a Microsoft
Certified Developer) but who aren't. Then they're mystified to find that their
startup lumbers along like a World War II bomber while their competitors
scream past like jet fighters. This kind of startup is in the same position as
a big company, but without the advantages.
So how do you pick good programmers if you're not a programmer? I don't think
there's an answer. I was about to say you'd have to find a good programmer to
help you hire people. But if you can't recognize good programmers, how would
you even do that?
**7\. Choosing the Wrong Platform**
A related problem (since it tends to be done by bad programmers) is choosing
the wrong platform. For example, I think a lot of startups during the Bubble
killed themselves by deciding to build server-based applications on Windows.
Hotmail was still running on FreeBSD for years after Microsoft bought it,
presumably because Windows couldn't handle the load. If Hotmail's founders had
chosen to use Windows, they would have been swamped.
PayPal only just dodged this bullet. After they merged with X.com, the new CEO
wanted to switch to Windows — even after PayPal cofounder Max Levchin showed
that their software scaled only 1% as well on Windows as Unix. Fortunately for
PayPal they switched CEOs instead.
Platform is a vague word. It could mean an operating system, or a programming
language, or a "framework" built on top of a programming language. It implies
something that both supports and limits, like the foundation of a house.
The scary thing about platforms is that there are always some that seem to
outsiders to be fine, responsible choices and yet, like Windows in the 90s,
will destroy you if you choose them. Java applets were probably the most
spectacular example. This was supposed to be the new way of delivering
applications. Presumably it killed just about 100% of the startups who
believed that.
How do you pick the right platforms? The usual way is to hire good programmers
and let them choose. But there is a trick you could use if you're not a
programmer: visit a top computer science department and see what they use in
research projects.
**8\. Slowness in Launching**
Companies of all sizes have a hard time getting software done. It's intrinsic
to the medium; software is always 85% done. It takes an effort of will to push
through this and get something released to users. [3]
Startups make all kinds of excuses for delaying their launch. Most are
equivalent to the ones people use for procrastinating in everyday life.
There's something that needs to happen first. Maybe. But if the software were
100% finished and ready to launch at the push of a button, would they still be
waiting?
One reason to launch quickly is that it forces you to actually _finish_ some
quantum of work. Nothing is truly finished till it's released; you can see
that from the rush of work that's always involved in releasing anything, no
matter how finished you thought it was. The other reason you need to launch is
that it's only by bouncing your idea off users that you fully understand it.
Several distinct problems manifest themselves as delays in launching: working
too slowly; not truly understanding the problem; fear of having to deal with
users; fear of being judged; working on too many different things; excessive
perfectionism. Fortunately you can combat all of them by the simple expedient
of forcing yourself to launch _something_ fairly quickly.
**9\. Launching Too Early**
Launching too slowly has probably killed a hundred times more startups than
launching too fast, but it is possible to launch too fast. The danger here is
that you ruin your reputation. You launch something, the early adopters try it
out, and if it's no good they may never come back.
So what's the minimum you need to launch? We suggest startups think about what
they plan to do, identify a core that's both (a) useful on its own and (b)
something that can be incrementally expanded into the whole project, and then
get that done as soon as possible.
This is the same approach I (and many other programmers) use for writing
software. Think about the overall goal, then start by writing the smallest
subset of it that does anything useful. If it's a subset, you'll have to write
it anyway, so in the worst case you won't be wasting your time. But more
likely you'll find that implementing a working subset is both good for morale
and helps you see more clearly what the rest should do.
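As a toy illustration (my example, not one from a real product): if the
eventual goal were a full to-do application with accounts, sync, and
reminders, the smallest subset that does anything useful might be an in-memory
list you can add to and print. You'd have to write it anyway, and it works
today.

```python
# Toy "smallest useful subset" of a hypothetical to-do application,
# before accounts, sync, or reminders exist.
todos: list[str] = []

def add(task: str) -> None:
    """Record a task."""
    todos.append(task)

def show() -> None:
    """Print every task, numbered."""
    for i, task in enumerate(todos, start=1):
        print(f"{i}. {task}")

add("write the smallest useful subset")
add("launch it")
show()
```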
The early adopters you need to impress are fairly tolerant. They don't expect
a newly launched product to do everything; it just has to do _something_.
**10\. Having No Specific User in Mind**
You can't build things users like without understanding them. I mentioned
earlier that the most successful startups seem to have begun by trying to
solve a problem their founders had. Perhaps there's a rule here: perhaps you
create wealth in proportion to how well you understand the problem you're
solving, and the problems you understand best are your own. [4]
That's just a theory. What's not a theory is the converse: if you're trying to
solve problems you don't understand, you're hosed.
And yet a surprising number of founders seem willing to assume that someone,
they're not sure exactly who, will want what they're building. Do the founders
want it? No, they're not the target market. Who is? Teenagers. People
interested in local events (that one is a perennial tarpit). Or "business"
users. What business users? Gas stations? Movie studios? Defense contractors?
You can of course build something for users other than yourself. We did. But
you should realize you're stepping into dangerous territory. You're flying on
instruments, in effect, so you should (a) consciously shift gears, instead of
assuming you can rely on your intuitions as you ordinarily would, and (b) look
at the instruments.
In this case the instruments are the users. When designing for other people
you have to be empirical. You can no longer guess what will work; you have to
find users and measure their responses. So if you're going to make something
for teenagers or "business" users or some other group that doesn't include
you, you have to be able to talk some specific ones into using what you're
making. If you can't, you're on the wrong track.
**11\. Raising Too Little Money**
Most successful startups take funding at some point. Like having more than one
founder, it seems a good bet statistically. How much should you take, though?
Startup funding is measured in time. Every startup that isn't profitable
(meaning nearly all of them, initially) has a certain amount of time left
before the money runs out and they have to stop. This is sometimes referred to
as runway, as in "How much runway do you have left?" It's a good metaphor
because it reminds you that when the money runs out you're going to be
airborne or dead.
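The metaphor reduces to one line of arithmetic. Here is a sketch with made-up
numbers:

```python
# Runway arithmetic, with invented figures.
cash = 120_000          # dollars in the bank (assumed)
monthly_burn = 15_000   # net spend per month (assumed)

runway_months = cash / monthly_burn
print(f"runway: {runway_months:.1f} months")  # 8.0 months to get airborne
```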
Too little money means not enough to get airborne. What airborne means depends
on the situation. Usually you have to advance to a visibly higher level: if
all you have is an idea, a working prototype; if you have a prototype,
launching; if you're launched, significant growth. It depends on investors,
because until you're profitable that's who you have to convince.
So if you take money from investors, you have to take enough to get to the
next step, whatever that is. [5] Fortunately you have some control over both
how much you spend and what the next step is. We advise startups to set both
low, initially: spend practically nothing, and make your initial goal simply
to build a solid prototype. This gives you maximum flexibility.
**12\. Spending Too Much**
It's hard to distinguish spending too much from raising too little. If you run
out of money, you could say either was the cause. The only way to decide which
to call it is by comparison with other startups. If you raised five million
and ran out of money, you probably spent too much.
Burning through too much money is not as common as it used to be. Founders
seem to have learned that lesson. Plus it keeps getting cheaper to start a
startup. So as of this writing few startups spend too much. None of the ones
we've funded have. (And not just because we make small investments; many have
gone on to raise further rounds.)
The classic way to burn through cash is by hiring a lot of people. This bites
you twice: in addition to increasing your costs, it slows you down—so money
that's getting consumed faster has to last longer. Most hackers understand why
that happens; Fred Brooks explained it in The Mythical Man-Month.
We have three general suggestions about hiring: (a) don't do it if you can
avoid it, (b) pay people with equity rather than salary, not just to save
money, but because you want the kind of people who are committed enough to
prefer that, and (c) only hire people who are either going to write code or go
out and get users, because those are the only things you need at first.
**13\. Raising Too Much Money**
It's obvious how too little money could kill you, but is there such a thing as
having too much?
Yes and no. The problem is not so much the money itself as what comes with it.
As one VC who spoke at Y Combinator said, "Once you take several million
dollars of my money, the clock is ticking." If VCs fund you, they're not going
to let you just put the money in the bank and keep operating as two guys
living on ramen. They want that money to go to work. [6] At the very least
you'll move into proper office space and hire more people. That will change
the atmosphere, and not entirely for the better. Now most of your people will
be employees rather than founders. They won't be as committed; they'll need to
be told what to do; they'll start to engage in office politics.
When you raise a lot of money, your company moves to the suburbs and has kids.
Perhaps more dangerously, once you take a lot of money it gets harder to
change direction. Suppose your initial plan was to sell something to
companies. After taking VC money you hire a sales force to do that. What
happens now if you realize you should be making this for consumers instead of
businesses? That's a completely different kind of selling. What happens, in
practice, is that you don't realize that. The more people you have, the more
you stay pointed in the same direction.
Another drawback of large investments is the time they take. The time required
to raise money grows with the amount. [7] When the amount rises into the
millions, investors get very cautious. VCs never quite say yes or no; they
just engage you in an apparently endless conversation. Raising VC-scale
investments is thus a huge time sink — more work, probably, than the startup
itself. And you don't want to be spending all your time talking to investors
while your competitors are spending theirs building things.
We advise founders who go on to seek VC money to take the first reasonable
deal they get. If you get an offer from a reputable firm at a reasonable
valuation with no unusually onerous terms, just take it and get on with
building the company. [8] Who cares if you could get a 30% better deal
elsewhere? Economically, startups are an all-or-nothing game. Bargain-hunting
among investors is a waste of time.
**14\. Poor Investor Management**
As a founder, you have to manage your investors. You shouldn't ignore them,
because they may have useful insights. But neither should you let them run the
company. That's supposed to be your job. If investors had sufficient vision to
run the companies they fund, why didn't they start them?
Pissing off investors by ignoring them is probably less dangerous than caving
in to them. In our startup, we erred on the ignoring side. A lot of our energy
got drained away in disputes with investors instead of going into the product.
But this was less costly than giving in, which would probably have destroyed
the company. If the founders know what they're doing, it's better to have half
their attention focused on the product than the full attention of investors
who don't.
How hard you have to work on managing investors usually depends on how much
money you've taken. When you raise VC-scale money, the investors get a great
deal of control. If they have a board majority, they're literally your bosses.
In the more common case, where founders and investors are equally represented
and the deciding vote is cast by neutral outside directors, all the investors
have to do is convince the outside directors and they control the company.
If things go well, this shouldn't matter. So long as you seem to be advancing
rapidly, most investors will leave you alone. But things don't always go
smoothly in startups. Investors have made trouble even for the most successful
companies. One of the most famous examples is Apple, whose board made a nearly
fatal blunder in firing Steve Jobs. Apparently even Google got a lot of grief
from their investors early on.
**15\. Sacrificing Users to (Supposed) Profit**
When I said at the beginning that if you make something users want, you'll be
fine, you may have noticed I didn't mention anything about having the right
business model. That's not because making money is unimportant. I'm not
suggesting that founders start companies with no chance of making money in the
hope of unloading them before they tank. The reason we tell founders not to
worry about the business model initially is that making something people want
is so much harder.
I don't know why it's so hard to make something people want. It seems like it
should be straightforward. But you can tell it must be hard by how few
startups do it.
Because making something people want is so much harder than making money from
it, you should leave business models for later, just as you'd leave some
trivial but messy feature for version 2. In version 1, solve the core problem.
And the core problem in a startup is how to [create wealth](wealth.html) (=
how much people want something x the number who want it), not how to convert
that wealth into money.
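Taken literally, that parenthetical formula is just a product. A sketch with
invented numbers:

```python
# The wealth formula above: how much each person wants the thing
# (here, dollars per month they would pay) times how many want it.
# Both figures are invented.
desire_per_user = 20        # assumed value per user per month
users_who_want_it = 5_000   # assumed audience size

wealth_created = desire_per_user * users_who_want_it
print(f"wealth created: ${wealth_created:,} per month")  # $100,000 per month
```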
The companies that win are the ones that put users first. Google, for example.
They made search work, then worried about how to make money from it. And yet
some startup founders still think it's irresponsible not to focus on the
business model from the beginning. They're often encouraged in this by
investors whose experience comes from less malleable industries.
It _is_ irresponsible not to think about business models. It's just ten times
more irresponsible not to think about the product.
**16\. Not Wanting to Get Your Hands Dirty**
Nearly all programmers would rather spend their time writing code and have
someone else handle the messy business of extracting money from it. And not
just the lazy ones. Larry and Sergey apparently felt this way too at first.
After developing their new search algorithm, the first thing they tried was to
get some other company to buy it.
Start a company? Yech. Most hackers would rather just have ideas. But as Larry
and Sergey found, there's not much of a market for ideas. No one trusts an
idea till you embody it in a product and use that to grow a user base. Then
they'll pay big time.
Maybe this will change, but I doubt it will change much. There's nothing like
users for convincing acquirers. It's not just that the risk is decreased. The
acquirers are human, and they have a hard time paying a bunch of young guys
millions of dollars just for being clever. When the idea is embodied in a
company with a lot of users, they can tell themselves they're buying the users
rather than the cleverness, and this is easier for them to swallow. [9]
If you're going to attract users, you'll probably have to get up from your
computer and go find some. It's unpleasant work, but if you can make yourself
do it you have a much greater chance of succeeding. In the first batch of
startups we funded, in the summer of 2005, most of the founders spent all
their time building their applications. But there was one who was away half
the time talking to executives at cell phone companies, trying to arrange
deals. Can you imagine anything more painful for a hacker? [10] But it paid
off, because this startup seems the most successful of that group by an order
of magnitude.
If you want to start a startup, you have to face the fact that you can't just
hack. At least one hacker will have to spend some of the time doing business
stuff.
**17\. Fights Between Founders**
Fights between founders are surprisingly common. About 20% of the startups
we've funded have had a founder leave. It happens so often that we've reversed
our attitude to vesting. We still don't require it, but now we advise founders
to vest so there will be an orderly way for people to quit.
A founder leaving doesn't necessarily kill a startup, though. Plenty of
successful startups have had that happen. [11] Fortunately it's usually the
least committed founder who leaves. If there are three founders and one who
was lukewarm leaves, big deal. If you have two and one leaves, or a guy with
critical technical skills leaves, that's more of a problem. But even that is
survivable. Blogger got down to one person, and they bounced back.
Most of the disputes I've seen between founders could have been avoided if
they'd been more careful about who they started a company with. Most disputes
are not due to the situation but the people. Which means they're inevitable.
And most founders who've been burned by such disputes probably had misgivings,
which they suppressed, when they started the company. Don't suppress
misgivings. It's much easier to fix problems before the company is started
than after. So don't include your housemate in your startup because he'd feel
left out otherwise. Don't start a company with someone you dislike because
they have some skill you need and you worry you won't find anyone else. The
people are the most important ingredient in a startup, so don't compromise
there.
**18\. A Half-Hearted Effort**
The failed startups you hear most about are the spectacular flameouts. Those
are actually the elite of failures. The most common type is not the one that
makes spectacular mistakes, but the one that doesn't do much of anything — the
one we never even hear about, because it was some project a couple guys
started on the side while working on their day jobs, but which never got
anywhere and was gradually abandoned.
Statistically, if you want to avoid failure, it would seem like the most
important thing is to quit your day job. Most founders of failed startups
don't quit their day jobs, and most founders of successful ones do. If startup
failure were a disease, the CDC would be issuing bulletins warning people to
avoid day jobs.
Does that mean you should quit your day job? Not necessarily. I'm guessing
here, but I'd guess that many of these would-be founders may not have the kind
of determination it takes to start a company, and that in the back of their
minds, they know it. The reason they don't invest more time in their startup
is that they know it's a bad investment. [12]
I'd also guess there's some band of people who could have succeeded if they'd
taken the leap and done it full-time, but didn't. I have no idea how wide this
band is, but if the winner/borderline/hopeless progression has the sort of
distribution you'd expect, the number of people who could have made it, if
they'd quit their day job, is probably an order of magnitude larger than the
number who do make it. [13]
If that's true, most startups that could succeed fail because the founders
don't devote their whole efforts to them. That certainly accords with what I
see out in the world. Most startups fail because they don't make something
people want, and the reason most don't is that they don't try hard enough.
In other words, starting startups is just like everything else. The biggest
mistake you can make is not to try hard enough. To the extent there's a secret
to success, it's not to be in denial about that.
**Notes**
[1] This is not a complete list of the causes of failure, just those you can
control. There are also several you can't, notably ineptitude and bad luck.
[2] Ironically, one variant of the Facebook that might work is a facebook
exclusively for college students.
[3] Steve Jobs tried to motivate people by saying "Real artists ship." This is
a fine sentence, but unfortunately not true. Many famous works of art are
unfinished. It's true in fields that have hard deadlines, like architecture
and filmmaking, but even there people tend to be tweaking stuff till it's
yanked out of their hands.
[4] There's probably also a second factor: startup founders tend to be at the
leading edge of technology, so problems they face are probably especially
valuable.
[5] You should take more than you think you'll need, maybe 50% to 100% more,
because software takes longer to write and deals longer to close than you
expect.
[6] Since people sometimes call us VCs, I should add that we're not. VCs
invest large amounts of other people's money. We invest small amounts of our
own, like angel investors.
[7] Not linearly of course, or it would take forever to raise five million
dollars. In practice it just feels like it takes forever.
Though if you include the cases where VCs don't invest, it would literally
take forever in the median case. And maybe we should, because the danger of
chasing large investments is not just that they take a long time. That's the
_best_ case. The real danger is that you'll expend a lot of time and get
nothing.
[8] Some VCs will offer you an artificially low valuation to see if you have
the balls to ask for more. It's lame that VCs play such games, but some do. If
you're dealing with one of those you should push back on the valuation a bit.
[9] Suppose YouTube's founders had gone to Google in 2005 and told them
"Google Video is badly designed. Give us $10 million and we'll tell you all
the mistakes you made." They would have gotten the royal raspberry. Eighteen
months later Google paid $1.6 billion for the same lesson, partly because they
could then tell themselves that they were buying a phenomenon, or a community,
or some vague thing like that.
I don't mean to be hard on Google. They did better than their competitors, who
may have now missed the video boat entirely.
[10] Yes, actually: dealing with the government. But phone companies are up
there.
[11] Many more than most people realize, because companies don't advertise
this. Did you know Apple originally had three founders?
[12] I'm not dissing these people. I don't have the determination myself. I've
twice come close to starting startups since Viaweb, and both times I bailed
because I realized that without the spur of poverty I just wasn't willing to
endure the stress of a startup.
[13] So how do you know whether you're in the category of people who should
quit their day job, or the presumably larger one who shouldn't? I got to the
point of saying that this was hard to judge for yourself and that you should
seek outside advice, before realizing that that's what we do. We think of
ourselves as investors, but viewed from the other direction Y Combinator is a
service for advising people whether or not to quit their day job. We could be
mistaken, and no doubt often are, but we do at least bet money on our
conclusions.
**Thanks** to Sam Altman, Jessica Livingston, Greg McAdoo, and Robert Morris
for reading drafts of this.
October 2009
_(This essay is derived from a talk at the 2009 Startup School.)_
I wasn't sure what to talk about at Startup School, so I decided to ask the
founders of the startups we'd funded. What hadn't I written about yet?
I'm in the unusual position of being able to test the essays I write about
startups. I hope the ones on other topics are right, but I have no way to test
them. The ones on startups get tested by about 70 people every 6 months.
So I sent all the founders an email asking what surprised them about starting
a startup. This amounts to asking what I got wrong, because if I'd explained
things well enough, nothing should have surprised them.
I'm proud to report I got one response saying:
> What surprised me the most is that everything was actually fairly
> predictable!
The bad news is that I got over 100 other responses listing the surprises they
encountered.
There were very clear patterns in the responses; it was remarkable how often
several people had been surprised by exactly the same thing. These were the
biggest:
**1\. Be Careful with Cofounders**
This was the surprise mentioned by the most founders. There were two types of
responses: that you have to be careful who you pick as a cofounder, and that
you have to work hard to maintain your relationship.
What people wished they'd paid more attention to when choosing cofounders was
character and commitment, not ability. This was particularly true with
startups that failed. The lesson: don't pick cofounders who will flake.
Here's a typical response:
> You haven't seen someone's true colors unless you've worked with them on a
> startup.
The reason character is so important is that it's tested more severely than in
most other situations. One founder said explicitly that the relationship
between founders was more important than ability:
> I would rather cofound a startup with a friend than a stranger with higher
> output. Startups are so hard and emotional that the bonds and emotional and
> social support that come with friendship outweigh the extra output lost.
We learned this lesson a long time ago. If you look at the YC application,
there are more questions about the commitment and relationship of the founders
than their ability.
Founders of successful startups talked less about choosing cofounders and more
about how hard they worked to maintain their relationship.
> One thing that surprised me is how the relationship of startup founders goes
> from a friendship to a marriage. My relationship with my cofounder went from
> just being friends to seeing each other all the time, fretting over the
> finances and cleaning up shit. And the startup was our baby. I summed it up
> once like this: "It's like we're married, but we're not fucking."
Several people used that word "married." It's a far more intense relationship
than you usually see between coworkers—partly because the stresses are so much
greater, and partly because at first the founders are the whole company. So
this relationship has to be built of top quality materials and carefully
maintained. It's the basis of everything.
**2\. Startups Take Over Your Life**
Just as the relationship between cofounders is more intense than it usually is
between coworkers, so is the relationship between the founders and the
company. Running a startup is not like having a job or being a student,
because it never stops. This is so foreign to most people's experience that
they don't get it till it happens. [1]
> I didn't realize I would spend almost every waking moment either working or
> thinking about our startup. You enter a whole different way of life when
> it's your company vs. working for someone else's company.
It's exacerbated by the fast pace of startups, which makes it seem like time
slows down:
> I think the thing that's been most surprising to me is how one's perspective
> on time shifts. Working on our startup, I remember time seeming to stretch
> out, so that a month was a huge interval.
In the best case, total immersion can be exciting:
> It's surprising how much you become consumed by your startup, in that you
> think about it day and night, but never once does it feel like "work."
Though I have to say, that quote is from someone we funded this summer. In a
couple years he may not sound so chipper.
**3\. It's an Emotional Roller-coaster**
This was another one lots of people were surprised about. The ups and downs
were more extreme than they were prepared for.
In a startup, things seem great one moment and hopeless the next. And by next,
I mean a couple hours later.
> The emotional ups and downs were the biggest surprise for me. One day, we'd
> think of ourselves as the next Google and dream of buying islands; the next,
> we'd be pondering how to let our loved ones know of our utter failure; and
> on and on.
The hard part, obviously, is the lows. For a lot of founders that was the big
surprise:
> How hard it is to keep everyone motivated during rough days or weeks, i.e.
> how low the lows can be.
After a while, if you don't have significant success to cheer you up, it wears
you out:
> Your most basic advice to founders is "just don't die," but the energy to
> keep a company going in lieu of unburdening success isn't free; it is
> siphoned from the founders themselves.
There's a limit to how much you can take. If you get to the point where you
can't keep working anymore, it's not the end of the world. Plenty of famous
founders have had some failures along the way.
**4\. It Can Be Fun**
The good news is, the highs are also very high. Several founders said what
surprised them most about doing a startup was how fun it was:
> I think you've left out just how fun it is to do a startup. I am more
> fulfilled in my work than pretty much any of my friends who did not start
> companies.
What they like most is the freedom:
> I'm surprised by how much better it feels to be working on something that is
> challenging and creative, something I believe in, as opposed to the hired-
> gun stuff I was doing before. I knew it would feel better; what's surprising
> is how much better.
Frankly, though, if I've misled people here, I'm not eager to fix that. I'd
rather have everyone think starting a startup is grim and hard than have
founders go into it expecting it to be fun, and a few months later saying
"This is supposed to be _fun_? Are you kidding?"
The truth is, it wouldn't be fun for most people. A lot of what we try to do
in the application process is to weed out the people who wouldn't like it,
both for our sake and theirs.
The best way to put it might be that starting a startup is fun the way a
survivalist training course would be fun, if you're into that sort of thing.
Which is to say, not at all, if you're not.
**5\. Persistence Is the Key**
A lot of founders were surprised how important persistence was in startups. It
was both a negative and a positive surprise: they were surprised both by the
degree of persistence required
> Everyone said how determined and resilient you must be, but going through it
> made me realize that the determination required was still understated.
and also by the degree to which persistence alone was able to dissolve
obstacles:
> If you are persistent, even problems that seem out of your control (i.e.
> immigration) seem to work themselves out.
Several founders mentioned specifically how much more important persistence
was than intelligence.
> I've been surprised again and again by just how much more important
> persistence is than raw intelligence.
This applies not just to intelligence but to ability in general, and that's
why so many people said character was more important in choosing cofounders.
**6\. Think Long-Term**
You need persistence because everything takes longer than you expect. A lot of
people were surprised by that.
> I'm continually surprised by how long everything can take. Assuming your
> product doesn't experience the explosive growth that very few products do,
> everything from development to dealmaking (especially dealmaking) seems to
> take 2-3x longer than I always imagine.
One reason founders are surprised is that because they work fast, they expect
everyone else to. There's a shocking amount of shear stress at every point
where a startup touches a more bureaucratic organization, like a big company
or a VC fund. That's why fundraising and the enterprise market kill and maim
so many startups. [2]
But I think the reason most founders are surprised by how long it takes is
that they're overconfident. They think they're going to be an instant success,
like YouTube or Facebook. You tell them only 1 out of 100 successful startups
has a trajectory like that, and they all think "we're going to be that 1."
Maybe they'll listen to one of the more successful founders:
> The top thing I didn't understand before going into it is that persistence
> is the name of the game. For the vast majority of startups that become
> successful, it's going to be a _really_ long journey, at least 3 years and
> probably 5+.
There is a positive side to thinking longer-term. It's not just that you have
to resign yourself to everything taking longer than it should. If you work
patiently it's less stressful, and you can do better work:
> Because we're relaxed, it's so much easier to have fun doing what we do.
> Gone is the awkward nervous energy fueled by the desperate need to not fail
> guiding our actions. We can concentrate on doing what's best for our
> company, product, employees and customers.
That's why things get so much better when you hit ramen profitability. You can
shift into a different mode of working.
**7\. Lots of Little Things**
We often emphasize how rarely startups win simply because they hit on some
magic idea. I think founders have now gotten that into their heads. But a lot
were surprised to find this also applies within startups. You have to do lots
of different things:
> It's much more of a grind than glamorous. A timeslice selected at random
> would more likely find me tracking down a weird DLL loading bug on Swedish
> Windows, or tracking down a bug in the financial model Excel spreadsheet the
> night before a board meeting, rather than having brilliant flashes of
> strategic insight.
Most hacker-founders would like to spend all their time programming. You won't
get to, unless you fail. Which can be transformed into: If you spend all your
time programming, you will fail.
The principle extends even into programming. There is rarely a single
brilliant hack that ensures success:
> I learnt never to bet on any one feature or deal or anything to bring you
> success. It is never a single thing. Everything is just incremental and you
> just have to keep doing lots of those things until you strike something.
Even in the rare cases where a clever hack makes your fortune, you probably
won't know till later:
> There is no such thing as a killer feature. Or at least you won't know what
> it is.
So the best strategy is to try lots of different things. The reason not to put
all your eggs in one basket is not the usual one, which applies even when you
know which basket is best. In a startup you don't even know that.
**8\. Start with Something Minimal**
Lots of founders mentioned how important it was to launch with the simplest
possible thing. By this point everyone knows you should release fast and
iterate. It's practically a mantra at YC. But even so a lot of people seem to
have been burned by not doing it:
> Build the absolute smallest thing that can be considered a complete
> application and ship it.
Why do people take too long on the first version? Pride, mostly. They hate to
release something that could be better. They worry what people will say about
them. But you have to overcome this:
> Doing something "simple" at first glance does not mean you aren't doing
> something meaningful, defensible, or valuable.
Don't worry what people will say. If your first version is so impressive that
trolls don't make fun of it, you waited too long to launch. [3]
One founder said this should be your approach to all programming, not just
startups, and I tend to agree.
> Now, when coding, I try to think "How can I write this such that if people
> saw my code, they'd be amazed at how little there is and how little it
> does?"
Over-engineering is poison. It's not like doing extra work for extra credit.
It's more like telling a lie that you then have to remember so you don't
contradict it.
**9\. Engage Users**
Product development is a conversation with the user that doesn't really start
till you launch. Before you launch, you're like a police artist before he's
shown the first version of his sketch to the witness.
It's so important to launch fast that it may be better to think of your
initial version not as a product, but as a trick for getting users to start
talking to you.
> I learned to think about the initial stages of a startup as a giant
> experiment. All products should be considered experiments, and those that
> have a market show promising results extremely quickly.
Once you start talking to users, I guarantee you'll be surprised by what they
tell you.
> When you let customers tell you what they're after, they will often reveal
> amazing details about what they find valuable as well as what they're willing
> to pay for.
The surprise is generally positive as well as negative. They won't like what
you've built, but there will be other things they would like that would be
trivially easy to implement. It's not till you start the conversation by
launching the wrong thing that they can express (or perhaps even realize) what
they're looking for.
**10\. Change Your Idea**
To benefit from engaging with users you have to be willing to change your
idea. We've always encouraged founders to see a startup idea as a hypothesis
rather than a blueprint. And yet they're still surprised how well it works to
change the idea.
> Normally if you complain about something being hard, the general advice is
> to work harder. With a startup, I think you should find a problem that's
> easy for you to solve. Optimizing in solution-space is familiar and
> straightforward, but you can make enormous gains playing around in problem-
> space.
Whereas mere determination, without flexibility, is a greedy algorithm that
may get you nothing more than a mediocre local maximum:
> When someone is determined, there's still a danger that they'll follow a
> long, hard path that ultimately leads nowhere.
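The greedy-algorithm metaphor can be made literal. In this sketch (mine, over
an invented payoff landscape), a climber who only ever moves uphill halts at
the first mediocre peak and never finds the much higher one further out:

```python
import math

def payoff(x: float) -> float:
    # Invented landscape: a local peak near x=1 (height ~1) and a much
    # higher peak near x=4 (height ~3), with a valley between them.
    return math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 4) ** 2)

x, step = 0.0, 0.01
while payoff(x + step) > payoff(x):  # "work harder": only ever move uphill
    x += step

print(f"greedy climber stops at x={x:.2f}, payoff={payoff(x):.2f}")
# Halts around x=1 with payoff ~1.0, never reaching the peak near x=4
# worth ~3. Finding it requires exploring problem-space, which means
# sometimes stepping downhill first.
```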
You want to push forward, but at the same time twist and turn to find the most
promising path. One founder put it very succinctly:
> Fast iteration is the key to success.
One reason this advice is so hard to follow is that people don't realize how
hard it is to judge startup ideas, particularly their own. Experienced
founders learn to keep an open mind:
> Now I don't laugh at ideas anymore, because I realized how terrible I was at
> knowing if they were good or not.
You can never tell what will work. You just have to do whatever seems best at
each point. We do this with YC itself. We still don't know if it will work,
but it seems like a decent hypothesis.
**11\. Don't Worry about Competitors**
When you think you've got a great idea, it's sort of like having a guilty
conscience about something. All someone has to do is look at you funny, and
you think "Oh my God, _they know._ "
These alarms are almost always false:
> Companies that seemed like competitors and threats at first glance usually
> never were when you really looked at it. Even if they were operating in the
> same area, they had a different goal.
One reason people overreact to competitors is that they overvalue ideas. If
ideas really were the key, a competitor with the same idea would be a real
threat. But it's usually execution that matters:
> All the scares induced by seeing a new competitor pop up are forgotten weeks
> later. It always comes down to your own product and approach to the market.
This is generally true even if competitors get lots of attention.
> Competitors riding on lots of good blogger perception aren't really the
> winners and can disappear from the map quickly. You need consumers after
> all.
Hype doesn't make satisfied users, at least not for something as complicated
as technology.
**12\. It's Hard to Get Users**
A lot of founders complained about how hard it was to get users, though.
> I had no idea how much time and effort needed to go into attaining users.
This is a complicated topic. When you can't get users, it's hard to say
whether the problem is lack of exposure, or whether the product's simply bad.
Even good products can be blocked by switching or integration costs:
> Getting people to use a new service is incredibly difficult. This is
> especially true for a service that other companies can use, because it
> requires their developers to do work. If you're small, they don't think it
> is urgent. [4]
The sharpest criticism of YC came from a founder who said we didn't focus
enough on customer acquisition:
> YC preaches "make something people want" as an engineering task, a never
> ending stream of feature after feature until enough people are happy and the
> application takes off. There's very little focus on the cost of customer
> acquisition.
This may be true; this may be something we need to fix, especially for
applications like games. If you make something where the challenges are mostly
technical, you can rely on word of mouth, like Google did. One founder was
surprised by how well that worked for him:
> There is an irrational fear that no one will buy your product. But if you
> work hard and incrementally make it better, there is no need to worry.
But with other types of startups you may win less by features and more by
deals and marketing.
**13\. Expect the Worst with Deals**
Deals fall through. That's a constant of the startup world. Startups are
powerless, and good startup ideas generally seem wrong. So everyone is nervous
about closing deals with you, and you have no way to make them.
This is particularly true with investors:
> In retrospect, it would have been much better if we had operated under the
> assumption that we would never get any additional outside investment. That
> would have focused us on finding revenue streams early.
My advice is generally pessimistic. Assume you won't get money, and if someone
does offer you any, assume you'll never get any more.
> If someone offers you money, take it. You say it a lot, but I think it needs
> even more emphasizing. We had the opportunity to raise a lot more money than
> we did last year and I wish we had.
Why do founders ignore me? Mostly because they're optimistic by nature. The
mistake is to be optimistic about things you can't control. By all means be
optimistic about your ability to make something great. But you're asking for
trouble if you're optimistic about big companies or investors.
**14\. Investors Are Clueless**
A lot of founders mentioned how surprised they were by the cluelessness of
investors:
> They don't even know about the stuff they've invested in. I met some
> investors that had invested in a hardware device and when I asked them to
> demo the device they had difficulty switching it on.
Angels are a bit better than VCs, because they usually have startup experience
themselves:
> VC investors don't know half the time what they are talking about and are
> years behind in their thinking. A few were great, but 95% of the investors
> we dealt with were unprofessional, didn't seem to be very good at business
> or have any kind of creative vision. Angels were generally much better to
> talk to.
Why are founders surprised that VCs are clueless? I think it's because they
seem so formidable.
The reason VCs seem formidable is that it's their profession to. You get to be
a VC by convincing asset managers to trust you with hundreds of millions of
dollars. How do you do that? You have to seem confident, and you have to seem
like you understand technology. [5]
**15\. You May Have to Play Games**
Because investors are so bad at judging you, you have to work harder than you
should at selling yourself. One founder said the thing that surprised him most
was
> The degree to which feigning certitude impressed investors.
This is the thing that has surprised _me_ most about YC founders' experiences.
This summer we invited some of the alumni to talk to the new startups about
fundraising, and pretty much 100% of their advice was about investor
psychology. I thought I was cynical about VCs, but the founders were much more
cynical.
> A lot of what startup founders do is just posturing. It works.
VCs themselves have no idea of the extent to which the startups they like are
the ones that are best at selling themselves to VCs. [6] It's exactly the same
phenomenon we saw a step earlier. VCs get money by seeming confident to LPs,
and founders get money by seeming confident to VCs.
**16\. Luck Is a Big Factor**
With two such random linkages in the path between startups and money, it
shouldn't be surprising that luck is a big factor in deals. And yet a lot of
founders are surprised by it.
> I didn't realize how much of a role luck plays and how much is outside of
> our control.
If you think about famous startups, it's pretty clear how big a role luck
plays. Where would Microsoft be if IBM had insisted on an exclusive license
for DOS?
Why are founders fooled by this? Business guys probably aren't, but hackers
are used to a world where skill is paramount, and you get what you deserve.
> When we started our startup, I had bought the hype of the startup founder
> dream: that this is a game of skill. It is, in some ways. Having skill is
> valuable. So is being determined as all hell. But being lucky is the
> critical ingredient.
Actually the best model would be to say that the outcome is the _product_ of
skill, determination, and luck. No matter how much skill and determination you
have, if you roll a zero for luck, the outcome is zero.
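To see why a product behaves so differently from a sum, here's a minimal
sketch in Python. The 0-to-10 scales and the example numbers are invented
for illustration; only the multiplicative structure comes from the text.

```python
# Toy model: outcome as the product of skill, determination, and luck.
# Scales and values are made up; the point is the multiplicative structure.

def outcome(skill: float, determination: float, luck: float) -> float:
    # A single zero factor zeroes the whole product, however
    # large the other factors are.
    return skill * determination * luck

print(outcome(10, 10, 0))  # 0: maximal skill and grit, but no luck
print(outcome(6, 7, 4))    # 168: moderate everything does better

# Under an additive model the first founder would win (20 vs 17);
# under the multiplicative one, the zero for luck is fatal.
```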
These quotes about luck are not from founders whose startups failed. Founders
who fail quickly tend to blame themselves. Founders who succeed quickly don't
usually realize how lucky they were. It's the ones in the middle who see how
important luck is.
**17\. The Value of Community**
A surprising number of founders said what surprised them most about starting a
startup was the value of community. Some meant the micro-community of YC
founders:
> The immense value of the peer group of YC companies, and facing similar
> obstacles at similar times.
which shouldn't be that surprising, because that's why it's structured that
way. Others were surprised at the value of the startup community in the larger
sense:
> How advantageous it is to live in Silicon Valley, where you can't help but
> hear all the cutting-edge tech and startup news, and run into useful people
> constantly.
The specific thing that surprised them most was the general spirit of
benevolence:
> One of the most surprising things I saw was the willingness of people to
> help us. Even people who had nothing to gain went out of their way to help
> our startup succeed.
and particularly how it extended all the way to the top:
> The surprise for me was how accessible important and interesting people are.
> It's amazing how easily you can reach out to people and get immediate
> feedback.
This is one of the reasons I like being part of this world. Creating wealth is
not a zero-sum game, so you don't have to stab people in the back to win.
**18\. You Get No Respect**
There was one surprise founders mentioned that I'd forgotten about: that
outside the startup world, startup founders get no respect.
> In social settings, I found that I got a lot more respect when I said, "I
> worked on Microsoft Office" instead of "I work at a small startup you've
> never heard of called x."
Partly this is because the rest of the world just doesn't get startups, and
partly it's yet another consequence of the fact that most good startup ideas
seem bad:
> If you pitch your idea to a random person, 95% of the time you'll find the
> person instinctively thinks the idea will be a flop and you're wasting your
> time (although they probably won't say this directly).
Unfortunately this extends even to dating:
> It surprised me that being a startup founder does not get you more
> admiration from women.
I did know about that, but I'd forgotten.
**19\. Things Change as You Grow**
The last big surprise founders mentioned is how much things changed as they
grew. The biggest change was that you got to program even less:
> Your job description as technical founder/CEO is completely rewritten every
> 6-12 months. Less coding, more managing/planning/company building, hiring,
> cleaning up messes, and generally getting things in place for what needs to
> happen a few months from now.
In particular, you now have to deal with employees, who often have different
motivations:
> I knew the founder equation and had been focused on it since I knew I wanted
> to start a startup as a 19 year old. The employee equation is quite
> different so it took me a while to get it down.
Fortunately, it can become a lot less stressful once you reach cruising
altitude:
> I'd say 75% of the stress is gone now from when we first started. Running a
> business is so much more enjoyable now. We're more confident. We're more
> patient. We fight less. We sleep more.
I wish I could say it was this way for every startup that succeeded, but 75%
is probably on the high side.
**The Super-Pattern**
There were a few other patterns, but these were the biggest. One's first
thought when looking at them all is to ask if there's a super-pattern, a
pattern to the patterns.
I saw it immediately, and so did a YC founder I read the list to. These are
supposed to be the surprises, the things I didn't tell people. What do they
all have in common? They're all things I tell people. If I wrote a new essay
with the same outline as this that wasn't summarizing the founders' responses,
everyone would say I'd run out of ideas and was just repeating myself.
What is going on here?
When I look at the responses, the common theme is that starting a startup was
like I said, but way more so. People just don't seem to get how different it
is till they do it. Why? The key to that mystery is to ask, how different
_from what?_ Once you phrase it that way, the answer is obvious: from a job.
Everyone's model of work is a job. It's completely pervasive. Even if you've
never had a job, your parents probably did, along with practically every other
adult you've met.
Unconsciously, everyone expects a startup to be like a job, and that explains
most of the surprises. It explains why people are surprised how carefully you
have to choose cofounders and how hard you have to work to maintain your
relationship. You don't have to do that with coworkers. It explains why the
ups and downs are surprisingly extreme. In a job there is much more damping.
But it also explains why the good times are surprisingly good: most people
can't imagine such freedom. As you go down the list, almost all the surprises
are surprising in how much a startup differs from a job.
You probably can't overcome anything so pervasive as the model of work you
grew up with. So the best solution is to be consciously aware of that. As you
go into a startup, you'll be thinking "everyone says it's really extreme."
Your next thought will probably be "but I can't believe it will be that bad."
If you want to avoid being surprised, the next thought after that should be:
"and the reason I can't believe it will be that bad is that my model of work
is a job."
**Notes**
[1] Graduate students might understand it. In grad school you always feel you
should be working on your thesis. It doesn't end every semester like classes
do.
[2] The best way for a startup to engage with slow-moving organizations is to
fork off separate processes to deal with them. It's when they're on the
critical path that they kill you—when you depend on closing a deal to move
forward. It's worth taking extreme measures to avoid that.
[3] This is a variant of Reid Hoffman's principle that if you aren't
embarrassed by what you launch with, you waited too long to launch.
[4] The question to ask about what you've built is not whether it's good, but
whether it's good enough to supply the activation energy required.
[5] Some VCs seem to understand technology because they actually do, but
that's overkill; the defining test is whether you can talk about it well
enough to convince limited partners.
[6] This is the same phenomenon you see with defense contractors or fashion
brands. The dumber the customers, the more effort you expend on the process of
selling things to them rather than making the things you sell.
**Thanks:** to Jessica Livingston for reading drafts of this, and to all the
founders who responded to my email.
April 2010
The best way to come up with startup ideas is to ask yourself the question:
what do you wish someone would make for you?
There are two types of startup ideas: those that grow organically out of your
own life, and those that you decide, from afar, are going to be necessary to
some class of users other than you. Apple was the first type. Apple happened
because Steve Wozniak wanted a computer. Unlike most people who wanted
computers, he could design one, so he did. And since lots of other people
wanted the same thing, Apple was able to sell enough of them to get the
company rolling. They still rely on this principle today, incidentally. The
iPhone is the phone Steve Jobs wants. [1]
Our own startup, Viaweb, was of the second type. We made software for building
online stores. We didn't need this software ourselves. We weren't direct
marketers. We didn't even know when we started that our users were called
"direct marketers." But we were comparatively old when we started the company
(I was 30 and Robert Morris was 29), so we'd seen enough to know users would
need this type of software. [2]
There is no sharp line between the two types of ideas, but the most successful
startups seem to be closer to the Apple type than the Viaweb type. When he was
writing that first Basic interpreter for the Altair, Bill Gates was writing
something he would use, as were Larry and Sergey when they wrote the first
versions of Google.
Organic ideas are generally preferable to the made up kind, but particularly
so when the founders are young. It takes experience to predict what other
people will want. The worst ideas we see at Y Combinator are from young
founders making things they think other people will want.
So if you want to start a startup and don't know yet what you're going to do,
I'd encourage you to focus initially on organic ideas. What's missing or
broken in your daily life? Sometimes if you just ask that question you'll get
immediate answers. It must have seemed obviously broken to Bill Gates that you
could only program the Altair in machine language.
You may need to stand outside yourself a bit to see brokenness, because you
tend to get used to it and take it for granted. You can be sure it's there,
though. There are always great ideas sitting right under our noses. In 2004 it
was ridiculous that Harvard undergrads were still using a Facebook printed on
paper. Surely that sort of thing should have been online.
There are ideas that obvious lying around now. The reason you're overlooking
them is the same reason you'd have overlooked the idea of building Facebook in
2004: organic startup ideas usually don't seem like startup ideas at first. We
know now that Facebook was very successful, but put yourself back in 2004.
Putting undergraduates' profiles online wouldn't have seemed like much of a
startup idea. And in fact, it wasn't initially a startup idea. When Mark spoke
at a YC dinner this winter he said he wasn't trying to start a company when he
wrote the first version of Facebook. It was just a project. So was the Apple I
when Woz first started working on it. He didn't think he was starting a
company. If these guys had thought they were starting companies, they might
have been tempted to do something more "serious," and that would have been a
mistake.
So if you want to come up with organic startup ideas, I'd encourage you to
focus more on the idea part and less on the startup part. Just fix things that
seem broken, regardless of whether it seems like the problem is important
enough to build a company on. If you keep pursuing such threads it would be
hard not to end up making something of value to a lot of people, and when you
do, surprise, you've got a company. [3]
Don't be discouraged if what you produce initially is something other people
dismiss as a toy. In fact, that's a good sign. That's probably why everyone
else has been overlooking the idea. The first microcomputers were dismissed as
toys. And the first planes, and the first cars. At this point, when someone
comes to us with something that users like but that we could envision forum
trolls dismissing as a toy, it makes us especially likely to invest.
While young founders are at a disadvantage when coming up with made-up ideas,
they're the best source of organic ones, because they're at the forefront of
technology. They use the latest stuff. They only just decided what to use, so
why wouldn't they? And because they use the latest stuff, they're in a
position to discover valuable types of fixable brokenness first.
There's nothing more valuable than an unmet need that is just becoming
fixable. If you find something broken that you can fix for a lot of people,
you've found a gold mine. As with an actual gold mine, you still have to work
hard to get the gold out of it. But at least you know where the seam is, and
that's the hard part.
**Notes**
[1] This suggests a way to predict areas where Apple will be weak: things
Steve Jobs doesn't use. E.g. I doubt he is much into gaming.
[2] In retrospect, we should have _become_ direct marketers. If I were doing
Viaweb again, I'd open our own online store. If we had, we'd have understood
users a lot better. I'd encourage anyone starting a startup to become one of
its users, however unnatural it seems.
[3] Possible exception: It's hard to compete directly with open source
software. You can build things for programmers, but there has to be some part
you can charge for.
**Thanks** to Sam Altman, Trevor Blackwell, and Jessica Livingston for reading
drafts of this.
February 2009
A lot of cities look at Silicon Valley and ask "How could we make something
like that happen here?" The [organic](siliconvalley.html) way to do it is to
establish a first-rate university in a place where rich people want to live.
That's how Silicon Valley happened. But could you shortcut the process by
funding startups?
Possibly. Let's consider what it would take.
The first thing to understand is that encouraging startups is a different
problem from encouraging startups in a particular city. The latter is much
more expensive.
People sometimes think they could improve the startup scene in their town by
starting something like [Y Combinator](http://ycombinator.com) there, but in
fact it will have near zero effect. I know because Y Combinator itself had
near zero effect on Boston when we were based there half the year. The people
we funded came from all over the country (indeed, the world) and afterward
they went wherever they could get more funding—which generally meant Silicon
Valley.
The seed funding business is not a regional business, because at that stage
startups are mobile. They're just a couple founders with laptops. [1]
If you want to encourage startups in a particular city, you have to fund
startups that won't leave. There are two ways to do that: have rules
preventing them from leaving, or fund them at the point in their life when
they naturally take root. The first approach is a mistake, because it becomes
a filter for selecting bad startups. If your terms force startups to do things
they don't want to, only the desperate ones will take your money.
Good startups will move to another city as a condition of funding. What they
won't do is agree not to move the next time they need funding. So the only way
to get them to stay is to give them enough that they never need to leave.
___
How much would that take? If you want to keep startups from leaving your town,
you have to give them enough that they're not tempted by an offer from Silicon
Valley VCs that requires them to move. A startup would be able to refuse such
an offer if they had grown to the point where they were (a) rooted in your
town and/or (b) so successful that VCs would fund them even if they didn't
move.
How much would it cost to grow a startup to that point? A minimum of several
hundred thousand dollars. [Wufoo](http://wufoo.com) seem to have rooted
themselves in Tampa on $118k, but they're an extreme case. On average it would
take at least half a million.
So if it seems too good to be true to think you could grow a local silicon
valley by giving startups $15-20k each like Y Combinator, that's because it
is. To make them stick around you'd have to give them at least 20 times that
much.
However, even that is an interesting prospect. Suppose to be on the safe side
it would cost a million dollars per startup. If you could get startups to
stick to your town for a million apiece, then for a billion dollars you could
bring in a thousand startups. That probably wouldn't push you past Silicon
Valley itself, but it might get you second place.
For the price of a football stadium, any town that was decent to live in could
make itself one of the biggest startup hubs in the world.
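As a sanity check on the arithmetic above, here's a back-of-the-envelope
sketch; both figures are the essay's own safe-side estimates, not data.

```python
# Back-of-the-envelope cost of importing a startup hub, using the
# essay's "safe side" estimate of $1M per transplanted startup.

cost_per_startup = 1_000_000       # dollars per startup, safe-side guess
budget = 1_000_000_000             # a billion dollars: stadium money

print(budget // cost_per_startup)  # 1000 startups
```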
What's more, it wouldn't take very long. You could probably do it in five
years. During the term of one mayor. And it would get easier over time,
because the more startups you had in town, the less it would take to get new
ones to move there. By the time you had a thousand startups in town, the VCs
wouldn't be trying so hard to get them to move to Silicon Valley; instead
they'd be opening local offices. Then you'd really be in good shape. You'd
have started a self-sustaining chain reaction like the one that drives the
Valley.
___
But now comes the hard part. You have to pick the startups. How do you do
that? Picking startups is a rare and valuable skill, and the handful of people
who have it are not readily hireable. And this skill is so hard to measure
that if a government did try to hire people with it, they'd almost certainly
get the wrong ones.
For example, a city could give money to a VC fund to establish a local branch,
and let them make the choices. But only a bad VC fund would take that deal.
They wouldn't _seem_ bad to the city officials. They'd seem very impressive.
But they'd be bad at picking startups. That's the characteristic failure mode
of VCs. All VCs look impressive to limited partners. The difference between
the good ones and the bad ones only becomes visible in the other half of their
jobs: choosing and advising startups. [2]
What you really want is a pool of local angel investors—people investing money
they made from their own startups. But unfortunately you run into a chicken
and egg problem here. If your city isn't already a startup hub, there won't be
people there who got rich from startups. And there is no way I can think of
that a city could attract angels from outside. By definition they're rich.
There's no incentive that would make them move. [3]
However, a city could select startups by piggybacking on the expertise of
investors who weren't local. It would be pretty straightforward to make a list
of the most eminent Silicon Valley angels and from that to generate a list of
all the startups they'd invested in. If a city offered these companies a
million dollars each to move, a lot of the earlier stage ones would probably
take it.
Preposterous as this plan sounds, it's probably the most efficient way a city
could select good startups.
It would hurt the startups somewhat to be separated from their original
investors. On the other hand, the extra million dollars would give them a lot
more runway.
___
Would the transplanted startups survive? Quite possibly. The only way to find
out would be to try it. It would be a pretty cheap experiment, as civil
expenditures go. Pick 30 startups that eminent angels have recently invested
in, give them each a million dollars if they'll relocate to your city, and see
what happens after a year. If they seem to be thriving, you can try importing
startups on a larger scale.
Don't be too legalistic about the conditions under which they're allowed to
leave. Just have a gentlemen's agreement.
Don't try to do it on the cheap and pick only 10 for the initial experiment.
If you do this on too small a scale you'll just guarantee failure. Startups
need to be around other startups. 30 would be enough to feel like a community.
Don't try to make them all work in some renovated warehouse you've made into
an "incubator." Real startups prefer to work in their own spaces.
In fact, don't impose any restrictions on the startups at all. Startup
founders are mostly [hackers](gba.html), and hackers are much more constrained
by gentlemen's agreements than regulations. If they shake your hand on a
promise, they'll keep it. But show them a lock and their first thought is how
to pick it.
Interestingly, the 30-startup experiment could be done by any sufficiently
rich private citizen. And what pressure it would put on the city if it worked.
[4]
___
Should the city take stock in return for the money? In principle they're
entitled to, but how would they choose valuations for the startups? You
couldn't just give them all the same valuation: that would be too low for some
(who'd turn you down) and too high for others (because it might make their
next round a "down round"). And since we're assuming we're doing this without
being able to pick startups, we also have to assume we can't value them, since
that's practically the same thing.
Another reason not to take stock in the startups is that startups are often
involved in disreputable things. So are established companies, but they don't
get blamed for it. If someone gets murdered by someone they met on Facebook,
the press will treat the story as if it were about Facebook. If someone gets
murdered by someone they met at a supermarket, the press will just treat it as
a story about a murder. So understand that if you invest in startups, they
might build things that get used for pornography, or file-sharing, or the
expression of unfashionable opinions. You should probably sponsor this project
jointly with your political opponents, so they can't use whatever the startups
do as a club to beat you with.
It would be too much of a political liability just to give the startups the
money, though. So the best plan would be to make it convertible debt, but
debt that didn't convert except in a really big round, like $20 million.
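A minimal sketch of that conversion rule: the $20 million threshold comes
from the text, while the function name and example round sizes are
hypothetical, chosen just to illustrate the mechanism.

```python
# Sketch of the proposed instrument: convertible debt that converts
# to stock only in a sufficiently large round.

CONVERSION_THRESHOLD = 20_000_000  # dollars, the "really big round" above

def converts(round_size: float) -> bool:
    # The city's note stays debt unless the startup raises a round at
    # least this big, so the city only ends up owning stock in winners.
    return round_size >= CONVERSION_THRESHOLD

print(converts(5_000_000))   # False: a typical round leaves it as debt
print(converts(25_000_000))  # True: a big round converts it
```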
___
How well this scheme worked would depend on the [city](cities.html). There are
some towns, like Portland, that would be easy to turn into startup hubs, and
others, like Detroit, where it would really be an uphill battle. So be honest
with yourself about the sort of town you have before you try this.
It will be easier in proportion to how much your town resembles San Francisco.
Do you have good weather? Do people live downtown, or have they abandoned the
center for the suburbs? Would the city be described as "hip" and "tolerant,"
or as reflecting "traditional values?" Are there good universities nearby? Are
there walkable neighborhoods? Would nerds feel at home? If you answered yes to
all these questions, you might be able not only to pull off this scheme, but
to do it for less than a million per startup.
I realize the chance of any city having the political will to carry out this
plan is microscopically small. I just wanted to explore what it would take if
one did. How hard would it be to jumpstart a silicon valley? It's fascinating
to think this prize might be within the reach of so many cities. So even
though they'll all still spend the money on the stadium, at least now someone
can ask them: why did you choose to do that instead of becoming a serious
rival to Silicon Valley?
**Notes**
[1] What people who start these supposedly local seed firms always find is
that (a) their applicants come from all over, not just the local area, and (b)
the local startups also apply to the other seed firms. So what ends up
happening is that the applicant pool gets partitioned by quality rather than
geography.
[2] Interestingly, the bad VCs fail by choosing startups run by people like
them—people who are good presenters, but have no real substance. It's a case
of the fake leading the fake. And since everyone involved is so plausible, the
LPs who invest in these funds have no idea what's happening till they measure
their returns.
[3] Not even being a tax haven, I suspect. That makes some rich people move,
but not the type who would make good angel investors in startups.
[4] Thanks to Michael Keenan for pointing this out.
**Thanks** to Trevor Blackwell, Jessica Livingston, Robert Morris, and Fred
Wilson for reading drafts of this.
June 2006
_(This essay is derived from talks at Usenix 2006 and Railsconf 2006.)_
A couple years ago my friend Trevor and I went to look at the Apple garage. As
we stood there, he said that as a kid growing up in Saskatchewan he'd been
amazed at the dedication Jobs and Wozniak must have had to work in a garage.
"Those guys must have been freezing!"
That's one of California's hidden advantages: the mild climate means there's
lots of marginal space. In cold places that margin gets trimmed off. There's a
sharper line between outside and inside, and only projects that are officially
sanctioned — by organizations, or parents, or wives, or at least by oneself —
get proper indoor space. That raises the activation energy for new ideas. You
can't just tinker. You have to justify.
Some of Silicon Valley's most famous companies began in garages: Hewlett-
Packard in 1938, Apple in 1976, Google in 1998. In Apple's case the garage
story is a bit of an urban legend. Woz says all they did there was assemble
some computers, and that he did all the actual design of the Apple I and Apple
II in his apartment or his cube at HP. [1] This was apparently too marginal
even for Apple's PR people.
By conventional standards, Jobs and Wozniak were marginal people too.
Obviously they were smart, but they can't have looked good on paper. They were
at the time a pair of college dropouts with about three years of school
between them, and hippies to boot. Their previous business experience
consisted of making "blue boxes" to hack into the phone system, a business
with the rare distinction of being both illegal and unprofitable.
**Outsiders**
Now a startup operating out of a garage in Silicon Valley would feel part of
an exalted tradition, like the poet in his garret, or the painter who can't
afford to heat his studio and thus has to wear a beret indoors. But in 1976 it
didn't seem so cool. The world hadn't yet realized that starting a computer
company was in the same category as being a writer or a painter. It hadn't
been for long. Only in the preceding couple years had the dramatic fall in the
cost of hardware allowed outsiders to compete.
In 1976, everyone looked down on a company operating out of a garage,
including the founders. One of the first things Jobs did when they got some
money was to rent office space. He wanted Apple to seem like a real company.
They already had something few real companies ever have: a fabulously well
designed product. You'd think they'd have had more confidence. But I've talked
to a lot of startup founders, and it's always this way. They've built
something that's going to change the world, and they're worried about some nit
like not having proper business cards.
That's the paradox I want to explore: great new things often come from the
margins, and yet the people who discover them are looked down on by everyone,
including themselves.
It's an old idea that new things come from the margins. I want to examine its
internal structure. Why do great ideas come from the margins? What kind of
ideas? And is there anything we can do to encourage the process?
**Insiders**
One reason so many good ideas come from the margin is simply that there's so
much of it. There have to be more outsiders than insiders, if insider means
anything. If the number of outsiders is huge it will always seem as if a lot
of ideas come from them, even if few do per capita. But I think there's more
going on than this. There are real disadvantages to being an insider, and in
some kinds of work they can outweigh the advantages.
Imagine, for example, what would happen if the government decided to
commission someone to write an official Great American Novel. First there'd be
a huge ideological squabble over who to choose. Most of the best writers would
be excluded for having offended one side or the other. Of the remainder, the
smart ones would refuse such a job, leaving only a few with the wrong sort of
ambition. The committee would choose one at the height of his career — that
is, someone whose best work was behind him — and hand over the project with
copious free advice about how the book should show in positive terms the
strength and diversity of the American people, etc, etc.
The unfortunate writer would then sit down to work with a huge weight of
expectation on his shoulders. Not wanting to blow such a public commission,
he'd play it safe. This book had better command respect, and the way to ensure
that would be to make it a tragedy. Audiences have to be enticed to laugh, but
if you kill people they feel obliged to take you seriously. As everyone knows,
America plus tragedy equals the Civil War, so that's what it would have to be
about. When finally completed twelve years later, the book would be a 900-page
pastiche of existing popular novels — roughly _Gone with the Wind_ plus
_Roots_. But its bulk and celebrity would make it a bestseller for a few
months, until blown out of the water by a talk-show host's autobiography. The
book would be made into a movie and thereupon forgotten, except by the more
waspish sort of reviewers, among whom it would be a byword for bogusness like
Milli Vanilli or _Battlefield Earth_.
Maybe I got a little carried away with this example. And yet is this not at
each point the way such a project would play out? The government knows better
than to get into the novel business, but in other fields where they have a
natural monopoly, like nuclear waste dumps, aircraft carriers, and regime
change, you'd find plenty of projects isomorphic to this one — and indeed,
plenty that were less successful.
This little thought experiment suggests a few of the disadvantages of insider
projects: the selection of the wrong kind of people, the excessive scope, the
inability to take risks, the need to seem serious, the weight of expectations,
the power of vested interests, the undiscerning audience, and perhaps most
dangerous, the tendency of such work to become a duty rather than a pleasure.
**Tests**
A world with outsiders and insiders implies some kind of test for
distinguishing between them. And the trouble with most tests for selecting
elites is that there are two ways to pass them: to be good at what they try to
measure, and to be good at hacking the test itself.
So the first question to ask about a field is how honest its tests are,
because this tells you what it means to be an outsider. This tells you how
much to trust your instincts when you disagree with authorities, whether it's
worth going through the usual channels to become one yourself, and perhaps
whether you want to work in this field at all.
Tests are least hackable when there are consistent standards for quality, and
the people running the test really care about its integrity. Admissions to PhD
programs in the hard sciences are fairly honest, for example. The professors
will get whoever they admit as their own grad students, so they try hard to
choose well, and they have a fair amount of data to go on. Whereas
undergraduate admissions seem to be much more hackable.
One way to tell whether a field has consistent standards is the overlap
between the leading practitioners and the people who teach the subject in
universities. At one end of the scale you have fields like math and physics,
where nearly all the teachers are among the best practitioners. In the middle
are medicine, law, history, architecture, and computer science, where many
are. At the bottom are business, literature, and the visual arts, where
there's almost no overlap between the teachers and the leading practitioners.
It's this end that gives rise to phrases like "those who can't do, teach."
Incidentally, this scale might be helpful in deciding what to study in
college. When I was in college the rule seemed to be that you should study
whatever you were most interested in. But in retrospect you're probably better
off studying something moderately interesting with someone who's good at it
than something very interesting with someone who isn't. You often hear people
say that you shouldn't major in business in college, but this is actually an
instance of a more general rule: don't learn things from teachers who are bad
at them.
How much you should worry about being an outsider depends on the quality of
the insiders. If you're an amateur mathematician and think you've solved a
famous open problem, better go back and check. When I was in grad school, a
friend in the math department had the job of replying to people who sent in
proofs of Fermat's last theorem and so on, and it did not seem as if he saw it
as a valuable source of tips — more like manning a mental health hotline.
Whereas if the stuff you're writing seems different from what English
professors are interested in, that's not necessarily a problem.
**Anti-Tests**
Where the method of selecting the elite is thoroughly corrupt, most of the
good people will be outsiders. In art, for example, the image of the poor,
misunderstood genius is not just one possible image of a great artist: it's
the _standard_ image. I'm not saying it's correct, incidentally, but it is
telling how well this image has stuck. You couldn't make a rap like that stick
to math or medicine. [2]
If it's corrupt enough, a test becomes an anti-test, filtering out the people
it should select by making them do things only the wrong people would do.
[Popularity](nerds.html) in high school seems to be such a test. There are
plenty of similar ones in the grownup world. For example, rising up through
the hierarchy of the average big company demands an attention to politics few
thoughtful people could spare. [3] Someone like Bill Gates can grow a company
under him, but it's hard to imagine him having the patience to climb the
corporate ladder at General Electric — or Microsoft, actually.
It's kind of strange when you think about it, because lord-of-the-flies
schools and bureaucratic companies are both the default. There are probably a
lot of people who go from one to the other and never realize the whole world
doesn't work this way.
I think that's one reason big companies are so often blindsided by startups.
People at big companies don't realize the extent to which they live in an
environment that is one large, ongoing test for the wrong qualities.
If you're an outsider, your best chances for beating insiders are obviously in
fields where corrupt tests select a lame elite. But there's a catch: if the
tests are corrupt, your victory won't be recognized, at least in your
lifetime. You may feel you don't need that, but history suggests it's
dangerous to work in fields with corrupt tests. You may beat the insiders, and
yet not do as good work, on an absolute scale, as you would in a field that
was more honest.
Standards in art, for example, were almost as corrupt in the first half of the
eighteenth century as they are today. This was the era of those fluffy
idealized portraits of countesses with their lapdogs. [Chardin](largilliere-
chardin.html) decided to skip all that and paint ordinary things as he saw
them. He's now considered the best of that period — and yet not the equal of
Leonardo or Bellini or Memling, who all had the additional encouragement of
honest standards.
It can be worth participating in a corrupt contest, however, if it's followed
by another that isn't corrupt. For example, it would be worth competing with a
company that can spend more than you on marketing, as long as you can survive
to the next round, when customers compare your actual products. Similarly, you
shouldn't be discouraged by the comparatively corrupt test of college
admissions, because it's followed immediately by less hackable tests. [4]
**Risk**
Even in a field with honest tests, there are still advantages to being an
outsider. The most obvious is that outsiders have nothing to lose. They can do
risky things, and if they fail, so what? Few will even notice.
The eminent, on the other hand, are weighed down by their eminence. Eminence
is like a suit: it impresses the wrong people, and it constrains the wearer.
Outsiders should realize the advantage they have here. Being able to take
risks is hugely valuable. Everyone values safety too much, both the obscure
and the eminent. No one wants to look like a fool. But it's very useful to be
able to. If most of your ideas aren't stupid, you're probably being too
conservative. You're not bracketing the problem.
Lord Acton said we should judge talent at its best and character at its worst.
For example, if you write one great book and ten bad ones, you still count as
a great writer — or at least, a better writer than someone who wrote eleven
that were merely good. Whereas if you're a quiet, law-abiding citizen most of
the time but occasionally cut someone up and bury them in your backyard,
you're a bad guy.
Almost everyone makes the mistake of treating ideas as if they were
indications of character rather than talent — as if having a stupid idea made
you stupid. There's a huge weight of tradition advising us to play it safe.
"Even a fool is thought wise if he keeps silent," says the Old Testament
(Proverbs 17:28).
Well, that may be fine advice for a bunch of goatherds in Bronze Age
Palestine. There conservatism would be the order of the day. But times have
changed. It might still be reasonable to stick with the Old Testament in
political questions, but materially the world now has a lot more state.
Tradition is less of a guide, not just because things change faster, but
because the space of possibilities is so large. The more complicated the world
gets, the more valuable it is to be willing to look like a fool.
**Delegation**
And yet the more successful people become, the more heat they get if they
screw up — or even seem to screw up. In this respect, as in many others, the
eminent are prisoners of their own success. So the best way to understand the
advantages of being an outsider may be to look at the disadvantages of being
an insider.
If you ask eminent people what's wrong with their lives, the first thing
they'll complain about is the lack of time. A friend of mine at Google is
fairly high up in the company and went to work for them long before they went
public. In other words, he's now rich enough not to have to work. I asked him
if he could still endure the annoyances of having a job, now that he didn't
have to. And he said that there weren't really any annoyances, except — and he
got a wistful look when he said this — that he got _so much email_.
The eminent feel like everyone wants to take a bite out of them. The problem
is so widespread that people pretending to be eminent do it by pretending to
be overstretched.
The lives of the eminent become scheduled, and that's not good for thinking.
One of the great advantages of being an outsider is long, uninterrupted blocks
of time. That's what I remember about grad school: apparently endless supplies
of time, which I spent worrying about, but not writing, my dissertation.
Obscurity is like health food — unpleasant, perhaps, but good for you. Whereas
fame tends to be like the alcohol produced by fermentation. When it reaches a
certain concentration, it kills off the yeast that produced it.
The eminent generally respond to the shortage of time by turning into
managers. They don't have time to work. They're surrounded by junior people
they're supposed to help or supervise. The obvious solution is to have the
junior people do the work. Some good stuff happens this way, but there are
problems it doesn't work so well for: the kind where it helps to have
everything in one head.
For example, it recently emerged that the famous glass artist Dale Chihuly
hasn't actually blown glass for 27 years. He has assistants do the work for
him. But one of the most valuable sources of ideas in the visual arts is the
resistance of the medium. That's why oil paintings look so different from
watercolors. In principle you could make any mark in any medium; in practice
the medium steers you. And if you're no longer doing the work yourself, you
stop learning from this.
So if you want to beat those eminent enough to delegate, one way to do it is
to take advantage of direct contact with the medium. In the arts it's obvious
how: blow your own glass, edit your own films, stage your own plays. And in
the process pay close attention to accidents and to new ideas you have on the
fly. This technique can be generalized to any sort of work: if you're an
outsider, don't be ruled by plans. Planning is often just a weakness forced on
those who delegate.
Is there a general rule for finding problems best solved in one head? Well,
you can manufacture them by taking any project usually done by multiple people
and trying to do it all yourself. Wozniak's work was a classic example: he did
everything himself, hardware and software, and the result was miraculous. He
claims not one bug was ever found in the Apple II, in either hardware or
software.
Another way to find good problems to solve in one head is to focus on the
grooves in the chocolate bar — the places where tasks are divided when they're
split between several people. If you want to beat delegation, focus on a
vertical slice: for example, be both writer and editor, or both design
buildings and construct them.
One especially good groove to span is the one between tools and things made
with them. For example, programming languages and applications are usually
written by different people, and this is responsible for a lot of the worst
flaws in [programming languages](hundred.html). I think every language should
be designed simultaneously with a large application written in it, the way C
was with Unix.
Techniques for competing with delegation translate well into business, because
delegation is endemic there. Instead of avoiding it as a drawback of senility,
many companies embrace it as a sign of maturity. In big companies software is
often designed, implemented, and sold by three separate types of people. In
startups one person may have to do all three. And though this feels stressful,
it's one reason startups win. The needs of customers and the means of
satisfying them are all in one head.
**Focus**
The very skill of insiders can be a weakness. Once someone is good at
something, they tend to spend all their time doing that. This kind of focus is
very valuable, actually. Much of the skill of experts is the ability to ignore
false trails. But focus has drawbacks: you don't learn from other fields, and
when a new approach arrives, you may be the last to notice.
For outsiders this translates into two ways to win. One is to work on a
variety of things. Since you can't derive as much benefit (yet) from a narrow
focus, you may as well cast a wider net and derive what benefit you can from
similarities between fields. Just as you can compete with delegation by
working on larger vertical slices, you can compete with specialization by
working on larger horizontal slices — by both writing and illustrating your
book, for example.
The second way to compete with focus is to see what focus overlooks. In
particular, new things. So if you're not good at anything yet, consider
working on something so new that no one else is either. It won't have any
prestige yet, if no one is good at it, but you'll have it all to yourself.
The potential of a new medium is usually underestimated, precisely because no
one has yet explored its possibilities. Before [Durer](pilate.html) tried
making engravings, no one took them very seriously. Engraving was for making
little devotional images — basically fifteenth century baseball cards of
saints. Trying to make masterpieces in this medium must have seemed to Durer's
contemporaries the way that, say, making masterpieces in
[comics](http://www.fantagraphics.com/artist/clowes/clowes.html) might seem to
the average person today.
In the computer world we get not new mediums but new platforms: the
minicomputer, the microprocessor, the web-based application. At first they're
always dismissed as being unsuitable for real work. And yet someone always
decides to try anyway, and it turns out you can do more than anyone expected.
So in the future when you hear people say of a new platform: yeah, it's
popular and cheap, but not ready yet for real work, jump on it.
As well as being more comfortable working on established lines, insiders
generally have a vested interest in perpetuating them. The professor who made
his reputation by discovering some new idea is not likely to be the one to
discover its replacement. This is particularly true with companies, who have
not only skill and pride anchoring them to the status quo, but money as well.
The Achilles heel of successful companies is their inability to cannibalize
themselves. Many innovations consist of replacing something with a cheaper
alternative, and companies just don't want to see a path whose immediate
effect is to cut an existing source of revenue.
So if you're an outsider you should actively seek out contrarian projects.
Instead of working on things the eminent have made prestigious, work on things
that could steal that prestige.
The really juicy new approaches are not the ones insiders reject as
impossible, but those they ignore as undignified. For example, after Wozniak
designed the Apple II he offered it first to his employer, HP. They passed.
One of the reasons was that, to save money, he'd designed the Apple II to use
a TV as a monitor, and HP felt they couldn't produce anything so declasse.
**Less**
Wozniak used a TV as a monitor for the simple reason that he couldn't afford a
monitor. Outsiders are not merely free but compelled to make things that are
cheap and lightweight. And both are good bets for growth: cheap things spread
faster, and lightweight things evolve faster.
The eminent, on the other hand, are almost forced to work on a large scale.
Instead of garden sheds they must design huge art museums. One reason they
work on big things is that they can: like our hypothetical novelist, they're
flattered by such opportunities. They also know that big projects will by
their sheer bulk impress the audience. A garden shed, however lovely, would be
easy to ignore; a few might even snicker at it. You can't snicker at a giant
museum, no matter how much you dislike it. And finally, there are all those
people the eminent have working for them; they have to choose projects that
can keep them all busy.
Outsiders are free of all this. They can work on small things, and there's
something very pleasing about small things. Small things can be perfect; big
ones always have something wrong with them. But there's a [magic](isetta.html)
in small things that goes beyond such rational explanations. All kids know it.
Small things have more personality.
Plus making them is more fun. You can do what you want; you don't have to
satisfy committees. And perhaps most important, small things can be done fast.
The prospect of seeing the finished project hangs in the air like the smell of
dinner cooking. If you work fast, maybe you could have it done tonight.
Working on small things is also a good way to learn. The most important kinds
of learning happen one project at a time. ("Next time, I won't...") The faster
you cycle through projects, the faster you'll evolve.
Plain materials have a charm like small scale. And in addition there's the
challenge of making do with less. Every designer's ears perk up at the mention
of that game, because it's a game you can't lose. Like the JV playing the
varsity, if you even tie, you win. So paradoxically there are cases where
fewer resources yield better results, because the designers' pleasure at their
own ingenuity more than compensates. [5]
So if you're an outsider, take advantage of your ability to make small and
inexpensive things. Cultivate the pleasure and simplicity of that kind of
work; one day you'll miss it.
**Responsibility**
When you're old and eminent, what will you miss about being young and obscure?
What people seem to miss most is the lack of responsibilities.
Responsibility is an occupational disease of eminence. In principle you could
avoid it, just as in principle you could avoid getting fat as you get old, but
few do. I sometimes suspect that responsibility is a trap and that the most
virtuous route would be to shirk it, but regardless it's certainly
constraining.
When you're an outsider you're constrained too, of course. You're short of
money, for example. But that constrains you in different ways. How does
responsibility constrain you? The worst thing is that it allows you not to
focus on real work. Just as the most dangerous forms of
[procrastination](procrastination.html) are those that seem like work, the
danger of responsibilities is not just that they can consume a whole day, but
that they can do it without setting off the kind of alarms you'd set off if
you spent a whole day sitting on a park bench.
A lot of the pain of being an outsider is being aware of one's own
procrastination. But this is actually a good thing. You're at least close
enough to work that the smell of it makes you hungry.
As an outsider, you're just one step away from getting things done. A huge
step, admittedly, and one that most people never seem to make, but only one
step. If you can summon up the energy to get started, you can work on projects
with an intensity (in both senses) that few insiders can match. For insiders
work turns into a duty, laden with responsibilities and expectations. It's
never so pure as it was when they were young.
Work like a dog being taken for a walk, instead of an ox being yoked to the
plow. That's what they miss.
**Audience**
A lot of outsiders make the mistake of doing the opposite; they admire the
eminent so much that they copy even their flaws. Copying is a good way to
learn, but copy the right things. When I was in college I imitated the pompous
diction of famous professors. But this wasn't what _made_ them eminent — it
was more a flaw their eminence had allowed them to sink into. Imitating it was
like pretending to have gout in order to seem rich.
Half the distinguishing qualities of the eminent are actually disadvantages.
Imitating these is not only a waste of time, but will make you seem a fool to
your models, who are often well aware of it.
What are the genuine advantages of being an insider? The greatest is an
audience. It often seems to outsiders that the great advantage of insiders is
money — that they have the resources to do what they want. But so do people
who inherit money, and that doesn't seem to help, not as much as an audience.
It's good for morale to know people want to see what you're making; it draws
work out of you.
If I'm right that the defining advantage of insiders is an audience, then we
live in exciting times, because just in the last ten years the Internet has
made audiences a lot more liquid. Outsiders don't have to content themselves
anymore with a proxy audience of a few smart friends. Now, thanks to the
Internet, they can start to grow themselves actual audiences. This is great
news for the marginal, who retain the advantages of outsiders while
increasingly being able to siphon off what had till recently been the
prerogative of the elite.
Though the Web has been around for more than ten years, I think we're just
beginning to see its democratizing effects. Outsiders are still learning how
to steal audiences. But more importantly, audiences are still learning how to
be stolen — they're still just beginning to realize how much
[deeper](http://journalism.nyu.edu/pubzone/weblogs/pressthink/2004/03/15/lott_case.html)
bloggers can dig than journalists, how much [more
interesting](http://reddit.com) a democratic news site can be than a front
page controlled by editors, and how much
[funnier](http://www.youtube.com/watch?v=SLbFDMplZDs) a bunch of kids with
webcams can be than mass-produced sitcoms.
The big media companies shouldn't worry that people will post their
copyrighted material on YouTube. They should worry that people will post their
own stuff on YouTube, and audiences will watch that instead.
**Hacking**
If I had to condense the power of the marginal into one sentence it would be:
just try hacking something together. That phrase draws in most threads I've
mentioned here. Hacking something together means deciding what to do as you're
doing it, not a subordinate executing the vision of his boss. It implies the
result won't be pretty, because it will be made quickly out of inadequate
materials. It may work, but it won't be the sort of thing the eminent would
want to put their name on. Something hacked together means something that
barely solves the problem, or maybe doesn't solve the problem at all, but
another you discovered en route. But that's ok, because the main value of that
initial version is not the thing itself, but what it leads to. Insiders who
daren't walk through the mud in their nice clothes will never make it to the
solid ground on the other side.
The word "try" is an especially valuable component. I disagree here with Yoda,
who said there is no try. There is try. It implies there's no punishment if
you fail. You're driven by curiosity instead of duty. That means the wind of
procrastination will be in your favor: instead of avoiding this work, this
will be what you do as a way of avoiding other work. And when you do it,
you'll be in a better mood. The more the work depends on imagination, the more
that matters, because most people have more ideas when they're happy.
If I could go back and redo my twenties, that would be one thing I'd do more
of: just try hacking things together. Like many people that age, I spent a lot
of time worrying about what I should do. I also spent some time trying to
build stuff. I should have spent less time worrying and more time building. If
you're not sure what to do, make something.
Raymond Chandler's advice to thriller writers was "When in doubt, have a man
come through a door with a gun in his hand." He followed that advice. Judging
from his books, he was often in doubt. But though the result is occasionally
cheesy, it's never boring. In life, as in books, action is underrated.
Fortunately the number of things you can just hack together keeps increasing.
People fifty years ago would be astonished that one could just hack together a
movie, for example. Now you can even hack together distribution. Just make
stuff and put it online.
**Inappropriate**
If you really want to score big, the place to focus is the margin of the
margin: the territories only recently captured from the insiders. That's where
you'll find the juiciest projects still undone, either because they seemed too
risky, or simply because there were too few insiders to explore everything.
This is why I spend most of my time writing [essays](essay.html) lately. The
writing of essays used to be limited to those who could get them published. In
principle you could have written them and just shown them to your friends; in
practice that didn't work. [6] An essayist needs the resistance of an
audience, just as an engraver needs the resistance of the plate.
Up till a few years ago, writing essays was the ultimate insider's game.
Domain experts were allowed to publish essays about their field, but the pool
allowed to write on general topics was about eight people who went to the
right parties in New York. Now the reconquista has overrun this territory,
and, not surprisingly, found it sparsely cultivated. There are so many essays
yet unwritten. They tend to be the naughtier ones; the insiders have pretty
much exhausted the motherhood and apple pie topics.
This leads to my final suggestion: a technique for determining when you're on
the right track. You're on the right track when people complain that you're
unqualified, or that you've done something inappropriate. If people are
complaining, that means you're doing something rather than sitting around,
which is the first step. And if they're driven to such empty forms of
complaint, that means you've probably done something good.
If you make something and people complain that it doesn't _work_, that's a
problem. But if the worst thing they can hit you with is your own status as an
outsider, that implies that in every other respect you've succeeded. Pointing
out that someone is unqualified is as desperate as resorting to racial slurs.
It's just a legitimate-sounding way of saying: we don't like your type around
here.
But the best thing of all is when people call what you're doing inappropriate.
I've been hearing this word all my life and I only recently realized that it
is, in fact, the sound of the homing beacon. "Inappropriate" is the null
criticism. It's merely the adjective form of "I don't like it."
So that, I think, should be the highest goal for the marginal. Be
inappropriate. When you hear people saying that, you're golden. And they,
incidentally, are busted.
**Notes**
[1] The facts about Apple's early history are from an interview with [Steve
Wozniak](http://foundersatwork.com/steve-wozniak.html) in Jessica Livingston's
_Founders at Work_.
[2] As usual the popular image is several decades behind reality. Now the
misunderstood artist is not a chain-smoking drunk who pours his soul into big,
messy canvases that philistines see and say "that's not art" because it isn't
a picture of anything. The philistines have now been trained that anything
hung on a wall is art. Now the misunderstood artist is a coffee-drinking vegan
cartoonist whose work they see and say "that's not art" because it looks like
stuff they've seen in the Sunday paper.
[3] In fact this would do fairly well as a definition of politics: what
determines rank in the absence of objective tests.
[4] In high school you're led to believe your whole future depends on where
you go to college, but it turns out only to buy you a couple years. By your
mid-twenties the people worth impressing already judge you more by what you've
done than where you went to school.
[5] Managers are presumably wondering, how can I make this miracle happen? How
can I make the people working for me do more with less? Unfortunately the
constraint probably has to be self-imposed. If you're _expected_ to do more
with less, then you're being starved, not eating virtuously.
[6] Without the prospect of publication, the closest most people come to
writing essays is to write in a journal. I find I never get as deeply into
subjects as I do in proper essays. As the name implies, you don't go back and
rewrite journal entries over and over for two weeks.
**Thanks** to Sam Altman, Trevor Blackwell, Paul Buchheit, Sarah Harlin,
Jessica Livingston, Jackie McDonough, Robert Morris, Olin Shivers, and Chris
Small for reading drafts of this, and to Chris Small and Chad Fowler for
inviting me to speak.
September 2009
I bet you the current issue of _Cosmopolitan_ has an article whose title
begins with a number. "7 Things He Won't Tell You about Sex," or something
like that. Some popular magazines feature articles of this type on the cover
of every issue. That can't be happening by accident. Editors must know they
attract readers.
Why do readers like the list of n things so much? Mainly because it's easier
to read than a regular article. [1] Structurally, the list of n things is a
degenerate case of essay. An essay can go anywhere the writer wants. In a list
of n things the writer agrees to constrain himself to a collection of points
of roughly equal importance, and he tells the reader explicitly what they are.
Some of the work of reading an article is understanding its structure—figuring
out what in high school we'd have called its "outline." Not explicitly, of
course, but someone who really understands an article probably has something
in his brain afterward that corresponds to such an outline. In a list of n
things, this work is done for you. Its structure is an exoskeleton.
As well as being explicit, the structure is guaranteed to be of the simplest
possible type: a few main points with few to no subordinate ones, and no
particular connection between them.
Because the main points are unconnected, the list of n things is random
access. There's no thread of reasoning you have to follow. You could read the
list in any order. And because the points are independent of one another, they
work like watertight compartments in an unsinkable ship. If you get bored
with, or can't understand, or don't agree with one point, you don't have to
give up on the article. You can just abandon that one and skip to the next. A
list of n things is parallel and therefore fault tolerant.
There are times when this format is what a writer wants. One, obviously, is
when what you have to say actually is a list of n things. I once wrote an
essay about the [mistakes that kill startups](startupmistakes.html), and a few
people made fun of me for writing something whose title began with a number.
But in that case I really was trying to make a complete catalog of a number of
independent things. In fact, one of the questions I was trying to answer was
how many there were.
There are other less legitimate reasons for using this format. For example, I
use it when I get close to a deadline. If I have to give a talk and I haven't
started it a few days beforehand, I'll sometimes play it safe and make the
talk a list of n things.
The list of n things is easier for writers as well as readers. When you're
writing a real essay, there's always a chance you'll hit a dead end. A real
essay is a train of thought, and some trains of thought just peter out. That's
an alarming possibility when you have to give a talk in a few days. What if
you run out of ideas? The compartmentalized structure of the list of n things
protects the writer from his own stupidity in much the same way it protects
the reader. If you run out of ideas on one point, no problem: it won't kill
the essay. You can take out the whole point if you need to, and the essay will
still survive.
Writing a list of n things is so relaxing. You think of n/2 of them in the
first 5 minutes. So bang, there's the structure, and you just have to fill it
in. As you think of more points, you just add them to the end. Maybe you take
out or rearrange or combine a few, but at every stage you have a valid (though
initially low-res) list of n things. It's like the sort of programming where
you write a version 1 very quickly and then gradually modify it, but at every
point have working code—or the style of painting where you begin with a
complete but very blurry sketch done in an hour, then spend a week cranking up
the resolution.
Because the list of n things is easier for writers too, it's not always a
damning sign when readers prefer it. It's not necessarily evidence readers are
lazy; it could also mean they don't have much confidence in the writer. The
list of n things is in that respect the cheeseburger of essay forms. If you're
eating at a restaurant you suspect is bad, your best bet is to order the
cheeseburger. Even a bad cook can make a decent cheeseburger. And there are
pretty strict conventions about what a cheeseburger should look like. You can
assume the cook isn't going to try something weird and artistic. The list of n
things similarly limits the damage that can be done by a bad writer. You know
it's going to be about whatever the title says, and the format prevents the
writer from indulging in any flights of fancy.
Because the list of n things is the easiest essay form, it should be a good
one for beginning writers. And in fact it is what most beginning writers are
taught. The classic 5 paragraph essay is really a list of n things for n = 3.
But the students writing them don't realize they're using the same structure
as the articles they read in _Cosmopolitan_. They're not allowed to include
the numbers, and they're expected to spackle over the gaps with gratuitous
transitions ("Furthermore...") and cap the thing at either end with
introductory and concluding paragraphs so it will look superficially like a
real essay. [2]
It seems a fine plan to start students off with the list of n things. It's the
easiest form. But if we're going to do that, why not do it openly? Let them
write lists of n things like the pros, with numbers and no transitions or
"conclusion."
There is one case where the list of n things is a dishonest format: when you
use it to attract attention by falsely claiming the list is an exhaustive one.
I.e. if you write an article that purports to be about _the_ 7 secrets of
success. That kind of title is the same sort of reflexive challenge as a
whodunit. You have to at least look at the article to check whether they're
the same 7 you'd list. Are you overlooking one of the secrets of success?
Better check.
It's fine to put "The" before the number if you really believe you've made an
exhaustive list. But evidence suggests most things with titles like this are
linkbait.
The greatest weakness of the list of n things is that there's so little room
for new thought. The main point of essay writing, when done right, is the new
ideas you have while doing it. A real essay, as the name implies, is
[dynamic](essay.html): you don't know what you're going to write when you
start. It will be about whatever you discover in the course of writing it.
This can only happen in a very limited way in a list of n things. You make the
title first, and that's what it's going to be about. You can't have more new
ideas in the writing than will fit in the watertight compartments you set up
initially. And your brain seems to know this: because you don't have room for
new ideas, you don't have them.
Another advantage of admitting to beginning writers that the 5 paragraph essay
is really a list of n things is that we can warn them about this. It only lets
you experience the defining characteristic of essay writing on a small scale:
in thoughts of a sentence or two. And it's particularly dangerous that the 5
paragraph essay buries the list of n things within something that looks like a
more sophisticated type of essay. If you don't know you're using this form,
you don't know you need to escape it.
**Notes**
[1] Articles of this type are also startlingly popular on Delicious, but I
think that's because [delicious/popular](http://delicious.com/popular) is
driven by bookmarking, not because Delicious users are stupid. Delicious users
are collectors, and a list of n things seems particularly collectible because
it's a collection itself.
[2] Most "word problems" in school math textbooks are similarly misleading.
They look superficially like the application of math to real problems, but
they're not. So if anything they reinforce the impression that math is merely
a complicated but pointless collection of stuff to be memorized.
December 2008
For nearly all of history the success of a society was proportionate to its
ability to assemble large and disciplined organizations. Those who bet on
economies of scale generally won, which meant the largest organizations were
the most successful ones.
Things have already changed so much that this is hard for us to believe, but
till just a few decades ago the largest organizations tended to be the most
progressive. An ambitious kid graduating from college in 1960 wanted to work
in the huge, gleaming offices of Ford, or General Electric, or NASA. Small
meant small-time. Small in 1960 didn't mean a cool little startup. It meant
uncle Sid's shoe store.
When I grew up in the 1970s, the idea of the "corporate ladder" was still very
much alive. The standard plan was to try to get into a good college, from
which one would be drafted into some organization and then rise to positions
of gradually increasing responsibility. The more ambitious merely hoped to
climb the same ladder faster. [1]
But in the late twentieth century something changed. It turned out that
economies of scale were not the only force at work. Particularly in
technology, the increase in speed one could get from smaller groups started to
trump the advantages of size.
The future turned out to be different from the one we were expecting in 1970.
The domed cities and flying cars we expected have failed to materialize. But
fortunately so have the jumpsuits with badges indicating our specialty and
rank. Instead of being dominated by a few, giant tree-structured
organizations, it's now looking like the economy of the future will be a fluid
network of smaller, independent units.
It's not so much that large organizations stopped working. There's no evidence
that famously successful organizations like the Roman army or the British East
India Company were any less afflicted by protocol and politics than
organizations of the same size today. But they were competing against
opponents who couldn't change the rules on the fly by discovering new
technology. Now it turns out the rule "large and disciplined organizations
win" needs to have a qualification appended: "at games that change slowly." No
one knew till change reached a sufficient speed.
Large organizations _will_ start to do worse now, though, because for the
first time in history they're no longer getting the best people. An ambitious
kid graduating from college now doesn't want to work for a big company. They
want to work for the hot startup that's rapidly growing into one. If they're
really ambitious, they want to start it. [2]
This doesn't mean big companies will disappear. To say that startups will
succeed implies that big companies will exist, because startups that succeed
either become big companies or are acquired by them. [3] But large
organizations will probably never again play the leading role they did up till
the last quarter of the twentieth century.
It's kind of surprising that a trend that lasted so long would ever run out.
How often does it happen that a rule works for thousands of years, then
switches polarity?
The millennia-long run of bigger-is-better left us with a lot of
[traditions](credentials.html) that are now obsolete, but extremely deeply
rooted. Which means the ambitious can now do arbitrage on them. It will be
very valuable to understand precisely which ideas to keep and which can now be
discarded.
The place to look is where the spread of smallness began: in the world of
startups.
There have always been occasional cases, particularly in the US, of ambitious
people who grew the ladder under them instead of climbing it. But till
recently this was an anomalous route that tended to be followed only by
outsiders. It was no coincidence that the great industrialists of the
nineteenth century had so little formal education. As huge as their companies
eventually became, they were all essentially mechanics and shopkeepers at
first. That was a social step no one with a college education would take if
they could avoid it. Till the rise of technology startups, and in particular,
Internet startups, it was very unusual for educated people to start their own
businesses.
The eight men who left Shockley Semiconductor to found Fairchild
Semiconductor, the original Silicon Valley startup, weren't even trying to
start a company at first. They were just looking for a company willing to hire
them as a group. Then one of their parents introduced them to a small
investment bank that offered to find funding for them to start their own, so
they did. But starting a company was an alien idea to them; it was something
they backed into. [4]
Now I would guess that practically every Stanford or Berkeley undergrad who
knows how to program has at least considered the idea of starting a startup.
East Coast universities are not far behind, and British universities only a
little behind them. This pattern suggests that attitudes at Stanford and
Berkeley are not an anomaly, but a leading indicator. This is the way the
world is going.
Of course, Internet startups are still only a fraction of the world's economy.
Could a trend based on them be that powerful?
I think so. There's no reason to suppose there's any limit to the amount of
work that could be done in this area. Like science, wealth seems to expand
fractally. Steam power was a sliver of the British economy when Watt started
working on it. But his work led to more work till that sliver had expanded
into something bigger than the whole economy of which it had initially been a
part.
The same thing could happen with the Internet. If Internet startups offer the
best opportunity for ambitious people, then a lot of ambitious people will
start them, and this bit of the economy will balloon in the usual fractal way.
Even if Internet-related applications only become a tenth of the world's
economy, this component will set the tone for the rest. The most dynamic part
of the economy always does, in everything from salaries to standards of dress.
Not just because of its prestige, but because the principles underlying the
most dynamic part of the economy tend to be ones that work.
For the future, the trend to bet on seems to be networks of small, autonomous
groups whose performance is measured individually. And the societies that win
will be the ones with the least impedance.
As with the original industrial revolution, some societies are going to be
better at this than others. Within a generation of its birth in England, the
Industrial Revolution had spread to continental Europe and North America. But
it didn't spread everywhere. This new way of doing things could only take root
in places that were prepared for it. It could only spread to places that
already had a vigorous middle class.
There is a similar social component to the transformation that began in
Silicon Valley in the 1960s. Two new kinds of techniques were developed there:
techniques for building integrated circuits, and techniques for building a new
type of company designed to grow fast by creating new technology. The
techniques for building integrated circuits spread rapidly to other countries.
But the techniques for building startups didn't. Fifty years later, startups
are ubiquitous in Silicon Valley and common in a handful of other US cities,
but they're still an anomaly in most of the world.
Part of the reason—possibly the main reason—that startups have not spread as
broadly as the Industrial Revolution did is their social disruptiveness.
Though it brought many social changes, the Industrial Revolution was not
fighting the principle that bigger is better. Quite the opposite: the two
dovetailed beautifully. The new industrial companies adapted the customs of
existing large organizations like the military and the civil service, and the
resulting hybrid worked well. "Captains of industry" issued orders to "armies
of workers," and everyone knew what they were supposed to do.
Startups seem to go more against the grain, socially. It's hard for them to
flourish in societies that value hierarchy and stability, just as it was hard
for industrialization to flourish in societies ruled by people who stole at
will from the merchant class. But there were already a handful of countries
past that stage when the Industrial Revolution happened. There do not seem to
be that many ready this time.
**Notes**
[1] One of the bizarre consequences of this model was that the usual way to
make more money was to become a manager. This is one of the things startups
fix.
[2] There are a lot of reasons American car companies have been doing so much
worse than Japanese car companies, but at least one of them is a cause for
optimism: American graduates have more options.
[3] It's possible that companies will one day be able to grow big in revenues
without growing big in people, but we are not very far along that trend yet.
[4] Lécuyer, Christophe, _Making Silicon Valley_, MIT Press, 2006.
**Thanks** to Trevor Blackwell, Paul Buchheit, Jessica Livingston, and Robert
Morris for reading drafts of this.
There is a kind of mania for object-oriented programming at the moment, but
some of the [smartest programmers](reesoo.html) I know are some of the least
excited about it.
My own feeling is that object-oriented programming is a useful technique in
some cases, but it isn't something that has to pervade every program you
write. You should be able to define new types, but you shouldn't have to
express every program as the definition of new types.
I think there are five reasons people like object-oriented programming, and
three and a half of them are bad:
1. Object-oriented programming is exciting if you have a statically-typed language without lexical closures or macros. To some degree, it offers a way around these limitations. (See [Greenspun's Tenth Rule](quotes.html).)
2. Object-oriented programming is popular in big companies, because it suits the way they write software. At big companies, software tends to be written by large (and frequently changing) teams of mediocre programmers. Object-oriented programming imposes a discipline on these programmers that prevents any one of them from doing too much damage. The price is that the resulting code is bloated with protocols and full of duplication. This is not too high a price for big companies, because their software is probably going to be bloated and full of duplication anyway.
3. Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.
4. If a language is itself an object-oriented program, it can be extended by users. Well, maybe. Or maybe you can do even better by offering the sub-concepts of object-oriented programming a la carte. Overloading, for example, is not intrinsically tied to classes. We'll see.
5. Object-oriented abstractions map neatly onto the domains of certain specific kinds of programs, like simulations and CAD systems.
I personally have never needed object-oriented abstractions. Common Lisp has
an enormously powerful object system and I've never used it once. I've done a
lot of things (e.g. making hash tables full of closures) that would have
required object-oriented techniques to do in wimpier languages, but I have
never had to use CLOS.
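To make the hash-tables-full-of-closures point concrete, here is a minimal Common Lisp sketch of the idea — an illustration, not code from any actual program: a "counter object" built from a closure, with a hash table of such closures standing in for a class and its instances.

```lisp
;; A "counter object" as a closure: the local variable COUNT is the
;; object's private state, and the returned lambda dispatches on a
;; message symbol instead of a method name.
(defun make-counter (&optional (count 0))
  (lambda (msg &rest args)
    (ecase msg
      (:inc   (incf count (if args (first args) 1)))
      (:get   count)
      (:reset (setf count 0)))))

;; A hash table full of such closures plays the role of a registry of
;; instances, with no DEFCLASS or DEFMETHOD anywhere.
(defparameter *counters* (make-hash-table :test #'equal))

(defun counter (name)
  (or (gethash name *counters*)
      (setf (gethash name *counters*) (make-counter))))

;; Usage:
;;   (funcall (counter "hits") :inc)    => 1
;;   (funcall (counter "hits") :inc 5)  => 6
;;   (funcall (counter "hits") :get)    => 6
```

Each closure carries its own private count, and message dispatch does the work of method dispatch.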
Maybe I'm just stupid, or have worked on some limited subset of applications.
There is a danger in designing a language based on one's own experience of
programming. But it seems more dangerous to put stuff in that you've never
needed because it's thought to be a good idea.
January 2012
A year ago I noticed a pattern in the least successful startups we'd funded:
they all seemed hard to talk to. It felt as if there was some kind of wall
between us. I could never quite tell if they understood what I was saying.
This caught my attention because earlier we'd noticed a pattern among the most
successful startups, and it seemed to hinge on a different quality. We found
the startups that did best were the ones with the sort of founders about whom
we'd say "they can take care of themselves." The startups that do best are
fire-and-forget in the sense that all you have to do is give them a lead, and
they'll close it, whatever type of lead it is. When they're raising money, for
example, you can do the initial intros knowing that if you wanted to you could
stop thinking about it at that point. You won't have to babysit the round to
make sure it happens. That type of founder is going to come back with the
money; the only question is how much, and on what terms.
It seemed odd that the outliers at the two ends of the spectrum could be
detected by what appeared to be unrelated tests. You'd expect that if the
founders at one end were distinguished by the presence of quality x, at the
other end they'd be distinguished by lack of x. Was there some kind of inverse
relation between [resourcefulness](relres.html) and being hard to talk to?
It turns out there is, and the key to the mystery is the old adage "a word to
the wise is sufficient." Because this phrase is not only overused, but
overused in an indirect way (by prepending the subject to some advice), most
people who've heard it don't know what it means. What it means is that if
someone is wise, all you have to do is say one word to them, and they'll
understand immediately. You don't have to explain in detail; they'll chase
down all the implications.
In much the same way that all you have to do is give the right sort of founder
a one line intro to a VC, and he'll chase down the money. That's the
connection. Understanding all the implications — even the inconvenient
implications — of what someone tells you is a subset of resourcefulness. It's
conversational resourcefulness.
Like real world resourcefulness, conversational resourcefulness often means
doing things you don't want to. Chasing down all the implications of what's
said to you can sometimes lead to uncomfortable conclusions. The best word to
describe the failure to do so is probably "denial," though that seems a bit
too narrow. A better way to describe the situation would be to say that the
unsuccessful founders had the sort of conservatism that comes from weakness.
They traversed idea space as gingerly as a very old person traverses the
physical world. [1]
The unsuccessful founders weren't stupid. Intellectually they were as capable
as the successful founders of following all the implications of what one said
to them. They just weren't eager to.
So being hard to talk to was not what was killing the unsuccessful startups.
It was a sign of an underlying lack of resourcefulness. That's what was
killing them. As well as failing to chase down the implications of what was
said to them, the unsuccessful founders would also fail to chase down funding,
and users, and sources of new ideas. But the most immediate evidence I had
that something was amiss was that I couldn't talk to them.
**Notes**
[1] A YC partner wrote:
My feeling with the bad groups is that coming into office hours, they've
already decided what they're going to do and everything I say is being put
through an internal process in their heads, which either desperately tries to
munge what I've said into something that conforms with their decision or just
outright dismisses it and creates a rationalization for doing so. They may not
even be conscious of this process but that's what I think is happening when
you say something to bad groups and they have that glazed over look. I don't
think it's confusion or lack of understanding per se, it's this internal
process at work.
With the good groups, you can tell that everything you say is being looked at
with fresh eyes and even if it's dismissed, it's because of some logical
reason e.g. "we already tried that" or "from speaking to our users that isn't
what they'd like," etc. Those groups never have that glazed over look.
**Thanks** to Sam Altman, Patrick Collison, Aaron Iba, Jessica Livingston,
Robert Morris, Harj Taggar, and Garry Tan for reading drafts of this.
"...the mere consciousness of an engagement will sometimes worry a whole day."
— Charles Dickens
July 2009
One reason programmers dislike meetings so much is that they're on a different
type of schedule from other people. Meetings cost them more.
There are two types of schedule, which I'll call the manager's schedule and
the maker's schedule. The manager's schedule is for bosses. It's embodied in
the traditional appointment book, with each day cut into one hour intervals.
You can block off several hours for a single task if you need to, but by
default you change what you're doing every hour.
When you use time that way, it's merely a practical problem to meet with
someone. Find an open slot in your schedule, book them, and you're done.
Most powerful people are on the manager's schedule. It's the schedule of
command. But there's another way of using time that's common among people who
make things, like programmers and writers. They generally prefer to use time
in units of half a day at least. You can't write or program well in units of
an hour. That's barely enough time to get started.
When you're operating on the maker's schedule, meetings are a disaster. A
single meeting can blow a whole afternoon, by breaking it into two pieces each
too small to do anything hard in. Plus you have to remember to go to the
meeting. That's no problem for someone on the manager's schedule. There's
always something coming on the next hour; the only question is what. But when
someone on the maker's schedule has a meeting, they have to think about it.
For someone on the maker's schedule, having a meeting is like throwing an
exception. It doesn't merely cause you to switch from one task to another; it
changes the mode in which you work.
I find one meeting can sometimes affect a whole day. A meeting commonly blows
at least half a day, by breaking up a morning or afternoon. But in addition
there's sometimes a cascading effect. If I know the afternoon is going to be
broken up, I'm slightly less likely to start something ambitious in the
morning. I know this may sound oversensitive, but if you're a maker, think of
your own case. Don't your spirits rise at the thought of having an entire day
free to work, with no appointments at all? Well, that means your spirits are
correspondingly depressed when you don't. And ambitious projects are by
definition close to the limits of your capacity. A small decrease in morale is
enough to kill them off.
Each type of schedule works fine by itself. Problems arise when they meet.
Since most powerful people operate on the manager's schedule, they're in a
position to make everyone resonate at their frequency if they want to. But the
smarter ones restrain themselves, if they know that some of the people working
for them need long chunks of time to work in.
Our case is an unusual one. Nearly all investors, including all VCs I know,
operate on the manager's schedule. But [Y Combinator](http://ycombinator.com)
runs on the maker's schedule. Rtm and Trevor and I do because we always have,
and Jessica does too, mostly, because she's gotten into sync with us.
I wouldn't be surprised if there start to be more companies like us. I suspect
founders may increasingly be able to resist, or at least postpone, turning
into managers, just as a few decades ago they started to be able to resist
switching from jeans to suits.
How do we manage to advise so many startups on the maker's schedule? By using
the classic device for simulating the manager's schedule within the maker's:
office hours. Several times a week I set aside a chunk of time to meet
founders we've funded. These chunks of time are at the end of my working day,
and I wrote a signup program that ensures all the appointments within a given
set of office hours are clustered at the end. Because they come at the end of
my day these meetings are never an interruption. (Unless their working day
ends at the same time as mine, the meeting presumably interrupts theirs, but
since they made the appointment it must be worth it to them.) During busy
periods, office hours sometimes get long enough that they compress the day,
but they never interrupt it.
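The signup program itself isn't described beyond that one property, but the scheduling idea — pack the appointments back-to-back against the end of the window — is simple enough to sketch. A hypothetical Common Lisp illustration (the representation and names here are assumptions, not the actual program):

```lisp
;; Hypothetical sketch: given the minute of the day at which office
;; hours end and a list of appointment lengths in minutes, return
;; (START . END) pairs packed back-to-back against the end of the
;; window, so the earlier part of the day stays one unbroken block.
(defun cluster-at-end (window-end durations)
  (let ((start (- window-end (reduce #'+ durations))))
    (mapcar (lambda (len)
              (prog1 (cons start (+ start len))
                (incf start len)))
            durations)))

;; Usage: office hours end at 18:00 (minute 1080), three signups.
;;   (cluster-at-end 1080 '(30 30 45))
;;   => ((975 . 1005) (1005 . 1035) (1035 . 1080))
```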
When we were working on [our own startup](start.html), back in the 90s, I
evolved another trick for partitioning the day. I used to program from dinner
till about 3 am every day, because at night no one could interrupt me. Then
I'd sleep till about 11 am, and come in and work until dinner on what I called
"business stuff." I never thought of it in these terms, but in effect I had
two workdays each day, one on the manager's schedule and one on the maker's.
When you're operating on the manager's schedule you can do something you'd
never want to do on the maker's: you can have speculative meetings. You can
meet someone just to get to know one another. If you have an empty slot in
your schedule, why not? Maybe it will turn out you can help one another in
some way.
Business people in Silicon Valley (and the whole world, for that matter) have
speculative meetings all the time. They're effectively free if you're on the
manager's schedule. They're so common that there's distinctive language for
proposing them: saying that you want to "grab coffee," for example.
Speculative meetings are terribly costly if you're on the maker's schedule,
though. Which puts us in something of a bind. Everyone assumes that, like
other investors, we run on the manager's schedule. So they introduce us to
someone they think we ought to meet, or send us an email proposing we grab
coffee. At this point we have two options, neither of them good: we can meet
with them, and lose half a day's work; or we can try to avoid meeting them,
and probably offend them.
Till recently we weren't clear in our own minds about the source of the
problem. We just took it for granted that we had to either blow our schedules
or offend people. But now that I've realized what's going on, perhaps there's
a third option: to write something explaining the two types of schedule. Maybe
eventually, if the conflict between the manager's schedule and the maker's
schedule starts to be more widely understood, it will become less of a
problem.
Those of us on the maker's schedule are willing to compromise. We know we have
to have some number of meetings. All we ask from those on the manager's
schedule is that they understand the cost.
**Thanks** to Sam Altman, Trevor Blackwell, Paul Buchheit, Jessica Livingston,
and Robert Morris for reading drafts of this.
November 2015
A few months ago an article about Y Combinator said that early on it had been
a "one-man show." It's sadly common to read that sort of thing. But the
problem with that description is not just that it's unfair. It's also
misleading. Much of what's most novel about YC is due to Jessica Livingston.
If you don't understand her, you don't understand YC. So let me tell you a
little about Jessica.
YC had 4 founders. Jessica and I decided one night to start it, and the next
day we recruited my friends Robert Morris and Trevor Blackwell. Jessica and I
ran YC day to day, and Robert and Trevor read applications and did interviews
with us.
Jessica and I were already dating when we started YC. At first we tried to act
"professional" about this, meaning we tried to conceal it. In retrospect that
seems ridiculous, and we soon dropped the pretense. And the fact that Jessica
and I were a couple is a big part of what made YC what it was. YC felt like a
family. The founders early on were mostly young. We all had dinner together
once a week, cooked for the first couple years by me. Our first building had
been a private home. The overall atmosphere was shockingly different from a
VC's office on Sand Hill Road, in a way that was entirely for the better.
There was an authenticity that everyone who walked in could sense. And that
didn't just mean that people trusted us. It was the perfect quality to instill
in startups. Authenticity is one of the most important things YC looks for in
founders, not just because fakers and opportunists are annoying, but because
authenticity is one of the main things that separates the most successful
startups from the rest.
Early YC was a family, and Jessica was its mom. And the culture she defined
was one of YC's most important innovations. Culture is important in any
organization, but at YC culture wasn't just how we behaved when we built the
product. At YC, the culture was the product.
Jessica was also the mom in another sense: she had the last word. Everything
we did as an organization went through her first — who to fund, what to say to
the public, how to deal with other companies, who to hire, everything.
Before we had kids, YC was more or less our life. There was no real
distinction between working hours and not. We talked about YC all the time.
And while there might be some businesses that it would be tedious to let
infect your private life, we liked it. We'd started YC because it was
something we were interested in. And some of the problems we were trying to
solve were endlessly difficult. How do you recognize good founders? You could
talk about that for years, and we did; we still do.
I'm better at some things than Jessica, and she's better at some things than
me. One of the things she's best at is judging people. She's one of those rare
individuals with x-ray vision for character. She can see through any kind of
faker almost immediately. Her nickname within YC was the Social Radar, and
this special power of hers was critical in making YC what it is. The earlier
you pick startups, the more you're picking the founders. Later stage investors
get to try products and look at growth numbers. At the stage where YC invests,
there is often neither a product nor any numbers.
Others thought YC had some special insight about the future of technology.
Mostly we had the same sort of insight Socrates claimed: we at least knew we
knew nothing. What made YC successful was being able to pick good founders. We
thought Airbnb was a bad idea. We funded it because we liked the founders.
During interviews, Robert and Trevor and I would pepper the applicants with
technical questions. Jessica would mostly watch. A lot of the applicants
probably read her as some kind of secretary, especially early on, because she
was the one who'd go out and get each new group and she didn't ask many
questions. She was ok with that. It was easier for her to watch people if they
didn't notice her. But after the interview, the three of us would turn to
Jessica and ask "What does the Social Radar say?" [1]
Having the Social Radar at interviews wasn't just how we picked founders who'd
be successful. It was also how we picked founders who were good people. At
first we did this because we couldn't help it. Imagine what it would feel like
to have x-ray vision for character. Being around bad people would be
intolerable. So we'd refuse to fund founders whose characters we had doubts
about even if we thought they'd be successful.
Though we initially did this out of self-indulgence, it turned out to be very
valuable to YC. We didn't realize it in the beginning, but the people we were
picking would become the YC alumni network. And once we picked them, unless
they did something really egregious, they were going to be part of it for
life. Some now think YC's alumni network is its most valuable feature. I
personally think YC's advice is pretty good too, but the alumni network is
certainly among the most valuable features. The level of trust and helpfulness
is remarkable for a group of such size. And Jessica is the main reason why.
(As we later learned, it probably cost us little to reject people whose
characters we had doubts about, because how good founders are and how well
they do are [_not orthogonal_](mean.html). If bad founders succeed at all,
they tend to sell early. The most successful founders are almost all good.)
If Jessica was so important to YC, why don't more people realize it? Partly
because I'm a writer, and writers always get disproportionate attention. YC's
brand was initially my brand, and our applicants were people who'd read my
essays. But there is another reason: Jessica hates attention. Talking to
reporters makes her nervous. The thought of giving a talk paralyzes her. She
was even uncomfortable at our wedding, because the bride is always the center
of attention. [2]
It's not just because she's shy that she hates attention, but because it
throws off the Social Radar. She can't be herself. You can't watch people when
everyone is watching you.
Another reason attention worries her is that she hates bragging. In anything
she does that's publicly visible, her biggest fear (after the obvious fear
that it will be bad) is that it will seem ostentatious. She says being too
modest is a common problem for women. But in her case it goes beyond that. She
has a horror of ostentation so visceral it's almost a phobia.
She also hates fighting. She can't do it; she just shuts down. And
unfortunately there is a good deal of fighting in being the public face of an
organization.
So although Jessica more than anyone made YC unique, the very qualities that
enabled her to do it mean she tends to get written out of YC's history.
Everyone buys this story that PG started YC and his wife just kind of helped.
Even YC's haters buy it. A couple years ago when people were attacking us for
not funding more female founders (than exist), they all treated YC as
identical with PG. It would have spoiled the narrative to acknowledge
Jessica's central role at YC.
Jessica was boiling mad that people were accusing _her_ company of sexism.
I've never seen her angrier about anything. But she did not contradict them.
Not publicly. In private there was a great deal of profanity. And she wrote
three separate essays about the question of female founders. But she could
never bring herself to publish any of them. She'd seen the level of vitriol in
this debate, and she shrank from engaging. [3]
It wasn't just because she disliked fighting. She's so sensitive to character
that it repels her even to fight with dishonest people. The idea of mixing it
up with linkbait journalists or Twitter trolls would seem to her not merely
frightening, but disgusting.
But Jessica knew her example as a successful female founder would encourage
more women to start companies, so last year she did something YC had never
done before and hired a PR firm to get her some interviews. At one of the
first she did, the reporter brushed aside her insights about startups and
turned it into a sensationalistic story about how some guy had tried to chat
her up as she was waiting outside the bar where they had arranged to meet.
Jessica was mortified, partly because the guy had done nothing wrong, but more
because the story treated her as a victim significant only for being a woman,
rather than one of the most knowledgeable investors in the Valley.
After that she told the PR firm to stop.
You're not going to be hearing in the press about what Jessica has achieved.
So let me tell you what Jessica has achieved. Y Combinator is fundamentally a
nexus of people, like a university. It doesn't make a product. What defines it
is the people. Jessica more than anyone curated and nurtured that collection
of people. In that sense she literally made YC.
Jessica knows more about the qualities of startup founders than anyone else
ever has. Her immense data set and x-ray vision are the perfect storm in that
respect. The qualities of the founders are the best predictor of how a startup
will do. And startups are in turn the most important source of growth in
mature economies.
The person who knows the most about the most important factor in the growth of
mature economies — that is who Jessica Livingston is. Doesn't that sound like
someone who should be better known?
**Notes**
[1] Harj Taggar reminded me that while Jessica didn't ask many questions, they
tended to be important ones:
"She was always good at sniffing out any red flags about the team or their
determination and disarmingly asking the right question, which usually
revealed more than the founders realized."
[2] Or more precisely, while she likes getting attention in the sense of
getting credit for what she has done, she doesn't like getting attention in
the sense of being watched in real time. Unfortunately, not just for her but
for a lot of people, how much you get of the former depends a lot on how much
you get of the latter.
Incidentally, if you saw Jessica at a public event, you would never guess she
hates attention, because (a) she is very polite and (b) when she's nervous,
she expresses it by smiling more.
[3] The existence of people like Jessica is not just something the mainstream
media needs to learn to acknowledge, but something feminists need to learn to
acknowledge as well. There are successful women who don't like to fight. Which
means if the public conversation about women consists of fighting, their
voices will be silenced.
There's a sort of Gresham's Law of conversations. If a conversation reaches a
certain level of incivility, the more thoughtful people start to leave. No one
understands female founders better than Jessica. But it's unlikely anyone will
ever hear her speak candidly about the topic. She ventured a toe in that water
a while ago, and the reaction was so violent that she decided "never again."
**Thanks** to Sam Altman, Paul Buchheit, Patrick Collison, Daniel Gackle,
Carolynn Levy, Jon Levy, Kirsty Nathoo, Robert Morris, Geoff Ralston, and Harj
Taggar for reading drafts of this. And yes, Jessica Livingston, who made me
cut surprisingly little.
January 2004
Have you ever seen an old photo of yourself and been embarrassed at the way
you looked? _Did we actually dress like that?_ We did. And we had no idea how
silly we looked. It's the nature of fashion to be invisible, in the same way
the movement of the earth is invisible to all of us riding on it.
What scares me is that there are moral fashions too. They're just as
arbitrary, and just as invisible to most people. But they're much more
dangerous. Fashion is mistaken for good design; moral fashion is mistaken for
good. Dressing oddly gets you laughed at. Violating moral fashions can get you
fired, ostracized, imprisoned, or even killed.
If you could travel back in a time machine, one thing would be true no matter
where you went: you'd have to watch what you said. Opinions we consider
harmless could have gotten you in big trouble. I've already said at least one
thing that would have gotten me in big trouble in most of Europe in the
seventeenth century, and did get Galileo in big trouble when he said it — that
the earth moves. [1]
It seems to be a constant throughout history: In every period, people believed
things that were just ridiculous, and believed them so strongly that you would
have gotten in terrible trouble for saying otherwise.
Is our time any different? To anyone who has read any amount of history, the
answer is almost certainly no. It would be a remarkable coincidence if ours
were the first era to get everything just right.
It's tantalizing to think we believe things that people in the future will
find ridiculous. What _would_ someone coming back to visit us in a time
machine have to be careful not to say? That's what I want to study here. But I
want to do more than just shock everyone with the heresy du jour. I want to
find general recipes for discovering what you can't say, in any era.
**The Conformist Test**
Let's start with a test: Do you have any opinions that you would be reluctant
to express in front of a group of your peers?
If the answer is no, you might want to stop and think about that. If
everything you believe is something you're supposed to believe, could that
possibly be a coincidence? Odds are it isn't. Odds are you just think what
you're told.
The other alternative would be that you independently considered every
question and came up with the exact same answers that are now considered
acceptable. That seems unlikely, because you'd also have to make the same
mistakes. Mapmakers deliberately put slight mistakes in their maps so they can
tell when someone copies them. If another map has the same mistake, that's
very convincing evidence.
Like every other era in history, our moral map almost certainly contains a few
mistakes. And anyone who makes the same mistakes probably didn't do it by
accident. It would be like someone claiming they had independently decided in
1972 that bell-bottom jeans were a good idea.
If you believe everything you're supposed to now, how can you be sure you
wouldn't also have believed everything you were supposed to if you had grown
up among the plantation owners of the pre-Civil War South, or in Germany in
the 1930s — or among the Mongols in 1200, for that matter? Odds are you would
have.
Back in the era of terms like "well-adjusted," the idea seemed to be that
there was something wrong with you if you thought things you didn't dare say
out loud. This seems backward. Almost certainly, there is something wrong with
you if you _don't_ think things you don't dare say out loud.
**Trouble**
What can't we say? One way to find these ideas is simply to look at things
people do say, and get in trouble for. [2]
Of course, we're not just looking for things we can't say. We're looking for
things we can't say that are true, or at least have enough chance of being
true that the question should remain open. But many of the things people get
in trouble for saying probably do make it over this second, lower threshold.
No one gets in trouble for saying that 2 + 2 is 5, or that people in
Pittsburgh are ten feet tall. Such obviously false statements might be treated
as jokes, or at worst as evidence of insanity, but they are not likely to make
anyone mad. The statements that make people mad are the ones they worry might
be believed. I suspect the statements that make people maddest are those they
worry might be true.
If Galileo had said that people in Padua were ten feet tall, he would have
been regarded as a harmless eccentric. Saying the earth orbited the sun was
another matter. The church knew this would set people thinking.
Certainly, as we look back on the past, this rule of thumb works well. A lot
of the statements people got in trouble for seem harmless now. So it's likely
that visitors from the future would agree with at least some of the statements
that get people in trouble today. Do we have no Galileos? Not likely.
To find them, keep track of opinions that get people in trouble, and start
asking, could this be true? Ok, it may be heretical (or whatever modern
equivalent), but might it also be true?
**Heresy**
This won't get us all the answers, though. What if no one happens to have
gotten in trouble for a particular idea yet? What if some idea would be so
radioactively controversial that no one would dare express it in public? How
can we find these too?
Another approach is to follow that word, heresy. In every period of history,
there seem to have been labels that got applied to statements to shoot them
down before anyone had a chance to ask if they were true or not. "Blasphemy",
"sacrilege", and "heresy" were such labels for a good part of western history,
as in more recent times "indecent", "improper", and "unamerican" have been. By
now these labels have lost their sting. They always do. By now they're mostly
used ironically. But in their time, they had real force.
The word "defeatist", for example, has no particular political connotations
now. But in Germany in 1917 it was a weapon, used by Ludendorff in a purge of
those who favored a negotiated peace. At the start of World War II it was used
extensively by Churchill and his supporters to silence their opponents. In
1940, any argument against Churchill's aggressive policy was "defeatist". Was
it right or wrong? Ideally, no one got far enough to ask that.
We have such labels today, of course, quite a lot of them, from the all-purpose "inappropriate" to the dreaded "divisive." In any period, it should be
easy to figure out what such labels are, simply by looking at what people call
ideas they disagree with besides untrue. When a politician says his opponent
is mistaken, that's a straightforward criticism, but when he attacks a
statement as "divisive" or "racially insensitive" instead of arguing that it's
false, we should start paying attention.
So another way to figure out which of our taboos future generations will laugh
at is to start with the labels. Take a label — "sexist", for example — and try
to think of some ideas that would be called that. Then for each ask, might
this be true?
Just start listing ideas at random? Yes, because they won't really be random.
The ideas that come to mind first will be the most plausible ones. They'll be
things you've already noticed but didn't let yourself think.
In 1989 some clever researchers tracked the eye movements of radiologists as
they scanned chest images for signs of lung cancer. [3] They found that even
when the radiologists missed a cancerous lesion, their eyes had usually paused
at the site of it. Part of their brain knew there was something there; it just
didn't percolate all the way up into conscious knowledge. I think many
interesting heretical thoughts are already mostly formed in our minds. If we
turn off our self-censorship temporarily, those will be the first to emerge.
**Time and Space**
If we could look into the future it would be obvious which of our taboos
they'd laugh at. We can't do that, but we can do something almost as good: we
can look into the past. Another way to figure out what we're getting wrong is
to look at what used to be acceptable and is now unthinkable.
Changes between the past and the present sometimes do represent progress. In a
field like physics, if we disagree with past generations it's because we're
right and they're wrong. But this becomes rapidly less true as you move away
from the certainty of the hard sciences. By the time you get to social
questions, many changes are just fashion. The age of consent fluctuates like
hemlines.
We may imagine that we are a great deal smarter and more virtuous than past
generations, but the more history you read, the less likely this seems. People
in past times were much like us. Not heroes, not barbarians. Whatever their
ideas were, they were ideas reasonable people could believe.
So here is another source of interesting heresies. Diff present ideas against
those of various past cultures, and see what you get. [4] Some will be
shocking by present standards. Ok, fine; but which might also be true?
You don't have to look into the past to find big differences. In our own time,
different societies have wildly varying ideas of what's ok and what isn't. So
you can try diffing other cultures' ideas against ours as well. (The best way
to do that is to visit them.) Any idea that's considered harmless in a
significant percentage of times and places, and yet is taboo in ours, is a
candidate for something we're mistaken about.
For example, at the high water mark of political correctness in the early
1990s, Harvard distributed to its faculty and staff a brochure saying, among
other things, that it was inappropriate to compliment a colleague or student's
clothes. No more "nice shirt." I think this principle is rare among the
world's cultures, past or present. There are probably more where it's
considered especially polite to compliment someone's clothing than where it's
considered improper. Odds are this is, in a mild form, an example of one of
the taboos a visitor from the future would have to be careful to avoid if he
happened to set his time machine for Cambridge, Massachusetts, 1992. [5]
**Prigs**
Of course, if they have time machines in the future they'll probably have a
separate reference manual just for Cambridge. This has always been a fussy
place, a town of i dotters and t crossers, where you're liable to get both
your grammar and your ideas corrected in the same conversation. And that
suggests another way to find taboos. Look for prigs, and see what's inside
their heads.
Kids' heads are repositories of all our taboos. It seems fitting to us that
kids' ideas should be bright and clean. The picture we give them of the world
is not merely simplified, to suit their developing minds, but sanitized as
well, to suit our ideas of what kids ought to think. [6]
You can see this on a small scale in the matter of dirty words. A lot of my
friends are starting to have children now, and they're all trying not to use
words like "fuck" and "shit" within baby's hearing, lest baby start using
these words too. But these words are part of the language, and adults use them
all the time. So parents are giving their kids an inaccurate idea of the
language by not using them. Why do they do this? Because they don't think it's
fitting that kids should use the whole language. We like children to seem
innocent. [7]
Most adults, likewise, deliberately give kids a misleading view of the world.
One of the most obvious examples is Santa Claus. We think it's cute for little
kids to believe in Santa Claus. I myself think it's cute for little kids to
believe in Santa Claus. But one wonders, do we tell them this stuff for their
sake, or for ours?
I'm not arguing for or against this idea here. It is probably inevitable that
parents should want to dress up their kids' minds in cute little baby outfits.
I'll probably do it myself. The important thing for our purposes is that, as a
result, a well brought-up teenage kid's brain is a more or less complete
collection of all our taboos — and in mint condition, because they're
untainted by experience. Whatever we think that will later turn out to be
ridiculous, it's almost certainly inside that head.
How do we get at these ideas? By the following thought experiment. Imagine a
kind of latter-day Conrad character who has worked for a time as a mercenary
in Africa, for a time as a doctor in Nepal, for a time as the manager of a
nightclub in Miami. The specifics don't matter — just someone who has seen a
lot. Now imagine comparing what's inside this guy's head with what's inside
the head of a well-behaved sixteen year old girl from the suburbs. What does
he think that would shock her? He knows the world; she knows, or at least
embodies, present taboos. Subtract one from the other, and the result is what
we can't say.
**Mechanism**
I can think of one more way to figure out what we can't say: to look at how
taboos are created. How do moral fashions arise, and why are they adopted? If
we can understand this mechanism, we may be able to see it at work in our own
time.
Moral fashions don't seem to be created the way ordinary fashions are.
Ordinary fashions seem to arise by accident when everyone imitates the whim of
some influential person. The fashion for broad-toed shoes in late fifteenth
century Europe began because Charles VIII of France had six toes on one foot.
The fashion for the name Gary began when the actor Frank Cooper adopted the
name of a tough mill town in Indiana. Moral fashions more often seem to be
created deliberately. When there's something we can't say, it's often because
some group doesn't want us to.
The prohibition will be strongest when the group is nervous. The irony of
Galileo's situation was that he got in trouble for repeating Copernicus's
ideas. Copernicus himself didn't. In fact, Copernicus was a canon of a
cathedral, and dedicated his book to the pope. But by Galileo's time the
church was in the throes of the Counter-Reformation and was much more worried
about unorthodox ideas.
To launch a taboo, a group has to be poised halfway between weakness and
power. A confident group doesn't need taboos to protect it. It's not
considered improper to make disparaging remarks about Americans, or the
English. And yet a group has to be powerful enough to enforce a taboo.
Coprophiles, as of this writing, don't seem to be numerous or energetic enough
to have had their interests promoted to a lifestyle.
I suspect the biggest source of moral taboos will turn out to be power
struggles in which one side only barely has the upper hand. That's where
you'll find a group powerful enough to enforce taboos, but weak enough to need
them.
Most struggles, whatever they're really about, will be cast as struggles
between competing ideas. The English Reformation was at bottom a struggle for
wealth and power, but it ended up being cast as a struggle to preserve the
souls of Englishmen from the corrupting influence of Rome. It's easier to get
people to fight for an idea. And whichever side wins, their ideas will also be
considered to have triumphed, as if God wanted to signal his agreement by
selecting that side as the victor.
We often like to think of World War II as a triumph of freedom over
totalitarianism. We conveniently forget that the Soviet Union was also one of
the winners.
I'm not saying that struggles are never about ideas, just that they will
always be made to seem to be about ideas, whether they are or not. And just as
there is nothing so unfashionable as the last, discarded fashion, there is
nothing so wrong as the principles of the most recently defeated opponent.
Representational art is only now recovering from the approval of both Hitler
and Stalin. [8]
Although moral fashions tend to arise from different sources than fashions in
clothing, the mechanism of their adoption seems much the same. The early
adopters will be driven by ambition: self-consciously cool people who want to
distinguish themselves from the common herd. As the fashion becomes
established they'll be joined by a second, much larger group, driven by fear.
[9] This second group adopts the fashion not because they want to stand out but
because they are afraid of standing out.
So if you want to figure out what we can't say, look at the machinery of
fashion and try to predict what it would make unsayable. What groups are
powerful but nervous, and what ideas would they like to suppress? What ideas
were tarnished by association when they ended up on the losing side of a
recent struggle? If a self-consciously cool person wanted to differentiate
himself from preceding fashions (e.g. from his parents), which of their ideas
would he tend to reject? What are conventional-minded people afraid of saying?
This technique won't find us all the things we can't say. I can think of some
that aren't the result of any recent struggle. Many of our taboos are rooted
deep in the past. But this approach, combined with the preceding four, will
turn up a good number of unthinkable ideas.
**Why**
Some would ask, why would one want to do this? Why deliberately go poking
around among nasty, disreputable ideas? Why look under rocks?
I do it, first of all, for the same reason I did look under rocks as a kid:
plain curiosity. And I'm especially curious about anything that's forbidden.
Let me see and decide for myself.
Second, I do it because I don't like the idea of being mistaken. If, like
other eras, we believe things that will later seem ridiculous, I want to know
what they are so that I, at least, can avoid believing them.
Third, I do it because it's good for the brain. To do good work you need a
brain that can go anywhere. And you especially need a brain that's in the
habit of going where it's not supposed to.
Great work tends to grow out of ideas that others have overlooked, and no idea
is so overlooked as one that's unthinkable. Natural selection, for example.
It's so simple. Why didn't anyone think of it before? Well, that is all too
obvious. Darwin himself was careful to tiptoe around the implications of his
theory. He wanted to spend his time thinking about biology, not arguing with
people who accused him of being an atheist.
In the sciences, especially, it's a great advantage to be able to question
assumptions. The m.o. of scientists, or at least of the good ones, is
precisely that: look for places where conventional wisdom is broken, and then
try to pry apart the cracks and see what's underneath. That's where new
theories come from.
A good scientist, in other words, does not merely ignore conventional wisdom,
but makes a special effort to break it. Scientists go looking for trouble.
This should be the m.o. of any scholar, but scientists seem much more willing
to look under rocks. [10]
Why? It could be that the scientists are simply smarter; most physicists
could, if necessary, make it through a PhD program in French literature, but
few professors of French literature could make it through a PhD program in
physics. Or it could be because it's clearer in the sciences whether theories
are true or false, and this makes scientists bolder. (Or it could be that,
because it's clearer in the sciences whether theories are true or false, you
have to be smart to get jobs as a scientist, rather than just a good
politician.)
Whatever the reason, there seems a clear correlation between intelligence and
willingness to consider shocking ideas. This isn't just because smart people
actively work to find holes in conventional thinking. I think conventions also
have less hold over them to start with. You can see that in the way they
dress.
It's not only in the sciences that heresy pays off. In any competitive field,
you can [win big](avg.html) by seeing things that others daren't. And in every
field there are probably heresies few dare utter. Within the US car industry
there is a lot of hand-wringing now about declining market share. Yet the
cause is so obvious that any observant outsider could explain it in a second:
they make bad cars. And they have for so long that by now the US car brands
are antibrands — something you'd buy a car despite, not because of. Cadillac
stopped being the Cadillac of cars in about 1970. And yet I suspect no one
dares say this. [11] Otherwise these companies would have tried to fix the
problem.
Training yourself to think unthinkable thoughts has advantages beyond the
thoughts themselves. It's like stretching. When you stretch before running,
you put your body into positions much more extreme than any it will assume
during the run. If you can think things so outside the box that they'd make
people's hair stand on end, you'll have no trouble with the small trips
outside the box that people call innovative.
**_Pensieri Stretti_**
When you find something you can't say, what do you do with it? My advice is,
don't say it. Or at least, pick your battles.
Suppose in the future there is a movement to ban the color yellow. Proposals
to paint anything yellow are denounced as "yellowist", as is anyone suspected
of liking the color. People who like orange are tolerated but viewed with
suspicion. Suppose you realize there is nothing wrong with yellow. If you go
around saying this, you'll be denounced as a yellowist too, and you'll find
yourself having a lot of arguments with anti-yellowists. If your aim in life
is to rehabilitate the color yellow, that may be what you want. But if you're
mostly interested in other questions, being labelled as a yellowist will just
be a distraction. Argue with idiots, and you become an idiot.
The most important thing is to be able to think what you want, not to say what
you want. And if you feel you have to say everything you think, it may inhibit
you from thinking improper thoughts. I think it's better to follow the
opposite policy. Draw a sharp line between your thoughts and your speech.
Inside your head, anything is allowed. Within my head I make a point of
encouraging the most outrageous thoughts I can imagine. But, as in a secret
society, nothing that happens within the building should be told to outsiders.
The first rule of Fight Club is, you do not talk about Fight Club.
When Milton was going to visit Italy in the 1630s, Sir Henry Wotton, who had
been ambassador to Venice, told him his motto should be _"i pensieri stretti &
il viso sciolto."_ Closed thoughts and an open face. Smile at everyone, and
don't tell them what you're thinking. This was wise advice. Milton was an
argumentative fellow, and the Inquisition was a bit restive at that time. But
I think the difference between Milton's situation and ours is only a matter of
degree. Every era has its heresies, and if you don't get imprisoned for them
you will at least get in enough trouble that it becomes a complete
distraction.
I admit it seems cowardly to keep quiet. When I read about the harassment to
which the Scientologists subject their critics [12], or that pro-Israel groups
are "compiling dossiers" on those who speak out against Israeli human rights
abuses [13], or about people being sued for violating the DMCA [14], part of
me wants to say, "All right, you bastards, bring it on." The problem is, there
are so many things you can't say. If you said them all you'd have no time left
for your real work. You'd have to turn into Noam Chomsky. [15]
The trouble with keeping your thoughts secret, though, is that you lose the
advantages of discussion. Talking about an idea leads to more ideas. So the
optimal plan, if you can manage it, is to have a few trusted friends you can
speak openly to. This is not just a way to develop ideas; it's also a good
rule of thumb for choosing friends. The people you can say heretical things to
without getting jumped on are also the most interesting to know.
**_Viso Sciolto?_**
I don't think we need the _viso sciolto_ so much as the _pensieri stretti._
Perhaps the best policy is to make it plain that you don't agree with whatever
zealotry is current in your time, but not to be too specific about what you
disagree with. Zealots will try to draw you out, but you don't have to answer
them. If they try to force you to treat a question on their terms by asking
"are you with us or against us?" you can always just answer "neither".
Better still, answer "I haven't decided." That's what Larry Summers did when a
group tried to put him in this position. Explaining himself later, he said "I
don't do litmus tests." [16] A lot of the questions people get hot about are
actually quite complicated. There is no prize for getting the answer quickly.
If the anti-yellowists seem to be getting out of hand and you want to fight
back, there are ways to do it without getting yourself accused of being a
yellowist. Like skirmishers in an ancient army, you want to avoid directly
engaging the main body of the enemy's troops. Better to harass them with
arrows from a distance.
One way to do this is to ratchet the debate up one level of abstraction. If
you argue against censorship in general, you can avoid being accused of
whatever heresy is contained in the book or film that someone is trying to
censor. You can attack labels with meta-labels: labels that refer to the use
of labels to prevent discussion. The spread of the term "political
correctness" meant the beginning of the end of political correctness, because
it enabled one to attack the phenomenon as a whole without being accused of
any of the specific heresies it sought to suppress.
Another way to counterattack is with metaphor. Arthur Miller undermined the
House Un-American Activities Committee by writing a play, "The Crucible,"
about the Salem witch trials. He never referred directly to the committee and
so gave them no way to reply. What could HUAC do, defend the Salem witch
trials? And yet Miller's metaphor stuck so well that to this day the
activities of the committee are often described as a "witch-hunt."
Best of all, probably, is humor. Zealots, whatever their cause, invariably
lack a sense of humor. They can't reply in kind to jokes. They're as unhappy
on the territory of humor as a mounted knight on a skating rink. Victorian
prudishness, for example, seems to have been defeated mainly by treating it as
a joke. Likewise its reincarnation as political correctness. "I am glad that I
managed to write 'The Crucible,'" Arthur Miller wrote, "but looking back I
have often wished I'd had the temperament to do an absurd comedy, which is
what the situation deserved." [17]
**ABQ**
A Dutch friend says I should use Holland as an example of a tolerant society.
It's true they have a long tradition of comparative open-mindedness. For
centuries the low countries were the place to go to say things you couldn't
say anywhere else, and this helped to make the region a center of scholarship
and industry (which have been closely tied for longer than most people
realize). Descartes, though claimed by the French, did much of his thinking in
Holland.
And yet, I wonder. The Dutch seem to live their lives up to their necks in
rules and regulations. There's so much you can't do there; is there really
nothing you can't say?
Certainly the fact that they value open-mindedness is no guarantee. Who thinks
they're not open-minded? Our hypothetical prim miss from the suburbs thinks
she's open-minded. Hasn't she been taught to be? Ask anyone, and they'll say
the same thing: they're pretty open-minded, though they draw the line at
things that are really wrong. (Some tribes may avoid "wrong" as judgemental,
and may instead use a more neutral sounding euphemism like "negative" or
"destructive".)
When people are bad at math, they know it, because they get the wrong answers
on tests. But when people are bad at open-mindedness they don't know it. In
fact they tend to think the opposite. Remember, it's the nature of fashion to
be invisible. It wouldn't work otherwise. Fashion doesn't seem like fashion to
someone in the grip of it. It just seems like the right thing to do. It's only
by looking from a distance that we see oscillations in people's idea of the
right thing to do, and can identify them as fashions.
Time gives us such distance for free. Indeed, the arrival of new fashions
makes old fashions easy to see, because they seem so ridiculous by contrast.
From one end of a pendulum's swing, the other end seems especially far away.
To see fashion in your own time, though, requires a conscious effort. Without
time to give you distance, you have to create distance yourself. Instead of
being part of the mob, stand as far away from it as you can and watch what
it's doing. And pay especially close attention whenever an idea is being
suppressed. Web filters for children and employees often ban sites containing
pornography, violence, and hate speech. What counts as pornography and
violence? And what, exactly, is "hate speech?" This sounds like a phrase out
of _1984._
Labels like that are probably the biggest external clue. If a statement is
false, that's the worst thing you can say about it. You don't need to say that
it's heretical. And if it isn't false, it shouldn't be suppressed. So when you
see statements being attacked as x-ist or y-ic (substitute your current values
of x and y), whether in 1630 or 2030, that's a sure sign that something is
wrong. When you hear such labels being used, ask why.
Especially if you hear yourself using them. It's not just the mob you need to
learn to watch from a distance. You need to be able to watch your own thoughts
from a distance. That's not a radical idea, by the way; it's the main
difference between children and adults. When a child gets angry because he's
tired, he doesn't know what's happening. An adult can distance himself enough
from the situation to say "never mind, I'm just tired." I don't see why one
couldn't, by a similar process, learn to recognize and discount the effects of
moral fashions.
You have to take that extra step if you want to think clearly. But it's
harder, because now you're working against social customs instead of with
them. Everyone encourages you to grow up to the point where you can discount
your own bad moods. Few encourage you to continue to the point where you can
discount society's bad moods.
How can you see the wave, when you're the water? Always be questioning. That's
the only defence. What can't you say? And why?
[_**Notes**_](http://www.paulgraham.com/saynotes.html)
**Thanks** to Sarah Harlin, Trevor Blackwell, Jessica Livingston, Robert
Morris, Eric Raymond and Bob van der Zwaan for reading drafts of this essay,
and to Lisa Randall, Jackie McDonough, Ryan Stanley and Joel Rainey for
conversations about heresy. Needless to say they bear no blame for opinions
expressed in it, and especially for opinions _not_ expressed in it.
|
August 2013
The biggest component in most investors' opinion of you is the opinion of
other investors. Which is of course a recipe for exponential growth. When one
investor wants to invest in you, that makes other investors want to, which
makes others want to, and so on.
Sometimes inexperienced founders mistakenly conclude that manipulating these
forces is the essence of fundraising. They hear stories about stampedes to
invest in successful startups, and think it's therefore the mark of a
successful startup to have this happen. But actually the two are not that
highly correlated. Lots of startups that cause stampedes end up flaming out
(in extreme cases, partly as a result of the stampede), and lots of very
successful startups were only moderately popular with investors the first time
they raised money.
So the point of this essay is not to explain how to create a stampede, but
merely to explain the forces that generate them. These forces are always at
work to some degree in fundraising, and they can cause surprising situations.
If you understand them, you can at least avoid being surprised.
One reason investors like you more when other investors like you is that you
actually become a better investment. Raising money decreases the risk of
failure. Indeed, although investors hate it, you are for this reason justified
in raising your valuation for later investors. The investors who invested when
you had no money were taking more risk, and are entitled to higher returns.
Plus a company that has raised money is literally more valuable. After you
raise the first million dollars, the company is at least a million dollars
more valuable, because it's the same company as before, plus it has a million
dollars in the bank. [1]
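To make that last point concrete, here is a toy back-of-the-envelope in code. The valuations are invented numbers for illustration, not a claim about any actual round:

```python
# Toy arithmetic behind "a company that has raised money is
# literally more valuable." All numbers are invented.
pre_money = 4_000_000   # hypothetical value agreed before the round
raised    = 1_000_000   # cash the round puts in the bank

# Post-money: the same company as before, plus the cash.
post_money = pre_money + raised
print(post_money)  # 5000000

# The earlier investor's ownership reflects the riskier price:
first_investor_share = raised / post_money
print(round(first_investor_share, 2))  # 0.2, i.e. 20% for $1M at $5M post
```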
Beware, though, because later investors so hate to have the price raised on
them that they resist even this self-evident reasoning. Only raise the price
on an investor you're comfortable with losing, because some will angrily
refuse. [2]
The second reason investors like you more when you've had some success at
fundraising is that it makes you more confident, and an investor's opinion of
[you](convince.html) is the foundation of their opinion of your company.
Founders are often surprised how quickly investors seem to know when they
start to succeed at raising money. And while there are in fact lots of ways
for such information to spread among investors, the main vector is probably
the founders themselves. Though they're often clueless about technology, most
investors are pretty good at reading people. When fundraising is going well,
investors are quick to sense it in your increased confidence. (This is one
case where the average founder's inability to remain poker-faced works to your
advantage.)
But frankly the most important reason investors like you more when you've
started to raise money is that they're bad at judging startups. Judging
startups is hard even for the best investors. The mediocre ones might as well
be flipping coins. So when mediocre investors see that lots of other people
want to invest in you, they assume there must be a reason. This leads to the
phenomenon known in the Valley as the "hot deal," where you have more interest
from investors than you can handle.
The best investors aren't influenced much by the opinion of other investors.
It would only dilute their own judgment to average it together with other
people's. But they are indirectly influenced in the practical sense that
interest from other investors imposes a deadline. This is the fourth way in
which offers beget offers. If you start to get far along the track toward an
offer with one firm, it will sometimes provoke other firms, even good ones, to
make up their minds, lest they lose the deal.
Unless you're a wizard at negotiation (and if you're not sure, you're not) be
very careful about exaggerating this to push a good investor to decide.
Founders try this sort of thing all the time, and investors are very sensitive
to it. If anything oversensitive. But you're safe so long as you're telling
the truth. If you're getting far along with investor B, but you'd rather raise
money from investor A, you can tell investor A that this is happening. There's
no manipulation in that. You're genuinely in a bind, because you really would
rather raise money from A, but you can't safely reject an offer from B when
it's still uncertain what A will decide.
Do not, however, tell A who B is. VCs will sometimes ask which other VCs
you're talking to, but you should never tell them. Angels you can sometimes
tell about other angels, because angels cooperate more with one another. But
if VCs ask, just point out that they wouldn't want you telling other firms
about your conversations, and you feel obliged to do the same for any firm you
talk to. If they push you, point out that you're inexperienced at fundraising
— which is always a safe card to play — and you feel you have to be extra
cautious. [3]
While few startups will experience a stampede of interest, almost all will at
least initially experience the other side of this phenomenon, where the herd
remains clumped together at a distance. The fact that investors are so much
influenced by other investors' opinions means you always start out in
something of a hole. So don't be demoralized by how hard it is to get the
first commitment, because much of the difficulty comes from this external
force. The second will be easier.
**Notes**
[1] An accountant might say that a company that has raised a million dollars
is no richer if it's convertible debt, but in practice money raised as
convertible debt is little different from money raised in an equity round.
[2] Founders are often surprised by this, but investors can get very
emotional. Or rather indignant; that's the main emotion I've observed; but it
is very common, to the point where it sometimes causes investors to act
against their own interests. I know of one investor who invested in a startup
at a $15 million valuation cap. Earlier he'd had an opportunity to invest at a
$5 million cap, but he refused because a friend who invested earlier had been
able to invest at a $3 million cap.
[3] If an investor pushes you hard to tell them about your conversations with
other investors, is this someone you want as an investor?
**Thanks** to Paul Buchheit, Jessica Livingston, Geoff Ralston, and Garry Tan
for reading drafts of this.
|
November 2019
If you discover something new, there's a significant chance you'll be accused
of some form of heresy.
To discover new things, you have to work on ideas that are good but non-
obvious; if an idea is obviously good, other people are probably already
working on it. One common way for a good idea to be non-obvious is for it to
be hidden in the shadow of some mistaken assumption that people are very
attached to. But anything you discover from working on such an idea will tend
to contradict the mistaken assumption that was concealing it. And you will
thus get a lot of heat from people attached to the mistaken assumption.
Galileo and Darwin are famous examples of this phenomenon, but it's probably
always an ingredient in the resistance to new ideas.
So it's particularly dangerous for an organization or society to have a
culture of pouncing on heresy. When you suppress heresies, you don't just
prevent people from contradicting the mistaken assumption you're trying to
protect. You also suppress any idea that implies indirectly that it's false.
Every cherished mistaken assumption has a dead zone of unexplored ideas around
it. And the more preposterous the assumption, the bigger the dead zone it
creates.
There is a positive side to this phenomenon though. If you're looking for new
ideas, one way to find them is by [_looking for heresies_](say.html). When you
look at the question this way, the depressingly large dead zones around
mistaken assumptions become excitingly large mines of new ideas.
|
December 2005
The most impressive people I know are all terrible procrastinators. So could
it be that procrastination isn't always bad?
Most people who write about procrastination write about how to cure it. But
this is, strictly speaking, impossible. There are an infinite number of things
you could be doing. No matter what you work on, you're not working on
everything else. So the question is not how to avoid procrastination, but how
to procrastinate well.
There are three variants of procrastination, depending on what you do instead
of working on something: you could work on (a) nothing, (b) something less
important, or (c) something more important. That last type, I'd argue, is good
procrastination.
That's the "absent-minded professor," who forgets to shave, or eat, or even
perhaps look where he's going while he's thinking about some interesting
question. His mind is absent from the everyday world because it's hard at work
in another.
That's the sense in which the most impressive people I know are all
procrastinators. They're type-C procrastinators: they put off working on small
stuff to work on big stuff.
What's "small stuff?" Roughly, work that has zero chance of being mentioned in
your obituary. It's hard to say at the time what will turn out to be your best
work (will it be your magnum opus on Sumerian temple architecture, or the
detective thriller you wrote under a pseudonym?), but there's a whole class of
tasks you can safely rule out: shaving, doing your laundry, cleaning the
house, writing thank-you notes—anything that might be called an errand.
Good procrastination is avoiding errands to do real work.
Good in a sense, at least. The people who want you to do the errands won't
think it's good. But you probably have to annoy them if you want to get
anything done. The mildest seeming people, if they want to do real work, all
have a certain degree of ruthlessness when it comes to avoiding errands.
Some errands, like replying to letters, go away if you ignore them (perhaps
taking friends with them). Others, like mowing the lawn, or filing tax
returns, only get worse if you put them off. In principle it shouldn't work to
put off the second kind of errand. You're going to have to do whatever it is
eventually. Why not (as past-due notices are always saying) do it now?
The reason it pays to put off even those errands is that real work needs two
things errands don't: big chunks of time, and the right mood. If you get
inspired by some project, it can be a net win to blow off everything you were
supposed to do for the next few days to work on it. Yes, those errands may
cost you more time when you finally get around to them. But if you get a lot
done during those few days, you will be net more productive.
In fact, it may not be a difference in degree, but a difference in kind. There
may be types of work that can only be done in long, uninterrupted stretches,
when inspiration hits, rather than dutifully in scheduled little slices.
Empirically it seems to be so. When I think of the people I know who've done
great things, I don't imagine them dutifully crossing items off to-do lists. I
imagine them sneaking off to work on some new idea.
Conversely, forcing someone to perform errands synchronously is bound to limit
their productivity. The cost of an interruption is not just the time it takes,
but that it breaks the time on either side in half. You probably only have to
interrupt someone a couple times a day before they're unable to work on hard
problems at all.
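A toy model shows why. The day length and interruption times below are invented for illustration; the point is that each interruption splits the day, and what hard problems need is the largest surviving block, not the total:

```python
# Toy model of the cost of interruptions. Times are hours into
# an 8-hour day; the numbers are invented for illustration.
def largest_block(day_length, interruptions):
    """Longest uninterrupted stretch left in the day."""
    points = [0] + sorted(interruptions) + [day_length]
    return max(b - a for a, b in zip(points, points[1:]))

print(largest_block(8, []))      # 8: a whole day for one hard problem
print(largest_block(8, [3, 5]))  # 3: two interruptions, and no block
                                 #    longer than 3 hours survives
```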
I've wondered a lot about why [startups](start.html) are most productive at
the very beginning, when they're just a couple guys in an apartment. The main
reason may be that there's no one to interrupt them yet. In theory it's good
when the founders finally get enough money to hire people to do some of the
work for them. But it may be better to be overworked than interrupted. Once
you dilute a startup with ordinary office workers—with type-B
procrastinators—the whole company starts to resonate at their frequency.
They're interrupt-driven, and soon you are too.
Errands are so effective at killing great projects that a lot of people use
them for that purpose. Someone who has decided to write a novel, for example,
will suddenly find that the house needs cleaning. People who fail to write
novels don't do it by sitting in front of a blank page for days without
writing anything. They do it by feeding the cat, going out to buy something
they need for their apartment, meeting a friend for coffee, checking email. "I
don't have time to work," they say. And they don't; they've made sure of that.
(There's also a variant where one has no place to work. The cure is to visit
the places where famous people worked, and see how unsuitable they were.)
I've used both these excuses at one time or another. I've learned a lot of
tricks for making myself work over the last 20 years, but even now I don't win
consistently. Some days I get real work done. Other days are eaten up by
errands. And I know it's usually my fault: I _let_ errands eat up the day, to
avoid facing some hard problem.
The most dangerous form of procrastination is unacknowledged type-B
procrastination, because it doesn't feel like procrastination. You're "getting
things done." Just the wrong things.
Any advice about procrastination that concentrates on crossing things off your
to-do list is not only incomplete, but positively misleading, if it doesn't
consider the possibility that the to-do list is itself a form of type-B
procrastination. In fact, possibility is too weak a word. Nearly everyone's
is. Unless you're working on the biggest things you could be working on,
you're type-B procrastinating, no matter how much you're getting done.
In his famous essay [You and Your Research](hamming.html) (which I recommend
to anyone ambitious, no matter what they're working on), Richard Hamming
suggests that you ask yourself three questions:
1. What are the most important problems in your field?
2. Are you working on one of them?
3. Why not?
Hamming was at Bell Labs when he started asking such questions. In principle
anyone there ought to have been able to work on the most important problems in
their field. Perhaps not everyone can make an equally dramatic mark on the
world; I don't know; but whatever your capacities, there are projects that
stretch them. So Hamming's exercise can be generalized to:
> What's the best thing you could be working on, and why aren't you?
Most people will shy away from this question. I shy away from it myself; I see
it there on the page and quickly move on to the next sentence. Hamming used to
go around actually asking people this, and it didn't make him popular. But
it's a question anyone ambitious should face.
The trouble is, you may end up hooking a very big fish with this bait. To do
good work, you need to do more than find good projects. Once you've found
them, you have to get yourself to work on them, and that can be hard. The
bigger the problem, the harder it is to get yourself to work on it.
Of course, the main reason people find it difficult to work on a particular
problem is that they don't [enjoy](hs.html) it. When you're young, especially,
you often find yourself working on stuff you don't really like—because it
seems impressive, for example, or because you've been assigned to work on it.
Most grad students are stuck working on big problems they don't really like,
and grad school is thus synonymous with procrastination.
But even when you like what you're working on, it's easier to get yourself to
work on small problems than big ones. Why? Why is it so hard to work on big
problems? One reason is that you may not get any reward in the foreseeable
future. If you work on something you can finish in a day or two, you can
expect to have a nice feeling of accomplishment fairly soon. If the reward is
indefinitely far in the future, it seems less real.
Another reason people don't work on big projects is, ironically, fear of
wasting time. What if they fail? Then all the time they spent on it will be
wasted. (In fact it probably won't be, because work on hard projects almost
always leads somewhere.)
But the trouble with big problems can't be just that they promise no immediate
reward and might cause you to waste a lot of time. If that were all, they'd be
no worse than going to visit your in-laws. There's more to it than that. Big
problems are _terrifying_. There's an almost physical pain in facing them.
It's like having a vacuum cleaner hooked up to your imagination. All your
initial ideas get sucked out immediately, and you don't have any more, and yet
the vacuum cleaner is still sucking.
You can't look a big problem too directly in the eye. You have to approach it
somewhat obliquely. But you have to adjust the angle just right: you have to
be facing the big problem directly enough that you catch some of the
excitement radiating from it, but not so much that it paralyzes you. You can
tighten the angle once you get going, just as a sailboat can sail closer to
the wind once it gets underway.
If you want to work on big things, you seem to have to trick yourself into
doing it. You have to work on small things that could grow into big things, or
work on successively larger things, or split the moral load with
collaborators. It's not a sign of weakness to depend on such tricks. The very
best work has been done this way.
When I talk to people who've managed to make themselves work on big things, I
find that all blow off errands, and all feel guilty about it. I don't think
they should feel guilty. There's more to do than anyone could. So someone
doing the best work they can is inevitably going to leave a lot of errands
undone. It seems a mistake to feel bad about that.
I think the way to "solve" the problem of procrastination is to let delight
pull you instead of making a to-do list push you. Work on an ambitious project
you really enjoy, and sail as close to the wind as you can, and you'll leave
the right things undone.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this.
|
October 2006
_(This essay is derived from a talk at MIT.)_
Till recently graduating seniors had two choices: get a job or go to grad
school. I think there will increasingly be a third option: to start your own
startup. But how common will that be?
I'm sure the default will always be to get a job, but starting a startup could
well become as popular as grad school. In the late 90s my professor friends
used to complain that they couldn't get grad students, because all the
undergrads were going to work for startups. I wouldn't be surprised if that
situation returns, but with one difference: this time they'll be starting
their own instead of going to work for other people's.
The most ambitious students will at this point be asking: Why wait till you
graduate? Why not start a startup while you're in college? In fact, why go to
college at all? Why not start a startup instead?
A year and a half ago I gave a [talk](hiring.html) where I said that the
average age of the founders of Yahoo, Google, and Microsoft was 24, and that
if grad students could start startups, why not undergrads? I'm glad I phrased
that as a question, because now I can pretend it wasn't merely a rhetorical
one. At the time I couldn't imagine why there should be any lower limit for
the age of startup founders. Graduation is a bureaucratic change, not a
biological one. And certainly there are undergrads as competent technically as
most grad students. So why shouldn't undergrads be able to start startups as
well as grad students?
I now realize that something does change at graduation: you lose a huge excuse
for failing. Regardless of how complex your life is, you'll find that everyone
else, including your family and friends, will discard all the low bits and
regard you as having a single occupation at any given time. If you're in
college and have a summer job writing software, you still read as a student.
Whereas if you graduate and get a job programming, you'll be instantly
regarded by everyone as a programmer.
The problem with starting a startup while you're still in school is that
there's a built-in escape hatch. If you start a startup in the summer between
your junior and senior year, it reads to everyone as a summer job. So if it
goes nowhere, big deal; you return to school in the fall with all the other
seniors; no one regards you as a failure, because your occupation is student,
and you didn't fail at that. Whereas if you start a startup just one year
later, after you graduate, as long as you're not accepted to grad school in
the fall the startup reads to everyone as your occupation. You're now a
startup founder, so you have to do well at that.
For nearly everyone, the opinion of one's peers is the most powerful motivator
of all—more powerful even than the nominal goal of most startup founders,
getting rich. [1] About a month into each funding cycle we have an event
called Prototype Day where each startup presents to the others what they've
got so far. You might think they wouldn't need any more motivation. They're
working on their cool new idea; they have funding for the immediate future;
and they're playing a game with only two outcomes: wealth or failure. You'd
think that would be motivation enough. And yet the prospect of a demo pushes
most of them into a rush of activity.
Even if you start a startup explicitly to get rich, the money you might get
seems pretty theoretical most of the time. What drives you day to day is not
wanting to look bad.
You probably can't change that. Even if you could, I don't think you'd want
to; someone who really, truly doesn't care what his peers think of him is
probably a psychopath. So the best you can do is consider this force like a
wind, and set up your boat accordingly. If you know your peers are going to
push you in some direction, choose good peers, and position yourself so they
push you in a direction you like.
Graduation changes the prevailing winds, and those make a difference. Starting
a startup is so hard that it's a close call even for the ones that succeed.
However high a startup may be flying now, it probably has a few leaves stuck
in the landing gear from those trees it barely cleared at the end of the
runway. In such a close game, the smallest increase in the forces against you
can be enough to flick you over the edge into failure.
When we first started [Y Combinator](http://ycombinator.com) we encouraged
people to start startups while they were still in college. That's partly
because Y Combinator began as a kind of summer program. We've kept the program
shape—all of us having dinner together once a week turns out to be a good
idea—but we've decided now that the party line should be to tell people to
wait till they graduate.
Does that mean you can't start a startup in college? Not at all. Sam Altman,
the co-founder of [Loopt](http://loopt.com), had just finished his sophomore
year when we funded them, and Loopt is probably the most promising of all the
startups we've funded so far. But Sam Altman is a very unusual guy. Within
about three minutes of meeting him, I remember thinking "Ah, so this is what
Bill Gates must have been like when he was 19."
If it can work to start a startup during college, why do we tell people not
to? For the same reason that the probably apocryphal violinist, whenever he
was asked to judge someone's playing, would always say they didn't have enough
talent to make it as a pro. Succeeding as a musician takes determination as
well as talent, so this answer works out to be the right advice for everyone.
The ones who are uncertain believe it and give up, and the ones who are
sufficiently determined think "screw that, I'll succeed anyway."
So our official policy now is only to fund undergrads we can't talk out of it.
And frankly, if you're not certain, you _should_ wait. It's not as if all the
opportunities to start companies are going to be gone if you don't do it now.
Maybe the window will close on some idea you're working on, but that won't be
the last idea you'll have. For every idea that times out, new ones become
feasible. Historically the opportunities to start startups have only increased
with time.
In that case, you might ask, why not wait longer? Why not go work for a while,
or go to grad school, and then start a startup? And indeed, that might be a
good idea. If I had to pick the sweet spot for startup founders, based on who
we're most excited to see applications from, I'd say it's probably the mid-
twenties. Why? What advantages does someone in their mid-twenties have over
someone who's 21? And why isn't it older? What can 25 year olds do that 32
year olds can't? Those turn out to be questions worth examining.
**Plus**
If you start a startup soon after college, you'll be a young founder by
present standards, so you should know what the relative advantages of young
founders are. They're not what you might think. As a young founder your
strengths are: stamina, poverty, rootlessness, colleagues, and ignorance.
The importance of stamina shouldn't be surprising. If you've heard anything
about startups you've probably heard about the long hours. As far as I can
tell these are universal. I can't think of any successful startups whose
founders worked 9 to 5. And it's particularly necessary for younger founders
to work long hours because they're probably not as efficient as they'll be
later.
Your second advantage, poverty, might not sound like an advantage, but it is a
huge one. Poverty implies you can live cheaply, and this is critically
important for startups. Nearly every startup that fails, fails by running out
of money. It's a little misleading to put it this way, because there's usually
some other underlying cause. But regardless of the source of your problems, a
low burn rate gives you more opportunity to recover from them. And since most
startups make all kinds of mistakes at first, room to recover from mistakes is
a valuable thing to have.
Most startups end up doing something different than they planned. The way the
successful ones find something that works is by trying things that don't. So
the worst thing you can do in a startup is to have a rigid, pre-ordained plan
and then start spending a lot of money to implement it. Better to operate
cheaply and give your ideas time to evolve.
Recent grads can live on practically nothing, and this gives you an edge over
older founders, because the main cost in software startups is people. The guys
with kids and mortgages are at a real disadvantage. This is one reason I'd bet
on the 25 year old over the 32 year old. The 32 year old probably is a better
programmer, but probably also has a much more expensive life. Whereas a 25
year old has some work experience (more on that later) but can live as cheaply
as an undergrad.
Robert Morris and I were 29 and 30 respectively when we started Viaweb, but
fortunately we still lived like 23 year olds. We both had roughly zero assets.
I would have loved to have a mortgage, since that would have meant I had a
_house_. But in retrospect having nothing turned out to be convenient. I
wasn't tied down and I was used to living cheaply.
Even more important than living cheaply, though, is thinking cheaply. One
reason the Apple II was so popular was that it was cheap. The computer itself
was cheap, and it used cheap, off-the-shelf peripherals like a cassette tape
recorder for data storage and a TV as a monitor. And you know why? Because Woz
designed this computer for himself, and he couldn't afford anything more.
We benefitted from the same phenomenon. Our prices were daringly low for the
time. The top level of service was $300 a month, which was an order of
magnitude below the norm. In retrospect this was a smart move, but we didn't
do it because we were smart. $300 a month seemed like a lot of money to us.
Like Apple, we created something inexpensive, and therefore popular, simply
because we were poor.
A lot of startups have that form: someone comes along and makes something for
a tenth or a hundredth of what it used to cost, and the existing players can't
follow because they don't even want to think about a world in which that's
possible. Traditional long distance carriers, for example, didn't even want to
think about VoIP. (It was coming, all the same.) Being poor helps in this
game, because your own personal bias points in the same direction technology
evolves in.
The advantages of rootlessness are similar to those of poverty. When you're
young you're more mobile—not just because you don't have a house or much
stuff, but also because you're less likely to have serious relationships. This
turns out to be important, because a lot of startups involve someone moving.
The founders of Kiko, for example, are now en route to the Bay Area to start
their next startup. It's a better place for what they want to do. And it was
easy for them to decide to go, because neither as far as I know has a serious
girlfriend, and everything they own will fit in one car—or more precisely,
will either fit in one car or is crappy enough that they don't mind leaving it
behind.
They at least were in Boston. What if they'd been in Nebraska, like Evan
Williams was at their age? Someone wrote recently that the drawback of Y
Combinator was that you had to move to participate. It couldn't be any other
way. The kind of conversations we have with founders, we have to have in
person. We fund a dozen startups at a time, and we can't be in a dozen places
at once. But even if we could somehow magically save people from moving, we
wouldn't. We wouldn't be doing founders a favor by letting them stay in
Nebraska. Places that aren't [startup hubs](siliconvalley.html) are toxic to
startups. You can tell that from indirect evidence. You can tell how hard it
must be to start a startup in Houston or Chicago or Miami from the
microscopically small number, per capita, that succeed there. I don't know
exactly what's suppressing all the startups in these towns—probably a hundred
subtle little things—but something must be. [2]
Maybe this will change. Maybe the increasing cheapness of startups will mean
they'll be able to survive anywhere, instead of only in the most hospitable
environments. Maybe 37signals is the pattern for the future. But maybe not.
Historically there have always been certain towns that were centers for
certain industries, and if you weren't in one of them you were at a
disadvantage. So my guess is that 37signals is an anomaly. We're looking at a
pattern much older than "Web 2.0" here.
Perhaps the reason more startups per capita happen in the Bay Area than Miami
is simply that there are more founder-type people there. Successful startups
are almost never started by one person. Usually they begin with a conversation
in which someone mentions that something would be a good idea for a company,
and his friend says, "Yeah, that is a good idea, let's try it." If you're
missing that second person who says "let's try it," the startup never happens.
And that is another area where undergrads have an edge. They're surrounded by
people willing to say that. At a good college you're concentrated together
with a lot of other ambitious and technically minded people—probably more
concentrated than you'll ever be again. If your nucleus spits out a neutron,
there's a good chance it will hit another nucleus.
The number one question people ask us at Y Combinator is: Where can I find a
co-founder? That's the biggest problem for someone starting a startup at 30.
When they were in school they knew a lot of good co-founders, but by 30
they've either lost touch with them or these people are tied down by jobs they
don't want to leave.
Viaweb was an anomaly in this respect too. Though we were comparatively old,
we weren't tied down by impressive jobs. I was trying to be an artist, which
is not very constraining, and Robert, though 29, was still in grad school due
to a little interruption in his academic career back in 1988. So arguably the
Worm made Viaweb possible. Otherwise Robert would have been a junior professor
at that age, and he wouldn't have had time to work on crazy speculative
projects with me.
Most of the questions people ask Y Combinator we have some kind of answer for,
but not the co-founder question. There is no good answer. Co-founders really
should be people you already know. And by far the best place to meet them is
school. You have a large sample of smart people; you get to compare how they
all perform on identical tasks; and everyone's life is pretty fluid. A lot of
startups grow out of schools for this reason. Google, Yahoo, and Microsoft,
among others, were all founded by people who met in school. (In Microsoft's
case, it was high school.)
Many students feel they should wait and get a little more experience before
they start a company. All other things being equal, they should. But all other
things are not quite as equal as they look. Most students don't realize how
rich they are in the scarcest ingredient in startups, co-founders. If you wait
too long, you may find that your friends are now involved in some project they
don't want to abandon. The better they are, the more likely this is to happen.
One way to mitigate this problem might be to actively plan your startup while
you're getting those n years of experience. Sure, go off and get jobs or go to
grad school or whatever, but get together regularly to scheme, so the idea of
starting a startup stays alive in everyone's brain. I don't know if this
works, but it can't hurt to try.
It would be helpful just to realize what an advantage you have as students.
Some of your classmates are probably going to be successful startup founders;
at a great technical university, that is a near certainty. So which ones? If I
were you I'd look for the people who are not just smart, but incurable
[builders](http://my-computer.cruftlabs.com:8080/photos/motorcouch/0067.html).
Look for the people who keep starting projects, and finish at least some of
them. That's what we look for. Above all else, above academic credentials and
even the idea you apply with, we look for people who build things.
The other place co-founders meet is at work. Fewer do than at school, but
there are things you can do to improve the odds. The most important,
obviously, is to work somewhere that has a lot of smart, young people. Another
is to work for a company located in a startup hub. It will be easier to talk a
co-worker into quitting with you in a place where startups are happening all
around you.
You might also want to look at the employment agreement you sign when you get
hired. Most will say that any ideas you think of while you're employed by the
company belong to them. In practice it's hard for anyone to prove what ideas
you had when, so the line gets drawn at code. If you're going to start a
startup, don't write any of the code while you're still employed. Or at least
discard any code you wrote while still employed and start over. It's not so
much that your employer will find out and sue you. It won't come to that;
investors or acquirers or (if you're so lucky) underwriters will nail you
first. Between t = 0 and when you buy that yacht, _someone_ is going to ask if
any of your code legally belongs to anyone else, and you need to be able to
say no. [3]
The most overreaching employee agreement I've seen so far is Amazon's. In
addition to the usual clauses about owning your ideas, you also can't be a
founder of a startup that has another founder who worked at Amazon—even if you
didn't know them or even work there at the same time. I suspect they'd have a
hard time enforcing this, but it's a bad sign they even try. There are plenty
of other places to work; you may as well choose one that keeps more of your
options open.
Speaking of cool places to work, there is of course Google. But I notice
something slightly frightening about Google: zero startups come out of there.
In that respect it's a black hole. People seem to like working at Google too
much to leave. So if you hope to start a startup one day, the evidence so far
suggests you shouldn't work there.
I realize this seems odd advice. If they make your life so good that you don't
want to leave, why not work there? Because, in effect, you're probably getting
a local maximum. You need a certain activation energy to start a startup. So
an employer who's fairly pleasant to work for can lull you into staying
indefinitely, even if it would be a net win for you to leave. [4]
The best place to work, if you want to start a startup, is probably a startup.
In addition to being the right sort of experience, one way or another it will
be over quickly. You'll either end up rich, in which case problem solved, or
the startup will get bought, in which case it will start to suck to work
there and it will be easy to leave, or most likely, the thing will blow up and
you'll be free again.
Your final advantage, ignorance, may not sound very useful. I deliberately
used a controversial word for it; you might equally call it innocence. But it
seems to be a powerful force. My Y Combinator co-founder Jessica Livingston is
just about to publish a book of
[interviews](http://www.amazon.com/gp/product/1590597141) with startup
founders, and I noticed a remarkable pattern in them. One after another said
that if they'd known how hard it would be, they would have been too
intimidated to start.
Ignorance can be useful when it's a counterweight to other forms of stupidity.
It's useful in starting startups because you're capable of more than you
realize. Starting startups is harder than you expect, but you're also capable
of more than you expect, so they balance out.
Most people look at a company like Apple and think, how could I ever make such
a thing? Apple is an institution, and I'm just a person. But every institution
was at one point just a handful of people in a room deciding to start
something. Institutions are made up, and made up by people no different from
you.
I'm not saying everyone could start a startup. I'm sure most people couldn't;
I don't know much about the population at large. When you get to groups I know
well, like hackers, I can say more precisely. At the top schools, I'd guess as
many as a quarter of the CS majors could make it as startup founders if they
wanted.
That "if they wanted" is an important qualification—so important that it's
almost cheating to append it like that—because once you get over a certain
threshold of intelligence, which most CS majors at top schools are past, the
deciding factor in whether you succeed as a founder is how much you want to.
You don't have to be that smart. If you're not a genius, just start a startup
in some unsexy field where you'll have less competition, like software for
human resources departments. I picked that example at random, but I feel safe
in predicting that whatever they have now, it wouldn't take genius to do
better. There are a lot of people out there working on boring stuff who are
desperately in need of better software, so however far short of Larry and
Sergey you think you fall, you can ratchet down the coolness of the idea far
enough to compensate.
As well as preventing you from being intimidated, ignorance can sometimes help
you discover new ideas. [Steve
Wozniak](http://foundersatwork.com/stevewozniak.html) put this very strongly:
> All the best things that I did at Apple came from (a) not having money and
> (b) not having done it before, ever. Every single thing that we came out
> with that was really great, I'd never once done that thing in my life.
When you know nothing, you have to reinvent stuff for yourself, and if you're
smart your reinventions may be better than what preceded them. This is
especially true in fields where the rules change. All our ideas about software
were developed in a time when processors were slow, and memories and disks
were tiny. Who knows what obsolete assumptions are embedded in the
conventional wisdom? And the way these assumptions are going to get fixed is
not by explicitly deallocating them, but by something more akin to garbage
collection. Someone ignorant but smart will come along and reinvent
everything, and in the process simply fail to reproduce certain existing
ideas.
**Minus**
So much for the advantages of young founders. What about the disadvantages?
I'm going to start with what goes wrong and try to trace it back to the root
causes.
What goes wrong with young founders is that they build stuff that looks like
class projects. It was only recently that we figured this out ourselves. We
noticed a lot of similarities between the startups that seemed to be falling
behind, but we couldn't figure out how to put it into words. Then finally we
realized what it was: they were building class projects.
But what does that really mean? What's wrong with class projects? What's the
difference between a class project and a real startup? If we could answer that
question it would be useful not just to would-be startup founders but to
students in general, because we'd be a long way toward explaining the mystery
of the so-called real world.
There seem to be two big things missing in class projects: (1) an iterative
definition of a real problem and (2) intensity.
The first is probably unavoidable. Class projects will inevitably solve fake
problems. For one thing, real problems are rare and valuable. If a professor
wanted to have students solve real problems, he'd face the same paradox as
someone trying to give an example of whatever "paradigm" might succeed the
Standard Model of physics. There may well be something that does, but if you
could think of an example you'd be entitled to the Nobel Prize. Similarly,
good new problems are not to be had for the asking.
In technology the difficulty is compounded by the fact that real startups tend
to discover the problem they're solving by a process of evolution. Someone has
an idea for something; they build it; and in doing so (and probably only by
doing so) they realize the problem they should be solving is another one. Even
if the professor let you change your project description on the fly, there
isn't time enough to do that in a college class, or a market to supply
evolutionary pressures. So class projects are mostly about implementation,
which is the least of your problems in a startup.
It's not just that in a startup you work on the idea as well as
implementation. The very implementation is different. Its main purpose is to
refine the idea. Often the only value of most of the stuff you build in the
first six months is that it proves your initial idea was mistaken. And that's
extremely valuable. If you're free of a misconception that everyone else still
shares, you're in a powerful position. But you're not thinking that way about
a class project. Proving your initial plan was mistaken would just get you a
bad grade. Instead of building stuff to throw away, you tend to want every
line of code to go toward that final goal of showing you did a lot of work.
That leads to our second difference: the way class projects are measured.
Professors will tend to judge you by the distance between the starting point
and where you are now. If someone has achieved a lot, they should get a good
grade. But customers will judge you from the other direction: the distance
remaining between where you are now and the features they need. The market
doesn't give a shit how hard you worked. Users just want your software to do
what they need, and you get a zero otherwise. That is one of the most
distinctive differences between school and the real world: there is no reward
for putting in a good effort. In fact, the whole concept of a "good effort" is
a fake idea adults invented to encourage kids. It is not found in nature.
Such lies seem to be helpful to kids. But unfortunately when you graduate they
don't give you a list of all the lies they told you during your education. You
have to get them beaten out of you by contact with the real world. And this is
why so many jobs want work experience. I couldn't understand that when I was
in college. I knew how to program. In fact, I could tell I knew how to program
better than most people doing it for a living. So what was this mysterious
"work experience" and why did I need it?
Now I know what it is, and part of the confusion is grammatical. Describing it
as "work experience" implies it's like experience operating a certain kind of
machine, or using a certain programming language. But really what work
experience refers to is not some specific expertise, but the elimination of
certain habits left over from childhood.
One of the defining qualities of kids is that they flake. When you're a kid
and you face some hard test, you can cry and say "I can't" and they won't make
you do it. Of course, no one can make you do anything in the grownup world
either. What they do instead is fire you. And when motivated by that you find
you can do a lot more than you realized. So one of the things employers expect
from someone with "work experience" is the elimination of the flake reflex—the
ability to get things done, with no excuses.
The other thing you get from work experience is an understanding of what work
is, and in particular, how intrinsically horrible it is. Fundamentally the
equation is a brutal one: you have to spend most of your waking hours doing
stuff someone else wants, or starve. There are a few places where the work is
so interesting that this is concealed, because what other people want done
happens to coincide with what you want to work on. But you only have to
imagine what would happen if they diverged to see the underlying reality.
It's not so much that adults lie to kids about this as never explain it. They
never explain what the deal is with money. You know from an early age that
you'll have some sort of job, because everyone asks what you're going to "be"
when you grow up. What they don't tell you is that as a kid you're sitting on
the shoulders of someone else who's treading water, and that when you start
working you get thrown into the water on your own, and have to start treading
water yourself or sink. "Being" something is incidental; the immediate problem
is not to drown.
The relationship between work and money tends to dawn on you only gradually.
At least it did for me. One's first thought tends to be simply "This sucks.
I'm in debt. Plus I have to get up on Monday and go to work." Gradually you
realize that these two things are as tightly connected as only a market can
make them.
So the most important advantage 24 year old founders have over 20 year old
founders is that they know what they're trying to avoid. To the average
undergrad the idea of getting rich translates into buying Ferraris, or being
admired. To someone who has learned from experience about the relationship
between money and work, it translates to something way more important: it
means you get to opt out of the brutal equation that governs the lives of
99.9% of people. Getting rich means you can stop treading water.
Someone who gets this will work much harder at making a startup succeed—with
the proverbial energy of a drowning man, in fact. But understanding the
relationship between money and work also changes the way you work. You don't
get money just for working, but for doing things other people want. Someone
who's figured that out will automatically focus more on the user. And that
cures the other half of the class-project syndrome. After you've been working
for a while, you yourself tend to measure what you've done the same way the
market does.
Of course, you don't have to spend years working to learn this stuff. If
you're sufficiently perceptive you can grasp these things while you're still
in school. Sam Altman did. He must have, because Loopt is no class project.
And as his example suggests, this can be valuable knowledge. At a minimum, if
you get this stuff, you already have most of what you gain from the "work
experience" employers consider so desirable. But of course if you really get
it, you can use this information in a way that's more valuable to you than
that.
**Now**
So suppose you think you might start a startup at some point, either when you
graduate or a few years after. What should you do now? For both jobs and grad
school, there are ways to prepare while you're in college. If you want to get
a job when you graduate, you should get summer jobs at places you'd like to
work. If you want to go to grad school, it will help to work on research
projects as an undergrad. What's the equivalent for startups? How do you keep
your options maximally open?
One thing you can do while you're still in school is to learn how startups
work. Unfortunately that's not easy. Few if any colleges have classes about
startups. There may be business school classes on entrepreneurship, as they
call it over there, but these are likely to be a waste of time. Business
schools like to talk about startups, but philosophically they're at the
opposite end of the spectrum. Most books on startups also seem to be useless.
I've looked at a few and none get it right. Books in most fields are written
by people who know the subject from experience, but for startups there's a
unique problem: by definition the founders of successful startups don't need
to write books to make money. As a result most books on the subject end up
being written by people who don't understand it.
So I'd be skeptical of classes and books. The way to learn about startups is
by watching them in action, preferably by working at one. How do you do that
as an undergrad? Probably by sneaking in through the back door. Just hang
around a lot and gradually start doing things for them. Most startups are (or
should be) very cautious about hiring. Every hire increases the burn rate, and
bad hires early on are hard to recover from. However, startups usually have a
fairly informal atmosphere, and there's always a lot that needs to be done. If
you just start doing stuff for them, many will be too busy to shoo you away.
You can thus gradually work your way into their confidence, and maybe turn it
into an official job later, or not, whichever you prefer. This won't work for
all startups, but it would work for most of the ones I've known.
Number two, make the most of the great advantage of school: the wealth of co-
founders. Look at the people around you and ask yourself which you'd like to
work with. When you apply that test, you may find you get surprising results.
You may find you'd prefer the quiet guy you've mostly ignored to someone who
seems impressive but has an attitude to match. I'm not suggesting you suck up
to people you don't really like because you think one day they'll be
successful. Exactly the opposite, in fact: you should only start a startup
with someone you like, because a startup will put your friendship through a
stress test. I'm just saying you should think about who you really admire and
hang out with them, instead of whoever circumstances throw you together with.
Another thing you can do is learn skills that will be useful to you in a
startup. These may be different from the skills you'd learn to get a job. For
example, thinking about getting a job will make you want to learn programming
languages you think employers want, like Java and C++. Whereas if you start a
startup, you get to pick the language, so you have to think about which will
actually let you get the most done. If you use that test you might end up
learning Ruby or Python instead.
But the most important skill for a startup founder isn't a programming
technique. It's a knack for understanding users and figuring out how to give
them what they want. I know I repeat this, but that's because it's so
important. And it's a skill you can learn, though perhaps habit might be a
better word. Get into the habit of thinking of software as having users. What
do those users want? What would make them say wow?
This is particularly valuable for undergrads, because the concept of users is
missing from most college programming classes. Programming as taught in
college is like teaching writing as grammar, without mentioning that its
purpose is to communicate something to an audience.
Fortunately an audience for software is now only an http request away. So in
addition to the programming you do for your classes, why not build some kind
of website people will find useful? At the very least it will teach you how to
write software with users. In the best case, it might not just be preparation
for a startup, but the startup itself, like it was for Yahoo and Google.
**Notes**
[1] Even the desire to protect one's children seems weaker, judging from
things people have historically done to their kids rather than risk their
community's disapproval. (I assume we still do things that will be regarded in
the future as barbaric, but historical abuses are easier for us to see.)
[2] Worrying that Y Combinator makes founders move for 3 months also suggests
one underestimates how hard it is to start a startup. You're going to have to
put up with much greater inconveniences than that.
[3] Most employee agreements say that any idea relating to the company's
present or potential future business belongs to them. As often as not, the second
clause could include any possible startup, and anyone doing due diligence for
an investor or acquirer will assume the worst.
To be safe either (a) don't use code written while you were still employed in
your previous job, or (b) get your employer to renounce, in writing, any claim
to the code you write for your side project. Many will consent to (b) rather
than lose a prized employee. The downside is that you'll have to tell them
exactly what your project does.
[4] Geschke and Warnock only founded Adobe because Xerox ignored them. If Xerox
had used what they built, they would probably never have left PARC.
**Thanks** to Jessica Livingston and Robert Morris for reading drafts of this,
and to Jeff Arnold and the SIPB for inviting me to speak.
January 2012
There are great startup ideas lying around unexploited right under our noses.
One reason we don't see them is a phenomenon I call _schlep blindness_. Schlep
was originally a Yiddish word but has passed into general use in the US. It
means a tedious, unpleasant task.
No one likes schleps, but hackers especially dislike them. Most hackers who
start startups wish they could do it by just writing some clever software,
putting it on a server somewhere, and watching the money roll in—without ever
having to talk to users, or negotiate with other companies, or deal with other
people's broken code. Maybe that's possible, but I haven't seen it.
One of the many things we do at Y Combinator is teach hackers about the
inevitability of schleps. No, you can't start a startup by just writing code.
I remember going through this realization myself. There was a point in 1995
when I was still trying to convince myself I could start a company by just
writing code. But I soon learned from experience that schleps are not merely
inevitable, but pretty much what business consists of. A company is defined by
the schleps it will undertake. And schleps should be dealt with the same way
you'd deal with a cold swimming pool: just jump in. Which is not to say you
should seek out unpleasant work per se, but that you should never shrink from
it if it's on the path to something great.
The most dangerous thing about our dislike of schleps is that much of it is
unconscious. Your unconscious won't even let you see ideas that involve
painful schleps. That's schlep blindness.
The phenomenon isn't limited to startups. Most people don't consciously decide
not to be in as good physical shape as Olympic athletes, for example. Their
unconscious mind decides for them, shrinking from the work involved.
The most striking example I know of schlep blindness is
[Stripe](http://stripe.com), or rather Stripe's idea. For over a decade, every
hacker who'd ever had to process payments online knew how painful the
experience was. Thousands of people must have known about this problem. And
yet when they started startups, they decided to build recipe sites, or
aggregators for local events. Why? Why work on problems few care much about
and no one will pay for, when you could fix one of the most important
components of the world's infrastructure? Because schlep blindness prevented
people from even considering the idea of fixing payments.
Probably no one who applied to Y Combinator to work on a recipe site began by
asking "should we fix payments, or build a recipe site?" and chose the recipe
site. Though the idea of fixing payments was right there in plain sight, they
never saw it, because their unconscious mind shrank from the complications
involved. You'd have to make deals with banks. How do you do that? Plus you're
moving money, so you're going to have to deal with fraud, and people trying to
break into your servers. Plus there are probably all sorts of regulations to
comply with. It's a lot more intimidating to start a startup like this than a
recipe site.
That scariness makes ambitious ideas doubly valuable. In addition to their
intrinsic value, they're like undervalued stocks in the sense that there's
less demand for them among founders. If you pick an ambitious idea, you'll
have less competition, because everyone else will have been frightened off by
the challenges involved. (This is also true of starting a startup generally.)
How do you overcome schlep blindness? Frankly, the most valuable antidote to
schlep blindness is probably ignorance. Most successful founders would
probably say that if they'd known when they were starting their company about
the obstacles they'd have to overcome, they might never have started it. Maybe
that's one reason the most successful startups of all so often have young
founders.
In practice the founders grow with the problems. But no one seems able to
foresee that, not even older, more experienced founders. So the reason younger
founders have an advantage is that they make two mistakes that cancel each
other out. They don't know how much they can grow, but they also don't know
how much they'll need to. Older founders only make the first mistake.
Ignorance can't solve everything though. Some ideas so obviously entail
alarming schleps that anyone can see them. How do you see ideas like that? The
trick I recommend is to take yourself out of the picture. Instead of asking
"what problem should I solve?" ask "what problem do I wish someone else would
solve for me?" If someone who had to process payments before Stripe had tried
asking that, Stripe would have been one of the first things they wished for.
It's too late now to be Stripe, but there's plenty still broken in the world,
if you know how to see it.
**Thanks** to Sam Altman, Paul Buchheit, Patrick Collison, Aaron Iba, Jessica
Livingston, Emmett Shear, and Harj Taggar for reading drafts of this.
March 2021
The secret curse of the nonprofit world is restricted donations. If you
haven't been involved with nonprofits, you may never have heard this phrase
before. But if you have been, it probably made you wince.
Restricted donations mean donations where the donor limits what can be done
with the money. This is common with big donations, perhaps the default. And
yet it's usually a bad idea. Usually the way the donor wants the money spent
is not the way the nonprofit would have chosen. Otherwise there would have
been no need to restrict the donation. But who has a better understanding of
where money needs to be spent, the nonprofit or the donor?
If a nonprofit doesn't understand better than its donors where money needs to
be spent, then it's incompetent and you shouldn't be donating to it at all.
Which means a restricted donation is inherently suboptimal. It's either a
donation to a bad nonprofit, or a donation for the wrong things.
There are a couple exceptions to this principle. One is when the nonprofit is
an umbrella organization. It's reasonable to make a restricted donation to a
university, for example, because a university is only nominally a single
nonprofit. Another exception is when the donor actually does know as much as
the nonprofit about where money needs to be spent. The Gates Foundation, for
example, has specific goals and often makes restricted donations to individual
nonprofits to accomplish them. But unless you're a domain expert yourself or
donating to an umbrella organization, your donation would do more good if it
were unrestricted.
If restricted donations do less good than unrestricted ones, why do donors so
often make them? Partly because doing good isn't donors' only motive. They
often have other motives as well — to make a mark, or to generate good
publicity [1], or to comply with regulations or corporate policies. Many
donors may simply never have considered the distinction between restricted and
unrestricted donations. They may believe that donating money for some specific
purpose is just how donation works. And to be fair, nonprofits don't try very
hard to discourage such illusions. They can't afford to. People running
nonprofits are almost always anxious about money. They can't afford to talk
back to big donors.
You can't expect candor in a relationship so asymmetric. So I'll tell you what
nonprofits wish they could tell you. If you want to donate to a nonprofit,
donate unrestricted. If you trust them to spend your money, trust them to
decide how.
**Note**
[1] Unfortunately restricted donations tend to generate more publicity than
unrestricted ones. "X donates money to build a school in Africa" is not only
more interesting than "X donates money to Y nonprofit to spend as Y chooses,"
but also focuses more attention on X.
**Thanks** to Chase Adam, Ingrid Bassett, Trevor Blackwell, and Edith Elliot
for reading drafts of this.
April 2008
Umair Haque
[wrote](http://discussionleader.hbsp.com/haque/2008/04/i_agree_and_i.html)
recently that the reason there aren't more Googles is that most startups get
bought before they can change the world.
> Google, despite serious interest from Microsoft and Yahoo—what must have
> seemed like lucrative interest at the time—didn't sell out. Google might
> simply have been nothing but Yahoo's or MSN's search box.
>
> Why isn't it? Because Google had a deeply felt sense of purpose: a
> conviction to change the world for the better.
This has a nice sound to it, but it isn't true. Google's founders were willing
to sell early on. They just wanted more than acquirers were willing to pay.
It was the same with Facebook. They would have sold, but Yahoo blew it by
offering too little.
Tip for acquirers: when a startup turns you down, consider raising your offer,
because there's a good chance the outrageous price they want will later seem a
bargain. [1]
From the evidence I've seen so far, startups that turn down acquisition offers
usually end up doing better. Not always, but usually there's a bigger offer
coming, or perhaps even an IPO.
Of course, the reason startups do better when they turn down acquisition
offers is not necessarily that all such offers undervalue startups. More
likely the reason is that the kind of founders who have the balls to turn down
a big offer also tend to be very successful. That spirit is exactly what you
want in a startup.
While I'm sure Larry and Sergey do want to change the world, at least now, the
reason Google survived to become a big, independent company is the same reason
Facebook has so far remained independent: acquirers underestimated them.
Corporate M&A is a strange business in that respect. They consistently lose
the best deals, because turning down reasonable offers is the most reliable
test you could invent for whether a startup will make it big.
**VCs**
So what's the real reason there aren't more Googles? Curiously enough, it's
the same reason Google and Facebook have remained independent: money guys
undervalue the most innovative startups.
The reason there aren't more Googles is not that investors encourage
innovative startups to sell out, but that they won't even fund them. I've
learned a lot about VCs during the 3 years we've been doing Y Combinator,
because we often have to work quite closely with them. The most surprising
thing I've learned is how conservative they are. VC firms present an image of
boldly encouraging innovation. Only a handful actually do, and even they are
more conservative in reality than you'd guess from reading their sites.
I used to think of VCs as piratical: bold but unscrupulous. On closer
acquaintance they turn out to be more like bureaucrats. They're more
upstanding than I used to think (the good ones, at least), but less bold.
Maybe the VC industry has changed. Maybe they used to be bolder. But I suspect
it's the startup world that has changed, not them. The low cost of starting a
startup means the average good bet is a riskier one, but most existing VC
firms still operate as if they were investing in hardware startups in 1985.
Howard Aiken said "Don't worry about people stealing your ideas. If your ideas
are any good, you'll have to ram them down people's throats." I have a similar
feeling when I'm trying to convince VCs to invest in startups Y Combinator has
funded. They're terrified of really novel ideas, unless the founders are good
enough salesmen to compensate.
But it's the bold ideas that generate the biggest returns. Any really good new
idea will seem bad to most people; otherwise someone would already be doing
it. And yet most VCs are driven by consensus, not just within their firms, but
within the VC community. The biggest factor determining how a VC will feel
about your startup is how other VCs feel about it. I doubt they realize it,
but this algorithm guarantees they'll miss all the very best ideas. The more
people who have to like a new idea, the more outliers you lose.
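To see how fast consensus filters out outliers, here's a toy calculation of my own (not anything VCs actually compute), assuming each decision maker independently likes an idea with some fixed probability and a deal needs all of them to say yes:

```python
# Toy model: an idea gets funded only if all k decision makers like it,
# and each decides independently.

def funding_odds(p_like: float, k: int) -> float:
    """Probability an idea survives k independent approvals."""
    return p_like ** k

conventional = 0.7  # hypothetical: most judges like a safe idea
radical = 0.3       # hypothetical: most judges dislike an outlier

for k in (1, 3, 5):
    ratio = funding_odds(radical, k) / funding_odds(conventional, k)
    print(f"k={k}: conventional {funding_odds(conventional, k):.4f}, "
          f"radical {funding_odds(radical, k):.4f}, ratio {ratio:.3f}")

# With one judge, radical ideas get funded about 43% as often as safe
# ones. With five, about 1.4% as often: each extra required approval
# compounds the filter against outliers.
```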
Whoever the next Google is, they're probably being told right now by VCs to
come back when they have more "traction."
Why are VCs so conservative? It's probably a combination of factors. The large
size of their investments makes them conservative. Plus they're investing
other people's money, which makes them worry they'll get in trouble if they do
something risky and it fails. Plus most of them are money guys rather than
technical guys, so they don't understand what the startups they're investing
in do.
**What's Next**
The exciting thing about market economies is that stupidity equals
opportunity. And so it is in this case. There is a huge, unexploited
opportunity in startup investing. Y Combinator funds startups at the very
beginning. VCs will fund them once they're already starting to succeed. But
between the two there is a substantial gap.
There are companies that will give $20k to a startup that has nothing more
than the founders, and there are companies that will give $2 million to a
startup that's already taking off, but there aren't enough investors who will
give $200k to a startup that seems very promising but still has some things to
figure out. This territory is occupied mostly by individual angel
investors—people like Andy Bechtolsheim, who gave Google $100k when they
seemed promising but still had some things to figure out. I like angels, but
there just aren't enough of them, and investing is for most of them a part
time job.
And yet as it gets cheaper to start startups, this sparsely occupied territory
is becoming more and more valuable. Nowadays a lot of startups don't want to
raise multi-million dollar series A rounds. They don't need that much money,
and they don't want the hassles that come with it. The median startup coming
out of Y Combinator wants to raise $250-500k. When they go to VC firms they
have to ask for more because they know VCs aren't interested in such small
deals.
VCs are money managers. They're looking for ways to put large sums to work.
But the startup world is evolving away from their current model.
Startups have gotten cheaper. That means they want less money, but also that
there are more of them. So you can still get large returns on large amounts of
money; you just have to spread it more broadly.
I've tried to explain this to VC firms. Instead of making one $2 million
investment, make five $400k investments. Would that mean sitting on too many
boards? Don't sit on their boards. Would that mean too much due diligence? Do
less. If you're investing at a tenth the valuation, you only have to be a
tenth as sure.
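The arithmetic behind that last claim is worth a quick sketch (the numbers below are hypothetical, not actual deal terms):

```python
# Expected return multiple on a venture bet: the chance it succeeds
# times the multiple if it does (exit value / valuation you paid).

def expected_multiple(p_success: float, exit_value: float,
                      valuation: float) -> float:
    return p_success * (exit_value / valuation)

exit_value = 1_000_000_000  # hypothetical $1B exit

# A later-stage bet: fairly sure, but at a high valuation.
late = expected_multiple(0.5, exit_value, 100_000_000)

# An earlier bet at a tenth the valuation needs only a tenth the
# confidence to have the same expected return.
early = expected_multiple(0.05, exit_value, 10_000_000)

print(f"late {late:.1f}x, early {early:.1f}x")  # late 5.0x, early 5.0x
```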
It seems obvious. But I've proposed to several VC firms that they set aside
some money and designate one partner to make more, smaller bets, and they
react as if I'd proposed the partners all get nose rings. It's remarkable how
wedded they are to their standard m.o.
But there is a big opportunity here, and one way or the other it's going to
get filled. Either VCs will evolve down into this gap or, more likely, new
investors will appear to fill it. That will be a good thing when it happens,
because these new investors will be compelled by the structure of the
investments they make to be ten times bolder than present day VCs. And that
will get us a lot more Googles. At least, as long as acquirers remain stupid.
**Notes**
[1] Another tip: If you want to get all that value, don't destroy the startup
after you buy it. Give the founders enough autonomy that they can grow the
acquisition into what it would have become.
**Thanks** to Sam Altman, Paul Buchheit, David Hornik, Jessica Livingston,
Robert Morris, and Fred Wilson for reading drafts of this.
March 2005
_(This essay is derived from a talk at the Harvard Computer Society.)_
You need three things to create a successful startup: to start with good
people, to make something customers actually want, and to spend as little
money as possible. Most startups that fail do it because they fail at one of
these. A startup that does all three will probably succeed.
And that's kind of exciting, when you think about it, because all three are
doable. Hard, but doable. And since a startup that succeeds ordinarily makes
its founders rich, that implies getting rich is doable too. Hard, but doable.
If there is one message I'd like to get across about startups, that's it.
There is no magically difficult step that requires brilliance to solve.
**The Idea**
In particular, you don't need a brilliant [idea](ideas.html) to start a
startup around. The way a startup makes money is to offer people better
technology than they have now. But what people have now is often so bad that
it doesn't take brilliance to do better.
Google's plan, for example, was simply to create a search site that didn't
suck. They had three new ideas: index more of the Web, use links to rank
search results, and have clean, simple web pages with unintrusive keyword-
based ads. Above all, they were determined to make a site that was good to
use. No doubt there are great technical tricks within Google, but the overall
plan was straightforward. And while they probably have bigger ambitions now,
this alone brings them a billion dollars a year. [1]
There are plenty of other areas that are just as backward as search was before
Google. I can think of several heuristics for generating ideas for startups,
but most reduce to this: look at something people are trying to do, and figure
out how to do it in a way that doesn't suck.
For example, dating sites currently suck far worse than search did before
Google. They all use the same simple-minded model. They seem to have
approached the problem by thinking about how to do database matches instead of
how dating works in the real world. An undergrad could build something better
as a class project. And yet there's a lot of money at stake. Online dating is
a valuable business now, and it might be worth a hundred times as much if it
worked.
An idea for a startup, however, is only a beginning. A lot of would-be startup
founders think the key to the whole process is the initial idea, and from that
point all you have to do is execute. Venture capitalists know better. If you
go to VC firms with a brilliant idea that you'll tell them about if they sign
a nondisclosure agreement, most will tell you to get lost. That shows how much
a mere idea is worth. The market price is less than the inconvenience of
signing an NDA.
Another sign of how little the initial idea is worth is the number of startups
that change their plan en route. Microsoft's original plan was to make money
selling programming languages, of all things. Their current business model
didn't occur to them until IBM dropped it in their lap five years later.
Ideas for startups are worth something, certainly, but the trouble is, they're
not transferable. They're not something you could hand to someone else to
execute. Their value is mainly as starting points: as questions for the people
who had them to continue thinking about.
What matters is not ideas, but the people who have them. Good people can fix
bad ideas, but good ideas can't save bad people.
**People**
What do I mean by good people? One of the best tricks I learned during
[our](road.html) startup was a rule for deciding who to hire. Could you
describe the person as an animal? It might be hard to translate that into
another language, but I think everyone in the US knows what it means. It means
someone who takes their work a little too seriously; someone who does what
they do so well that they pass right through professional and cross over into
obsessive.
What it means specifically depends on the job: a salesperson who just won't
take no for an answer; a hacker who will stay up till 4:00 AM rather than go
to bed leaving code with a bug in it; a PR person who will cold-call _New York
Times_ reporters on their cell phones; a graphic designer who feels physical
pain when something is two millimeters out of place.
Almost everyone who worked for us was an animal at what they did. The woman in
charge of sales was so tenacious that I used to feel sorry for potential
customers on the phone with her. You could sense them squirming on the hook,
but you knew there would be no rest for them till they'd signed up.
If you think about people you know, you'll find the animal test is easy to
apply. Call the person's image to mind and imagine the sentence "so-and-so is
an animal." If you laugh, they're not. You don't need or perhaps even want
this quality in big companies, but you need it in a startup.
For programmers we had three additional tests. Was the person genuinely smart?
If so, could they actually get things done? And finally, since a few good
hackers have unbearable personalities, could we stand to have them around?
That last test filters out surprisingly few people. We could bear any amount
of nerdiness if someone was truly smart. What we couldn't stand were people
with a lot of attitude. But most of those weren't truly smart, so our third
test was largely a restatement of the first.
When nerds are unbearable it's usually because they're trying too hard to seem
smart. But the smarter they are, the less pressure they feel to act smart. So
as a rule you can recognize genuinely smart people by their ability to say
things like "I don't know," "Maybe you're right," and "I don't understand x
well enough."
This technique doesn't always work, because people can be influenced by their
environment. In the MIT CS department, there seems to be a tradition of acting
like a brusque know-it-all. I'm told it derives ultimately from Marvin Minsky,
in the same way the classic airline pilot manner is said to derive from Chuck
Yeager. Even genuinely smart people start to act this way there, so you have
to make allowances.
It helped us to have Robert Morris, who is one of the readiest to say "I don't
know" of anyone I've met. (At least, he was before he became a professor at
MIT.) No one dared put on attitude around Robert, because he was obviously
smarter than they were and yet had zero attitude himself.
Like most startups, ours began with a group of friends, and it was through
personal contacts that we got most of the people we hired. This is a crucial
difference between startups and big companies. Being friends with someone for
even a couple days will tell you more than companies could ever learn in
interviews. [2]
It's no coincidence that startups start around universities, because that's
where smart people meet. It's not what people learn in classes at MIT and
Stanford that has made technology companies spring up around them. They could
sing campfire songs in the classes so long as admissions worked the same.
If you start a startup, there's a good chance it will be with people you know
from college or grad school. So in theory you ought to try to make friends
with as many smart people as you can in school, right? Well, no. Don't make a
conscious effort to schmooze; that doesn't work well with hackers.
What you should do in college is work on your own projects. Hackers should do
this even if they don't plan to start startups, because it's the only real way
to learn how to program. In some cases you may collaborate with other
students, and this is the best way to get to know good hackers. The project
may even grow into a startup. But once again, I wouldn't aim too directly at
either target. Don't force things; just work on stuff you like with people you
like.
Ideally you want between two and four founders. It would be hard to start with
just one. One person would find the moral weight of starting a company hard to
bear. Even Bill Gates, who seems to be able to bear a good deal of moral
weight, had to have a co-founder. But you don't want so many founders that the
company starts to look like a group photo. Partly because you don't need a lot
of people at first, but mainly because the more founders you have, the worse
disagreements you'll have. When there are just two or three founders, you know
you have to resolve disputes immediately or perish. If there are seven or
eight, disagreements can linger and harden into factions. You don't want mere
voting; you need unanimity.
In a technology startup, which most startups are, the founders should include
technical people. During the Internet Bubble there were a number of startups
founded by business people who then went looking for hackers to create their
product for them. This doesn't work well. Business people are bad at deciding
what to do with technology, because they don't know what the options are, or
which kinds of problems are hard and which are easy. And when business people
try to hire hackers, they can't tell which ones are [good](gh.html). Even
other hackers have a hard time doing that. For business people it's roulette.
Do the founders of a startup have to include business people? That depends. We
thought so when we started ours, and we asked several people who were said to
know about this mysterious thing called "business" if they would be the
president. But they all said no, so I had to do it myself. And what I
discovered was that business was no great mystery. It's not something like
physics or medicine that requires extensive study. You just try to get people
to pay you for stuff.
I think the reason I made such a mystery of business was that I was disgusted
by the idea of doing it. I wanted to work in the pure, intellectual world of
software, not deal with customers' mundane problems. People who don't want to
get dragged into some kind of work often develop a protective incompetence at
it. Paul Erdos was particularly good at this. By seeming unable even to cut a
grapefruit in half (let alone go to the store and buy one), he forced other
people to do such things for him, leaving all his time free for math. Erdos
was an extreme case, but most husbands use the same trick to some degree.
Once I was forced to discard my protective incompetence, I found that business
was neither so hard nor so boring as I feared. There are esoteric areas of
business that are quite hard, like tax law or the pricing of derivatives, but
you don't need to know about those in a startup. All you need to know about
business to run a startup are commonsense things people knew before there were
business schools, or even universities.
If you work your way down the Forbes 400 making an x next to the name of each
person with an MBA, you'll learn something important about business school.
After Warren Buffett, you don't hit another MBA till number 22, Phil Knight,
the CEO of Nike. There are only 5 MBAs in the top 50. What you notice in the
Forbes 400 are a lot of people with technical backgrounds. Bill Gates, Steve
Jobs, Larry Ellison, Michael Dell, Jeff Bezos, Gordon Moore. The rulers of the
technology business tend to come from technology, not business. So if you want
to invest two years in something that will help you succeed in business, the
evidence suggests you'd do better to learn how to hack than get an MBA. [3]
There is one reason you might want to include business people in a startup,
though: because you have to have at least one person willing and able to focus
on what customers want. Some believe only business people can do this-- that
hackers can implement software, but not design it. That's nonsense. There's
nothing about knowing how to program that prevents hackers from understanding
users, or about not knowing how to program that magically enables business
people to understand them.
If you can't understand users, however, you should either learn how or find a
co-founder who can. That is the single most important issue for technology
startups, and the rock that sinks more of them than anything else.
**What Customers Want**
It's not just startups that have to worry about this. I think most businesses
that fail do it because they don't give customers what they want. Look at
restaurants. A large percentage fail, about a quarter in the first year. But
can you think of one restaurant that had really good food and went out of
business?
Restaurants with great food seem to prosper no matter what. A restaurant with
great food can be expensive, crowded, noisy, dingy, out of the way, and even
have bad service, and people will keep coming. It's true that a restaurant
with mediocre food can sometimes attract customers through gimmicks. But that
approach is very risky. It's more straightforward just to make the food good.
It's the same with technology. You hear all kinds of reasons why startups
fail. But can you think of one that had a massively popular product and still
failed?
In nearly every failed startup, the real problem was that customers didn't
want the product. For most, the cause of death is listed as "ran out of
funding," but that's only the immediate cause. Why couldn't they get more
funding? Probably because the product was a dog, or never seemed likely to be
done, or both.
When I was trying to think of the things every startup needed to do, I almost
included a fourth: get a version 1 out as soon as you can. But I decided not
to, because that's implicit in making something customers want. The only way
to make something customers want is to get a prototype in front of them and
refine it based on their reactions.
The other approach is what I call the "Hail Mary" strategy. You make elaborate
plans for a product, hire a team of engineers to develop it (people who do
this tend to use the term "engineer" for hackers), and then find after a year
that you've spent two million dollars to develop something no one wants. This
was not uncommon during the Bubble, especially in companies run by business
types, who thought of software development as something terrifying that
therefore had to be carefully planned.
We never even considered that approach. As a Lisp hacker, I come from the
tradition of rapid prototyping. I would not claim (at least, not here) that
this is the right way to write every program, but it's certainly the right way
to write software for a startup. In a startup, your initial plans are almost
certain to be wrong in some way, and your first priority should be to figure
out where. The only way to do that is to try implementing them.
Like most startups, we changed our plan on the fly. At first we expected our
customers to be Web consultants. But it turned out they didn't like us,
because our software was easy to use and we hosted the site. It would be too
easy for clients to fire them. We also thought we'd be able to sign up a lot
of catalog companies, because selling online was a natural extension of their
existing business. But in 1996 that was a hard sell. The middle managers we
talked to at catalog companies saw the Web not as an opportunity, but as
something that meant more work for them.
We did get a few of the more adventurous catalog companies. Among them was
Frederick's of Hollywood, which gave us valuable experience dealing with heavy
loads on our servers. But most of our users were small, individual merchants
who saw the Web as an opportunity to build a business. Some had retail stores,
but many only existed online. And so we changed direction to focus on these
users. Instead of concentrating on the features Web consultants and catalog
companies would want, we worked to make the software easy to use.
I learned something valuable from that. It's worth trying very, very hard to
make technology easy to use. Hackers are so used to computers that they have
no idea how horrifying software seems to normal people. Stephen Hawking's
editor told him that every equation he included in his book would cut sales in
half. When you work on making technology easier to use, you're riding that
curve up instead of down. A 10% improvement in ease of use doesn't just
increase your sales 10%. It's more likely to double your sales.
How do you figure out what customers want? Watch them. One of the best places
to do this was at trade shows. Trade shows didn't pay as a way of getting new
customers, but they were worth it as market research. We didn't just give
canned presentations at trade shows. We used to show people how to build real,
working stores. Which meant we got to watch as they used our software, and
talk to them about what they needed.
No matter what kind of startup you start, it will probably be a stretch for
you, the founders, to understand what users want. The only kind of software
you can build without studying users is the sort for which you are the typical
user. But this is just the kind that tends to be open source: operating
systems, programming languages, editors, and so on. So if you're developing
technology for money, you're probably not going to be developing it for people
like you. Indeed, you can use this as a way to generate ideas for startups:
what do people who are not like you want from technology?
When most people think of startups, they think of companies like Apple or
Google. Everyone knows these, because they're big consumer brands. But for
every startup like that, there are twenty more that operate in niche markets
or live quietly down in the infrastructure. So if you start a successful
startup, odds are you'll start one of those.
Another way to say that is, if you try to start the kind of startup that has
to be a big consumer brand, the odds against succeeding are steeper. The best
odds are in niche markets. Since startups make money by offering people
something better than they had before, the best opportunities are where things
suck most. And it would be hard to find a place where things suck more than in
corporate IT departments. You would not believe the amount of money companies
spend on software, and the crap they get in return. This imbalance equals
opportunity.
If you want ideas for startups, one of the most valuable things you could do
is find a middle-sized non-technology company and spend a couple weeks just
watching what they do with computers. Most good hackers have no more idea of
the horrors perpetrated in these places than rich Americans do of what goes on
in Brazilian slums.
Start by writing software for smaller companies, because it's easier to sell
to them. It's worth so much to sell stuff to big companies that the people
selling them the crap they currently use spend a lot of time and money to do
it. And while you can outhack Oracle with one frontal lobe tied behind your
back, you can't outsell an Oracle salesman. So if you want to win through
better technology, aim at smaller customers. [4]
They're the more strategically valuable part of the market anyway. In
technology, the low end always eats the high end. It's easier to make an
inexpensive product more powerful than to make a powerful product cheaper. So
the products that start as cheap, simple options tend to gradually grow more
powerful till, like water rising in a room, they squash the "high-end"
products against the ceiling. Sun did this to mainframes, and Intel is doing
it to Sun. Microsoft Word did it to desktop publishing software like Interleaf
and Framemaker. Mass-market digital cameras are doing it to the expensive
models made for professionals. Avid did it to the manufacturers of specialized
video editing systems, and now Apple is doing it to Avid. Henry Ford did it
to the car makers that preceded him. If you build the simple, inexpensive
option, you'll not only find it easier to sell at first, but you'll also be in
the best position to conquer the rest of the market.
It's very dangerous to let anyone fly under you. If you have the cheapest,
easiest product, you'll own the low end. And if you don't, you're in the
crosshairs of whoever does.
**Raising Money**
To make all this happen, you're going to need money. Some startups have been
self-funding-- Microsoft for example-- but most aren't. I think it's wise to
take money from investors. To be self-funding, you have to start as a
consulting company, and it's hard to switch from that to a product company.
Financially, a startup is like a pass/fail course. The way to get rich from a
startup is to maximize the company's chances of succeeding, not to maximize
the amount of stock you retain. So if you can trade stock for something that
improves your odds, it's probably a smart move.
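Put as arithmetic, with made-up numbers, the pass/fail logic looks like this:

```python
# Expected payoff = (chance of success) x (your stake) x (value if it
# works). Trading stock for something that improves the odds can be a
# clear net win.

value_if_success = 50_000_000  # hypothetical

keep_it_all = 0.05 * 1.00 * value_if_success  # 5% odds, keep 100%
trade_stock = 0.10 * 0.80 * value_if_success  # 10% odds, keep 80%

print(f"{keep_it_all:,.0f}")  # 2,500,000
print(f"{trade_stock:,.0f}")  # 4,000,000: giving up 20% of the stock
                              # costs less than doubling the odds gains
```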
To most hackers, getting investors seems like a terrifying and mysterious
process. Actually it's merely tedious. I'll try to give an outline of how it
works.
The first thing you'll need is a few tens of thousands of dollars to pay your
expenses while you develop a prototype. This is called seed capital. Because
so little money is involved, raising seed capital is comparatively easy-- at
least in the sense of getting a quick yes or no.
Usually you get seed money from individual rich people called "angels." Often
they're people who themselves got rich from technology. At the seed stage,
investors don't expect you to have an elaborate business plan. Most know that
they're supposed to decide quickly. It's not unusual to get a check within a
week based on a half-page agreement.
We started Viaweb with $10,000 of seed money from our friend Julian. But he
gave us a lot more than money. He's a former CEO and also a corporate lawyer,
so he gave us a lot of valuable advice about business, and also did all the
legal work of getting us set up as a company. Plus he introduced us to one of
the two angel investors who supplied our next round of funding.
Some angels, especially those with technology backgrounds, may be satisfied
with a demo and a verbal description of what you plan to do. But many will
want a copy of your business plan, if only to remind themselves what they
invested in.
Our angels asked for one, and looking back, I'm amazed how much worry it
caused me. "Business plan" has that word "business" in it, so I figured it had
to be something I'd have to read a book about business plans to write. Well,
it doesn't. At this stage, all most investors expect is a brief description of
what you plan to do and how you're going to make money from it, and the
resumes of the founders. If you just sit down and write out what you've been
saying to one another, that should be fine. It shouldn't take more than a
couple hours, and you'll probably find that writing it all down gives you more
ideas about what to do.
For the angel to have someone to make the check out to, you're going to have
to have some kind of company. Merely incorporating yourselves isn't hard. The
problem is, for the company to exist, you have to decide who the founders are,
and how much stock they each have. If there are two founders with the same
qualifications who are both equally committed to the business, that's easy.
But if you have a number of people who are expected to contribute in varying
degrees, arranging the proportions of stock can be hard. And once you've done
it, it tends to be set in stone.
I have no tricks for dealing with this problem. All I can say is, try hard to
do it right. I do have a rule of thumb for recognizing when you have, though.
When everyone feels they're getting a slightly bad deal, that they're doing
more than they should for the amount of stock they have, the stock is
optimally apportioned.
There is more to setting up a company than incorporating it, of course:
insurance, business license, unemployment compensation, various things with
the IRS. I'm not even sure what the list is, because we, ah, skipped all that.
When we got real funding near the end of 1996, we hired a great CFO, who fixed
everything retroactively. It turns out that no one comes and arrests you if
you don't do everything you're supposed to when starting a company. And a good
thing too, or a lot of startups would never get started. [5]
It can be dangerous to delay turning yourself into a company, because one or
more of the founders might decide to split off and start another company doing
the same thing. This does happen. So when you set up the company, as well as
apportioning the stock, you should get all the founders to sign something
agreeing that everyone's ideas belong to this company, and that this company
is going to be everyone's only job.
[If this were a movie, ominous music would begin here.]
While you're at it, you should ask what else they've signed. One of the worst
things that can happen to a startup is to run into intellectual property
problems. We did, and it came closer to killing us than any competitor ever
did.
As we were in the middle of getting bought, we discovered that one of our
people had, early on, been bound by an agreement that said all his ideas
belonged to the giant company that was paying for him to go to grad school. In
theory, that could have meant someone else owned big chunks of our software.
So the acquisition came to a screeching halt while we tried to sort this out.
The problem was, since we'd been about to be acquired, we'd allowed ourselves
to run low on cash. Now we needed to raise more to keep going. But it's hard
to raise money with an IP cloud over your head, because investors can't judge
how serious it is.
Our existing investors, knowing that we needed money and had nowhere else to
get it, at this point attempted certain gambits which I will not describe in
detail, except to remind readers that the word "angel" is a metaphor. The
founders thereupon proposed to walk away from the company, after giving the
investors a brief tutorial on how to administer the servers themselves. And
while this was happening, the acquirers used the delay as an excuse to welch
on the deal.
Miraculously it all turned out ok. The investors backed down; we did another
round of funding at a reasonable valuation; the giant company finally gave us
a piece of paper saying they didn't own our software; and six months later we
were bought by Yahoo for much more than the earlier acquirer had agreed to
pay. So we were happy in the end, though the experience probably took several
years off my life.
Don't do what we did. Before you consummate a startup, ask everyone about
their previous IP history.
Once you've got a company set up, it may seem presumptuous to go knocking on
the doors of rich people and asking them to invest tens of thousands of
dollars in something that is really just a bunch of guys with some ideas. But
when you look at it from the rich people's point of view, the picture is more
encouraging. Most rich people are looking for good investments. If you really
think you have a chance of succeeding, you're doing them a favor by letting
them invest. Mixed with any annoyance they might feel about being approached
will be the thought: are these guys the next Google?
Usually angels are financially equivalent to founders. They get the same kind
of stock and get diluted the same amount in future rounds. How much stock
should they get? That depends on how ambitious you feel. When you offer x
percent of your company for y dollars, you're implicitly claiming a certain
value for the whole company. Venture investments are usually described in
terms of that number. If you give an investor new shares equal to 5% of those
already outstanding in return for $100,000, then you've done the deal at a
pre-money valuation of $2 million.
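Since this arithmetic trips people up, here's a minimal sketch of it, using the numbers from the example above. (The function is purely illustrative; there's nothing standard about it.)

```python
# A sketch of the valuation arithmetic above. The figures are the
# essay's example; the function name is illustrative, not a standard API.

def deal_valuations(investment, new_shares_as_fraction_of_outstanding):
    """Return (pre_money, post_money) for a round in which the investor
    gets new shares equal to the given fraction of existing shares."""
    pre_money = investment / new_shares_as_fraction_of_outstanding
    post_money = pre_money + investment
    return pre_money, post_money

pre, post = deal_valuations(100_000, 0.05)
print(f"pre-money valuation:  ${pre:,.0f}")    # $2,000,000
print(f"post-money valuation: ${post:,.0f}")   # $2,100,000

# The investor ends up owning new/(old + new) of the company, and every
# existing holder, angels included, is diluted by the same factor.
print(f"investor's stake: {0.05 / 1.05:.1%}")  # 4.8%
```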
How do you decide what the value of the company should be? There is no
rational way. At this stage the company is just a bet. I didn't realize that
when we were raising money. Julian thought we ought to value the company at
several million dollars. I thought it was preposterous to claim that a couple
thousand lines of code, which was all we had at the time, were worth several
million dollars. Eventually we settled on one million, because Julian said no
one would invest in a company with a valuation any lower. [6]
What I didn't grasp at the time was that the valuation wasn't just the value
of the code we'd written so far. It was also the value of our ideas, which
turned out to be right, and of all the future work we'd do, which turned out
to be a lot.
The next round of funding is the one in which you might deal with actual
[venture capital firms](venturecapital.html). But don't wait till you've
burned through your last round of funding to start approaching them. VCs are
slow to make up their minds. They can take months. You don't want to be
running out of money while you're trying to negotiate with them.
Getting money from an actual VC firm is a bigger deal than getting money from
angels. The amounts of money involved are larger, millions usually. So the
deals take longer, dilute you more, and impose more onerous conditions.
Sometimes the VCs want to install a new CEO of their own choosing. Usually the
claim is that you need someone mature and experienced, with a business
background. Maybe in some cases this is true. And yet Bill Gates was young and
inexperienced and had no business background, and he seems to have done ok.
Steve Jobs got booted out of his own company by someone mature and
experienced, with a business background, who then proceeded to ruin the
company. So I think people who are mature and experienced, with a business
background, may be overrated. We used to call these guys "newscasters,"
because they had neat hair and spoke in deep, confident voices, and generally
didn't know much more than they read on the teleprompter.
We talked to a number of VCs, but eventually we ended up financing our startup
entirely with angel money. The main reason was that we feared a brand-name VC
firm would stick us with a newscaster as part of the deal. That might have
been ok if he was content to limit himself to talking to the press, but what
if he wanted to have a say in running the company? That would have led to
disaster, because our software was so complex. We were a company whose whole
m.o. was to win through better technology. The strategic decisions were mostly
decisions about technology, and we didn't need any help with those.
This was also one reason we didn't go public. Back in 1998 our CFO tried to
talk me into it. In those days you could go public as a dogfood portal, so as
a company with a real product and real revenues, we might have done well. But
I feared it would have meant taking on a newscaster-- someone who, as they
say, "can talk Wall Street's language."
I'm happy to see Google is bucking that trend. They didn't talk Wall Street's
language when they did their IPO, and Wall Street didn't buy. And now Wall
Street is collectively kicking itself. They'll pay attention next time. Wall
Street learns new languages fast when money is involved.
You have more leverage negotiating with VCs than you realize. The reason is
other VCs. I know a number of VCs now, and when you talk to them you realize
that it's a seller's market. Even now there is too much money chasing too few
good deals.
VCs form a pyramid. At the top are famous ones like Sequoia and Kleiner
Perkins, but beneath those are a huge number you've never heard of. What they
all have in common is that a dollar from them is worth one dollar. Most VCs
will tell you that they don't just provide money, but connections and advice.
If you're talking to Vinod Khosla or John Doerr or Mike Moritz, this is true.
But such advice and connections can come very expensive. And as you go down
the food chain the VCs get rapidly dumber. A few steps down from the top
you're basically talking to bankers who've picked up a few new vocabulary
words from reading _Wired_. (Does your product use _XML?_) So I'd advise you
to be skeptical about claims of experience and connections. Basically, a VC is
a source of money. I'd be inclined to go with whoever offered the most money
the soonest with the least strings attached.
You may wonder how much to tell VCs. And you should, because some of them may
one day be funding your competitors. I think the best plan is not to be
overtly secretive, but not to tell them everything either. After all, as most
VCs say, they're more interested in the people than the ideas. The main reason
they want to talk about your idea is to judge you, not the idea. So as long as
you seem like you know what you're doing, you can probably keep a few things
back from them. [7]
Talk to as many VCs as you can, even if you don't want their money, because a)
they may be on the board of someone who will buy you, and b) if you seem
impressive, they'll be discouraged from investing in your competitors. The
most efficient way to reach VCs, especially if you only want them to know
about you and don't want their money, is at the conferences that are
occasionally organized for startups to present to them.
**Not Spending It**
When and if you get an infusion of real money from investors, what should you
do with it? Not spend it, that's what. In nearly every startup that fails, the
proximate cause is running out of money. Usually there is something deeper
wrong. But even a proximate cause of death is worth trying hard to avoid.
During the Bubble many startups tried to "get big fast." Ideally this meant
getting a lot of customers fast. But it was easy for the meaning to slide over
into hiring a lot of people fast.
Of the two versions, the one where you get a lot of customers fast is of
course preferable. But even that may be overrated. The idea is to get there
first and get all the users, leaving none for competitors. But I think in most
businesses the advantages of being first to market are not so overwhelmingly
great. Google is again a case in point. When they appeared it seemed as if
search was a mature market, dominated by big players who'd spent millions to
build their brands: Yahoo, Lycos, Excite, Infoseek, Altavista, Inktomi. Surely
1998 was a little late to arrive at the party.
But as the founders of Google knew, brand is worth next to nothing in the
search business. You can come along at any point and make something better,
and users will gradually seep over to you. As if to emphasize the point,
Google never did any advertising. They're like dealers; they sell the stuff,
but they know better than to use it themselves.
The competitors Google buried would have done better to spend those millions
improving their software. Future startups should learn from that mistake.
Unless you're in a market where products are as undifferentiated as cigarettes
or vodka or laundry detergent, spending a lot on brand advertising is a sign
of breakage. And few if any Web businesses are so undifferentiated. The dating
sites are running big ad campaigns right now, which is all the more evidence
they're ripe for the picking. (Fee, fie, fo, fum, I smell a company run by
marketing guys.)
We were compelled by circumstances to grow slowly, and in retrospect it was a
good thing. The founders all learned to do every job in the company. As well
as writing software, I had to do sales and customer support. At sales I was
not very good. I was persistent, but I didn't have the smoothness of a good
salesman. My message to potential customers was: you'd be stupid not to sell
online, and if you sell online you'd be stupid to use anyone else's software.
Both statements were true, but that's not the way to convince people.
I was great at customer support though. Imagine talking to a customer support
person who not only knew everything about the product, but would apologize
abjectly if there was a bug, and then fix it immediately, while you were on
the phone with them. Customers loved us. And we loved them, because when
you're growing slow by word of mouth, your first batch of users are the ones
who were smart enough to find you by themselves. There is nothing more
valuable, in the early stages of a startup, than smart users. If you listen to
them, they'll tell you exactly how to make a winning product. And not only
will they give you this advice for free, they'll pay you.
We officially launched in early 1996. By the end of that year we had about 70
users. Since this was the era of "get big fast," I worried about how small and
obscure we were. But in fact we were doing exactly the right thing. Once you
get big (in users or employees) it gets hard to change your product. That year
was effectively a laboratory for improving our software. By the end of it, we
were so far ahead of our competitors that they never had a hope of catching
up. And since all the hackers had spent many hours talking to users, we
understood online commerce way better than anyone else.
That's the key to success as a startup. There is nothing more important than
understanding your business. You might think that anyone in a business must,
ex officio, understand it. Far from it. Google's secret weapon was simply that
they understood search. I was working for Yahoo when Google appeared, and
Yahoo didn't understand search. I know because I once tried to convince the
powers that be that we had to make search better, and I got in reply what was
then the party line about it: that Yahoo was no longer a mere "search engine."
Search was now only a small percentage of our page views, less than one
month's growth, and now that we were established as a "media company," or
"portal," or whatever we were, search could safely be allowed to wither and
drop off, like an umbilical cord.
Well, a small fraction of page views they may be, but they are an important
fraction, because they are the page views that Web sessions start with. I
think Yahoo gets that now.
Google understands a few other things most Web companies still don't. The most
important is that you should put users before advertisers, even though the
advertisers are paying and users aren't. One of my favorite bumper stickers
reads "if the people lead, the leaders will follow." Paraphrased for the Web,
this becomes "get all the users, and the advertisers will follow." More
generally, design your product to please users first, and then think about how
to make money from it. If you don't put users first, you leave a gap for
competitors who do.
To make something users love, you have to understand them. And the bigger you
are, the harder that is. So I say "get big slow." The slower you burn through
your funding, the more time you have to learn.
The other reason to spend money slowly is to encourage a culture of cheapness.
That's something Yahoo did understand. David Filo's title was "Chief Yahoo,"
but he was proud that his unofficial title was "Cheap Yahoo." Soon after we
arrived at Yahoo, we got an email from Filo, who had been crawling around our
directory hierarchy, asking if it was really necessary to store so much of our
data on expensive RAID drives. I was impressed by that. Yahoo's market cap
then was already in the billions, and they were still worrying about wasting a
few gigs of disk space.
When you get a couple million dollars from a VC firm, you tend to feel rich.
It's important to realize you're not. A rich company is one with large
revenues. This money isn't revenue. It's money investors have given you in the
hope you'll be able to generate revenues. So despite those millions in the
bank, you're still poor.
For most startups the model should be grad student, not law firm. Aim for cool
and cheap, not expensive and impressive. For us the test of whether a startup
understood this was whether they had Aeron chairs. The Aeron came out during
the Bubble and was very popular with startups. Especially the type, all too
common then, that was like a bunch of kids playing house with money supplied
by VCs. We had office chairs so cheap that the arms all fell off. This was
slightly embarrassing at the time, but in retrospect the grad-studenty
atmosphere of our office was another of those things we did right without
knowing it.
Our offices were in a wooden triple-decker in Harvard Square. It had been an
apartment until about the 1970s, and there was still a claw-footed bathtub in
the bathroom. It must once have been inhabited by someone fairly eccentric,
because a lot of the chinks in the walls were stuffed with aluminum foil, as
if to protect against cosmic rays. When eminent visitors came to see us, we
were a bit sheepish about the low production values. But in fact that place
was the perfect space for a startup. We felt like our role was to be impudent
underdogs instead of corporate stuffed shirts, and that is exactly the spirit
you want.
An apartment is also the right kind of place for developing software. Cube
farms suck for that, as you've probably discovered if you've tried it. Ever
notice how much easier it is to hack at home than at work? So why not make
work more like home?
When you're looking for space for a startup, don't feel that it has to look
professional. Professional means doing good work, not elevators and glass
walls. I'd advise most startups to avoid corporate space at first and just
rent an apartment. You want to live at the office in a startup, so why not
have a place designed to be lived in as your office?
Besides being cheaper and better to work in, apartments tend to be in better
locations than office buildings. And for a startup location is very important.
The key to productivity is for people to come back to work after dinner. Those
hours after the phone stops ringing are by far the best for getting work done.
Great things happen when a group of employees go out to dinner together, talk
over ideas, and then come back to their offices to implement them. So you want
to be in a place where there are a lot of restaurants around, not some dreary
office park that's a wasteland after 6:00 PM. Once a company shifts over into
the model where everyone drives home to the suburbs for dinner, however late,
you've lost something extraordinarily valuable. God help you if you actually
start in that mode.
If I were going to start a startup today, there are only three places I'd
consider doing it: on the Red Line near Central, Harvard, or Davis Squares
(Kendall is too sterile); in Palo Alto on University or California Aves; and
in Berkeley immediately north or south of campus. These are the only places I
know that have the right kind of vibe.
The most important way to not spend money is by not hiring people. I may be an
extremist, but I think hiring people is the worst thing a company can do. To
start with, people are a recurring expense, which is the worst kind. They also
tend to cause you to grow out of your space, and perhaps even move to the sort
of uncool office building that will make your software worse. But worst of
all, they slow you down: instead of sticking your head in someone's office and
checking out an idea with them, eight people have to have a meeting about it.
So the fewer people you can hire, the better.
During the Bubble a lot of startups had the opposite policy. They wanted to
get "staffed up" as soon as possible, as if you couldn't get anything done
unless there was someone with the corresponding job title. That's big company
thinking. Don't hire people to fill the gaps in some a priori org chart. The
only reason to hire someone is to do something you'd like to do but can't.
If hiring unnecessary people is expensive and slows you down, why do nearly
all companies do it? I think the main reason is that people like the idea of
having a lot of people working for them. This weakness often extends right up
to the CEO. If you ever end up running a company, you'll find the most common
question people ask is how many employees you have. This is their way of
weighing you. It's not just random people who ask this; even reporters do. And
they're going to be a lot more impressed if the answer is a thousand than if
it's ten.
This is ridiculous, really. If two companies have the same revenues, it's the
one with fewer employees that's more impressive. When people used to ask me
how many people our startup had, and I answered "twenty," I could see them
thinking that we didn't count for much. I used to want to add "but our main
competitor, whose ass we regularly kick, has a hundred and forty, so can we
have credit for the larger of the two numbers?"
As with office space, the number of your employees is a choice between seeming
impressive, and being impressive. Any of you who were [nerds](nerds.html) in
high school know about this choice. Keep doing it when you start a company.
**Should You?**
But should you start a company? Are you the right sort of person to do it? If
you are, is it worth it?
More people are the right sort of person to start a startup than realize it.
That's the main reason I wrote this. There could be ten times more startups
than there are, and that would probably be a good thing.
I was, I now realize, exactly the right sort of person to start a startup. But
the idea terrified me at first. I was forced into it because I was a
[Lisp](icad.html) hacker. The company I'd been consulting for seemed to be
running into trouble, and there were not a lot of other companies using Lisp.
Since I couldn't bear the thought of programming in another language (this was
1995, remember, when "another language" meant C++) the only option seemed to
be to start a new company using Lisp.
I realize this sounds far-fetched, but if you're a Lisp hacker you'll know
what I mean. And if the idea of starting a startup frightened me so much that
I only did it out of necessity, there must be a lot of people who would be
good at it but who are too intimidated to try.
So who should start a startup? Someone who is a good hacker, between about 23
and 38, and who wants to solve the money problem in one shot instead of
getting paid gradually over a conventional working life.
I can't say precisely what a good hacker is. At a first-rate university this
might include the top half of computer science majors. Though of course you
don't have to be a CS major to be a hacker; I was a philosophy major in
college.
It's hard to tell whether you're a good hacker, especially when you're young.
Fortunately the process of starting startups tends to select them
automatically. What drives people to start startups is (or should be) looking
at existing technology and thinking, don't these guys realize they should be
doing x, y, and z? And that's also a sign that one is a good hacker.
I put the lower bound at 23 not because there's something that doesn't happen
to your brain till then, but because you need to see what it's like in an
existing business before you try running your own. The business doesn't have
to be a startup. I spent a year working for a software company to pay off my
college loans. It was the worst year of my adult life, but I learned, without
realizing it at the time, a lot of valuable lessons about the software
business. In this case they were mostly negative lessons: don't have a lot of
meetings; don't have chunks of code that multiple people own; don't have a
sales guy running the company; don't make a high-end product; don't let your
code get too big; don't leave finding bugs to QA people; don't go too long
between releases; don't isolate developers from users; don't move from
Cambridge to Route 128; and so on. [8] But negative lessons are just as
valuable as positive ones. Perhaps even more valuable: it's hard to repeat a
brilliant performance, but it's straightforward to avoid errors. [9]
The other reason it's hard to start a company before 23 is that people won't
take you seriously. VCs won't trust you, and will try to reduce you to a
mascot as a condition of funding. Customers will worry you're going to flake
out and leave them stranded. Even you yourself, unless you're very unusual,
will feel your age to some degree; you'll find it awkward to be the boss of
someone much older than you, and if you're 21, hiring only people younger
rather limits your options.
Some people could probably start a company at 18 if they wanted to. Bill Gates
was 19 when he and Paul Allen started Microsoft. (Paul Allen was 22, though,
and that probably made a difference.) So if you're thinking, I don't care what
he says, I'm going to start a company now, you may be the sort of person who
could get away with it.
The other cutoff, 38, has a lot more play in it. One reason I put it there is
that I don't think many people have the physical stamina much past that age. I
used to work till 2:00 or 3:00 AM every night, seven days a week. I don't know
if I could do that now.
Also, startups are a big risk financially. If you try something that blows up
and leaves you broke at 26, big deal; a lot of 26-year-olds are broke. By 38
you can't take so many risks-- especially if you have kids.
My final test may be the most restrictive. Do you actually want to start a
startup? What it amounts to, economically, is compressing your working life
into the smallest possible space. Instead of working at an ordinary rate for
40 years, you work like hell for four. And maybe end up with nothing-- though
in that case it probably won't take four years.
During this time you'll do little but work, because when you're not working,
your competitors will be. My only leisure activities were running, which I
needed to do to keep working anyway, and about fifteen minutes of reading a
night. I had a girlfriend for a total of two months during that three-year
period. Every couple weeks I would take a few hours off to visit a used
bookshop or go to a friend's house for dinner. I went to visit my family
twice. Otherwise I just worked.
Working was often fun, because the people I worked with were some of my best
friends. Sometimes it was even technically interesting. But only about 10% of
the time. The best I can say for the other 90% is that some of it is funnier
in hindsight than it seemed then. Like the time the power went off in
Cambridge for about six hours, and we made the mistake of trying to start a
gasoline powered generator inside our offices. I won't try that again.
I don't think the amount of bullshit you have to deal with in a startup is
more than you'd endure in an ordinary working life. It's probably less, in
fact; it just seems like a lot because it's compressed into a short period. So
mainly what a startup buys you is time. That's the way to think about it if
you're trying to decide whether to start one. If you're the sort of person who
would like to solve the money problem once and for all instead of working for
a salary for 40 years, then a startup makes sense.
For a lot of people the conflict is between startups and graduate school. Grad
students are just the age, and just the sort of people, to start software
startups. You may worry that if you do you'll blow your chances of an academic
career. But it's possible to be part of a startup and stay in grad school,
especially at first. Two of our three original hackers were in grad school the
whole time, and both got their [degrees](tlbphd.html). There are few sources
of energy so powerful as a procrastinating grad student.
If you do have to leave grad school, in the worst case it won't be for too
long. If a startup fails, it will probably fail quickly enough that you can
return to academic life. And if it succeeds, you may find you no longer have
such a burning desire to be an assistant professor.
If you want to do it, do it. Starting a startup is not the great mystery it
seems from outside. It's not something you have to know about "business" to
do. Build something users love, and spend less than you make. How hard is
that?
**Notes**
[1] Google's revenues are about two billion a year, but half comes from ads on
other sites.
[2] One advantage startups have over established companies is that there are
no discrimination laws about starting businesses. For example, I would be
reluctant to start a startup with a woman who had small children, or was
likely to have them soon. But you're not allowed to ask prospective employees
if they plan to have kids soon. Believe it or not, under current US law,
you're not even allowed to discriminate on the basis of intelligence. Whereas
when you're starting a company, you can discriminate on any basis you want
about who you start it with.
[3] Learning to hack is a lot cheaper than business school, because you can do
it mostly on your own. For the price of a Linux box, a copy of K&R, and a few
hours of advice from your neighbor's fifteen-year-old son, you'll be well on
your way.
[4] Corollary: Avoid starting a startup to sell things to the biggest company
of all, the government. Yes, there are lots of opportunities to sell them
technology. But let someone else start those startups.
[5] A friend who started a company in Germany told me they do care about the
paperwork there, and that there's more of it. Which helps explain why there
are not more startups in Germany.
[6] At the seed stage our valuation was in principle $100,000, because Julian
got 10% of the company. But this is a very misleading number, because the
money was the least important of the things Julian gave us.
[7] The same goes for companies that seem to want to acquire you. There will
be a few that are only pretending to in order to pick your brains. But you can
never tell for sure which these are, so the best approach is to seem entirely
open, but to fail to mention a few critical technical secrets.
[8] I was as bad an employee as this place was a company. I apologize to
anyone who had to work with me there.
[9] You could probably write a book about how to succeed in business by doing
everything in exactly the opposite way from the DMV.
**Thanks** to Trevor Blackwell, Sarah Harlin, Jessica Livingston, and Robert
Morris for reading drafts of this essay, and to Steve Melendez and Gregory
Price for inviting me to speak.
|
November 2004
A lot of people are writing now about why Kerry lost. Here I want to examine a
more specific question: why were the exit polls so wrong?
In Ohio, which Kerry ultimately lost 49-51, exit polls gave him a 52-48
victory. And this wasn't just random error. In every swing state they
overestimated the Kerry vote. In Florida, which Bush ultimately won 52-47,
exit polls predicted a dead heat.
(These are not early numbers. They're from about midnight eastern time, long
after polls closed in Ohio and Florida. And yet by the next afternoon the exit
poll numbers online corresponded to the returns. The only way I can imagine
this happening is if those in charge of the exit polls cooked the books after
seeing the actual returns. But that's another issue.)
What happened? The source of the problem may be a variant of the Bradley
Effect. This term was invented after Tom Bradley, the black mayor of Los
Angeles, lost an election for governor of California despite a comfortable
lead in the polls. Apparently voters were afraid to say they planned to vote
against him, lest their motives be (perhaps correctly) suspected.
It seems likely that something similar happened in exit polls this year. In
theory, exit polls ought to be very accurate. You're not asking people what
they would do. You're asking what they just did.
How can you get errors asking that? Because some people don't respond. To get
a truly random sample, pollsters ask, say, every 20th person leaving the
polling place who they voted for. But not everyone wants to answer. And the
pollsters can't simply ignore those who won't, or their sample isn't random
anymore. So what they do, apparently, is note down the age and race and sex of
the person, and guess from that who they voted for.
This works so long as there is no _correlation_ between who people vote for
and whether they're willing to talk about it. But this year there may have
been. It may be that a significant number of those who voted for Bush didn't
want to say so.
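To see how much such a correlation can matter, here's a toy simulation. The response rates are invented purely for illustration, and the demographic guess is crudely modeled as a coin flip, i.e. a guess that carries no information about the vote:

```python
# Toy model: the true vote is 51% Bush, but Bush voters are assumed to
# answer the pollster less often. Refusals get imputed; modeling the
# imputation as a coin flip makes the resulting bias easy to see.
import random

random.seed(0)
n = 100_000
true_bush_share = 0.51
response_rate = {"bush": 0.35, "kerry": 0.50}  # invented for illustration

polled_bush = 0
for _ in range(n):
    voted_bush = random.random() < true_bush_share
    answered = random.random() < response_rate["bush" if voted_bush else "kerry"]
    if answered:
        polled_bush += voted_bush             # honest answer
    else:
        polled_bush += random.random() < 0.5  # uninformative imputation

print(f"true Bush share:      {true_bush_share:.1%}")  # 51.0%
print(f"exit poll Bush share: {polled_bush / n:.1%}")  # about 46.7%
```

With honest sampling the two numbers would match; the gap comes entirely from the assumed correlation between voting for Bush and declining to answer.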
Why not? Because people in the US are more conservative than they're willing
to admit. The values of the elite in this country, at least at the moment, are
NPR values. The average person, as I think both Republicans and Democrats
would agree, is more socially conservative. But while some openly flaunt the
fact that they don't share the opinions of the elite, others feel a little
nervous about it, as if they had bad table manners.
For example, according to current NPR values, you [can't say](say.html)
anything that might be perceived as disparaging towards homosexuals. To do so
is "homophobic." And yet a large number of Americans are deeply religious, and
the Bible is quite explicit on the subject of homosexuality. What are they to
do? I think what many do is keep their opinions, but keep them to themselves.
They know what they believe, but they also know what they're supposed to
believe. And so when a stranger (for example, a pollster) asks them their
opinion about something like gay marriage, they will not always say what they
really think.
When the values of the elite are liberal, polls will tend to underestimate the
conservativeness of ordinary voters. This seems to me the leading theory to
explain why the exit polls were so far off this year. NPR values said one
ought to vote for Kerry. So all the people who voted for Kerry felt virtuous
for doing so, and were eager to tell pollsters they had. No one who voted for
Kerry did it as an act of quiet defiance.
|
March 2006
_(This essay is derived from a talk at Google.)_
A few weeks ago I found to my surprise that I'd been granted four
[patents](http://paulgraham.infogami.com/blog/morepatents). This was all the
more surprising because I'd only applied for three. The patents aren't mine,
of course. They were assigned to Viaweb, and became Yahoo's when they bought
us. But the news set me thinking about the question of software patents
generally.
Patents are a hard problem. I've had to advise most of the startups we've
funded about them, and despite years of experience I'm still not always sure
I'm giving the right advice.
One thing I do feel pretty certain of is that if you're against software
patents, you're against patents in general. Gradually our machines consist
more and more of software. Things that used to be done with levers and cams
and gears are now done with loops and trees and closures. There's nothing
special about physical embodiments of control systems that should make them
patentable, and the software equivalent not.
Unfortunately, patent law is inconsistent on this point. Patent law in most
countries says that algorithms aren't patentable. This rule is left over from
a time when "algorithm" meant something like the Sieve of Eratosthenes. In
1800, people could not see as readily as we can that a great many patents on
mechanical objects were really patents on the algorithms they embodied.
Patent lawyers still have to pretend that's what they're doing when they
patent algorithms. You must not use the word "algorithm" in the title of a
patent application, just as you must not use the word "essays" in the title of
a book. If you want to patent an algorithm, you have to frame it as a computer
system executing that algorithm. Then it's mechanical; phew. The default
euphemism for algorithm is "system and method." Try a patent search for that
phrase and see how many results you get.
Since software patents are no different from hardware patents, people who say
"software patents are evil" are saying simply "patents are evil." So why do so
many people complain about software patents specifically?
I think the problem is more with the patent office than the concept of
software patents. Whenever software meets government, bad things happen,
because software changes fast and government changes slow. The patent office
has been overwhelmed by both the volume and the novelty of applications for
software patents, and as a result they've made a lot of mistakes.
The most common is to grant patents that shouldn't be granted. To be
patentable, an invention has to be more than new. It also has to be non-
obvious. And this, especially, is where the USPTO has been dropping the ball.
Slashdot has an icon that expresses the problem vividly: a knife and fork with
the words "patent pending" superimposed.
The scary thing is, this is the _only_ icon they have for patent stories.
Slashdot readers now take it for granted that a story about a patent will be
about a bogus patent. That's how bad the problem has become.
The problem with Amazon's notorious one-click patent, for example, is not that
it's a software patent, but that it's obvious. Any online store that kept
people's shipping addresses would have implemented this. The reason Amazon did
it first was not that they were especially smart, but because they were one of
the earliest sites with enough clout to force customers to log in before they
could buy something. [1]
We, as hackers, know the USPTO is letting people patent the knives and forks
of our world. The problem is, the USPTO are not hackers. They're probably good
at judging new inventions for casting steel or grinding lenses, but they don't
understand software yet.
At this point an optimist would be tempted to add "but they will eventually."
Unfortunately that might not be true. The problem with software patents is an
instance of a more general one: the patent office takes a while to understand
new technology. If so, this problem will only get worse, because the rate of
technological change seems to be increasing. In thirty years, the patent
office may understand the sort of things we now patent as software, but there
will be other new types of inventions they understand even less.
Applying for a patent is a negotiation. You generally apply for a broader
patent than you think you'll be granted, and the examiners reply by throwing
out some of your claims and granting others. So I don't really blame Amazon
for applying for the one-click patent. The big mistake was the patent
office's, for not insisting on something narrower, with real technical
content. By granting such an over-broad patent, the USPTO in effect slept with
Amazon on the first date. Was Amazon supposed to say no?
Where Amazon went over to the dark side was not in applying for the patent,
but in enforcing it. A lot of companies (Microsoft, for example) have been
granted large numbers of preposterously over-broad patents, but they keep them
mainly for defensive purposes. Like nuclear weapons, the main role of big
companies' patent portfolios is to threaten anyone who attacks them with a
counter-suit. Amazon's suit against Barnes & Noble was thus the equivalent of
a nuclear first strike.
That suit probably hurt Amazon more than it helped them. Barnes & Noble was a
lame site; Amazon would have crushed them anyway. To attack a rival they could
have ignored, Amazon put a lasting black mark on their own reputation. Even
now I think if you asked hackers to free-associate about Amazon, the one-click
patent would turn up in the first ten topics.
Google clearly doesn't feel that merely holding patents is evil. They've
applied for a lot of them. Are they hypocrites? Are patents evil?
There are really two variants of that question, and people answering it often
aren't clear in their own minds which they're answering. There's a narrow
variant: is it bad, given the current legal system, to apply for patents? and
also a broader one: is it bad that the current legal system allows patents?
These are separate questions. For example, in preindustrial societies like
medieval Europe, when someone attacked you, you didn't call the police. There
were no police. When attacked, you were supposed to fight back, and there were
conventions about how to do it. Was this wrong? That's two questions: was it
wrong to take justice into your own hands, and was it wrong that you had to?
We tend to say yes to the second, but no to the first. If no one else will
defend you, you have to defend yourself. [2]
The situation with patents is similar. Business is a kind of ritualized
warfare. Indeed, it evolved from actual warfare: most early traders switched
on the fly from merchants to pirates depending on how strong you seemed. In
business there are certain rules describing how companies may and may not
compete with one another, and someone deciding that they're going to play by
their own rules is missing the point. Saying "I'm not going to apply for
patents just because everyone else does" is not like saying "I'm not going to
lie just because everyone else does." It's more like saying "I'm not going to
use TCP/IP just because everyone else does." Oh yes you are.
A closer comparison might be someone seeing a hockey game for the first time,
realizing with shock that the players were _deliberately_ bumping into one
another, and deciding that one would on no account be so rude when playing
hockey oneself.
Hockey allows checking. It's part of the game. If your team refuses to do it,
you simply lose. So it is in business. Under the present rules, patents are
part of the game.
What does that mean in practice? We tell the startups we fund not to worry
about infringing patents, because startups rarely get sued for patent
infringement. There are only two reasons someone might sue you: for money, or
to prevent you from competing with them. Startups are too poor to be worth
suing for money. And in practice they don't seem to get sued much by
competitors, either. They don't get sued by other startups because (a) patent
suits are an expensive distraction, and (b) since the other startups are as
young as they are, their patents probably haven't issued yet. [3] Nor do
startups, at least in the software business, seem to get sued much by
established competitors. Despite all the patents Microsoft holds, I don't know
of an instance where they sued a startup for patent infringement. Companies
like Microsoft and Oracle don't win by winning lawsuits. That's too uncertain.
They win by locking competitors out of their sales channels. If you do manage
to threaten them, they're more likely to buy you than sue you.
When you read of big companies filing patent suits against smaller ones, it's
usually a big company on the way down, grasping at straws. For example,
Unisys's attempts to enforce their patent on LZW compression. When you see a
big company threatening patent suits, sell. When a company starts fighting
over IP, it's a sign they've lost the real battle, for users.
A company that sues competitors for patent infringement is like a defender who
has been beaten so thoroughly that he turns to plead with the referee. You
don't do that if you can still reach the ball, even if you genuinely believe
you've been fouled. So a company threatening patent suits is a company in
[trouble](http://www.theregister.co.uk/2006/03/15/azul_sues_sun/).
When we were working on Viaweb, a bigger company in the e-commerce business
was granted a patent on online ordering, or something like that. I got a call
from a VP there asking if we'd like to license it. I replied that I thought
the patent was completely bogus, and would never hold up in court. "Ok," he
replied. "So, are you guys hiring?"
If your startup grows big enough, however, you'll start to get sued, no matter
what you do. If you go public, for example, you'll be sued by multiple patent
trolls who hope you'll pay them off to go away. More on them later.
In other words, no one will sue you for patent infringement till you have
money, and once you have money, people will sue you whether they have grounds
to or not. So I advise fatalism. Don't waste your time worrying about patent
infringement. You're probably violating a patent every time you tie your
shoelaces. At the start, at least, just worry about making something great and
getting lots of users. If you grow to the point where anyone considers you
worth attacking, you're doing well.
We do advise the companies we fund to apply for patents, but not so they can
sue competitors. Successful startups either get bought or grow into big
companies. If a startup wants to grow into a big company, they should apply
for patents to build up the patent portfolio they'll need to maintain an armed
truce with other big companies. If they want to get bought, they should apply
for patents because patents are part of the mating dance with acquirers.
Most startups that succeed do it by getting bought, and most acquirers care
about patents. Startup acquisitions are usually a build-vs-buy decision for
the acquirer. Should we buy this little startup or build our own? And two
things, especially, make them decide not to build their own: if you already
have a large and rapidly growing user base, and if you have a fairly solid
patent application on critical parts of your software.
There's a third reason big companies should prefer buying to building: that if
they built their own, they'd screw it up. But few big companies are smart
enough yet to admit this to themselves. It's usually the acquirer's engineers
who are asked how hard it would be for the company to build their own, and
they overestimate their abilities. [4] A patent seems to change the balance.
It gives the acquirer an excuse to admit they couldn't copy what you're doing.
It may also help them to grasp what's special about your technology.
Frankly, it surprises me how small a role patents play in the software
business. It's kind of ironic, considering all the dire things experts say
about software patents stifling innovation, but when one looks closely at the
software business, the most striking thing is how little patents seem to
matter.
In other fields, companies regularly sue competitors for patent infringement.
For example, the airport baggage scanning business was for many years a cozy
duopoly shared between two companies, InVision and L-3. In 2002 a startup
called Reveal appeared, with new technology that let them build scanners a
third the size. They were sued for patent infringement before they'd even
released a product.
You rarely hear that kind of story in our world. The one example I've found
is, embarrassingly enough, Yahoo, which filed a patent suit against a gaming
startup called Xfire in 2005. Xfire doesn't seem to be a very big deal, and
it's hard to say why Yahoo felt threatened. Xfire's VP of engineering had
worked at Yahoo on similar stuff-- in fact, he was listed as an inventor on
the patent Yahoo sued over-- so perhaps there was something personal about it.
My guess is that someone at Yahoo goofed. At any rate they didn't pursue the
suit very vigorously.
Why do patents play so small a role in software? I can think of three possible
reasons.
One is that software is so complicated that patents by themselves are not
worth very much. I may be maligning other fields here, but it seems that in
most types of engineering you can hand the details of some new technique to a
group of medium-high quality people and get the desired result. For example,
if someone develops a new process for smelting ore that gets a better yield,
and you assemble a team of qualified experts and tell them about it, they'll
be able to get the same yield. This doesn't seem to work in software. Software
is so subtle and unpredictable that "qualified experts" don't get you very
far.
That's why we rarely hear phrases like "qualified expert" in the software
business. What that level of ability can get you is, say, to make your
software compatible with some other piece of software-- in eight months, at
enormous cost. To do anything harder you need individual brilliance. If you
assemble a team of qualified experts and tell them to make a new web-based
email program, they'll get their asses kicked by a team of inspired nineteen
year olds.
Experts can implement, but they can't [design](taste.html). Or rather,
expertise in implementation is the only kind most people, including the
experts themselves, can measure. [5]
But design is a definite skill. It's not just an airy intangible. Things
always seem intangible when you don't understand them. Electricity seemed an
airy intangible to most people in 1800. Who knew there was so much to know
about it? So it is with design. Some people are good at it and some people are
bad at it, and there's something very tangible they're good or bad at.
The reason design counts so much in software is probably that there are fewer
constraints than on physical things. Building physical things is expensive and
dangerous. The space of possible choices is smaller; you tend to have to work
as part of a larger group; and you're subject to a lot of regulations. You
don't have any of that if you and a couple friends decide to create a new web-
based application.
Because there's so much scope for design in software, a successful application
tends to be way more than the sum of its patents. What protects little
companies from being copied by bigger competitors is not just their patents,
but the thousand little things the big company will get wrong if they try.
The second reason patents don't count for much in our world is that startups
rarely attack big companies head-on, the way Reveal did. In the software
business, startups beat established companies by transcending them. Startups
don't build desktop word processing programs to compete with Microsoft Word.
[6] They build Writely. If this paradigm is crowded, just wait for the next
one; they run pretty frequently on this route.
Fortunately for startups, big companies are extremely good at denial. If you
take the trouble to attack them from an oblique angle, they'll meet you half-
way and maneuver to keep you in their blind spot. To sue a startup would mean
admitting it was dangerous, and that often means seeing something the big
company doesn't want to see. IBM used to sue its mainframe competitors
regularly, but they didn't bother much about the microcomputer industry
because they didn't want to see the threat it posed. Companies building
web-based apps are similarly protected from Microsoft, which even now doesn't want
to imagine a world in which Windows is irrelevant.
The third reason patents don't seem to matter very much in software is public
opinion-- or rather, hacker opinion. In a recent
[interview](http://www.computing.co.uk/forbes/news/2152720/interview-steve-
ballmer-linux), Steve Ballmer coyly left open the possibility of attacking
Linux on patent grounds. But I doubt Microsoft would ever be so stupid. They'd
face the mother of all boycotts. And not just from the technical community in
general; a lot of their own people would rebel.
Good hackers care a lot about matters of principle, and they are highly
mobile. If a company starts misbehaving, smart people won't work there. For
some reason this seems to be more true in software than other businesses. I
don't think it's because hackers have intrinsically higher principles so much
as that their skills are easily transferable. Perhaps we can split the
difference and say that mobility gives hackers the luxury of being principled.
Google's "don't be evil" policy may for this reason be the most valuable thing
they've discovered. It's very constraining in some ways. If Google does do
something evil, they get doubly whacked for it: once for whatever they did,
and again for hypocrisy. But I think it's worth it. It helps them to hire the
best people, and it's better, even from a purely selfish point of view, to be
constrained by principles than by stupidity.
(I wish someone would get this point across to the present administration.)
I'm not sure what the proportions are of the preceding three ingredients, but
the custom among the big companies seems to be not to sue the small ones, and
the startups are mostly too busy and too poor to sue one another. So despite
the huge number of software patents there's not a lot of suing going on. With
one exception: patent trolls.
Patent trolls are companies consisting mainly of lawyers whose whole business
is to accumulate patents and threaten to sue companies who actually make
things. Patent trolls, it seems safe to say, are evil. I feel a bit stupid
saying that, because when you're saying something that Richard Stallman and
Bill Gates would both agree with, you must be perilously close to tautologies.
The CEO of Forgent, one of the most notorious patent trolls, says that what
his company does is "the American way." Actually that's not true. The American
way is to make money by [creating wealth](wealth.html), not by suing people.
[7] What companies like Forgent do is actually the proto-industrial way. In
the period just before the industrial revolution, some of the greatest
fortunes in countries like England and France were made by courtiers who
extracted some lucrative right from the crown-- like the right to collect
taxes on the import of silk-- and then used this to squeeze money from the
merchants in that business. So when people compare patent trolls to the mafia,
they're more right than they know, because the mafia too are not merely bad,
but bad specifically in the sense of being an obsolete business model.
Patent trolls seem to have caught big companies by surprise. In the last
couple years they've extracted hundreds of millions of dollars from them.
Patent trolls are hard to fight precisely because they create nothing. Big
companies are safe from being sued by other big companies because they can
threaten a counter-suit. But because patent trolls don't make anything,
there's nothing they can be sued for. I predict this loophole will get closed
fairly quickly, at least by legal standards. It's clearly an abuse of the
system, and the victims are powerful. [8]
But evil as patent trolls are, I don't think they hamper innovation much. They
don't sue till a startup has made money, and by that point the innovation that
generated it has already happened. I can't think of a startup that avoided
working on some problem because of patent trolls.
So much for hockey as the game is played now. What about the more theoretical
question of whether hockey would be a better game without checking? Do patents
encourage or discourage innovation?
This is a very hard question to answer in the general case. People write whole
books on the topic. One of my main hobbies is the history of technology, and
even though I've studied the subject for years, it would take me several weeks
of research to be able to say whether patents have in general been a net win.
One thing I can say is that 99.9% of the people who express opinions on the
subject do it not based on such research, but out of a kind of religious
conviction. At least, that's the polite way of putting it; the colloquial
version involves speech coming out of organs not designed for that purpose.
Whether they encourage innovation or not, patents were at least intended to.
You don't get a patent for nothing. In return for the exclusive right to use
an idea, you have to _publish_ it, and it was largely to encourage such
openness that patents were established.
Before patents, people protected ideas by keeping them secret. With patents,
central governments said, in effect, if you tell everyone your idea, we'll
protect it for you. There is a parallel here to the rise of civil order, which
happened at roughly the same time. Before central governments were powerful
enough to enforce order, rich people had private armies. As governments got
more powerful, they gradually compelled magnates to cede most responsibility
for protecting them. (Magnates still have bodyguards, but no longer to protect
them from other magnates.)
Patents, like police, are involved in many abuses. But in both cases the
default is something worse. The choice is not "patents or freedom?" any more
than it is "police or freedom?" The actual questions are respectively "patents
or secrecy?" and "police or gangs?"
As with gangs, we have some idea what secrecy would be like, because that's
how things used to be. The economy of medieval Europe was divided up into
little tribes, each jealously guarding their privileges and secrets. In
Shakespeare's time, "mystery" was synonymous with "craft." Even today we can
see an echo of the secrecy of medieval guilds, in the now pointless secrecy of
the Masons.
The most memorable example of medieval industrial secrecy is probably Venice,
which forbade glassblowers to leave the city, and sent assassins after those
who tried. We might like to think we wouldn't go so far, but the movie
industry has already tried to pass
[laws](http://news.com.com/2100-1026_3-5106684.html) prescribing three year
prison terms just for putting movies on public networks. Want to try a
frightening thought experiment? If the movie industry could have any law they
wanted, where would they stop? Short of the death penalty, one assumes, but
how close would they get?
Even worse than the spectacular abuses might be the overall decrease in
efficiency that would accompany increased secrecy. As anyone who has dealt
with organizations that operate on a "need to know" basis can attest, dividing
information up into little cells is terribly inefficient. The flaw in the
"need to know" principle is that you don't _know_ who needs to know something.
An idea from one area might spark a great discovery in another. But the
discoverer doesn't know he needs to know it.
If secrecy were the only protection for ideas, companies wouldn't just have to
be secretive with other companies; they'd have to be secretive internally.
This would encourage what is already the worst trait of big companies.
I'm not saying secrecy would be worse than patents, just that we couldn't
discard patents for free. Businesses would become more secretive to
compensate, and in some fields this might get ugly. Nor am I defending the
current patent system. There is clearly a lot that's broken about it. But the
breakage seems to affect software less than most other fields.
In the software business I know from experience whether patents encourage or
discourage innovation, and the answer is the type that people who like to
argue about public policy least like to hear: they don't affect innovation
much, one way or the other. Most innovation in the software business happens
in startups, and startups should simply ignore other companies' patents. At
least, that's what we advise, and we bet money on that advice.
The only real role of patents, for most startups, is as an element of the
mating dance with acquirers. There patents do help a little. And so they do
encourage innovation indirectly, in that they give more power to startups,
which is where, pound for pound, the most innovation happens. But even in the
mating dance, patents are of secondary importance. It matters more to make
something great and get a lot of users.
**Notes**
[1] You have to be careful here, because a great discovery often seems obvious
in retrospect. One-click ordering, however, is not such a discovery.
[2] "Turn the other cheek" skirts the issue; the critical question is not how
to deal with slaps, but sword thrusts.
[3] Applying for a patent is now very slow, but it might actually be bad if
that got fixed. At the moment the time it takes to get a patent is
conveniently just longer than the time it takes a startup to succeed or fail.
[4] Instead of the canonical "could you build this?" maybe the corp dev guys
should be asking "will you build this?" or even "why haven't you already built
this?"
[5] Design ability is so hard to measure that you can't even trust the design
world's internal standards. You can't assume that someone with a degree in
design is any good at design, or that an eminent designer is any better than
his peers. If that worked, any company could build products as good as Apple's
just by hiring sufficiently qualified designers.
[6] If anyone wanted to try, we'd be interested to hear from them. I suspect
it's one of those things that's not as hard as everyone assumes.
[7] Patent trolls can't even claim, like speculators, that they "create"
liquidity.
[8] If big companies don't want to wait for the government to take action,
there is a way to fight back themselves. For a long time I thought there
wasn't, because there was nothing to grab onto. But there is one resource
patent trolls need: lawyers. Big technology companies between them generate a
lot of legal business. If they agreed among themselves never to do business
with any firm employing anyone who had worked for a patent troll, either as an
employee or as outside counsel, they could probably starve the trolls of the
lawyers they need.
**Thanks** to Dan Bloomberg, Paul Buchheit, Sarah Harlin, Jessica Livingston,
and Peter Norvig for reading drafts of this, to Joel Lehrer and Peter Eng for
answering my questions about patents, and to Ankur Pansari for inviting me to
speak.
---
November 2019
Everyone knows that to do great work you need both natural ability and
determination. But there's a third ingredient that's not as well understood:
an obsessive interest in a particular topic.
To explain this point I need to burn my reputation with some group of people,
and I'm going to choose bus ticket collectors. There are people who collect
old bus tickets. Like many collectors, they have an obsessive interest in the
minutiae of what they collect. They can keep track of distinctions between
different types of bus tickets that would be hard for the rest of us to
remember. Because we don't care enough. What's the point of spending so much
time thinking about old bus tickets?
Which leads us to the second feature of this kind of obsession: there is no
point. A bus ticket collector's love is disinterested. They're not doing it to
impress us or to make themselves rich, but for its own sake.
When you look at the lives of people who've done great work, you see a
consistent pattern. They often begin with a bus ticket collector's obsessive
interest in something that would have seemed pointless to most of their
contemporaries. One of the most striking features of Darwin's book about his
voyage on the Beagle is the sheer depth of his interest in natural history.
His curiosity seems infinite. Ditto for Ramanujan, sitting by the hour working
out on his slate what happens to series.
It's a mistake to think they were "laying the groundwork" for the discoveries
they made later. There's too much intention in that metaphor. Like bus ticket
collectors, they were doing it because they liked it.
But there is a difference between Ramanujan and a bus ticket collector. Series
matter, and bus tickets don't.
If I had to put the recipe for genius into one sentence, that might be it: to
have a disinterested obsession with something that matters.
Aren't I forgetting about the other two ingredients? Less than you might
think. An obsessive interest in a topic is both a proxy for ability and a
substitute for determination. Unless you have sufficient mathematical
aptitude, you won't find series interesting. And when you're obsessively
interested in something, you don't need as much determination: you don't need
to push yourself as hard when curiosity is pulling you.
An obsessive interest will even bring you luck, to the extent anything can.
Chance, as Pasteur said, favors the prepared mind, and if there's one thing an
obsessed mind is, it's prepared.
The disinterestedness of this kind of obsession is its most important feature.
Not just because it's a filter for earnestness, but because it helps you
discover new ideas.
The paths that lead to new ideas tend to look unpromising. If they looked
promising, other people would already have explored them. How do the people
who do great work discover these paths that others overlook? The popular story
is that they simply have better vision: because they're so talented, they see
paths that others miss. But if you look at the way great discoveries are made,
that's not what happens. Darwin didn't pay closer attention to individual
species than other people because he foresaw that it would lead to great
discoveries, and they didn't ignore such things because they foresaw it
wouldn't. He was just really, really interested in such things.
Darwin couldn't turn it off. Neither could Ramanujan. They didn't discover the
hidden paths that they did because they seemed promising, but because they
couldn't help it. That's what allowed them to follow paths that someone who
was merely ambitious would have ignored.
What rational person would decide that the way to write great novels was to
begin by spending several years creating an imaginary elvish language, like
Tolkien, or visiting every household in southwestern Britain, like Trollope?
No one, including Tolkien and Trollope.
The bus ticket theory is similar to Carlyle's famous definition of genius as
an infinite capacity for taking pains. But there are two differences. The bus
ticket theory makes it clear that the source of this infinite capacity for
taking pains is not infinite diligence, as Carlyle seems to have meant, but
the sort of infinite interest that collectors have. It also adds an important
qualification: an infinite capacity for taking pains about something that
matters.
So what matters? You can never be sure. It's precisely because no one can tell
in advance which paths are promising that you can discover new ideas by
working on what you're interested in.
But there are some heuristics you can use to guess whether an obsession might
be one that matters. For example, it's more promising if you're creating
something, rather than just consuming something someone else creates. It's
more promising if something you're interested in is difficult, especially if
it's [_more difficult for other people_](work.html) than it is for you. And
the obsessions of talented people are more likely to be promising. When
talented people become interested in random things, they're not truly random.
But you can never be sure. In fact, here's an interesting idea that's also
rather alarming if it's true: it may be that to do great work, you also have
to waste a lot of time.
In many different areas, reward is proportionate to risk. If that rule holds
here, then the way to find paths that lead to truly great work is to be
willing to expend a lot of effort on things that turn out to be every bit as
unpromising as they seem.
I'm not sure if this is true. On one hand, it seems surprisingly difficult to
waste your time so long as you're working hard on something interesting. So
much of what you do ends up being useful. But on the other hand, the rule
about the relationship between risk and reward is so powerful that it seems to
hold wherever risk occurs. [_Newton's_](disc.html) case, at least, suggests
that the risk/reward rule holds here. He's famous for one particular obsession
of his that turned out to be unprecedentedly fruitful: using math to describe
the world. But he had two other obsessions, alchemy and theology, that seem to
have been complete wastes of time. He ended up net ahead. His bet on what we
now call physics paid off so well that it more than compensated for the other
two. But were the other two necessary, in the sense that he had to take big
risks to make such big discoveries? I don't know.
Here's an even more alarming idea: might one make all bad bets? It probably
happens quite often. But we don't know how often, because these people don't
become famous.
It's not merely that the returns from following a path are hard to predict.
They change dramatically over time. 1830 was a really good time to be
obsessively interested in natural history. If Darwin had been born in 1709
instead of 1809, we might never have heard of him.
What can one do in the face of such uncertainty? One solution is to hedge your
bets, which in this case means to follow the obviously promising paths instead
of your own private obsessions. But as with any hedge, you're decreasing
reward when you decrease risk. If you forgo working on what you like in order
to follow some more conventionally ambitious path, you might miss something
wonderful that you'd otherwise have discovered. That too must happen all the
time, perhaps even more often than the genius whose bets all fail.
The other solution is to let yourself be interested in lots of different
things. You don't decrease your upside if you switch between equally genuine
interests based on which seems to be working so far. But there is a danger
here too: if you work on too many different projects, you might not get deeply
enough into any of them.
One interesting thing about the bus ticket theory is that it may help explain
why different types of people excel at different kinds of work. Interest is
much more unevenly distributed than ability. If natural ability is all you
need to do great work, and natural ability is evenly distributed, you have to
invent elaborate theories to explain the skewed distributions we see among
those who actually do great work in various fields. But it may be that much of
the skew has a simpler explanation: different people are interested in
different things.
The bus ticket theory also explains why people are less likely to do great
work after they have children. Here interest has to compete not just with
external obstacles, but with another interest, and one that for most people is
extremely powerful. It's harder to find time for work after you have kids, but
that's the easy part. The real change is that you don't want to.
But the most exciting implication of the bus ticket theory is that it suggests
ways to encourage great work. If the recipe for genius is simply natural
ability plus hard work, all we can do is hope we have a lot of ability, and
work as hard as we can. But if interest is a critical ingredient in genius, we
may be able, by cultivating interest, to cultivate genius.
For example, for the very ambitious, the bus ticket theory suggests that the
way to do great work is to relax a little. Instead of gritting your teeth and
diligently pursuing what all your peers agree is the most promising line of
research, maybe you should try doing something just for fun. And if you're
stuck, that may be the vector along which to break out.
I've always liked [_Hamming's_](hamming.html) famous double-barrelled
question: what are the most important problems in your field, and why aren't
you working on one of them? It's a great way to shake yourself up. But it may
be overfitting a bit. It might be at least as useful to ask yourself: if you
could take a year off to work on something that probably wouldn't be important
but would be really interesting, what would it be?
The bus ticket theory also suggests a way to avoid slowing down as you get
older. Perhaps the reason people have fewer new ideas as they get older is not
simply that they're losing their edge. It may also be because once you become
established, you can no longer mess about with irresponsible side projects the
way you could when you were young and no one cared what you did.
The solution to that is obvious: remain irresponsible. It will be hard,
though, because the apparently random projects you take up to stave off
decline will read to outsiders as evidence of it. And you yourself won't know
for sure that they're wrong. But it will at least be more fun to work on what
you want.
It may even be that we can cultivate a habit of intellectual bus ticket
collecting in kids. The usual plan in education is to start with a broad,
shallow focus, then gradually become more specialized. But I've done the
opposite with my kids. I know I can count on their school to handle the broad,
shallow part, so I take them deep.
When they get interested in something, however random, I encourage them to go
preposterously, bus ticket collectorly, deep. I don't do this because of the
bus ticket theory. I do it because I want them to feel the joy of learning,
and they're never going to feel that about something I'm making them learn. It
has to be something they're interested in. I'm just following the path of
least resistance; depth is a byproduct. But if in trying to show them the joy
of learning I also end up training them to go deep, so much the better.
Will it have any effect? I have no idea. But that uncertainty may be the most
interesting point of all. There is so much more to learn about how to do great
work. As old as human civilization feels, it's really still very young if we
haven't nailed something so basic. It's exciting to think there are still
discoveries to make about discovery. If that's the sort of thing you're
interested in.
**Notes**
[1] There are other types of collecting that illustrate this point better than
bus tickets, but they're also more popular. It seemed just as well to use an
inferior example rather than offend more people by telling them their hobby
doesn't matter.
[2] I worried a little about using the word "disinterested," since some people
mistakenly believe it means not interested. But anyone who expects to be a
genius will have to know the meaning of such a basic word, so I figure they
may as well start now.
[3] Think how often genius must have been nipped in the bud by people being
told, or telling themselves, to stop messing about and be responsible.
Ramanujan's mother was a huge enabler. Imagine if she hadn't been. Imagine if
his parents had made him go out and get a job instead of sitting around at
home doing math.
On the other hand, anyone quoting the preceding paragraph to justify not
getting a job is probably mistaken.
[4] 1709 Darwin is to time what the [_Milanese Leonardo_](cities.html) is to
space.
[5] "An infinite capacity for taking pains" is a paraphrase of what Carlyle
wrote. What he wrote, in his _History of Frederick the Great_, was "... it is
the fruit of 'genius' (which means transcendent capacity of taking trouble,
first of all)...." Since the paraphrase seems the name of the idea at this
point, I kept it.
Carlyle's _History_ was published in 1858. In 1785 Hérault de Séchelles quoted
Buffon as saying "Le génie n'est qu'une plus grande aptitude à la patience."
(Genius is only a greater aptitude for patience.)
[6] Trollope was establishing the system of postal routes. He himself sensed
the obsessiveness with which he pursued this goal.
> It is amusing to watch how a passion will grow upon a man. During those two
> years it was the ambition of my life to cover the country with rural letter-
> carriers.
Even Newton occasionally sensed the degree of his obsessiveness. After
computing pi to 15 digits, he wrote in a letter to a friend:
> I am ashamed to tell you to how many figures I carried these computations,
> having no other business at the time.
Incidentally, Ramanujan was also a compulsive calculator. As Kanigel writes in
his excellent biography:
> One Ramanujan scholar, B. M. Wilson, later told how Ramanujan's research
> into number theory was often "preceded by a table of numerical results,
> carried usually to a length from which most of us would shrink."
[7] Working to understand the natural world counts as creating rather than
consuming.
Newton tripped over this distinction when he chose to work on theology. His
beliefs did not allow him to see it, but chasing down paradoxes in nature is
fruitful in a way that chasing down paradoxes in sacred texts is not.
[8] How much of people's propensity to become interested in a topic is inborn?
My experience so far suggests the answer is: most of it. Different kids get
interested in different things, and it's hard to make a child interested in
something they wouldn't otherwise be. Not in a way that sticks. The most you
can do on behalf of a topic is to make sure it gets a fair showing: to make
it clear to them, for example, that there's more to math than the dull drills
they do in school. After that it's up to the child.
**Thanks** to Marc Andreessen, Trevor Blackwell, Patrick Collison, Kevin
Lacker, Jessica Livingston, Jackie McDonough, Robert Morris, Lisa Randall, Zak
Stone, and [_my 7 year
old_](https://twitter.com/paulg/status/1196537802621669376) for reading drafts
of this.
---
September 2022
I recently told applicants to Y Combinator that the best advice I could give
for getting in, per word, was
> Explain what you've learned from users.
That tests a lot of things: whether you're paying attention to users, how well
you understand them, and even how much they need what you're making.
Afterward I asked myself the same question. What have I learned from YC's
users, the startups we've funded?
The first thing that came to mind was that most startups have the same
problems. No two have exactly the same problems, but it's surprising how much
the problems remain the same, regardless of what they're making. Once you've
advised 100 startups all doing different things, you rarely encounter problems
you haven't seen before.
This fact is one of the things that makes YC work. But I didn't know it when
we started YC. I only had a few data points: our own startup, and those
started by friends. It was a surprise to me how often the same problems recur
in different forms. Many later stage investors might never realize this,
because later stage investors might not advise 100 startups in their whole
career, but a YC partner will get this much experience in the first year or
two.
That's one advantage of funding large numbers of early stage companies rather
than smaller numbers of later-stage ones. You get a lot of data. Not just
because you're looking at more companies, but also because more goes wrong.
But knowing (nearly) all the problems startups can encounter doesn't mean that
advising them can be automated, or reduced to a formula. There's no substitute
for individual office hours with a YC partner. Each startup is unique, which
means they have to be advised by specific partners who know them well. [1]
We learned that the hard way, in the notorious "batch that broke YC" in the
summer of 2012. Up till that point we treated the partners as a pool. When a
startup requested office hours, they got the next available slot posted by any
partner. That meant every partner had to know every startup. This worked fine
up to 60 startups, but when the batch grew to 80, everything broke. The
founders probably didn't realize anything was wrong, but the partners were
confused and unhappy because halfway through the batch they still didn't know
all the companies yet. [2]
At first I was puzzled. How could things be fine at 60 startups and broken at
80? It was only a third more. Then I realized what had happened. We were using
an _O(n²)_ algorithm. So of course it blew up.
The solution we adopted was the classic one in these situations. We sharded
the batch into smaller groups of startups, each overseen by a dedicated group
of partners. That fixed the problem, and has worked fine ever since. But the
batch that broke YC was a powerful demonstration of how individualized the
process of advising startups has to be.
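To make the arithmetic concrete, here's a minimal sketch in Python. The numbers are made up (the partner-to-startup ratio and group size below are hypothetical, not YC's actual figures); the point is only that when the partners grow with the batch and form a single pool, the number of partner-startup relationships grows quadratically, while fixed-size groups keep it linear.

```python
# A sketch of the scaling argument, with made-up numbers.

def pooled(startups, partners):
    # Office hours can be booked with any partner, so every partner
    # has to know every startup.
    return startups * partners

def sharded(startups, partners, group_size):
    # Split the batch into fixed-size groups, each overseen by its
    # own partners; relationships only form within a group.
    groups = startups // group_size
    partners_per_group = partners // groups
    return groups * group_size * partners_per_group

for n in (60, 80, 160):
    p = n // 10  # hypothetical: partners grow with the batch
    print(n, "startups:", pooled(n, p), "pooled vs",
          sharded(n, p, 20), "sharded")
```

Under these assumptions, a third more startups means nearly twice as many pairings in the pooled model, which is why 60 felt fine and 80 broke.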
Another related surprise is how bad founders can be at realizing what their
problems are. Founders will sometimes come in to talk about some problem, and
we'll discover another much bigger one in the course of the conversation. For
example (and this case is all too common), founders will come in to talk about
the difficulties they're having raising money, and after digging into their
situation, it turns out the reason is that the company is doing badly, and
investors can tell. Or founders will come in worried that they still haven't
cracked the problem of user acquisition, and the reason turns out to be that
their product isn't good enough. There have been times when I've asked "Would
you use this yourself, if you hadn't built it?" and the founders, on thinking
about it, said "No." Well, there's the reason you're having trouble getting
users.
Often founders know what their problems are, but not their relative
importance. [3] They'll come in to talk about three problems they're worrying
about. One is of moderate importance, one doesn't matter at all, and one will
kill the company if it isn't addressed immediately. It's like watching one of
those horror movies where the heroine is deeply upset that her boyfriend
cheated on her, and only mildly curious about the door that's mysteriously
ajar. You want to say: never mind about your boyfriend, think about that door!
Fortunately in office hours you can. So while startups still die with some
regularity, it's rarely because they wandered into a room containing a
murderer. The YC partners can warn them where the murderers are.
Not that founders listen. That was another big surprise: how often founders
don't listen to us. A couple weeks ago I talked to a partner who had been
working for YC for a couple batches and was starting to see the pattern. "They
come back a year later," she said, "and say 'We wish we'd listened to you.'"
It took me a long time to figure out why founders don't listen. At first I
thought it was mere stubbornness. That's part of the reason, but another and
probably more important reason is that so much about startups is
[counterintuitive](before.html). And when you tell someone something
counterintuitive, what it sounds to them is wrong. So the reason founders
don't listen to us is that they don't _believe_ us. At least not till
experience teaches them otherwise. [4]
The reason startups are so counterintuitive is that they're so different from
most people's other experiences. No one knows what it's like except those
who've done it. Which is why YC partners should usually have been founders
themselves. But strangely enough, the counterintuitiveness of startups turns
out to be another of the things that make YC work. If it weren't
counterintuitive, founders wouldn't need our advice about how to do it.
Focus is doubly important for early stage startups, because not only do they
have a hundred different problems, they don't have anyone to work on them
except the founders. If the founders focus on things that don't matter,
there's no one focusing on the things that do. So the essence of what happens
at YC is to figure out which problems matter most, then cook up ideas for
solving them — ideally at a resolution of a week or less — and then try those
ideas and measure how well they worked. The focus is on action, with
measurable, near-term results.
This doesn't imply that founders should rush forward regardless of the
consequences. If you correct course at a high enough frequency, you can be
simultaneously decisive at a micro scale and tentative at a macro scale. The
result is a somewhat winding path, but executed very rapidly, like the path a
running back takes downfield. And in practice there's less backtracking than
you might expect. Founders usually guess right about which direction to run
in, especially if they have someone experienced like a YC partner to bounce
their hypotheses off. And when they guess wrong, they notice fast, because
they'll talk about the results at office hours the next week. [5]
A small improvement in navigational ability can make you a lot faster, because
it has a double effect: the path is shorter, and you can travel faster along
it when you're more certain it's the right one. That's where a lot of YC's
value lies, in helping founders get an extra increment of focus that lets them
move faster. And since moving fast is the essence of a startup, YC in effect
makes startups more startup-like.
Speed defines startups. Focus enables speed. YC improves focus.
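The double effect is multiplicative, which is why a small navigational gain buys more than it seems to. A toy calculation, with purely illustrative numbers:

```python
# Illustrative numbers only: modest gains on each axis compound.
path = 0.8    # the path is 20% shorter
speed = 1.2   # and you move 20% faster, being more certain it's right
print(path / speed)  # time taken: ~0.67x, i.e. roughly 1.5x as fast
```

Two 20% improvements combine into a speedup of about 50%, not 40%.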
Why are founders uncertain about what to do? Partly because startups almost by
definition are doing something new, which means no one knows how to do it yet,
or in most cases even what "it" is. Partly because startups are so
counterintuitive generally. And partly because many founders, especially young
and ambitious ones, have been trained to win the wrong way. That took me years
to figure out. The educational system in most countries trains you to win by
[hacking the test](lesson.html) instead of actually doing whatever it's
supposed to measure. But that stops working when you start a startup. So part
of what YC does is to retrain founders to stop trying to hack the test. (It
takes a surprisingly long time. A year in, you still see them reverting to
their old habits.)
YC is not simply more experienced founders passing on their knowledge. It's
more like specialization than apprenticeship. The knowledge of the YC partners
and that of the founders have different shapes: it wouldn't be worthwhile for a
founder to acquire the encyclopedic knowledge of startup problems that a YC
partner has, just as it wouldn't be worthwhile for a YC partner to acquire the
depth of domain knowledge that a founder has. That's why it can still be
valuable for an experienced founder to do YC, just as it can still be valuable
for an experienced athlete to have a coach.
The other big thing YC gives founders is colleagues, and this may be even more
important than the advice of partners. If you look at history, great work
clusters around certain places and institutions: Florence in the late 15th
century, the University of Göttingen in the late 19th, _The New Yorker_ under
Ross, Bell Labs, Xerox PARC. However good you are, good colleagues make you
better. Indeed, very ambitious people probably need colleagues more than
anyone else, because they're so starved for them in everyday life.
Whether or not YC manages one day to be listed alongside those famous
clusters, it won't be for lack of trying. We were very aware of this
historical phenomenon and deliberately designed YC to be one. By this point
it's not bragging to say that it's the biggest cluster of great startup
founders. Even people trying to attack YC concede that.
Colleagues and startup founders are two of the most powerful forces in the
world, so you'd expect it to have a big effect to combine them. Before YC, to
the extent people thought about the question at all, most assumed they
couldn't be combined — that loneliness was the price of independence. That was
how it felt to us when we started our own startup in Boston in the 1990s. We
had a handful of older people we could go to for advice (of varying quality),
but no peers. There was no one we could commiserate with about the misbehavior
of investors, or speculate with about the future of technology. I often tell
founders to make something they themselves want, and YC is certainly that: it
was designed to be exactly what we wanted when we were starting a startup.
One thing we wanted was to be able to get seed funding without having to make
the rounds of random rich people. That has become a commodity now, at least in
the US. But great colleagues can never become a commodity, because the fact
that they cluster in some places means they're proportionally absent from the
rest.
Something magical happens where they do cluster though. The energy in the room
at a YC dinner is like nothing else I've experienced. We would have been happy
just to have one or two other startups to talk to. When you have a whole
roomful it's another thing entirely.
YC founders aren't just inspired by one another. They also help one another.
That's the happiest thing I've learned about startup founders: how generous
they can be in helping one another. We noticed this in the first batch and
consciously designed YC to magnify it. The result is something far more
intense than, say, a university. Between the partners, the alumni, and their
batchmates, founders are surrounded by people who want to help them, and can.
**Notes**
[1] This is why I've never liked it when people refer to YC as a "bootcamp."
It's intense like a bootcamp, but the opposite in structure. Instead of
everyone doing the same thing, they're each talking to YC partners to figure
out what their specific startup needs.
[2] When I say the summer 2012 batch was broken, I mean it felt to the
partners that something was wrong. Things weren't yet so broken that the
startups had a worse experience. In fact that batch did unusually well.
[3] This situation reminds me of the research showing that people are much
better at answering questions than they are at judging how accurate their
answers are. The two phenomena feel very similar.
[4] The [Airbnbs](airbnbs.html) were particularly good at listening — partly
because they were flexible and disciplined, but also because they'd had such a
rough time during the preceding year. They were ready to listen.
[5] The optimal unit of decisiveness depends on how long it takes to get
results, and that depends on the type of problem you're solving. When you're
negotiating with investors, it could be a couple days, whereas if you're
building hardware it could be months.
**Thanks** to Trevor Blackwell, Jessica Livingston, Harj Taggar, and Garry Tan
for reading drafts of this.
---
February 2022
Writing about something, even something you know well, usually shows you that
you didn't know it as well as you thought. Putting ideas into words is a
severe test. The first words you choose are usually wrong; you have to rewrite
sentences over and over to get them exactly right. And your ideas won't just
be imprecise, but incomplete too. Half the ideas that end up in an essay will
be ones you thought of while you were writing it. Indeed, that's why I write
them.
Once you publish something, the convention is that whatever you wrote was what
you thought before you wrote it. These were your ideas, and now you've
expressed them. But you know this isn't true. You know that putting your ideas
into words changed them. And not just the ideas you published. Presumably
there were others that turned out to be too broken to fix, and those you
discarded instead.
It's not just having to commit your ideas to specific words that makes writing
so exacting. The real test is reading what you've written. You have to pretend
to be a neutral reader who knows nothing of what's in your head, only what you
wrote. When he reads what you wrote, does it seem correct? Does it seem
complete? If you make an effort, you can read your writing as if you were a
complete stranger, and when you do the news is usually bad. It takes me many
cycles before I can get an essay past the stranger. But the stranger is
rational, so you always can, if you ask him what he needs. If he's not
satisfied because you failed to mention x or didn't qualify some sentence
sufficiently, then you mention x or add more qualifications. Happy now? It may
cost you some nice sentences, but you have to resign yourself to that. You
just have to make them as good as you can and still satisfy the stranger.
This much, I assume, won't be that controversial. I think it will accord with
the experience of anyone who has tried to write about anything nontrivial.
There may exist people whose thoughts are so perfectly formed that they just
flow straight into words. But I've never known anyone who could do this, and
if I met someone who said they could, it would seem evidence of their
limitations rather than their ability. Indeed, this is a trope in movies: the
guy who claims to have a plan for doing some difficult thing, and who when
questioned further, taps his head and says "It's all up here." Everyone
watching the movie knows what that means. At best the plan is vague and
incomplete. Very likely there's some undiscovered flaw that invalidates it
completely. At best it's a plan for a plan.
In precisely defined domains it's possible to form complete ideas in your
head. People can play chess in their heads, for example. And mathematicians
can do some amount of math in their heads, though they don't seem to feel sure
of a proof over a certain length till they write it down. But this only seems
possible with ideas you can express in a formal language. [1] Arguably what
such people are doing is putting ideas into words in their heads. I can to
some extent write essays in my head. I'll sometimes think of a paragraph while
walking or lying in bed that survives nearly unchanged in the final version.
But really I'm writing when I do this. I'm doing the mental part of writing;
my fingers just aren't moving as I do it. [2]
You can know a great deal about something without writing about it. Can you
ever know so much that you wouldn't learn more from trying to explain what you
know? I don't think so. I've written about at least two subjects I know well —
Lisp hacking and startups — and in both cases I learned a lot from writing
about them. In both cases there were things I didn't consciously realize till
I had to explain them. And I don't think my experience was anomalous. A great
deal of knowledge is unconscious, and experts have if anything a higher
proportion of unconscious knowledge than beginners.
I'm not saying that writing is the best way to explore all ideas. If you have
ideas about architecture, presumably the best way to explore them is to build
actual buildings. What I'm saying is that however much you learn from
exploring ideas in other ways, you'll still learn new things from writing
about them.
Putting ideas into words doesn't have to mean writing, of course. You can also
do it the old way, by talking. But in my experience, writing is the stricter
test. You have to commit to a single, optimal sequence of words. Less can go
unsaid when you don't have tone of voice to carry meaning. And you can focus
in a way that would seem excessive in conversation. I'll often spend 2 weeks
on an essay and reread drafts 50 times. If you did that in conversation it
would seem evidence of some kind of mental disorder. If you're lazy, of
course, writing and talking are equally useless. But if you want to push
yourself to get things right, writing is the steeper hill. [3]
The reason I've spent so long establishing this rather obvious point is that
it leads to another that many people will find shocking. If writing down your
ideas always makes them more precise and more complete, then no one who hasn't
written about a topic has fully formed ideas about it. And someone who never
writes has no fully formed ideas about anything nontrivial.
It feels to them as if they do, especially if they're not in the habit of
critically examining their own thinking. Ideas can feel complete. It's only
when you try to put them into words that you discover they're not. So if you
never subject your ideas to that test, you'll not only never have fully formed
ideas, but also never realize it.
Putting ideas into words is certainly no guarantee that they'll be right. Far
from it. But though it's not a sufficient condition, it is a necessary one.
**Notes**
[1] Machinery and circuits are formal languages.
[2] I thought of this sentence as I was walking down the street in Palo Alto.
[3] There are two senses of talking to someone: a strict sense in which the
conversation is verbal, and a more general sense in which it can take any
form, including writing. In the limit case (e.g. Seneca's letters),
conversation in the latter sense becomes essay writing.
It can be very useful to talk (in either sense) with other people as you're
writing something. But a verbal conversation will never be more exacting than
when you're talking about something you're writing.
**Thanks** to Trevor Blackwell, Patrick Collison, and Robert Morris for
reading drafts of this.
---
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
July 2010
I realized recently that what one thinks about in the shower in the morning is
more important than I'd thought. I knew it was a good time to have ideas. Now
I'd go further: now I'd say it's hard to do a really good job on anything you
don't think about in the shower.
Everyone who's worked on difficult problems is probably familiar with the
phenomenon of working hard to figure something out, failing, and then suddenly
seeing the answer a bit later while doing something else. There's a kind of
thinking you do without trying to. I'm increasingly convinced this type of
thinking is not merely helpful in solving hard problems, but necessary. The
tricky part is, you can only control it indirectly. [1]
I think most people have one top idea in their mind at any given time. That's
the idea their thoughts will drift toward when they're allowed to drift
freely. And this idea will thus tend to get all the benefit of that type of
thinking, while others are starved of it. Which means it's a disaster to let
the wrong idea become the top one in your mind.
What made this clear to me was having an idea I didn't want as the top one in
my mind for two long stretches.
I'd noticed startups got way less done when they started raising money, but it
was not till we ourselves raised money that I understood why. The problem is
not the actual time it takes to meet with investors. The problem is that once
you start raising money, raising money becomes the top idea in your mind. That
becomes what you think about when you take a shower in the morning. And that
means other questions aren't.
I'd hated raising money when I was running Viaweb, but I'd forgotten why I
hated it so much. When we raised money for Y Combinator, I remembered. Money
matters are particularly likely to become the top idea in your mind. The
reason is that they have to be. It's hard to get money. It's not the sort of
thing that happens by default. It's not going to happen unless you let it
become the thing you think about in the shower. And then you'll make little
progress on anything else you'd rather be working on. [2]
(I hear similar complaints from friends who are professors. Professors
nowadays seem to have become professional fundraisers who do a little research
on the side. It may be time to fix that.)
The reason this struck me so forcibly is that for most of the preceding 10
years I'd been able to think about what I wanted. So the contrast when I
couldn't was sharp. But I don't think this problem is unique to me, because
just about every startup I've seen grinds to a halt when they start raising
money, or [talking to acquirers](corpdev.html).
You can't directly control where your thoughts drift. If you're controlling
them, they're not drifting. But you can control them indirectly, by
controlling what situations you let yourself get into. That has been the
lesson for me: be careful what you let become critical to you. Try to get
yourself into situations where the most urgent problems are ones you want to
think about.
You don't have complete control, of course. An emergency could push other
thoughts out of your head. But barring emergencies you have a good deal of
indirect control over what becomes the top idea in your mind.
I've found there are two types of thoughts especially worth avoiding:
thoughts like the Nile Perch in the way they push out more interesting ideas.
One I've already mentioned: thoughts about money. Getting money is almost by
definition an attention sink. The other is disputes. These too are engaging in
the wrong way: they have the same velcro-like shape as genuinely interesting
ideas, but without the substance. So avoid disputes if you want to get real
work done. [3]
Even Newton fell into this trap. After publishing his theory of colors in 1672
he found himself distracted by disputes for years, finally concluding that the
only solution was to stop publishing:
> I see I have made myself a slave to Philosophy, but if I get free of Mr
> Linus's business I will resolutely bid adew to it eternally, excepting what
> I do for my privat satisfaction or leave to come out after me. For I see a
> man must either resolve to put out nothing new or become a slave to defend
> it. [4]
Linus and his students at Liege were among the more tenacious critics.
Newton's biographer Westfall seems to feel he was overreacting:
> Recall that at the time he wrote, Newton's "slavery" consisted of five
> replies to Liege, totalling fourteen printed pages, over the course of a
> year.
I'm more sympathetic to Newton. The problem was not the 14 pages, but the pain
of having this stupid controversy constantly reintroduced as the top idea in a
mind that wanted so eagerly to think about other things.
Turning the other cheek turns out to have selfish advantages. Someone who does
you an injury hurts you twice: first by the injury itself, and second by
taking up your time afterward thinking about it. If you learn to ignore
injuries you can at least avoid the second half. I've found I can to some
extent avoid thinking about nasty things people have done to me by telling
myself: this doesn't deserve space in my head. I'm always delighted to find
I've forgotten the details of disputes, because that means I hadn't been
thinking about them. My wife thinks I'm more forgiving than she is, but my
motives are purely selfish.
I suspect a lot of people aren't sure what's the top idea in their mind at any
given time. I'm often mistaken about it. I tend to think it's the idea I'd
want to be the top one, rather than the one that is. But it's easy to figure
this out: just take a shower. What topic do your thoughts keep returning to?
If it's not what you want to be thinking about, you may want to change
something.
**Notes**
[1] No doubt there are already names for this type of thinking, but I call it
"ambient thought."
[2] This was made particularly clear in our case, because neither of the funds
we raised was difficult, and yet in both cases the process dragged on for
months. Moving large amounts of money around is never something people treat
casually. The attention required increases with the amount—maybe not linearly,
but definitely monotonically.
[3] Corollary: Avoid becoming an administrator, or your job will consist of
dealing with money and disputes.
[4] Letter to Oldenburg, quoted in Westfall, Richard, _Life of Isaac Newton_,
p. 107.
**Thanks** to Sam Altman, Patrick Collison, Jessica Livingston, and Robert
Morris for reading drafts of this.
---
November 2014
It struck me recently how few of the most successful people I know are mean.
There are exceptions, but remarkably few.
Meanness isn't rare. In fact, one of the things the internet has shown us is
how mean people can be. A few decades ago, only famous people and professional
writers got to publish their opinions. Now everyone can, and we can all see
the long tail of meanness that had previously been hidden.
And yet while there are clearly a lot of mean people out there, there are next
to none among the most successful people I know. What's going on here? Are
meanness and success inversely correlated?
Part of what's going on, of course, is selection bias. I only know people who
work in certain fields: startup founders, programmers, professors. I'm willing
to believe that successful people in other fields are mean. Maybe successful
hedge fund managers are mean; I don't know enough to say. It seems quite
likely that most successful drug lords are mean. But there are at least big
chunks of the world that mean people don't rule, and that territory seems to
be growing.
My wife and Y Combinator cofounder Jessica is one of those rare people who
have x-ray vision for character. Being married to her is like standing next to
an airport baggage scanner. She came to the startup world from investment
banking, and she has always been struck both by how consistently successful
startup founders turn out to be good people, and how consistently bad people
fail as startup founders.
Why? I think there are several reasons. One is that being mean makes you
stupid. That's why I hate fights. You never do your best work in a fight,
because fights are not sufficiently general. Winning is always a function of
the situation and the people involved. You don't win fights by thinking of big
ideas but by thinking of tricks that work in one particular case. And yet
fighting is just as much work as thinking about real problems. Which is
particularly painful to someone who cares how their brain is used: your brain
goes fast but you get nowhere, like a car spinning its wheels.
Startups don't win by attacking. They win by transcending. There are
exceptions of course, but usually the way to win is to race ahead, not to stop
and fight.
Another reason mean founders lose is that they can't get the best people to
work for them. They can hire people who will put up with them because they
need a job. But the best people have other options. A mean person can't
convince the best people to work for him unless he is super convincing. And
while having the best people helps any organization, it's critical for
startups.
There is also a complementary force at work: if you want to build great
things, it helps to be driven by a spirit of benevolence. The startup founders
who end up richest are not the ones driven by money. The ones driven by money
take the big acquisition offer that nearly every successful startup gets en
route. [1] The ones who keep going are driven by something else. They may not
say so explicitly, but they're usually trying to improve the world. Which
means people with a desire to improve the world have a natural advantage. [2]
The exciting thing is that startups are not just one random type of work in
which meanness and success are inversely correlated. This kind of work is the
future.
For most of history success meant control of scarce resources. One got that by
fighting, whether literally in the case of pastoral nomads driving hunter-
gatherers into marginal lands, or metaphorically in the case of Gilded Age
financiers contending with one another to assemble railroad monopolies. For
most of history, success meant success at zero-sum games. And in most of them
meanness was not a handicap but probably an advantage.
That is changing. Increasingly the games that matter are not zero-sum.
Increasingly you win not by fighting to get control of a scarce resource, but
by having new ideas and building new things. [3]
There have long been games where you won by having new ideas. In the third
century BC, Archimedes won by doing that. At least until an invading Roman
army killed him. Which illustrates why this change is happening: for new ideas
to matter, you need a certain degree of civil order. And not just not being at
war. You also need to prevent the sort of economic violence that nineteenth
century magnates practiced against one another and communist countries
practiced against their citizens. People need to feel that what they create
can't be stolen. [4]
That has always been the case for thinkers, which is why this trend began with
them. When you think of successful people from history who weren't ruthless,
you get mathematicians and writers and artists. The exciting thing is that
their m.o. seems to be spreading. The games played by intellectuals are
leaking into the real world, and this is reversing the historical polarity of
the relationship between meanness and success.
So I'm really glad I stopped to think about this. Jessica and I have always
worked hard to teach our kids not to be mean. We tolerate noise and mess and
junk food, but not meanness. And now I have both an additional reason to crack
down on it, and an additional argument to use when I do: that being mean makes
you fail.
**Notes**
[1] I'm not saying all founders who take big acquisition offers are driven
only by money, but rather that those who don't aren't. Plus one can have
benevolent motives for being driven by money — for example, to take care of
one's family, or to be free to work on projects that improve the world.
[2] It's unlikely that every successful startup improves the world. But their
founders, like parents, truly believe they do. Successful founders are in love
with their companies. And while this sort of love is as blind as the love
people have for one another, it is genuine.
[3] [Peter Thiel](http://startupclass.samaltman.com/courses/lec05) would point
out that successful founders still get rich from controlling monopolies, just
monopolies they create rather than ones they capture. And while this is
largely true, it means a big change in the sort of person who wins.
[4] To be fair, the Romans didn't mean to kill Archimedes. The Roman commander
specifically ordered that he be spared. But he got killed in the chaos anyway.
In sufficiently disordered times, even thinking requires control of scarce
resources, because living at all is a scarce resource.
**Thanks** to Sam Altman, Ron Conway, Daniel Gackle, Jessica Livingston,
Robert Morris, Geoff Ralston, and Fred Wilson for reading drafts of this.
---
Kevin Kelleher suggested an interesting way to compare programming languages:
to describe each in terms of the problem it fixes. The surprising thing is how
many, and how well, languages can be described this way.
**Algol:** Assembly language is too low-level.
**Pascal:** Algol doesn't have enough data types.
**Modula:** Pascal is too wimpy for systems programming.
**Simula:** Algol isn't good enough at simulations.
**Smalltalk:** Not everything in Simula is an object.
**Fortran:** Assembly language is too low-level.
**Cobol:** Fortran is scary.
**PL/1:** Fortran doesn't have enough data types.
**Ada:** Every existing language is missing something.
**Basic:** Fortran is scary.
**APL:** Fortran isn't good enough at manipulating arrays.
**J:** APL requires its own character set.
**C:** Assembly language is too low-level.
**C++:** C is too low-level.
**Java:** C++ is a kludge. And Microsoft is going to crush us.
**C#:** Java is controlled by Sun.
**Lisp:** Turing Machines are an awkward way to describe computation.
**Scheme:** MacLisp is a kludge.
**T:** Scheme has no libraries.
**Common Lisp:** There are too many dialects of Lisp.
**Dylan:** Scheme has no libraries, and Lisp syntax is scary.
**Perl:** Shell scripts/awk/sed are not enough like programming languages.
**Python:** Perl is a kludge.
**Ruby:** Perl is a kludge, and Lisp syntax is scary.
**Prolog:** Programming is not enough like logic.
---
January 2020
_(I originally intended this for startup founders, who are often surprised by
the attention they get as their companies grow, but it applies equally to
anyone who becomes famous.)_
If you become sufficiently famous, you'll acquire some fans who like you too
much. These people are sometimes called "fanboys," and though I dislike that
term, I'm going to have to use it here. We need some word for them, because
this is a distinct phenomenon from someone simply liking your work.
A fanboy is obsessive and uncritical. Liking you becomes part of their
identity, and they create an image of you in their own head that is much
better than reality. Everything you do is good, because you do it. If you do
something bad, they find a way to see it as good. And their love for you is
not, usually, a quiet, private one. They want everyone to know how great you
are.
Well, you may be thinking, I could do without this kind of obsessive fan, but
I know there are all kinds of people in the world, and if this is the worst
consequence of fame, that's not so bad.
Unfortunately this is not the worst consequence of fame. As well as fanboys,
you'll have haters.
A hater is obsessive and uncritical. Disliking you becomes part of their
identity, and they create an image of you in their own head that is much worse
than reality. Everything you do is bad, because you do it. If you do something
good, they find a way to see it as bad. And their dislike for you is not,
usually, a quiet, private one. They want everyone to know how awful you are.
If you're thinking of checking, I'll save you the trouble. The second and
fifth paragraphs are identical except for "good" being switched to "bad" and
so on.
I spent years puzzling about haters. What are they, and where do they come
from? Then one day it dawned on me. Haters are just fanboys with the sign
switched.
Note that by haters, I don't simply mean trolls. I'm not talking about people
who say bad things about you and then move on. I'm talking about the much
smaller group of people for whom this becomes a kind of obsession and who do
it repeatedly over a long period.
Like fans, haters seem to be an automatic consequence of fame. Anyone
sufficiently famous will have them. And like fans, haters are energized by the
fame of whoever they hate. They hear a song by some pop singer. They don't
like it much. If the singer were an obscure one, they'd just forget about it.
But instead they keep hearing her name, and this seems to drive some people
crazy. Everyone's always going on about this singer, but she's no good! She's
a fraud!
That word "fraud" is an important one. It's the spectral signature of a hater
to regard the object of their hatred as a
[_fraud_](https://twitter.com/search?q=Musk%20fraud&src=typed_query&f=live).
They can't deny the singer's fame. Indeed, her fame is if anything exaggerated in
the hater's mind. They notice every mention of the singer's name, because
every mention makes them angrier. In their own minds they exaggerate both the
singer's fame and her lack of talent, and the only way to reconcile those two
ideas is to conclude that she has tricked everyone.
What sort of people become haters? Can anyone become one? I'm not sure about
this, but I've noticed some patterns. Haters are generally losers in a very
specific sense: although they are occasionally talented, they have never
achieved much. And indeed, anyone successful enough to have achieved
significant fame would be unlikely to regard another famous person as a fraud
on that account, because anyone famous knows how random fame is.
But haters are not always complete losers. They are not always the proverbial
guy living in his mom's basement. Many are, but some have some amount of
talent. In fact I suspect that a sense of frustrated talent is what drives
some people to become haters. They're not just saying "It's unfair that so-
and-so is famous," but "It's unfair that so-and-so is famous, and not me."
Could a hater be cured if they achieved something impressive? My guess is
that's a moot point, because they [_never will_](mean.html). I've been able to
observe for long enough that I'm fairly confident the pattern works both ways:
not only do people who do great work never become haters, haters never do
great work. Although I dislike the word "fanboy," it's evocative of something
important about both haters and fanboys. It implies that the fanboy is so
slavishly predictable in his admiration that he's diminished as a result, that
he's less than a man.
Haters seem even more diminished. I can imagine being a fanboy. I can think of
people whose work I admire so much that I could abase myself before them out
of sheer gratitude. If P. G. Wodehouse were still alive, I could see myself
being a Wodehouse fanboy. But I could not imagine being a hater.
Knowing that haters are just fanboys with the sign bit flipped makes it much
easier to deal with them. We don't need a separate theory of haters. We can
just use existing techniques for dealing with obsessive fans.
The most important of which is simply not to think much about them. If you're
like most people who become famous enough to acquire haters, your initial
reaction will be one of mystification. Why does this guy seem to have it in
for me? Where does his obsessive energy come from, and what makes him so
appallingly nasty? What did I do to set him off? Is it something I can fix?
The mistake here is to think of the hater as someone you have a dispute with.
When you have a dispute with someone, it's usually a good idea to try to
understand why they're upset and then fix things if you can. Disputes are
distracting. But it's a false analogy to think of a hater as someone you have
a dispute with. It's an understandable mistake, if you've never encountered
haters before. But when you realize that you're dealing with a hater, and what
a hater is, it's clear that it's a waste of time even to think about them. If
you have obsessive fans, do you spend any time wondering what makes them love
you so much? No, you just think "some people are kind of crazy," and that's
the end of it.
Since haters are equivalent to fanboys, that's the way to deal with them too.
There may have been something that set them off. But it's not something that
would have set off a normal person, so there's no reason to spend any time
thinking about it. It's not you, it's them.
**Notes**
[1] There are of course some people who are genuine frauds. How can you
distinguish between x calling y a fraud because x is a hater, and because y is
a fraud? Look at neutral opinion. Actual frauds are usually pretty
conspicuous. Thoughtful people are rarely taken in by them. So if there are
some thoughtful people who like y, you can usually assume y is not a fraud.
[2] I would make an exception for teenagers, who sometimes act in such extreme
ways that they are literally not themselves. I can imagine a teenage kid being
a hater and then growing out of it. But not anyone over 25.
[3] I have a much worse memory for misdeeds than my wife Jessica, who is a
connoisseur of character, but I don't wish it were better. Most disputes are a
waste of time even if you're in the right, and it's easy to bury the hatchet
with someone if you can't remember why you were mad at them.
[4] A competent hater will not merely attack you individually but will try to
get mobs after you. In some cases you may want to refute whatever bogus claim
they made in order to do so. But err on the side of not, because ultimately it
probably won't matter.
**Thanks** to Austen Allred, Trevor Blackwell, Patrick Collison, Christine
Ford, Daniel Gackle, Jessica Livingston, Robert Morris, Elon Musk, Harj
Taggar, and Peter Thiel for reading drafts of this.
|
March 2012
As a child I read a book of stories about a famous judge in eighteenth century
Japan called Ooka Tadasuke. One of the cases he decided was brought by the
owner of a food shop. A poor student who could afford only rice was eating his
rice while enjoying the delicious cooking smells coming from the food shop.
The owner wanted the student to pay for the smells he was enjoying.
The student was stealing his smells!
This story often comes to mind when I hear the RIAA and MPAA accusing people
of stealing music and movies.
It sounds ridiculous to us to treat smells as property. But I can imagine
scenarios in which one could charge for smells. Imagine we were living on a
moon base where we had to buy air by the liter. I could imagine air suppliers
adding scents at an extra charge.
The reason it seems ridiculous to us to treat smells as property is that it
wouldn't work to. It would work on a moon base, though.
What counts as property depends on what works to treat as property. And that
not only can change, but has changed. Humans may always (for some definition
of human and always) have treated small items carried on one's person as
property. But hunter gatherers didn't treat land, for example, as property in
the way we do. [1]
The reason so many people think of property as having a single unchanging
definition is that its definition changes very slowly. [2] But we are in the
midst of such a change now. The record labels and movie studios used to
distribute what they made like air shipped through tubes on a moon base. But
with the arrival of networks, it's as if we've moved to a planet with a
breathable atmosphere. Data moves like smells now. And through a combination
of wishful thinking and short-term greed, the labels and studios have put
themselves in the position of the food shop owner, accusing us all of stealing
their smells.
(The reason I say short-term greed is that the underlying problem with the
labels and studios is that the people who run them are driven by bonuses
rather than equity. If they were driven by equity they'd be looking for ways
to take advantage of technological change instead of fighting it. But building
new things takes too long. Their bonuses depend on this year's revenues, and
the best way to increase those is to extract more money from stuff they do
already.)
So what does this mean? Should people not be able to charge for content?
There's not a single yes or no answer to that question. People should be able
to charge for content when it works to charge for content.
But by "works" I mean something more subtle than "when they can get away with
it." I mean when people can charge for content without warping society in
order to do it. After all, the companies selling smells on the moon base could
continue to sell them on the Earth, if they lobbied successfully for laws
requiring us all to continue to breathe through tubes down here too, even
though we no longer needed to.
The crazy legal measures that the labels and studios have been taking have a
lot of that flavor. Newspapers and magazines are just as screwed, but they are
at least declining gracefully. The RIAA and MPAA would make us breathe through
tubes if they could.
Ultimately it comes down to common sense. When you're abusing the legal system
by trying to use mass lawsuits against randomly chosen people as a form of
exemplary punishment, or lobbying for laws that would break the Internet if
they passed, that's ipso facto evidence you're using a definition of property
that doesn't work.
This is where it's helpful to have working democracies and multiple sovereign
countries. If the world had a single, autocratic government, the labels and
studios could buy laws making the definition of property be whatever they
wanted. But fortunately there are still some countries that are not copyright
colonies of the US, and even in the US,
[politicians](http://tctechcrunch2011.files.wordpress.com/2012/01/congress-on-sopa-done.png)
still seem to be afraid of actual voters, in sufficient numbers. [3]
The people running the US may not like it when voters or other countries
refuse to bend to their will, but ultimately it's in all our interest that
there's not a single point of attack for people trying to warp the law to
serve their own purposes. Private property is an extremely useful idea —
arguably one of our greatest inventions. So far, each new definition of it has
brought us increasing material wealth. [4] It seems reasonable to suppose the
newest one will too. It would be a disaster if we all had to keep running an
obsolete version just because a few powerful people were too lazy to upgrade.
**Notes**
[1] If you want to learn more about hunter gatherers I strongly recommend
Elizabeth Marshall Thomas's
[_The Harmless People_](http://www.amazon.com/Harmless-People-Elizabeth-Marshall-Thomas/dp/0394427793)
and [_The Old Way_](http://www.amazon.com/Old-Way-Story-First-People/dp/0374225524).
[2] Change in the definition of property is driven mostly by technological
progress, however, and since technological progress is accelerating, so
presumably will the rate of change in the definition of property. Which means
it's all the more important for societies to be able to respond gracefully to
such changes, because they will come at an ever increasing rate.
[3] As far as I know, the term "copyright colony" was first used by
[Myles Peterson](http://torrentfreak.com/australia-us-copyright-colony-or-just-a-good-friend-120121/).
[4] The state of technology isn't simply a function of the definition of
property. They each constrain the other. But that being so, you can't mess
with the definition of property without affecting (and probably harming) the
state of technology. The history of the USSR offers a vivid illustration of
that.
**Thanks** to Sam Altman and Geoff Ralston for reading drafts of this.
|
March 2007
_(This essay is derived from talks at the 2007 Startup School and the
Berkeley CSUA.)_
We've now been doing Y Combinator long enough to have some data about success
rates. Our first batch, in the summer of 2005, had eight startups in it. Of
those eight, it now looks as if at least four succeeded. Three have been
acquired: [Reddit](http://reddit.com) was a merger of two, Reddit and
Infogami, and a third was acquired that we can't talk about yet. Another from
that batch was [Loopt](http://loopt.com), which is doing so well they could
probably be acquired in about ten minutes if they wanted to.
So about half the founders from that first summer, less than two years ago,
are now rich, at least by their standards. (One thing you learn when you get
rich is that there are many degrees of it.)
I'm not ready to predict our success rate will stay as high as 50%. That first
batch could have been an anomaly. But we should be able to do better than the
oft-quoted (and probably made up) standard figure of 10%. I'd feel safe aiming
at 25%.
Even the founders who fail don't seem to have such a bad time. Of those first
eight startups, three are now probably dead. In two cases the founders just
went on to do other things at the end of the summer. I don't think they were
traumatized by the experience. The closest to a traumatic failure was Kiko,
whose founders kept working on their startup for a whole year before being
squashed by Google Calendar. But they ended up happy. They sold their software
on eBay for a quarter of a million dollars. After they paid back their angel
investors, they had about a year's salary each. [1] Then they immediately went
on to start a new and much more exciting startup,
[Justin.TV](http://justin.tv).
So here is an even more striking statistic: 0% of that first batch had a
terrible experience. They had ups and downs, like every startup, but I don't
think any would have traded it for a job in a cubicle. And that statistic is
probably not an anomaly. Whatever our long-term success rate ends up being, I
think the rate of people who wish they'd gotten a regular job will stay close
to 0%.
The big mystery to me is: why don't more people start startups? If nearly
everyone who does it prefers it to a regular job, and a significant percentage
get rich, why doesn't everyone want to do this? A lot of people think we get
thousands of applications for each funding cycle. In fact we usually only get
several hundred. Why don't more people apply? And while it must seem to anyone
watching this world that startups are popping up like crazy, the number is
small compared to the number of people with the necessary skills. The great
majority of programmers still go straight from college to cubicle, and stay
there.
It seems like people are not acting in their own interest. What's going on?
Well, I can answer that. Because of Y Combinator's position at the very start
of the venture funding process, we're probably the world's leading experts on
the psychology of people who aren't sure if they want to start a company.
There's nothing wrong with being unsure. If you're a hacker thinking about
starting a startup and hesitating before taking the leap, you're part of a
grand tradition. Larry and Sergey seem to have felt the same before they
started Google, and so did Jerry and Filo before they started Yahoo. In fact,
I'd guess the most successful startups are the ones started by uncertain
hackers rather than gung-ho business guys.
We have some evidence to support this. Several of the most successful startups
we've funded told us later that they only decided to apply at the last moment.
Some decided only hours before the deadline.
The way to deal with uncertainty is to analyze it into components. Most people
who are reluctant to do something have about eight different reasons mixed
together in their heads, and don't know themselves which are biggest. Some
will be justified and some bogus, but unless you know the relative proportion
of each, you don't know whether your overall uncertainty is mostly justified
or mostly bogus.
So I'm going to list all the components of people's reluctance to start
startups, and explain which are real. Then would-be founders can use this as a
checklist to examine their own feelings.
I admit my goal is to increase your self-confidence. But there are two things
different here from the usual confidence-building exercise. One is that I'm
motivated to be honest. Most people in the confidence-building business have
already achieved their goal when you buy the book or pay to attend the seminar
where they tell you how great you are. Whereas if I encourage people to start
startups who shouldn't, I make my own life worse. If I encourage too many
people to apply to Y Combinator, it just means more work for me, because I
have to read all the applications.
The other thing that's going to be different is my approach. Instead of being
positive, I'm going to be negative. Instead of telling you "come on, you can
do it" I'm going to consider all the reasons you aren't doing it, and show why
most (but not all) should be ignored. We'll start with the one everyone's born
with.
**1\. Too young**
A lot of people think they're too young to start a startup. Many are right.
The median age worldwide is about 27, so probably a third of the population
can truthfully say they're too young.
What's too young? One of our goals with Y Combinator was to discover the lower
bound on the age of startup founders. It always seemed to us that investors
were too conservative here—that they wanted to fund professors, when really
they should be funding grad students or even undergrads.
The main thing we've discovered from pushing the edge of this envelope is not
where the edge is, but how fuzzy it is. The outer limit may be as low as 16.
We don't look below 18 because people younger than that can't legally enter
into contracts. But the most successful founder we've funded so far, Sam
Altman, was 19 at the time.
Sam Altman, however, is an outlying data point. When he was 19, he seemed like
he had a 40 year old inside him. There are other 19 year olds who are 12
inside.
There's a reason we have a distinct word "adult" for people over a certain
age. There is a threshold you cross. It's conventionally fixed at 21, but
different people cross it at greatly varying ages. You're old enough to start
a startup if you've crossed this threshold, whatever your age.
How do you tell? There are a couple tests adults use. I realized these tests
existed after meeting Sam Altman, actually. I noticed that I felt like I was
talking to someone much older. Afterward I wondered, what am I even measuring?
What made him seem older?
One test adults use is whether you still have the kid flake reflex. When
you're a little kid and you're asked to do something hard, you can cry and say
"I can't do it" and the adults will probably let you off. As a kid there's a
magic button you can press by saying "I'm just a kid" that will get you out of
most difficult situations. Whereas adults, by definition, are not allowed to
flake. They still do, of course, but when they do they're ruthlessly pruned.
The other way to tell an adult is by how they react to a challenge. Someone
who's not yet an adult will tend to respond to a challenge from an adult in a
way that acknowledges their dominance. If an adult says "that's a stupid
idea," a kid will either crawl away with his tail between his legs, or rebel.
But rebelling presumes inferiority as much as submission. The adult response
to "that's a stupid idea," is simply to look the other person in the eye and
say "Really? Why do you think so?"
There are a lot of adults who still react childishly to challenges, of course.
What you don't often find are kids who react to challenges like adults. When
you do, you've found an adult, whatever their age.
**2\. Too inexperienced**
I once wrote that startup founders should be at least 23, and that people
should work for another company for a few years before starting their own. I
no longer believe that, and what changed my mind is the example of the
startups we've funded.
I still think 23 is a better age than 21. But the best way to get experience
if you're 21 is to start a startup. So, paradoxically, if you're too
inexperienced to start a startup, what you should do is start one. That's a
way more efficient cure for inexperience than a normal job. In fact, getting a
normal job may actually make you less able to start a startup, by turning you
into a tame animal who thinks he needs an office to work in and a product
manager to tell him what software to write.
What really convinced me of this was the Kikos. They started a startup right
out of college. Their inexperience caused them to make a lot of mistakes. But
by the time we funded their second startup, a year later, they had become
extremely formidable. They were certainly not tame animals. And there is no
way they'd have grown so much if they'd spent that year working at Microsoft,
or even Google. They'd still have been diffident junior programmers.
So now I'd advise people to go ahead and start startups right out of college.
There's no better time to take risks than when you're young. Sure, you'll
probably fail. But even failure will get you to the ultimate goal faster than
getting a job.
It worries me a bit to be saying this, because in effect we're advising people
to educate themselves by failing at our expense, but it's the truth.
**3\. Not determined enough**
You need a lot of determination to succeed as a startup founder. It's probably
the single best predictor of success.
Some people may not be determined enough to make it. It's hard for me to say
for sure, because I'm so determined that I can't imagine what's going on in
the heads of people who aren't. But I know they exist.
Most hackers probably underestimate their determination. I've seen a lot
become visibly more determined as they get used to running a startup. I can
think of several we've funded who would have been delighted at first to be
bought for $2 million, but are now set on world domination.
How can you tell if you're determined enough, when Larry and Sergey themselves
were unsure at first about starting a company? I'm guessing here, but I'd say
the test is whether you're sufficiently driven to work on your own projects.
Though they may have been unsure whether they wanted to start a company, it
doesn't seem as if Larry and Sergey were meek little research assistants,
obediently doing their advisors' bidding. They started projects of their own.
**4\. Not smart enough**
You may need to be moderately smart to succeed as a startup founder. But if
you're worried about this, you're probably mistaken. If you're smart enough to
worry that you might not be smart enough to start a startup, you probably are.
And in any case, starting a startup just doesn't require that much
intelligence. Some startups do. You have to be good at math to write
Mathematica. But most companies do more mundane stuff where the decisive
factor is effort, not brains. Silicon Valley can warp your perspective on
this, because there's a cult of smartness here. People who aren't smart at
least try to act that way. But if you think it takes a lot of intelligence to
get rich, try spending a couple days in some of the fancier bits of New York
or LA.
If you don't think you're smart enough to start a startup doing something
technically difficult, just write enterprise software. Enterprise software
companies aren't technology companies, they're sales companies, and sales
depends mostly on effort.
**5\. Know nothing about business**
This is another variable whose coefficient should be zero. You don't need to
know anything about business to start a startup. The initial focus should be
the product. All you need to know in this phase is how to build things people
want. If you succeed, you'll have to think about how to make money from it.
But this is so easy you can pick it up on the fly.
I get a fair amount of flak for telling founders just to make something great
and not worry too much about making money. And yet all the empirical evidence
points that way: pretty much 100% of startups that make something popular
manage to make money from it. And acquirers tell me privately that revenue is
not what they buy startups for, but their strategic value. Which means,
because they made something people want. Acquirers know the rule holds for
them too: if users love you, you can always make money from that somehow, and
if they don't, the cleverest business model in the world won't save you.
So why do so many people argue with me? I think one reason is that they hate
the idea that a bunch of twenty year olds could get rich from building
something cool that doesn't make any money. They just don't want that to be
possible. But how possible it is doesn't depend on how much they want it to
be.
For a while it annoyed me to hear myself described as some kind of
irresponsible pied piper, leading impressionable young hackers down the road
to ruin. But now I realize this kind of controversy is a sign of a good idea.
The most valuable truths are the ones most people don't believe. They're like
undervalued stocks. If you start with them, you'll have the whole field to
yourself. So when you find an idea you know is good but most people disagree
with, you should not merely ignore their objections, but push aggressively in
that direction. In this case, that means you should seek out ideas that would
be popular but seem hard to make money from.
We'll bet a seed round you can't make something popular that we can't figure
out how to make money from.
**6\. No cofounder**
Not having a cofounder is a real problem. A startup is too much for one person
to bear. And though we differ from other investors on a lot of questions, we
all agree on this. All investors, without exception, are more likely to fund
you with a cofounder than without.
We've funded two single founders, but in both cases we suggested their first
priority should be to find a cofounder. Both did. But we'd have preferred them
to have cofounders before they applied. It's not super hard to get a cofounder
for a project that's just been funded, and we'd rather have cofounders
committed enough to sign up for something super hard.
If you don't have a cofounder, what should you do? Get one. It's more
important than anything else. If there's no one where you live who wants to
start a startup with you, move where there are people who do. If no one wants
to work with you on your current idea, switch to an idea people want to work
on.
If you're still in school, you're surrounded by potential cofounders. A few
years out it gets harder to find them. Not only do you have a smaller pool to
draw from, but most already have jobs, and perhaps even families to support.
So if you had friends in college you used to scheme about startups with, stay
in touch with them as well as you can. That may help keep the dream alive.
It's possible you could meet a cofounder through something like a user's group
or a conference. But I wouldn't be too optimistic. You need to work with
someone to know whether you want them as a cofounder. [2]
The real lesson to draw from this is not how to find a cofounder, but that you
should start startups when you're young and there are lots of them around.
**7\. No idea**
In a sense, it's not a problem if you don't have a good idea, because most
startups change their idea anyway. In the average Y Combinator startup, I'd
guess 70% of the idea is new at the end of the first three months. Sometimes
it's 100%.
In fact, we're so sure the founders are more important than the initial idea
that we're going to try something new this funding cycle. We're going to let
people apply with no idea at all. If you want, you can answer the question on
the application form that asks what you're going to do with "We have no idea."
If you seem really good we'll accept you anyway. We're confident we can sit
down with you and cook up some promising project.
Really this just codifies what we do already. We put little weight on the
idea. We ask mainly out of politeness. The kind of question on the application
form that we really care about is the one where we ask what cool things you've
made. If what you've made is version one of a promising startup, so much the
better, but the main thing we care about is whether you're good at making
things. Being lead developer of a popular open source project counts almost as
much.
That solves the problem if you get funded by Y Combinator. What about in the
general case? Because in another sense, it is a problem if you don't have an
idea. If you start a startup with no idea, what do you do next?
So here's the brief recipe for getting startup ideas. Find something that's
missing in your own life, and supply that need—no matter how specific to you
it seems. Steve Wozniak built himself a computer; who knew so many other
people would want them? A need that's narrow but genuine is a better starting
point than one that's broad but hypothetical. So even if the problem is simply
that you don't have a date on Saturday night, if you can think of a way to fix
that by writing software, you're onto something, because a lot of other people
have the same problem.
**8\. No room for more startups**
A lot of people look at the ever-increasing number of startups and think "this
can't continue." Implicit in their thinking is a fallacy: that there is some
limit on the number of startups there could be. But this is false. No one
claims there's any limit on the number of people who can work for salary at
1000-person companies. Why should there be any limit on the number who can
work for equity at 5-person companies? [3]
Nearly everyone who works is satisfying some kind of need. Breaking up
companies into smaller units doesn't make those needs go away. Existing needs
would probably get satisfied more efficiently by a network of startups than by
a few giant, hierarchical organizations, but I don't think that would mean
less opportunity, because satisfying current needs would lead to more.
Certainly this tends to be the case in individuals. Nor is there anything
wrong with that. We take for granted things that medieval kings would have
considered effeminate luxuries, like whole buildings heated to spring
temperatures year round. And if things go well, our descendants will take for
granted things we would consider shockingly luxurious. There is no absolute
standard for material wealth. Health care is a component of it, and that alone
is a black hole. For the foreseeable future, people will want ever more
material wealth, so there is no limit to the amount of work available for
companies, and for startups in particular.
Usually the limited-room fallacy is not expressed directly. Usually it's
implicit in statements like "there are only so many startups Google,
Microsoft, and Yahoo can buy." Maybe, though the list of acquirers is a lot
longer than that. And whatever you think of other acquirers, Google is not
stupid. The reason big companies buy startups is that they've created
something valuable. And why should there be any limit to the number of
valuable startups companies can acquire, any more than there is a limit to the
amount of wealth individual people want? Maybe there would be practical limits
on the number of startups any one acquirer could assimilate, but if there is
value to be had, in the form of upside that founders are willing to forgo in
return for an immediate payment, acquirers will evolve to consume it. Markets
are pretty smart that way.
**9\. Family to support**
This one is real. I wouldn't advise anyone with a family to start a startup.
I'm not saying it's a bad idea, just that I don't want to take responsibility
for advising it. I'm willing to take responsibility for telling 22 year olds
to start startups. So what if they fail? They'll learn a lot, and that job at
Microsoft will still be waiting for them if they need it. But I'm not prepared
to cross moms.
What you can do, if you have a family and want to start a startup, is start a
consulting business you can then gradually turn into a product business.
Empirically the chances of pulling that off seem very small. You're never
going to produce Google this way. But at least you'll never be without an
income.
Another way to decrease the risk is to join an existing startup instead of
starting your own. Being one of the first employees of a startup is a lot like
being a founder, in both the good ways and the bad. You'll be roughly 1/n^2
founder, where n is your employee number.
As with the question of cofounders, the real lesson here is to start startups
when you're young.
**10\. Independently wealthy**
This is my excuse for not starting a startup. Startups are stressful. Why do
it if you don't need the money? For every "serial entrepreneur," there are
probably twenty sane ones who think "Start another company? Are you crazy?"
I've come close to starting new startups a couple times, but I always pull
back because I don't want four years of my life to be consumed by random
schleps. I know this business well enough to know you can't do it
half-heartedly. What makes a good startup founder so dangerous is his willingness
to endure infinite schleps.
There is a bit of a problem with retirement, though. Like a lot of people, I
like to work. And one of the many weird little problems you discover when you
get rich is that a lot of the interesting people you'd like to work with are
not rich. They need to work at something that pays the bills. Which means if
you want to have them as colleagues, you have to work at something that pays
the bills too, even though you don't need to. I think this is what drives a
lot of serial entrepreneurs, actually.
That's why I love working on Y Combinator so much. It's an excuse to work on
something interesting with people I like.
**11\. Not ready for commitment**
This was my reason for not starting a startup for most of my twenties. Like a
lot of people that age, I valued freedom most of all. I was reluctant to do
anything that required a commitment of more than a few months. Nor would I
have wanted to do anything that completely took over my life the way a startup
does. And that's fine. If you want to spend your time travelling around, or
playing in a band, or whatever, that's a perfectly legitimate reason not to
start a company.
If you start a startup that succeeds, it's going to consume at least three or
four years. (If it fails, you'll be done a lot quicker.) So you shouldn't do
it if you're not ready for commitments on that scale. Be aware, though, that
if you get a regular job, you'll probably end up working there for as long as
a startup would take, and you'll find you have much less spare time than you
might expect. So if you're ready to clip on that ID badge and go to that
orientation session, you may also be ready to start that startup.
**12\. Need for structure**
I'm told there are people who need structure in their lives. This seems to be
a nice way of saying they need someone to tell them what to do. I believe such
people exist. There's plenty of empirical evidence: armies, religious cults,
and so on. They may even be the majority.
If you're one of these people, you probably shouldn't start a startup. In
fact, you probably shouldn't even go to work for one. In a good startup, you
don't get told what to do very much. There may be one person whose job title
is CEO, but till the company has about twelve people no one should be telling
anyone what to do. That's too inefficient. Each person should just do what
they need to without anyone telling them.
If that sounds like a recipe for chaos, think about a soccer team. Eleven
people manage to work together in quite complicated ways, and yet only in
occasional emergencies does anyone tell anyone else what to do. A reporter
once asked David Beckham if there were any language problems at Real Madrid,
since the players were from about eight different countries. He said it was
never an issue, because everyone was so good they never had to talk. They all
just did the right thing.
How do you tell if you're independent-minded enough to start a startup? If
you'd bristle at the suggestion that you aren't, then you probably are.
**13\. Fear of uncertainty**
Perhaps some people are deterred from starting startups because they don't
like the uncertainty. If you go to work for Microsoft, you can predict fairly
accurately what the next few years will be like—all too accurately, in fact.
If you start a startup, anything might happen.
Well, if you're troubled by uncertainty, I can solve that problem for you: if
you start a startup, it will probably fail. Seriously, though, this is not a
bad way to think about the whole experience. Hope for the best, but expect the
worst. In the worst case, it will at least be interesting. In the best case
you might get rich.
No one will blame you if the startup tanks, so long as you made a serious
effort. There may once have been a time when employers would regard that as a
mark against you, but they wouldn't now. I asked managers at big companies,
and they all said they'd prefer to hire someone who'd tried to start a startup
and failed over someone who'd spent the same time working at a big company.
Nor will investors hold it against you, as long as you didn't fail out of
laziness or incurable stupidity. I'm told there's a lot of stigma attached to
failing in other places—in Europe, for example. Not here. In America,
companies, like practically everything else, are disposable.
**14\. Don't realize what you're avoiding**
One reason people who've been out in the world for a year or two make better
founders than people straight from college is that they know what they're
avoiding. If their startup fails, they'll have to get a job, and they know how
much jobs suck.
If you've had summer jobs in college, you may think you know what jobs are
like, but you probably don't. Summer jobs at technology companies are not real
jobs. If you get a summer job as a waiter, that's a real job. Then you have to
carry your weight. But software companies don't hire students for the summer
as a source of cheap labor. They do it in the hope of recruiting them when
they graduate. So while they're happy if you produce, they don't expect you
to.
That will change if you get a real job after you graduate. Then you'll have to
earn your keep. And since most of what big companies do is boring, you're
going to have to work on boring stuff. Easy, compared to college, but boring.
At first it may seem cool to get paid for doing easy stuff, after paying to do
hard stuff in college. But that wears off after a few months. Eventually it
gets demoralizing to work on dumb stuff, even if it's easy and you get paid a
lot.
And that's not the worst of it. The thing that really sucks about having a
regular job is the expectation that you're supposed to be there at certain
times. Even Google is afflicted with this, apparently. And what this means, as
everyone who's had a regular job can tell you, is that there are going to be
times when you have absolutely no desire to work on anything, and you're going
to have to go to work anyway and sit in front of your screen and pretend to.
To someone who likes work, as most good hackers do, this is torture.
In a startup, you skip all that. There's no concept of office hours in most
startups. Work and life just get mixed together. But the good thing about that
is that no one minds if you have a life at work. In a startup you can do
whatever you want most of the time. If you're a founder, what you want to do
most of the time is work. But you never have to pretend to.
If you took a nap in your office in a big company, it would seem
unprofessional. But if you're starting a startup and you fall asleep in the
middle of the day, your cofounders will just assume you were tired.
**15\. Parents want you to be a doctor**
A significant number of would-be startup founders are probably dissuaded from
doing it by their parents. I'm not going to say you shouldn't listen to them.
Families are entitled to their own traditions, and who am I to argue with
them? But I will give you a couple reasons why a safe career might not be what
your parents really want for you.
One is that parents tend to be more conservative for their kids than they
would be for themselves. This is actually a rational response to their
situation. Parents end up sharing more of their kids' ill fortune than good
fortune. Most parents don't mind this; it's part of the job; but it does tend
to make them excessively conservative. And erring on the side of conservatism
is still erring. In almost everything, reward is proportionate to risk. So by
protecting their kids from risk, parents are, without realizing it, also
protecting them from rewards. If they saw that, they'd want you to take more
risks.
The other reason parents may be mistaken is that, like generals, they're
always fighting the last war. If they want you to be a doctor, odds are it's
not just because they want you to help the sick, but also because it's a
prestigious and lucrative career. [4] But not so lucrative or prestigious as
it was when their opinions were formed. When I was a kid in the seventies, a
doctor was _the_ thing to be. There was a sort of golden triangle involving
doctors, Mercedes 450SLs, and tennis. All three vertices now seem pretty
dated.
The parents who want you to be a doctor may simply not realize how much things
have changed. Would they be that unhappy if you were Steve Jobs instead? So I
think the way to deal with your parents' opinions about what you should do is
to treat them like feature requests. Even if your only goal is to please them,
the way to do that is not simply to give them what they ask for. Instead think
about why they're asking for something, and see if there's a better way to
give them what they need.
**16\. A job is the default**
This leads us to the last and probably most powerful reason people get regular
jobs: it's the default thing to do. Defaults are enormously powerful,
precisely because they operate without any conscious choice.
To almost everyone except criminals, it seems an axiom that if you need money,
you should get a job. Actually this tradition is not much more than a hundred
years old. Before that, the default way to make a living was by farming. It's
a bad plan to treat something only a hundred years old as an axiom. By
historical standards, that's something that's changing pretty rapidly.
We may be seeing another such change right now. I've read a lot of economic
history, and I understand the startup world pretty well, and it now seems to
me fairly likely that we're seeing the beginning of a change like the one from
farming to manufacturing.
And you know what? If you'd been around when that change began (around 1000 in
Europe) it would have seemed to nearly everyone that running off to the city
to make your fortune was a crazy thing to do. Though serfs were in principle
forbidden to leave their manors, it can't have been that hard to run away to a
city. There were no guards patrolling the perimeter of the village. What
prevented most serfs from leaving was that it seemed insanely risky. Leave
one's plot of land? Leave the people you'd spent your whole life with, to live
in a giant city of three or four thousand complete strangers? How would you
live? How would you get food, if you didn't grow it?
Frightening as it seemed to them, it's now the default with us to live by our
wits. So if it seems risky to you to start a startup, think how risky it once
seemed to your ancestors to live as we do now. Oddly enough, the people who
know this best are the very ones trying to get you to stick to the old model.
How can Larry and Sergey say you should come work as their employee, when they
didn't get jobs themselves?
Now we look back on medieval peasants and wonder how they stood it. How grim
it must have been to till the same fields your whole life with no hope of
anything better, under the thumb of lords and priests you had to give all your
surplus to and acknowledge as your masters. I wouldn't be surprised if one day
people look back on what we consider a normal job in the same way. How grim it
would be to commute every day to a cubicle in some soulless office complex,
and be told what to do by someone you had to acknowledge as a boss—someone who
could call you into their office and say "take a seat," and you'd sit! Imagine
having to ask _permission_ to release software to users. Imagine being sad on
Sunday afternoons because the weekend was almost over, and tomorrow you'd have
to get up and go to work. How did they stand it?
It's exciting to think we may be on the cusp of another shift like the one
from farming to manufacturing. That's why I care about startups. Startups
aren't interesting just because they're a way to make a lot of money. I
couldn't care less about other ways to do that, like speculating in
securities. At most those are interesting the way puzzles are. There's more
going on with startups. They may represent one of those rare, historic shifts
in the way [wealth](wealth.html) is created.
That's ultimately what drives us to work on Y Combinator. We want to make
money, if only so we don't have to stop doing it, but that's not the main
goal. There have only been a handful of these great economic shifts in human
history. It would be an amazing hack to make one happen faster.
**Notes**
[1] The only people who lost were us. The angels had convertible debt, so they
had first claim on the proceeds of the auction. Y Combinator only got 38 cents
on the dollar.
[2] The best kind of organization for that might be an open source project,
but those don't involve a lot of face to face meetings. Maybe it would be
worth starting one that did.
[3] There need to be some number of big companies to acquire the startups, so
the number of big companies couldn't decrease to zero.
[4] Thought experiment: If doctors did the same work, but as impoverished
outcasts, which parents would still want their kids to be doctors?
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this, to the founders of Zenter for letting me use their
web-based PowerPoint killer even though it isn't launched yet, and to Ming-Hay
Luk of the Berkeley CSUA for inviting me to speak.
|
After a link to [Beating the Averages](avg.html) was posted on slashdot, some
readers wanted to hear in more detail about the specific technical advantages
we got from using Lisp in Viaweb. For those who are interested, here are some
excerpts from a talk I gave in April 2001 at BBN Labs in Cambridge, MA.
|
January 2003
_(This article was given as a talk at the 2003 Spam Conference. It describes
the work I've done to improve the performance of the algorithm described in
[A Plan for Spam](spam.html), and what I plan to do in the future.)_
The first discovery I'd like to present here is an algorithm for lazy
evaluation of research papers. Just write whatever you want and don't cite any
previous work, and indignant readers will send you references to all the
papers you should have cited. I discovered this algorithm after ``A Plan for
Spam'' [1] was on Slashdot.
Spam filtering is a subset of text classification, which is a well established
field, but the first papers about Bayesian spam filtering per se seem to have
been two given at the same conference in 1998, one by Pantel and Lin [2], and
another by a group from Microsoft Research [3].
When I heard about this work I was a bit surprised. If people had been onto
Bayesian filtering four years ago, why wasn't everyone using it? When I read
the papers I found out why. Pantel and Lin's filter was the more effective of
the two, but it only caught 92% of spam, with 1.16% false positives.
When I tried writing a Bayesian spam filter, it caught 99.5% of spam with less
than .03% false positives [4]. It's always alarming when two people trying the
same experiment get widely divergent results. It's especially alarming here
because those two sets of numbers might yield opposite conclusions. Different
users have different requirements, but I think for many people a filtering
rate of 92% with 1.16% false positives means that filtering is not an
acceptable solution, whereas 99.5% with less than .03% false positives means
that it is.
So why did we get such different numbers? I haven't tried to reproduce Pantel
and Lin's results, but from reading the paper I see five things that probably
account for the difference.
One is simply that they trained their filter on very little data: 160 spam and
466 nonspam mails. Filter performance should still be climbing with data sets
that small. So their numbers may not even be an accurate measure of the
performance of their algorithm, let alone of Bayesian spam filtering in
general.
But I think the most important difference is probably that they ignored
message headers. To anyone who has worked on spam filters, this will seem a
perverse decision. And yet in the very first filters I tried writing, I
ignored the headers too. Why? Because I wanted to keep the problem neat. I
didn't know much about mail headers then, and they seemed to me full of random
stuff. There is a lesson here for filter writers: don't ignore data. You'd
think this lesson would be too obvious to mention, but I've had to learn it
several times.
Third, Pantel and Lin stemmed the tokens, meaning they reduced e.g. both
``mailing'' and ``mailed'' to the root ``mail''. They may have felt they were
forced to do this by the small size of their corpus, but if so this is a kind
of premature optimization.
Fourth, they calculated probabilities differently. They used all the tokens,
whereas I only use the 15 most significant. If you use all the tokens you'll
tend to miss longer spams, the type where someone tells you their life story
up to the point where they got rich from some multilevel marketing scheme. And
such an algorithm would be easy for spammers to spoof: just add a big chunk of
random text to counterbalance the spam terms.
Finally, they didn't bias against false positives. I think any spam filtering
algorithm ought to have a convenient knob you can twist to decrease the false
positive rate at the expense of the filtering rate. I do this by counting the
occurrences of tokens in the nonspam corpus double.
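To make that knob concrete, here is the per-token calculation along the lines
of the formula in ``A Plan for Spam'', as a Python sketch rather than the
original code; the function name and the `ham_bias` parameter are illustrative:

```python
def token_spam_probability(spam_count, ham_count, nspam, nham, ham_bias=2.0):
    """Probability that a mail containing this token is spam.
    Counting each occurrence in the nonspam corpus double
    (ham_bias=2.0) is the knob that lowers the false positive
    rate at the expense of the filtering rate."""
    b = min(1.0, spam_count / nspam)           # frequency in the spam corpus
    g = min(1.0, ham_bias * ham_count / nham)  # frequency in nonspam, doubled
    if b + g == 0:
        return 0.4  # never-seen tokens default to .4, as in A Plan for Spam
    return max(0.01, min(0.99, b / (b + g)))
```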
I don't think it's a good idea to treat spam filtering as a straight text
classification problem. You can use text classification techniques, but
solutions can and should reflect the fact that the text is email, and spam in
particular. Email is not just text; it has structure. Spam filtering is not
just classification, because false positives are so much worse than false
negatives that you should treat them as a different kind of error. And the
source of error is not just random variation, but a live human spammer working
actively to defeat your filter.
**Tokens**
Another project I heard about after the Slashdot article was Bill Yerazunis'
[CRM114](http://crm114.sourceforge.net) [5]. This is the counterexample to the
design principle I just mentioned. It's a straight text classifier, but such a
stunningly effective one that it manages to filter spam almost perfectly
without even knowing that's what it's doing.
Once I understood how CRM114 worked, it seemed inevitable that I would
eventually have to move from filtering based on single words to an approach
like this. But first, I thought, I'll see how far I can get with single words.
And the answer is, surprisingly far.
Mostly I've been working on smarter tokenization. On current spam, I've been
able to achieve filtering rates that approach CRM114's. These techniques are
mostly orthogonal to Bill's; an optimal solution might incorporate both.
``A Plan for Spam'' uses a very simple definition of a token. Letters, digits,
dashes, apostrophes, and dollar signs are constituent characters, and
everything else is a token separator. I also ignored case.
Now I have a more complicated definition of a token:
1. Case is preserved.
2. Exclamation points are constituent characters.
3. Periods and commas are constituents if they occur between two digits. This lets me get ip addresses and prices intact.
4. A price range like $20-25 yields two tokens, $20 and $25.
5. Tokens that occur within the To, From, Subject, and Return-Path lines, or within urls, get marked accordingly. E.g. ``foo'' in the Subject line becomes ``Subject*foo''. (The asterisk could be any character you don't allow as a constituent.)
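For concreteness, here is what a tokenizer implementing these five rules might
look like. This is a minimal sketch in Python, not the filter's actual code;
the regular expression and names are mine:

```python
import re

# A constituent character is a letter, digit, dash, apostrophe, dollar
# sign, or exclamation point. A period or comma also counts when it has
# a digit on both sides, so ip addresses and prices come through intact.
TOKEN_RE = re.compile(r"(?:[A-Za-z0-9$'!-]|(?<=\d)[.,](?=\d))+")

def split_price_range(token):
    """A price range like $20-25 yields two tokens, $20 and $25."""
    m = re.fullmatch(r"(\$\d+)-(\d+)", token)
    return [m.group(1), "$" + m.group(2)] if m else [token]

def tokenize(text, context=None):
    """Tokenize text, preserving case. context is "To", "From",
    "Subject", "Return-Path", or "Url" for the marked contexts,
    and None for ordinary body text."""
    tokens = []
    for raw in TOKEN_RE.findall(text):
        for tok in split_price_range(raw):
            tokens.append(context + "*" + tok if context else tok)
    return tokens

# tokenize("FREE!! Offer ends 12.25", context="Subject")
#   => ['Subject*FREE!!', 'Subject*Offer', 'Subject*ends', 'Subject*12.25']
```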
Such measures increase the filter's vocabulary, which makes it more
discriminating. For example, in the current filter, ``free'' in the Subject
line has a spam probability of 98%, whereas the same token in the body has a
spam probability of only 65%.
Here are some of the current probabilities [6]:
Subject*FREE 0.9999
free!! 0.9999
To*free 0.9998
Subject*free 0.9782
free! 0.9199
Free 0.9198
Url*free 0.9091
FREE 0.8747
From*free 0.7636
free 0.6546
In the Plan for Spam filter, all these tokens would have had the same
probability, .7602. That filter recognized about 23,000 tokens. The current
one recognizes about 187,000.
The disadvantage of having a larger universe of tokens is that there is more
chance of misses. Spreading your corpus out over more tokens has the same
effect as making it smaller. If you consider exclamation points as
constituents, for example, then you could end up not having a spam probability
for free with seven exclamation points, even though you know that free with
just two exclamation points has a probability of 99.99%.
One solution to this is what I call degeneration. If you can't find an exact
match for a token, treat it as if it were a less specific version. I consider
terminal exclamation points, uppercase letters, and occurring in one of the
five marked contexts as making a token more specific. For example, if I don't
find a probability for ``Subject*free!'', I look for probabilities for
``Subject*free'', ``free!'', and ``free'', and take whichever one is farthest
from .5.
Here are the alternatives [7] considered if the filter sees ``FREE!!!'' in the
Subject line and doesn't have a probability for it.
Subject*Free!!!
Subject*free!!!
Subject*FREE!
Subject*Free!
Subject*free!
Subject*FREE
Subject*Free
Subject*free
FREE!!!
Free!!!
free!!!
FREE!
Free!
free!
FREE
Free
free
If you do this, be sure to consider versions with initial caps as well as all
uppercase and all lowercase. Spams tend to have more sentences in imperative
mood, and in those the first word is a verb. So verbs with initial caps have
higher spam probabilities than they would in all lowercase. In my filter, the
spam probability of ``Act'' is 98% and for ``act'' only 62%.
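Here is one way the degeneration lookup might be implemented, again as a
Python sketch rather than the filter's actual code; the exact ordering of the
variants tried is approximate:

```python
def degenerations(token):
    """Progressively less specific variants of a token: drop the
    context marker, strip terminal exclamation points (trying a
    single ! first), and try initial-caps and all-lowercase forms."""
    context, _, word = token.rpartition("*")
    stripped = word.rstrip("!")
    bases = ([word] if stripped == word
             else list(dict.fromkeys([word, stripped + "!", stripped])))
    variants = []
    for base in bases:
        for cased in dict.fromkeys([base, base.capitalize(), base.lower()]):
            if context:
                variants.append(context + "*" + cased)
            variants.append(cased)
    return [v for v in dict.fromkeys(variants) if v != token]

def lookup(token, probs):
    """Probability for a token, falling back to whichever known
    degeneration is farthest from .5."""
    if token in probs:
        return probs[token]
    known = [probs[v] for v in degenerations(token) if v in probs]
    if not known:
        return 0.4  # default for never-seen tokens, as in A Plan for Spam
    return max(known, key=lambda p: abs(p - 0.5))
```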
If you increase your filter's vocabulary, you can end up counting the same
word multiple times, according to your old definition of ``same''. Logically,
they're not the same token anymore. But if this still bothers you, let me add
from experience that the words you seem to be counting multiple times tend to
be exactly the ones you'd want to.
Another effect of a larger vocabulary is that when you look at an incoming
mail you find more interesting tokens, meaning those with probabilities far
from .5. I use the 15 most interesting to decide if mail is spam. But you can
run into a problem when you use a fixed number like this. If you find a lot of
maximally interesting tokens, the result can end up being decided by whatever
random factor determines the ordering of equally interesting tokens. One way
to deal with this is to treat some as more interesting than others.
For example, the token ``dalco'' occurs 3 times in my spam corpus and never in
my legitimate corpus. The token ``Url*optmails'' (meaning ``optmails'' within
a url) occurs 1223 times. And yet, as I used to calculate probabilities for
tokens, both would have the same spam probability, the threshold of .99.
That doesn't feel right. There are theoretical arguments for giving these two
tokens substantially different probabilities (Pantel and Lin do), but I
haven't tried that yet. It does seem at least that if we find more than 15
tokens that only occur in one corpus or the other, we ought to give priority
to the ones that occur a lot. So now there are two threshold values. For
tokens that occur only in the spam corpus, the probability is .9999 if they
occur more than 10 times and .9998 otherwise. Ditto at the other end of the
scale for tokens found only in the legitimate corpus.
I may later scale token probabilities substantially, but this tiny amount of
scaling at least ensures that tokens get sorted the right way.
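Putting the last few ideas together, here is a sketch of the two threshold
values, the selection of the 15 most interesting tokens, and the final
combination. None of this is the original code; the .9 decision threshold is
the one mentioned below:

```python
def corpus_probability(spam_count, ham_count, p):
    """The two threshold values: tokens found only in the spam corpus
    get .9999 if they occur more than 10 times and .9998 otherwise;
    ditto, mirrored around .5, for tokens found only in the
    legitimate corpus."""
    if ham_count == 0 and spam_count > 0:
        return 0.9999 if spam_count > 10 else 0.9998
    if spam_count == 0 and ham_count > 0:
        return 0.0001 if ham_count > 10 else 0.0002
    return p  # tokens seen in both corpora keep their computed probability

def most_interesting(tokens, probs, n=15):
    """The n tokens whose probabilities are farthest from .5. The
    thresholds above make frequent single-corpus tokens sort ahead
    of rare ones. Unseen tokens default to .4."""
    return sorted(tokens, key=lambda t: abs(probs.get(t, 0.4) - 0.5),
                  reverse=True)[:n]

def combined_probability(token_probs):
    """Naive Bayes combination from ``A Plan for Spam''; mail whose
    combined probability exceeds .9 is treated as spam."""
    prod, inv = 1.0, 1.0
    for p in token_probs:
        prod *= p
        inv *= 1.0 - p
    return prod / (prod + inv)
```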
Another possibility would be to consider not just 15 tokens, but all the
tokens over a certain threshold of interestingness. Steven Hauser does this in
his statistical spam filter [8]. If you use a threshold, make it very high, or
spammers could spoof you by packing messages with more innocent words.
Finally, what should one do about html? I've tried the whole spectrum of
options, from ignoring it to parsing it all. Ignoring html is a bad idea,
because it's full of useful spam signs. But if you parse it all, your filter
might degenerate into a mere html recognizer. The most effective approach
seems to be the middle course, to notice some tokens but not others. I look at
a, img, and font tags, and ignore the rest. Links and images you should
certainly look at, because they contain urls.
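As a sketch of that middle course, using Python's standard html.parser (the
real filter's html handling was surely different in detail):

```python
from html.parser import HTMLParser

class SpamSignExtractor(HTMLParser):
    """Keep the visible text, plus the attribute values of a, img,
    and font tags; ignore all other markup. Chunks collected here
    would then go through the tokenizer, urls in the Url context."""
    def __init__(self):
        super().__init__()
        self.text_chunks = []  # ordinary text, tokenized with no marker
        self.url_chunks = []   # urls, tokenized in the Url context

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "img", "font"):
            for name, value in attrs:
                if value is None:
                    continue
                if name in ("href", "src"):
                    self.url_chunks.append(value)
                else:
                    self.text_chunks.append(value)

    def handle_data(self, data):
        self.text_chunks.append(data)
```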
I could probably be smarter about dealing with html, but I don't think it's
worth putting a lot of time into this. Spams full of html are easy to filter.
The smarter spammers already avoid it. So performance in the future should not
depend much on how you deal with html.
**Performance**
Between December 10 2002 and January 10 2003 I got about 1750 spams. Of these,
4 got through. That's a filtering rate of about 99.75%.
Two of the four spams I missed got through because they happened to use words
that occur often in my legitimate email.
The third was one of those that exploit an insecure cgi script to send mail to
third parties. They're hard to filter based just on the content because the
headers are innocent and they're careful about the words they use. Even so I
can usually catch them. This one squeaked by with a probability of .88, just
under the threshold of .9.
Of course, looking at multiple token sequences would catch it easily. ``Below
is the result of your feedback form'' is an instant giveaway.
The fourth spam was what I call a spam-of-the-future, because this is what I
expect spam to evolve into: some completely neutral text followed by a url. In
this case it was from someone saying they had finally finished their
homepage and would I go look at it. (The page was of course an ad for a porn
site.)
If the spammers are careful about the headers and use a fresh url, there is
nothing in spam-of-the-future for filters to notice. We can of course counter
by sending a crawler to look at the page. But that might not be necessary. The
response rate for spam-of-the-future must be low, or everyone would be doing
it. If it's low enough, it [won't pay](wfks.html) for spammers to send it, and
we won't have to work too hard on filtering it.
Now for the really shocking news: during that same one-month period I got
_three_ false positives.
In a way it's a relief to get some false positives. When I wrote ``A Plan for
Spam'' I hadn't had any, and I didn't know what they'd be like. Now that I've
had a few, I'm relieved to find they're not as bad as I feared. False
positives yielded by statistical filters turn out to be mails that sound a lot
like spam, and these tend to be the ones you would least mind missing [9].
Two of the false positives were newsletters from companies I've bought things
from. I never asked to receive them, so arguably they were spams, but I count
them as false positives because I hadn't been deleting them as spams before.
The reason the filters caught them was that both companies in January switched
to commercial email senders instead of sending the mails from their own
servers, and both the headers and the bodies became much spammier.
The third false positive was a bad one, though. It was from someone in Egypt
and written in all uppercase. This was a direct result of making tokens case
sensitive; the Plan for Spam filter wouldn't have caught it.
It's hard to say what the overall false positive rate is, because we're up in
the noise, statistically. Anyone who has worked on filters (at least,
effective filters) will be aware of this problem. With some emails it's hard
to say whether they're spam or not, and these are the ones you end up looking
at when you get filters really tight. For example, so far the filter has
caught two emails that were sent to my address because of a typo, and one sent
to me in the belief that I was someone else. Arguably, these are neither my
spam nor my nonspam mail.
Another false positive was from a vice president at Virtumundo. I wrote to
them pretending to be a customer, and since the reply came back through
Virtumundo's mail servers it had the most incriminating headers imaginable.
Arguably this isn't a real false positive either, but a sort of Heisenberg
uncertainty effect: I only got it because I was writing about spam filtering.
Not counting these, I've had a total of five false positives so far, out of
about 7740 legitimate emails, a rate of .06%. The other two were a notice that
something I bought was back-ordered, and a party reminder from Evite.
I don't think this number can be trusted, partly because the sample is so
small, and partly because I think I can fix the filter not to catch some of
these.
False positives seem to me a different kind of error from false negatives.
Filtering rate is a measure of performance. False positives I consider more
like bugs. I approach improving the filtering rate as optimization, and
decreasing false positives as debugging.
So these five false positives are my bug list. For example, the mail from
Egypt got nailed because the uppercase text made it look to the filter like a
Nigerian spam. This really is kind of a bug. As with html, the email being all
uppercase is really conceptually _one_ feature, not one for each word. I need
to handle case in a more sophisticated way.
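One way to do that, sketched here as a guess rather than a description of the actual fix: emit a single marker token when a mail is all uppercase, and lowercase the words themselves.

```python
def tokens_with_case_feature(tokens):
    # Treat "all uppercase" as one feature, not one per word.
    letters = [t for t in tokens if t.isalpha()]
    if letters and all(t.isupper() for t in letters):
        return ['*ALL-UPPERCASE*'] + [t.lower() for t in tokens]
    return tokens
```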
So what to make of this .06%? Not much, I think. You could treat it as an
upper bound, bearing in mind the small sample size. But at this stage it is
more a measure of the bugs in my implementation than some intrinsic false
positive rate of Bayesian filtering.
**Future**
What next? Filtering is an optimization problem, and the key to optimization
is profiling. Don't try to guess where your code is slow, because you'll guess
wrong. _Look_ at where your code is slow, and fix that. In filtering, this
translates to: look at the spams you miss, and figure out what you could have
done to catch them.
For example, spammers are now working aggressively to evade filters, and one
of the things they're doing is breaking up and misspelling words to prevent
filters from recognizing them. But working on this is not my first priority,
because I still have no trouble catching these spams [10].
There are two kinds of spams I currently do have trouble with. One is the type
that pretends to be an email from a woman inviting you to go chat with her or
see her profile on a dating site. These get through because they're the one
type of sales pitch you can make without using sales talk. They use the same
vocabulary as ordinary email.
The other kind of spams I have trouble filtering are those from companies in
e.g. Bulgaria offering contract programming services. These get through
because I'm a programmer too, and the spams are full of the same words as my
real mail.
I'll probably focus on the personal ad type first. I think if I look closer
I'll be able to find statistical differences between these and my real mail.
The style of writing is certainly different, though it may take multiword
filtering to catch that. Also, I notice they tend to repeat the url, and
someone including a url in a legitimate mail wouldn't do that [11].
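The repetition signal is cheap to measure. A sketch (the url pattern is deliberately crude):

```python
import re
from collections import Counter

def repeated_urls(text):
    # Return the urls that occur more than once in the message.
    counts = Counter(re.findall(r'https?://\S+', text))
    return [url for url, n in counts.items() if n > 1]
```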
The outsourcing type are going to be hard to catch. Even if you sent a crawler
to the site, you wouldn't find a smoking statistical gun. Maybe the only
answer is a central list of domains advertised in spams [12]. But there can't
be that many of this type of mail. If the only spams left were unsolicited
offers of contract programming services from Bulgaria, we could all probably
move on to working on something else.
Will statistical filtering actually get us to that point? I don't know. Right
now, for me personally, spam is not a problem. But spammers haven't yet made a
serious effort to spoof statistical filters. What will happen when they do?
I'm not optimistic about filters that work at the network level [13]. When
there is a static obstacle worth getting past, spammers are pretty efficient
at getting past it. There is already a company called Assurance Systems that
will run your mail through Spamassassin and tell you whether it will get
filtered out.
Network-level filters won't be completely useless. They may be enough to kill
all the "opt-in" spam, meaning spam from companies like Virtumundo and
Equalamail who claim that they're really running opt-in lists. You can filter
those based just on the headers, no matter what they say in the body. But
anyone willing to falsify headers or use open relays, presumably including
most porn spammers, should be able to get some message past network-level
filters if they want to. (By no means the message they'd like to send though,
which is something.)
The kind of filters I'm optimistic about are ones that calculate probabilities
based on each individual user's mail. These can be much more effective, not
only in avoiding false positives, but in filtering too: for example, finding
the recipient's email address base-64 encoded anywhere in a message is a very
good spam indicator.
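That particular check is easy to make. A sketch, with the address as a placeholder:

```python
import base64

def contains_encoded_address(raw_message, address='user@example.com'):
    # Truncate to whole 3-byte groups so the encoding is alignment-stable;
    # a thorough check would also try the two other byte alignments, since
    # the address may not start on a 3-byte boundary in the encoded stream.
    a = address.encode()
    a = a[:len(a) - len(a) % 3]
    return base64.b64encode(a).decode() in raw_message
```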
But the real advantage of individual filters is that they'll all be different.
If everyone's filters have different probabilities, it will make the spammers'
optimization loop, what programmers would call their edit-compile-test cycle,
appallingly slow. Instead of just tweaking a spam till it gets through a copy
of some filter they have on their desktop, they'll have to do a test mailing
for each tweak. It would be like programming in a language without an
interactive toplevel, and I wouldn't wish that on anyone.
**Notes**
[1] Paul Graham. ``A Plan for Spam.'' August 2002.
http://paulgraham.com/spam.html.
Probabilities in this algorithm are calculated using a degenerate case of
Bayes' Rule. There are two simplifying assumptions: that the probabilities of
features (i.e. words) are independent, and that we know nothing about the
prior probability of an email being spam.
The first assumption is widespread in text classification. Algorithms that use
it are called ``naive Bayesian.''
The second assumption I made because the proportion of spam in my incoming
mail fluctuated so much from day to day (indeed, from hour to hour) that the
overall prior ratio seemed worthless as a predictor. If you assume that
P(spam) and P(nonspam) are both .5, they cancel out and you can remove them
from the formula.
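Concretely, those two assumptions reduce Bayes' Rule to the following combination of the individual token probabilities (sketched in Python; the real filter's arithmetic details may differ):

```python
def combined_probability(probs):
    # With independent features and equal priors, the posterior is
    # prod(p) / (prod(p) + prod(1 - p)).
    prod = inv = 1.0
    for p in probs:
        prod *= p
        inv *= 1.0 - p
    return prod / (prod + inv)

# combined_probability([.9999, .95, .2]) -> close to 1
```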
If you were doing Bayesian filtering in a situation where the ratio of spam to
nonspam was consistently very high or (especially) very low, you could
probably improve filter performance by incorporating prior probabilities. To
do this right you'd have to track ratios by time of day, because spam and
legitimate mail volume both have distinct daily patterns.
[2] Patrick Pantel and Dekang Lin. ``SpamCop-- A Spam Classification &
Organization Program.'' Proceedings of AAAI-98 Workshop on Learning for Text
Categorization.
[3] Mehran Sahami, Susan Dumais, David Heckerman and Eric Horvitz. ``A
Bayesian Approach to Filtering Junk E-Mail.'' Proceedings of AAAI-98 Workshop
on Learning for Text Categorization.
[4] At the time I had zero false positives out of about 4,000 legitimate
emails. If the next legitimate email was a false positive, this would give us
.03%. These false positive rates are untrustworthy, as I explain later. I
quote a number here only to emphasize that whatever the false positive rate
is, it is less than 1.16%.
[5] Bill Yerazunis. ``Sparse Binary Polynomial Hash Message Filtering and The
CRM114 Discriminator.'' Proceedings of 2003 Spam Conference.
[6] In ``A Plan for Spam'' I used thresholds of .99 and .01. It seems
justifiable to use thresholds proportionate to the size of the corpora. Since
I now have on the order of 10,000 of each type of mail, I use .9999 and .0001.
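One plausible reading of ``proportionate to the size of the corpora'' (a guess at the intended scaling, not something the text pins down): let the cap approach 1 as the corpus grows.

```python
def probability_cap(corpus_size):
    # 10,000 mails of each type -> a cap of .9999.
    return 1 - 1.0 / corpus_size
```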
[7] There is a flaw here I should probably fix. Currently, when
``Subject*foo'' degenerates to just ``foo'', what that means is you're getting
the stats for occurrences of ``foo'' in the body or header lines other than
those I mark. What I should do is keep track of statistics for ``foo'' overall
as well as specific versions, and degenerate from ``Subject*foo'' not to
``foo'' but to ``Anywhere*foo''. Ditto for case: I should degenerate from
uppercase to any-case, not lowercase.
It would probably be a win to do this with prices too, e.g. to degenerate from
``$129.99'' to ``$--9.99'', ``$--.99'', and ``$--''.
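A sketch of that price degeneration, assuming tokens of the form ``$129.99'':

```python
import re

def degenerate_price(token):
    # "$129.99" -> ["$--9.99", "$--.99", "$--"]
    m = re.match(r'\$(\d+)\.(\d\d)$', token)
    if not m:
        return []
    dollars, cents = m.groups()
    return ['$--' + dollars[-1] + '.' + cents, '$--.' + cents, '$--']
```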
You could also degenerate from words to their stems, but this would probably
only improve filtering rates early on when you had small corpora.
[8] Steven Hauser. ``Statistical Spam Filter Works for Me.''
http://www.sofbot.com.
[9] False positives are not all equal, and we should remember this when
comparing techniques for stopping spam. Whereas many of the false positives
caused by filters will be near-spams that you wouldn't mind missing, false
positives caused by blacklists, for example, will be just mail from people who
chose the wrong ISP. In both cases you catch mail that's near spam, but for
blacklists nearness is physical, and for filters it's textual.
[10] If spammers get good enough at obscuring tokens for this to be a problem,
we can respond by simply removing whitespace, periods, commas, etc. and using
a dictionary to pick the words out of the resulting sequence. And of course
finding words this way that weren't visible in the original text would in
itself be evidence of spam.
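A sketch of that response, with a toy dictionary standing in for a real one:

```python
import re

WORDS = {'hot', 'porn', 'site'}  # placeholder for a real dictionary

def recover_words(text, max_len=12):
    # Strip separators, then greedily pull dictionary words out of
    # the remaining character sequence.
    s = re.sub(r'[\s.,;:!*#_-]+', '', text.lower())
    found, i = [], 0
    while i < len(s):
        for j in range(min(len(s), i + max_len), i, -1):
            if s[i:j] in WORDS:
                found.append(s[i:j])
                i = j
                break
        else:
            i += 1  # no word starts here; skip a character
    return found

# recover_words('h.o.t p*o*r*n s-i-t-e') -> ['hot', 'porn', 'site']
```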
Picking out the words won't be trivial. It will require more than just
reconstructing word boundaries; spammers both add (``xHot nPorn cSite'') and
omit (``P#rn'') letters. Vision research may be useful here, since human
vision is the limit that such tricks will approach.
[11] In general, spams are more repetitive than regular email. They want to
pound that message home. I currently don't allow duplicates in the top 15
tokens, because you could get a false positive if the sender happens to use
some bad word multiple times. (In my current filter, ``dick'' has a spam
probability of .9999, but it's also a name.) It seems we should at least notice
duplication though, so I may try allowing up to two of each token, as Brian
Burton does in SpamProbe.
[12] This is what approaches like Brightmail's will degenerate into once
spammers are pushed into using mad-lib techniques to generate everything else
in the message.
[13] It's sometimes argued that we should be working on filtering at the
network level, because it is more efficient. What people usually mean when
they say this is: we currently filter at the network level, and we don't want
to start over from scratch. But you can't dictate the problem to fit your
solution.
Historically, scarce-resource arguments have been the losing side in debates
about software design. People only tend to use them to justify choices
(inaction in particular) made for other reasons.
**Thanks** to Sarah Harlin, Trevor Blackwell, and Dan Giffin for reading
drafts of this paper, and to Dan again for most of the infrastructure that
this filter runs on.
August 2003
We may be able to improve the accuracy of Bayesian spam filters by having them
follow links to see what's waiting at the other end. Richard Jowsey of
[death2spam](http://death2spam.com) now does this in borderline cases, and
reports that it works well.
Why only do it in borderline cases? And why only do it once?
As I mentioned in [Will Filters Kill Spam?](wfks.html), following all the urls
in a spam would have an amusing side-effect. If popular email clients did this
in order to filter spam, the spammer's servers would take a serious pounding.
The more I think about this, the better an idea it seems. This isn't just
amusing; it would be hard to imagine a more perfectly targeted counterattack
on spammers.
So I'd like to suggest an additional feature to those working on spam filters:
a "punish" mode which, if turned on, would spider every url in a suspected
spam n times, where n could be set by the user. [1]
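In outline the feature is tiny. A toy sketch (function and parameter names invented; a real version would need the refinements in note [1]):

```python
import urllib.request

def punish(urls, n=10, timeout=10):
    # Retrieve every url in a suspected spam n times.
    for url in urls:
        for _ in range(n):
            try:
                urllib.request.urlopen(url, timeout=timeout).read()
            except Exception:
                pass  # a server that can't answer is the point, not a problem
```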
As many people have noted, one of the problems with the current email system
is that it's too passive. It does whatever you tell it. So far all the
suggestions for fixing the problem seem to involve new protocols. This one
wouldn't.
If widely used, auto-retrieving spam filters would make the email system
_rebound._ The huge volume of the spam, which has so far worked in the
spammer's favor, would now work against him, like a branch snapping back in
his face. Auto-retrieving spam filters would drive the spammer's
[costs](http://www.bork.ca/pics/?path=incoming&img=bill.jpg) up, and his sales
down: his bandwidth usage would go through the roof, and his servers would
grind to a halt under the load, which would make them unavailable to the
people who would have responded to the spam.
Pump out a million emails an hour, get a million hits an hour on your servers.
We would want to ensure that this is only done to suspected spams. As a rule,
any url sent to millions of people is likely to be a spam url, so submitting
every http request in every email would work fine nearly all the time. But
there are a few cases where this isn't true: the urls at the bottom of mails
sent from free email services like Yahoo Mail and Hotmail, for example.
To protect such sites, and to prevent abuse, auto-retrieval should be combined
with blacklists of spamvertised sites. Only sites on a blacklist would get
crawled, and sites would be blacklisted only after being inspected by humans.
The lifetime of a spam must be several hours at least, so it should be easy to
update such a list in time to interfere with a spam promoting a new site. [2]
High-volume auto-retrieval would only be practical for users on high-bandwidth
connections, but there are enough of those to cause spammers serious trouble.
Indeed, this solution neatly mirrors the problem. The problem with spam is
that in order to reach a few gullible people the spammer sends mail to
everyone. The non-gullible recipients are merely collateral damage. But the
non-gullible majority won't stop getting spam until they can stop (or threaten
to stop) the gullible from responding to it. Auto-retrieving spam filters
offer them a way to do this.
Would that kill spam? Not quite. The biggest spammers could probably protect
their servers against auto-retrieving filters. However, the easiest and
cheapest way for them to do it would be to include working unsubscribe links
in their mails. And this would be a necessity for smaller fry, and for
"legitimate" sites that hired spammers to promote them. So if auto-retrieving
filters became widespread, they'd become auto-unsubscribing filters.
In this scenario, spam would, like OS crashes, viruses, and popups, become one
of those plagues that only afflict people who don't bother to use the right
software.
**Notes**
[1] Auto-retrieving filters will have to follow redirects, and should in some
cases (e.g. a page that just says "click here") follow more than one level of
links. Make sure too that the http requests are indistinguishable from those
of popular Web browsers, including the order and referrer.
If the response doesn't come back within x amount of time, default to some
fairly high spam probability.
Instead of making n constant, it might be a good idea to make it a function of
the number of spams that have been seen mentioning the site. This would add a
further level of protection against abuse and accidents.
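A sketch of that variable n (the scaling is arbitrary):

```python
def retrieval_count(spams_seen_for_site, base=2, cap=1000):
    # Let n grow with the number of spams seen mentioning the site,
    # capped to limit the damage from abuse or accidents.
    return min(base * spams_seen_for_site, cap)
```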
[2] The original version of this article used the term "whitelist" instead of
"blacklist". Though they were to work like blacklists, I preferred to call
them whitelists because it might make them less vulnerable to legal attack.
This just seems to have confused readers, though.
There should probably be multiple blacklists. A single point of failure would
be vulnerable both to attack and abuse.
**Thanks** to Brian Burton, Bill Yerazunis, Dan Giffin, Eric Raymond, and
Richard Jowsey for reading drafts of this.
October 2022
If there were intelligent beings elsewhere in the universe, they'd share
certain truths in common with us. The truths of mathematics would be the same,
because they're true by definition. Ditto for the truths of physics; the mass
of a carbon atom would be the same on their planet. But I think we'd share
other truths with aliens besides the truths of math and physics, and that it
would be worthwhile to think about what these might be.
For example, I think we'd share the principle that a controlled experiment
testing some hypothesis entitles us to have proportionally increased belief in
it. It seems fairly likely, too, that it would be true for aliens that one can
get better at something by practicing. We'd probably share Occam's razor.
There doesn't seem anything specifically human about any of these ideas.
We can only guess, of course. We can't say for sure what forms intelligent
life might take. Nor is it my goal here to explore that question, interesting
though it is. The point of the idea of alien truth is not that it gives us a
way to speculate about what forms intelligent life might take, but that it
gives us a threshold, or more precisely a target, for truth. If you're trying
to find the most general truths short of those of math or physics, then
presumably they'll be those we'd share in common with other forms of
intelligent life.
Alien truth will work best as a heuristic if we err on the side of generosity.
If an idea might plausibly be relevant to aliens, that's enough. Justice, for
example. I wouldn't want to bet that all intelligent beings would understand
the concept of justice, but I wouldn't want to bet against it either.
The idea of alien truth is related to Erdos's idea of God's book. He used to
describe a particularly good proof as being in God's book, the implication
being (a) that a sufficiently good proof was more discovered than invented,
and (b) that its goodness would be universally recognized. If there's such a
thing as alien truth, then there's more in God's book than math.
What should we call the search for alien truth? The obvious choice is
"philosophy." Whatever else philosophy includes, it should probably include
this. I'm fairly sure Aristotle would have thought so. One could even make the
case that the search for alien truth is, if not an accurate description _of_
philosophy, a good definition _for_ it. I.e. that it's what people who call
themselves philosophers should be doing, whether or not they currently are.
But I'm not wedded to that; doing it is what matters, not what we call it.
We may one day have something like alien life among us in the form of AIs. And
that may in turn allow us to be precise about what truths an intelligent being
would have to share with us. We might find, for example, that it's impossible
to create something we'd consider intelligent that doesn't use Occam's razor.
We might one day even be able to prove that. But though this sort of research
would be very interesting, it's not necessary for our purposes, or even the
same field; the goal of philosophy, if we're going to call it that, would be
to see what ideas we come up with using alien truth as a target, not to say
precisely where the threshold of it is. Those two questions might one day
converge, but they'll converge from quite different directions, and till they
do, it would be too constraining to restrict ourselves to thinking only about
things we're certain would be alien truths. Especially since this will
probably be one of those areas where the best guesses turn out to be
surprisingly close to optimal. (Let's see if that one does.)
Whatever we call it, the attempt to discover alien truths would be a
worthwhile undertaking. And curiously enough, that is itself probably an alien
truth.
**Thanks** to Trevor Blackwell, Greg Brockman, Patrick Collison, Robert
Morris, and Michael Nielsen for reading drafts of this.
November 2022
In the science fiction books I read as a kid, reading had often been replaced
by some more efficient way of acquiring knowledge. Mysterious "tapes" would
load it into one's brain like a program being loaded into a computer.
That sort of thing is unlikely to happen anytime soon. Not just because it
would be hard to build a replacement for reading, but because even if one
existed, it would be insufficient. Reading about x doesn't just teach you
about x; it also teaches you how to write. [1]
Would that matter? If we replaced reading, would anyone need to be good at
writing?
The reason it would matter is that writing is not just a way to convey ideas,
but also a way to have them.
A good writer doesn't just think, and then write down what he thought, as a
sort of transcript. A good writer will almost always discover new things in
the process of writing. And there is, as far as I know, no substitute for this
kind of discovery. Talking about your ideas with other people is a good way to
develop them. But even after doing this, you'll find you still discover new
things when you sit down to write. There is a kind of thinking that can only
be done by [_writing_](words.html).
There are of course kinds of thinking that can be done without writing. If you
don't need to go too deeply into a problem, you can solve it without writing.
If you're thinking about how two pieces of machinery should fit together,
writing about it probably won't help much. And when a problem can be described
formally, you can sometimes solve it in your head. But if you need to solve a
complicated, ill-defined problem, it will almost always help to write about
it. Which in turn means that someone who's not good at writing will almost
always be at a disadvantage in solving such problems.
You can't think well without writing well, and you can't write well without
reading well. And I mean that last "well" in both senses. You have to be good
at reading, and read good things. [2]
People who just want information may find other ways to get it. But people who
want to have ideas can't afford to.
**Notes**
[1] Audiobooks can give you examples of good writing, but having them read to
you doesn't teach you as much about writing as reading them yourself.
[2] By "good at reading" I don't mean good at the mechanics of reading. You
don't have to be good at extracting words from the page so much as extracting
meaning from the words.
August 2004
In a recent [talk](gh.html) I said something that upset a lot of people: that
you could get smarter programmers to work on a Python project than you could
to work on a Java project.
I didn't mean by this that Java programmers are dumb. I meant that Python
programmers are smart. It's a lot of work to learn a new programming language.
And people don't learn Python because it will get them a job; they learn it
because they genuinely like to program and aren't satisfied with the languages
they already know.
Which makes them exactly the kind of programmers companies should want to
hire. Hence what, for lack of a better name, I'll call the Python paradox: if
a company chooses to write its software in a comparatively esoteric language,
they'll be able to hire better programmers, because they'll attract only those
who cared enough to learn it. And for programmers the paradox is even more
pronounced: the language to learn, if you want to get a good job, is a
language that people don't learn merely to get a job.
Only a few companies have been smart enough to realize this so far. But there
is a kind of selection going on here too: they're exactly the companies
programmers would most like to work for. Google, for example. When they
advertise Java programming jobs, they also want Python experience.
A friend of mine who knows nearly all the widely used languages uses Python
for most of his projects. He says the main reason is that he likes the way
source code looks. That may seem a frivolous reason to choose one language
over another. But it is not so frivolous as it sounds: when you program, you
spend more time reading code than writing it. You push blobs of source code
around the way a sculptor does blobs of clay. So a language that makes source
code ugly is maddening to an exacting programmer, as clay full of lumps would
be to a sculptor.
At the mention of ugly source code, people will of course think of Perl. But
the superficial ugliness of Perl is not the sort I mean. Real ugliness is not
harsh-looking syntax, but having to build programs out of the wrong concepts.
Perl may look like a cartoon character swearing, but there are
[cases](icad.html) where it surpasses Python conceptually.
So far, anyway. Both languages are of course [moving](hundred.html) targets.
But they share, along with Ruby (and Icon, and Joy, and J, and Lisp, and
Smalltalk) the fact that they're created by, and used by, people who really
care about programming. And those tend to be the ones who do it well.
March 2005
_(Parts of this essay began as replies to students who wrote to me with
questions.)_
Recently I've had several emails from computer science undergrads asking what
to do in college. I might not be the best source of advice, because I was a
philosophy major in college. But I took so many CS classes that most CS majors
thought I was one. I was certainly a hacker, at least.
**Hacking**
What should you do in college to become a [good hacker](gh.html)? There are
two main things you can do: become very good at programming, and learn a lot
about specific, cool problems. These turn out to be equivalent, because each
drives you to do the other.
The way to be good at programming is to work (a) a lot (b) on hard problems.
And the way to make yourself work on hard problems is to work on some very
engaging project.
Odds are this project won't be a class assignment. My friend Robert learned a
lot by writing network software when he was an undergrad. One of his projects
was to connect Harvard to the Arpanet; it had been one of the original nodes,
but by 1984 the connection had died. [1] Not only was this work not for a
class, but because he spent all his time on it and neglected his studies, he
was kicked out of school for a year. [2] It all evened out in the end, and now
he's a professor at MIT. But you'll probably be happier if you don't go to
that extreme; it caused him a lot of worry at the time.
Another way to be good at programming is to find other people who are good at
it, and learn what they know. Programmers tend to sort themselves into tribes
according to the type of work they do and the tools they use, and some tribes
are [smarter](pypar.html) than others. Look around you and see what the smart
people seem to be working on; there's usually a reason.
Some of the smartest people around you are professors. So one way to find
interesting work is to volunteer as a research assistant. Professors are
especially interested in people who can solve tedious system-administration
type problems for them, so that is a way to get a foot in the door. What they
fear are flakes and resume padders. It's all too common for an assistant to
result in a net increase in work. So you have to make it clear you'll mean a
net decrease.
Don't be put off if they say no. Rejection is almost always less personal than
the rejectee imagines. Just move on to the next. (This applies to dating too.)
Beware, because although most professors are smart, not all of them work on
interesting stuff. Professors have to publish novel results to advance their
careers, but there is more competition in more interesting areas of research.
So what less ambitious professors do is turn out a series of papers whose
conclusions are novel because no one else cares about them. You're better off
avoiding these.
I never worked as a research assistant, so I feel a bit dishonest recommending
that route. I learned to program by writing stuff of my own, particularly by
trying to reverse-engineer Winograd's SHRDLU. I was as obsessed with that
program as a mother with a new baby.
Whatever the disadvantages of working by yourself, the advantage is that the
project is all your own. You never have to compromise or ask anyone's
permission, and if you have a new idea you can just sit down and start
implementing it.
In your own projects you don't have to worry about novelty (as professors do)
or profitability (as businesses do). All that matters is how hard the project
is technically, and that has no correlation to the nature of the application.
"Serious" applications like databases are often trivial and dull technically
(if you ever suffer from insomnia, try reading the technical literature about
databases) while "frivolous" applications like games are often very
sophisticated. I'm sure there are game companies out there working on products
with more intellectual content than the research at the bottom nine tenths of
university CS departments.
If I were in college now I'd probably work on graphics: a network game, for
example, or a tool for 3D animation. When I was an undergrad there weren't
enough cycles around to make graphics interesting, but it's hard to imagine
anything more fun to work on now.
**Math**
When I was in college, a lot of the professors believed (or at least wished)
that [computer science](hp.html) was a branch of math. This idea was strongest
at Harvard, where there wasn't even a CS major till the 1980s; till then one
had to major in applied math. But it was nearly as bad at Cornell. When I told
the fearsome Professor Conway that I was interested in AI (a hot topic then),
he told me I should major in math. I'm still not sure whether he thought AI
required math, or whether he thought AI was nonsense and that majoring in
something rigorous would cure me of such stupid ambitions.
In fact, the amount of math you need as a hacker is a lot less than most
university departments like to admit. I don't think you need much more than
high school math plus a few concepts from the theory of computation. (You have
to know what an n^2 algorithm is if you want to avoid writing them.) Unless
you're planning to write math applications, of course. Robotics, for example,
is all math.
But while you don't literally need math for most kinds of hacking, in the
sense of knowing 1001 tricks for differentiating formulas, math is very much
worth studying for its own sake. It's a valuable source of metaphors for
almost any kind of work. [3] I wish I'd studied more math in college for that
reason.
Like a lot of people, I was mathematically abused as a child. I learned to
think of math as a collection of formulas that were neither beautiful nor had
any relation to my life (despite attempts to translate them into "word
problems"), but had to be memorized in order to do well on tests.
One of the most valuable things you could do in college would be to learn what
math is really about. This may not be easy, because a lot of good
mathematicians are bad teachers. And while there are many popular books on
math, few seem good. The best I can think of are W. W. Sawyer's. And of course
Euclid. [4]
**Everything**
Thomas Huxley said "Try to learn something about everything and everything
about something." Most universities aim at this ideal.
But what's everything? To me it means, all that people learn in the course of
working honestly on hard problems. All such work tends to be related, in that
ideas and techniques from one field can often be transplanted successfully to
others. Even others that seem quite distant. For example, I write
[essays](essay.html) the same way I write software: I sit down and blow out a
lame version 1 as fast as I can type, then spend several weeks rewriting it.
Working on hard problems is not, by itself, enough. Medieval alchemists were
working on a hard problem, but their approach was so bogus that there was
little to learn from studying it, except possibly about people's ability to
delude themselves. Unfortunately the sort of AI I was trying to learn in
college had the same flaw: a very hard problem, blithely approached with
hopelessly inadequate techniques. Bold? Closer to fraudulent.
The social sciences are also fairly bogus, because they're so much influenced
by intellectual [fashions](say.html). If a physicist met a colleague from 100
years ago, he could teach him some new things; if a psychologist met a
colleague from 100 years ago, they'd just get into an ideological argument.
Yes, of course, you'll learn something by taking a psychology class. The point
is, you'll learn more by taking a class in another department.
The worthwhile departments, in my opinion, are math, the hard sciences,
engineering, history (especially economic and social history, and the history
of science), architecture, and the classics. A survey course in art history
may be worthwhile. Modern literature is important, but the way to learn about
it is just to read. I don't know enough about music to say.
You can skip the social sciences, philosophy, and the various departments
created recently in response to political pressures. Many of these fields talk
about important problems, certainly. But the way they talk about them is
useless. For example, philosophy talks, among other things, about our
obligations to one another; but you can learn more about this from a wise
grandmother or E. B. White than from an academic philosopher.
I speak here from experience. I should probably have been offended when people
laughed at Clinton for saying "It depends on what the meaning of the word 'is'
is." I took about five classes in college on what the meaning of "is" is.
Another way to figure out which fields are worth studying is to create the
_dropout graph._ For example, I know many people who switched from math to
computer science because they found math too hard, and no one who did the
opposite. People don't do hard things gratuitously; no one will work on a
harder problem unless it is proportionately (or at least log(n)) more
rewarding. So probably math is more worth studying than computer science. By
similar comparisons you can make a graph of all the departments in a
university. At the bottom you'll find the subjects with least intellectual
content.
If you use this method, you'll get roughly the same answer I just gave.
Language courses are an anomaly. I think they're better considered as
extracurricular activities, like pottery classes. They'd be far more useful
when combined with some time living in a country where the language is spoken.
On a whim I studied Arabic as a freshman. It was a lot of work, and the only
lasting benefits were a weird ability to identify semitic roots and some
insights into how people recognize words.
Studio art and creative writing courses are wildcards. Usually you don't get
taught much: you just work (or don't work) on whatever you want, and then sit
around offering "crits" of one another's creations under the vague supervision
of the teacher. But writing and art are both very hard problems that (some)
people work honestly at, so they're worth doing, especially if you can find a
good teacher.
**Jobs**
Of course college students have to think about more than just learning. There
are also two practical problems to consider: jobs, and graduate school.
In theory a liberal education is not supposed to supply job training. But
everyone knows this is a bit of a fib. Hackers at every college learn
practical skills, and not by accident.
What you should learn to get a job depends on the kind you want. If you want
to work in a big company, learn how to hack [Blub](avg.html) on Windows. If
you want to work at a cool little company or research lab, you'll do better to
learn Ruby on Linux. And if you want to start your own company, which I think
will be more and more common, master the most powerful tools you can find,
because you're going to be in a race against your competitors, and they'll be
your horse.
There is not a direct correlation between the skills you should learn in
college and those you'll use in a job. You should aim slightly high in
college.
In workouts a football player may bench press 300 pounds, even though he may
never have to exert anything like that much force in the course of a game.
Likewise, if your professors try to make you learn stuff that's more advanced
than you'll need in a job, it may not just be because they're academics,
detached from the real world. They may be trying to make you lift weights with
your brain.
The programs you write in classes differ in three critical ways from the ones
you'll write in the real world: they're small; you get to start from scratch;
and the problem is usually artificial and predetermined. In the real world,
programs are bigger, tend to involve existing code, and often require you to
figure out what the problem is before you can solve it.
You don't have to wait to leave (or even enter) college to learn these skills.
If you want to learn how to deal with existing code, for example, you can
contribute to open-source projects. The sort of employer you want to work for
will be as impressed by that as good grades on class assignments.
In existing open-source projects you don't get much practice at the third
skill, deciding what problems to solve. But there's nothing to stop you
starting new projects of your own. And good employers will be even more
impressed with that.
What sort of problem should you try to solve? One way to answer that is to ask
what you need as a user. For example, I stumbled on a good algorithm for spam
filtering because I wanted to stop getting spam. Now what I wish I had was a
mail reader that somehow prevented my inbox from filling up. I tend to use my
inbox as a todo list. But that's like using a screwdriver to open bottles;
what one really wants is a bottle opener.
**Grad School**
What about grad school? Should you go? And how do you get into a good one?
In principle, grad school is professional training in research, and you
shouldn't go unless you want to do research as a career. And yet half the
people who get PhDs in CS don't go into research. I didn't go to grad school
to become a professor. I went because I wanted to learn more.
So if you're mainly interested in hacking and you go to grad school, you'll
find a lot of other people who are similarly out of their element. And if half
the people around you are out of their element in the same way you are, are
you really out of your element?
There's a fundamental problem in "computer science," and it surfaces in
situations like this. No one is sure what "research" is supposed to be. A lot
of research is hacking that had to be crammed into the form of an academic
paper to yield one more quantum of publication.
So it's kind of misleading to ask whether you'll be at home in grad school,
because very few people are quite at home in computer science. The whole field
is uncomfortable in its own skin. So the fact that you're mainly interested in
hacking shouldn't deter you from going to grad school. Just be warned you'll
have to do a lot of stuff you don't like.
Number one will be your dissertation. Almost everyone hates their dissertation
by the time they're done with it. The process inherently tends to produce an
unpleasant result, like a cake made out of whole wheat flour and baked for
twelve hours. Few dissertations are read with pleasure, especially by their
authors.
But thousands before you have suffered through writing a dissertation. And
aside from that, grad school is close to paradise. Many people remember it as
the happiest time of their lives. And nearly all the rest, including me,
remember it as a period that would have been, if they hadn't had to write a
dissertation. [5]
The danger with grad school is that you don't see the scary part upfront. PhD
programs start out as college part 2, with several years of classes. So by the
time you face the horror of writing a dissertation, you're already several
years in. If you quit now, you'll be a grad-school dropout, and you probably
won't like that idea. When Robert got kicked out of grad school for writing
the Internet worm of 1988, I envied him enormously for finding a way out
without the stigma of failure.
On the whole, grad school is probably better than most alternatives. You meet
a lot of smart people, and your glum procrastination will at least be a
powerful common bond. And of course you have a PhD at the end. I forgot about
that. I suppose that's worth something.
The greatest advantage of a PhD (besides being the union card of academia, of
course) may be that it gives you some baseline confidence. For example, the
Honeywell thermostats in my house have the most atrocious UI. My mother, who
has the same model, diligently spent a day reading the user's manual to learn
how to operate hers. She assumed the problem was with her. But I can think to
myself "If someone with a PhD in computer science can't understand this
thermostat, it _must_ be badly designed."
If you still want to go to grad school after this equivocal recommendation, I
can give you solid advice about how to get in. A lot of my friends are CS
professors now, so I have the inside story about admissions. It's quite
different from college. At most colleges, admissions officers decide who gets
in. For PhD programs, the professors do. And they try to do it well, because
the people they admit are going to be working for them.
Apparently only recommendations really matter at the best schools.
Standardized tests count for nothing, and grades for little. The essay is
mostly an opportunity to disqualify yourself by saying something stupid. The
only thing professors trust is recommendations, preferably from people they
know. [6]
So if you want to get into a PhD program, the key is to impress your
professors. And from my friends who are professors I know what impresses them:
not merely trying to impress them. They're not impressed by students who get
good grades or want to be their research assistants so they can get into grad
school. They're impressed by students who get good grades and want to be their
research assistants because they're genuinely interested in the topic.
So the best thing you can do in college, whether you want to get into grad
school or just be good at hacking, is figure out what you truly like. It's
hard to trick professors into letting you into grad school, and impossible to
trick problems into letting you solve them. College is where faking stops
working. From this point, unless you want to go work for a big company, which
is like reverting to high school, the only way forward is through doing what
you [love](love.html).
**Notes**
[1] No one seems to have minded, which shows how unimportant the Arpanet
(which became the Internet) was as late as 1984.
[2] This is why, when I became an employer, I didn't care about GPAs. In fact,
we actively sought out people who'd failed out of school. We once put up
posters around Harvard saying "Did you just get kicked out for doing badly in
your classes because you spent all your time working on some project of your
own? Come work for us!" We managed to find a kid who had been, and he was a
great hacker.
When Harvard kicks undergrads out for a year, they have to get jobs. The idea
is to show them how awful the real world is, so they'll understand how lucky
they are to be in college. This plan backfired with the guy who came to work
for us, because he had more fun than he'd had in school, and made more that
year from stock options than any of his professors did in salary. So instead
of crawling back repentant at the end of the year, he took another year off
and went to Europe. He did eventually graduate at about 26.
[3] Eric Raymond says the best metaphors for hackers are in set theory,
combinatorics, and graph theory.
Trevor Blackwell reminds you to take math classes intended for math majors.
"'Math for engineers' classes sucked mightily. In fact any 'x for engineers'
sucks, where x includes math, law, writing and visual design."
[4] Other highly recommended books: _What is Mathematics?_ , by Courant and
Robbins; _Geometry and the Imagination_ by Hilbert and Cohn-Vossen. And for
those interested in graphic design, [Byrne's
Euclid](http://www.math.ubc.ca/people/faculty/cass/Euclid/byrne.html).
[5] If you wanted to have the perfect life, the thing to do would be to go to
grad school, secretly write your dissertation in the first year or two, and
then just enjoy yourself for the next three years, dribbling out a chapter at
a time. This prospect will make grad students' mouths water, but I know of no
one who's had the discipline to pull it off.
[6] One professor friend says that 15-20% of the grad students they admit each
year are "long shots." But what he means by long shots are people whose
applications are perfect in every way, except that no one on the admissions
committee knows the professors who wrote the recommendations.
So if you want to get into grad school in the sciences, you need to go to
college somewhere with real research professors. Otherwise you'll seem a risky
bet to admissions committees, no matter how good you are.
Which implies a surprising but apparently inevitable consequence: little
liberal arts colleges are doomed. Most smart high school kids at least
consider going into the sciences, even if they ultimately choose not to. Why
go to a college that limits their options?
**Thanks** to Trevor Blackwell, Alex Lewin, Jessica Livingston, Robert Morris,
Eric Raymond, and several [anonymous CS professors](undergrad2.html) for
reading drafts of this, and to the students whose questions began it.
October 2012
One advantage of Y Combinator's early, broad focus is that we see trends
before most other people. And one of the most conspicuous trends in the last
batch was the large number of hardware startups. Out of 84 companies, 7 were
making hardware. On the whole they've done better than the companies that
weren't.
They've faced resistance from investors of course. Investors have a deep-
seated bias against hardware. But investors' opinions are a trailing
indicator. The best founders are better at seeing the future than the best
investors, because the best founders are making it.
There is no one single force driving this trend. Hardware [does
well](http://bits.blogs.nytimes.com/2012/05/11/pebble-smartwatch-tops-out-
at-10-million-on-kickstarter/) on crowdfunding sites. The spread of
[tablets](http://paulgraham.com/tablets.html) makes it possible to build new
things [controlled by](http://lockitron.com) and even
[incorporating](http://doublerobotics.com) them. [Electric
motors](http://www.boostedboards.com/) have improved. Wireless connectivity of
various types can now be taken for granted. It's getting more straightforward
to get things manufactured. Arduinos, 3D printing, laser cutters, and more
accessible CNC milling are making hardware easier to prototype. Retailers are
less of a bottleneck as customers increasingly buy online.
One question I can answer is why hardware is suddenly cool. It always was
cool. Physical things are great. They just haven't been as great a way to
start a [rapidly growing](growth.html) business as software. But that rule may
not be permanent. It's not even that old; it only dates from about 1990. Maybe
the advantage of software will turn out to have been temporary. Hackers love
to build hardware, and customers love to buy it. So if the ease of shipping
hardware even approached the ease of shipping software, we'd see a lot more
hardware startups.
It wouldn't be the first time something was a bad idea till it wasn't. And it
wouldn't be the first time investors learned that lesson from founders.
So if you want to work on hardware, don't be deterred from doing it because
you worry investors will discriminate against you. And in particular, don't be
deterred from [applying](http://ycombinator.com/apply.html) to Y Combinator
with a hardware idea, because we're especially interested in hardware
startups.
We know there's room for the [next Steve Jobs](ambitious.html). But there's
almost certainly also room for the first <Your Name Here>.
**Thanks** to Sam Altman, Trevor Blackwell, David Cann, Sanjay Dastoor, Paul
Gerhardt, Cameron Robertson, Harj Taggar, and Garry Tan for reading drafts of
this.
July 2010
When we sold our startup in 1998 I suddenly got a lot of money. I now had to
think about something I hadn't had to think about before: how not to lose it.
I knew it was possible to go from rich to poor, just as it was possible to go
from poor to rich. But while I'd spent a lot of the past several years
studying the paths from [poor to rich](wealth.html), I knew practically
nothing about the paths from rich to poor. Now, in order to avoid them, I had
to learn where they were.
So I started to pay attention to how fortunes are lost. If you'd asked me as a
kid how rich people became poor, I'd have said by spending all their money.
That's how it happens in books and movies, because that's the colorful way to
do it. But in fact the way most fortunes are lost is not through excessive
expenditure, but through bad investments.
It's hard to spend a fortune without noticing. Someone with ordinary tastes
would find it hard to blow through more than a few tens of thousands of
dollars without thinking "wow, I'm spending a lot of money." Whereas if you
start trading derivatives, you can lose a million dollars (as much as you
want, really) in the blink of an eye.
In most people's minds, spending money on luxuries sets off alarms that making
investments doesn't. Luxuries seem self-indulgent. And unless you got the
money by inheriting it or winning a lottery, you've already been thoroughly
trained that self-indulgence leads to trouble. Investing bypasses those
alarms. You're not spending the money; you're just moving it from one asset to
another. Which is why people trying to sell you expensive things say "it's an
investment."
The solution is to develop new alarms. This can be a tricky business, because
while the alarms that prevent you from overspending are so basic that they may
even be in our DNA, the ones that prevent you from making bad investments have
to be learned, and are sometimes fairly counterintuitive.
A few days ago I realized something surprising: the situation with time is
much the same as with money. The most dangerous way to lose time is not to
spend it having fun, but to spend it doing fake work. When you spend time
having fun, you know you're being self-indulgent. Alarms start to go off
fairly quickly. If I woke up one morning and sat down on the sofa and watched
TV all day, I'd feel like something was terribly wrong. Just thinking about it
makes me wince. I'd start to feel uncomfortable after sitting on a sofa
watching TV for 2 hours, let alone a whole day.
And yet I've definitely had days when I might as well have sat in front of a
TV all day — days at the end of which, if I asked myself what I got done that
day, the answer would have been: basically, nothing. I feel bad after these
days too, but nothing like as bad as I'd feel if I spent the whole day on the
sofa watching TV. If I spent a whole day watching TV I'd feel like I was
descending into perdition. But the same alarms don't go off on the days when I
get nothing done, because I'm doing stuff that seems, superficially, like real
work. Dealing with email, for example. You do it sitting at a desk. It's not
fun. So it must be work.
With time, as with money, avoiding pleasure is no longer enough to protect
you. It probably was enough to protect hunter-gatherers, and perhaps all pre-
industrial societies. So nature and nurture combine to make us avoid self-
indulgence. But the world has gotten more complicated: the most dangerous
traps now are new behaviors that bypass our alarms about self-indulgence by
mimicking more virtuous types. And the worst thing is, they're not even fun.
**Thanks** to Sam Altman, Trevor Blackwell, Patrick Collison, Jessica
Livingston, and Robert Morris for reading drafts of this.
May 2008
Adults lie constantly to kids. I'm not saying we should stop, but I think we
should at least examine which lies we tell and why.
There may also be a benefit to us. We were all lied to as kids, and some of
the lies we were told still affect us. So by studying the ways adults lie to
kids, we may be able to clear our heads of lies we were told.
I'm using the word "lie" in a very general sense: not just overt falsehoods,
but also all the more subtle ways we mislead kids. Though "lie" has negative
connotations, I don't mean to suggest we should never do this—just that we
should pay attention when we do. [1]
One of the most remarkable things about the way we lie to kids is how broad
the conspiracy is. All adults know what their culture lies to kids about:
they're the questions you answer "Ask your parents." If a kid asked who won
the World Series in 1982 or what the atomic weight of carbon was, you could
just tell him. But if a kid asks you "Is there a God?" or "What's a
prostitute?" you'll probably say "Ask your parents."
Since we all agree, kids see few cracks in the view of the world presented to
them. The biggest disagreements are between parents and schools, but even
those are small. Schools are careful what they say about controversial topics,
and if they do contradict what parents want their kids to believe, parents
either pressure the school into keeping
[quiet](http://www.google.com/search?q=parents+complain+inappropriate+book) or
move their kids to a new school.
The conspiracy is so thorough that most kids who discover it do so only by
discovering internal contradictions in what they're told. It can be traumatic
for the ones who wake up during the operation. Here's what happened to
Einstein:
> Through the reading of popular scientific books I soon reached the
> conviction that much in the stories of the Bible could not be true. The
> consequence was a positively fanatic freethinking coupled with the
> impression that youth is intentionally being deceived by the state through
> lies: it was a crushing impression. [2]
I remember that feeling. By 15 I was convinced the world was corrupt from end
to end. That's why movies like _The Matrix_ have such resonance. Every kid
grows up in a fake world. In a way it would be easier if the forces behind it
were as clearly differentiated as a bunch of evil machines, and one could make
a clean break just by taking a pill.
**Protection**
If you ask adults why they lie to kids, the most common reason they give is to
protect them. And kids do need protecting. The environment you want to create
for a newborn child will be quite unlike the streets of a big city.
That seems so obvious it seems wrong to call it a lie. It's certainly not a
bad lie to tell, to give a baby the impression the world is quiet and warm and
safe. But this harmless type of lie can turn sour if left unexamined.
Imagine if you tried to keep someone in as protected an environment as a
newborn till age 18. To mislead someone so grossly about the world would seem
not protection but abuse. That's an extreme example, of course; when parents
do that sort of thing it becomes national news. But you see the same problem
on a smaller scale in the malaise teenagers feel in suburbia.
The main purpose of suburbia is to provide a protected environment for
children to grow up in. And it seems great for 10 year olds. I liked living in
suburbia when I was 10. I didn't notice how sterile it was. My whole world was
no bigger than a few friends' houses I bicycled to and some woods I ran around
in. On a log scale I was midway between crib and globe. A suburban street was
just the right size. But as I grew older, suburbia started to feel
suffocatingly fake.
Life can be pretty good at 10 or 20, but it's often frustrating at 15. This
is too big a problem to solve here, but certainly one reason life sucks at 15
is that kids are trapped in a world designed for 10 year olds.
What do parents hope to protect their children from by raising them in
suburbia? A friend who moved out of Manhattan said merely that her 3 year old
daughter "saw too much." Off the top of my head, that might include: people
who are high or drunk, poverty, madness, gruesome medical conditions, sexual
behavior of various degrees of oddness, and violent anger.
I think it's the anger that would worry me most if I had a 3 year old. I was
29 when I moved to New York and I was surprised even then. I wouldn't want a 3
year old to see some of the disputes I saw. It would be too frightening. A lot
of the things adults conceal from smaller children, they conceal because
they'd be frightening, not because they want to conceal the existence of such
things. Misleading the child is just a byproduct.
This seems one of the most justifiable types of lying adults do to kids. But
because the lies are indirect we don't keep a very strict accounting of them.
Parents know they've concealed the facts about sex, and many at some point sit
their kids down and explain more. But few tell their kids about the
differences between the real world and the cocoon they grew up in. Combine
this with the confidence parents try to instill in their kids, and every year
you get a new crop of 18 year olds who think they know how to run the world.
Don't all 18 year olds think they know how to run the world? Actually this
seems to be a recent innovation, no more than about 100 years old. In
preindustrial times teenage kids were junior members of the adult world and
comparatively well aware of their shortcomings. They could see they weren't as
strong or skillful as the village smith. In past times people lied to kids
about some things more than we do now, but the lies implicit in an artificial,
protected environment are a recent invention. Like a lot of new inventions,
the rich got this first. Children of kings and great magnates were the first
to grow up out of touch with the world. Suburbia means half the population can
live like kings in that respect.
**Sex (and Drugs)**
I'd have different worries about raising teenage kids in New York. I'd worry
less about what they'd see, and more about what they'd do. I went to college
with a lot of kids who grew up in Manhattan, and as a rule they seemed pretty
jaded. They seemed to have lost their virginity at an average of about 14 and
by college had tried more drugs than I'd even heard of.
The reasons parents don't want their teenage kids having sex are complex.
There are some obvious dangers: pregnancy and sexually transmitted diseases.
But those aren't the only reasons parents don't want their kids having sex.
The average parents of a 14 year old girl would hate the idea of her having
sex even if there were zero risk of pregnancy or sexually transmitted
diseases.
Kids can probably sense they aren't being told the whole story. After all,
pregnancy and sexually transmitted diseases are just as much a problem for
adults, and they have sex.
What really bothers parents about their teenage kids having sex? Their dislike
of the idea is so visceral it's probably inborn. But if it's inborn it should
be universal, and there are plenty of societies where parents don't mind if
their teenage kids have sex—indeed, where it's normal for 14 year olds to
become mothers. So what's going on? There does seem to be a universal taboo
against sex with prepubescent children. One can imagine evolutionary reasons
for that. And I think this is the main reason parents in industrialized
societies dislike teenage kids having sex. They still think of them as
children, even though biologically they're not, so the taboo against child sex
still has force.
One thing adults conceal about sex they also conceal about drugs: that it can
cause great pleasure. That's what makes sex and drugs so dangerous. The desire
for them can cloud one's judgement—which is especially frightening when the
judgement being clouded is the already wretched judgement of a teenage kid.
Here parents' desires conflict. Older societies told kids they had bad
judgement, but modern parents want their children to be confident. This may
well be a better plan than the old one of putting them in their place, but it
has the side effect that after having implicitly lied to kids about how good
their judgement is, we then have to lie again about all the things they might
get into trouble with if they believed us.
If parents told their kids the truth about sex and drugs, it would be: the
reason you should avoid these things is that you have lousy judgement. People
with twice your experience still get burned by them. But this may be one of
those cases where the truth wouldn't be convincing, because one of the
symptoms of bad judgement is believing you have good judgement. When you're
too weak to lift something, you can tell, but when you're making a decision
impetuously, you're all the more sure of it.
**Innocence**
Another reason parents don't want their kids having sex is that they want to
keep them innocent. Adults have a certain model of how kids are supposed to
behave, and it's different from what they expect of other adults.
One of the most obvious differences is the words kids are allowed to use. Most
parents use words when talking to other adults that they wouldn't want their
kids using. They try to hide even the existence of these words for as long as
they can. And this is another of those conspiracies everyone participates in:
everyone knows you're not supposed to swear in front of kids.
I've never heard more different explanations for anything parents tell kids
than why they shouldn't swear. Every parent I know forbids their children to
swear, and yet no two of them have the same justification. It's clear most
start with not wanting kids to swear, then make up the reason afterward.
So my theory about what's going on is that the _function_ of swearwords is to
mark the speaker as an adult. There's no difference in the meaning of "shit"
and "poopoo." So why should one be ok for kids to say and one forbidden? The
only explanation is: by definition. [3]
Why does it bother adults so much when kids do things reserved for adults? The
idea of a foul-mouthed, cynical 10 year old leaning against a lamppost with a
cigarette hanging out of the corner of his mouth is very disconcerting. But
why?
One reason we want kids to be innocent is that we're programmed to like
certain kinds of helplessness. I've several times heard mothers say they
deliberately refrained from correcting their young children's
mispronunciations because they were so cute. And if you think about it,
cuteness is helplessness. Toys and cartoon characters meant to be cute always
have clueless expressions and stubby, ineffectual limbs.
It's not surprising we'd have an inborn desire to love and protect helpless
creatures, considering human offspring are so helpless for so long. Without
the helplessness that makes kids cute, they'd be very annoying. They'd merely
seem like incompetent adults. But there's more to it than that. The reason our
hypothetical jaded 10 year old bothers me so much is not just that he'd be
annoying, but that he'd have cut off his prospects for growth so early. To be
jaded you have to think you know how the world works, and any theory a 10 year
old had about that would probably be a pretty narrow one.
Innocence is also open-mindedness. We want kids to be innocent so they can
continue to learn. Paradoxical as it sounds, there are some kinds of knowledge
that get in the way of other kinds of knowledge. If you're going to learn that
the world is a brutal place full of people trying to take advantage of one
another, you're better off learning it last. Otherwise you won't bother
learning much more.
Very smart adults often seem unusually innocent, and I don't think this is a
coincidence. I think they've deliberately avoided learning about certain
things. Certainly I do. I used to think I wanted to know everything. Now I
know I don't.
**Death**
After sex, death is the topic adults lie most conspicuously about to kids. Sex
I believe they conceal because of deep taboos. But why do we conceal death
from kids? Probably because small children are particularly horrified by it.
They want to feel safe, and death is the ultimate threat.
One of the most spectacular lies our parents told us was about the death of
our first cat. Over the years, as we asked for more details, they were
compelled to invent more, so the story grew quite elaborate. The cat had died
at the vet's office. Of what? Of the anaesthesia itself. Why was the cat at
the vet's office? To be fixed. And why had such a routine operation killed it?
It wasn't the vet's fault; the cat had a congenitally weak heart; the
anaesthesia was too much for it; but there was no way anyone could have known
this in advance. It was not till we were in our twenties that the truth came
out: my sister, then about three, had accidentally stepped on the cat and
broken its back.
They didn't feel the need to tell us the cat was now happily in cat heaven. My
parents never claimed that people or animals who died had "gone to a better
place," or that we'd meet them again. It didn't seem to harm us.
My grandmother told us an edited version of the death of my grandfather. She
said they'd been sitting reading one day, and when she said something to him,
he didn't answer. He seemed to be asleep, but when she tried to rouse him, she
couldn't. "He was gone." Having a heart attack sounded like falling asleep.
Later I learned it hadn't been so neat, and the heart attack had taken most of
a day to kill him.
Along with such outright lies, there must have been a lot of changing the
subject when death came up. I can't remember that, of course, but I can infer
it from the fact that I didn't really grasp I was going to die till I was
about 19. How could I have missed something so obvious for so long? Now that
I've seen parents managing the subject, I can see how: questions about death
are gently but firmly turned aside.
On this topic, especially, they're met half-way by kids. Kids often want to be
lied to. They want to believe they're living in a comfortable, safe world as
much as their parents want them to believe it. [4]
**Identity**
Some parents feel a strong adherence to an ethnic or religious group and want
their kids to feel it too. This usually requires two different kinds of lying:
the first is to tell the child that he or she is an X, and the second is
whatever specific lies Xes differentiate themselves by believing. [5]
Telling a child they have a particular ethnic or religious identity is one of
the stickiest things you can tell them. Almost anything else you tell a kid,
they can change their mind about later when they start to think for
themselves. But if you tell a kid they're a member of a certain group, that
seems nearly impossible to shake.
This despite the fact that it can be one of the most premeditated lies parents
tell. When parents are of different religions, they'll often agree between
themselves that their children will be "raised as Xes." And it works. The kids
obligingly grow up considering themselves as Xes, despite the fact that if
their parents had chosen the other way, they'd have grown up considering
themselves as Ys.
One reason this works so well is the second kind of lie involved. The truth is
common property. You can't distinguish your group by doing things that are
rational, and believing things that are true. If you want to set yourself
apart from other people, you have to do things that are arbitrary, and believe
things that are false. And after having spent their whole lives doing things
that are arbitrary and believing things that are false, and being regarded as
odd by "outsiders" on that account, the cognitive dissonance pushing children
to regard themselves as Xes must be enormous. If they aren't an X, why are
they attached to all these arbitrary beliefs and customs? If they aren't an X,
why do all the non-Xes call them one?
This form of lie is not without its uses. You can use it to carry a payload of
beneficial beliefs, and they will also become part of the child's identity.
You can tell the child that in addition to never wearing the color yellow,
believing the world was created by a giant rabbit, and always snapping their
fingers before eating fish, Xes are also particularly honest and industrious.
Then X children will grow up feeling it's part of their identity to be honest
and industrious.
This probably accounts for a lot of the spread of modern religions, and
explains why their doctrines are a combination of the useful and the bizarre.
The bizarre half is what makes the religion stick, and the useful half is the
payload. [6]
**Authority**
One of the least excusable reasons adults lie to kids is to maintain power
over them. Sometimes these lies are truly sinister, like a child molester
telling his victims they'll get in trouble if they tell anyone what happened
to them. Others seem more innocent; it depends how badly adults lie to
maintain their power, and what they use it for.
Most adults make some effort to conceal their flaws from children. Usually
their motives are mixed. For example, a father who has an affair generally
conceals it from his children. His motive is partly that it would worry them,
partly that this would introduce the topic of sex, and partly (a larger part
than he would admit) that he doesn't want to tarnish himself in their eyes.
If you want to learn what lies are told to kids, read almost any book written
to teach them about "issues." [7] Peter Mayle wrote one called _Why Are We
Getting a Divorce?_ It begins with the three most important things to remember
about divorce, one of which is:
> You shouldn't put the blame on one parent, because divorce is never only one
> person's fault. [8]
Really? When a man runs off with his secretary, is it always partly his wife's
fault? But I can see why Mayle might have said this. Maybe it's more important
for kids to respect their parents than to know the truth about them.
But because adults conceal their flaws, and at the same time insist on high
standards of behavior for kids, a lot of kids grow up feeling they fall
hopelessly short. They walk around feeling horribly evil for having used a
swearword, while in fact most of the adults around them are doing much worse
things.
This happens in intellectual as well as moral questions. The more confident
people are, the more willing they seem to be to answer a question "I don't
know." Less confident people feel they have to have an answer or they'll look
bad. My parents were pretty good about admitting when they didn't know things,
but I must have been told a lot of lies of this type by teachers, because I
rarely heard a teacher say "I don't know" till I got to college. I remember
because it was so surprising to hear someone say that in front of a class.
The first hint I had that teachers weren't omniscient came in sixth grade,
after my father contradicted something I'd learned in school. When I protested
that the teacher had said the opposite, my father replied that the guy had no
idea what he was talking about—that he was just an elementary school teacher,
after all.
_Just_ a teacher? The phrase seemed almost grammatically ill-formed. Didn't
teachers know everything about the subjects they taught? And if not, why were
they the ones teaching us?
The sad fact is, US public school teachers don't generally understand the
stuff they're teaching very well. There are some sterling exceptions, but as a
rule people planning to go into teaching rank academically near the bottom of
the college population. So the fact that I still thought at age 11 that
teachers were infallible shows what a job the system must have done on my
brain.
**School**
What kids get taught in school is a complex mix of lies. The most excusable
are those told to simplify ideas to make them easy to learn. The problem is, a
lot of propaganda gets slipped into the curriculum in the name of
simplification.
Public school textbooks represent a compromise between what various powerful
groups want kids to be told. The lies are rarely overt. Usually they consist
either of omissions or of over-emphasizing certain topics at the expense of
others. The view of history we got in elementary school was a crude
hagiography, with at least one representative of each powerful group.
The famous scientists I remember were Einstein, Marie Curie, and George
Washington Carver. Einstein was a big deal because his work led to the atom
bomb. Marie Curie was involved with X-rays. But I was mystified about Carver.
He seemed to have done stuff with peanuts.
It's obvious now that he was on the list because he was black (and for that
matter that Marie Curie was on it because she was a woman), but as a kid I was
confused for years about him. I wonder if it wouldn't have been better just to
tell us the truth: that there weren't any famous black scientists. Ranking
George Washington Carver with Einstein misled us not only about science, but
about the obstacles blacks faced in his time.
As subjects got softer, the lies got more frequent. By the time you got to
politics and recent history, what we were taught was pretty much pure
propaganda. For example, we were taught to regard political leaders as
saints—especially the recently martyred Kennedy and King. It was astonishing
to learn later that they'd both been serial womanizers, and that Kennedy was a
speed freak to boot. (By the time King's plagiarism emerged, I'd lost the
ability to be surprised by the misdeeds of famous people.)
I doubt you could teach kids recent history without teaching them lies,
because practically everyone who has anything to say about it has some kind of
spin to put on it. Much recent history _consists_ of spin. It would probably
be better just to teach them metafacts like that.
Probably the biggest lie told in schools, though, is that the way to succeed
is through following "the rules." In fact most such rules are just hacks to
manage large groups efficiently.
**Peace**
Of all the reasons we lie to kids, the most powerful is probably the same
mundane reason they lie to us.
Often when we lie to people it's not part of any conscious strategy, but
because they'd react violently to the truth. Kids, almost by definition, lack
self-control. They react violently to things—and so they get lied to a lot.
[9]
A few Thanksgivings ago, a friend of mine found himself in a situation that
perfectly illustrates the complex motives we have when we lie to kids. As the
roast turkey appeared on the table, his alarmingly perceptive 5 year old son
suddenly asked if the turkey had wanted to die. Foreseeing disaster, my friend
and his wife rapidly improvised: yes, the turkey had wanted to die, and in
fact had lived its whole life with the aim of being their Thanksgiving dinner.
And that (phew) was the end of that.
Whenever we lie to kids to protect them, we're usually also lying to keep the
peace.
One consequence of this sort of calming lie is that we grow up thinking
horrible things are normal. It's hard for us to feel a sense of urgency as
adults over something we've literally been trained not to worry about. When I
was about 10 I saw a documentary on pollution that put me into a panic. It
seemed the planet was being irretrievably ruined. I went to my mother
afterward to ask if this was so. I don't remember what she said, but she made
me feel better, so I stopped worrying about it.
That was probably the best way to handle a frightened 10 year old. But we
should understand the price. This sort of lie is one of the main reasons bad
things persist: we're all trained to ignore them.
**Detox**
A sprinter in a race almost immediately enters a state called "oxygen debt."
His body switches to an emergency source of energy that's faster than regular
aerobic respiration. But this process builds up waste products that ultimately
require extra oxygen to break down, so at the end of the race he has to stop
and pant for a while to recover.
We arrive at adulthood with a kind of truth debt. We were told a lot of lies
to get us (and our parents) through our childhood. Some may have been
necessary. Some probably weren't. But we all arrive at adulthood with heads
full of lies.
There's never a point where the adults sit you down and explain all the lies
they told you. They've forgotten most of them. So if you're going to clear
these lies out of your head, you're going to have to do it yourself.
Few do. Most people go through life with bits of packing material adhering to
their minds and never know it. You probably never can completely undo the
effects of lies you were told as a kid, but it's worth trying. I've found that
whenever I've been able to undo a lie I was told, a lot of other things fell
into place.
Fortunately, once you arrive at adulthood you get a valuable new resource you
can use to figure out what lies you were told. You're now one of the liars.
You get to watch behind the scenes as adults spin the world for the next
generation of kids.
The first step in clearing your head is to realize how far you are from a
neutral observer. When I left high school I was, I thought, a complete
skeptic. I'd realized high school was crap. I thought I was ready to question
everything I knew. But among the many other things I was ignorant of was how
much debris there already was in my head. It's not enough to consider your
mind a blank slate. You have to consciously erase it.
**Notes**
[1] One reason I stuck with such a brutally simple word is that the lies we
tell kids are probably not quite as harmless as we think. If you look at what
adults told children in the past, it's shocking how much they lied to them.
Like us, they did it with the best intentions. So if we think we're as open as
one could reasonably be with children, we're probably fooling ourselves. Odds
are people in 100 years will be as shocked at some of the lies we tell as we
are at some of the lies people told 100 years ago.
I can't predict which these will be, and I don't want to write an essay that
will seem dumb in 100 years. So instead of using special euphemisms for lies
that seem excusable according to present fashions, I'm just going to call all
our lies lies.
(I have omitted one type: lies told to play games with kids' credulity. These
range from "make-believe," which is not really a lie because it's told with a
wink, to the frightening lies told by older siblings. There's not much to say
about these: I wouldn't want the first type to go away, and wouldn't expect
the second type to.)
[2] Calaprice, Alice (ed.), _The Quotable Einstein_, Princeton University Press, 1996.
[3] If you ask parents why kids shouldn't swear, the less educated ones
usually reply with some question-begging answer like "it's inappropriate,"
while the more educated ones come up with elaborate rationalizations. In fact
the less educated parents seem closer to the truth.
[4] As a friend with small children pointed out, it's easy for small children
to consider themselves immortal, because time seems to pass so slowly for
them. To a 3 year old, a day feels like a month might to an adult. So 80 years
sounds to him like 2400 years would to us.
[5] I realize I'm going to get endless grief for classifying religion as a
type of lie. Usually people skirt that issue with some equivocation implying
that lies believed for a sufficiently long time by sufficiently large numbers
of people are immune to the usual standards for truth. But because I can't
predict which lies future generations will consider inexcusable, I can't
safely omit any type we tell. Yes, it seems unlikely that religion will be out
of fashion in 100 years, but no more unlikely than it would have seemed to
someone in 1880 that schoolchildren in 1980 would be taught that masturbation
was perfectly normal and not to feel guilty about it.
[6] Unfortunately the payload can consist of bad customs as well as good ones.
For example, there are certain qualities that some groups in America consider
"acting white." In fact most of them could as accurately be called "acting
Japanese." There's nothing specifically white about such customs. They're
common to all cultures with long traditions of living in cities. So it is
probably a losing bet for a group to consider behaving the opposite way as
part of its identity.
[7] In this context, "issues" basically means "things we're going to lie to
them about." That's why there's a special name for these topics.
[8] Mayle, Peter, _Why Are We Getting a Divorce?_, Harmony, 1988.
[9] The ironic thing is, this is also the main reason kids lie to adults. If
you freak out when people tell you alarming things, they won't tell you them.
Teenagers don't tell their parents what happened that night they were supposed
to be staying at a friend's house for the same reason parents don't tell 5
year olds the truth about the Thanksgiving turkey. They'd freak if they knew.
**Thanks** to Sam Altman, Marc Andreessen, Trevor Blackwell, Patrick Collison,
Jessica Livingston, Jackie McDonough, Robert Morris, and David Sloo for
reading drafts of this. And since there are some controversial ideas here, I
should add that none of them agreed with everything in it.
April 2009
[Om Malik](http://gigaom.com/2009/04/03/google-may-buy-twitter-or-not-but-why-is-twitter-so-hot/) is the most recent of many people to ask why Twitter is such a big deal.
The reason is that it's a new messaging protocol, where you don't specify the
recipients. New protocols are rare. Or more precisely, new protocols that take
off are. There are only a handful of commonly used ones: TCP/IP (the
Internet), SMTP (email), HTTP (the web), and so on. So any new protocol is a
big deal. But Twitter is a protocol owned by a private company. That's even
rarer.
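To make the contrast with older protocols concrete, here's a minimal sketch of the idea in Python. It's a toy model of my own, not Twitter's actual implementation or API; the `Hub`, `follow`, and `publish` names are invented for illustration. The point is only that, unlike SMTP, publishing takes no recipient list: messages flow to whoever has chosen to subscribe.

```python
from collections import defaultdict

class Hub:
    """Toy message hub: senders publish without naming recipients."""

    def __init__(self):
        self.followers = defaultdict(set)   # publisher -> set of subscribers
        self.inboxes = defaultdict(list)    # subscriber -> messages received

    def follow(self, subscriber, publisher):
        self.followers[publisher].add(subscriber)

    def publish(self, publisher, text):
        # Unlike an email, this message names no recipients; it goes
        # to whoever has chosen to listen.
        for subscriber in self.followers[publisher]:
            self.inboxes[subscriber].append((publisher, text))

hub = Hub()
hub.follow("alice", "pg")
hub.follow("bob", "pg")
hub.publish("pg", "new essay up")
print(hub.inboxes["alice"])   # [('pg', 'new essay up')]
```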
Curiously, the fact that the founders of Twitter have been slow to monetize it
may in the long run prove to be an advantage. Because they haven't tried to
control it too much, Twitter feels to everyone like previous protocols. One
forgets it's owned by a private company. That must have made it easier for
Twitter to spread.
April 2009
Recently I realized I'd been holding two ideas in my head that would explode
if combined.
The first is that startups may represent a [new economic phase](highres.html),
on the scale of the Industrial Revolution. I'm not sure of this, but there
seems a decent chance it's true. People are dramatically more productive as
founders or early employees of startups—imagine how much less Larry and Sergey
would have achieved if they'd gone to work for a big company—and that scale of
improvement can change social customs.
The second idea is that startups are a type of business that flourishes in
certain places that [specialize](startuphubs.html) in it—that Silicon Valley
specializes in startups in the same way Los Angeles specializes in movies, or
New York in finance. [1]
What if both are true? What if startups are both a new economic phase and also
a type of business that only flourishes in certain centers?
If so, this revolution is going to be particularly revolutionary. All previous
revolutions have spread. Agriculture, cities, and industrialization all spread
widely. If startups end up being like the movie business, with just a handful
of centers and one dominant one, that's going to have novel consequences.
There are already signs that startups may not spread particularly well. The
spread of startups seems to be proceeding slower than the spread of the
Industrial Revolution, despite the fact that communication is so much faster
now.
Within a few decades of the founding of Boulton & Watt there were steam
engines scattered over northern Europe and North America. Industrialization
didn't spread much beyond those regions for a while. It only spread to places
where there was a strong middle class—countries where a private citizen could
make a fortune without having it confiscated. Otherwise it wasn't worth
investing in factories. But in a country with a strong middle class it was
easy for industrial techniques to take root. An individual mine or factory
owner could decide to install a steam engine, and within a few years he could
probably find someone local to make him one. So steam engines spread fast. And
they spread widely, because the locations of mines and factories were
determined by features like rivers, harbors, and sources of raw materials. [2]
Startups don't seem to spread so well, partly because they're more a social
than a technical phenomenon, and partly because they're not tied to geography.
An individual European manufacturer could import industrial techniques and
they'd work fine. This doesn't seem to work so well with startups: you need a
community of expertise, as you do in the movie business. [3] Plus there aren't
the same forces driving startups to spread. Once railroads or electric power
grids were invented, every region had to have them. An area without railroads
or power was a rich potential market. But this isn't true with startups.
There's no need for a Microsoft of France or Google of Germany.
Governments may decide they want to encourage startups locally, but government
policy can't call them into being the way a genuine need could.
How will this all play out? If I had to predict now, I'd say that startups
will spread, but very slowly, because their spread will be driven not by
government policies (which won't work) or by market need (which doesn't exist)
but, to the extent that it happens at all, by the same random factors that
have caused startup culture to spread thus far. And such random factors will
increasingly be outweighed by the pull of existing startup hubs.
Silicon Valley is where it is because William Shockley wanted to move back to
Palo Alto, where he grew up, and the experts he lured west to work with him
liked it so much they stayed. Seattle owes much of its position as a tech
center to the same cause: Gates and Allen wanted to move home. Otherwise
Albuquerque might have Seattle's place in the rankings. Boston is a tech
center because it's the intellectual capital of the US and probably the world.
And if Battery Ventures hadn't turned down Facebook, Boston would be
significantly bigger now on the startup radar screen.
But of course it's not a coincidence that Facebook got funded in the Valley
and not Boston. There are more and bolder investors in Silicon Valley than in
Boston, and even undergrads know it.
Boston's case illustrates the difficulty you'd have establishing a new startup
hub this late in the game. If you wanted to create a startup hub by
reproducing the way existing ones happened, the [way to do
it](siliconvalley.html) would be to establish a first-rate research university
in a place so nice that rich people wanted to live there. Then the town would
be hospitable to both groups you need: both founders and investors. That's the
combination that yielded Silicon Valley. But Silicon Valley didn't have
Silicon Valley to compete with. If you tried now to create a startup hub by
planting a great university in a nice place, it would have a harder time
getting started, because many of the best startups it produced would be sucked
away to existing startup hubs.
Recently I suggested a potential shortcut: [pay startups to move](maybe.html).
Once you had enough good startups in one place, it would create a self-
sustaining chain reaction. Founders would start to move there without being
paid, because that was where their peers were, and investors would appear too,
because that was where the deals were.
In practice I doubt any government would have the balls to try this, or the
brains to do it right. I didn't mean it as a practical suggestion, but more as
an exploration of the lower bound of what it would take to create a startup
hub deliberately.
The most likely scenario is (1) that no government will successfully establish
a startup hub, and (2) that the spread of startup culture will thus be driven
by the random factors that have driven it so far, but (3) that these factors
will be increasingly outweighed by the pull of existing startup hubs. Result:
this revolution, if it is one, will be unusually localized.
**Notes**
[1] There are two very different types of startup: one kind that evolves
naturally, and one kind that's called into being to "commercialize" a
scientific discovery. Most computer/software startups are now the first type,
and most pharmaceutical startups the second. When I talk about startups in
this essay, I mean type I startups. There is no difficulty making type II
startups spread: all you have to do is fund medical research labs;
commercializing whatever new discoveries the boffins throw off is as
straightforward as building a new airport. Type II startups neither require
nor produce startup culture. But that means having type II startups won't get
you type I startups. Philadelphia is a case in point: lots of type II
startups, but hardly any type I.
Incidentally, Google may appear to be an instance of a type II startup, but it
wasn't. Google is not pagerank commercialized. They could have used another
algorithm and everything would have turned out the same. What made Google
Google is that they cared about doing search well at a critical point in the
evolution of the web.
[2] Watt didn't invent the steam engine. His critical invention was a
refinement that made steam engines dramatically more efficient: the separate
condenser. But that oversimplifies his role. He had such a different attitude
to the problem and approached it with such energy that he transformed the
field. Perhaps the most accurate way to put it would be to say that Watt
reinvented the steam engine.
[3] The biggest counterexample here is Skype. If you're doing something that
would get shut down in the US, it becomes an advantage to be located
elsewhere. That's why Kazaa took the place of Napster. And the expertise and
connections the founders gained from running Kazaa helped ensure the success
of Skype.
**Thanks** to Patrick Collison, Jessica Livingston, and Fred Wilson for
reading drafts of this.
November 2016
If you're a California voter, there is an important proposition on your ballot
this year: Proposition 62, which bans the death penalty.
When I was younger I used to think the debate about the death penalty was
about when it's ok to take a human life. Is it ok to kill a killer?
But that is not the issue here.
The real world does not work like the version I was shown on TV growing up.
The police often arrest the wrong person. Defendants' lawyers are often
incompetent. And prosecutors are often motivated more by publicity than
justice.
In the real world, [about 4%](http://time.com/79572/more-innocent-people-on-death-row-than-estimated-study/) of people sentenced to death are innocent. So
this is not about whether it's ok to kill killers. This is about whether it's
ok to kill innocent people.
A child could answer that one for you.
This year, in California, you have a chance to end this, by voting yes on
Proposition 62. But beware, because there is another proposition, Proposition
66, whose goal is to make it easier to execute people. So yes on 62, no on 66.
It's time.
March 2009
About twenty years ago people noticed computers and TV were on a collision
course and started to speculate about what they'd produce when they converged.
We now know the answer: computers. It's clear now that even by using the word
"convergence" we were giving TV too much credit. This won't be convergence so
much as replacement. People may still watch things they call "TV shows," but
they'll watch them mostly on computers.
What decided the contest for computers? Four forces, three of which one could
have predicted, and one that would have been harder to.
One predictable cause of victory is that the Internet is an open platform.
Anyone can build whatever they want on it, and the market picks the winners.
So innovation happens at hacker speeds instead of big company speeds.
The second is Moore's Law, which has worked its usual magic on Internet
bandwidth. [1]
The third reason computers won is piracy. Users prefer it not just because
it's free, but because it's more convenient. Bittorrent and YouTube have
already trained a new generation of viewers that the place to watch shows is
on a computer screen. [2]
The somewhat more surprising force was one specific type of innovation: social
applications. The average teenage kid has a pretty much infinite capacity for
talking to their friends. But they can't physically be with them all the time.
When I was in high school the solution was the telephone. Now it's social
networks, multiplayer games, and various messaging applications. The way you
reach them all is through a computer. [3] Which means every teenage kid (a)
wants a computer with an Internet connection, (b) has an incentive to figure
out how to use it, and (c) spends countless hours in front of it.
This was the most powerful force of all. This was what made everyone want
computers. Nerds got computers because they liked them. Then gamers got them
to play games on. But it was connecting to other people that got everyone
else: that's what made even grandmas and 14 year old girls want computers.
After decades of running an IV drip right into their audience, people in the
entertainment business had understandably come to think of them as rather
passive. They thought they'd be able to dictate the way shows reached
audiences. But they underestimated the force of their desire to connect with
one another.
Facebook killed TV. That is wildly oversimplified, of course, but probably as
close to the truth as you can get in three words.
___
The TV networks already seem, grudgingly, to see where things are going, and
have responded by putting their stuff, grudgingly, online. But they're still
dragging their heels. They still seem to wish people would watch shows on TV
instead, just as newspapers that put their stories online still seem to wish
people would wait till the next morning and read them printed on paper. They
should both just face the fact that the Internet is the primary medium.
They'd be in a better position if they'd done that earlier. When a new medium
arises that's powerful enough to make incumbents nervous, then it's probably
powerful enough to win, and the best thing they can do is jump in immediately.
Whether they like it or not, big changes are coming, because the Internet
dissolves the two cornerstones of broadcast media: synchronicity and locality.
On the Internet, you don't have to send everyone the same signal, and you
don't have to send it to them from a local source. People will watch what they
want when they want it, and group themselves according to whatever shared
interest they feel most strongly. Maybe their strongest shared interest will
be their physical location, but I'm guessing not. Which means local TV is
probably dead. It was an artifact of limitations imposed by old technology. If
someone were creating an Internet-based TV company from scratch now, they
might have some plan for shows aimed at specific regions, but it wouldn't be a
top priority.
Synchronicity and locality are tied together. TV network affiliates care
what's on at 10 because that delivers viewers for local news at 11. This
connection adds more brittleness than strength, however: people don't watch
what's on at 10 because they want to watch the news afterward.
TV networks will fight these trends, because they don't have sufficient
flexibility to adapt to them. They're hemmed in by local affiliates in much
the same way car companies are hemmed in by dealers and unions. Inevitably,
the people running the networks will take the easy route and try to keep the
old model running for a couple more years, just as the record labels have
done.
A recent article in the _Wall Street Journal_ described how TV networks were
trying to add more live shows, partly as a way to make viewers watch TV
synchronously instead of watching recorded shows when it suited them. Instead
of delivering what viewers want, they're trying to force them to change their
habits to suit the networks' obsolete business model. That never works unless
you have a monopoly or cartel to enforce it, and even then it only works
temporarily.
The other reason networks like live shows is that they're cheaper to produce.
There they have the right idea, but they haven't followed it to its
conclusion. Live content can be way cheaper than networks realize, and the way
to take advantage of dramatic decreases in cost is to [increase
volume](http://justin.tv). The networks are prevented from seeing this whole
line of reasoning because they still think of themselves as being in the
broadcast business—as sending one signal to everyone. [4]
___
[Now](badeconomy.html) would be a good time to start any company that competes
with TV networks. That's what a lot of Internet startups are, though they may
not have had this as an explicit goal. People only have so many leisure hours
a day, and TV is premised on such long sessions (unlike Google, which prides
itself on sending users on their way quickly) that anything that takes up
their time is competing with it. But in addition to such indirect competitors,
I think TV companies will increasingly face direct ones.
Even in cable TV, the long tail was lopped off prematurely by the threshold
you had to get over to start a new channel. It will be longer on the Internet,
and there will be more mobility within it. In this new world, the existing
players will only have the advantages any big company has in its market.
That will change the balance of power between the networks and the people who
produce shows. The networks used to be gatekeepers. They distributed your
work, and sold advertising on it. Now the people who produce a show can
distribute it themselves. The main value networks supply now is ad sales.
Which will tend to put them in the position of service providers rather than
publishers.
Shows will change even more. On the Internet there's no reason to keep their
current format, or even the fact that they have a single format. Indeed, the
more interesting sort of convergence that's coming is between shows and games.
But on the question of what sort of entertainment gets distributed on the
Internet in 20 years, I wouldn't dare to make any predictions, except that
things will change a lot. We'll get whatever the most imaginative people can
cook up. That's why the Internet won.
**Notes**
[1] Thanks to Trevor Blackwell for this point. He adds: "I remember the eyes
of phone companies gleaming in the early 90s when they talked about
convergence. They thought most programming would be on demand, and they would
implement it and make a lot of money. It didn't work out. They assumed that
their local network infrastructure would be critical to do video on-demand,
because you couldn't possibly stream it from a few data centers over the
internet. At the time (1992) the entire cross-country Internet bandwidth
wasn't enough for one video stream. But wide-area bandwidth increased more
than they expected and they were beaten by iTunes and Hulu."
[2] Copyright owners tend to focus on the aspect they see of piracy, which is
the lost revenue. They therefore think what drives users to do it is the
desire to get something for free. But iTunes shows that people will pay for
stuff online, if you make it easy. A significant component of piracy is simply
that it offers a better user experience.
[3] Or a phone that is actually a computer. I'm not making any predictions
about the size of the device that will replace TV, just that it will have a
browser and get data via the Internet.
[4] Emmett Shear writes: "I'd argue the long tail for sports may be even
larger than the long tail for other kinds of content. Anyone can broadcast a
high school football game that will be interesting to 10,000 people or so,
even if the quality of production is not so good."
**Thanks** to Sam Altman, Trevor Blackwell, Nancy Cook, Michael Seibel, Emmett
Shear, and Fred Wilson for reading drafts of this.
January 2005
_(I wrote this talk for a high school. I never actually gave it, because the
school authorities vetoed the plan to invite me.)_
When I said I was speaking at a high school, my friends were curious. What
will you say to high school students? So I asked them, what do you wish
someone had told you in high school? Their answers were remarkably similar. So
I'm going to tell you what we all wish someone had told us.
I'll start by telling you something you don't have to know in high school:
what you want to do with your life. People are always asking you this, so you
think you're supposed to have an answer. But adults ask this mainly as a
conversation starter. They want to know what sort of person you are, and this
question is just to get you talking. They ask it the way you might poke a
hermit crab in a tide pool, to see what it does.
If I were back in high school and someone asked about my plans, I'd say that
my first priority was to learn what the options were. You don't need to be in
a rush to choose your life's work. What you need to do is discover what you
like. You have to work on stuff you like if you want to be good at what you
do.
It might seem that nothing would be easier than deciding what you like, but it
turns out to be hard, partly because it's hard to get an accurate picture of
most jobs. Being a doctor is not the way it's portrayed on TV. Fortunately you
can also watch real doctors, by volunteering in hospitals. [1]
But there are other jobs you can't learn about, because no one is doing them
yet. Most of the work I've done in the last ten years didn't exist when I was
in high school. The world changes fast, and the rate at which it changes is
itself speeding up. In such a world it's not a good idea to have fixed plans.
And yet every May, speakers all over the country fire up the Standard
Graduation Speech, the theme of which is: don't give up on your dreams. I know
what they mean, but this is a bad way to put it, because it implies you're
supposed to be bound by some plan you made early on. The computer world has a
name for this: premature optimization. And it is synonymous with disaster.
These speakers would do better to say simply, don't give up.
What they really mean is, don't get demoralized. Don't think that you can't do
what other people can. And I agree you shouldn't underestimate your potential.
People who've done great things tend to seem as if they were a race apart. And
most biographies only exaggerate this illusion, partly due to the worshipful
attitude biographers inevitably sink into, and partly because, knowing how the
story ends, they can't help streamlining the plot till it seems like the
subject's life was a matter of destiny, the mere unfolding of some innate
genius. In fact I suspect if you had the sixteen year old Shakespeare or
Einstein in school with you, they'd seem impressive, but not totally unlike
your other friends.
Which is an uncomfortable thought. If they were just like us, then they had to
work very hard to do what they did. And that's one reason we like to believe
in genius. It gives us an excuse for being lazy. If these guys were able to do
what they did only because of some magic Shakespeareness or Einsteinness, then
it's not our fault if we can't do something as good.
I'm not saying there's no such thing as genius. But if you're trying to choose
between two theories and one gives you an excuse for being lazy, the other one
is probably right.
So far we've cut the Standard Graduation Speech down from "don't give up on
your dreams" to "what someone else can do, you can do." But it needs to be cut
still further. There is _some_ variation in natural ability. Most people
overestimate its role, but it does exist. If I were talking to a guy four feet
tall whose ambition was to play in the NBA, I'd feel pretty stupid saying, you
can do anything if you really try. [2]
We need to cut the Standard Graduation Speech down to, "what someone else with
your abilities can do, you can do; and don't underestimate your abilities."
But as so often happens, the closer you get to the truth, the messier your
sentence gets. We've taken a nice, neat (but wrong) slogan, and churned it up
like a mud puddle. It doesn't make a very good speech anymore. But worse
still, it doesn't tell you what to do anymore. Someone with your abilities?
What are your abilities?
**Upwind**
I think the solution is to work in the other direction. Instead of working
back from a goal, work forward from promising situations. This is what most
successful people actually do anyway.
In the graduation-speech approach, you decide where you want to be in twenty
years, and then ask: what should I do now to get there? I propose instead that
you don't commit to anything in the future, but just look at the options
available now, and choose those that will give you the most promising range of
options afterward.
It's not so important what you work on, so long as you're not wasting your
time. Work on things that interest you and increase your options, and worry
later about which you'll take.
Suppose you're a college freshman deciding whether to major in math or
economics. Well, math will give you more options: you can go into almost any
field from math. If you major in math it will be easy to get into grad school
in economics, but if you major in economics it will be hard to get into grad
school in math.
Flying a glider is a good metaphor here. Because a glider doesn't have an
engine, you can't fly into the wind without losing a lot of altitude. If you
let yourself get far downwind of good places to land, your options narrow
uncomfortably. As a rule you want to stay upwind. So I propose that as a
replacement for "don't give up on your dreams." Stay upwind.
How do you do that, though? Even if math is upwind of economics, how are you
supposed to know that as a high school student?
Well, you don't, and that's what you need to find out. Look for smart people
and hard problems. Smart people tend to clump together, and if you can find
such a clump, it's probably worthwhile to join it. But it's not
straightforward to find these, because there is a lot of faking going on.
To a newly arrived undergraduate, all university departments look much the
same. The professors all seem forbiddingly intellectual and publish papers
unintelligible to outsiders. But while in some fields the papers are
unintelligible because they're full of hard ideas, in others they're
deliberately written in an obscure way to seem as if they're saying something
important. This may seem a scandalous proposition, but it has been
experimentally verified, in the famous _Social Text_ affair. Suspecting that
the papers published by literary theorists were often just intellectual-
sounding nonsense, a physicist deliberately wrote a paper full of
intellectual-sounding nonsense, and submitted it to a literary theory journal,
which published it.
The best protection is always to be working on hard problems. Writing novels
is hard. Reading novels isn't. Hard means worry: if you're not worrying that
something you're making will come out badly, or that you won't be able to
understand something you're studying, then it isn't hard enough. There has to
be suspense.
Well, this seems a grim view of the world, you may think. What I'm telling you
is that you should worry? Yes, but it's not as bad as it sounds. It's
exhilarating to overcome worries. You don't see faces much happier than people
winning gold medals. And you know why they're so happy? Relief.
I'm not saying this is the only way to be happy. Just that some kinds of worry
are not as bad as they sound.
**Ambition**
In practice, "stay upwind" reduces to "work on hard problems." And you can
start today. I wish I'd grasped that in high school.
Most people like to be good at what they do. In the so-called real world this
need is a powerful force. But high school students rarely benefit from it,
because they're given a fake thing to do. When I was in high school, I let
myself believe that my job was to be a high school student. And so I let my
need to be good at what I did be satisfied by merely doing well in school.
If you'd asked me in high school what the difference was between high school
kids and adults, I'd have said it was that adults had to earn a living. Wrong.
It's that adults take responsibility for themselves. Making a living is only a
small part of it. Far more important is to take intellectual responsibility
for oneself.
If I had to go through high school again, I'd treat it like a day job. I don't
mean that I'd slack in school. Working at something as a day job doesn't mean
doing it badly. It means not being defined by it. I mean I wouldn't think of
myself as a high school student, just as a musician with a day job as a waiter
doesn't think of himself as a waiter. [3] And when I wasn't working at my day
job I'd start trying to do real work.
When I ask people what they regret most about high school, they nearly all say
the same thing: that they wasted so much time. If you're wondering what you're
doing now that you'll regret most later, that's probably it. [4]
Some people say this is inevitable — that high school students aren't capable
of getting anything done yet. But I don't think this is true. And the proof is
that you're bored. You probably weren't bored when you were eight. When you're
eight it's called "playing" instead of "hanging out," but it's the same thing.
And when I was eight, I was rarely bored. Give me a back yard and a few other
kids and I could play all day.
The reason this got stale in middle school and high school, I now realize, is
that I was ready for something else. Childhood was getting old.
I'm not saying you shouldn't hang out with your friends — that you should all
become humorless little robots who do nothing but work. Hanging out with
friends is like chocolate cake. You enjoy it more if you eat it occasionally
than if you eat nothing but chocolate cake for every meal. No matter how much
you like chocolate cake, you'll be pretty queasy after the third meal of it.
And that's what the malaise one feels in high school is: mental queasiness.
[5]
You may be thinking, we have to do more than get good grades. We have to have
_extracurricular activities._ But you know perfectly well how bogus most of
these are. Collecting donations for a charity is an admirable thing to do, but
it's not _hard._ It's not getting something done. What I mean by getting
something done is learning how to write well, or how to program computers, or
what life was really like in preindustrial societies, or how to draw the human
face from life. This sort of thing rarely translates into a line item on a
college application.
**Corruption**
It's dangerous to design your life around getting into college, because the
people you have to impress to get into college are not a very discerning
audience. At most colleges, it's not the professors who decide whether you get
in, but admissions officers, and they are nowhere near as smart. They're the
NCOs of the intellectual world. They can't tell how smart you are. The mere
existence of prep schools is proof of that.
Few parents would pay so much for their kids to go to a school that didn't
improve their admissions prospects. Prep schools openly say this is one of
their aims. But what that means, if you stop to think about it, is that they
can hack the admissions process: that they can take the very same kid and make
him seem a more appealing candidate than he would if he went to the local
public school. [6]
Right now most of you feel your job in life is to be a promising college
applicant. But that means you're designing your life to satisfy a process so
mindless that there's a whole industry devoted to subverting it. No wonder you
become cynical. The malaise you feel is the same that a producer of reality TV
shows or a tobacco industry executive feels. And you don't even get paid a
lot.
So what do you do? What you should not do is rebel. That's what I did, and it
was a mistake. I didn't realize exactly what was happening to us, but I
smelled a major rat. And so I just gave up. Obviously the world sucked, so why
bother?
When I discovered that one of our teachers was herself using Cliff's Notes, it
seemed par for the course. Surely it meant nothing to get a good grade in such
a class.
In retrospect this was stupid. It was like someone getting fouled in a soccer
game and saying, hey, you fouled me, that's against the rules, and walking off
the field in indignation. Fouls happen. The thing to do when you get fouled is
not to lose your cool. Just keep playing.
By putting you in this situation, society has fouled you. Yes, as you suspect,
a lot of the stuff you learn in your classes is crap. And yes, as you suspect,
the college admissions process is largely a charade. But like many fouls, this
one was unintentional. [7] So just keep playing.
Rebellion is almost as stupid as obedience. In either case you let yourself be
defined by what they tell you to do. The best plan, I think, is to step onto
an orthogonal vector. Don't just do what they tell you, and don't just refuse
to. Instead treat school as a day job. As day jobs go, it's pretty sweet.
You're done at 3 o'clock, and you can even work on your own stuff while you're
there.
**Curiosity**
And what's your real job supposed to be? Unless you're Mozart, your first task
is to figure that out. What are the great things to work on? Where are the
imaginative people? And most importantly, what are you interested in? The word
"aptitude" is misleading, because it implies something innate. The most
powerful sort of aptitude is a consuming interest in some question, and such
interests are often acquired tastes.
A distorted version of this idea has filtered into popular culture under the
name "passion." I recently saw an ad for waiters saying they wanted people
with a "passion for service." The real thing is not something one could have
for waiting on tables. And passion is a bad word for it. A better name would
be curiosity.
Kids are curious, but the curiosity I mean has a different shape from kid
curiosity. Kid curiosity is broad and shallow; they ask why at random about
everything. In most adults this curiosity dries up entirely. It has to: you
can't get anything done if you're always asking why about everything. But in
ambitious adults, instead of drying up, curiosity becomes narrow and deep. The
mud flat morphs into a well.
Curiosity turns work into play. For Einstein, relativity wasn't a book full of
hard stuff he had to learn for an exam. It was a mystery he was trying to
solve. So it probably felt like less work to him to invent it than it would
seem to someone now to learn it in a class.
One of the most dangerous illusions you get from school is the idea that doing
great things requires a lot of discipline. Most subjects are taught in such a
boring way that it's only by discipline that you can flog yourself through
them. So I was surprised when, early in college, I read a quote by
Wittgenstein saying that he had no self-discipline and had never been able to
deny himself anything, not even a cup of coffee.
Now I know a number of people who do great work, and it's the same with all of
them. They have little discipline. They're all terrible procrastinators and
find it almost impossible to make themselves do anything they're not
interested in. One still hasn't sent out his half of the thank-you notes from
his wedding, four years ago. Another has 26,000 emails in her inbox.
I'm not saying you can get away with zero self-discipline. You probably need
about the amount you need to go running. I'm often reluctant to go running,
but once I do, I enjoy it. And if I don't run for several days, I feel ill.
It's the same with people who do great things. They know they'll feel bad if
they don't work, and they have enough discipline to get themselves to their
desks to start working. But once they get started, interest takes over, and
discipline is no longer necessary.
Do you think Shakespeare was gritting his teeth and diligently trying to write
Great Literature? Of course not. He was having fun. That's why he's so good.
If you want to do good work, what you need is a great curiosity about a
promising question. The critical moment for Einstein was when he looked at
Maxwell's equations and said, what the hell is going on here?
It can take years to zero in on a productive question, because it can take
years to figure out what a subject is really about. To take an extreme
example, consider math. Most people think they hate math, but the boring stuff
you do in school under the name "mathematics" is not at all like what
mathematicians do.
The great mathematician G. H. Hardy said he didn't like math in high school
either. He only took it up because he was better at it than the other
students. Only later did he realize math was interesting — only later did he
start to ask questions instead of merely answering them correctly.
When a friend of mine used to grumble because he had to write a paper for
school, his mother would tell him: find a way to make it interesting. That's
what you need to do: find a question that makes the world interesting. People
who do great things look at the same world everyone else does, but notice some
odd detail that's compellingly mysterious.
And not only in intellectual matters. Henry Ford's great question was, why do
cars have to be a luxury item? What would happen if you treated them as a
commodity? Franz Beckenbauer's was, in effect, why does everyone have to stay
in his position? Why can't defenders score goals too?
**Now**
If it takes years to articulate great questions, what do you do now, at
sixteen? Work toward finding one. Great questions don't appear suddenly. They
gradually congeal in your head. And what makes them congeal is experience. So
the way to find great questions is not to search for them — not to wander
about thinking, what great discovery shall I make? You can't answer that; if
you could, you'd have made it.
The way to get a big idea to appear in your head is not to hunt for big ideas,
but to put in a lot of time on work that interests you, and in the process
keep your mind open enough that a big idea can take roost. Einstein, Ford, and
Beckenbauer all used this recipe. They all knew their work like a piano player
knows the keys. So when something seemed amiss to them, they had the
confidence to notice it.
Put in time how and on what? Just pick a project that seems interesting: to
master some chunk of material, or to make something, or to answer some
question. Choose a project that will take less than a month, and make it
something you have the means to finish. Do something hard enough to stretch
you, but only just, especially at first. If you're deciding between two
projects, choose whichever seems most fun. If one blows up in your face, start
another. Repeat till, like an internal combustion engine, the process becomes
self-sustaining, and each project generates the next one. (This could take
years.)
It may be just as well not to do a project "for school," if that will restrict
you or make it seem like work. Involve your friends if you want, but not too
many, and only if they're not flakes. Friends offer moral support (few
startups are started by one person), but secrecy also has its advantages.
There's something pleasing about a secret project. And you can take more
risks, because no one will know if you fail.
Don't worry if a project doesn't seem to be on the path to some goal you're
supposed to have. Paths can bend a lot more than you think. So let the path
grow out of the project. The most important thing is to be excited about it,
because it's by doing that you learn.
Don't disregard unseemly motivations. One of the most powerful is the desire
to be better than other people at something. Hardy said that's what got him
started, and I think the only unusual thing about him is that he admitted it.
Another powerful motivator is the desire to do, or know, things you're not
supposed to. Closely related is the desire to do something audacious. Sixteen
year olds aren't supposed to write novels. So if you try, anything you achieve
is on the plus side of the ledger; if you fail utterly, you're doing no worse
than expectations. [8]
Beware of bad models. Especially when they excuse laziness. When I was in high
school I used to write "existentialist" short stories like ones I'd seen by
famous writers. My stories didn't have a lot of plot, but they were very deep.
And they were less work to write than entertaining ones would have been. I
should have known that was a danger sign. And in fact I found my stories
pretty boring; what excited me was the idea of writing serious, intellectual
stuff like the famous writers.
Now I have enough experience to realize that those famous writers actually
sucked. Plenty of famous people do; in the short term, the quality of one's
work is only a small component of fame. I should have been less worried about
doing something that seemed cool, and just done something I liked. That's the
actual road to coolness anyway.
A key ingredient in many projects, almost a project on its own, is to find
good books. Most books are bad. Nearly all textbooks are bad. [9] So don't
assume a subject is to be learned from whatever book on it happens to be
closest. You have to search actively for the tiny number of good books.
The important thing is to get out there and do stuff. Instead of waiting to be
taught, go out and learn.
Your life doesn't have to be shaped by admissions officers. It could be shaped
by your own curiosity. It is for all ambitious adults. And you don't have to
wait to start. In fact, you don't have to wait to be an adult. There's no
switch inside you that magically flips when you turn a certain age or graduate
from some institution. You start being an adult when you decide to take
responsibility for your life. You can do that at any age. [10]
This may sound like bullshit. I'm just a minor, you may think, I have no
money, I have to live at home, I have to do what adults tell me all day long.
Well, most adults labor under restrictions just as cumbersome, and they manage
to get things done. If you think it's restrictive being a kid, imagine having
kids.
The only real difference between adults and high school kids is that adults
realize they need to get things done, and high school kids don't. That
realization hits most people around 23. But I'm letting you in on the secret
early. So get to work. Maybe you can be the first generation whose greatest
regret from high school isn't how much time you wasted.
**Notes**
[1] A doctor friend warns that even this can give an inaccurate picture. "Who
knew how much time it would take up, how little autonomy one would have for
endless years of training, and how unbelievably annoying it is to carry a
beeper?"
[2] His best bet would probably be to become dictator and intimidate the NBA
into letting him play. So far the closest anyone has come is Secretary of
Labor.
[3] A day job is one you take to pay the bills so you can do what you really
want, like play in a band, or invent relativity.
Treating high school as a day job might actually make it easier for some
students to get good grades. If you treat your classes as a game, you won't be
demoralized if they seem pointless.
However bad your classes, you need to get good grades in them to get into a
decent college. And that _is_ worth doing, because universities are where a
lot of the clumps of smart people are these days.
[4] The second biggest regret was caring so much about unimportant things. And
especially about what other people thought of them.
I think what they really mean, in the latter case, is caring what random
people thought of them. Adults care just as much what other people think, but
they get to be more selective about the other people.
I have about thirty friends whose opinions I care about, and the opinion of
the rest of the world barely affects me. The problem in high school is that
your peers are chosen for you by accidents of age and geography, rather than
by you based on respect for their judgement.
[5] The key to wasting time is distraction. Without distractions it's too
obvious to your brain that you're not doing anything with it, and you start to
feel uncomfortable. If you want to measure how dependent you've become on
distractions, try this experiment: set aside a chunk of time on a weekend and
sit alone and think. You can have a notebook to write your thoughts down in,
but nothing else: no friends, TV, music, phone, IM, email, Web, games, books,
newspapers, or magazines. Within an hour most people will feel a strong
craving for distraction.
[6] I don't mean to imply that the only function of prep schools is to trick
admissions officers. They also generally provide a better education. But try
this thought experiment: suppose prep schools supplied the same superior
education but had a tiny (.001) negative effect on college admissions. How
many parents would still send their kids to them?
It might also be argued that kids who went to prep schools, because they've
learned more, _are_ better college candidates. But this seems empirically
false. What you learn in even the best high school is rounding error compared
to what you learn in college. Public school kids arrive at college with a
slight disadvantage, but they start to pull ahead in the sophomore year.
(I'm not saying public school kids are smarter than preppies, just that they
are within any given college. That follows necessarily if you agree prep
schools improve kids' admissions prospects.)
[7] Why does society foul you? Indifference, mainly. There are simply no
outside forces pushing high school to be good. The air traffic control system
works because planes would crash otherwise. Businesses have to deliver because
otherwise competitors would take their customers. But no planes crash if your
school sucks, and it has no competitors. High school isn't evil; it's random;
but random is pretty bad.
[8] And then of course there is money. It's not a big factor in high school,
because you can't do much that anyone wants. But a lot of great things were
created mainly to make money. Samuel Johnson said "no man but a blockhead ever
wrote except for money." (Many hope he was exaggerating.)
[9] Even college textbooks are bad. When you get to college, you'll find that
(with a few stellar exceptions) the textbooks are not written by the leading
scholars in the field they describe. Writing college textbooks is unpleasant
work, done mostly by people who need the money. It's unpleasant because the
publishers exert so much control, and there are few things worse than close
supervision by someone who doesn't understand what you're doing. This
phenomenon is apparently [even worse](http://www.edutopia.org/muddle-machine)
in the production of high school textbooks.
[10] Your teachers are always telling you to behave like adults. I wonder if
they'd like it if you did. You may be loud and disorganized, but you're very
docile compared to adults. If you actually started acting like adults, it
would be just as if a bunch of adults had been transposed into your bodies.
Imagine the reaction of an FBI agent or taxi driver or reporter to being told
they had to ask permission to go to the bathroom, and only one person could go
at
a time. To say nothing of the things you're taught. If a bunch of actual
adults suddenly found themselves trapped in high school, the first thing
they'd do is form a union and renegotiate all the rules with the
administration.
**Thanks** to Ingrid Bassett, Trevor Blackwell, Rich Draves, Dan Giffin, Sarah
Harlin, Jessica Livingston, Jackie McDonough, Robert Morris, Mark Nitzberg,
Lisa Randall, and Aaron Swartz for reading drafts of this, and to many others
for talking to me about high school.
July 2008
At this year's startup school, David Heinemeier Hansson gave a
[talk](http://www.omnisio.com/startupschool08/david-heinemeier-hansson-at-
startup-school-08) in which he suggested that startup founders should do
things the old fashioned way. Instead of hoping to get rich by building a
valuable company and then selling stock in a "liquidity event," founders
should start companies that make money and live off the revenues.
Sounds like a good plan. Let's think about the optimal way to do this.
One disadvantage of living off the revenues of your company is that you have
to keep running it. And as anyone who runs their own business can tell you,
that requires your complete attention. You can't just start a business and
check out once things are going well, or they stop going well surprisingly
fast.
The main economic motives of startup founders seem to be freedom and security.
They want enough money that (a) they don't have to worry about running out of
money and (b) they can spend their time how they want. Running your own
business offers neither. You certainly don't have freedom: no boss is so
demanding. Nor do you have security, because if you stop paying attention to
the company, its revenues go away, and with them your income.
The best case, for most people, would be if you could hire someone to manage
the company for you once you'd grown it to a certain size. Suppose you could
find a really good manager. Then you would have both freedom and security. You
could pay as little attention to the business as you wanted, knowing that your
manager would keep things running smoothly. And that being so, revenues would
continue to flow in, so you'd have security as well.
There will of course be some founders who wouldn't like that idea: the ones
who like running their company so much that there's nothing else they'd rather
do. But this group must be small. The way you succeed in most businesses is to
be fanatically attentive to customers' needs. What are the odds that your own
desires would coincide exactly with the demands of this powerful, external
force?
Sure, running your own company can be fairly interesting. Viaweb was more
interesting than any job I'd had before. And since I made much more money from
it, it offered the highest ratio of income to boringness of anything I'd done,
by orders of magnitude. But was it _the_ most interesting work I could imagine
doing? No.
Whether the number of founders in the same position approaches all of them or
is merely large, there are certainly a lot of them. For them the right
approach would be
to hand the company over to a professional manager eventually, if they could
find one who was good enough.
_____
So far so good. But what if your manager was hit by a bus? What you really
want is a management company to run your company for you. Then you don't
depend on any one person.
If you own rental property, there are companies you can hire to manage it for
you. Some will do everything, from finding tenants to fixing leaks. Of course,
running companies is a lot more complicated than managing rental property, but
let's suppose there were management companies that could do it for you. They'd
charge a lot, but wouldn't it be worth it? I'd sacrifice a large percentage of
the income for the extra peace of mind.
I realize what I'm describing already sounds too good to be true, but I can
think of a way to make it even more attractive. If company management
companies existed, there would be an additional service they could offer
clients: they could let them insure their returns by pooling their risk. After
all, even a perfect manager can't save a company when, as sometimes happens,
its whole market dies, just as property managers can't save you from the
building burning down. But a company that managed a large enough number of
companies could say to all its clients: we'll combine the revenues from all
your companies, and pay you your proportionate share.
If such management companies existed, they'd offer the maximum of freedom and
security. Someone would run your company for you, and you'd be protected even
if it happened to die.
Let's think about how such a management company might be organized. The
simplest way would be to have a new kind of stock representing the total pool
of companies they were managing. When you signed up, you'd trade your
company's stock for shares of this pool, in proportion to an estimate of your
company's value that you'd both agreed upon. Then you'd automatically get your
share of the returns of the whole pool.
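To make the share math concrete, here is a minimal sketch in Python of the
exchange just described. The `join_pool` helper and all the numbers are
hypothetical, purely for illustration:

```python
# Hypothetical sketch of the stock-for-pool-shares exchange described above.
# All names and numbers are illustrative, not a real financial instrument.

def join_pool(pool_value, pool_shares, company_value):
    """Issue new pool shares to a joining company, in proportion
    to the agreed-upon estimate of the company's value."""
    share_price = pool_value / pool_shares   # price implied by current pool
    new_shares = company_value / share_price
    return new_shares, pool_shares + new_shares

# A pool managing companies worth $100M, with 1M shares outstanding.
pool_value, pool_shares = 100_000_000, 1_000_000
# A founder's company is valued (by mutual agreement) at $5M.
my_shares, pool_shares = join_pool(pool_value, pool_shares, 5_000_000)
pool_value += 5_000_000

# The founder now owns my_shares / pool_shares of the pool,
# and receives that fraction of the combined revenues.
print(my_shares / pool_shares)  # ~0.0476, i.e. 5/105
```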
The catch is that because this kind of trade would be hard to undo, you
couldn't switch management companies. But there's a way they could fix that:
suppose all the company management companies got together and agreed to allow
their clients to exchange shares in all their pools. Then you could, in
effect, simultaneously choose all the management companies to run yours for
you, in whatever proportion you wanted, and change your mind later as often as
you wanted.
If such pooled-risk company management companies existed, signing up with one
would seem the ideal plan for most people following the route David advocated.
Good news: they do exist. What I've just described is an acquisition by a
public company.
_____
Unfortunately, though public acquirers are structurally identical to pooled-
risk company management companies, they don't think of themselves that way.
With a property management company, you can just walk in whenever you want and
say "manage my rental property for me" and they'll do it. Whereas acquirers
are, as of this writing, extremely fickle. Sometimes they're in a buying mood
and they'll overpay enormously; other times they're not interested. They're
like property management companies run by madmen. Or more precisely, by
Benjamin Graham's Mr. Market.
So while on average public acquirers behave like pooled-risk company managers,
you need a window of several years to get average case performance. If you
wait long enough (five years, say) you're likely to hit an up cycle where some
acquirer is hot to buy you. But you can't choose when it happens.
You can't assume investors will carry you for as long as you might have to
wait. Your company has to make money. Opinions are divided about how early to
focus on that. [Joe Kraus](http://susanitsa.wordpress.com/2006/11/08/the-joe-
kraus-qa-better-late/) says you should try charging customers right away. And
yet some of the most successful startups, including Google, ignored revenue at
first and concentrated exclusively on development. The answer probably depends
on the type of company you're starting. I can imagine some where trying to
make sales would be a good heuristic for product design, and others where it
would just be a distraction. The test is probably whether it helps you to
understand your users.
You can choose whichever revenue strategy you think is best for the type of
company you're starting, so long as you're profitable. Being profitable
ensures you'll get at least the average of the acquisition market—in which
public companies do behave as pooled-risk company management companies.
David isn't mistaken in saying you should start a company to live off its
revenues. The mistake is thinking this is somehow opposed to starting a
company and selling it. In fact, for most people the latter is merely the
optimal case of the former.
**Thanks** to Trevor Blackwell, Jessica Livingston, Michael Mandel, Robert
Morris, and Fred Wilson for reading drafts of this.
September 2012
A startup is a company designed to grow fast. Being newly founded does not in
itself make a company a startup. Nor is it necessary for a startup to work on
technology, or take venture funding, or have some sort of "exit." The only
essential thing is growth. Everything else we associate with startups follows
from growth.
If you want to start one it's important to understand that. Startups are so
hard that you can't be pointed off to the side and hope to succeed. You have
to know that growth is what you're after. The good news is, if you get growth,
everything else tends to fall into place. Which means you can use growth like
a compass to make almost every decision you face.
**Redwoods**
Let's start with a distinction that should be obvious but is often overlooked:
not every newly founded company is a startup. Millions of companies are
started every year in the US. Only a tiny fraction are startups. Most are
service businesses — restaurants, barbershops, plumbers, and so on. These are
not startups, except in a few unusual cases. A barbershop isn't designed to
grow fast. Whereas a search engine, for example, is.
When I say startups are designed to grow fast, I mean it in two senses. Partly
I mean designed in the sense of intended, because most startups fail. But I
also mean startups are different by nature, in the same way a redwood seedling
has a different destiny from a bean sprout.
That difference is why there's a distinct word, "startup," for companies
designed to grow fast. If all companies were essentially similar, but some
through luck or the efforts of their founders ended up growing very fast, we
wouldn't need a separate word. We could just talk about super-successful
companies and less successful ones. But in fact startups do have a different
sort of DNA from other businesses. Google is not just a barbershop whose
founders were unusually lucky and hard-working. Google was different from the
beginning.
To grow rapidly, you need to make something you can sell to a big market.
That's the difference between Google and a barbershop. A barbershop doesn't
scale.
For a company to grow really big, it must (a) make something lots of people
want, and (b) reach and serve all those people. Barbershops are doing fine in
the (a) department. Almost everyone needs their hair cut. The problem for a
barbershop, as for any retail establishment, is (b). A barbershop serves
customers in person, and few will travel far for a haircut. And even if they
did, the barbershop couldn't accommodate them. [1]
Writing software is a great way to solve (b), but you can still end up
constrained in (a). If you write software to teach Tibetan to Hungarian
speakers, you'll be able to reach most of the people who want it, but there
won't be many of them. If you make software to teach English to Chinese
speakers, however, you're in startup territory.
Most businesses are tightly constrained in (a) or (b). The distinctive feature
of successful startups is that they're not.
**Ideas**
It might seem that it would always be better to start a startup than an
ordinary business. If you're going to start a company, why not start the type
with the most potential? The catch is that this is a (fairly) efficient
market. If you write software to teach Tibetan to Hungarians, you won't have
much competition. If you write software to teach English to Chinese speakers,
you'll face ferocious competition, precisely because the prize is so much
larger. [2]
The constraints that limit ordinary companies also protect them. That's the
tradeoff. If you start a barbershop, you only have to compete with other local
barbers. If you start a search engine you have to compete with the whole
world.
The most important thing that the constraints on a normal business protect it
from is not competition, however, but the difficulty of coming up with new
ideas. If you open a bar in a particular neighborhood, as well as limiting
your potential and protecting you from competitors, that geographic constraint
also helps define your company. Bar + neighborhood is a sufficient idea for a
small business. Similarly for companies constrained in (a). Your niche both
protects and defines you.
Whereas if you want to start a startup, you're probably going to have to think
of something fairly novel. A startup has to make something it can deliver to a
large market, and ideas of that type are so valuable that all the obvious ones
are already taken.
That space of ideas has been so thoroughly picked over that a startup
generally has to work on something everyone else has overlooked. I was going
to write that one has to make a conscious effort to find ideas everyone else
has overlooked. But that's not how most startups get started. Usually
successful startups happen because the founders are sufficiently different
from other people that ideas few others can see seem obvious to them. Perhaps
later they step back and notice they've found an idea in everyone else's blind
spot, and from that point make a deliberate effort to stay there. [3] But at
the moment when successful startups get started, much of the innovation is
unconscious.
What's different about successful founders is that they can see different
problems. It's a particularly good combination both to be good at technology
and to face problems that can be solved by it, because technology changes so
rapidly that formerly bad ideas often become good without anyone noticing.
Steve Wozniak's problem was that he wanted his own computer. That was an
unusual problem to have in 1975. But technological change was about to make it
a much more common one. Because he not only wanted a computer but knew how to
build them, Wozniak was able to make himself one. And the problem he solved
for himself became one that Apple solved for millions of people in the coming
years. But by the time it was obvious to ordinary people that this was a big
market, Apple was already established.
Google has similar origins. Larry Page and Sergey Brin wanted to search the
web. But unlike most people they had the technical expertise both to notice
that existing search engines were not as good as they could be, and to know
how to improve them. Over the next few years their problem became everyone's
problem, as the web grew to a size where you didn't have to be a picky search
expert to notice the old algorithms weren't good enough. But as happened with
Apple, by the time everyone else realized how important search was, Google was
entrenched.
That's one connection between startup ideas and technology. Rapid change in
one area uncovers big, soluble problems in other areas. Sometimes the changes
are advances, and what they change is solubility. That was the kind of change
that yielded Apple; advances in chip technology finally let Steve Wozniak
design a computer he could afford. But in Google's case the most important
change was the growth of the web. What changed there was not solubility but
bigness.
The other connection between startups and technology is that startups create
new ways of doing things, and new ways of doing things are, in the broader
sense of the word, new technology. When a startup both begins with an idea
exposed by technological change and makes a product consisting of technology
in the narrower sense (what used to be called "high technology"), it's easy to
conflate the two. But the two connections are distinct and in principle one
could start a startup that was neither driven by technological change, nor
whose product consisted of technology except in the broader sense. [4]
**Rate**
How fast does a company have to grow to be considered a startup? There's no
precise answer to that. "Startup" is a pole, not a threshold. Starting one is
at first no more than a declaration of one's ambitions. You're committing not
just to starting a company, but to starting a fast growing one, and you're
thus committing to search for one of the rare ideas of that type. But at first
you have no more than commitment. Starting a startup is like being an actor in
that respect. "Actor" too is a pole rather than a threshold. At the beginning
of his career, an actor is a waiter who goes to auditions. Getting work makes
him a successful actor, but he doesn't only become an actor when he's
successful.
So the real question is not what growth rate makes a company a startup, but
what growth rate successful startups tend to have. For founders that's more
than a theoretical question, because it's equivalent to asking if they're on
the right path.
The growth of a successful startup usually has three phases:
1. There's an initial period of slow or no growth while the startup tries to figure out what it's doing.
2. As the startup figures out how to make something lots of people want and how to reach those people, there's a period of rapid growth.
3. Eventually a successful startup will grow into a big company. Growth will slow, partly due to internal limits and partly because the company is starting to bump up against the limits of the markets it serves. [5]
Together these three phases produce an S-curve. The phase whose growth defines
the startup is the second one, the ascent. Its length and slope determine how
big the company will be.
The slope is the company's growth rate. If there's one number every founder
should always know, it's the company's growth rate. That's the measure of a
startup. If you don't know that number, you don't even know if you're doing
well or badly.
When I first meet founders and ask what their growth rate is, sometimes they
tell me "we get about a hundred new customers a month." That's not a rate.
What matters is not the absolute number of new customers, but the ratio of new
customers to existing ones. If you're really getting a constant number of new
customers every month, you're in trouble, because that means your growth rate
is decreasing.
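A few lines of Python, with made-up numbers, show why: a constant hundred new
customers a month is a steadily falling growth rate.

```python
# Illustrative numbers only: a constant count of new customers per month
# is a falling growth rate, because the base keeps growing.

customers = 1000
for month in range(1, 7):
    new = 100                    # "about a hundred new customers a month"
    rate = new / customers       # growth rate = new / existing
    customers += new
    print(f"month {month}: growth rate = {rate:.1%}")
# 10.0%, 9.1%, 8.3%, 7.7%, 7.1%, 6.7% -- the rate keeps decreasing
```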
During Y Combinator we measure growth rate per week, partly because there is
so little time before Demo Day, and partly because startups early on need
frequent feedback from their users to tweak what they're doing. [6]
A good growth rate during YC is 5-7% a week. If you can hit 10% a week you're
doing exceptionally well. If you can only manage 1%, it's a sign you haven't
yet figured out what you're doing.
The best thing to measure the growth rate of is revenue. The next best, for
startups that aren't charging initially, is active users. That's a reasonable
proxy for revenue growth because whenever the startup does start trying to
make money, its revenues will probably be a constant multiple of active
users. [7]
**Compass**
We usually advise startups to pick a growth rate they think they can hit, and
then just try to hit it every week. The key word here is "just." If they
decide to grow at 7% a week and they hit that number, they're successful for
that week. There's nothing more they need to do. But if they don't hit it,
they've failed in the only thing that mattered, and should be correspondingly
alarmed.
Programmers will recognize what we're doing here. We're turning starting a
startup into an optimization problem. And anyone who has tried optimizing code
knows how wonderfully effective that sort of narrow focus can be. Optimizing
code means taking an existing program and changing it to use less of
something, usually time or memory. You don't have to think about what the
program should do, just make it faster. For most programmers this is very
satisfying work. The narrow focus makes it a sort of puzzle, and you're
generally surprised how fast you can solve it.
Focusing on hitting a growth rate reduces the otherwise bewilderingly
multifarious problem of starting a startup to a single problem. You can use
that target growth rate to make all your decisions for you; anything that gets
you the growth you need is ipso facto right. Should you spend two days at a
conference? Should you hire another programmer? Should you focus more on
marketing? Should you spend time courting some big customer? Should you add x
feature? Whatever gets you your target growth rate. [8]
Judging yourself by weekly growth doesn't mean you can look no more than a
week ahead. Once you experience the pain of missing your target one week (it
was the only thing that mattered, and you failed at it), you become interested
in anything that could spare you such pain in the future. So you'll be willing
for example to hire another programmer, who won't contribute to this week's
growth but perhaps in a month will have implemented some new feature that will
get you more users. But only if (a) the distraction of hiring someone won't
make you miss your numbers in the short term, and (b) you're sufficiently
worried about whether you can keep hitting your numbers without hiring someone
new.
It's not that you don't think about the future, just that you think about it
no more than necessary.
In theory this sort of hill-climbing could get a startup into trouble. They
could end up on a local maximum. But in practice that never happens. Having to
hit a growth number every week forces founders to act, and acting versus not
acting is the high bit of succeeding. Nine times out of ten, sitting around
strategizing is just a form of procrastination. Whereas founders' intuitions
about which hill to climb are usually better than they realize. Plus the
maxima in the space of startup ideas are not spiky and isolated. Most fairly
good ideas are adjacent to even better ones.
The fascinating thing about optimizing for growth is that it can actually
discover startup ideas. You can use the need for growth as a form of
evolutionary pressure. If you start out with some initial plan and modify it
as necessary to keep hitting, say, 10% weekly growth, you may end up with a
quite different company than you meant to start. But anything that grows
consistently at 10% a week is almost certainly a better idea than you started
with.
There's a parallel here to small businesses. Just as the constraint of being
located in a particular neighborhood helps define a bar, the constraint of
growing at a certain rate can help define a startup.
You'll generally do best to follow that constraint wherever it leads rather
than being influenced by some initial vision, just as a scientist is better
off following the truth wherever it leads rather than being influenced by what
he wishes were the case. When Richard Feynman said that the imagination of
nature was greater than the imagination of man, he meant that if you just keep
following the truth you'll discover cooler things than you could ever have
made up. For startups, growth is a constraint much like truth. Every
successful startup is at least partly a product of the imagination of growth.
[9]
**Value**
It's hard to find something that grows consistently at several percent a week,
but if you do you may have found something surprisingly valuable. If we
project forward we see why.
| weekly | yearly |
|--------|--------|
| 1%     | 1.7x   |
| 2%     | 2.8x   |
| 5%     | 12.6x  |
| 7%     | 33.7x  |
| 10%    | 142.0x |
A company that grows at 1% a week will grow 1.7x a year, whereas a company
that grows at 5% a week will grow 12.6x. A company making $1000 a month (a
typical number early in YC) and growing at 1% a week will 4 years later be
making $7900 a month, which is less than a good programmer makes in salary in
Silicon Valley. A startup that grows at 5% a week will in 4 years be making
$25 million a month. [10]
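If you want to check these figures yourself, here is a short sketch that
reproduces them, assuming 52 weeks to the year:

```python
# Reproducing the table and projections above, assuming 52 weeks per year.
for weekly in [0.01, 0.02, 0.05, 0.07, 0.10]:
    print(f"{weekly:.0%}/week -> {(1 + weekly) ** 52:.1f}x/year")
# 1% -> 1.7x, 2% -> 2.8x, 5% -> 12.6x, 7% -> 33.7x, 10% -> 142.0x

revenue = 1000                     # $1000/month, typical early in YC
weeks = 4 * 52
print(revenue * 1.01 ** weeks)     # ~7,900: still small after 4 years
print(revenue * 1.05 ** weeks)     # ~25,600,000: $25M+/month after 4 years
```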
Our ancestors must rarely have encountered cases of exponential growth,
because our intuitions are no guide here. What happens to fast growing
startups tends to surprise even the founders.
Small variations in growth rate produce qualitatively different outcomes.
That's why there's a separate word for startups, and why startups do things
that ordinary companies don't, like raising money and getting acquired. And,
strangely enough, it's also why they fail so frequently.
Considering how valuable a successful startup can become, anyone familiar with
the concept of expected value would be surprised if the failure rate weren't
high. If a successful startup could make a founder $100 million, then even if
the chance of succeeding were only 1%, the expected value of starting one
would be $1 million. And the probability of a group of sufficiently smart and
determined founders succeeding on that scale might be significantly over 1%.
For the right people — e.g. the young Bill Gates — the probability might be
20% or even 50%. So it's not surprising that so many want to take a shot at
it. In an efficient market, the number of failed startups should be
proportionate to the size of the successes. And since the latter is huge the
former should be too. [11]
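Stated directly as a calculation, using the numbers above:

```python
# The expected value arithmetic from the text.
payoff = 100_000_000                 # what a big success makes a founder
for p in [0.01, 0.20, 0.50]:         # chance of succeeding on that scale
    print(f"p = {p:.0%}: expected value = ${payoff * p:,.0f}")
# p = 1%: $1,000,000   p = 20%: $20,000,000   p = 50%: $50,000,000
```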
What this means is that at any given time, the great majority of startups will
be working on something that's never going to go anywhere, and yet glorifying
their doomed efforts with the grandiose title of "startup."
This doesn't bother me. It's the same with other high-beta vocations, like
being an actor or a novelist. I've long since gotten used to it. But it seems
to bother a lot of people, particularly those who've started ordinary
businesses. Many are annoyed that these so-called startups get all the
attention, when hardly any of them will amount to anything.
If they stepped back and looked at the whole picture they might be less
indignant. The mistake they're making is that by basing their opinions on
anecdotal evidence they're implicitly judging by the median rather than the
average. If you judge by the median startup, the whole concept of a startup
seems like a fraud. You have to invent a bubble to explain why founders want
to start them or investors want to fund them. But it's a mistake to use the
median in a domain with so much variation. If you look at the average outcome
rather than the median, you can understand why investors like them, and why,
if they aren't median people, it's a rational choice for founders to start
them.
**Deals**
Why do investors like startups so much? Why are they so hot to invest in
photo-sharing apps, rather than solid money-making businesses? Not only for
the obvious reason.
The test of any investment is the ratio of return to risk. Startups pass that
test because although they're appallingly risky, the returns when they do
succeed are so high. But that's not the only reason investors like startups.
An ordinary slower-growing business might have just as good a ratio of return
to risk, if both were lower. So why are VCs interested only in high-growth
companies? The reason is that they get paid by getting their capital back,
ideally after the startup IPOs, or failing that when it's acquired.
The other way to get returns from an investment is in the form of dividends.
Why isn't there a parallel VC industry that invests in ordinary companies in
return for a percentage of their profits? Because it's too easy for people who
control a private company to funnel its revenues to themselves (e.g. by buying
overpriced components from a supplier they control) while making it look like
the company is making little profit. Anyone who invested in private companies
in return for dividends would have to pay close attention to their books.
The reason VCs like to invest in startups is not simply the returns, but also
because such investments are so easy to oversee. The founders can't enrich
themselves without also enriching the investors. [12]
Why do founders want to take the VCs' money? Growth, again. The constraint
between good ideas and growth operates in both directions. It's not merely
that you need a scalable idea to grow. If you have such an idea and don't grow
fast enough, competitors will. Growing too slowly is particularly dangerous in
a business with network effects, which the best startups usually have to some
degree.
Almost every company needs some amount of funding to get started. But startups
often raise money even when they are or could be profitable. It might seem
foolish to sell stock in a profitable company for less than you think it will
later be worth, but it's no more foolish than buying insurance. Fundamentally
that's how the most successful startups view fundraising. They could grow the
company on its own revenues, but the extra money and help supplied by VCs will
let them grow even faster. Raising money lets you _choose_ your growth rate.
Money to grow faster is always at the command of the most successful startups,
because the VCs need them more than they need the VCs. A profitable startup
could if it wanted just grow on its own revenues. Growing slower might be
slightly dangerous, but chances are it wouldn't kill them. Whereas VCs need to
invest in startups, and in particular the most successful startups, or they'll
be out of business. Which means that any sufficiently promising startup will
be offered money on terms they'd be crazy to refuse. And yet because of the
scale of the successes in the startup business, VCs can still make money from
such investments. You'd have to be crazy to believe your company was going to
become as valuable as a high growth rate can make it, but some do.
Pretty much every successful startup will get acquisition offers too. Why?
What is it about startups that makes other companies want to buy them? [13]
Fundamentally the same thing that makes everyone else want the stock of
successful startups: a rapidly growing company is valuable. It's a good thing
eBay bought Paypal, for example, because Paypal is now responsible for 43% of
their sales and probably more of their growth.
But acquirers have an additional reason to want startups. A rapidly growing
company is not merely valuable, but dangerous. If it keeps expanding, it might
expand into the acquirer's own territory. Most product acquisitions have some
component of fear. Even if an acquirer isn't threatened by the startup itself,
they might be alarmed at the thought of what a competitor could do with it.
And because startups are in this sense doubly valuable to acquirers, acquirers
will often pay more than an ordinary investor would. [14]
**Understand**
The combination of founders, investors, and acquirers forms a natural
ecosystem. It works so well that those who don't understand it are driven to
invent conspiracy theories to explain how neatly things sometimes turn out.
Just as our ancestors did to explain the apparently too neat workings of the
natural world. But there is no secret cabal making it all work.
If you start from the mistaken assumption that Instagram was worthless, you
have to invent a secret boss to force Mark Zuckerberg to buy it. To anyone who
knows Mark Zuckerberg, that is the reductio ad absurdum of the initial
assumption. The reason he bought Instagram was that it was valuable and
dangerous, and what made it so was growth.
If you want to understand startups, understand growth. Growth drives
everything in this world. Growth is why startups usually work on technology —
because ideas for fast growing companies are so rare that the best way to find
new ones is to discover those recently made viable by change, and technology
is the best source of rapid change. Growth is why it's a rational choice
economically for so many founders to try starting a startup: growth makes the
successful companies so valuable that the expected value is high even though
the risk is too. Growth is why VCs want to invest in startups: not just
because the returns are high but also because generating returns from capital
gains is easier to manage than generating returns from dividends. Growth
explains why the most successful startups take VC money even if they don't
need to: it lets them choose their growth rate. And growth explains why
successful startups almost invariably get acquisition offers. To acquirers a
fast-growing company is not merely valuable but dangerous too.
It's not just that if you want to succeed in some domain, you have to
understand the forces driving it. Understanding growth is what starting a
startup _consists_ of. What you're really doing (and to the dismay of some
observers, all you're really doing) when you start a startup is committing to
solve a harder type of problem than ordinary businesses do. You're committing
to search for one of the rare ideas that generates rapid growth. Because these
ideas are so valuable, finding one is hard. The startup is the embodiment of
your discoveries so far. Starting a startup is thus very much like deciding to
be a research scientist: you're not committing to solve any specific problem;
you don't know for sure which problems are soluble; but you're committing to
try to discover something no one knew before. A startup founder is in effect
an economic research scientist. Most don't discover anything that remarkable,
but some discover relativity.
**Notes**
[1] Strictly speaking it's not lots of customers you need but a big market,
meaning a high product of number of customers times how much they'll pay. But
it's dangerous to have too few customers even if they pay a lot, or the power
that individual customers have over you could turn you into a de facto
consulting firm. So whatever market you're in, you'll usually do best to err
on the side of making the broadest type of product for it.
[2] One year at Startup School David Heinemeier Hansson encouraged programmers
who wanted to start businesses to use a restaurant as a model. What he meant,
I believe, is that it's fine to start software companies constrained in (a) in
the same way a restaurant is constrained in (b). I agree. Most people should
not try to start startups.
[3] That sort of stepping back is one of the things we focus on at Y
Combinator. It's common for founders to have discovered something intuitively
without understanding all its implications. That's probably true of the
biggest discoveries in any field.
[4] I got it wrong in ["How to Make Wealth"](wealth.html) when I said that a
startup was a small company that takes on a hard technical problem. That is
the most common recipe but not the only one.
[5] In principle companies aren't limited by the size of the markets they
serve, because they could just expand into new markets. But there seem to be
limits on the ability of big companies to do that. Which means the slowdown
that comes from bumping up against the limits of one's markets is ultimately
just another way in which internal limits are expressed.
It may be that some of these limits could be overcome by changing the shape of
the organization — specifically by sharding it.
[6] This is, obviously, only for startups that have already launched or can
launch during YC. A startup building a new database will probably not do that.
On the other hand, launching something small and then using growth rate as
evolutionary pressure is such a valuable technique that any company that could
start this way probably should.
[7] If the startup is taking the Facebook/Twitter route and building something
they hope will be very popular but from which they don't yet have a definite
plan to make money, the growth rate has to be higher, even though it's a proxy
for revenue growth, because such companies need huge numbers of users to
succeed at all.
Beware too of the edge case where something spreads rapidly but the churn is
high as well, so that you have good net growth till you run through all the
potential users, at which point it suddenly stops.
[8] Within YC when we say it's ipso facto right to do whatever gets you
growth, it's implicit that this excludes trickery like buying users for more
than their lifetime value, counting users as active when they're really not,
bleeding out invites at a regularly increasing rate to manufacture a perfect
growth curve, etc. Even if you were able to fool investors with such tricks,
you'd ultimately be hurting yourself, because you're throwing off your own
compass.
[9] Which is why it's such a dangerous mistake to believe that successful
startups are simply the embodiment of some brilliant initial idea. What you're
looking for initially is not so much a great idea as an idea that could evolve
into a great one. The danger is that promising ideas are not merely blurry
versions of great ones. They're often different in kind, because the early
adopters you evolve the idea upon have different needs from the rest of the
market. For example, the idea that evolves into Facebook isn't merely a subset
of Facebook; the idea that evolves into Facebook is a site for Harvard
undergrads.
[10] What if a company grew at 1.7x a year for a really long time? Could it
not grow just as big as any successful startup? In principle yes, of course.
If our hypothetical company making $1000 a month grew at 1% a week for 19
years, it would grow as big as a company growing at 5% a week for 4 years. But
while such trajectories may be common in, say, real estate development, you
don't see them much in the technology business. In technology, companies that
grow slowly tend not to grow as big.
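Solving for the crossover directly (assuming 52-week years) lands close to the
figure cited above:

```python
import math

# Solve (1.01)^(52 t) = (1.05)^(52 * 4) for t: t = 4 * ln(1.05) / ln(1.01)
t = 4 * math.log(1.05) / math.log(1.01)
print(t)   # ~19.6 years for 1%/week to match 4 years at 5%/week
```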
[11] Any expected value calculation varies from person to person depending on
their utility function for money. I.e. the first million is worth more to most
people than subsequent millions. How much more depends on the person. For
founders who are younger or more ambitious the utility function is flatter.
Which is probably part of the reason the founders of the most successful
startups of all tend to be on the young side.
[12] More precisely, this is the case in the biggest winners, which is where
all the returns come from. A startup founder could pull the same trick of
enriching himself at the company's expense by selling them overpriced
components. But it wouldn't be worth it for the founders of Google to do that.
Only founders of failing startups would even be tempted, but those are
writeoffs from the VCs' point of view anyway.
[13] Acquisitions fall into two categories: those where the acquirer wants the
business, and those where the acquirer just wants the employees. The latter
type is sometimes called an HR acquisition. Though nominally acquisitions and
sometimes on a scale that has a significant effect on the expected value
calculation for potential founders, HR acquisitions are viewed by acquirers as
more akin to hiring bonuses.
[14] I once explained this to some founders who had recently arrived from
Russia. They found it novel that if you threatened a company they'd pay a
premium for you. "In Russia they just kill you," they said, and they were only
partly joking. Economically, the fact that established companies can't simply
eliminate new competitors may be one of the most valuable aspects of the rule
of law. And so to the extent we see incumbents suppressing competitors via
regulations or patent suits, we should worry, not because it's a departure
from the rule of law per se but from what the rule of law is aiming at.
**Thanks** to Sam Altman, Marc Andreessen, Paul Buchheit, Patrick Collison,
Jessica Livingston, Geoff Ralston, and Harj Taggar for reading drafts of this.
May 2021
Most people think of nerds as quiet, diffident people. In ordinary social
situations they are — as quiet and diffident as the star quarterback would be
if he found himself in the middle of a physics symposium. And for the same
reason: they are fish out of water. But the apparent diffidence of nerds is an
illusion due to the fact that when non-nerds observe them, it's usually in
ordinary social situations. In fact some nerds are quite fierce.
The fierce nerds are a small but interesting group. They are as a rule
extremely competitive — more competitive, I'd say, than highly competitive
non-nerds. Competition is more personal for them. Partly perhaps because
they're not emotionally mature enough to distance themselves from it, but also
because there's less randomness in the kinds of competition they engage in,
and they are thus more justified in taking the results personally.
Fierce nerds also tend to be somewhat overconfident, especially when young. It
might seem like it would be a disadvantage to be mistaken about one's
abilities, but empirically it isn't. Up to a point, confidence is a self-
fulfilling prophecy.
Another quality you find in most fierce nerds is intelligence. Not all nerds
are smart, but the fierce ones are always at least moderately so. If they
weren't, they wouldn't have the confidence to be fierce. [1]
There's also a natural connection between nerdiness and [_independent-
mindedness_](think.html). It's hard to be independent-minded without being
somewhat socially awkward, because conventional beliefs are so often mistaken,
or at least arbitrary. No one who was both independent-minded and ambitious
would want to waste the effort it takes to fit in. And the independent-
mindedness of the fierce nerds will obviously be of the
[_aggressive_](conformism.html) rather than the passive type: they'll be
annoyed by rules, rather than dreamily unaware of them.
I'm less sure why fierce nerds are impatient, but most seem to be. You notice
it first in conversation, where they tend to interrupt you. This is merely
annoying, but in the more promising fierce nerds it's connected to a deeper
impatience about solving problems. Perhaps the competitiveness and impatience
of fierce nerds are not separate qualities, but two manifestations of a single
underlying drivenness.
When you combine all these qualities in sufficient quantities, the result is
quite formidable. The most vivid example of fierce nerds in action may be
James Watson's _The Double Helix_. The first sentence of the book is "I have
never seen Francis Crick in a modest mood," and the portrait he goes on to
paint of Crick is the quintessential fierce nerd: brilliant, socially awkward,
competitive, independent-minded, overconfident. But so is the implicit
portrait he paints of himself. Indeed, his lack of social awareness makes both
portraits that much more realistic, because he baldly states all sorts of
opinions and motivations that a smoother person would conceal. And moreover
it's clear from the story that Crick and Watson's fierce nerdiness was
integral to their success. Their independent-mindedness caused them to
consider approaches that most others ignored, their overconfidence allowed
them to work on problems they only half understood (they were literally
described as "clowns" by one eminent insider), and their impatience and
competitiveness got them to the answer ahead of two other groups that would
otherwise have found it within the next year, if not the next several months.
[2]
The idea that there could be fierce nerds is an unfamiliar one not just to
many normal people but even to some young nerds. Especially early on, nerds
spend so much of their time in ordinary social situations and so little doing
real work that they get a lot more evidence of their awkwardness than their
power. So there will be some who read this description of the fierce nerd and
realize "Hmm, that's me." And it is to you, young fierce nerd, that I now
turn.
I have some good news, and some bad news. The good news is that your
fierceness will be a great help in solving difficult problems. And not just
the kind of scientific and technical problems that nerds have traditionally
solved. As the world progresses, the number of things you can win at by
getting the right answer increases. Recently [_getting rich_](richnow.html)
became one of them: 7 of the 8 richest people in America are now fierce nerds.
Indeed, being a fierce nerd is probably even more helpful in business than in
nerds' original territory of scholarship. Fierceness seems optional there.
Darwin for example doesn't seem to have been especially fierce. Whereas it's
impossible to be the CEO of a company over a certain size without being
fierce, so now that nerds can win at business, fierce nerds will increasingly
monopolize the really big successes.
The bad news is that if it's not exercised, your fierceness will turn to
bitterness, and you will become an intellectual playground bully: the grumpy
sysadmin, the forum troll, the [_hater_](fh.html), the shooter down of [_new
ideas_](newideas.html).
How do you avoid this fate? Work on ambitious projects. If you succeed, it
will bring you a kind of satisfaction that neutralizes bitterness. But you
don't need to have succeeded to feel this; merely working on hard projects
gives most fierce nerds some feeling of satisfaction. And those it doesn't, it
at least keeps busy. [3]
Another solution may be to somehow turn off your fierceness, by devoting
yourself to meditation or psychotherapy or something like that. Maybe that's
the right answer for some people. I have no idea. But it doesn't seem the
optimal solution to me. If you're given a sharp knife, it seems to me better
to use it than to blunt its edge to avoid cutting yourself.
If you do choose the ambitious route, you'll have a tailwind behind you. There
has never been a better time to be a nerd. In the past century we've seen a
continuous transfer of power from dealmakers to technicians — from the
charismatic to the competent — and I don't see anything on the horizon that
will end it. At least not till the nerds end it themselves by bringing about
the singularity.
**Notes**
[1] To be a nerd is to be socially awkward, and there are two distinct ways to
do that: to be playing the same game as everyone else, but badly, and to be
playing a different game. The smart nerds are the latter type.
[2] The same qualities that make fierce nerds so effective can also make them
very annoying. Fierce nerds would do well to remember this, and (a) try to
keep a lid on it, and (b) seek out organizations and types of work where
getting the right answer matters more than preserving social harmony. In
practice that means small groups working on hard problems. Which fortunately
is the most fun kind of environment anyway.
[3] If success neutralizes bitterness, why are there some people who are at
least moderately successful and yet still quite bitter? Because people's
potential bitterness varies depending on how naturally bitter their
personality is, and how ambitious they are: someone who's naturally very
bitter will still have a lot left after success neutralizes some of it, and
someone who's very ambitious will need proportionally more success to satisfy
that ambition.
So the worst-case scenario is someone who's both naturally bitter and
extremely ambitious, and yet only moderately successful.
**Thanks** to Trevor Blackwell, Steve Blank, Patrick Collison, Jessica
Livingston, Amjad Masad, and Robert Morris for reading drafts of this.
* * *
December 2014
Many startups go through a point a few months before they die where although
they have a significant amount of money in the bank, they're also losing a lot
each month, and revenue growth is either nonexistent or mediocre. The company
has, say, 6 months of runway. Or to put it more brutally, 6 months before
they're out of business. They expect to avoid that by raising more from
investors. [1]
That last sentence is the fatal one.
There may be nothing founders are so prone to delude themselves about as how
interested investors will be in giving them additional funding. It's hard to
convince investors the first time too, but founders expect that. What bites
them the second time is a confluence of three forces:
1. The company is spending more now than it did the first time it raised money.
2. Investors have much higher standards for companies that have already raised money.
3. The company is now starting to read as a failure. The first time it raised money, it was neither a success nor a failure; it was too early to ask. Now it's possible to ask that question, and the default answer is failure, because at this point that is the default outcome.
I'm going to call the situation I described in the first paragraph "the fatal
pinch." I try to resist coining phrases, but making up a name for this
situation may snap founders into realizing when they're in it.
One of the things that makes the fatal pinch so dangerous is that it's self-
reinforcing. Founders overestimate their chances of raising more money, and so
are slack about reaching profitability, which further decreases their chances
of raising money.
Now that you know about the fatal pinch, how do you avoid it? Y Combinator
tells founders who raise money to act as if it's the last they'll ever get.
Because the self-reinforcing nature of this situation works the other way too:
the less you need further investment, the easier it is to get.
What do you do if you're already in the fatal pinch? The first step is to re-
evaluate the probability of raising more money. I will now, by an amazing feat
of clairvoyance, do this for you: the probability is zero. [2]
Three options remain: you can shut down the company, you can increase how much
you make, and you can decrease how much you spend.
You should shut down the company if you're certain it will fail no matter what
you do. Then at least you can give back the money you have left, and save
yourself however many months you would have spent riding it down.
Companies rarely _have_ to fail though. What I'm really doing here is giving
you the option of admitting you've already given up.
If you don't want to shut down the company, that leaves increasing revenues
and decreasing expenses. In most startups, expenses = people, and decreasing
expenses = firing people. [3] Deciding to fire people is usually hard, but
there's one case in which it shouldn't be: when there are people you already
know you should fire but you're in denial about it. If so, now's the time.
If that makes you profitable, or will enable you to make it to profitability
on the money you have left, you've avoided the immediate danger.
Otherwise you have three options: you either have to fire good people, get
some or all of the employees to take less salary for a while, or increase
revenues.
Getting people to take less salary is a weak solution that will only work when
the problem isn't too bad. If your current trajectory won't quite get you to
profitability but you can get over the threshold by cutting salaries a little,
you might be able to make the case to everyone for doing it. Otherwise you're
probably just postponing the problem, and that will be obvious to the people
whose salaries you're proposing to cut. [4]
Which leaves two options, firing good people and making more money. While
trying to balance them, keep in mind the eventual goal: to be a successful
product company in the sense of having a single thing lots of people use.
You should lean more toward firing people if the source of your trouble is
overhiring. If you went out and hired 15 people before you even knew what you
were building, you've created a broken company. You need to figure out what
you're building, and it will probably be easier to do that with a handful of
people than 15. Plus those 15 people might not even be the ones you need for
whatever you end up building. So the solution may be to shrink and then figure
out what direction to grow in. After all, you're not doing those 15 people any
favors if you fly the company into the ground with them aboard. They'll all lose
their jobs eventually, along with all the time they expended on this doomed
company.
Whereas if you only have a handful of people, it may be better to focus on
trying to make more money. It may seem facile to suggest a startup make more
money, as if that could be done for the asking. Usually a startup is already
trying as hard as it can to sell whatever it sells. What I'm suggesting here
is not so much to try harder to make money but to try to make money in a
different way. For example, if you have only one person selling while the rest
are writing code, consider having everyone work on selling. What good will
more code do you when you're out of business? If you have to write code to
close a certain deal, go ahead; that follows from everyone working on selling.
But only work on whatever will get you the most revenue the soonest.
Another way to make money differently is to sell different things, and in
particular to do more consultingish work. I say consultingish because there is
a long slippery slope from making products to pure consulting, and you don't
have to go far down it before you start to offer something really attractive
to customers. Although your product may not be very appealing yet, if you're a
startup your programmers will often be way better than the ones your customers
have. Or you may have expertise in some new field they don't understand. So if
you change your sales conversations just a little from "do you want to buy our
product?" to "what do you need that you'd pay a lot for?" you may find it's
suddenly a lot easier to extract money from customers.
Be ruthlessly mercenary when you start doing this, though. You're trying to
save your company from death here, so make customers pay a lot, quickly. And
to the extent you can, try to avoid the worst pitfalls of consulting. The
ideal thing might be if you built a precisely defined derivative version of
your product for the customer, and it was otherwise a straight product sale.
You keep the IP, and there's no billing by the hour.
In the best case, this consultingish work may not be just something you do to
survive, but may turn out to be the [thing-that-doesn't-scale](ds.html) that
defines your company. Don't expect it to be, but as you dive into individual
users' needs, keep your eyes open for narrow openings that have wide vistas
beyond.
There is usually so much demand for custom work that unless you're really
incompetent there has to be some point down the slope of consulting at which
you can survive. But I didn't use the term slippery slope by accident;
customers' insatiable demand for custom work will always be pushing you toward
the bottom. So while you'll probably survive, the problem now becomes to
survive with the least damage and distraction.
The good news is, plenty of successful startups have passed through near-death
experiences and gone on to flourish. You just have to realize in time that
you're near death. And if you're in the fatal pinch, you are.
**Notes**
[1] There are a handful of companies that can't reasonably expect to make
money for the first year or two, because what they're building takes so long.
For these companies substitute "progress" for "revenue growth." You're not one
of these companies unless your initial investors agreed in advance that you
were. And frankly even these companies wish they weren't, because the
illiquidity of "progress" puts them at the mercy of investors.
[2] There's a variant of the fatal pinch where your existing investors help
you along by promising to invest more. Or rather, where you read them as
promising to invest more, while they think they're just mentioning the
possibility. The way to solve this problem, if you have 8 months of runway or
less, is to try to get the money right now. Then you'll either get the money,
in which case (immediate) problem solved, or at least prevent your investors
from helping you to remain in denial about your fundraising prospects.
[3] Obviously, if you have significant expenses other than salaries that you
can eliminate, do it now.
[4] Unless of course the source of the problem is that you're paying
yourselves high salaries. If by cutting the founders' salaries to the minimum
you need, you can make it to profitability, you should. But it's a bad sign if
you needed to read this to realize that.
**Thanks** to Sam Altman, Paul Buchheit, Jessica Livingston, and Geoff Ralston
for reading drafts of this.
* * *
June 2013
_(This talk was written for an audience of investors.)_
Y Combinator has now funded 564 startups including the current batch, which
has 53. The total valuation of the 287 that have valuations (either by raising
an equity round, getting acquired, or dying) is about $11.7 billion, and the
511 prior to the current batch have collectively raised about $1.7 billion.
[1]
As usual those numbers are dominated by a few big winners. The top 10 startups
account for 8.6 of that 11.7 billion. But there is a peloton of younger
startups behind them. There are about 40 more that have a shot at being really
big.
Things got a little out of hand last summer when we had 84 companies in the
batch, so we tightened up our filter to decrease the batch size. [2] Several
journalists have tried to interpret that as evidence for some macro story they
were telling, but the reason had nothing to do with any external trend. The
reason was that we discovered we were using an n² algorithm, and we needed to
buy time to fix it. Fortunately we've come up with several techniques for
sharding YC, and the problem now seems to be fixed. With a new, more scalable
model and only 53 companies, the current batch feels like a walk in the park.
I'd guess we can grow another 2 or 3x before hitting the next bottleneck. [3]
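The essay doesn't say where the n² cost came from, but a plausible reading is
that some overhead grew with the number of pairs of companies in a batch.
Here's a minimal sketch of the arithmetic under that assumption (the numbers
and the pairwise model are illustrative, not YC's actual bottleneck):

```python
# Hypothetical: assume the quadratic cost came from pairwise interactions
# among the n companies in a batch (the essay doesn't specify).
def pairwise_cost(n):
    return n * (n - 1) // 2  # number of pairs grows roughly as n^2 / 2

def sharded_cost(n, shard_size):
    # pay the pairwise cost only within each shard, not across the whole batch
    full, rest = divmod(n, shard_size)
    return full * pairwise_cost(shard_size) + pairwise_cost(rest)

print(pairwise_cost(84))     # 3486 pairs in one undivided batch of 84
print(sharded_cost(84, 21))  # 840 pairs across four shards of 21
```

Sharding trades one big batch for several smaller ones, which is presumably
why it buys another 2 or 3x of growth before the next bottleneck.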
One consequence of funding such a large number of startups is that we see
trends early. And since fundraising is one of the main things we help startups
with, we're in a good position to notice trends in investing.
I'm going to take a shot at describing where these trends are leading. Let's
start with the most basic question: will the future be better or worse than
the past? Will investors, in the aggregate, make more money or less?
I think more. There are multiple forces at work, some of which will decrease
returns, and some of which will increase them. I can't predict for sure which
forces will prevail, but I'll describe them and you can decide for yourself.
There are two big forces driving change in startup funding: it's becoming
cheaper to start a startup, and startups are becoming a more normal thing to
do.
When I graduated from college in 1986, there were essentially two options: get
a job or go to grad school. Now there's a third: start your own company.
That's a big change. In principle it was possible to start your own company in
1986 too, but it didn't seem like a real possibility. It seemed possible to
start a consulting company, or a niche product company, but it didn't seem
possible to start a company that would become big. [4]
That kind of change, from 2 paths to 3, is the sort of big social shift that
only happens once every few generations. I think we're still at the beginning
of this one. It's hard to predict how big a deal it will be. As big a deal as
the Industrial Revolution? Maybe. Probably not. But it will be a big enough
deal that it takes almost everyone by surprise, because those big social
shifts always do.
One thing we can say for sure is that there will be a lot more startups. The
monolithic, hierarchical companies of the mid 20th century are being
[replaced](highres.html) by networks of smaller companies. This process is not
just something happening now in Silicon Valley. It started decades ago, and
it's happening as far afield as the car industry. It has a long way to run.
[5]
The other big driver of change is that startups are becoming cheaper to start.
And in fact the two forces are related: the decreasing cost of starting a
startup is one of the reasons startups are becoming a more normal thing to do.
The fact that startups need less money means founders will increasingly have
the upper hand over investors. You still need just as much of their energy and
imagination, but they don't need as much of your money. Because founders have
the upper hand, they'll retain an increasingly large share of the stock in,
and [control of](control.html), their companies. Which means investors will
get less stock and less control.
Does that mean investors will make less money? Not necessarily, because there
will be more good startups. The total amount of desirable startup stock
available to investors will probably increase, because the number of desirable
startups will probably grow faster than the percentage they sell to investors
shrinks.
There's a rule of thumb in the VC business that there are about 15 companies a
year that will be really successful. Although a lot of investors unconsciously
treat this number as if it were some sort of cosmological constant, I'm
certain it isn't. There are probably limits on the rate at which technology
can develop, but that's not the limiting factor now. If it were, each
successful startup would be founded the month it became possible, and that is
not the case. Right now the limiting factor on the number of big hits is the
number of sufficiently good founders starting companies, and that number can
and will increase. There are still a lot of people who'd make great founders
who never end up starting a company. You can see that from how randomly some
of the most successful startups got started. So many of the biggest startups
almost didn't happen that there must be a lot of equally good startups that
actually didn't happen.
There might be 10x or even 50x more good founders out there. As more of them
go ahead and start startups, those 15 big hits a year could easily become 50
or even 100. [6]
What about returns, though? Are we heading for a world in which returns will
be pinched by increasingly high valuations? I think the top firms will
actually make more money than they have in the past. High returns don't come
from investing at low valuations. They come from investing in the companies
that do really well. So if there are more of those to be had each year, the
best pickers should have more hits.
This means there should be more variability in the VC business. The firms that
can recognize and attract the best startups will do even better, because there
will be more of them to recognize and attract. Whereas the bad firms will get
the leftovers, as they do now, and yet pay a higher price for them.
Nor do I think it will be a problem that founders keep control of their
companies for longer. The empirical evidence on that is already clear:
investors make more money as founders' bitches than their bosses. Though
somewhat humiliating, this is actually good news for investors, because it
takes less time to serve founders than to micromanage them.
What about angels? I think there is a lot of opportunity there. It used to
suck to be an angel investor. You couldn't get access to the best deals,
unless you got lucky like Andy Bechtolsheim, and when you did invest in a
startup, VCs might try to strip you of your stock when they arrived later. Now
an angel can go to something like Demo Day or AngelList and have access to the
same deals VCs do. And the days when VCs could wash angels out of the cap
table are long gone.
I think one of the biggest unexploited opportunities in startup investing
right now is angel-sized investments made quickly. Few investors understand
the cost that raising money from them imposes on startups. When the company
consists only of the founders, everything grinds to a halt during fundraising,
which can easily take 6 weeks. The current high cost of fundraising means
there is room for low-cost investors to undercut the rest. And in this
context, low-cost means deciding quickly. If there were a reputable investor
who invested $100k on good terms and promised to decide yes or no within 24
hours, they'd get access to almost all the best deals, because every good
startup would approach them first. It would be up to them to pick, because
every bad startup would approach them first too, but at least they'd see
everything. Whereas if an investor is notorious for taking a long time to make
up their mind or negotiating a lot about valuation, founders will save them
for last. And in the case of the most promising startups, which tend to have
an easy time raising money, last can easily become never.
Will the number of big hits grow linearly with the total number of new
startups? Probably not, for two reasons. One is that the scariness of starting
a startup in the old days was a pretty effective filter. Now that the cost of
failing is becoming lower, we should expect founders to do it more. That's not
a bad thing. It's common in technology for an innovation that decreases the
cost of failure to increase the number of failures and yet leave you net
ahead.
The other reason the number of big hits won't grow proportionately to the
number of startups is that there will start to be an increasing number of idea
clashes. Although the finiteness of the number of good ideas is not the reason
there are only 15 big hits a year, the number has to be finite, and the more
startups there are, the more we'll see multiple companies doing the same thing
at the same time. It will be interesting, in a bad way, if idea clashes become
a lot more common. [7]
Mostly because of the increasing number of early failures, the startup
business of the future won't simply be the same shape, scaled up. What used to
be an obelisk will become a pyramid. It will be a little wider at the top, but
a lot wider at the bottom.
What does that mean for investors? One thing it means is that there will be
more opportunities for investors at the earliest stage, because that's where
the volume of our imaginary solid is growing fastest. Imagine the obelisk of
investors that corresponds to the obelisk of startups. As it widens out into a
pyramid to match the startup pyramid, all the contents are adhering to the
top, leaving a vacuum at the bottom.
That opportunity for investors mostly means an opportunity for new investors,
because the degree of risk an existing investor or firm is comfortable taking
is one of the hardest things for them to change. Different types of investors
are adapted to different degrees of risk, but each has its specific degree of
risk deeply imprinted on it, not just in the procedures they follow but in the
personalities of the people who work there.
I think the biggest danger for VCs, and also the biggest opportunity, is at
the series A stage. Or rather, what used to be the series A stage before
series As turned into de facto series B rounds.
Right now, VCs often knowingly invest too much money at the series A stage.
They do it because they feel they need to get a big chunk of each series A
company to compensate for the opportunity cost of the board seat it consumes.
Which means when there is a lot of competition for a deal, the number that
moves is the valuation (and thus amount invested) rather than the percentage
of the company being sold. Which means, especially in the case of more
promising startups, that series A investors often make companies take more
money than they want.
Some VCs lie and claim the company really needs that much. Others are more
candid, and admit their financial models require them to own a certain
percentage of each company. But we all know the amounts being raised in series
A rounds are not determined by asking what would be best for the companies.
They're determined by VCs starting from the amount of the company they want to
own, and the market setting the valuation and thus the amount invested.
Like a lot of bad things, this didn't happen intentionally. The VC business
backed into it as their initial assumptions gradually became obsolete. The
traditions and financial models of the VC business were established when
founders needed investors more. In those days it was natural for founders to
sell VCs a big chunk of their company in the series A round. Now founders
would prefer to sell less, and VCs are digging in their heels because they're
not sure if they can make money buying less than 20% of each series A company.
The reason I describe this as a danger is that series A investors are
increasingly at odds with the startups they supposedly serve, and that tends
to come back to bite you eventually. The reason I describe it as an
opportunity is that there is now a lot of potential energy built up, as the
market has moved away from VCs' traditional business model. Which means the
first VC to break ranks and start to do series A rounds for as much equity as
founders want to sell (and with no "option pool" that comes only from the
founders' shares) stands to reap huge benefits.
What will happen to the VC business when that happens? Hell if I know. But I
bet that particular firm will end up ahead. If one top-tier VC firm started to
do series A rounds that started from the amount the company needed to raise
and let the percentage acquired vary with the market, instead of the other way
around, they'd instantly get almost all the best startups. And that's where
the money is.
You can't fight market forces forever. Over the last decade we've seen the
percentage of the company sold in series A rounds creep inexorably downward.
40% used to be common. Now VCs are fighting to hold the line at 20%. But I am
daily waiting for the line to collapse. It's going to happen. You may as well
anticipate it, and look bold.
Who knows, maybe VCs will make more money by doing the right thing. It
wouldn't be the first time that happened. Venture capital is a business where
occasional big successes generate hundredfold returns. How much confidence can
you really have in financial models for something like that anyway? The big
successes only have to get a tiny bit less occasional to compensate for a 2x
decrease in the stock sold in series A rounds.
If you want to find new opportunities for investing, look for things founders
complain about. Founders are your customers, and the things they complain
about are unsatisfied demand. I've given two examples of things founders
complain about most—investors who take too long to make up their minds, and
excessive dilution in series A rounds—so those are good places to look now.
But the more general recipe is: do something founders want.
**Notes**
[1] I realize revenue and not fundraising is the proper test of success for a
startup. The reason we quote statistics about fundraising is because those are
the numbers we have. We couldn't talk meaningfully about revenues without
including the numbers from the most successful startups, and we don't have
those. We often discuss revenue growth with the earlier stage startups,
because that's how we gauge their progress, but when companies reach a certain
size it gets presumptuous for a seed investor to do that.
In any case, companies' market caps do eventually become a function of
revenues, and post-money valuations of funding rounds are at least guesses by
pros about where those market caps will end up.
The reason only 287 have valuations is that the rest have mostly raised money
on convertible notes, and although convertible notes often have valuation
caps, a valuation cap is merely an upper bound on a valuation.
[2] We didn't try to accept a particular number. We have no way of doing that
even if we wanted to. We just tried to be significantly pickier.
[3] Though you never know with bottlenecks, I'm guessing the next one will be
coordinating efforts among partners.
[4] I realize starting a company doesn't have to mean starting a
[startup](growth.html). There will be lots of people starting normal companies
too. But that's not relevant to an audience of investors.
Geoff Ralston reports that in Silicon Valley it seemed thinkable to start a
startup in the mid 1980s. It would have started there. But I know it didn't to
undergraduates on the East Coast.
[5] This trend is one of the main causes of the increase in economic
inequality in the US since the mid twentieth century. The person who would in
1950 have been the general manager of the x division of Megacorp is now the
founder of the x company, and owns significant equity in it.
[6] If Congress passes the [founder visa](foundervisa.html) in a non-broken
form, that alone could in principle get us up to 20x, since 95% of the world's
population lives outside the US.
[7] If idea clashes got bad enough, it could change what it means to be a
startup. We currently advise startups mostly to ignore competitors. We tell
them startups are competitive like running, not like soccer; you don't have to
go and steal the ball away from the other team. But if idea clashes became
common enough, maybe you'd start to have to. That would be unfortunate.
**Thanks** to Sam Altman, Paul Buchheit, Dalton Caldwell, Patrick Collison,
Jessica Livingston, Andrew Mason, Geoff Ralston, and Garry Tan for reading
drafts of this.
* * *
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
September 2009
Like all investors, we spend a lot of time trying to learn how to predict
which startups will succeed. We probably spend more time thinking about it
than most, because we invest the earliest. Prediction is usually all we have
to rely on.
We learned quickly that the most important predictor of success is
determination. At first we thought it might be intelligence. Everyone likes to
believe that's what makes startups succeed. It makes a better story that a
company won because its founders were so smart. The PR people and reporters
who spread such stories probably believe them themselves. But while it
certainly helps to be smart, it's not the deciding factor. There are plenty of
people as smart as Bill Gates who achieve nothing.
In most domains, talent is overrated compared to determination—partly because
it makes a better story, partly because it gives onlookers an excuse for being
lazy, and partly because after a while determination starts to look like
talent.
I can't think of any field in which determination is overrated, but the
relative importance of determination and talent probably does vary somewhat.
Talent probably matters more in types of work that are purer, in the sense
that one is solving mostly a single type of problem instead of many different
types. I suspect determination would not take you as far in math as it would
in, say, organized crime.
I don't mean to suggest by this comparison that types of work that depend more
on talent are always more admirable. Most people would agree it's more
admirable to be good at math than at memorizing long strings of digits, even
though the latter depends more on natural ability.
Perhaps one reason people believe startup founders win by being smarter is
that intelligence does matter more in technology startups than it used to in
earlier types of companies. You probably do need to be a bit smarter to
dominate Internet search than you had to be to dominate railroads or hotels or
newspapers. And that's probably an ongoing trend. But even in the highest of
high tech industries, success still depends more on determination than brains.
If determination is so important, can we isolate its components? Are some more
important than others? Are there some you can cultivate?
The simplest form of determination is sheer willfulness. When you want
something, you must have it, no matter what.
A good deal of willfulness must be inborn, because it's common to see families
where one sibling has much more of it than another. Circumstances can alter
it, but at the high end of the scale, nature seems to be more important than
nurture. Bad circumstances can break the spirit of a strong-willed person, but
I don't think there's much you can do to make a weak-willed person stronger-
willed.
Being strong-willed is not enough, however. You also have to be hard on
yourself. Someone who was strong-willed but self-indulgent would not be called
determined. Determination implies your willfulness is balanced by discipline.
That word balance is a significant one. The more willful you are, the more
disciplined you have to be. The stronger your will, the less anyone will be
able to argue with you except yourself. And someone has to argue with you,
because everyone has base impulses, and if you have more will than discipline
you'll just give in to them and end up on a local maximum like drug addiction.
We can imagine will and discipline as two fingers squeezing a slippery melon
seed. The harder they squeeze, the further the seed flies, but they must both
squeeze equally or the seed spins off sideways.
If this is true it has interesting implications, because discipline can be
cultivated, and in fact does tend to vary quite a lot in the course of an
individual's life. If determination is effectively the product of will and
discipline, then you can become more determined by being more disciplined. [1]
Another consequence of the melon seed model is that the more willful you are,
the more dangerous it is to be undisciplined. There seem to be plenty of
examples to confirm that. In some very energetic people's lives you see
something like wing flutter, where they alternate between doing great work and
doing absolutely nothing. Externally this would look a lot like bipolar
disorder.
The melon seed model is inaccurate in at least one respect, however: it's
static. In fact the dangers of indiscipline increase with temptation. Which
means, interestingly, that determination tends to erode itself. If you're
sufficiently determined to achieve great things, this will probably increase
the number of temptations around you. Unless you become proportionally more
disciplined, willfulness will then get the upper hand, and your achievement
will revert to the mean.
That's why Shakespeare's Caesar thought thin men so dangerous. They weren't
tempted by the minor perquisites of power.
The melon seed model implies it's possible to be too disciplined. Is it? I
think there probably are people whose willfulness is crushed down by excessive
discipline, and who would achieve more if they weren't so hard on themselves.
One reason the young sometimes succeed where the old fail is that they don't
realize how incompetent they are. This lets them do a kind of deficit
spending. When they first start working on something, they overrate their
achievements. But that gives them confidence to keep working, and their
performance improves. Whereas someone clearer-eyed would see their initial
incompetence for what it was, and perhaps be discouraged from continuing.
There's one other major component of determination: ambition. If willfulness
and discipline are what get you to your destination, ambition is how you
choose it.
I don't know if it's exactly right to say that ambition is a component of
determination, but they're not entirely orthogonal. It would seem a misnomer
if someone said they were very determined to do something trivially easy.
And fortunately ambition seems to be quite malleable; there's a lot you can do
to increase it. Most people don't know how ambitious to be, especially when
they're young. They don't know what's hard, or what they're capable of. And
this problem is exacerbated by having few peers. Ambitious people are rare, so
if everyone is mixed together randomly, as they tend to be early in people's
lives, then the ambitious ones won't have many ambitious peers. When you take
people like this and put them together with other ambitious people, they bloom
like dying plants given water. Probably most ambitious people are starved for
the sort of encouragement they'd get from ambitious peers, whatever their age.
[2]
Achievements also tend to increase your ambition. With each step you gain
confidence to stretch further next time.
So here in sum is how determination seems to work: it consists of willfulness
balanced with discipline, aimed by ambition. And fortunately at least two of
these three qualities can be cultivated. You may be able to increase your
strength of will somewhat; you can definitely learn self-discipline; and
almost everyone is practically malnourished when it comes to ambition.
I feel like I understand determination a bit better now. But only a bit:
willfulness, discipline, and ambition are all concepts almost as complicated
as determination. [3]
Note too that determination and talent are not the whole story. There's a
third factor in achievement: how much you like the work. If you really
[love](love.html) working on something, you don't need determination to drive
you; it's what you'd do anyway. But most types of work have aspects one
doesn't like, because most types of work consist of doing things for other
people, and it's very unlikely that the tasks imposed by their needs will
happen to align exactly with what you want to do.
Indeed, if you want to create the most [wealth](wealth.html), the way to do it
is to focus more on their needs than your interests, and make up the
difference with determination.
**Notes**
[1] Loosely speaking. What I'm claiming with the melon seed model is more like
determination is proportionate to wd^m - k|w - d|^n, where w is will and d
discipline.
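The essay leaves m, n, and k unspecified, so here's a toy evaluation with
arbitrary constants, just to show the balance penalty the formula encodes:

```python
# Toy evaluation of the note's formula with arbitrary constants m=1, n=2, k=1
# (none are given in the essay). Imbalance between will and discipline is
# penalized, like the melon seed squeezed unevenly and spinning off sideways.
def determination(w, d, m=1, n=2, k=1):
    return w * d**m - k * abs(w - d)**n

print(determination(5, 5))  # balanced: 5*5 - 0 = 25
print(determination(8, 2))  # unbalanced: 8*2 - 36 = -20
```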
[2] Which means one of the best ways to help a society generally is to create
[events](http://startupschool.org) and [institutions](http://ycombinator.com)
that bring ambitious people together. It's like pulling the control rods out
of a reactor: the energy they emit encourages other ambitious people, instead
of being absorbed by the normal people they're usually surrounded with.
Conversely, it's probably a mistake to do as some European countries have done
and try to ensure none of your universities is significantly better than the
others.
[3] For example, willfulness clearly has two subcomponents, stubbornness and
energy. The first alone yields someone who's stubbornly inert. The second
alone yields someone flighty. As willful people get older or otherwise lose
their energy, they tend to become merely stubborn.
**Thanks** to Sam Altman, Jessica Livingston, and Robert Morris for reading
drafts of this.
* * *
August 2007
A good programmer working intensively on his own code can hold it in his mind
the way a mathematician holds a problem he's working on. Mathematicians don't
answer questions by working them out on paper the way schoolchildren are
taught to. They do more in their heads: they try to understand a problem space
well enough that they can walk around it the way you can walk around the
memory of the house you grew up in. At its best programming is the same. You
hold the whole program in your head, and you can manipulate it at will.
That's particularly valuable at the start of a project, because initially the
most important thing is to be able to change what you're doing. Not just to
solve the problem in a different way, but to change the problem you're
solving.
Your code is your understanding of the problem you're exploring. So it's only
when you have your code in your head that you really understand the problem.
It's not easy to get a program into your head. If you leave a project for a
few months, it can take days to really understand it again when you return to
it. Even when you're actively working on a program it can take half an hour to
load into your head when you start work each day. And that's in the best case.
Ordinary programmers working in typical office conditions never enter this
mode. Or to put it more dramatically, ordinary programmers working in typical
office conditions never really understand the problems they're solving.
Even the best programmers don't always have the whole program they're working
on loaded into their heads. But there are things you can do to help:
1. **Avoid distractions.** Distractions are bad for many types of work, but especially bad for programming, because programmers tend to operate at the limit of the detail they can handle.
The danger of a distraction depends not on how long it is, but on how much it
scrambles your brain. A programmer can leave the office and go and get a
sandwich without losing the code in his head. But the wrong kind of
interruption can wipe your brain in 30 seconds.
Oddly enough, scheduled distractions may be worse than unscheduled ones. If
you know you have a meeting in an hour, you don't even start working on
something hard.
2. **Work in long stretches.** Since there's a fixed cost each time you start working on a program, it's more efficient to work in a few long sessions than many short ones. There will of course come a point where you get stupid because you're tired. This varies from person to person. I've heard of people hacking for 36 hours straight, but the most I've ever been able to manage is about 18, and I work best in chunks of no more than 12.
The optimum is not the limit you can physically endure. There's an advantage
as well as a cost of breaking up a project. Sometimes when you return to a
problem after a rest, you find your unconscious mind has left an answer
waiting for you.
3. **Use succinct languages.** More [powerful](power.html) programming languages make programs shorter. And programmers seem to think of programs at least partially in the language they're using to write them. The more succinct the language, the shorter the program, and the easier it is to load and keep in your head.
You can magnify the effect of a powerful language by using a style called
bottom-up programming, where you write programs in multiple layers, the lower
ones acting as programming languages for those above. If you do this right,
you only have to keep the topmost layer in your head. (There's a short sketch
of this style after the list below.)
4. **Keep rewriting your program.** Rewriting a program often yields a cleaner design. But it would have advantages even if it didn't: you have to understand a program completely to rewrite it, so there is no better way to get one loaded into your head.
5. **Write rereadable code.** All programmers know it's good to write readable code. But you yourself are the most important reader. Especially in the beginning; a prototype is a conversation with yourself. And when writing for yourself you have different priorities. If you're writing for other people, you may not want to make code too dense. Some parts of a program may be easiest to read if you spread things out, like an introductory textbook. Whereas if you're writing code to make it easy to reload into your head, it may be best to go for brevity.
6. **Work in small groups.** When you manipulate a program in your head, your vision tends to stop at the edge of the code you own. Other parts you don't understand as well, and more importantly, can't take liberties with. So the smaller the number of programmers, the more completely a project can mutate. If there's just one programmer, as there often is at first, you can do all-encompassing redesigns.
7. **Don't have multiple people editing the same piece of code.** You never understand other people's code as well as your own. No matter how thoroughly you've read it, you've only read it, not written it. So if a piece of code is written by multiple authors, none of them understand it as well as a single author would.
And of course you can't safely redesign something other people are working on.
It's not just that you'd have to ask permission. You don't even let yourself
think of such things. Redesigning code with several authors is like changing
laws; redesigning code you alone control is like seeing the other
interpretation of an ambiguous image.
If you want to put several people to work on a project, divide it into
components and give each to one person.
8. **Start small.** A program gets easier to hold in your head as you become familiar with it. You can start to treat parts as black boxes once you feel confident you've fully explored them. But when you first start working on a project, you're forced to see everything. If you start with too big a problem, you may never quite be able to encompass it. So if you need to write a big, complex program, the best way to begin may not be to write a spec for it, but to write a prototype that solves a subset of the problem. Whatever the advantages of planning, they're often outweighed by the advantages of being able to keep a program in your head.
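Here's the sketch promised in point 3: a toy example of bottom-up style. The
log-counting task and all the names are invented for illustration, but the
layering is the point: each level defines the vocabulary the one above is
written in, so only the top layer has to stay loaded in your head.

```python
# Layer 1: general-purpose helpers
def fields(line):
    return line.split()

def nonblank(lines):
    return (l for l in lines if l.strip())

# Layer 2: domain vocabulary built on layer 1
def requests(lines):
    return (fields(l) for l in nonblank(lines))

def server_errors(reqs):
    return (r for r in reqs if r[1].startswith("5"))

# Top layer: the only part you need to keep in your head
def error_count(lines):
    return sum(1 for _ in server_errors(requests(lines)))

print(error_count(["GET 200", "POST 503", "", "GET 500"]))  # prints 2
```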
It's striking how often programmers manage to hit all eight points by
accident. Someone has an idea for a new project, but because it's not
officially sanctioned, he has to do it in off hours—which turn out to be more
productive because there are no distractions. Driven by his enthusiasm for the
new project he works on it for many hours at a stretch. Because it's initially
just an experiment, instead of a "production" language he uses a mere
"scripting" language—which is in fact far more powerful. He completely
rewrites the program several times; that wouldn't be justifiable for an
official project, but this is a labor of love and he wants it to be perfect.
And since no one is going to see it except him, he omits any comments except
the note-to-self variety. He works in a small group perforce, because he
either hasn't told anyone else about the idea yet, or it seems so unpromising
that no one else is allowed to work on it. Even if there is a group, they
couldn't have multiple people editing the same code, because it changes too
fast for that to be possible. And the project starts small because the idea
_is_ small at first; he just has some cool hack he wants to try out.
Even more striking are the number of officially sanctioned projects that
manage to do _all eight things wrong_. In fact, if you look at the way
software gets written in most organizations, it's almost as if they were
deliberately trying to do things wrong. In a sense, they are. One of the
defining qualities of organizations since there have been such things is to
treat individuals as interchangeable parts. This works well for more
parallelizable tasks, like fighting wars. For most of history a well-drilled
army of professional soldiers could be counted on to beat an army of
individual warriors, no matter how valorous. But having ideas is not very
parallelizable. And that's what programs are: ideas.
It's not merely true that organizations dislike the idea of depending on
individual genius, it's a tautology. It's part of the definition of an
organization not to. Of our current concept of an organization, at least.
Maybe we could define a new kind of organization that combined the efforts of
individuals without requiring them to be interchangeable. Arguably a market is
such a form of organization, though it may be more accurate to describe a
market as a degenerate case—as what you get by default when organization isn't
possible.
Probably the best we'll do is some kind of hack, like making the programming
parts of an organization work differently from the rest. Perhaps the optimal
solution is for big companies not even to try to develop ideas in house, but
simply to [buy](hiring.html) them. But regardless of what the solution turns
out to be, the first step is to realize there's a problem. There is a
contradiction in the very phrase "software company." The two words are pulling
in opposite directions. Any good programmer in a large organization is going
to be at odds with it, because organizations are designed to prevent what
programmers strive for.
Good programmers manage to get a lot done anyway. But often it requires
practically an act of rebellion against the organizations that employ them.
Perhaps it will help if more people understand that the way programmers behave
is driven by the demands of the work they do. It's not because they're
irresponsible that they work in long binges during which they blow off all
other obligations, plunge straight into programming instead of writing specs
first, and rewrite code that already works. It's not because they're
unfriendly that they prefer to work alone, or growl at people who pop their
head in the door to say hello. This apparently random collection of annoying
habits has a single explanation: the power of holding a program in one's head.
Whether or not understanding this can help large organizations, it can
certainly help their competitors. The weakest point in big companies is that
they don't let individual programmers do great work. So if you're a little
startup, this is the place to attack them. Take on the kind of problems that
have to be solved in one big brain.
**Thanks** to Sam Altman, David Greenspan, Aaron Iba, Jessica Livingston,
Robert Morris, Peter Norvig, Lisa Randall, Emmett Shear, Sergei Tsarev, and
Stephen Wolfram for reading drafts of this.
* * *
July 2009
The Segway hasn't delivered on its initial promise, to put it mildly. There
are several reasons why, but one is that people don't want to be seen riding
them. Someone riding a Segway looks like a dork.
My friend Trevor Blackwell built [his own Segway](http://tlb.org/#scooter),
which we called the Segwell. He also built a one-wheeled version, [the
Eunicycle](http://tlb.org/#eunicycle), which looks exactly like a regular
unicycle till you realize the rider isn't pedaling. He has ridden them both to
downtown Mountain View to get coffee. When he rides the Eunicycle, people
smile at him. But when he rides the Segwell, they shout abuse from their cars:
"Too lazy to walk, ya fuckin homo?"
Why do Segways provoke this reaction? The reason you look like a dork riding a
Segway is that you look _smug_. You don't seem to be working hard enough.
Someone riding a motorcycle isn't working any harder. But because he's sitting
astride it, he seems to be making an effort. When you're riding a Segway
you're just standing there. And someone who's being whisked along while
seeming to do no work — someone in a sedan chair, for example — can't help but
look smug.
Try this thought experiment and it becomes clear: imagine something that
worked like the Segway, but that you rode with one foot in front of the other,
like a skateboard. That wouldn't seem nearly as uncool.
So there may be a way to capture more of the market Segway hoped to reach:
make a version that doesn't look so easy for the rider. It would also be
helpful if the styling was in the tradition of skateboards or bicycles rather
than medical devices.
Curiously enough, what got Segway into this problem was that the company was
itself a kind of Segway. It was too easy for them; they were too successful
raising money. If they'd had to grow the company gradually, by iterating
through several versions they sold to real users, they'd have learned pretty
quickly that people looked stupid riding them. Instead they had enough to work
in secret. They had focus groups aplenty, I'm sure, but they didn't have the
people yelling insults out of cars. So they never realized they were zooming
confidently down a blind alley.
* * *
April 2001
This essay developed out of conversations I've had with several other
programmers about why Java smelled suspicious. It's not a critique of Java! It
is a case study of hacker's radar.
Over time, hackers develop a nose for good (and bad) technology. I thought it
might be interesting to try and write down what made Java seem suspect to me.
Some people who've read this think it's an interesting attempt to write about
something that hasn't been written about before. Others say I will get in
trouble for appearing to be writing about things I don't understand. So, just
in case it does any good, let me clarify that I'm not writing here about Java
(which I have never used) but about hacker's radar (which I have thought about
a lot).
* * *
The aphorism "you can't tell a book by its cover" originated in the times when
books were sold in plain cardboard covers, to be bound by each purchaser
according to his own taste. In those days, you couldn't tell a book by its
cover. But publishing has advanced since then: present-day publishers work
hard to make the cover something you can tell a book by.
I spend a lot of time in bookshops and I feel as if I have by now learned to
understand everything publishers mean to tell me about a book, and perhaps a
bit more. The time I haven't spent in bookshops I've spent mostly in front of
computers, and I feel as if I've learned, to some degree, to judge technology
by its cover as well. It may be just luck, but I've saved myself from a few
technologies that turned out to be real stinkers.
So far, Java seems like a stinker to me. I've never written a Java program,
never more than glanced over reference books about it, but I have a hunch that
it won't be a very successful language. I may turn out to be mistaken; making
predictions about technology is a dangerous business. But for what it's worth,
as a sort of time capsule, here's why I don't like the look of Java:
1\. It has been so energetically hyped. Real standards don't have to be
promoted. No one had to promote C, or Unix, or HTML. A real standard tends to
be already established by the time most people hear about it. On the hacker
radar screen, Perl is as big as Java, or bigger, just on the strength of its
own merits.
2\. It's aimed low. In the original Java white paper, Gosling explicitly says
Java was designed not to be too difficult for programmers used to C. It was
designed to be another C++: C plus a few ideas taken from more advanced
languages. Like the creators of sitcoms or junk food or package tours, Java's
designers were consciously designing a product for people not as smart as
them. Historically, languages designed for other people to use have been bad:
Cobol, PL/I, Pascal, Ada, C++. The good languages have been those that were
designed for their own creators: C, Perl, Smalltalk, Lisp.
3\. It has ulterior motives. Someone once said that the world would be a
better place if people only wrote books because they had something to say,
rather than because they wanted to write a book. Likewise, the reason we hear
about Java all the time is not because it has something to say about
programming languages. We hear about Java as part of a plan by Sun to
undermine Microsoft.
4\. No one loves it. C, Perl, Python, Smalltalk, and Lisp programmers love
their languages. I've never heard anyone say that they loved Java.
5\. People are forced to use it. A lot of the people I know using Java are
using it because they feel they have to. Either it's something they felt they
had to do to get funded, or something they thought customers would want, or
something they were told to do by management. These are smart people; if the
technology was good, they'd have used it voluntarily.
6\. It has too many cooks. The best programming languages have been developed
by small groups. Java seems to be run by a committee. If it turns out to be a
good language, it will be the first time in history that a committee has
designed a good language.
7\. It's bureaucratic. From what little I know about Java, there seem to be a
lot of protocols for doing things. Really good languages aren't like that.
They let you do what you want and get out of the way.
8\. It's pseudo-hip. Sun now pretends that Java is a grassroots, open-source
language effort like Perl or Python. This one just happens to be controlled by
a giant company. So the language is likely to have the same drab clunkiness as
anything else that comes out of a big company.
9\. It's designed for large organizations. Large organizations have different
aims from hackers. They want languages that are (believed to be) suitable for
use by large teams of mediocre programmers-- languages with features that,
like the speed limiters in U-Haul trucks, prevent fools from doing too much
damage. Hackers don't like a language that talks down to them. Hackers just
want power. Historically, languages designed for large organizations (PL/I,
Ada) have lost, while hacker languages (C, Perl) have won. The reason: today's
teenage hacker is tomorrow's CTO.
10\. The wrong people like it. The programmers I admire most are not, on the
whole, captivated by Java. Who does like Java? Suits, who don't know one
language from another, but know that they keep hearing about Java in the
press; programmers at big companies, who are amazed to find that there is
something even better than C++; and plug-and-chug undergrads, who are ready to
like anything that might get them a job (will this be on the test?). These
people's opinions change with every wind.
11\. Its daddy is in a pinch. Sun's business model is being undermined on two
fronts. Cheap Intel processors, of the same type used in desktop machines, are
now more than fast enough for servers. And FreeBSD seems to be at least as
good an OS for servers as Solaris. Sun's advertising implies that you need Sun
servers for industrial strength applications. If this were true, Yahoo would
be first in line to buy Suns; but when I worked there, the servers were all
Intel boxes running FreeBSD. This bodes ill for Sun's future. If Sun runs into
trouble, they could drag Java down with them.
12\. The DoD likes it. The Defense Department is encouraging developers to use
Java. This seems to me the most damning sign of all. The Defense Department
does a fine (though expensive) job of defending the country, but they love
plans and procedures and protocols. Their culture is the opposite of hacker
culture; on questions of software they will tend to bet wrong. The last time
the DoD really liked a programming language, it was Ada.
Bear in mind, this is not a critique of Java, but a critique of its cover. I
don't know Java well enough to like it or dislike it. This is just an
explanation of why I don't find that I'm eager to learn it.
It may seem cavalier to dismiss a language before you've even tried writing
programs in it. But this is something all programmers have to do. There are
too many technologies out there to learn them all. You have to learn to judge
by outward signs which will be worth your time. I have likewise cavalierly
dismissed Cobol, Ada, Visual Basic, the IBM AS400, VRML, ISO 9000, the SET
protocol, VMS, Novell Netware, and CORBA, among others. They just smelled
wrong.
It could be that in Java's case I'm mistaken. It could be that a language
promoted by one big company to undermine another, designed by a committee for
a "mainstream" audience, hyped to the skies, and beloved of the DoD, happens
nonetheless to be a clean, beautiful, powerful language that I would love
programming in. It could be, but it seems very unlikely.
August 2013
When people hurt themselves lifting heavy things, it's usually because they
try to lift with their back. The right way to lift heavy things is to let your
legs do the work. Inexperienced founders make the same mistake when trying to
convince investors. They try to convince with their pitch. Most would be
better off if they let their startup do the work — if they started by
understanding why their startup is worth investing in, then simply explained
this well to investors.
Investors are looking for startups that will be very successful. But that test
is not as simple as it sounds. In startups, as in a lot of other domains, the
distribution of outcomes follows a power law, but in startups the curve is
startlingly steep. The big successes are so big they [dwarf](swan.html) the
rest. And since there are only a handful each year (the conventional wisdom is
15), investors treat "big success" as if it were binary. Most are interested
in you if you seem like you have a chance, however small, of being one of the
15 big successes, and otherwise not. [1]
(There are a handful of angels who'd be interested in a company with a high
probability of being moderately successful. But angel investors like big
successes too.)
How do you seem like you'll be one of the big successes? You need three
things: formidable founders, a promising market, and (usually) some evidence
of success so far.
**Formidable**
The most important ingredient is formidable founders. Most investors decide in
the first few minutes whether you seem like a winner or a loser, and once
their opinion is set it's hard to change. [2] Every startup has reasons both
to invest and not to invest. If investors think you're a winner they focus on
the former, and if not they focus on the latter. For example, it might be a
rich market, but with a slow sales cycle. If investors are impressed with you
as founders, they say they want to invest because it's a rich market, and if
not, they say they can't invest because of the slow sales cycle.
They're not necessarily trying to mislead you. Most investors are genuinely
unclear in their own minds why they like or dislike startups. If you seem like
a winner, they'll like your idea more. But don't be too smug about this
weakness of theirs, because you have it too; almost everyone does.
There is a role for ideas of course. They're fuel for the fire that starts
with liking the founders. Once investors like you, you'll see them reaching
for ideas: they'll be saying "yes, and you could also do x." (Whereas when
they don't like you, they'll be saying "but what about y?")
But the foundation of convincing investors is to seem formidable, and since
this isn't a word most people use in conversation much, I should explain what
it means. A formidable person is one who seems like they'll get what they
want, regardless of whatever obstacles are in the way. Formidable is close to
confident, except that someone could be confident and mistaken. Formidable is
roughly justifiably confident.
There are a handful of people who are really good at seeming formidable — some
because they actually are very formidable and just let it show, and others
because they are more or less con artists. [3] But most founders, including
many who will go on to start very successful companies, are not that good at
seeming formidable the first time they try fundraising. What should they do?
[4]
What they should not do is try to imitate the swagger of more experienced
founders. Investors are not always that good at judging technology, but
they're good at judging confidence. If you try to act like something you're
not, you'll just end up in an uncanny valley. You'll depart from sincere, but
never arrive at convincing.
**Truth**
The way to seem most formidable as an inexperienced founder is to stick to the
truth. How formidable you seem isn't a constant. It varies depending on what
you're saying. Most people can seem confident when they're saying "one plus
one is two," because they know it's true. The most diffident person would be
puzzled and even slightly contemptuous if they told a VC "one plus one is two"
and the VC reacted with skepticism. The magic ability of people who are good
at seeming formidable is that they can do this with the sentence "we're going
to make a billion dollars a year." But you can do the same, if not with that
sentence, with some fairly impressive ones, so long as you convince yourself
first.
That's the secret. Convince yourself that your startup is worth investing in,
and then when you explain this to investors they'll believe you. And by
convince yourself, I don't mean play mind games with yourself to boost your
confidence. I mean truly evaluate whether your startup is worth investing in.
If it isn't, don't try to raise money. [5] But if it is, you'll be telling the
truth when you tell investors it's worth investing in, and they'll sense that.
You don't have to be a smooth presenter if you understand something well and
tell the truth about it.
To evaluate whether your startup is worth investing in, you have to be a
domain expert. If you're not a domain expert, you can be as convinced as you
like about your idea, and it will seem to investors no more than an instance
of the Dunning-Kruger effect. Which in fact it will usually be. And investors
can tell fairly quickly whether you're a domain expert by how well you answer
their questions. Know everything about your market. [6]
Why do founders persist in trying to convince investors of things they're not
convinced of themselves? Partly because we've all been trained to.
When my friends Robert Morris and Trevor Blackwell were in grad school, one of
their fellow students was on the receiving end of a question from their
faculty advisor that we still quote today. When the unfortunate fellow got to
his last slide, the professor burst out:
> Which one of these conclusions do you actually believe?
One of the artifacts of the way schools are organized is that we all get
trained to talk even when we have nothing to say. If you have a ten page paper
due, then ten pages you must write, even if you only have one page of ideas.
Even if you have no ideas. You have to produce something. And all too many
startups go into fundraising in the same spirit. When they think it's time to
raise money, they try gamely to make the best case they can for their startup.
Most never think of pausing beforehand to ask whether what they're saying is
actually convincing, because they've all been trained to treat the need to
present as a given — as an area of fixed size, over which however much truth
they have must needs be spread, however thinly.
The time to raise money is not when you need it, or when you reach some
artificial deadline like a Demo Day. It's when you can convince investors, and
not before. [7]
And unless you're a good con artist, you'll never convince investors if you're
not convinced yourself. They're far better at detecting bullshit than you are
at producing it, even if you're producing it unknowingly. If you try to
convince investors before you've convinced yourself, you'll be wasting both
your time.
But pausing first to convince yourself will do more than save you from wasting
your time. It will force you to organize your thoughts. To convince yourself
that your startup is worth investing in, you'll have to figure out why it's
worth investing in. And if you can do that you'll end up with more than added
confidence. You'll also have a provisional roadmap of how to succeed.
**Market**
Notice I've been careful to talk about whether a startup is worth investing
in, rather than whether it's going to succeed. No one knows whether a startup
is going to succeed. And it's a good thing for investors that this is so,
because if you could know in advance whether a startup would succeed, the
stock price would already be the future price, and there would be no room for
investors to make money. Startup investors know that every investment is a
bet, and against pretty long odds.
So to prove you're worth investing in, you don't have to prove you're going to
succeed, just that you're a sufficiently good bet. What makes a startup a
sufficiently good bet? In addition to formidable founders, you need a
plausible path to owning a big piece of a big market. Founders think of
startups as ideas, but investors think of them as markets. If there are x
number of customers who'd pay an average of $y per year for what you're
making, then the total addressable market, or TAM, of your company is $xy.
Investors don't expect you to collect all that money, but it's an upper bound
on how big you can get.
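As a minimal illustration of that arithmetic (the numbers below are invented,
not the essay's):

```python
# Hypothetical numbers, purely to illustrate TAM = x * y as defined above.
customers = 200_000       # x: customers who might plausibly buy
price_per_year = 50       # y: average dollars each would pay per year

tam = customers * price_per_year
print(f"TAM: ${tam:,}/year")  # TAM: $10,000,000/year -- an upper bound, not a forecast
```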
Your target market has to be big, and it also has to be capturable by you. But
the market doesn't have to be big yet, nor do you necessarily have to be in it
yet. Indeed, it's often better to start in a [small](ds.html) market that will
either turn into a big one or from which you can move into a big one. There
just has to be some plausible sequence of hops that leads to dominating a big
market a few years down the line.
The standard of plausibility varies dramatically depending on the age of the
startup. A three month old company at Demo Day only needs to be a promising
experiment that's worth funding to see how it turns out. Whereas a two year
old company raising a series A round needs to be able to show the experiment
worked. [8]
But every company that gets really big is "lucky" in the sense that their
growth is due mostly to some external wave they're riding, so to make a
convincing case for becoming huge, you have to identify some specific trend
you'll benefit from. Usually you can find this by asking "why now?" If this is
such a great idea, why hasn't someone else already done it? Ideally the answer
is that it only recently became a good idea, because something changed, and no
one else has noticed yet.
Microsoft for example was not going to grow huge selling Basic interpreters.
But by starting there they were perfectly poised to expand up the stack of
microcomputer software as microcomputers grew powerful enough to support one.
And microcomputers turned out to be a really huge wave, bigger than even the
most optimistic observers would have predicted in 1975.
But while Microsoft did really well and there is thus a temptation to think
they would have seemed a great bet a few months in, they probably didn't.
Good, but not great. No company, however successful, ever looks more than a
pretty good bet a few months in. Microcomputers turned out to be a big deal,
and Microsoft both executed well and got lucky. But it was by no means obvious
that this was how things would play out. Plenty of companies seem as good a
bet a few months in. I don't know about startups in general, but at least half
the startups we fund could make as good a case as Microsoft could have for
being on a path to dominating a large market. And who can reasonably expect
more of a startup than that?
**Rejection**
If you can make as good a case as Microsoft could have, will you convince
investors? Not always. A lot of VCs would have rejected Microsoft. [9]
Certainly some rejected Google. And getting rejected will put you in a
slightly awkward position, because as you'll see when you start fundraising,
the most common question you'll get from investors will be "who else is
investing?" What do you say if you've been fundraising for a while and no one
has committed yet? [10]
The people who are really good at acting formidable often solve this problem
by giving investors the impression that while no investors have committed yet,
several are about to. This is arguably a permissible tactic. It's slightly
dickish of investors to care more about who else is investing than any other
aspect of your startup, and misleading them about how far along you are with
other investors seems the complementary countermove. It's arguably an instance
of scamming a scammer. But I don't recommend this approach to most founders,
because most founders wouldn't be able to carry it off. This is the single
most common lie told to investors, and you have to be really good at lying to
tell members of some profession the most common lie they're told.
If you're not a master of negotiation (and perhaps even if you are) the best
solution is to tackle the problem head-on, and to explain why investors have
turned you down and why they're mistaken. If you know you're on the right
track, then you also know why investors were wrong to reject you. Experienced
investors are well aware that the best ideas are also the scariest. They all
know about the VCs who rejected Google. If instead of seeming evasive and
ashamed about having been turned down (and thereby implicitly agreeing with
the verdict) you talk candidly about what scared investors about you, you'll
seem more confident, which they like, and you'll probably also do a better job
of presenting that aspect of your startup. At the very least, that worry will
now be out in the open instead of being a gotcha left to be discovered by the
investors you're currently talking to, who will be proud of and thus attached
to their discovery. [11]
This strategy will work best with the best investors, who are both hard to
bluff and who already believe most other investors are conventional-minded
drones doomed always to miss the big outliers. Raising money is not like
applying to college, where you can assume that if you can get into MIT, you
can also get into Foobar State. Because the best investors are much smarter
than the rest, and the best startup ideas look initially like [bad
ideas](startupideas.html), it's not uncommon for a startup to be rejected by
all the VCs except the best ones. That's what happened to Dropbox. Y
Combinator started in Boston, and for the first 3 years we ran alternating
batches in Boston and Silicon Valley. Because Boston investors were so few and
so timid, we used to ship Boston batches out for a second Demo Day in Silicon
Valley. Dropbox was part of a Boston batch, which means all those Boston
investors got the first look at Dropbox, and none of them closed the deal. Yet
another backup and syncing thing, they all thought. A couple weeks later,
Dropbox raised a series A round from Sequoia. [12]
**Different**
Not understanding that investors view investments as bets combines with the
ten page paper mentality to prevent founders from even considering the
possibility of being certain of what they're saying. They think they're trying
to convince investors of something very uncertain — that their startup will be
huge — and convincing anyone of something like that must obviously entail some
wild feat of salesmanship. But in fact when you raise money you're trying to
convince investors of something so much less speculative — whether the company
has all the elements of a good bet — that you can approach the problem in a
qualitatively different way. You can convince yourself, then convince them.
And when you convince them, use the same matter-of-fact language you used to
convince yourself. You wouldn't use vague, grandiose marketing-speak among
yourselves. Don't use it with investors either. It not only doesn't work on
them, but seems a mark of incompetence. Just be concise. Many investors
explicitly use that as a test, reasoning (correctly) that if you can't explain
your plans concisely, you don't really understand them. But even investors who
don't have a rule about this will be bored and frustrated by unclear
explanations. [13]
So here's the recipe for impressing investors when you're not already good at
seeming formidable:
1. Make something worth investing in.
2. Understand why it's worth investing in.
3. Explain that clearly to investors.
If you're saying something you know is true, you'll seem confident when you're
saying it. Conversely, never let pitching draw you into bullshitting. As long
as you stay on the territory of truth, you're strong. Make the truth good,
then just tell it.
**Notes**
[1] There's no reason to believe this number is a constant. In fact it's our
explicit goal at Y Combinator to increase it, by encouraging people to start
startups who otherwise wouldn't have.
[2] Or more precisely, investors decide whether you're a loser or possibly a
winner. If you seem like a winner, they may then, depending on how much you're
raising, have several more meetings with you to test whether that initial
impression holds up.
But if you seem like a loser they're done, at least for the next year or so.
And when they decide you're a loser they usually decide in way less than the
50 minutes they may have allotted for the first meeting. Which explains the
astonished stories one always hears about VC inattentiveness. How could these
people make investment decisions well when they're checking their messages
during startups' presentations? The solution to that mystery is that they've
already made the decision.
[3] The two are not mutually exclusive. There are people who are both
genuinely formidable, and also really good at acting that way.
[4] How can people who will go on to create giant companies not seem
formidable early on? I think the main reason is that their experience so far
has trained them to keep their wings folded, as it were. Family, school, and
jobs encourage cooperation, not conquest. And it's just as well they do,
because even being Genghis Khan is probably 99% cooperation. But the result is
that most people emerge from the tube of their upbringing in their early
twenties compressed into the shape of the tube. Some find they have wings and
start to spread them. But this takes a few years. In the beginning even they
don't know yet what they're capable of.
[5] In fact, change what you're doing. You're investing your own time in your
startup. If you're not convinced that what you're working on is a sufficiently
good bet, why are you even working on that?
[6] When investors ask you a question you don't know the answer to, the best
response is neither to bluff nor give up, but instead to explain how you'd
figure out the answer. If you can work out a preliminary answer on the spot,
so much the better, but explain that's what you're doing.
[7] At YC we try to ensure startups are ready to raise money on Demo Day by
encouraging them to ignore investors and instead focus on their companies till
about a week before. That way most reach the stage where they're sufficiently
convincing well before Demo Day. But not all do, so we also give any startup
that wants it the option of deferring to a later Demo Day.
[8] Founders are often surprised by how much harder it is to raise the next
round. There is a qualitative difference in investors' attitudes. It's like
the difference between being judged as a kid and as an adult. The next time
you raise money, it's not enough to be promising. You have to be delivering
results.
So although it works well to show growth graphs at either stage, investors
treat them differently. At three months, a growth graph is mostly evidence
that the founders are effective. At two years, it has to be evidence of a
promising market and a company tuned to exploit it.
[9] By this I mean that if the present day equivalent of the 3 month old
Microsoft presented at a Demo Day, there would be investors who turned them
down. Microsoft itself didn't raise outside money, and indeed the venture
business barely existed when they got started in 1975.
[10] The best investors rarely care who else is investing, but mediocre
investors almost all do. So you can use this question as a test of investor
quality.
[11] To use this technique, you'll have to find out why investors who rejected
you did so, or at least what they claim was the reason. That may require
asking, because investors don't always volunteer a lot of detail. Make it
clear when you ask that you're not trying to dispute their decision — just
that if there is some weakness in your plans, you need to know about it. You
won't always get a real reason out of them, but you should at least try.
[12] Dropbox wasn't rejected by all the East Coast VCs. There was one firm
that wanted to invest but tried to lowball them.
[13] Alfred Lin points out that it's doubly important for the explanation of a
startup to be clear and concise, because it has to convince at one remove: it
has to work not just on the partner you talk to, but when that partner
retells it to colleagues.
We consciously optimize for this at YC. When we work with founders to create a
Demo Day pitch, the last step is to imagine how an investor would sell it to
colleagues.
**Thanks** to Marc Andreessen, Sam Altman, Patrick Collison, Ron Conway, Chris
Dixon, Alfred Lin, Ben Horowitz, Steve Huffman, Jessica Livingston, Greg
Mcadoo, Andrew Mason, Geoff Ralston, Yuri Sagalov, Emmett Shear, Rajat Suri,
Garry Tan, Albert Wenger, Fred Wilson, and Qasar Younis for reading drafts of
this.
November 2020
There are some kinds of work that you can't do well without thinking
differently from your peers. To be a successful scientist, for example, it's
not enough just to be correct. Your ideas have to be both correct and novel.
You can't publish papers saying things other people already know. You need to
say things no one else has realized yet.
The same is true for investors. It's not enough for a public market investor
to predict correctly how a company will do. If a lot of other people make the
same prediction, the stock price will already reflect it, and there's no room
to make money. The only valuable insights are the ones most other investors
don't share.
You see this pattern with startup founders too. You don't want to start a
startup to do something that everyone agrees is a good idea, or there will
already be other companies doing it. You have to do something that sounds to
most other people like a bad idea, but that you know isn't — like writing
software for a tiny computer used by a few thousand hobbyists, or starting a
site to let people rent airbeds on strangers' floors.
Ditto for essayists. An essay that told people things they already knew would
be boring. You have to tell them something [_new_](useful.html).
But this pattern isn't universal. In fact, it doesn't hold for most kinds of
work. In most kinds of work — to be an administrator, for example — all you
need is the first half. All you need is to be right. It's not essential that
everyone else be wrong.
There's room for a little novelty in most kinds of work, but in practice
there's a fairly sharp distinction between the kinds of work where it's
essential to be independent-minded, and the kinds where it's not.
I wish someone had told me about this distinction when I was a kid, because
it's one of the most important things to think about when you're deciding what
kind of work you want to do. Do you want to do the kind of work where you can
only win by thinking differently from everyone else? I suspect most people's
unconscious mind will answer that question before their conscious mind has a
chance to. I know mine does.
Independent-mindedness seems to be more a matter of nature than nurture. Which
means if you pick the wrong type of work, you're going to be unhappy. If
you're naturally independent-minded, you're going to find it frustrating to be
a middle manager. And if you're naturally conventional-minded, you're going to
be sailing into a headwind if you try to do original research.
One difficulty here, though, is that people are often mistaken about where
they fall on the spectrum from conventional- to independent-minded.
Conventional-minded people don't like to think of themselves as conventional-
minded. And in any case, it genuinely feels to them as if they make up their
own minds about everything. It's just a coincidence that their beliefs are
identical to their peers'. And the independent-minded, meanwhile, are often
unaware how different their ideas are from conventional ones, at least till
they state them publicly. [1]
By the time they reach adulthood, most people know roughly how smart they are
(in the narrow sense of ability to solve pre-set problems), because they're
constantly being tested and ranked according to it. But schools generally
ignore independent-mindedness, except to the extent they try to suppress it.
So we don't get anything like the same kind of feedback about how independent-
minded we are.
There may even be a phenomenon like Dunning-Kruger at work, where the most
conventional-minded people are confident that they're independent-minded,
while the genuinely independent-minded worry they might not be independent-
minded enough.
___________
Can you make yourself more independent-minded? I think so. This quality may be
largely inborn, but there seem to be ways to magnify it, or at least not to
suppress it.
One of the most effective techniques is one practiced unintentionally by most
nerds: simply to be less aware what conventional beliefs are. It's hard to be
a conformist if you don't know what you're supposed to conform to. Though
again, it may be that such people already are independent-minded. A
conventional-minded person would probably feel anxious not knowing what other
people thought, and make more effort to find out.
It matters a lot who you surround yourself with. If you're surrounded by
conventional-minded people, it will constrain which ideas you can express, and
that in turn will constrain which ideas you have. But if you surround yourself
with independent-minded people, you'll have the opposite experience: hearing
other people say surprising things will encourage you to do the same, and to
think of more.
Because the independent-minded find it uncomfortable to be surrounded by
conventional-minded people, they tend to self-segregate once they have a
chance to. The problem with high school is that they haven't yet had a chance
to. Plus high school tends to be an inward-looking little world whose
inhabitants lack confidence, both of which magnify the forces of conformism.
So high school is often a [_bad time_](nerds.html) for the independent-minded.
But there is some advantage even here: it teaches you what to avoid. If you
later find yourself in a situation that makes you think "this is like high
school," you know you should get out. [2]
Another place where the independent- and conventional-minded are thrown
together is in successful startups. The founders and early employees are
almost always independent-minded; otherwise the startup wouldn't be
successful. But conventional-minded people greatly outnumber independent-
minded ones, so as the company grows, the original spirit of independent-
mindedness is inevitably diluted. This causes all kinds of problems besides
the obvious one that the company starts to suck. One of the strangest is that
the founders find themselves able to speak more freely with founders of other
companies than with their own employees. [3]
Fortunately you don't have to spend all your time with independent-minded
people. It's enough to have one or two you can talk to regularly. And once you
find them, they're usually as eager to talk as you are; they need you too.
Although universities no longer have the kind of monopoly they used to have on
education, good universities are still an excellent way to meet independent-
minded people. Most students will still be conventional-minded, but you'll at
least find clumps of independent-minded ones, rather than the near zero you
may have found in high school.
It also works to go in the other direction: as well as cultivating a small
collection of independent-minded friends, to try to meet as many different
types of people as you can. It will decrease the influence of your immediate
peers if you have several other groups of peers. Plus if you're part of
several different worlds, you can often import ideas from one to another.
But by different types of people, I don't mean demographically different. For
this technique to work, they have to think differently. So while it's an
excellent idea to go and visit other countries, you can probably find people
who think differently right around the corner. When I meet someone who knows a
lot about something unusual (which includes practically everyone, if you dig
deep enough), I try to learn what they know that other people don't. There are
almost always surprises here. It's a good way to make conversation when you
meet strangers, but I don't do it to make conversation. I really want to know.
You can expand the source of influences in time as well as space, by reading
history. When I read history I do it not just to learn what happened, but to
try to get inside the heads of people who lived in the past. How did things
look to them? This is hard to do, but worth the effort for the same reason
it's worth travelling far to triangulate a point.
You can also take more explicit measures to prevent yourself from
automatically adopting conventional opinions. The most general is to cultivate
an attitude of skepticism. When you hear someone say something, stop and ask
yourself "Is that true?" Don't say it out loud. I'm not suggesting that you
impose on everyone who talks to you the burden of proving what they say, but
rather that you take upon yourself the burden of evaluating what they say.
Treat it as a puzzle. You know that some accepted ideas will later turn out to
be wrong. See if you can guess which. The end goal is not to find flaws in the
things you're told, but to find the new ideas that had been concealed by the
broken ones. So this game should be an exciting quest for novelty, not a
boring protocol for intellectual hygiene. And you'll be surprised, when you
start asking "Is this true?", how often the answer is not an immediate yes. If
you have any imagination, you're more likely to have too many leads to follow
than too few.
More generally your goal should be not to let anything into your head
unexamined, and things don't always enter your head in the form of statements.
Some of the most powerful influences are implicit. How do you even notice
these? By standing back and watching how other people get their ideas.
When you stand back at a sufficient distance, you can see ideas spreading
through groups of people like waves. The most obvious are in fashion: you
notice a few people wearing a certain kind of shirt, and then more and more,
until half the people around you are wearing the same shirt. You may not care
much what you wear, but there are intellectual fashions too, and you
definitely don't want to participate in those. Not just because you want
sovereignty over your own thoughts, but because [_unfashionable_](nov.html)
ideas are disproportionately likely to lead somewhere interesting. The best
place to find undiscovered ideas is where no one else is looking. [4]
___________
To go beyond this general advice, we need to look at the internal structure of
independent-mindedness — at the individual muscles we need to exercise, as it
were. It seems to me that it has three components: fastidiousness about truth,
resistance to being told what to think, and curiosity.
Fastidiousness about truth means more than just not believing things that are
false. It means being careful about degree of belief. For most people, degree
of belief rushes unexamined toward the extremes: the unlikely becomes
impossible, and the probable becomes certain. [5] To the independent-minded,
this seems unpardonably sloppy. They're willing to have anything in their
heads, from highly speculative hypotheses to (apparent) tautologies, but on
subjects they care about, everything has to be labelled with a carefully
considered degree of belief. [6]
The independent-minded thus have a horror of ideologies, which require one to
accept a whole collection of beliefs at once, and to treat them as articles of
faith. To an independent-minded person that would seem revolting, just as it
would seem to someone fastidious about food to take a bite of a submarine
sandwich filled with a large variety of ingredients of indeterminate age and
provenance.
Without this fastidiousness about truth, you can't be truly independent-
minded. It's not enough just to have resistance to being told what to think.
Those kinds of people reject conventional ideas only to replace them with the
most random conspiracy theories. And since these conspiracy theories have
often been manufactured to capture them, they end up being less independent-
minded than ordinary people, because they're subject to a much more exacting
master than mere convention. [7]
Can you increase your fastidiousness about truth? I would think so. In my
experience, merely thinking about something you're fastidious about causes
that fastidiousness to grow. If so, this is one of those rare virtues we can
have more of merely by wanting it. And if it's like other forms of
fastidiousness, it should also be possible to encourage in children. I
certainly got a strong dose of it from my father. [8]
The second component of independent-mindedness, resistance to being told what
to think, is the most visible of the three. But even this is often
misunderstood. The big mistake people make about it is to think of it as a
merely negative quality. The language we use reinforces that idea. You're
_un_conventional. You _don't_ care what other people think. But it's not just a
kind of immunity. In the most independent-minded people, the desire not to be
told what to think is a positive force. It's not mere skepticism, but an
active [_delight_](gba.html) in ideas that subvert the conventional wisdom,
the more counterintuitive the better.
Some of the most novel ideas seemed at the time almost like practical jokes.
Think how often your reaction to a novel idea is to laugh. I don't think it's
because novel ideas are funny per se, but because novelty and humor share a
certain kind of surprisingness. But while not identical, the two are close
enough that there is a definite correlation between having a sense of humor
and being independent-minded — just as there is between being humorless and
being conventional-minded. [9]
I don't think we can significantly increase our resistance to being told what
to think. It seems the most innate of the three components of independent-
mindedness; people who have this quality as adults usually showed all too
visible signs of it as children. But if we can't increase our resistance to
being told what to think, we can at least shore it up, by surrounding
ourselves with other independent-minded people.
The third component of independent-mindedness, curiosity, may be the most
interesting. To the extent that we can give a brief answer to the question of
where novel ideas come from, it's curiosity. That's what people are usually
feeling before having them.
In my experience, independent-mindedness and curiosity predict one another
perfectly. Everyone I know who's independent-minded is deeply curious, and
everyone I know who's conventional-minded isn't. Except, curiously, children.
All small children are curious. Perhaps the reason is that even the
conventional-minded have to be curious in the beginning, in order to learn
what the conventions are. Whereas the independent-minded are the gluttons of
curiosity, who keep eating even after they're full. [10]
The three components of independent-mindedness work in concert: fastidiousness
about truth and resistance to being told what to think leave space in your
brain, and curiosity finds new ideas to fill it.
Interestingly, the three components can substitute for one another in much the
same way muscles can. If you're sufficiently fastidious about truth, you don't
need to be as resistant to being told what to think, because fastidiousness
alone will create sufficient gaps in your knowledge. And either one can
compensate for curiosity, because if you create enough space in your brain,
your discomfort at the resulting vacuum will add force to your curiosity. Or
curiosity can compensate for them: if you're sufficiently curious, you don't
need to clear space in your brain, because the new ideas you discover will
push out the conventional ones you acquired by default.
Because the components of independent-mindedness are so interchangeable, you
can have them to varying degrees and still get the same result. So there is
not just a single model of independent-mindedness. Some independent-minded
people are openly subversive, and others are quietly curious. They all know
the secret handshake though.
Is there a way to cultivate curiosity? To start with, you want to avoid
situations that suppress it. How much does the work you're currently doing
engage your curiosity? If the answer is "not much," maybe you should change
something.
The most important active step you can take to cultivate your curiosity is
probably to seek out the topics that engage it. Few adults are equally curious
about everything, and it doesn't seem as if you can choose which topics
interest you. So it's up to you to [_find_](genius.html) them. Or invent them,
if necessary.
Another way to increase your curiosity is to indulge it, by investigating
things you're interested in. Curiosity is unlike most other appetites in this
respect: indulging it tends to increase rather than to sate it. Questions lead
to more questions.
Curiosity seems to be more individual than fastidiousness about truth or
resistance to being told what to think. To the degree people have the latter
two, they're usually pretty general, whereas different people can be curious
about very different things. So perhaps curiosity is the compass here.
Perhaps, if your goal is to discover novel ideas, your motto should not be "do
what you love" so much as "do what you're curious about."
**Notes**
[1] One convenient consequence of the fact that no one identifies as
conventional-minded is that you can say what you like about conventional-
minded people without getting in too much trouble. When I wrote [_"The Four
Quadrants of Conformism"_](conformism.html) I expected a firestorm of rage
from the aggressively conventional-minded, but in fact it was quite muted.
They sensed that there was something about the essay that they disliked
intensely, but they had a hard time finding a specific passage to pin it on.
[2] When I ask myself what in my life is like high school, the answer is
Twitter. It's not just full of conventional-minded people, as anything its
size will inevitably be, but subject to violent storms of conventional-
mindedness that remind me of descriptions of Jupiter. But while it probably is
a net loss to spend time there, it has at least made me think more about the
distinction between independent- and conventional-mindedness, which I probably
wouldn't have done otherwise.
[3] The decrease in independent-mindedness in growing startups is still an
open problem, but there may be solutions.
Founders can delay the problem by making a conscious effort only to hire
independent-minded people. Which of course also has the ancillary benefit that
they have better ideas.
Another possible solution is to create policies that somehow disrupt the force
of conformism, much as control rods slow chain reactions, so that the
conventional-minded aren't as dangerous. The physical separation of Lockheed's
Skunk Works may have had this as a side benefit. Recent examples suggest
employee forums like Slack may not be an unmitigated good.
The most radical solution would be to grow revenues without growing the
company. You think hiring that junior PR person will be cheap, compared to a
programmer, but what will be the effect on the average level of independent-
mindedness in your company? (The growth in staff relative to faculty seems to
have had a similar effect on universities.) Perhaps the rule about outsourcing
work that's not your "core competency" should be augmented by one about
outsourcing work done by people who'd ruin your culture as employees.
Some investment firms already seem to be able to grow revenues without growing
the number of employees. Automation plus the ever increasing articulation of
the "tech stack" suggest this may one day be possible for product companies.
[4] There are intellectual fashions in every field, but their influence
varies. One of the reasons politics, for example, tends to be boring is that
it's so extremely subject to them. The threshold for having opinions about
politics is much [_lower_](identity.html) than the one for having opinions
about set theory. So while there are some ideas in politics, in practice they
tend to be swamped by waves of intellectual fashion.
[5] The conventional-minded are often fooled by the strength of their opinions
into believing that they're independent-minded. But strong convictions are not
a sign of independent-mindedness. Rather the opposite.
[6] Fastidiousness about truth doesn't imply that an independent-minded person
won't be dishonest, but that he won't be deluded. It's sort of like the
definition of a gentleman as someone who is never unintentionally rude.
[7] You see this especially among political extremists. They think themselves
nonconformists, but actually they're niche conformists. Their opinions may be
different from the average person's, but they are often more influenced by
their peers' opinions than the average person's are.
[8] If we broaden the concept of fastidiousness about truth so that it
excludes pandering, bogusness, and pomposity as well as falsehood in the
strict sense, our model of independent-mindedness can expand further into the
arts.
[9] This correlation is far from perfect, though. Gödel and Dirac don't seem
to have been very strong in the humor department. But someone who is both
"neurotypical" and humorless is very likely to be conventional-minded.
[10] Exception: gossip. Almost everyone is curious about gossip.
**Thanks** to Trevor Blackwell, Paul Buchheit, Patrick Collison, Jessica
Livingston, Robert Morris, Harj Taggar, and Peter Thiel for reading drafts of
this.
February 2009
Hacker News was two years old last week. Initially it was supposed to be a
side project—an application to sharpen Arc on, and a place for current and
future Y Combinator founders to exchange news. It's grown bigger and taken up
more time than I expected, but I don't regret that because I've learned so
much from working on it.
**Growth**
When we launched in February 2007, weekday traffic was around 1600 daily
uniques. It's since [grown](http://ycombinator.com/images/2yeartraffic.png) to
around 22,000. This growth rate is a bit higher than I'd like. I'd like the
site to grow, since a site that isn't growing at least slowly is probably
dead. But I wouldn't want it to grow as large as Digg or Reddit—mainly because
that would dilute the character of the site, but also because I don't want to
spend all my time dealing with scaling.
I already have problems enough with that. Remember, the original motivation
for HN was to test a new programming language, and moreover one that's focused
on experimenting with language design, not performance. Every time the site
gets slow, I fortify myself by recalling McIlroy and Bentley's famous quote
> The key to performance is elegance, not battalions of special cases.
and look for the bottleneck I can remove with least code. So far I've been
able to keep up, in the sense that performance has remained consistently
mediocre despite 14x growth. I don't know what I'll do next, but I'll probably
think of something.
This is my attitude to the site generally. Hacker News is an experiment, and
an experiment in a very young field. Sites of this type are only a few years
old. Internet conversation generally is only a few decades old. So we've
probably only discovered a fraction of what we eventually will.
That's why I'm so optimistic about HN. When a technology is this young, the
existing solutions are usually terrible; which means it must be possible to do
much better; which means many problems that seem insoluble aren't. Including,
I hope, the problem that has afflicted so many previous communities: being
ruined by growth.
**Dilution**
Users have worried about that since the site was a few months old. So far
these alarms have been false, but they may not always be. Dilution is a hard
problem. But probably soluble; it doesn't mean much that open conversations
have "always" been destroyed by growth when "always" equals 20 instances.
But it's important to remember we're trying to solve a new problem, because
that means we're going to have to try new things, most of which probably won't
work. A couple weeks ago I tried displaying the names of users with the
highest average comment scores in orange. [1] That was a mistake. Suddenly a
culture that had been more or less united was divided into haves and have-
nots. I didn't realize how united the culture had been till I saw it divided.
It was painful to watch. [2]
So orange usernames won't be back. (Sorry about that.) But there will be other
equally broken-seeming ideas in the future, and the ones that turn out to work
will probably seem just as broken as those that don't.
Probably the most important thing I've learned about dilution is that it's
measured more in behavior than users. It's bad behavior you want to keep out
more than bad people. User behavior turns out to be surprisingly malleable. If
people are [expected](http://ycombinator.com/newswelcome.html) to behave well,
they tend to; and vice versa.
Though of course forbidding bad behavior does tend to keep away bad people,
because they feel uncomfortably constrained in a place where they have to
behave well. But this way of keeping them out is gentler and probably also
more effective than overt barriers.
It's pretty clear now that the broken windows theory applies to community
sites as well. The theory is that minor forms of bad behavior encourage worse
ones: that a neighborhood with lots of graffiti and broken windows becomes one
where robberies occur. I was living in New York when Giuliani introduced the
reforms that made the broken windows theory famous, and the transformation was
miraculous. And I was a Reddit user when the opposite happened there, and the
transformation was equally dramatic.
I'm not criticizing Steve and Alexis. What happened to Reddit didn't happen
out of neglect. From the start they had a policy of censoring nothing except
spam. Plus Reddit had different goals from Hacker News. Reddit was a startup,
not a side project; its goal was to grow as fast as possible. Combine rapid
growth and zero censorship, and the result is a free for all. But I don't
think they'd do much differently if they were doing it again. Measured by
traffic, Reddit is much more successful than Hacker News.
But what happened to Reddit won't inevitably happen to HN. There are several
local maxima. There can be places that are free for alls and places that are
more thoughtful, just as there are in the real world; and people will behave
differently depending on which they're in, just as they do in the real world.
I've observed this in the wild. I've seen people cross-posting on Reddit and
Hacker News who actually took the trouble to write two versions, a flame for
Reddit and a more subdued version for HN.
**Submissions**
There are two major types of problems a site like Hacker News needs to avoid:
bad stories and bad comments. So far the danger of bad stories seems smaller.
The stories on the frontpage now are still roughly the ones that would have
been there when HN started.
I once thought I'd have to weight votes to keep crap off the frontpage, but I
haven't had to yet. I wouldn't have predicted the frontpage would hold up so
well, and I'm not sure why it has. Perhaps only the more thoughtful users care
enough to submit and upvote links, so the marginal cost of one random new user
approaches zero. Or perhaps the frontpage protects itself, by advertising what
type of submission is expected.
The most dangerous thing for the frontpage is stuff that's too easy to upvote.
If someone proves a new theorem, it takes some work by the reader to decide
whether or not to upvote it. An amusing cartoon takes less. A rant with a
rallying cry as the title takes zero, because people vote it up without even
reading it.
Hence what I call the Fluff Principle: on a user-voted news site, the links
that are easiest to judge will take over unless you take specific measures to
prevent it.
Hacker News has two kinds of protections against fluff. The most common types
of fluff links are banned as off-topic. Pictures of kittens, political
diatribes, and so on are explicitly banned. This keeps out most fluff, but not
all of it. Some links are both fluff, in the sense of being very short, and
also on topic.
There's no single solution to that. If a link is just an empty rant, editors
will sometimes kill it even if it's on topic in the sense of being about
hacking, because it's not on topic by the real standard, which is to engage
one's intellectual curiosity. If the posts on a site are characteristically of
this type I sometimes ban it, which means new stuff at that url is auto-
killed. If a post has a linkbait title, editors sometimes rephrase it to be
more matter-of-fact. This is especially necessary with links whose titles are
rallying cries, because otherwise they become implicit "vote up if you believe
such-and-such" posts, which are the most extreme form of fluff.
The techniques for dealing with links have to evolve, because the links do.
The existence of aggregators has already affected what they aggregate. Writers
now deliberately write things to draw traffic from aggregators—sometimes even
specific ones. (No, the irony of this statement is not lost on me.) Then there
are the more sinister mutations, like linkjacking—posting a paraphrase of
someone else's article and submitting that instead of the original. These can
get a lot of upvotes, because a lot of what's good in an article often
survives; indeed, the closer the paraphrase is to plagiarism, the more
survives. [3]
I think it's important that a site that kills submissions provide a way for
users to see what got killed if they want to. That keeps editors honest, and
just as importantly, makes users confident they'd know if the editors stopped
being honest. HN users can do this by flipping a switch called showdead in
their profile. [4]
**Comments**
Bad comments seem to be a harder problem than bad submissions. While the
quality of links on the frontpage of HN hasn't changed much, the quality of
the median comment may have decreased somewhat.
There are two main kinds of badness in comments: meanness and stupidity. There
is a lot of overlap between the two—mean comments are disproportionately
likely also to be dumb—but the strategies for dealing with them are different.
Meanness is easier to control. You can have rules saying one shouldn't be
mean, and if you enforce them it seems possible to keep a lid on meanness.
Keeping a lid on stupidity is harder, perhaps because stupidity is not so
easily distinguishable. Mean people are more likely to know they're being mean
than stupid people are to know they're being stupid.
The most dangerous form of stupid comment is not the long but mistaken
argument, but the dumb joke. Long but mistaken arguments are actually quite
rare. There is a strong correlation between comment quality and length; if you
wanted to compare the quality of comments on community sites, average length
would be a good predictor. Probably the cause is human nature rather than
anything specific to comment threads. Probably it's simply that stupidity more
often takes the form of having few ideas than wrong ones.
Whatever the cause, stupid comments tend to be short. And since it's hard to
write a short comment that's distinguished for the amount of information it
conveys, people try to distinguish them instead by being funny. The most
tempting format for stupid comments is the supposedly witty put-down, probably
because put-downs are the easiest form of humor. [5] So one advantage of
forbidding meanness is that it also cuts down on these.
Bad comments are like kudzu: they take over rapidly. Comments have much more
effect on new comments than submissions have on new submissions. If someone
submits a lame article, the other submissions don't all become lame. But if
someone posts a stupid comment on a thread, that sets the tone for the region
around it. People reply to dumb jokes with dumb jokes.
Maybe the solution is to add a delay before people can respond to a comment,
and make the length of the delay inversely proportional to some prediction of
its quality. Then dumb threads would grow slower. [6]
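Here is one possible sketch of that idea. Nothing like this is said to have
shipped; the constant and the length-based predictor are placeholder
assumptions (the essay only notes that comment length correlates with
quality):

```python
# A sketch of the reply-delay idea above, with a placeholder quality
# predictor. The constant and the predictor are invented for illustration.

BASE_DELAY_SECONDS = 600  # hypothetical tuning constant

def predicted_quality(comment: str) -> float:
    """Placeholder predictor mapping a comment to a score in (0, 1]."""
    return max(min(len(comment) / 500, 1.0), 0.05)  # floor avoids division by zero

def reply_delay(comment: str) -> float:
    """Seconds before replies open, inversely proportional to predicted
    quality, so dumb threads grow slower."""
    return BASE_DELAY_SECONDS / predicted_quality(comment)

print(reply_delay("lol fail"))   # low predicted quality -> 12000.0 seconds
print(reply_delay("x" * 500))    # high predicted quality -> 600.0 seconds
```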
**People**
I notice most of the techniques I've described are conservative: they're aimed
at preserving the character of the site rather than enhancing it. I don't
think that's a bias of mine. It's due to the shape of the problem. Hacker News
had the good fortune to start out good, so in this case it's literally a
matter of preservation. But I think this principle would also apply to sites
with different origins.
The good things in a community site come from people more than technology;
it's mainly in the prevention of bad things that technology comes into play.
Technology certainly can enhance discussion. Nested comments do, for example.
But I'd rather use a site with primitive features and smart, nice users than a
more advanced one whose users were idiots or [trolls](trolls.html).
So the most important thing a community site can do is attract the kind of
people it wants. A site trying to be as big as possible wants to attract
everyone. But a site aiming at a particular subset of users has to attract
just those—and just as importantly, repel everyone else. I've made a conscious
effort to do this on HN. The graphic design is as plain as possible, and the
site rules discourage dramatic link titles. The goal is that the only thing to
interest someone arriving at HN for the first time should be the ideas
expressed there.
The downside of tuning a site to attract certain people is that, to those
people, it can be too attractive. I'm all too aware how addictive Hacker News
can be. For me, as for many users, it's a kind of virtual town square. When I
want to take a break from working, I walk into the square, just as I might
into Harvard Square or University Ave in the physical world. [7] But an online
square is more dangerous than a physical one. If I spent half the day
loitering on University Ave, I'd notice. I have to walk a mile to get there,
and sitting in a cafe feels different from working. But visiting an online
forum takes just a click, and feels superficially very much like working. You
may be wasting your time, but you're not idle. Someone is
[wrong](http://xkcd.com/386/) on the Internet, and you're fixing the problem.
Hacker News is definitely useful. I've learned a lot from things I've read on
HN. I've written several essays that began as comments there. So I wouldn't
want the site to go away. But I would like to be sure it's not a net drag on
productivity. What a disaster that would be, to attract thousands of smart
people to a site that caused them to waste lots of time. I wish I could be
100% sure that's not a description of HN.
I feel like the addictiveness of games and social applications is still a
mostly unsolved problem. The situation now is like it was with crack in the
1980s: we've invented terribly addictive new things, and we haven't yet
evolved ways to protect ourselves from them. We will eventually, and that's
one of the problems I hope to focus on next.
**Notes**
[1] I tried ranking users by both average and median comment score, and
average (with the high score thrown out) seemed the more accurate predictor of
high quality. Median may be the more accurate predictor of low quality though.
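A minimal sketch of the winning variant, assuming each user's comment scores
are available as a list:

```python
def user_rank_score(scores: list[float]) -> float:
    # Average comment score with the single highest score thrown out.
    if len(scores) < 2:
        return scores[0] if scores else 0.0
    trimmed = sorted(scores)[:-1]
    return sum(trimmed) / len(trimmed)
```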
[2] Another thing I learned from this experiment is that if you're going to
distinguish between people, you better be sure you do it right. This is one
problem where rapid prototyping doesn't work.
Indeed, that's the intellectually honest argument for not discriminating
between various types of people. The reason not to do it is not that
everyone's the same, but that it's bad to do wrong and hard to do right.
[3] When I catch egregiously linkjacked posts I replace the url with that of
whatever they copied. Sites that habitually linkjack get banned.
[4] Digg is notorious for its lack of transparency. The root of the problem is
not that the guys running Digg are especially sneaky, but that they use the
wrong algorithm for generating their frontpage. Instead of bubbling up from
the bottom as they get more votes, as on Reddit, stories start at the top and
get pushed down by new arrivals.
The reason for the difference is that Digg is derived from Slashdot, while
Reddit is derived from Delicious/popular. Digg is Slashdot with voting instead
of editors, and Reddit is Delicious/popular with voting instead of
bookmarking. (You can still see fossils of their origins in their graphic
design.)
Digg's algorithm is very vulnerable to gaming, because any story that makes it
onto the frontpage is the new top story. Which in turn forces Digg to respond
with extreme countermeasures. A lot of startups have some kind of secret about
the subterfuges they had to resort to in the early days, and I suspect Digg's
is the extent to which the top stories were de facto chosen by human editors.
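To make the difference concrete, here's a toy sketch of the two ranking
schemes. The vote/age-decay formula is a generic assumption, not either
site's actual code:

```python
import time

def bubble_up_rank(story: dict) -> float:
    # Reddit-style: stories start at the bottom and rise as votes accumulate,
    # with age steadily dragging them back down.
    age_hours = (time.time() - story["submitted_at"]) / 3600
    return story["votes"] / (age_hours + 2) ** 1.5

def push_down_rank(story: dict) -> float:
    # Digg-style: a story that clears the promotion threshold starts at the
    # top and is displaced only by newer arrivals.
    return story["promoted_at"]
```

Under the push-down scheme, gaming a single promotion buys the top slot
outright, which is why it invites such extreme countermeasures.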
[5] The dialog on Beavis and Butthead was composed largely of these, and when
I read comments on really bad sites I can hear them in their voices.
[6] I suspect most of the techniques for discouraging stupid comments have yet
to be discovered. Xkcd implemented a particularly clever one in its IRC
channel: don't allow the same thing twice. Once someone has said "fail," no
one can ever say it again. This would penalize short comments especially,
because they have less room to avoid collisions in.
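The rule fits in a few lines:

```python
seen: set[str] = set()

def allow_message(msg: str) -> bool:
    # Once anyone has said "fail," no one can ever say it again.
    key = msg.strip().lower()
    if key in seen:
        return False
    seen.add(key)
    return True
```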
Another promising idea is the [stupid filter](http://stupidfilter.org), which
is just like a probabilistic spam filter, but trained on corpora of stupid and
non-stupid comments instead.
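A minimal sketch of such a filter, written as a word-level Bayesian
classifier with add-one smoothing; stupidfilter.org's actual model may
differ:

```python
import math
from collections import Counter

class StupidFilter:
    def __init__(self):
        self.words = {"stupid": Counter(), "ok": Counter()}
        self.totals = {"stupid": 0, "ok": 0}

    def train(self, label: str, text: str) -> None:
        tokens = text.lower().split()
        self.words[label].update(tokens)
        self.totals[label] += len(tokens)

    def p_stupid(self, text: str) -> float:
        # Sum of per-word log odds, exactly as in a spam filter, but the
        # training corpora are stupid and non-stupid comments.
        log_odds = 0.0
        for w in text.lower().split():
            p_s = (self.words["stupid"][w] + 1) / (self.totals["stupid"] + 2)
            p_o = (self.words["ok"][w] + 1) / (self.totals["ok"] + 2)
            log_odds += math.log(p_s / p_o)
        return 1 / (1 + math.exp(-log_odds))
```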
You may not have to kill bad comments to solve the problem. Comments at the
bottom of a long thread are rarely seen, so it may be enough to incorporate a
prediction of quality in the comment sorting algorithm.
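Reusing the hypothetical `predict_quality` scorer sketched earlier, the
sorting version is a one-liner:

```python
def sort_comments(comments: list[str]) -> list[str]:
    # Highest predicted quality first; bad comments sink to the rarely-read
    # bottom of the thread instead of being killed outright.
    return sorted(comments, key=predict_quality, reverse=True)
```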
[7] What makes most suburbs so demoralizing is that there's no center to walk
to.
**Thanks** to Justin Kan, Jessica Livingston, Robert Morris, Alexis Ohanian,
Emmet Shear, and Fred Wilson for reading drafts of this.
* * *
April 2021
Every year since 1982, _Forbes_ magazine has published a list of the richest
Americans. If we compare the 100 richest people in 1982 to the 100 richest in
2020, we notice some big differences.
In 1982 the most common source of wealth was inheritance. Of the 100 richest
people, 60 inherited from an ancestor. There were 10 du Pont heirs alone. By
2020 the number of heirs had been cut in half, accounting for only 27 of the
biggest 100 fortunes.
Why would the percentage of heirs decrease? Not because inheritance taxes
increased. In fact, they decreased significantly during this period. The
reason the percentage of heirs has decreased is not that fewer people are
inheriting great fortunes, but that more people are making them.
How are people making these new fortunes? Roughly 3/4 by starting companies
and 1/4 by investing. Of the 73 new fortunes in 2020, 56 derive from founders'
or early employees' equity (52 founders, 2 early employees, and 2 wives of
founders), and 17 from managing investment funds.
There were no fund managers among the 100 richest Americans in 1982. Hedge
funds and private equity firms existed in 1982, but none of their founders
were rich enough yet to make it into the top 100. Two things changed: fund
managers discovered new ways to generate high returns, and more investors were
willing to trust them with their money. [1]
But the main source of new fortunes now is starting companies, and when you
look at the data, you see big changes there too. People get richer from
starting companies now than they did in 1982, because the companies do
different things.
In 1982, there were two dominant sources of new wealth: oil and real estate.
Of the 40 new fortunes in 1982, at least 24 were due primarily to oil or real
estate. Now only a small number are: of the 73 new fortunes in 2020, 4 were
due to real estate and only 2 to oil.
By 2020 the biggest source of new wealth was what are sometimes called "tech"
companies. Of the 73 new fortunes, about 30 derive from such companies. These
are particularly common among the richest of the rich: 8 of the top 10
fortunes in 2020 were new fortunes of this type.
Arguably it's slightly misleading to treat tech as a category. Isn't Amazon
really a retailer, and Tesla a car maker? Yes and no. Maybe in 50 years, when
what we call tech is taken for granted, it won't seem right to put these two
businesses in the same category. But at the moment at least, there is
definitely something they share that distinguishes them. What
retailer starts AWS? What car maker is run by someone who also has a rocket
company?
The tech companies behind the top 100 fortunes also form a well-differentiated
group in the sense that they're all companies that venture capitalists would
readily invest in, and the others mostly not. And there's a reason why: these
are mostly companies that win by having better technology, rather than just a
CEO who's really driven and good at making deals.
To that extent, the rise of the tech companies represents a qualitative
change. The oil and real estate magnates of the 1982 Forbes 400 didn't win by
making better technology. They won by being really driven and good at making
deals. [2] And indeed, that way of getting rich is so old that it predates the
Industrial Revolution. The courtiers who got rich in the (nominal) service of
European royal houses in the 16th and 17th centuries were also, as a rule,
really driven and good at making deals.
People who don't look any deeper than the Gini coefficient look back on the
world of 1982 as the good old days, because those who got rich then didn't get
as rich. But if you dig into _how_ they got rich, the old days don't look so
good. In 1982, 84% of the richest 100 people got rich by inheritance,
extracting natural resources, or doing real estate deals. Is that really
better than a world in which the richest people get rich by starting tech
companies?
Why are people starting so many more new companies than they used to, and why
are they getting so rich from it? The answer to the first question, curiously
enough, is that it's misphrased. We shouldn't be asking why people are
starting companies, but why they're starting companies _again_. [3]
In 1892, the _New York Tribune_ compiled a list of all the millionaires
in America. They found 4047 of them. How many had inherited their wealth then?
Only about 20%, which is less than the proportion of heirs today. And when you
investigate the sources of the new fortunes, 1892 looks even more like today.
Hugh Rockoff found that "many of the richest ... gained their initial edge
from the new technology of mass production." [4]
So it's not 2020 that's the anomaly here, but 1982. The real question is why
so few people had gotten rich from starting companies in 1982. And the answer
is that even as the _Tribune_'s list was being compiled, a wave of
[_consolidation_](re.html) was sweeping through the American economy. In the
late 19th and early 20th centuries, financiers like J. P. Morgan combined
thousands of smaller companies into a few hundred giant ones with commanding
economies of scale. By the end of World War II, as Michael Lind writes, "the
major sectors of the economy were either organized as government-backed
cartels or dominated by a few oligopolistic corporations." [5]
In 1960, most of the people who start startups today would have gone to work
for one of them. You could get rich from starting your own company in 1890 and
in 2020, but in 1960 it was not really a viable option. You couldn't break
through the oligopolies to get at the markets. So the prestigious route in
1960 was not to start your own company, but to work your way up the corporate
ladder at an existing one. [6]
Making everyone a corporate employee decreased economic inequality (and every
other kind of variation), but if your model of normal is the mid 20th century,
you have a very misleading model in that respect. J. P. Morgan's economy
turned out to be just a phase, and starting in the 1970s, it began to break
up.
Why did it break up? Partly senescence. The big companies that seemed models
of scale and efficiency in 1930 had by 1970 become slack and bloated. By 1970
the rigid structure of the economy was full of cosy nests that various groups
had built to insulate themselves from market forces. During the Carter
administration the federal government realized something was amiss and began,
in a process they called "deregulation," to roll back the policies that
propped up the oligopolies.
But it wasn't just decay from within that broke up J. P. Morgan's economy.
There was also pressure from without, in the form of new technology, and
particularly microelectronics. The best way to envision what happened is to
imagine a pond with a crust of ice on top. Initially the only way from the
bottom to the surface is around the edges. But as the ice crust weakens, you
start to be able to punch right through the middle.
The edges of the pond were pure tech: companies that actually described
themselves as being in the electronics or software business. When you used the
word "startup" in 1990, that was what you meant. But now startups are punching
right through the middle of the ice crust and displacing incumbents like
retailers and TV networks and car companies. [7]
But though the breakup of J. P. Morgan's economy created a new world in the
technological sense, it was a reversion to the norm in the social sense. If
you only look back as far as the mid 20th century, it seems like people
getting rich by starting their own companies is a recent phenomenon. But if
you look back further, you realize it's actually the default. So what we
should expect in the future is more of the same. Indeed, we should expect both
the number and wealth of founders to grow, because every decade it gets easier
to start a startup.
Part of the reason it's getting easier to start a startup is social. Society
is (re)assimilating the concept. If you start one now, your parents won't
freak out the way they would have a generation ago, and knowledge about how to
do it is much more widespread. But the main reason it's easier to start a
startup now is that it's cheaper. Technology has driven down the cost of both
building products and acquiring customers.
The decreasing cost of starting a startup has in turn changed the balance of
power between founders and investors. Back when starting a startup meant
building a factory, you needed investors' permission to do it at all. But now
investors need founders more than founders need investors, and that, combined
with the increasing amount of venture capital available, has driven up
valuations. [8]
So the decreasing cost of starting a startup increases the number of rich
people in two ways: it means that more people start them, and that those who
do can raise money on better terms.
But there's also a third factor at work: the companies themselves are more
valuable, because newly founded companies grow faster than they used to.
Technology hasn't just made it cheaper to build and distribute things, but
faster too.
This trend has been running for a long time. IBM, founded in 1896, took 45
years to reach a billion 2020 dollars in revenue. Hewlett-Packard, founded in
1939, took 25 years. Microsoft, founded in 1975, took 13 years. Now the norm
for fast-growing companies is 7 or 8 years. [9]
Fast growth has a double effect on the value of founders' stock. The value of
a company is a function of its revenue and its growth rate. So if a company
grows faster, you not only get to a billion dollars in revenue sooner, but the
company is more valuable when it reaches that point than it would be if it
were growing slower.
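A toy calculation makes the first half of that double effect concrete. The
starting revenue and growth rates here are arbitrary assumptions:

```python
import math

def years_to_1b(growth_multiple: float, start_revenue: float = 1e6) -> float:
    # Years until revenue reaches $1B under steady exponential growth.
    return math.log(1e9 / start_revenue) / math.log(growth_multiple)

print(round(years_to_1b(2.0)))  # ~10 years doubling annually
print(round(years_to_1b(1.5)))  # ~17 years at 50% a year
```

And when the faster-growing company does arrive at $1B, its higher growth
rate also earns it a higher multiple on that revenue.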
That's why founders sometimes get so rich so young now. The low initial cost
of starting a startup means founders can start young, and the fast growth of
companies today means that if they succeed they could be surprisingly rich
just a few years later.
It's easier now to start and grow a company than it has ever been. That means
more people start them, that those who do get better terms from investors, and
that the resulting companies become more valuable. Once you understand how
these mechanisms work, and that startups were suppressed for most of the 20th
century, you don't have to resort to some vague right turn the country took
under Reagan to explain why America's Gini coefficient is increasing. Of
course the Gini coefficient is increasing. With more people starting more
valuable companies, how could it not be?
**Notes**
[1] Investment firms grew rapidly after a regulatory change by the Labor
Department in 1978 allowed pension funds to invest in them, but the effects of
this growth were not yet visible in the top 100 fortunes in 1982.
[2] George Mitchell deserves mention as an exception. Though really driven and
good at making deals, he was also the first to figure out how to use fracking
to get natural gas out of shale.
[3] When I say people are starting more companies, I mean the type of company
meant to [_grow_](growth.html) very big. There has actually been a decrease in
the last couple decades in the overall number of new companies. But the vast
majority of companies are small retail and service businesses. So what the
statistics about the decreasing number of new businesses mean is that people
are starting fewer shoe stores and barber shops.
People sometimes get [_confused_](https://www.inc.com/magazine/201505/leigh-
buchanan/the-vanishing-startups-in-decline.html) when they see a graph
labelled "startups" that's going down, because there are two senses of the
word "startup": (1) the founding of a company, and (2) a particular type of
company designed to grow big fast. The statistics mean startup in sense (1),
not sense (2).
[4] Rockoff, Hugh. "Great Fortunes of the Gilded Age." NBER Working Paper
14555, 2008.
[5] Lind, Michael. _Land of Promise._ HarperCollins, 2012.
It's also likely that the high tax rates in the mid 20th century deterred
people from starting their own companies. Starting one's own company is risky,
and when risk isn't rewarded, people opt for [_safety_](inequality.html)
instead.
But it wasn't simply cause and effect. The oligopolies and high tax rates of
the mid 20th century were all of a piece. Lower taxes are not just a cause of
entrepreneurship, but an effect as well: the people getting rich in the mid
20th century from real estate and oil exploration lobbied for and got huge tax
loopholes that made their effective tax rate much lower, and presumably if it
had been more common to grow big companies by building new technology, the
people doing that would have lobbied for their own loopholes as well.
[6] That's why the people who did get rich in the mid 20th century so often
got rich from oil exploration or real estate. Those were the two big areas of
the economy that weren't susceptible to consolidation.
[7] The pure tech companies used to be called "high technology" startups. But
now that startups can punch through the middle of the ice crust, we don't need
a separate name for the edges, and the term "high-tech" has a decidedly
[_retro_](https://books.google.com/ngrams/graph?content=high+tech&year_start=1900&year_end=2019&corpus=en-2019&smoothing=3)
sound.
[8] Higher valuations mean you either sell less stock to get a given amount of
money, or get more money for a given amount of stock. The typical startup does
some of each. Obviously you end up richer if you keep more stock, but you
should also end up richer if you raise more money, because (a) it should make
the company more successful, and (b) you should be able to last longer before
the next round, or not even need one. Notice all those shoulds though. In
practice a lot of money slips through them.
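The dilution arithmetic behind the first sentence is simple: the fraction of
the company you sell is the money raised divided by the post-money valuation.

```python
def fraction_sold(raise_amount: float, post_money_valuation: float) -> float:
    # Doubling the valuation halves the stock sold for the same money.
    return raise_amount / post_money_valuation

print(fraction_sold(2e6, 10e6))  # 0.2: $2M at a $10M valuation costs 20%
print(fraction_sold(2e6, 20e6))  # 0.1: the same $2M at $20M costs only 10%
```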
It might seem that the huge rounds raised by startups nowadays contradict the
claim that it has become cheaper to start one. But there's no contradiction
here; the startups that raise the most are the ones doing it by choice, in
order to grow faster, not the ones doing it because they need the money to
survive. There's nothing like not needing money to make people offer it to
you.
You would think, after having been on the side of labor in its fight with
capital for almost two centuries, that the far left would be happy that labor
has finally prevailed. But none of them seem to be. You can almost hear them
saying "No, no, not _that_ way."
[9] IBM was created in 1911 by merging three companies, the most important of
which was Herman Hollerith's Tabulating Machine Company, founded in 1896. In
1941 its revenues were $60 million.
Hewlett-Packard's revenues in 1964 were $125 million.
Microsoft's revenues in 1988 were $590 million.
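As a rough check of these milestones, using approximate CPI multipliers to
convert to 2020 dollars (the multipliers are estimates of mine, not figures
from the essay):

```python
# (name, founded, milestone year, revenue then, rough CPI multiplier to 2020)
milestones = [
    ("IBM", 1896, 1941, 60e6, 17.6),
    ("Hewlett-Packard", 1939, 1964, 125e6, 8.3),
    ("Microsoft", 1975, 1988, 590e6, 2.2),
]
for name, founded, year, revenue, cpi in milestones:
    print(f"{name}: {year - founded} years to ~${revenue * cpi / 1e9:.1f}B")
# IBM: 45 years to ~$1.1B
# Hewlett-Packard: 25 years to ~$1.0B
# Microsoft: 13 years to ~$1.3B
```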
**Thanks** to Trevor Blackwell, Jessica Livingston, Bob Lesko, Robert Morris,
Russ Roberts, and Alex Tabarrok for reading drafts of this, and to Jon
Erlichman for growth data.
* * *
July 2013
One of the most common types of advice we give at Y Combinator is to do things
that don't scale. A lot of would-be founders believe that startups either take
off or don't. You build something, make it available, and if you've made a
better mousetrap, people beat a path to your door as promised. Or they don't,
in which case the market must not exist. [1]
Actually startups take off because the founders make them take off. There may
be a handful that just grew by themselves, but usually it takes some sort of
push to get them going. A good metaphor would be the cranks that car engines
had before they got electric starters. Once the engine was going, it would
keep going, but there was a separate and laborious process to get it going.
**Recruit**
The most common unscalable thing founders have to do at the start is to
recruit users manually. Nearly all startups have to. You can't wait for users
to come to you. You have to go out and get them.
Stripe is one of the most successful startups we've funded, and the problem
they solved was an urgent one. If anyone could have sat back and waited for
users, it was Stripe. But in fact they're famous within YC for aggressive
early user acquisition.
Startups building things for other startups have a big pool of potential users
in the other companies we've funded, and none took better advantage of it than
Stripe. At YC we use the term "Collison installation" for the technique they
invented. More diffident founders ask "Will you try our beta?" and if the
answer is yes, they say "Great, we'll send you a link." But the Collison
brothers weren't going to wait. When anyone agreed to try Stripe they'd say
"Right then, give me your laptop" and set them up on the spot.
There are two reasons founders resist going out and recruiting users
individually. One is a combination of shyness and laziness. They'd rather sit
at home writing code than go out and talk to a bunch of strangers and probably
be rejected by most of them. But for a startup to succeed, at least one
founder (usually the CEO) will have to spend a lot of time on sales and
marketing. [2]
The other reason founders ignore this path is that the absolute numbers seem
so small at first. This can't be how the big, famous startups got started,
they think. The mistake they make is to underestimate the power of compound
growth. We encourage every startup to measure their progress by weekly [growth
rate](growth.html). If you have 100 users, you need to get 10 more next week
to grow 10% a week. And while 110 may not seem much better than 100, if you
keep growing at 10% a week you'll be surprised how big the numbers get. After
a year you'll have 14,000 users, and after 2 years you'll have 2 million.
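That arithmetic is easy to verify:

```python
users = 100
for week in range(1, 105):
    users *= 1.10  # 10% weekly growth
    if week in (52, 104):
        print(f"week {week}: {round(users):,} users")
# prints ~14,200 users at week 52 and ~2.0 million at week 104
```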
You'll be doing different things when you're acquiring users a thousand at a
time, and growth has to slow down eventually. But if the market exists you can
usually start by recruiting users manually and then gradually switch to less
manual methods. [3]
Airbnb is a classic example of this technique. Marketplaces are so hard to get
rolling that you should expect to take heroic measures at first. In Airbnb's
case, these consisted of going door to door in New York, recruiting new users
and helping existing ones improve their listings. When I remember the Airbnbs
during YC, I picture them with rolly bags, because when they showed up for
Tuesday dinners they'd always just flown back from somewhere.
**Fragile**
Airbnb now seems like an unstoppable juggernaut, but early on it was so
fragile that about 30 days of going out and engaging in person with users made
the difference between success and failure.
That initial fragility was not a unique feature of Airbnb. Almost all startups
are fragile initially. And that's one of the biggest things inexperienced
founders and investors (and reporters and know-it-alls on forums) get wrong
about them. They unconsciously judge larval startups by the standards of
established ones. They're like someone looking at a newborn baby and
concluding "there's no way this tiny creature could ever accomplish anything."
It's harmless if reporters and know-it-alls dismiss your startup. They always
get things wrong. It's even ok if investors dismiss your startup; they'll
change their minds when they see growth. The big danger is that you'll dismiss
your startup yourself. I've seen it happen. I often have to encourage founders
who don't see the full potential of what they're building. Even Bill Gates
made that mistake. He returned to Harvard for the fall semester after starting
Microsoft. He didn't stay long, but he wouldn't have returned at all if he'd
realized Microsoft was going to be even a fraction of the size it turned out
to be. [4]
The question to ask about an early stage startup is not "is this company
taking over the world?" but "how big could this company get if the founders
did the right things?" And the right things often seem both laborious and
inconsequential at the time. Microsoft can't have seemed very impressive when
it was just a couple guys in Albuquerque writing Basic interpreters for a
market of a few thousand hobbyists (as they were then called), but in
retrospect that was the optimal path to dominating microcomputer software. And
I know Brian Chesky and Joe Gebbia didn't feel like they were en route to the
big time as they were taking "professional" photos of their first hosts'
apartments. They were just trying to survive. But in retrospect that too was
the optimal path to dominating a big market.
How do you find users to recruit manually? If you build something to solve
[your own problems](startupideas.html), then you only have to find your peers,
which is usually straightforward. Otherwise you'll have to make a more
deliberate effort to locate the most promising vein of users. The usual way to
do that is to get some initial set of users by doing a comparatively
untargeted launch, and then to observe which kind seem most enthusiastic, and
seek out more like them. For example, Ben Silbermann noticed that a lot of the
earliest Pinterest users were interested in design, so he went to a conference
of design bloggers to recruit users, and that worked well. [5]
**Delight**
You should take extraordinary measures not just to acquire users, but also to
make them happy. For as long as they could (which turned out to be
surprisingly long), Wufoo sent each new user a hand-written thank you note.
Your first users should feel that signing up with you was one of the best
choices they ever made. And you in turn should be racking your brains to think
of new ways to delight them.
Why do we have to teach startups this? Why is it counterintuitive for
founders? Three reasons, I think.
One is that a lot of startup founders are trained as engineers, and customer
service is not part of the training of engineers. You're supposed to build
things that are robust and elegant, not be slavishly attentive to individual
users like some kind of salesperson. Ironically, part of the reason
engineering is traditionally averse to handholding is that its traditions date
from a time when engineers were less powerful — when they were only in charge
of their narrow domain of building things, rather than running the whole show.
You can be ornery when you're Scotty, but not when you're Kirk.
Another reason founders don't focus enough on individual customers is that
they worry it won't scale. But when founders of larval startups worry about
this, I point out that in their current state they have nothing to lose. Maybe
if they go out of their way to make existing users super happy, they'll one
day have too many to do so much for. That would be a great problem to have.
See if you can make it happen. And incidentally, when it does, you'll find
that delighting customers scales better than you expected. Partly because you
can usually find ways to make anything scale more than you would have
predicted, and partly because delighting customers will by then have permeated
your culture.
I have never once seen a startup lured down a blind alley by trying too hard
to make their initial users happy.
But perhaps the biggest thing preventing founders from realizing how attentive
they could be to their users is that they've never experienced such attention
themselves. Their standards for customer service have been set by the
companies they've been customers of, which are mostly big ones. Tim Cook
doesn't send you a hand-written note after you buy a laptop. He can't. But you
can. That's one advantage of being small: you can provide a level of service
no big company can. [6]
Once you realize that existing conventions are not the upper bound on user
experience, it's interesting in a very pleasant way to think about how far you
could go to delight your users.
**Experience**
I was trying to think of a phrase to convey how extreme your attention to
users should be, and I realized Steve Jobs had already done it: insanely
great. Steve wasn't just using "insanely" as a synonym for "very." He meant it
more literally — that one should focus on quality of execution to a degree
that in everyday life would be considered pathological.
All the most successful startups we've funded have, and that probably doesn't
surprise would-be founders. What novice founders don't get is what insanely
great translates to in a larval startup. When Steve Jobs started using that
phrase, Apple was already an established company. He meant the Mac (and its
documentation and even packaging — such is the nature of obsession) should be
insanely well designed and manufactured. That's not hard for engineers to
grasp. It's just a more extreme version of designing a robust and elegant
product.
What founders have a hard time grasping (and Steve himself might have had a
hard time grasping) is what insanely great morphs into as you roll the time
slider back to the first couple months of a startup's life. It's not the
product that should be insanely great, but the experience of being your user.
The product is just one component of that. For a big company it's necessarily
the dominant one. But you can and should give users an insanely great
experience with an early, incomplete, buggy product, if you make up the
difference with attentiveness.
Can, perhaps, but should? Yes. Over-engaging with early users is not just a
permissible technique for getting growth rolling. For most successful startups
it's a necessary part of the feedback loop that makes the product good. Making
a better mousetrap is not an atomic operation. Even if you start the way most
successful startups have, by building something you yourself need, the first
thing you build is never quite right. And except in domains with big penalties
for making mistakes, it's often better not to aim for perfection initially. In
software, especially, it usually works best to get something in front of users
as soon as it has a quantum of utility, and then see what they do with it.
Perfectionism is often an excuse for procrastination, and in any case your
initial model of users is always inaccurate, even if you're one of them. [7]
The feedback you get from engaging directly with your earliest users will be
the best you ever get. When you're so big you have to resort to focus groups,
you'll wish you could go over to your users' homes and offices and watch them
use your stuff like you did when there were only a handful of them.
**Fire**
Sometimes the right unscalable trick is to focus on a deliberately narrow
market. It's like keeping a fire contained at first to get it really hot
before adding more logs.
That's what Facebook did. At first it was just for Harvard students. In that
form it only had a potential market of a few thousand people, but because they
felt it was really for them, a critical mass of them signed up. After Facebook
stopped being for Harvard students, it remained for students at specific
colleges for quite a while. When I interviewed Mark Zuckerberg at Startup
School, he said that while it was a lot of work creating course lists for each
school, doing that made students feel the site was their natural home.
Any startup that could be described as a marketplace usually has to start in a
subset of the market, but this can work for other startups as well. It's
always worth asking if there's a subset of the market in which you can get a
critical mass of users quickly. [8]
Most startups that use the contained fire strategy do it unconsciously. They
build something for themselves and their friends, who happen to be the early
adopters, and only realize later that they could offer it to a broader market.
The strategy works just as well if you do it unconsciously. The biggest danger
of not being consciously aware of this pattern is for those who naively
discard part of it. E.g. if you don't build something for yourself and your
friends, or even if you do, but you come from the corporate world and your
friends are not early adopters, you'll no longer have a perfect initial market
handed to you on a platter.
Among companies, the best early adopters are usually other startups. They're
more open to new things both by nature and because, having just been started,
they haven't made all their choices yet. Plus when they succeed they grow
fast, and you with them. It was one of many unforeseen advantages of the YC
model (and specifically of making YC big) that B2B startups now have an
instant market of hundreds of other startups ready at hand.
**Meraki**
For [hardware startups](hw.html) there's a variant of doing things that don't
scale that we call "pulling a Meraki." Although we didn't fund Meraki, the
founders were Robert Morris's grad students, so we know their history. They
got started by doing something that really doesn't scale: assembling their
routers themselves.
Hardware startups face an obstacle that software startups don't. The minimum
order for a factory production run is usually several hundred thousand
dollars. Which can put you in a catch-22: without a product you can't generate
the growth you need to raise the money to manufacture your product. Back when
hardware startups had to rely on investors for money, you had to be pretty
convincing to overcome this. The arrival of crowdfunding (or more precisely,
preorders) has helped a lot. But even so I'd advise startups to pull a Meraki
initially if they can. That's what Pebble did. The Pebbles
[assembled](https://sep.turbifycdn.com/ty/cdn/paulgraham/eric.jpg?t=1730199416&)
the first several hundred watches themselves. If they hadn't gone through that
phase, they probably wouldn't have sold $10 million worth of watches when they
did go on Kickstarter.
Like paying excessive attention to early customers, fabricating things
yourself turns out to be valuable for hardware startups. You can tweak the
design faster when you're the factory, and you learn things you'd never have
known otherwise. Eric Migicovsky of Pebble said one of the things he learned
was "how valuable it was to source good screws." Who knew?
**Consult**
Sometimes we advise founders of B2B startups to take over-engagement to an
extreme, and to pick a single user and act as if they were consultants
building something just for that one user. The initial user serves as the form
for your mold; keep tweaking till you fit their needs perfectly, and you'll
usually find you've made something other users want too. Even if there aren't
many of them, there are probably adjacent territories that have more. As long
as you can find just one user who really needs something and can act on that
need, you've got a toehold in making something people want, and that's as much
as any startup needs initially. [9]
Consulting is the canonical example of work that doesn't scale. But (like
other ways of bestowing one's favors liberally) it's safe to do it so long as
you're not being paid to. That's where companies cross the line. So long as
you're a product company that's merely being extra attentive to a customer,
they're very grateful even if you don't solve all their problems. But when
they start paying you specifically for that attentiveness — when they start
paying you by the hour — they expect you to do everything.
Another consulting-like technique for recruiting initially lukewarm users is
to use your software yourselves on their behalf. We did that at Viaweb. When
we approached merchants asking if they wanted to use our software to make
online stores, some said no, but they'd let us make one for them. Since we
would do anything to get users, we did. We felt pretty lame at the time.
Instead of organizing big strategic e-commerce partnerships, we were trying to
sell luggage and pens and men's shirts. But in retrospect it was exactly the
right thing to do, because it taught us how it would feel to merchants to use
our software. Sometimes the feedback loop was near instantaneous: in the
middle of building some merchant's site I'd find I needed a feature we didn't
have, so I'd spend a couple hours implementing it and then resume building the
site.
**Manual**
There's a more extreme variant where you don't just use your software, but are
your software. When you only have a small number of users, you can sometimes
get away with doing by hand things that you plan to automate later. This lets
you launch faster, and when you do finally automate yourself out of the loop,
you'll know exactly what to build because you'll have muscle memory from doing
it yourself.
When manual components look to the user like software, this technique starts
to have aspects of a practical joke. For example, the way Stripe delivered
"instant" merchant accounts to its first users was that the founders manually
signed them up for traditional merchant accounts behind the scenes.
Some startups could be entirely manual at first. If you can find someone with
a problem that needs solving and you can solve it manually, go ahead and do
that for as long as you can, and then gradually automate the bottlenecks. It
would be a little frightening to be solving users' problems in a way that
wasn't yet automatic, but less frightening than the far more common case of
having something automatic that doesn't yet solve anyone's problems.
**Big**
I should mention one sort of initial tactic that usually doesn't work: the Big
Launch. I occasionally meet founders who seem to believe startups are
projectiles rather than powered aircraft, and that they'll make it big if and
only if they're launched with sufficient initial velocity. They want to launch
simultaneously in 8 different publications, with embargoes. And on a Tuesday,
of course, since they read somewhere that's the optimum day to launch
something.
It's easy to see how little launches matter. Think of some successful
startups. How many of their launches do you remember? All you need from a
launch is some initial core of users. How well you're doing a few months later
will depend more on how happy you made those users than how many there were of
them. [10]
So why do founders think launches matter? A combination of solipsism and
laziness. They think what they're building is so great that everyone who hears
about it will immediately sign up. Plus it would be so much less work if you
could get users merely by broadcasting your existence, rather than recruiting
them one at a time. But even if what you're building really is great, getting
users will always be a gradual process — partly because great things are
usually also novel, but mainly because users have other things to think about.
Partnerships too usually don't work. They don't work for startups in general,
but they especially don't work as a way to get growth started. It's a common
mistake among inexperienced founders to believe that a partnership with a big
company will be their big break. Six months later they're all saying the same
thing: that was way more work than we expected, and we ended up getting
practically nothing out of it. [11]
It's not enough just to do something extraordinary initially. You have to make
an extraordinary _effort_ initially. Any strategy that omits the effort —
whether it's expecting a big launch to get you users, or a big partner — is
ipso facto suspect.
**Vector**
The need to do something unscalably laborious to get started is so nearly
universal that it might be a good idea to stop thinking of startup ideas as
scalars. Instead we should try thinking of them as pairs of what you're going
to build, plus the unscalable thing(s) you're going to do initially to get the
company going.
It could be interesting to start viewing startup ideas this way, because now
that there are two components you can try to be imaginative about the second
as well as the first. But in most cases the second component will be what it
usually is — recruit users manually and give them an overwhelmingly good
experience — and the main benefit of treating startups as vectors will be to
remind founders they need to work hard in two dimensions. [12]
In the best case, both components of the vector contribute to your company's
DNA: the unscalable things you have to do to get started are not merely a
necessary evil, but change the company permanently for the better. If you have
to be aggressive about user acquisition when you're small, you'll probably
still be aggressive when you're big. If you have to manufacture your own
hardware, or use your software on users' behalf, you'll learn things you
couldn't have learned otherwise. And most importantly, if you have to work
hard to delight users when you only have a handful of them, you'll keep doing
it when you have a lot.
**Notes**
[1] Actually Emerson never mentioned mousetraps specifically. He wrote "If a
man has good corn or wood, or boards, or pigs, to sell, or can make better
chairs or knives, crucibles or church organs, than anybody else, you will find
a broad hard-beaten road to his house, though it be in the woods."
[2] Thanks to Sam Altman for suggesting I make this explicit. And no, you
can't avoid doing sales by hiring someone to do it for you. You have to do
sales yourself initially. Later you can hire a real salesperson to replace
you.
[3] The reason this works is that as you get bigger, your size helps you grow.
Patrick Collison wrote "At some point, there was a very noticeable change in
how Stripe felt. It tipped from being this boulder we had to push to being a
train car that in fact had its own momentum."
[4] One of the more subtle ways in which YC can help founders is by
calibrating their ambitions, because we know exactly how a lot of successful
startups looked when they were just getting started.
[5] If you're building something for which you can't easily get a small set of
users to observe — e.g. enterprise software — and in a domain where you have
no connections, you'll have to rely on cold calls and introductions. But
should you even be working on such an idea?
[6] Garry Tan pointed out an interesting trap founders fall into in the
beginning. They want so much to seem big that they imitate even the flaws of
big companies, like indifference to individual users. This seems to them more
"professional." Actually it's better to embrace the fact that you're small and
use whatever advantages that brings.
[7] Your user model almost couldn't be perfectly accurate, because users'
needs often change in response to what you build for them. Build them a
microcomputer, and suddenly they need to run spreadsheets on it, because the
arrival of your new microcomputer causes someone to invent the spreadsheet.
[8] If you have to choose between the subset that will sign up quickest and
those that will pay the most, it's usually best to pick the former, because
those are probably the early adopters. They'll have a better influence on your
product, and they won't make you expend as much effort on sales. And though
they have less money, you don't need that much to maintain your target growth
rate early on.
[9] Yes, I can imagine cases where you could end up making something that was
really only useful for one user. But those are usually obvious, even to
inexperienced founders. So if it's not obvious you'd be making something for a
market of one, don't worry about that danger.
[10] There may even be an inverse correlation between launch magnitude and
success. The only launches I remember are famous flops like the Segway and
Google Wave. Wave is a particularly alarming example, because I think it was
actually a great idea that was killed partly by its overdone launch.
[11] Google grew big on the back of Yahoo, but that wasn't a partnership.
Yahoo was their customer.
[12] It will also remind founders that an idea where the second component is
empty — an idea where there is nothing you can do to get going, e.g. because
you have no way to find users to recruit manually — is probably a bad idea, at
least for those founders.
**Thanks** to Sam Altman, Paul Buchheit, Patrick Collison, Kevin Hale, Steven
Levy, Jessica Livingston, Geoff Ralston, and Garry Tan for reading drafts of
this.
* * *
October 2023
One of the most important things I didn't understand about the world when I
was a child is the degree to which the returns for performance are
superlinear.
Teachers and coaches implicitly told us the returns were linear. "You get
out," I heard a thousand times, "what you put in." They meant well, but this
is rarely true. If your product is only half as good as your competitor's, you
don't get half as many customers. You get no customers, and you go out of
business.
It's obviously true that the returns for performance are superlinear in
business. Some think this is a flaw of capitalism, and that if we changed the
rules it would stop being true. But superlinear returns for performance are a
feature of the world, not an artifact of rules we've invented. We see the same
pattern in fame, power, military victories, knowledge, and even benefit to
humanity. In all of these, the rich get richer. [1]
You can't understand the world without understanding the concept of
superlinear returns. And if you're ambitious you definitely should, because
this will be the wave you surf on.
It may seem as if there are a lot of different situations with superlinear
returns, but as far as I can tell they reduce to two fundamental causes:
exponential growth and thresholds.
The most obvious case of superlinear returns is when you're working on
something that grows exponentially. For example, growing bacterial cultures.
When they grow at all, they grow exponentially. But they're tricky to grow.
Which means the difference in outcome between someone who's adept at it and
someone who's not is very great.
Startups can also grow exponentially, and we see the same pattern there. Some
manage to achieve high growth rates. Most don't. And as a result you get
qualitatively different outcomes: the companies with high growth rates tend to
become immensely valuable, while the ones with lower growth rates may not even
survive.
Y Combinator encourages founders to focus on growth rate rather than absolute
numbers. It prevents them from being discouraged early on, when the absolute
numbers are still low. It also helps them decide what to focus on: you can use
growth rate as a compass to tell you how to evolve the company. But the main
advantage is that by focusing on growth rate you tend to get something that
grows exponentially.
YC doesn't explicitly tell founders that with growth rate "you get out what
you put in," but it's not far from the truth. And if growth rate were
proportional to performance, then the reward for performance _p_ over time _t_
would be proportional to _p^t_.
Even after decades of thinking about this, I find that sentence startling.
Whenever how well you do depends on how well you've done, you'll get
exponential growth. But neither our DNA nor our customs prepare us for it. No
one finds exponential growth natural; every child is surprised, the first time
they hear it, by the story of the man who asks the king for a single grain of
rice the first day and double the amount each successive day.
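To see how fast _p^t_ runs away, compare two performers compounding weekly
for a year; the growth factors are arbitrary:

```python
def reward(p: float, t: int) -> float:
    # If each period multiplies what you have by p, t periods yield p**t.
    return p ** t

print(reward(1.10, 52) / reward(1.05, 52))
# ~11x: a five-point edge in weekly growth rate
# compounds into an order of magnitude within a year
```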
What we don't understand naturally we develop customs to deal with, but we
don't have many customs about exponential growth either, because there have
been so few instances of it in human history. In principle herding should have
been one: the more animals you had, the more offspring they'd have. But in
practice grazing land was the limiting factor, and there was no plan for
growing that exponentially.
Or more precisely, no generally applicable plan. There _was_ a way to grow
one's territory exponentially: by conquest. The more territory you control,
the more powerful your army becomes, and the easier it is to conquer new
territory. This is why history is full of empires. But so few people created
or ran empires that their experiences didn't affect customs very much. The
emperor was a remote and terrifying figure, not a source of lessons one could
use in one's own life.
The most common case of exponential growth in preindustrial times was probably
scholarship. The more you know, the easier it is to learn new things. The
result, then as now, was that some people were startlingly more knowledgeable
than the rest about certain topics. But this didn't affect customs much
either. Although empires of ideas can overlap and there can thus be far more
emperors, in preindustrial times this type of empire had little practical
effect. [2]
That has changed in the last few centuries. Now the emperors of ideas can
design bombs that defeat the emperors of territory. But this phenomenon is
still so new that we haven't fully assimilated it. Few even of the
participants realize they're benefitting from exponential growth or ask what
they can learn from other instances of it.
The other source of superlinear returns is embodied in the expression "winner
take all." In a sports match the relationship between performance and return
is a step function: the winning team gets one win whether they do much better
or just slightly better. [3]
The source of the step function is not competition per se, however. It's that
there are thresholds in the outcome. You don't need competition to get those.
There can be thresholds in situations where you're the only participant, like
proving a theorem or hitting a target.
It's remarkable how often a situation with one source of superlinear returns
also has the other. Crossing thresholds leads to exponential growth: the
winning side in a battle usually suffers less damage, which makes them more
likely to win in the future. And exponential growth helps you cross
thresholds: in a market with network effects, a company that grows fast enough
can shut out potential competitors.
Fame is an interesting example of a phenomenon that combines both sources of
superlinear returns. Fame grows exponentially because existing fans bring you
new ones. But the fundamental reason it's so concentrated is thresholds:
there's only so much room on the A-list in the average person's head.
The most important case combining both sources of superlinear returns may be
learning. Knowledge grows exponentially, but there are also thresholds in it.
Learning to ride a bicycle, for example. Some of these thresholds are akin to
machine tools: once you learn to read, you're able to learn anything else much
faster. But the most important thresholds of all are those representing new
discoveries. Knowledge seems to be fractal in the sense that if you push hard
at the boundary of one area of knowledge, you sometimes discover a whole new
field. And if you do, you get first crack at all the new discoveries to be
made in it. Newton did this, and so did Dürer and Darwin.
Are there general rules for finding situations with superlinear returns? The
most obvious one is to seek work that compounds.
There are two ways work can compound. It can compound directly, in the sense
that doing well in one cycle causes you to do better in the next. That happens
for example when you're building infrastructure, or growing an audience or
brand. Or work can compound by teaching you, since learning compounds. This
second case is an interesting one because you may feel you're doing badly as
it's happening. You may be failing to achieve your immediate goal. But if
you're learning a lot, then you're getting exponential growth nonetheless.
This is one reason Silicon Valley is so tolerant of failure. People in Silicon
Valley aren't blindly tolerant of failure. They'll only continue to bet on you
if you're learning from your failures. But if you are, you are in fact a good
bet: maybe your company didn't grow the way you wanted, but you yourself have,
and that should yield results eventually.
Indeed, the forms of exponential growth that don't consist of learning are so
often intermixed with it that we should probably treat this as the rule rather
than the exception. Which yields another heuristic: always be learning. If
you're not learning, you're probably not on a path that leads to superlinear
returns.
But don't overoptimize _what_ you're learning. Don't limit yourself to
learning things that are already known to be valuable. You're learning; you
don't know for sure yet what's going to be valuable, and if you're too strict
you'll lop off the outliers.
What about step functions? Are there also useful heuristics of the form "seek
thresholds" or "seek competition?" Here the situation is trickier. The
existence of a threshold doesn't guarantee the game will be worth playing. If
you play a round of Russian roulette, you'll be in a situation with a
threshold, certainly, but in the best case you're no better off. "Seek
competition" is similarly useless; what if the prize isn't worth competing
for? Sufficiently fast exponential growth guarantees both the shape and
magnitude of the return curve — because something that grows fast enough will
grow big even if it's trivially small at first — but thresholds only guarantee
the shape. [4]
A principle for taking advantage of thresholds has to include a test to ensure
the game is worth playing. Here's one that does: if you come across something
that's mediocre yet still popular, it could be a good idea to replace it. For
example, if a company makes a product that people dislike yet still buy, then
presumably they'd buy a better alternative if you made one. [5]
It would be great if there were a way to find promising intellectual
thresholds. Is there a way to tell which questions have whole new fields
beyond them? I doubt we could ever predict this with certainty, but the prize
is so valuable that it would be useful to have predictors that were even a
little better than random, and there's hope of finding those. We can to some
degree predict when a research problem _isn't_ likely to lead to new
discoveries: when it seems legit but boring. Whereas the kind that do lead to
new discoveries tend to seem very mystifying, but perhaps unimportant. (If
they were mystifying and obviously important, they'd be famous open questions
with lots of people already working on them.) So one heuristic here is to be
driven by curiosity rather than careerism — to give free rein to your
curiosity instead of working on what you're supposed to.
The prospect of superlinear returns for performance is an exciting one for the
ambitious. And there's good news in this department: this territory is
expanding in both directions. There are more types of work in which you can
get superlinear returns, and the returns themselves are growing.
There are two reasons for this, though they're so closely intertwined that
they're more like one and a half: progress in technology, and the decreasing
importance of organizations.
Fifty years ago it used to be much more necessary to be part of an
organization to work on ambitious projects. It was the only way to get the
resources you needed, the only way to have colleagues, and the only way to get
distribution. So in 1970 your prestige was in most cases the prestige of the
organization you belonged to. And prestige was an accurate predictor, because
if you weren't part of an organization, you weren't likely to achieve much.
There were a handful of exceptions, most notably artists and writers, who
worked alone using inexpensive tools and had their own brands. But even they
were at the mercy of organizations for reaching audiences. [6]
A world dominated by organizations damped variation in the returns for
performance. But this world has eroded significantly just in my lifetime. Now
a lot more people can have the freedom that artists and writers had in the
20th century. There are lots of ambitious projects that don't require much
initial funding, and lots of new ways to learn, make money, find colleagues,
and reach audiences.
There's still plenty of the old world left, but the rate of change has been
dramatic by historical standards. Especially considering what's at stake. It's
hard to imagine a more fundamental change than one in the returns for
performance.
Without the damping effect of institutions, there will be more variation in
outcomes. Which doesn't imply everyone will be better off: people who do well
will do even better, but those who do badly will do worse. That's an important
point to bear in mind. Exposing oneself to superlinear returns is not for
everyone. Most people will be better off as part of the pool. So who should
shoot for superlinear returns? Ambitious people of two types: those who know
they're so good that they'll be net ahead in a world with higher variation,
and those, particularly the young, who can afford to risk trying it to find
out. [7]
The switch away from institutions won't simply be an exodus of their current
inhabitants. Many of the new winners will be people they'd never have let in.
So the resulting democratization of opportunity will be both greater and more
authentic than any tame intramural version the institutions themselves might
have cooked up.
Not everyone is happy about this great unlocking of ambition. It threatens
some vested interests and contradicts some ideologies. [8] But if you're an
ambitious individual it's good news for you. How should you take advantage of
it?
The most obvious way to take advantage of superlinear returns for performance
is by doing exceptionally good work. At the far end of the curve, incremental
effort is a bargain. All the more so because there's less competition at the
far end — and not just for the obvious reason that it's hard to do something
exceptionally well, but also because people find the prospect so intimidating
that few even try. Which means it's not just a bargain to do exceptional work,
but a bargain even to try to.
There are many variables that affect how good your work is, and if you want to
be an outlier you need to get nearly all of them right. For example, to do
something exceptionally well, you have to be interested in it. Mere diligence
is not enough. So in a world with superlinear returns, it's even more valuable
to know what you're interested in, and to find ways to work on it. [9] It will
also be important to choose work that suits your circumstances. For example,
if there's a kind of work that inherently requires a huge expenditure of time
and energy, it will be increasingly valuable to do it when you're young and
don't yet have children.
There's a surprising amount of technique to doing great work. It's not just a
matter of trying hard. I'm going to take a shot at giving a recipe in one
paragraph.
Choose work you have a natural aptitude for and a deep interest in. Develop a
habit of working on your own projects; it doesn't matter what they are so long
as you find them excitingly ambitious. Work as hard as you can without burning
out, and this will eventually bring you to one of the frontiers of knowledge.
These look smooth from a distance, but up close they're full of gaps. Notice
and explore such gaps, and if you're lucky one will expand into a whole new
field. Take as much risk as you can afford; if you're not failing occasionally
you're probably being too conservative. Seek out the best colleagues. Develop
good taste and learn from the best examples. Be honest, especially with
yourself. Exercise and eat and sleep well and avoid the more dangerous drugs.
When in doubt, follow your curiosity. It never lies, and it knows more than
you do about what's worth paying attention to. [10]
And there is of course one other thing you need: to be lucky. Luck is always a
factor, but it's even more of a factor when you're working on your own rather
than as part of an organization. And though there are some valid aphorisms
about luck being where preparedness meets opportunity and so on, there's also
a component of true chance that you can't do anything about. The solution is
to take multiple shots. Which is another reason to start taking risks early.
The best example of a field with superlinear returns is probably science. It
has exponential growth, in the form of learning, combined with thresholds at
the extreme edge of performance — literally at the limits of knowledge.
The result has been a level of inequality in scientific discovery that makes
the wealth inequality of even the most stratified societies seem mild by
comparison. Newton's discoveries were arguably greater than all his
contemporaries' combined. [11]
This point may seem obvious, but it might be just as well to spell it out.
Superlinear returns imply inequality. The steeper the return curve, the
greater the variation in outcomes.
In fact, the correlation between superlinear returns and inequality is so
strong that it yields another heuristic for finding work of this type: look
for fields where a few big winners outperform everyone else. A kind of work
where everyone does about the same is unlikely to be one with superlinear
returns.
What are fields where a few big winners outperform everyone else? Here are
some obvious ones: sports, politics, art, music, acting, directing, writing,
math, science, starting companies, and investing. In sports the phenomenon is
due to externally imposed thresholds; you only need to be a few percent faster
to win every race. In politics, power grows much as it did in the days of
emperors. And in some of the other fields (including politics) success is
driven largely by fame, which has its own source of superlinear growth. But
when we exclude sports and politics and the effects of fame, a remarkable
pattern emerges: the remaining list is exactly the same as the list of fields
where you have to be [_independent-minded_](think.html) to succeed — where
your ideas have to be not just correct, but novel as well. [12]
This is obviously the case in science. You can't publish papers saying things
that other people have already said. But it's just as true in investing, for
example. It's only useful to believe that a company will do well if most other
investors don't; if everyone else thinks the company will do well, then its
stock price will already reflect that, and there's no room to make money.
What else can we learn from these fields? In all of them you have to put in
the initial effort. Superlinear returns seem small at first. _At this rate,_
you find yourself thinking, _I'll never get anywhere._ But because the reward
curve rises so steeply at the far end, it's worth taking extraordinary
measures to get there.
In the startup world, the name for this principle is "do things that don't
scale." If you pay a ridiculous amount of attention to your tiny initial set
of customers, ideally you'll kick off exponential growth by word of mouth. But
this same principle applies to anything that grows exponentially. Learning,
for example. When you first start learning something, you feel lost. But it's
worth making the initial effort to get a toehold, because the more you learn,
the easier it will get.
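A toy calculation shows why the early part feels so discouraging. The numbers here are invented purely for illustration:

```python
# Toy illustration: steady 5% weekly growth from 100 users. Six months in,
# the absolute numbers still look hopeless; two years in, they don't.
users = 100.0
for week in (0, 26, 52, 78, 104):
    print(week, round(users * 1.05 ** week))
# prints roughly: 100, 356, 1264, 4495, 15984
```

The same compounding that makes week 26 look like failure is what makes week 104 possible.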
There's another more subtle lesson in the list of fields with superlinear
returns: not to equate work with a job. For most of the 20th century the two
were identical for nearly everyone, and as a result we've inherited a custom
that equates productivity with having a job. Even now to most people the
phrase "your work" means their job. But to a writer or artist or scientist it
means whatever they're currently studying or creating. For someone like that,
their work is something they carry with them from job to job, if they have
jobs at all. It may be done for an employer, but it's part of their portfolio.
It's an intimidating prospect to enter a field where a few big winners
outperform everyone else. Some people do this deliberately, but you don't need
to. If you have sufficient natural ability and you follow your curiosity
sufficiently far, you'll end up in one. Your curiosity won't let you be
interested in boring questions, and interesting questions tend to create
fields with superlinear returns if they're not already part of one.
The territory of superlinear returns is by no means static. Indeed, the most
extreme returns come from expanding it. So while both ambition and curiosity
can get you into this territory, curiosity may be the more powerful of the
two. Ambition tends to make you climb existing peaks, but if you stick close
enough to an interesting enough question, it may grow into a mountain beneath
you.
**Notes**
There's a limit to how sharply you can distinguish between effort,
performance, and return, because they're not sharply distinguished in fact.
What counts as return to one person might be performance to another. But
though the borders of these concepts are blurry, they're not meaningless. I've
tried to write about them as precisely as I could without crossing into error.
[1] Evolution itself is probably the most pervasive example of superlinear
returns for performance. But this is hard for us to empathize with because
we're not the recipients; we're the returns.
[2] Knowledge did of course have a practical effect before the Industrial
Revolution. The development of agriculture changed human life completely. But
this kind of change was the result of broad, gradual improvements in
technique, not the discoveries of a few exceptionally learned people.
[3] It's not mathematically correct to describe a step function as
superlinear, but a step function starting from zero works like a superlinear
function when it describes the reward curve for effort by a rational actor. If
it starts at zero then the part before the step is below any linearly
increasing return, and the part after the step must be above the necessary
return at that point or no one would bother.
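In symbols, with effort e, threshold t, step height R, and any linear return ce for comparison:

```latex
r(e) = \begin{cases} 0 & e < t \\ R & e \ge t \end{cases},
\qquad r(e) = 0 < c\,e \ \text{for } 0 < e < t,
\qquad R \ge c\,t \ \text{(or no one would bother)}.
```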
[4] Seeking competition could be a good heuristic in the sense that some
people find it motivating. It's also somewhat of a guide to promising
problems, because it's a sign that other people find them promising. But it's
a very imperfect sign: often there's a clamoring crowd chasing some problem,
and they all end up being trumped by someone quietly working on another one.
[5] Not always, though. You have to be careful with this rule. When something
is popular despite being mediocre, there's often a hidden reason why. Perhaps
monopoly or regulation make it hard to compete. Perhaps customers have bad
taste or have broken procedures for deciding what to buy. There are huge
swathes of mediocre things that exist for such reasons.
[6] In my twenties I wanted to be an [_artist_](worked.html) and even went to
art school to study painting. Mostly because I liked art, but a nontrivial
part of my motivation came from the fact that artists seemed least at the
mercy of organizations.
[7] In principle everyone is getting superlinear returns. Learning compounds,
and everyone learns in the course of their life. But in practice few push this
kind of everyday learning to the point where the return curve gets really
steep.
[8] It's unclear exactly what advocates of "equity" mean by it. They seem to
disagree among themselves. But whatever they mean is probably at odds with a
world in which institutions have less power to control outcomes, and a handful
of outliers do much better than everyone else.
It may seem like bad luck for this concept that it arose at just the moment
when the world was shifting in the opposite direction, but I don't think this
was a coincidence. I think one reason it arose now is because its adherents
feel threatened by rapidly increasing variation in performance.
[9] Corollary: Parents who pressure their kids to work on something
prestigious, like medicine, even though they have no interest in it, will be
hosing them even more than they have in the past.
[10] The original version of this paragraph was the first draft of "[_How to
Do Great Work_](greatwork.html)." As soon as I wrote it I realized it was a
more important topic than superlinear returns, so I paused the present essay
to expand this paragraph into its own. Practically nothing remains of the
original version, because after I finished "How to Do Great Work" I rewrote it
based on that.
[11] Before the Industrial Revolution, people who got rich usually did it like
emperors: capturing some resource made them more powerful and enabled them to
capture more. Now it can be done like a scientist, by discovering or building
something uniquely valuable. Most people who get rich use a mix of the old and
the new ways, but in the most advanced economies the ratio has [_shifted
dramatically_](richnow.html) toward discovery just in the last half century.
[12] It's not surprising that conventional-minded people would dislike
inequality if independent-mindedness is one of the biggest drivers of it. But
it's not simply that they don't want anyone to have what they can't. The
conventional-minded literally can't imagine what it's like to have novel
ideas. So the whole phenomenon of great variation in performance seems
unnatural to them, and when they encounter it they assume it must be due to
cheating or to some malign external influence.
**Thanks** to Trevor Blackwell, Patrick Collison, Tyler Cowen, Jessica
Livingston, Harj Taggar, and Garry Tan for reading drafts of this.
May 2003
_(This essay is derived from a guest lecture at Harvard, which incorporated
an earlier talk at Northeastern.)_
When I finished grad school in computer science I went to art school to study
painting. A lot of people seemed surprised that someone interested in
computers would also be interested in painting. They seemed to think that
hacking and painting were very different kinds of work-- that hacking was
cold, precise, and methodical, and that painting was the frenzied expression
of some primal urge.
Both of these images are wrong. Hacking and painting have a lot in common. In
fact, of all the different types of people I've known, hackers and painters
are among the most alike.
What hackers and painters have in common is that they're both makers. Along
with composers, architects, and writers, what hackers and painters are trying
to do is make good things. They're not doing research per se, though if in the
course of trying to make good things they discover some new technique, so much
the better.
I've never liked the term "computer science." The main reason I don't like it
is that there's no such thing. Computer science is a grab bag of tenuously
related areas thrown together by an accident of history, like Yugoslavia. At
one end you have people who are really mathematicians, but call what they're
doing computer science so they can get DARPA grants. In the middle you have
people working on something like the natural history of computers-- studying
the behavior of algorithms for routing data through networks, for example. And
then at the other extreme you have the hackers, who are trying to write
interesting software, and for whom computers are just a medium of expression,
as concrete is for architects or paint for painters. It's as if
mathematicians, physicists, and architects all had to be in the same
department.
Sometimes what the hackers do is called "software engineering," but this term
is just as misleading. Good software designers are no more engineers than
architects are. The border between architecture and engineering is not sharply
defined, but it's there. It falls between what and how: architects decide what
to do, and engineers figure out how to do it.
What and how should not be kept too separate. You're asking for trouble if you
try to decide what to do without understanding how to do it. But hacking can
certainly be more than just deciding how to implement some spec. At its best,
it's creating the spec-- though it turns out the best way to do that is to
implement it.
Perhaps one day "computer science" will, like Yugoslavia, get broken up into
its component parts. That might be a good thing. Especially if it meant
independence for my native land, hacking.
Bundling all these different types of work together in one department may be
convenient administratively, but it's confusing intellectually. That's the
other reason I don't like the name "computer science." Arguably the people in
the middle are doing something like an experimental science. But the people at
either end, the hackers and the mathematicians, are not actually doing
science.
The mathematicians don't seem bothered by this. They happily set to work
proving theorems like the other mathematicians over in the math department,
and probably soon stop noticing that the building they work in says "computer
science" on the outside. But for the hackers this label is a problem. If what
they're doing is called science, it makes them feel they ought to be acting
scientific. So instead of doing what they really want to do, which is to
design beautiful software, hackers in universities and research labs feel they
ought to be writing research papers.
In the best case, the papers are just a formality. Hackers write cool
software, and then write a paper about it, and the paper becomes a proxy for
the achievement represented by the software. But often this mismatch causes
problems. It's easy to drift away from building beautiful things toward
building ugly things that make more suitable subjects for research papers.
Unfortunately, beautiful things don't always make the best subjects for
papers. Number one, research must be original-- and as anyone who has written
a PhD dissertation knows, the way to be sure that you're exploring virgin
territory is to stake out a piece of ground that no one wants. Number two,
research must be substantial-- and awkward systems yield meatier papers,
because you can write about the obstacles you have to overcome in order to get
things done. Nothing yields meaty problems like starting with the wrong
assumptions. Most of AI is an example of this rule; if you assume that
knowledge can be represented as a list of predicate logic expressions whose
arguments represent abstract concepts, you'll have a lot of papers to write
about how to make this work. As Ricky Ricardo used to say, "Lucy, you got a
lot of explaining to do."
The way to create something beautiful is often to make subtle tweaks to
something that already exists, or to combine existing ideas in a slightly new
way. This kind of work is hard to convey in a research paper.
So why do universities and research labs continue to judge hackers by
publications? For the same reason that "scholastic aptitude" gets measured by
simple-minded standardized tests, or the productivity of programmers gets
measured in lines of code. These tests are easy to apply, and there is nothing
so tempting as an easy test that kind of works.
Measuring what hackers are actually trying to do, designing beautiful
software, would be much more difficult. You need a good [sense of
design](taste.html) to judge good design. And there is no correlation, except
possibly a [negative](http://www.apa.org/journals/features/psp7761121.pdf)
one, between people's ability to recognize good design and their confidence
that they can.
The only external test is time. Over time, beautiful things tend to thrive,
and ugly things tend to get discarded. Unfortunately, the amounts of time
involved can be longer than human lifetimes. Samuel Johnson said it took a
hundred years for a writer's reputation to converge. You have to wait for the
writer's influential friends to die, and then for all their followers to die.
I think hackers just have to resign themselves to having a large random
component in their reputations. In this they are no different from other
makers. In fact, they're lucky by comparison. The influence of fashion is not
nearly so great in hacking as it is in painting.
There are worse things than having people misunderstand your work. A worse
danger is that you will yourself misunderstand your work. Related fields are
where you go looking for ideas. If you find yourself in the computer science
department, there is a natural temptation to believe, for example, that
hacking is the applied version of what theoretical computer science is the
theory of. All the time I was in graduate school I had an uncomfortable
feeling in the back of my mind that I ought to know more theory, and that it
was very remiss of me to have forgotten all that stuff within three weeks of
the final exam.
Now I realize I was mistaken. Hackers need to understand the theory of
computation about as much as painters need to understand paint chemistry. You
need to know how to calculate time and space complexity and about Turing
completeness. You might also want to remember at least the concept of a state
machine, in case you have to write a parser or a regular expression library.
Painters in fact have to remember a good deal more about paint chemistry than
that.
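For what it's worth, here's about as much state machine as that entails. This is a toy recognizer invented for illustration, not code from any real parser or regex library:

```python
# A minimal finite state machine that recognizes signed integers like "-42".
# States: start -> optional sign -> digits (the only accepting state).
def is_integer(s: str) -> bool:
    state = "start"
    for ch in s:
        if state == "start":
            state = "sign" if ch in "+-" else "digits" if ch.isdigit() else "reject"
        elif state in ("sign", "digits"):
            state = "digits" if ch.isdigit() else "reject"
        if state == "reject":
            return False
    return state == "digits"

assert is_integer("-42") and is_integer("7") and not is_integer("4x2")
```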
I've found that the best sources of ideas are not the other fields that have
the word "computer" in their names, but the other fields inhabited by makers.
Painting has been a much richer source of ideas than the theory of
computation.
For example, I was taught in college that one ought to figure out a program
completely on paper before even going near a computer. I found that I did not
program this way. I found that I liked to program sitting in front of a
computer, not a piece of paper. Worse still, instead of patiently writing out
a complete program and assuring myself it was correct, I tended to just spew
out code that was hopelessly broken, and gradually beat it into shape.
Debugging, I was taught, was a kind of final pass where you caught typos and
oversights. The way I worked, it seemed like programming consisted of
debugging.
For a long time I felt bad about this, just as I once felt bad that I didn't
hold my pencil the way they taught me to in elementary school. If I had only
looked over at the other makers, the painters or the architects, I would have
realized that there was a name for what I was doing: sketching. As far as I
can tell, the way they taught me to program in college was all wrong. You
should figure out programs as you're writing them, just as writers and
painters and architects do.
Realizing this has real implications for software design. It means that a
programming language should, above all, be malleable. A programming language
is for [thinking](piraha.html) of programs, not for expressing programs you've
already thought of. It should be a pencil, not a pen. Static typing would be a
fine idea if people actually did write programs the way they taught me to in
college. But that's not how any of the hackers I know write programs. We need
a language that lets us scribble and smudge and smear, not a language where
you have to sit with a teacup of types balanced on your knee and make polite
conversation with a strict old aunt of a compiler.
While we're on the subject of static typing, identifying with the makers will
save us from another problem that afflicts the sciences: math envy. Everyone
in the sciences secretly believes that mathematicians are smarter than they
are. I think mathematicians also believe this. At any rate, the result is that
scientists tend to make their work look as mathematical as possible. In a
field like physics this probably doesn't do much harm, but the further you get
from the natural sciences, the more of a problem it becomes.
A page of formulas just looks so impressive. (Tip: for extra impressiveness,
use Greek variables.) And so there is a great temptation to work on problems
you can treat formally, rather than problems that are, say, important.
If hackers identified with other makers, like writers and painters, they
wouldn't feel tempted to do this. Writers and painters don't suffer from math
envy. They feel as if they're doing something completely unrelated. So are
hackers, I think.
If universities and research labs keep hackers from doing the kind of work
they want to do, perhaps the place for them is in companies. Unfortunately,
most companies won't let hackers do what they want either. Universities and
research labs force hackers to be scientists, and companies force them to be
engineers.
I only discovered this myself quite recently. When Yahoo bought Viaweb, they
asked me what I wanted to do. I had never liked the business side very much,
and said that I just wanted to hack. When I got to Yahoo, I found that what
hacking meant to them was implementing software, not designing it. Programmers
were seen as technicians who translated the visions (if that is the word) of
product managers into code.
This seems to be the default plan in big companies. They do it because it
decreases the standard deviation of the outcome. Only a small percentage of
hackers can actually design software, and it's hard for the people running a
company to pick these out. So instead of entrusting the future of the software
to one brilliant hacker, most companies set things up so that it is designed
by committee, and the hackers merely implement the design.
If you want to make money at some point, remember this, because this is one of
the reasons startups win. Big companies want to decrease the standard
deviation of design outcomes because they want to avoid disasters. But when
you damp oscillations, you lose the high points as well as the low. This is
not a problem for big companies, because they don't win by making great
products. Big companies win by sucking less than other big companies.
So if you can figure out a way to get in a design war with a company big
enough that its software is designed by product managers, they'll never be
able to keep up with you. These opportunities are not easy to find, though.
It's hard to engage a big company in a design war, just as it's hard to engage
an opponent inside a castle in hand-to-hand combat. It would be pretty easy to
write a better word processor than Microsoft Word, for example, but Microsoft,
within the castle of their operating system monopoly, probably wouldn't even
notice if you did.
The place to fight design wars is in new markets, where no one has yet managed
to establish any fortifications. That's where you can win big by taking the
bold approach to design, and having the same people both design and implement
the product. Microsoft themselves did this at the start. So did Apple. And
Hewlett-Packard. I suspect almost every successful startup has.
So one way to build great software is to start your own startup. There are two
problems with this, though. One is that in a startup you have to do so much
besides write software. At Viaweb I considered myself lucky if I got to hack a
quarter of the time. And the things I had to do the other three quarters of
the time ranged from tedious to terrifying. I have a benchmark for this,
because I once had to leave a board meeting to have some cavities filled. I
remember sitting back in the dentist's chair, waiting for the drill, and
feeling like I was on vacation.
The other problem with startups is that there is not much overlap between the
kind of software that makes money and the kind that's interesting to write.
Programming languages are interesting to write, and Microsoft's first product
was one, in fact, but no one will pay for programming languages now. If you
want to make money, you tend to be forced to work on problems that are too
nasty for anyone to solve for free.
All makers face this problem. Prices are determined by supply and demand, and
there is just not as much demand for things that are fun to work on as there
is for things that solve the mundane problems of individual customers. Acting
in off-Broadway plays just doesn't pay as well as wearing a gorilla suit in
someone's booth at a trade show. Writing novels doesn't pay as well as writing
ad copy for garbage disposals. And hacking programming languages doesn't pay
as well as figuring out how to connect some company's legacy database to their
Web server.
I think the answer to this problem, in the case of software, is a concept
known to nearly all makers: the day job. This phrase began with musicians, who
perform at night. More generally, it means that you have one kind of work you
do for money, and another for love.
Nearly all makers have day jobs early in their careers. Painters and writers
notoriously do. If you're lucky you can get a day job that's closely related
to your real work. Musicians often seem to work in record stores. A hacker
working on some programming language or operating system might likewise be
able to get a day job using it. [1]
When I say that the answer is for hackers to have day jobs, and work on
beautiful software on the side, I'm not proposing this as a new idea. This is
what open-source hacking is all about. What I'm saying is that open-source is
probably the right model, because it has been independently confirmed by all
the other makers.
It seems surprising to me that any employer would be reluctant to let hackers
work on open-source projects. At Viaweb, we would have been reluctant to hire
anyone who didn't. When we interviewed programmers, the main thing we cared
about was what kind of software they wrote in their spare time. You can't do
anything really well unless you love it, and if you love to hack you'll
inevitably be working on projects of your own. [2]
Because hackers are makers rather than scientists, the right place to look for
metaphors is not in the sciences, but among other kinds of makers. What else
can painting teach us about hacking?
One thing we can learn, or at least confirm, from the example of painting is
how to learn to hack. You learn to paint mostly by doing it. Ditto for
hacking. Most hackers don't learn to hack by taking college courses in
programming. They learn to hack by writing programs of their own at age
thirteen. Even in college classes, you learn to hack mostly by hacking. [3]
Because painters leave a trail of work behind them, you can watch them learn
by doing. If you look at the work of a painter in chronological order, you'll
find that each painting builds on things that have been learned in previous
ones. When there's something in a painting that works very well, you can
usually find version 1 of it in a smaller form in some earlier painting.
I think most makers work this way. Writers and architects seem to as well.
Maybe it would be good for hackers to act more like painters, and regularly
start over from scratch, instead of continuing to work for years on one
project, and trying to incorporate all their later ideas as revisions.
The fact that hackers learn to hack by doing it is another sign of how
different hacking is from the sciences. Scientists don't learn science by
doing it, but by doing labs and problem sets. Scientists start out doing work
that's perfect, in the sense that they're just trying to reproduce work
someone else has already done for them. Eventually, they get to the point
where they can do original work. Whereas hackers, from the start, are doing
original work; it's just very bad. So hackers start original, and get good,
and scientists start good, and get original.
The other way makers learn is from examples. For a painter, a museum is a
reference library of techniques. For hundreds of years it has been part of the
traditional education of painters to copy the works of the great masters,
because copying forces you to look closely at the way a painting is made.
Writers do this too. Benjamin Franklin learned to write by summarizing the
points in the essays of Addison and Steele and then trying to reproduce them.
Raymond Chandler did the same thing with detective stories.
Hackers, likewise, can learn to program by looking at good programs-- not just
at what they do, but the source code too. One of the less publicized benefits
of the open-source movement is that it has made it easier to learn to program.
When I learned to program, we had to rely mostly on examples in books. The one
big chunk of code available then was Unix, but even this was not open source.
Most of the people who read the source read it in illicit photocopies of John
Lions' book, which though written in 1977 was not allowed to be published
until 1996.
Another example we can take from painting is the way that paintings are
created by gradual refinement. Paintings usually begin with a sketch.
Gradually the details get filled in. But it is not merely a process of filling
in. Sometimes the original plans turn out to be mistaken. Countless paintings,
when you look at them in x-rays, turn out to have limbs that have been moved or
facial features that have been readjusted.
Here's a case where we can learn from painting. I think hacking should work
this way too. It's unrealistic to expect that the specifications for a program
will be perfect. You're better off if you admit this up front, and write
programs in a way that allows specifications to change on the fly.
(The structure of large companies makes this hard for them to do, so here is
another place where startups have an advantage.)
Everyone by now presumably knows about the danger of premature optimization. I
think we should be just as worried about premature design-- deciding too early
what a program should do.
The right tools can help us avoid this danger. A good programming language
should, like oil paint, make it easy to change your mind. Dynamic typing is a
win here because you don't have to commit to specific data representations up
front. But the key to flexibility, I think, is to make the language very
[abstract](power.html). The easiest program to change is one that's very
short.
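Here's a small illustration of what that deferral buys you (a toy example, nothing more):

```python
# Because nothing here commits to a data representation, changing your mind
# upstream costs nothing: average() accepts lists, tuples, generators, etc.
def average(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

print(average([3, 4, 5]))                  # first draft: a list
print(average(x * x for x in range(10)))   # revision: a generator, no rewrite
```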
This sounds like a paradox, but a great painting has to be better than it has
to be. For example, when Leonardo painted the portrait of [Ginevra de
Benci](ginevra.html) in the National Gallery, he put a juniper bush behind her
head. In it he carefully painted each individual leaf. Many painters might
have thought, this is just something to put in the background to frame her
head. No one will look that closely at it.
Not Leonardo. How hard he worked on part of a painting didn't depend at all on
how closely he expected anyone to look at it. He was like Michael Jordan.
Relentless.
Relentlessness wins because, in the aggregate, unseen details become visible.
When people walk by the portrait of Ginevra de Benci, their attention is often
immediately arrested by it, even before they look at the label and notice that
it says Leonardo da Vinci. All those unseen details combine to produce
something that's just stunning, like a thousand barely audible voices all
singing in tune.
Great software, likewise, requires a fanatical devotion to beauty. If you look
inside good software, you find that parts no one is ever supposed to see are
beautiful too. I'm not claiming I write great software, but I know that when
it comes to code I behave in a way that would make me eligible for
prescription drugs if I approached everyday life the same way. It drives me
crazy to see code that's badly indented, or that uses ugly variable names.
If a hacker were a mere implementor, turning a spec into code, then he could
just work his way through it from one end to the other like someone digging a
ditch. But if the hacker is a creator, we have to take inspiration into
account.
In hacking, like painting, work comes in cycles. Sometimes you get excited
about some new project and you want to work sixteen hours a day on it. Other
times nothing seems interesting.
To do good work you have to take these cycles into account, because they're
affected by how you react to them. When you're driving a car with a manual
transmission on a hill, you have to back off the clutch sometimes to avoid
stalling. Backing off can likewise prevent ambition from stalling. In both
painting and hacking there are some tasks that are terrifyingly ambitious, and
others that are comfortingly routine. It's a good idea to save some easy tasks
for moments when you would otherwise stall.
In hacking, this can literally mean saving up bugs. I like debugging: it's the
one time that hacking is as straightforward as people think it is. You have a
totally constrained problem, and all you have to do is solve it. Your program
is supposed to do x. Instead it does y. Where does it go wrong? You know
you're going to win in the end. It's as relaxing as painting a wall.
The example of painting can teach us not only how to manage our own work, but
how to work together. A lot of the great art of the past is the work of
multiple hands, though there may only be one name on the wall next to it in
the museum. Leonardo was an apprentice in the workshop of Verrocchio and
painted one of the angels in his [Baptism of Christ](baptism.html). This sort
of thing was the rule, not the exception. Michelangelo was considered
especially dedicated for insisting on painting all the figures on the ceiling
of the Sistine Chapel himself.
As far as I know, when painters worked together on a painting, they never
worked on the same parts. It was common for the master to paint the principal
figures and for assistants to paint the others and the background. But you
never had one guy painting over the work of another.
I think this is the right model for collaboration in software too. Don't push
it too far. When a piece of code is being hacked by three or four different
people, no one of whom really owns it, it will end up being like a common
room. It will tend to feel bleak and abandoned, and accumulate cruft. The
right way to collaborate, I think, is to divide projects into sharply defined
modules, each with a definite owner, and with interfaces between them that are
as carefully designed and, if possible, as articulated as programming
languages.
Like painting, most software is intended for a human audience. And so hackers,
like painters, must have empathy to do really great work. You have to be able
to see things from the user's point of view.
When I was a kid I was always being told to look at things from someone else's
point of view. What this always meant in practice was to do what someone else
wanted, instead of what I wanted. This of course gave empathy a bad name, and
I made a point of not cultivating it.
Boy, was I wrong. It turns out that looking at things from other people's
point of view is practically the secret of success. It doesn't necessarily
mean being self-sacrificing. Far from it. Understanding how someone else sees
things doesn't imply that you'll act in his interest; in some situations-- in
war, for example-- you want to do exactly the opposite. [4]
Most makers make things for a human audience. And to engage an audience you
have to understand what they need. Nearly all the greatest paintings are
paintings of people, for example, because people are what people are
interested in.
Empathy is probably the single most important difference between a good hacker
and a great one. Some hackers are quite smart, but when it comes to empathy
are practically solipsists. It's hard for such people to design great software
[5], because they can't see things from the user's point of view.
One way to tell how good people are at empathy is to watch them explain a
technical question to someone without a technical background. We probably all
know people who, though otherwise smart, are just comically bad at this. If
someone asks them at a dinner party what a programming language is, they'll
say something like "Oh, a high-level language is what the compiler uses as
input to generate object code." High-level language? Compiler? Object code?
Someone who doesn't know what a programming language is obviously doesn't know
what these things are, either.
Part of what software has to do is explain itself. So to write good software
you have to understand how little users understand. They're going to walk up
to the software with no preparation, and it had better do what they guess it
will, because they're not going to read the manual. The best system I've ever
seen in this respect was the original Macintosh, in 1985. It did what software
almost never does: it just worked. [6]
Source code, too, should explain itself. If I could get people to remember
just one quote about programming, it would be the one at the beginning of
_Structure and Interpretation of Computer Programs._
> Programs should be written for people to read, and only incidentally for
> machines to execute.
You need to have empathy not just for your users, but for your readers. It's
in your interest, because you'll be one of them. Many a hacker has written a
program only to find on returning to it six months later that he has no idea
how it works. I know several people who've sworn off Perl after such
experiences. [7]
Lack of empathy is associated with intelligence, to the point that there is
even something of a fashion for it in some places. But I don't think there's
any correlation. You can do well in math and the natural sciences without
having to learn empathy, and people in these fields tend to be smart, so the
two qualities have come to be associated. But there are plenty of dumb people
who are bad at empathy too. Just listen to the people who call in with
questions on talk shows. They ask whatever it is they're asking in such a
roundabout way that the hosts often have to rephrase the question for them.
So, if hacking works like painting and writing, is it as cool? After all, you
only get one life. You might as well spend it working on something great.
Unfortunately, the question is hard to answer. There is always a big time lag
in prestige. It's like light from a distant star. Painting has prestige now
because of great work people did five hundred years ago. At the time, no one
thought these paintings were as important as we do today. It would have seemed
very odd to people at the time that Federico da Montefeltro, the Duke of
Urbino, would one day be known mostly as the guy with the strange nose in a
[painting](montefeltro.html) by Piero della Francesca.
So while I admit that hacking doesn't seem as cool as painting now, we should
remember that painting itself didn't seem as cool in its glory days as it does
now.
What we can say with some confidence is that these are the glory days of
hacking. In most fields the great work is done early on. The paintings made
between 1430 and 1500 are still unsurpassed. Shakespeare appeared just as
professional theater was being born, and pushed the medium so far that every
playwright since has had to live in his shadow. Albrecht Durer did the same
thing with engraving, and Jane Austen with the novel.
Over and over we see the same pattern. A new medium appears, and people are so
excited about it that they explore most of its possibilities in the first
couple generations. Hacking seems to be in this phase now.
Painting was not, in Leonardo's time, as cool as his work helped make it. How
cool hacking turns out to be will depend on what we can do with this new
medium.
**Notes**
[1] The greatest damage that photography has done to painting may be the fact
that it killed the best day job. Most of the great painters in history
supported themselves by painting portraits.
[2] I've been told that Microsoft discourages employees from contributing to
open-source projects, even in their spare time. But so many of the best
hackers work on open-source projects now that the main effect of this policy
may be to ensure that they won't be able to hire any first-rate programmers.
[3] What you learn about programming in college is much like what you learn
about books or clothes or dating: what bad taste you had in high school.
[4] Here's an example of applied empathy. At Viaweb, if we couldn't decide
between two alternatives, we'd ask, what would our competitors hate most? At
one point a competitor added a feature to their software that was basically
useless, but since it was one of few they had that we didn't, they made much
of it in the trade press. We could have tried to explain that the feature was
useless, but we decided it would annoy our competitor more if we just
implemented it ourselves, so we hacked together our own version that
afternoon.
[5] Except text editors and compilers. Hackers don't need empathy to design
these, because they are themselves typical users.
[6] Well, almost. They overshot the available RAM somewhat, causing much
inconvenient disk swapping, but this could be fixed within a few months by
buying an additional disk drive.
[7] The way to make programs easy to read is not to stuff them with comments.
I would take Abelson and Sussman's quote a step further. Programming languages
should be designed to express algorithms, and only incidentally to tell
computers how to execute them. A good programming language ought to be better
for explaining software than English. You should only need comments when there
is some kind of kludge you need to warn readers about, just as on a road there
are only arrows on parts with unexpectedly sharp curves.
**Thanks** to Trevor Blackwell, Robert Morris, Dan Giffin, and Lisa Randall
for reading drafts of this, and to Henry Leitner and Larry Finkelstein for
inviting me to speak.
March 2012
One of the more surprising things I've noticed while working on Y Combinator
is how frightening the most ambitious startup ideas are. In this essay I'm
going to demonstrate this phenomenon by describing some. Any one of them could
make you a billionaire. That might sound like an attractive prospect, and yet
when I describe these ideas you may notice you find yourself shrinking away
from them.
Don't worry, it's not a sign of weakness. Arguably it's a sign of sanity. The
biggest startup ideas are terrifying. And not just because they'd be a lot of
work. The biggest ideas seem to threaten your identity: you wonder if you'd
have enough ambition to carry them through.
There's a scene in _Being John Malkovich_ where the nerdy hero encounters a
very attractive, sophisticated woman. She says to him:
> Here's the thing: If you ever got me, you wouldn't have a clue what to do
> with me.
That's what these ideas say to us.
This phenomenon is one of the most important things you can understand about
startups. [1] You'd expect big startup ideas to be attractive, but actually
they tend to repel you. And that has a bunch of consequences. It means these
ideas are invisible to most people who try to think of startup ideas, because
their subconscious filters them out. Even the most ambitious people are
probably best off approaching them obliquely.
**1\. A New Search Engine**
The best ideas are just on the right side of impossible. I don't know if this
one is possible, but there are signs it might be. Making a new search engine
means competing with Google, and recently I've noticed some cracks in their
fortress.
The point when it became clear to me that Microsoft had lost their way was
when they decided to get into the search business. That was not a natural move
for Microsoft. They did it because they were afraid of Google, and Google was
in the search business. But this meant (a) Google was now setting Microsoft's
agenda, and (b) Microsoft's agenda consisted of stuff they weren't good at.
Microsoft : Google :: Google : Facebook.
That does not by itself mean there's room for a new search engine, but lately
when using Google search I've found myself nostalgic for the old days, when
Google was true to its own slightly aspy self. Google used to give me a page
of the right answers, fast, with no clutter. Now the results seem inspired by
the Scientologist principle that what's true is what's true for you. And the
pages don't have the clean, sparse feel they used to. Google search results
used to look like the output of a Unix utility. Now if I accidentally put the
cursor in the wrong place, anything might happen.
The way to win here is to build the search engine all the hackers use. A
search engine whose users consisted of the top 10,000 hackers and no one else
would be in a very powerful position despite its small size, just as Google
was when it was that search engine. And for the first time in over a decade
the idea of switching seems thinkable to me.
Since anyone capable of starting this company is one of those 10,000 hackers,
the route is at least straightforward: make the search engine you yourself
want. Feel free to make it excessively hackerish. Make it really good for code
search, for example. Would you like search queries to be Turing complete?
Anything that gets you those 10,000 users is ipso facto good.
Don't worry if something you want to do will constrain you in the long term,
because if you don't get that initial core of users, there won't be a long
term. If you can just build something that you and your friends genuinely
prefer to Google, you're already about 10% of the way to an IPO, just as
Facebook was (though they probably didn't realize it) when they got all the
Harvard undergrads.
**2\. Replace Email**
Email was not designed to be used the way we use it now. Email is not a
messaging protocol. It's a todo list. Or rather, my inbox is a todo list, and
email is the way things get onto it. But it is a disastrously bad todo list.
I'm open to different types of solutions to this problem, but I suspect that
tweaking the inbox is not enough, and that email has to be replaced with a new
protocol. This new protocol should be a todo list protocol, not a messaging
protocol, although there is a degenerate case where what someone wants you to
do is: read the following text.
As a todo list protocol, the new protocol should give more power to the
recipient than email does. I want there to be more restrictions on what
someone can put on my todo list. And when someone can put something on my todo
list, I want them to tell me more about what they want from me. Do they want
me to do something beyond just reading some text? How important is it? (There
obviously has to be some mechanism to prevent people from saying everything is
important.) When does it have to be done?
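To make the shape of such a protocol concrete, here's one possible sketch of a single message. Every field name is invented for illustration; the requirements above only say that a request should state what's wanted, how important it is, and when it's due:

```python
# Hypothetical message format for a todo-list protocol (all names invented).
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TodoRequest:
    sender: str
    action: str                    # what's wanted beyond "read this text"
    body: str                      # the degenerate case: text to read
    importance: int                # capped per-sender by the recipient
    deadline: Optional[datetime]   # when it has to be done, if ever
```

The restrictions would live on the receiving side: for example, a recipient's rule that any given sender gets at most one top-importance request per week.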
This is one of those ideas that's like an irresistible force meeting an
immovable object. On one hand, entrenched protocols are impossible to replace.
On the other, it seems unlikely that people in 100 years will still be living
in the same email hell we do now. And if email is going to get replaced
eventually, why not now?
If you do it right, you may be able to avoid the usual chicken and egg problem
new protocols face, because some of the most powerful people in the world will
be among the first to switch to it. They're all at the mercy of email too.
Whatever you build, make it fast. GMail has become painfully slow. [2] If you
made something no better than GMail, but fast, that alone would let you start
to pull users away from GMail.
GMail is slow because Google can't afford to spend a lot on it. But people
will pay for this. I'd have no problem paying $50 a month. Considering how
much time I spend in email, it's kind of scary to think how much I'd be
justified in paying. At least $1000 a month. If I spend several hours a day
reading and writing email, that would be a cheap way to make my life better.
**3\. Replace Universities**
People are all over this idea lately, and I think they're onto something. I'm
reluctant to suggest that an institution that's been around for a millennium
is finished just because of some mistakes they made in the last few decades,
but certainly in the last few decades US universities seem to have been headed
down the wrong path. One could do a lot better for a lot less money.
I don't think universities will disappear. They won't be replaced wholesale.
They'll just lose the de facto monopoly on certain types of learning that they
once had. There will be many different ways to learn different things, and
some may look quite different from universities. Y Combinator itself is
arguably one of them.
Learning is such a big problem that changing the way people do it will have a
wave of secondary effects. For example, the name of the university one went to
is treated by a lot of people (correctly or not) as a credential in its own
right. If learning breaks up into many little pieces, credentialling may
separate from it. There may even need to be replacements for campus social
life (and oddly enough, YC even has aspects of that).
You could replace high schools too, but there you face bureaucratic obstacles
that would slow down a startup. Universities seem the place to start.
**4\. Internet Drama**
Hollywood has been slow to embrace the Internet. That was a mistake, because I
think we can now call a winner in the race between delivery mechanisms, and it
is the Internet, not cable.
A lot of the reason is the horribleness of cable clients, also known as TVs.
Our family didn't wait for Apple TV. We hated our last TV so much that a few
months ago we replaced it with an iMac bolted to the wall. It's a little
inconvenient to control it with a wireless mouse, but the overall experience
is much better than the nightmare UI we had to deal with before.
Some of the attention people currently devote to watching movies and TV can be
stolen by things that seem completely unrelated, like social networking apps.
More can be stolen by things that are a little more closely related, like
games. But there will probably always remain some residual demand for
conventional drama, where you sit passively and watch as a plot happens. So
how do you deliver drama via the Internet? Whatever you make will have to be
on a larger scale than YouTube clips. When people sit down to watch a show,
they want to know what they're going to get: either part of a series with
familiar characters, or a single longer "movie" whose basic premise they know
in advance.
There are two ways delivery and payment could play out. Either some company
like Netflix or Apple will be the app store for entertainment, and you'll
reach audiences through them. Or the would-be app stores will be too
overreaching, or too technically inflexible, and companies will arise to
supply payment and streaming a la carte to the producers of drama. If that's
the way things play out, there will also be a need for such infrastructure
companies.
**5\. The Next Steve Jobs**
I was talking recently to someone who knew Apple well, and I asked him if the
people now running the company would be able to keep creating new things the
way Apple had under Steve Jobs. His answer was simply "no." I already feared
that would be the answer. I asked more to see how he'd qualify it. But he
didn't qualify it at all. No, there will be no more great new stuff beyond
whatever's currently in the pipeline. Apple's revenues may continue to rise
for a long time, but as Microsoft shows, revenue is a lagging indicator in the
technology business.
So if Apple's not going to make the next iPad, who is? None of the existing
players. None of them are run by product visionaries, and empirically you
can't seem to get those by hiring them. Empirically the way you get a product
visionary as CEO is for him to found the company and not get fired. So the
company that creates the next wave of hardware is probably going to have to be
a startup.
I realize it sounds preposterously ambitious for a startup to try to become as
big as Apple. But no more ambitious than it was for Apple to become as big as
Apple, and they did it. Plus a startup taking on this problem now has an
advantage the original Apple didn't: the example of Apple. Steve Jobs has
shown us what's possible. That helps would-be successors both directly, as
Roger Bannister did, by showing how much better you can do than people did
before, and indirectly, as Augustus did, by lodging the idea in users' minds
that a single person could unroll the future for them. [3]
Now that Steve is gone, there's a vacuum we can all feel. If a new company led
boldly into the future of hardware, users would follow. The CEO of that
company, the "next Steve Jobs," might not measure up to Steve Jobs. But he
wouldn't have to. He'd just have to do a better job than Samsung and HP and
Nokia, and that seems pretty doable.
**6\. Bring Back Moore's Law**
The last 10 years have reminded us what Moore's Law actually says. Till about
2002 you could safely misinterpret it as promising that clock speeds would
double every 18 months. Actually what it says is that circuit densities will
double every 18 months. It used to seem pedantic to point that out. Not any
more. Intel can no longer give us faster CPUs, just more of them.
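Stated as arithmetic, what the law actually promises:

```latex
d(y) = d_0 \cdot 2^{\,y/1.5}
\qquad \Rightarrow \qquad
\frac{d(10)}{d_0} = 2^{10/1.5} \approx 102.
```

That is, about a hundredfold more circuits per chip over a decade, with no word about clock speed.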
This Moore's Law is not as good as the old one. Moore's Law used to mean that
if your software was slow, all you had to do was wait, and the inexorable
progress of hardware would solve your problems. Now if your software is slow
you have to rewrite it to do more things in parallel, which is a lot more work
than waiting.
It would be great if a startup could give us something of the old Moore's Law
back, by writing software that could make a large number of CPUs look to the
developer like one very fast CPU. There are several ways to approach this
problem. The most ambitious is to try to do it automatically: to write a
compiler that will parallelize our code for us. There's a name for this
compiler, _the sufficiently smart compiler,_ and it is a byword for
impossibility. But is it really impossible? Is there no configuration of the
bits in memory of a present day computer that is this compiler? If you really
think so, you should try to prove it, because that would be an interesting
result. And if it's not impossible but simply very hard, it might be worth
trying to write it. The expected value would be high even if the chance of
succeeding was low.
The reason the expected value is so high is web services. If you could write
software that gave programmers the convenience of the way things were in the
old days, you could offer it to them as a web service. And that would in turn
mean that you got practically all the users.
Imagine there was another processor manufacturer that could still translate
increased circuit densities into increased clock speeds. They'd take most of
Intel's business. And since web services mean that no one sees their
processors anymore, by writing the sufficiently smart compiler you could
create a situation indistinguishable from you being that manufacturer, at
least for the server market.
The least ambitious way of approaching the problem is to start from the other
end, and offer programmers more parallelizable Lego blocks to build programs
out of, like Hadoop and MapReduce. Then the programmer still does much of the
work of optimization.
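Here, for contrast, is what the Lego-block style looks like in miniature: a toy, single-machine word count in the MapReduce mold. The names are illustrative rather than Hadoop's actual API. The programmer writes the small parallelizable pieces, the mapper and the reducer, and a framework would run many copies of each in parallel and handle the shuffle between them.

```python
# A toy MapReduce-style word count. The mapper and reducer are the
# "Lego blocks"; a real framework would distribute them across machines.
from collections import defaultdict
from itertools import chain

def mapper(line):                  # emit (word, 1) for each word in a line
    return [(word, 1) for word in line.split()]

def reducer(word, counts):         # combine all the counts for one word
    return word, sum(counts)

def map_reduce(lines):
    groups = defaultdict(list)     # the "shuffle": group pairs by key
    for key, value in chain.from_iterable(mapper(l) for l in lines):
        groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

print(map_reduce(["the quick brown fox", "the lazy brown dog"]))
# {'the': 2, 'quick': 1, 'brown': 2, 'fox': 1, 'lazy': 1, 'dog': 1}
```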
There's an intriguing middle ground where you build a semi-automatic
weapon—where there's a human in the loop. You make something that looks to the
user like the sufficiently smart compiler, but inside has people, using highly
developed optimization tools to find and eliminate bottlenecks in users'
programs. These people might be your employees, or you might create a
marketplace for optimization.
An optimization marketplace would be a way to generate the sufficiently smart
compiler piecemeal, because participants would immediately start writing bots.
It would be a curious state of affairs if you could get to the point where
everything could be done by bots, because then you'd have made the
sufficiently smart compiler, but no one person would have a complete copy of
it.
I realize how crazy all this sounds. In fact, what I like about this idea is
all the different ways in which it's wrong. The whole idea of focusing on
optimization is counter to the general trend in software development for the
last several decades. Trying to write the sufficiently smart compiler is by
definition a mistake. And even if it weren't, compilers are the sort of
software that's supposed to be created by open source projects, not companies.
Plus if this works it will deprive all the programmers who take pleasure in
making multithreaded apps of so much amusing complexity. The forum troll I
have by now internalized doesn't even know where to begin in raising
objections to this project. Now that's what I call a startup idea.
**7\. Ongoing Diagnosis**
But wait, here's another that could face even greater resistance: ongoing,
automatic medical diagnosis.
One of my tricks for generating startup ideas is to imagine the ways in which
we'll seem backward to future generations. And I'm pretty sure that to people
50 or 100 years in the future, it will seem barbaric that people in our era
waited till they had symptoms to be diagnosed with conditions like heart
disease and cancer.
For example, in 2004 Bill Clinton found he was feeling short of breath.
Doctors discovered that several of his arteries were over 90% blocked and 3
days later he had a quadruple bypass. It seems reasonable to assume Bill
Clinton has the best medical care available. And yet even he had to wait till
his arteries were over 90% blocked to learn that the number was over 90%.
Surely at some point in the future we'll know these numbers the way we now
know something like our weight. Ditto for cancer. It will seem preposterous to
future generations that we wait till patients have physical symptoms to be
diagnosed with cancer. Cancer will show up on some sort of radar screen
immediately.
(Of course, what shows up on the radar screen may be different from what we
think of now as cancer. I wouldn't be surprised if at any given time we have
tens or even hundreds of microcancers going at once, none of which normally
amount to anything.)
A lot of the obstacles to ongoing diagnosis will come from the fact that it's
going against the grain of the medical profession. The way medicine has always
worked is that patients come to doctors with problems, and the doctors figure
out what's wrong. A lot of doctors don't like the idea of going on the medical
equivalent of what lawyers call a "fishing expedition," where you go looking
for problems without knowing what you're looking for. They call the things
that get discovered this way "incidentalomas," and they are something of a
nuisance.
For example, a friend of mine once had her brain scanned as part of a study.
She was horrified when the doctors running the study discovered what appeared
to be a large tumor. After further testing, it turned out to be a harmless
cyst. But it cost her a few days of terror. A lot of doctors worry that if you
start scanning people with no symptoms, you'll get this on a giant scale: a
huge number of false alarms that make patients panic and require expensive and
perhaps even dangerous tests to resolve. But I think that's just an artifact
of current limitations. If people were scanned all the time and we got better
at deciding what was a real problem, my friend would have known about this
cyst her whole life and known it was harmless, just as we know a birthmark is.
There is room for a lot of startups here. In addition to the technical
obstacles all startups face, and the bureaucratic obstacles all medical
startups face, they'll be going against thousands of years of medical
tradition. But it will happen, and it will be a great thing—so great that
people in the future will feel as sorry for us as we do for the generations
that lived before anaesthesia and antibiotics.
**Tactics**
Let me conclude with some tactical advice. If you want to take on a problem as
big as the ones I've discussed, don't make a direct frontal attack on it.
Don't say, for example, that you're going to replace email. If you do that you
raise too many expectations. Your employees and investors will constantly be
asking "are we there yet?" and you'll have an army of haters waiting to see
you fail. Just say you're building todo-list software. That sounds harmless.
People can notice you've replaced email when it's a _fait accompli_. [4]
Empirically, the way to do really big things seems to be to start with
deceptively small things. Want to dominate microcomputer software? Start by
writing a Basic interpreter for a machine with a few thousand users. Want to
make the universal web site? Start by building a site for Harvard undergrads
to stalk one another.
Empirically, it's not just for other people that you need to start small. You
need to for your own sake. Neither Bill Gates nor Mark Zuckerberg knew at
first how big their companies were going to get. All they knew was that they
were onto something. Maybe it's a bad idea to have really big ambitions
initially, because the bigger your ambition, the longer it's going to take,
and the further you project into the future, the more likely you'll get it
wrong.
I think the way to use these big ideas is not to try to identify a precise
point in the future and then ask yourself how to get from here to there, like
the popular image of a visionary. You'll be better off if you operate like
Columbus and just head in a general westerly direction. Don't try to construct
the future like a building, because your current blueprint is almost certainly
mistaken. Start with something you know works, and when you expand, expand
westward.
The popular image of the visionary is someone with a clear view of the future,
but empirically it may be better to have a blurry one.
**Notes**
[1] It's also one of the most important things VCs fail to understand about
startups. Most expect founders to walk in with a clear plan for the future,
and judge them based on that. Few consciously realize that in the biggest
successes there is the least correlation between the initial plan and what the
startup eventually becomes.
[2] This sentence originally read "GMail is painfully slow." Thanks to Paul
Buchheit for the correction.
[3] Roger Bannister is famous as the first person to run a mile in under 4
minutes. But his world record only lasted 46 days. Once he showed it could be
done, lots of others followed. Ten years later Jim Ryun ran a 3:59 mile as a
high school junior.
[4] If you want to be the next Apple, maybe you don't even want to start with
consumer electronics. Maybe at first you make something hackers use. Or you
make something popular but apparently unimportant, like a headset or router.
All you need is a bridgehead.
**Thanks** to Sam Altman, Trevor Blackwell, Paul Buchheit, Patrick Collison,
Aaron Iba, Jessica Livingston, Robert Morris, Harj Taggar and Garry Tan for
reading drafts of this.
February 2009
I finally realized today why politics and religion yield such uniquely useless
discussions.
As a rule, any mention of religion on an online forum degenerates into a
religious argument. Why? Why does this happen with religion and not with
Javascript or baking or other topics people talk about on forums?
What's different about religion is that people don't feel they need to have
any particular expertise to have opinions about it. All they need is strongly
held beliefs, and anyone can have those. No thread about Javascript will grow
as fast as one about religion, because people feel they have to be over some
threshold of expertise to post comments about that. But on religion everyone's
an expert.
Then it struck me: this is the problem with politics too. Politics, like
religion, is a topic where there's no threshold of expertise for expressing an
opinion. All you need is strong convictions.
Do religion and politics have something in common that explains this
similarity? One possible explanation is that they deal with questions that
have no definite answers, so there's no back pressure on people's opinions.
Since no one can be proven wrong, every opinion is equally valid, and sensing
this, everyone lets fly with theirs.
But this isn't true. There are certainly some political questions that have
definite answers, like how much a new government policy will cost. But the
more precise political questions suffer the same fate as the vaguer ones.
I think what religion and politics have in common is that they become part of
people's identity, and people can never have a fruitful argument about
something that's part of their identity. By definition they're partisan.
Which topics engage people's identity depends on the people, not the topic.
For example, a discussion about a battle that included citizens of one or more
of the countries involved would probably degenerate into a political argument.
But a discussion today about a battle that took place in the Bronze Age
probably wouldn't. No one would know what side to be on. So it's not politics
that's the source of the trouble, but identity. When people say a discussion
has degenerated into a religious war, what they really mean is that it has
started to be driven mostly by people's identities. [1]
Because the point at which this happens depends on the people rather than the
topic, it's a mistake to conclude that because a question tends to provoke
religious wars, it must have no answer. For example, the question of the
relative merits of programming languages often degenerates into a religious
war, because so many programmers identify as X programmers or Y programmers.
This sometimes leads people to conclude the question must be unanswerable—that
all languages are equally good. Obviously that's false: anything else people
make can be well or badly designed; why should this be uniquely impossible for
programming languages? And indeed, you can have a fruitful discussion about
the relative merits of programming languages, so long as you exclude people
who respond from identity.
More generally, you can have a fruitful discussion about a topic only if it
doesn't engage the identities of any of the participants. What makes politics
and religion such minefields is that they engage so many people's identities.
But you could in principle have a useful conversation about them with some
people. And there are other topics that might seem harmless, like the relative
merits of Ford and Chevy pickup trucks, that you couldn't safely talk about
with
[others](http://www.theledger.com/apps/pbcs.dll/article?AID=/20060418/NEWS/604180378/1039).
The most intriguing thing about this theory, if it's right, is that it
explains not merely which kinds of discussions to avoid, but how to have
better ideas. If people can't think clearly about anything that has become
part of their identity, then all other things being equal, the best plan is to
let as few things into your identity as possible. [2]
Most people reading this will already be fairly tolerant. But there is a step
beyond thinking of yourself as x but tolerating y: not even to consider
yourself an x. The more labels you have for yourself, the dumber they make
you.
**Notes**
[1] When that happens, it tends to happen fast, like a core going critical.
The threshold for participating goes down to zero, which brings in more
people. And they tend to say incendiary things, which draw more and angrier
counterarguments.
[2] There may be some things it's a net win to include in your identity. For
example, being a scientist. But arguably that is more of a placeholder than an
actual label—like putting NMI on a form that asks for your middle
initial—because it doesn't commit you to believing anything in particular. A
scientist isn't committed to believing in natural selection in the same way a
biblical literalist is committed to rejecting it. All he's committed to is
following the evidence wherever it leads.
Considering yourself a scientist is equivalent to putting a sign in a cupboard
saying "this cupboard must be kept empty." Yes, strictly speaking, you're
putting something in the cupboard, but not in the ordinary sense.
**Thanks** to Sam Altman, Trevor Blackwell, Paul Buchheit, and Robert Morris
for reading drafts of this.
May 2006
_(This essay is derived from a keynote at Xtech.)_
Startups happen in clusters. There are a lot of them in Silicon Valley and
Boston, and few in Chicago or Miami. A country that wants startups will
probably also have to reproduce whatever makes these clusters form.
I've claimed that the [recipe](siliconvalley.html) is a great university near
a town smart people like. If you set up those conditions within the US,
startups will form as inevitably as water droplets condense on a cold piece of
metal. But when I consider what it would take to reproduce Silicon Valley in
another country, it's clear the US is a particularly humid environment.
Startups condense more easily here.
It is by no means a lost cause to try to create a silicon valley in another
country. There's room not merely to equal Silicon Valley, but to surpass it.
But if you want to do that, you have to understand the advantages startups get
from being in America.
**1\. The US Allows Immigration.**
For example, I doubt it would be possible to reproduce Silicon Valley in
Japan, because one of Silicon Valley's most distinctive features is
immigration. Half the people there speak with accents. And the Japanese don't
like immigration. When they think about how to make a Japanese silicon valley,
I suspect they unconsciously frame it as how to make one consisting only of
Japanese people. This way of framing the question probably guarantees failure.
A silicon valley has to be a mecca for the smart and the ambitious, and you
can't have a mecca if you don't let people into it.
Of course, it's not saying much that America is more open to immigration than
Japan. Immigration policy is one area where a competitor could do better.
**2\. The US Is a Rich Country.**
I could see India one day producing a rival to Silicon Valley. Obviously they
have the right people: you can tell that by the number of Indians in the
current Silicon Valley. The problem with India itself is that it's still so
poor.
In poor countries, things we take for granted are missing. A friend of mine
visiting India sprained her ankle falling down the steps in a railway station.
When she turned to see what had happened, she found the steps were all
different heights. In industrialized countries we walk down steps our whole
lives and never think about this, because there's an infrastructure that
prevents such a staircase from being built.
The US has never been so poor as some countries are now. There have never been
swarms of beggars in the streets of American cities. So we have no data about
what it takes to get from the swarms-of-beggars stage to the silicon-valley
stage. Could you have both at once, or does there have to be some baseline
prosperity before you get a silicon valley?
I suspect there is some speed limit to the evolution of an economy. Economies
are made out of people, and attitudes can only change a certain amount per
generation. [1]
**3\. The US Is Not (Yet) a Police State.**
Another country I could see wanting to have a silicon valley is China. But I
doubt they could do it yet either. China still seems to be a police state, and
although present rulers seem enlightened compared to the last, even
enlightened despotism can probably only get you part way toward being a great
economic power.
It can get you factories for building things designed elsewhere. Can it get
you the designers, though? Can imagination flourish where people can't
criticize the government? Imagination means having odd ideas, and it's hard to
have odd ideas about technology without also having odd ideas about politics.
And in any case, many technical ideas do have political implications. So if
you squash dissent, the back pressure will propagate into technical fields.
[2]
Singapore would face a similar problem. Singapore seems very aware of the
importance of encouraging startups. But while energetic government
intervention may be able to make a port run efficiently, it can't coax
startups into existence. A state that bans chewing gum has a long way to go
before it could create a San Francisco.
Do you need a San Francisco? Might there not be an alternate route to
innovation that goes through obedience and cooperation instead of
individualism? Possibly, but I'd bet not. Most imaginative people seem to
share a certain prickly [independence](gba.html), whenever and wherever they
lived. You see it in Diogenes telling Alexander to get out of his light and
two thousand years later in Feynman breaking into safes at Los Alamos. [3]
Imaginative people don't want to follow or lead. They're most productive when
everyone gets to do what they want.
Ironically, of all rich countries the US has lost the most civil liberties
recently. But I'm not too worried yet. I'm hoping once the present
administration is out, the natural openness of American culture will reassert
itself.
**4\. American Universities Are Better.**
You need a great university to seed a silicon valley, and so far there are few
outside the US. I asked a handful of American computer science professors
which universities in Europe were most admired, and they all basically said
"Cambridge" followed by a long pause while they tried to think of others.
There don't seem to be many universities elsewhere that compare with the best
in America, at least in technology.
In some countries this is the result of a deliberate policy. The German and
Dutch governments, perhaps from fear of elitism, try to ensure that all
universities are roughly equal in quality. The downside is that none are
especially good. The best professors are spread out, instead of being
concentrated as they are in the US. This probably makes them less productive,
because they don't have good colleagues to inspire them. It also means no one
university will be good enough to act as a mecca, attracting talent from
abroad and causing startups to form around it.
The case of Germany is a strange one. The Germans invented the modern
university, and up till the 1930s theirs were the best in the world. Now they
have none that stand out. As I was mulling this over, I found myself thinking:
"I can understand why German universities declined in the 1930s, after they
excluded Jews. But surely they should have bounced back by now." Then I
realized: maybe not. There are few Jews left in Germany and most Jews I know
would not want to move there. And if you took any great American university
and removed the Jews, you'd have some pretty big gaps. So maybe it would be a
lost cause trying to create a silicon valley in Germany, because you couldn't
establish the level of university you'd need as a seed. [4]
It's natural for US universities to compete with one another because so many
are private. To reproduce the quality of American universities you probably
also have to reproduce this. If universities are controlled by the central
government, log-rolling will pull them all toward the mean: the new Institute
of X will end up at the university in the district of a powerful politician,
instead of where it should be.
**5\. You Can Fire People in America.**
I think one of the biggest obstacles to creating startups in Europe is the
attitude toward employment. The famously rigid labor laws hurt every company,
but startups especially, because startups have the least time to spare for
bureaucratic hassles.
The difficulty of firing people is a particular problem for startups because
they have no redundancy. Every person has to do their job well.
But the problem is more than just that some startup might have a problem
firing someone they needed to. Across industries and countries, there's a
strong inverse correlation between performance and job security. Actors and
directors are fired at the end of each film, so they have to deliver every
time. Junior professors are fired by default after a few years unless the
university chooses to grant them tenure. Professional athletes know they'll be
pulled if they play badly for just a couple games. At the other end of the
scale (at least in the US) are auto workers, New York City schoolteachers, and
civil servants, who are all nearly impossible to fire. The trend is so clear
that you'd have to be willfully blind not to see it.
Performance isn't everything, you say? Well, are auto workers, schoolteachers,
and civil servants _happier_ than actors, professors, and professional
athletes?
European public opinion will apparently tolerate people being fired in
industries where they really care about performance. Unfortunately the only
industry they care enough about so far is soccer. But that is at least a
precedent.
**6\. In America Work Is Less Identified with Employment.**
The problem in more traditional places like Europe and Japan goes deeper than
the employment laws. More dangerous is the attitude they reflect: that an
employee is a kind of servant, whom the employer has a duty to protect. It
used to be that way in America too. In 1970 you were still supposed to get a
job with a big company, for whom ideally you'd work your whole career. In
return the company would take care of you: they'd try not to fire you, cover
your medical expenses, and support you in old age.
Gradually employment has been shedding such paternalistic overtones and
becoming simply an economic exchange. But the importance of the new model is
not just that it makes it easier for startups to grow. More important, I
think, is that it makes it easier for people to _start_ startups.
Even in the US most kids graduating from college still think they're supposed
to get jobs, as if you couldn't be productive without being someone's
employee. But the less you identify work with employment, the easier it
becomes to start a startup. When you see your career as a series of different
types of work, instead of a lifetime's service to a single employer, there's
less risk in starting your own company, because you're only replacing one
segment instead of discarding the whole thing.
The old ideas are so powerful that even the most successful startup founders
have had to struggle against them. A year after the founding of Apple, Steve
Wozniak still hadn't quit HP. He still planned to work there for life. And
when Jobs found someone to give Apple serious venture funding, on the
condition that Woz quit, he initially refused, arguing that he'd designed both
the Apple I and the Apple II while working at HP, and there was no reason he
couldn't continue.
**7\. America Is Not Too Fussy.**
If there are any laws regulating businesses, you can assume larval startups
will break most of them, because they don't know what the laws are and don't
have time to find out.
For example, many startups in America begin in places where it's not really
legal to run a business. Hewlett-Packard, Apple, and Google were all run out
of garages. Many more startups, including ours, were initially run out of
apartments. If the laws against such things were actually enforced, most
startups wouldn't happen.
That could be a problem in fussier countries. If Hewlett and Packard tried
running an electronics company out of their garage in Switzerland, the old
lady next door would report them to the municipal authorities.
But the worst problem in other countries is probably the effort required just
to start a company. A friend of mine started a company in Germany in the early
90s, and was shocked to discover, among many other regulations, that you
needed $20,000 in capital to incorporate. That's one reason I'm not typing
this on an Apfel laptop. Jobs and Wozniak couldn't have come up with that kind
of money in a company financed by selling a VW bus and an HP calculator. We
couldn't have started Viaweb either. [5]
Here's a tip for governments that want to encourage startups: read the stories
of existing startups, and then try to simulate what would have happened in
your country. When you hit something that would have killed Apple, prune it
off.
_Startups are [marginal](marginal.html)._ They're started by the poor and the
timid; they begin in marginal space and spare time; they're started by people
who are supposed to be doing something else; and though businesses, their
founders often know nothing about business. Young startups are fragile. A
society that trims its margins sharply will kill them all.
**8\. America Has a Large Domestic Market.**
What sustains a startup in the beginning is the prospect of getting their
initial product out. The successful ones therefore make the first version as
simple as possible. In the US they usually begin by making something just for
the local market.
This works in America, because the local market is 300 million people. It
wouldn't work so well in Sweden. In a small country, a startup has a harder
task: they have to sell internationally from the start.
The EU was designed partly to simulate a single, large domestic market. The
problem is that the inhabitants still speak many different languages. So a
software startup in Sweden is still at a disadvantage relative to one in the
US, because they have to deal with internationalization from the beginning.
It's significant that the most famous recent startup in Europe, Skype, worked
on a problem that was intrinsically international.
However, for better or worse it looks as if Europe will in a few decades speak
a single language. When I was a student in Italy in 1990, few Italians spoke
English. Now all educated people seem to be expected to-- and Europeans do not
like to seem uneducated. This is presumably a taboo subject, but if present
trends continue, French and German will eventually go the way of Irish and
Luxembourgish: they'll be spoken in homes and by eccentric nationalists.
**9\. America Has Venture Funding.**
Startups are easier to start in America because funding is easier to get.
There are now a few VC firms outside the US, but startup funding doesn't only
come from VC firms. A more important source, because it's more personal and
comes earlier in the process, is money from individual angel investors. Google
might never have got to the point where they could raise millions from VC
funds if they hadn't first raised a hundred thousand from Andy Bechtolsheim.
And he could help them because he was one of the founders of Sun. This pattern
is repeated constantly in startup hubs. It's this pattern that _makes_ them
startup hubs.
The good news is, all you have to do to get the process rolling is get those
first few startups successfully launched. If they stick around after they get
rich, startup founders will almost automatically fund and encourage new
startups.
The bad news is that the cycle is slow. It probably takes five years, on
average, before a startup founder can make angel investments. And while
governments _might_ be able to set up local VC funds by supplying the money
themselves and recruiting people from existing firms to run them, only organic
growth can produce angel investors.
Incidentally, America's private universities are one reason there's so much
venture capital. A lot of the money in VC funds comes from their endowments.
So another advantage of private universities is that a good chunk of the
country's wealth is managed by enlightened investors.
**10\. America Has Dynamic Typing for Careers.**
Compared to other industrialized countries the US is disorganized about
routing people into careers. For example, in America people often don't decide
to go to medical school till they've finished college. In Europe they
generally decide in high school.
The European approach reflects the old idea that each person has a single,
definite occupation-- which is not far from the idea that each person has a
natural "station" in life. If this were true, the most efficient plan would be
to discover each person's station as early as possible, so they could receive
the training appropriate to it.
In the US things are more haphazard. But that turns out to be an advantage as
an economy gets more liquid, just as dynamic typing turns out to work better
than static for ill-defined problems. This is particularly true with startups.
"Startup founder" is not the sort of career a high school student would
choose. If you ask at that age, people will choose conservatively. They'll
choose well-understood occupations like engineer, or doctor, or lawyer.
Startups are the kind of thing people don't plan, so you're more likely to get
them in a society where it's ok to make career decisions on the fly.
For example, in theory the purpose of a PhD program is to train you to do
research. But fortunately in the US this is another rule that isn't very
strictly enforced. In the US most people in CS PhD programs are there simply
because they wanted to learn more. They haven't decided what they'll do
afterward. So American grad schools spawn a lot of startups, because students
don't feel they're failing if they don't go into research.
Those worried about America's "competitiveness" often suggest spending more on
public schools. But perhaps America's lousy public schools have a hidden
advantage. Because they're so bad, the kids adopt an attitude of waiting for
college. I did; I knew I was learning so little that I wasn't even learning
what the choices were, let alone which to choose. This is demoralizing, but it
does at least make you keep an open mind.
Certainly if I had to choose between bad high schools and good universities,
like the US, and good high schools and bad universities, like most other
industrialized countries, I'd take the US system. Better to make everyone feel
like a late bloomer than a failed child prodigy.
**Attitudes**
There's one item conspicuously missing from this list: American attitudes.
Americans are said to be more entrepreneurial, and less afraid of risk. But
America has no monopoly on this. Indians and Chinese seem plenty
entrepreneurial, perhaps more than Americans.
Some say Europeans are less energetic, but I don't believe it. I think the
problem with Europe is not that they lack balls, but that they lack examples.
Even in the US, the most successful startup founders are often technical
people who are quite timid, initially, about the idea of starting their own
company. Few are the sort of backslapping extroverts one thinks of as
typically American. They can usually only summon up the activation energy to
start a startup when they meet people who've done it and realize they could
too.
I think what holds back European hackers is simply that they don't meet so
many people who've done it. You see that variation even within the US.
Stanford students are more entrepreneurial than Yale students, but not because
of some difference in their characters; the Yale students just have fewer
examples.
I admit there seem to be different attitudes toward ambition in Europe and the
US. In the US it's ok to be overtly ambitious, and in most of Europe it's not.
But this can't be an intrinsically European quality; previous generations of
Europeans were as ambitious as Americans. What happened? My hypothesis is that
ambition was discredited by the terrible things ambitious people did in the
first half of the twentieth century. Now swagger is out. (Even now the image
of a very ambitious German presses a button or two, doesn't it?)
It would be surprising if European attitudes weren't affected by the disasters
of the twentieth century. It takes a while to be optimistic after events like
that. But ambition is human nature. Gradually it will re-emerge. [6]
**How To Do Better**
I don't mean to suggest by this list that America is the perfect place for
startups. It's the best place so far, but the sample size is small, and "so
far" is not very long. On historical time scales, what we have now is just a
prototype.
So let's look at Silicon Valley the way you'd look at a product made by a
competitor. What weaknesses could you exploit? How could you make something
users would like better? The users in this case are those critical few
thousand people you'd like to move to your silicon valley.
To start with, Silicon Valley is too far from San Francisco. Palo Alto, the
original ground zero, is about thirty miles away, and the present center more
like forty. So people who come to work in Silicon Valley face an unpleasant
choice: either live in the boring sprawl of the valley proper, or live in San
Francisco and endure an hour commute each way.
The best thing would be if the silicon valley were not merely closer to the
interesting city, but interesting itself. And there is a lot of room for
improvement here. Palo Alto is not so bad, but everything built since is the
worst sort of strip development. You can measure how demoralizing it is by the
number of people who will sacrifice two hours a day commuting rather than live
there.
Another area in which you could easily surpass Silicon Valley is public
transportation. There is a train running the length of it, and by American
standards it's not bad. Which is to say that to Japanese or Europeans it would
seem like something out of the third world.
The kind of people you want to attract to your silicon valley like to get
around by train, bicycle, and on foot. So if you want to beat America, design
a town that puts cars last. It will be a while before any American city can
bring itself to do that.
**Capital Gains**
There are also a couple things you could do to beat America at the national
level. One would be to have lower capital gains taxes. It doesn't seem
critical to have the lowest _income_ taxes, because to take advantage of
those, people have to move. [7] But if capital gains rates vary, you move
assets, not yourself, so changes are reflected at market speeds. The lower the
rate, the cheaper it is to buy stock in growing companies as opposed to real
estate, or bonds, or stocks bought for the dividends they pay.
So if you want to encourage startups you should have a low rate on capital
gains. Politicians are caught between a rock and a hard place here, however:
make the capital gains rate low and be accused of creating "tax breaks for the
rich," or make it high and starve growing companies of investment capital. As
Galbraith said, politics is a matter of choosing between the unpalatable and
the disastrous. A lot of governments experimented with the disastrous in the
twentieth century; now the trend seems to be toward the merely unpalatable.
Oddly enough, the leaders now are European countries like Belgium, which has a
capital gains tax rate of zero.
**Immigration**
The other place you could beat the US would be with smarter immigration
policy. There are huge gains to be made here. Silicon valleys are made of
people, remember.
Like a company whose software runs on Windows, those in the current Silicon
Valley are all too aware of the shortcomings of the INS, but there's little
they can do about it. They're hostages of the platform.
America's immigration system has never been well run, and since 2001 there has
been an additional admixture of paranoia. What fraction of the smart people
who want to come to America can even get in? I doubt even half. Which means if
you made a competing technology hub that let in all smart people, you'd
immediately get more than half the world's top talent, for free.
US immigration policy is particularly ill-suited to startups, because it
reflects a model of work from the 1970s. It assumes good technical people have
college degrees, and that work means working for a big company.
If you don't have a college degree you can't get an H1B visa, the type usually
issued to programmers. But a test that excludes Steve Jobs, Bill Gates, and
Michael Dell can't be a good one. Plus you can't get a visa for working on
your own company, only for working as an employee of someone else's. And if
you want to apply for citizenship you daren't work for a startup at all,
because if your sponsor goes out of business, you have to start over.
American immigration policy keeps out most smart people, and channels the rest
into unproductive jobs. It would be easy to do better. Imagine if, instead,
you treated immigration like recruiting-- if you made a conscious effort to
seek out the smartest people and get them to come to your country.
A country that got immigration right would have a huge advantage. At this
point you could become a mecca for smart people simply by having an
immigration system that let them in.
**A Good Vector**
If you look at the kinds of things you have to do to create an environment
where startups condense, none are great sacrifices. Great universities?
Livable towns? Civil liberties? Flexible employment laws? Immigration policies
that let in smart people? Tax laws that encourage growth? It's not as if you
have to risk destroying your country to get a silicon valley; these are all
good things in their own right.
And then of course there's the question, can you afford not to? I can imagine
a future in which the default choice of ambitious young people is to start
their [own](hiring.html) company rather than work for someone else's. I'm not
sure that will happen, but it's where the trend points now. And if that is the
future, places that don't have startups will be a whole step behind, like
those that missed the Industrial Revolution.
**Notes**
[1] On the verge of the Industrial Revolution, England was already the richest
country in the world. As far as such things can be compared, per capita income
in England in 1750 was higher than India's in 1960.
Deane, Phyllis, _The First Industrial Revolution_, Cambridge University
Press, 1965.
[2] This has already happened once in China, during the Ming Dynasty, when the
country turned its back on industrialization at the command of the court. One
of Europe's advantages was that it had no government powerful enough to do
that.
[3] Of course, Feynman and Diogenes were from adjacent traditions, but
Confucius, though more polite, was no more willing to be told what to think.
[4] For similar reasons it might be a lost cause to try to establish a silicon
valley in Israel. Instead of no Jews moving there, only Jews would move there,
and I don't think you could build a silicon valley out of just Jews any more
than you could out of just Japanese.
(This is not a remark about the qualities of these groups, just their sizes.
Japanese are only about 2% of the world population, and Jews about .2%.)
[5] According to the World Bank, the initial capital requirement for German
companies is 47.6% of the per capita income. Doh.
World Bank, _Doing Business in 2006_, http://doingbusiness.org
[6] For most of the twentieth century, Europeans looked back on the summer of
1914 as if they'd been living in a dream world. It seems more accurate (or at
least, as accurate) to call the years after 1914 a nightmare than to call
those before a dream. A lot of the optimism Europeans consider distinctly
American is simply what they too were feeling in 1914.
[7] The point where things start to go wrong seems to be about 50%. Above that
people get serious about tax avoidance. The reason is that the payoff for
avoiding tax grows hyperexponentially (x/(1-x) for 0 < x < 1). If your income
tax rate is 10%, moving to Monaco would only give you 11% more income, which
wouldn't even cover the extra cost. If it's 90%, you'd get ten times as much
income. And at 98%, as it was briefly in Britain in the 70s, moving to Monaco
would give you fifty times as much income. It seems quite likely that European
governments of the 70s never drew this curve.
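For the curious, the curve is easy to check. At tax rate x you keep 1-x of your income, so moving somewhere untaxed multiplies your income by 1/(1-x), a gain of x/(1-x):

```python
# Checking the payoff curve in note [7]: at tax rate x, escaping tax
# multiplies your income by 1/(1-x), i.e. gives you x/(1-x) more.
for x in (0.10, 0.50, 0.90, 0.98):
    print(f"rate {x:.0%}: {x / (1 - x):.2f}x extra ({1 / (1 - x):.1f}x total)")
# rate 10%: 0.11x extra (1.1x total)
# rate 50%: 1.00x extra (2.0x total)
# rate 90%: 9.00x extra (10.0x total)
# rate 98%: 49.00x extra (50.0x total)
```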
**Thanks** to Trevor Blackwell, Matthias Felleisen, Jessica Livingston, Robert
Morris, Neil Rimer, Hugues Steinier, Brad Templeton, Fred Wilson, and Stephen
Wolfram for reading drafts of this, and to Ed Dumbill for inviting me to
speak.
August 2005
_(This essay is derived from a talk at Oscon 2005.)_
Lately companies have been paying more attention to open source. Ten years ago
there seemed a real danger Microsoft would extend its monopoly to servers. It
seems safe to say now that open source has prevented that. A recent survey
found 52% of companies are replacing Windows servers with Linux servers. [1]
More significant, I think, is _which_ 52% they are. At this point, anyone
proposing to run Windows on servers should be prepared to explain what they
know about servers that Google, Yahoo, and Amazon don't.
But the biggest thing business has to learn from open source is not about
Linux or Firefox, but about the forces that produced them. Ultimately these
will affect a lot more than what software you use.
We may be able to get a fix on these underlying forces by triangulating from
open source and blogging. As you've probably noticed, they have a lot in
common.
Like open source, blogging is something people do themselves, for free,
because they enjoy it. Like open source hackers, bloggers compete with people
working for money, and often win. The method of ensuring quality is also the
same: Darwinian. Companies ensure quality through rules to prevent employees
from screwing up. But you don't need that when the audience can communicate
with one another. People just produce whatever they want; the good stuff
spreads, and the bad gets ignored. And in both cases, feedback from the
audience improves the best work.
Another thing blogging and open source have in common is the Web. People have
always been willing to do great work for free, but before the Web it was
harder to reach an audience or collaborate on projects.
**Amateurs**
I think the most important of the new principles business has to learn is that
people work a lot harder on stuff they like. Well, that's news to no one. So
how can I claim business has to learn it? When I say business doesn't know
this, I mean the structure of business doesn't reflect it.
Business still reflects an older model, exemplified by the French word for
working: _travailler_. It has an English cousin, travail, and what it means is
torture. [2]
This turns out not to be the last word on work, however. As societies get
richer, they learn something about work that's a lot like what they learn
about diet. We know now that the healthiest diet is the one our peasant
ancestors were forced to eat because they were poor. Like rich food, idleness
only seems desirable when you don't get enough of it. I think we were designed
to work, just as we were designed to eat a certain amount of fiber, and we
feel bad if we don't.
There's a name for people who work for the love of it: amateurs. The word now
has such bad connotations that we forget its etymology, though it's staring us
in the face. "Amateur" was originally rather a complimentary word. But the
thing to be in the twentieth century was professional, which amateurs, by
definition, are not.
That's why the business world was so surprised by one lesson from open source:
that people working for love often surpass those working for money. Users
don't switch from Explorer to Firefox because they want to hack the source.
They switch because it's a better browser.
It's not that Microsoft isn't trying. They know controlling the browser is one
of the keys to retaining their monopoly. The problem is the same they face in
operating systems: they can't pay people enough to build something better than
a group of inspired hackers will build for free.
I suspect professionalism was always overrated-- not just in the literal sense
of working for money, but also connotations like formality and detachment.
Inconceivable as it would have seemed in, say, 1970, I think professionalism
was largely a fashion, driven by conditions that happened to exist in the
twentieth century.
One of the most powerful of those was the existence of "channels."
Revealingly, the same term was used for both products and information: there
were distribution channels, and TV and radio channels.
It was the narrowness of such channels that made professionals seem so
superior to amateurs. There were only a few jobs as professional journalists,
for example, so competition ensured the average journalist was fairly good.
Whereas anyone can express opinions about current events in a bar. And so the
average person expressing his opinions in a bar sounds like an idiot compared
to a journalist writing about the subject.
On the Web, the barrier for publishing your ideas is even lower. You don't
have to buy a drink, and they even let kids in. Millions of people are
publishing online, and the average level of what they're writing, as you might
expect, is not very good. This has led some in the media to conclude that
blogs don't present much of a threat-- that blogs are just a fad.
Actually, the fad is the word "blog," at least the way the print media now use
it. What they mean by "blogger" is not someone who publishes in a weblog
format, but anyone who publishes online. That's going to become a problem as
the Web becomes the default medium for publication. So I'd like to suggest an
alternative word for someone who publishes online. How about "writer?"
Those in the print media who dismiss the writing online because of its low
average quality are missing an important point: no one reads the _average_
blog. In the old world of channels, it meant something to talk about average
quality, because that's what you were getting whether you liked it or not. But
now you can read any writer you want. So the average quality of writing online
isn't what the print media are competing against. They're competing against
the best writing online. And, like Microsoft, they're losing.
I know that from my own experience as a reader. Though most print publications
are online, I probably read two or three articles on individual people's sites
for every one I read on the site of a newspaper or magazine.
And when I read, say, New York Times stories, I never reach them through the
Times front page. Most I find through aggregators like Google News or Slashdot
or Delicious. Aggregators show how much [better](http://reddit.com) you can do
than the channel. The New York Times front page is a list of articles written
by people who work for the New York Times. Delicious is a list of articles
that are interesting. And it's only now that you can see the two side by side
that you notice how little overlap there is.
Most articles in the print media are boring. For example, the president
notices that a majority of voters now think invading Iraq was a mistake, so he
makes an address to the nation to drum up support. Where is the man bites dog
in that? I didn't hear the speech, but I could probably tell you exactly what
he said. A speech like that is, in the most literal sense, not news: there is
nothing _new_ in it. [3]
Nor is there anything new, except the names and places, in most "news" about
things going wrong. A child is abducted; there's a tornado; a ferry sinks;
someone gets bitten by a shark; a small plane crashes. And what do you learn
about the world from these stories? Absolutely nothing. They're outlying data
points; what makes them gripping also makes them irrelevant.
As in software, when professionals produce such crap, it's not surprising if
amateurs can do better. Live by the channel, die by the channel: if you depend
on an oligopoly, you sink into bad habits that are hard to overcome when you
suddenly get competition. [4]
**Workplaces**
Another thing blogs and open source software have in common is that they're
often made by people working at home. That may not seem surprising. But it
should be. It's the architectural equivalent of a home-made aircraft shooting
down an F-18. Companies spend millions to build office buildings for a single
purpose: to be a place to work. And yet people working in their own homes,
which aren't even designed to be workplaces, end up being more productive.
This proves something a lot of us have suspected. The average office is a
miserable place to get work done. And a lot of what makes offices bad are the
very qualities we associate with professionalism. The sterility of offices is
supposed to suggest efficiency. But suggesting efficiency is a different thing
from actually being efficient.
The atmosphere of the average workplace is to productivity what flames painted
on the side of a car are to speed. And it's not just the way offices look
that's bleak. The way people act is just as bad.
Things are different in a startup. Often as not a startup begins in an
apartment. Instead of matching beige cubicles they have an assortment of
furniture they bought used. They work odd hours, wearing the most casual of
clothing. They look at whatever they want online without worrying whether it's
"work safe." The cheery, bland language of the office is replaced by wicked
humor. And you know what? The company at this stage is probably the most
productive it's ever going to be.
Maybe it's not a coincidence. Maybe some aspects of professionalism are
actually a net lose.
To me the most demoralizing aspect of the traditional office is that you're
supposed to be there at certain times. There are usually a few people in a
company who really have to, but the reason most employees work fixed hours is
that the company can't measure their productivity.
The basic idea behind office hours is that if you can't make people work, you
can at least prevent them from having fun. If employees have to be in the
building a certain number of hours a day, and are forbidden to do non-work
things while there, then they must be working. In theory. In practice they
spend a lot of their time in a no-man's land, where they're neither working
nor having fun.
If you could measure how much work people did, many companies wouldn't need
any fixed workday. You could just say: this is what you have to do. Do it
whenever you like, wherever you like. If your work requires you to talk to
other people in the company, then you may need to be here a certain amount.
Otherwise we don't care.
That may seem utopian, but it's what we told people who came to work for our
company. There were no fixed office hours. I never showed up before 11 in the
morning. But we weren't saying this to be benevolent. We were saying: if you
work here we expect you to get a lot done. Don't try to fool us just by being
here a lot.
The problem with the facetime model is not just that it's demoralizing, but
that the people pretending to work interrupt the ones actually working. I'm
convinced the facetime model is the main reason large organizations have so
many meetings. Per capita, large organizations accomplish very little. And yet
all those people have to be on site at least eight hours a day. When so much
time goes in one end and so little achievement comes out the other, something
has to give. And meetings are the main mechanism for taking up the slack.
For one year I worked at a regular nine-to-five job, and I remember well the
strange, cozy feeling that comes over one during meetings. I was very aware,
because of the novelty, that I was being paid for programming. It seemed just
amazing, as if there was a machine on my desk that spat out a dollar bill
every two minutes no matter what I did. Even while I was in the bathroom! But
because the imaginary machine was always running, I felt I always ought to be
working. And so meetings felt wonderfully relaxing. They counted as work, just
like programming, but they were so much easier. All you had to do was sit and
look attentive.
Meetings are like an opiate with a network effect. So is email, on a smaller
scale. And in addition to the direct cost in time, there's the cost in
fragmentation-- breaking people's day up into bits too small to be useful.
You can see how dependent you've become on something by removing it suddenly.
So for big companies I propose the following experiment. Set aside one day
where meetings are forbidden-- where everyone has to sit at their desk all day
and work without interruption on things they can do without talking to anyone
else. Some amount of communication is necessary in most jobs, but I'm sure
many employees could find eight hours worth of stuff they could do by
themselves. You could call it "Work Day."
The other problem with pretend work is that it often looks better than real
work. When I'm writing or hacking I spend as much time just thinking as I do
actually typing. Half the time I'm sitting drinking a cup of tea, or walking
around the neighborhood. This is a critical phase-- this is where ideas come
from-- and yet I'd feel guilty doing this in most offices, with everyone else
looking busy.
It's hard to see how bad some practice is till you have something to compare
it to. And that's one reason open source, and even blogging in some cases, are
so important. They show us what real work looks like.
We're funding eight new startups at the moment. A friend asked what they were
doing for office space, and seemed surprised when I said we expected them to
work out of whatever apartments they found to live in. But we didn't propose
that to save money. We did it because we want their software to be good.
Working in crappy informal spaces is one of the things startups do right
without realizing it. As soon as you get into an office, work and life start
to drift apart.
That is one of the key tenets of professionalism. Work and life are supposed
to be separate. But that part, I'm convinced, is a mistake.
**Bottom-Up**
The third big lesson we can learn from open source and blogging is that ideas
can bubble up from the bottom, instead of flowing down from the top. Open
source and blogging both work bottom-up: people make what they want, and the
best stuff prevails.
Does this sound familiar? It's the principle of a market economy. Ironically,
though open source and blogs are done for free, those worlds resemble market
economies, while most companies, for all their talk about the value of free
markets, are run internally like communist states.
There are two forces that together steer design: ideas about what to do next,
and the enforcement of quality. In the channel era, both flowed down from the
top. For example, newspaper editors assigned stories to reporters, then edited
what they wrote.
Open source and blogging show us things don't have to work that way. Ideas and
even the enforcement of quality can flow bottom-up. And in both cases the
results are not merely acceptable, but better. For example, open source
software is more reliable precisely because it's open source; anyone can find
mistakes.
The same happens with writing. As we got close to publication, I found I was
very worried about the essays in [Hackers &
Painters](http://www.amazon.com/exec/obidos/tg/detail/-/0596006624) that
hadn't been online. Once an essay has had a couple thousand page views I feel
reasonably confident about it. But these had had literally orders of magnitude
less scrutiny. It felt like releasing software without testing it.
That's what all publishing used to be like. If you got ten people to read a
manuscript, you were lucky. But I'd become so used to publishing online that
the old method now seemed alarmingly unreliable, like navigating by dead
reckoning once you'd gotten used to a GPS.
The other thing I like about publishing online is that you can write what you
want and publish when you want. Earlier this year I wrote
[something](inequality.html) that seemed suitable for a magazine, so I sent it
to an editor I know. As I was waiting to hear back, I found to my surprise
that I was hoping they'd reject it. Then I could put it online right away. If
they accepted it, it wouldn't be read by anyone for months, and in the
meantime I'd have to fight word-by-word to save it from being mangled by some
twenty-five-year-old copy editor. [5]
Many employees would _like_ to build great things for the companies they work
for, but more often than not management won't let them. How many of us have
heard stories of employees going to management and saying, please let us build
this thing to make money for you-- and the company saying no? The most famous
example is probably Steve Wozniak, who originally wanted to build
microcomputers for his then-employer, HP. And they turned him down. On the
blunderometer, this episode ranks with IBM accepting a non-exclusive license
for DOS. But I think this happens all the time. We just don't hear about it
usually, because to prove yourself right you have to quit and start your own
company, like Wozniak did.
**Startups**
So these, I think, are the three big lessons open source and blogging have to
teach business: (1) that people work harder on stuff they like, (2) that the
standard office environment is very unproductive, and (3) that bottom-up often
works better than top-down.
I can imagine managers at this point saying: what is this guy talking about?
What good does it do me to know that my programmers would be more productive
working at home on their own projects? I need their asses in here working on
version 3.2 of our software, or we're never going to make the release date.
And it's true, the benefit that specific manager could derive from the forces
I've described is near zero. When I say business can learn from open source, I
don't mean any specific business can. I mean business can learn about new
conditions the same way a gene pool does. I'm not claiming companies can get
smarter, just that dumb ones will die.
So what will business look like when it has assimilated the lessons of open
source and blogging? I think the big obstacle preventing us from seeing the
future of business is the assumption that people working for you have to be
employees. But think about what's going on underneath: the company has some
money, and they pay it to the employee in the hope that he'll make something
worth more than they paid him. Well, there are other ways to arrange that
relationship. Instead of paying the guy money as a salary, why not give it to
him as investment? Then instead of coming to your office to work on your
projects, he can work wherever he wants on projects of his own.
Because few of us know any alternative, we have no idea how much better we
could do than the traditional employer-employee relationship. Such customs
evolve with glacial slowness. Our employer-employee relationship still retains
a big chunk of master-servant DNA. [6]
I dislike being on either end of it. I'll work my ass off for a customer, but
I resent being told what to do by a boss. And being a boss is also horribly
frustrating; half the time it's easier just to do stuff yourself than to get
someone else to do it for you. I'd rather do almost anything than give or
receive a performance review.
On top of its unpromising origins, employment has accumulated a lot of cruft
over the years. The list of what you can't ask in job interviews is now so
long that for convenience I assume it's infinite. Within the office you now
have to walk on eggshells lest anyone [say](say.html) or do something that
makes the company prey to a lawsuit. And God help you if you fire anyone.
Nothing shows more clearly that employment is not an ordinary economic
relationship than companies being sued for firing people. In any purely
economic relationship you're free to do what you want. If you want to stop
buying steel pipe from one supplier and start buying it from another, you
don't have to explain why. No one can accuse you of _unjustly_ switching pipe
suppliers. Justice implies some kind of paternal obligation that isn't there
in transactions between equals.
Most of the legal restrictions on employers are intended to protect employees.
But you can't have action without an equal and opposite reaction. You can't
expect employers to have some kind of paternal responsibility toward employees
without putting employees in the position of children. And that seems a bad
road to go down.
Next time you're in a moderately large city, drop by the main post office and
watch the body language of the people working there. They have the same sullen
resentment as children made to do something they don't want to. Their union
has exacted pay increases and work restrictions that would have been the envy
of previous generations of postal workers, and yet they don't seem any happier
for it. It's demoralizing to be on the receiving end of a paternalistic
relationship, no matter how cozy the terms. Just ask any teenager.
I see the disadvantages of the employer-employee relationship because I've
been on both sides of a better one: the investor-founder relationship. I
wouldn't claim it's painless. When I was running a startup, the thought of our
investors used to keep me up at night. And now that I'm an
[investor](http://ycombinator.com), the thought of our startups keeps me up at
night. All the pain of whatever problem you're trying to solve is still there.
But the pain hurts less when it isn't mixed with resentment.
I had the misfortune to participate in what amounted to a controlled
experiment to prove that. After Yahoo bought our startup I went to work for
them. I was doing exactly the same work, except with bosses. And to my horror
I started acting like a child. The situation pushed buttons I'd forgotten I
had.
The big advantage of investment over employment, as the examples of open
source and blogging suggest, is that people working on projects of their own
are enormously more productive. And a [startup](start.html) is a project of
one's own in two senses, both of them important: it's creatively one's own,
and also economically one's own.
Google is a rare example of a big company in tune with the forces I've
described. They've tried hard to make their offices less sterile than the
usual cube farm. They give employees who do great work large grants of stock
to simulate the rewards of a startup. They even let hackers spend 20% of their
time on their own projects.
Why not let people spend 100% of their time on their own projects, and instead
of trying to approximate the value of what they create, give them the actual
market value? Impossible? That is in fact what venture capitalists do.
So am I claiming that no one is going to be an employee anymore-- that
everyone should go and start a startup? Of course not. But more people could
do it than do it now. At the moment, even the smartest students leave school
thinking they have to get a [job](hiring.html). Actually what they need to do
is make something valuable. A job is one way to do that, but the more
ambitious ones will ordinarily be better off taking money from an investor
than an employer.
Hackers tend to think business is for MBAs. But business administration is not
what you're doing in a startup. What you're doing is business _creation_. And
the first phase of that is mostly product creation-- that is, hacking. That's
the hard part. It's a lot harder to create something people love than to take
something people love and figure out how to make money from it.
Another thing that keeps people away from starting startups is the risk.
Someone with kids and a mortgage should think twice before doing it. But most
young hackers have neither.
And as the example of open source and blogging suggests, you'll enjoy it more,
even if you fail. You'll be working on your own thing, instead of going to
some office and doing what you're told. There may be more pain in your own
company, but it won't hurt as much.
That may be the greatest effect, in the long run, of the forces underlying
open source and blogging: finally ditching the old paternalistic employer-
employee relationship, and replacing it with a purely economic one, between
equals.
**Notes**
[1] Survey by Forrester Research reported in the cover story of Business Week,
31 Jan 2005. Apparently someone believed you have to replace the actual server
in order to switch the operating system.
[2] It derives from the late Latin _tripalium_ , a torture device so called
because it consisted of three stakes. I don't know how the stakes were used.
"Travel" has the same root.
[3] It would be much bigger news, in that sense, if the president faced
unscripted questions by giving a press conference.
[4] One measure of the incompetence of newspapers is that so many still make
you register to read stories. I have yet to find a blog that tried that.
[5] They accepted the article, but I took so long to send them the final
version that by the time I did the section of the magazine they'd accepted it
for had disappeared in a reorganization.
[6] The word "boss" is derived from the Dutch _baas_ , meaning "master."
**Thanks** to Sarah Harlin, Jessica Livingston, and Robert Morris for reading
drafts of this.
January 2015
No one, VC or angel, has invested in more of the top startups than Ron Conway.
He knows what happened in every deal in the Valley, half the time because he
arranged it.
And yet he's a super nice guy. In fact, nice is not the word. Ronco is good. I
know of zero instances in which he has behaved badly. It's hard even to
imagine.
When I first came to Silicon Valley I thought "How lucky that someone so
powerful is so benevolent." But gradually I realized it wasn't luck. It was by
being benevolent that Ronco became so powerful. All the deals he gets to
invest in come to him through referrals. Google did. Facebook did. Twitter was
a referral from Evan Williams himself. And the reason so many people refer
deals to him is that he's proven himself to be a good guy.
Good does not mean being a pushover. I would not want to face an angry Ronco.
But if Ron's angry at you, it's because you did something wrong. Ron is so old
school he's Old Testament. He will smite you in his just wrath, but there's no
malice in it.
In almost every domain there are advantages to seeming good. It makes people
trust you. But actually being good is an expensive way to seem good. To an
amoral person it might seem to be overkill.
In some fields it might be, but apparently not in the startup world. Though
plenty of investors are jerks, there is a clear trend among them: the most
successful investors are also the most upstanding. [1]
It was not always this way. I would not feel confident saying that about
investors twenty years ago.
What changed? The startup world became more transparent and more
unpredictable. Both make it harder to seem good without actually being good.
It's obvious why transparency has that effect. When an investor maltreats a
founder now, it gets out. Maybe not all the way to the press, but other
founders hear about it, and that investor starts to lose deals. [2]
The effect of unpredictability is more subtle. It increases the work of being
inconsistent. If you're going to be two-faced, you have to know who you should
be nice to and who you can get away with being nasty to. In the startup world,
things change so rapidly that you can't tell. The random college kid you talk
to today might in a couple years be the CEO of the hottest startup in the
Valley. If you can't tell who to be nice to, you have to be nice to everyone.
And probably the only people who can manage that are the people who are
genuinely good.
In a sufficiently connected and unpredictable world, you can't seem good
without being good.
As often happens, Ron discovered how to be the investor of the future by
accident. He didn't foresee the future of startup investing, realize it would
pay to be upstanding, and force himself to behave that way. It would feel
unnatural to him to behave any other way. He was already [living in the
future](startupideas.html).
Fortunately that future is not limited to the startup world. The startup world
is more transparent and unpredictable than most, but almost everywhere the
trend is in that direction.
**Notes**
[1] I'm not saying that if you sort investors by benevolence you've also
sorted them by returns, but rather that if you do a scatterplot with
benevolence on the x axis and returns on the y, you'd see a clear upward
trend.
[2] Y Combinator in particular, because it aggregates data from so many
startups, has a pretty comprehensive view of investor behavior.
**Thanks** to Sam Altman and Jessica Livingston for reading drafts of this.
December 2008
_(I originally wrote this at the request of a company producing a report
about entrepreneurship. Unfortunately after reading it they decided it was too
controversial to include.)_
VC funding will probably dry up somewhat during the present recession, like it
usually does in bad times. But this time the result may be different. This
time the number of new startups may not decrease. And that could be dangerous
for VCs.
When VC funding dried up after the Internet Bubble, startups dried up too.
There were not a lot of new startups being founded in 2003. But startups
aren't tied to VC the way they were 10 years ago. It's now possible for VCs
and startups to diverge. And if they do, they may not reconverge once the
economy gets better.
The reason startups no longer depend so much on VCs is one that everyone in
the startup business knows by now: it has gotten much cheaper to start a
startup. There are four main reasons: Moore's law has made hardware cheap;
open source has made software free; the web has made marketing and
distribution free; and more powerful programming languages mean development
teams can be smaller. These changes have pushed the cost of starting a startup
down into the noise. In a lot of startups—probably most startups funded by Y
Combinator—the biggest expense is simply the founders' living expenses. We've
had startups that were profitable on revenues of $3000 a month.
$3000 is insignificant as revenues go. Why should anyone care about a startup
making $3000 a month? Because, although insignificant as _revenue_ , this
amount of money can change a startup's _funding_ situation completely.
Someone running a startup is always calculating in the back of their mind how
much "runway" they have—how long they have till the money in the bank runs out
and they either have to be profitable, raise more money, or go out of
business. Once you cross the threshold of profitability, however low, your
runway becomes infinite. It's a qualitative change, like the stars turning
into lines and disappearing when the Enterprise accelerates to warp speed.
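To make that arithmetic concrete, here's a minimal sketch in Python. The bank
balance and expense figures are invented for illustration; only the
$3000-a-month revenue figure comes from this essay.

```python
# Runway: months until the money in the bank runs out at the current
# burn rate. The figures below are hypothetical, except the $3000/month
# revenue mentioned above.

def runway_months(bank, monthly_revenue, monthly_expenses):
    burn = monthly_expenses - monthly_revenue  # net cash lost per month
    if burn <= 0:
        return float("inf")  # profitable, however barely: infinite runway
    return bank / burn

print(runway_months(20_000, 0, 3_000))      # about 6.7 months left
print(runway_months(20_000, 3_000, 3_000))  # inf: the qualitative change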
Once you're profitable you don't need investors' money. And because Internet
startups have become so cheap to run, the threshold of profitability can be
trivially low. Which means many Internet startups don't need VC-scale
investments anymore. For many startups, VC funding has, in the language of
VCs, gone from a must-have to a nice-to-have.
This change happened while no one was looking, and its effects have been
largely masked so far. It was during the trough after the Internet Bubble that
it became trivially cheap to start a startup, but few realized it because
startups were so out of fashion. When startups came back into fashion, around
2005, investors were starting to write checks again. And while founders may
not have needed VC money the way they used to, they were willing to take it if
offered—partly because there was a tradition of startups taking VC money, and
partly because startups, like dogs, tend to eat when given the opportunity. As
long as VCs were writing checks, founders were never forced to explore the
limits of how little they needed them. There were a few startups who hit these
limits accidentally because of their unusual circumstances—most famously
37signals, which hit the limit because they crossed into startup land from the
other direction: they started as a consulting firm, so they had revenue before
they had a product.
VCs and founders are like two components that used to be bolted together.
Around 2000 the bolt was removed. Because the components have so far been
subjected to the same forces, they still seem to be joined together, but
really one is just resting on the other. A sharp impact would make them fly
apart. And the present recession could be that impact.
Because of Y Combinator's position at the extreme end of the spectrum, we'd be
the first to see signs of a separation between founders and investors, and we
are in fact seeing it. For example, though the stock market crash does seem to
have made investors more cautious, it doesn't seem to have had any effect on
the number of people who want to start startups. We take applications for
funding every 6 months. Applications for the current funding cycle closed on
October 17, well after the markets tanked, and even so we got a record number,
up 40% from the same cycle a year before.
Maybe things will be different a year from now, if the economy continues to
get worse, but so far there is zero slackening of interest among potential
founders. That's different from the way things felt in 2001. Then there was a
widespread feeling among potential founders that startups were over, and that
one should just go to grad school. That isn't happening this time, and part of
the reason is that even in a bad economy it's not that hard to build something
that makes $3000 a month. If investors stop writing checks, who cares?
We also see signs of a divergence between founders and investors in the
attitudes of existing startups we've funded. I was talking to one recently
that had a round fall through at the last minute over the sort of trifle that
breaks deals when investors feel they have the upper hand—over an uncertainty
about whether the founders had correctly filed their 83(b) forms, if you can
believe that. And yet this startup is obviously going to succeed: their
traffic and revenue graphs look like a jet taking off. So I asked them if they
wanted me to introduce them to more investors. To my surprise, they said
no—that they'd just spent four months dealing with investors, and they were
actually a lot happier now that they didn't have to. There was a friend they
wanted to hire with the investor money, and now they'd have to postpone that.
But otherwise they felt they had enough in the bank to make it to
profitability. To make sure, they were moving to a cheaper apartment. And in
this economy I bet they got a good deal on it.
I've detected this "investors aren't worth the trouble" vibe from several YC
founders I've talked to recently. At least one startup from the most recent
(summer) cycle may not even raise angel money, let alone VC.
[Ticketstumbler](http://ticketstumbler.com) made it to profitability on Y
Combinator's $15,000 investment and they hope not to need more. This surprised
even us. Although YC is based on the idea of it being cheap to start a
startup, we never anticipated that founders would grow successful startups on
nothing more than YC funding.
If founders decide VCs aren't worth the trouble, that could be bad for VCs.
When the economy bounces back in a few years and they're ready to write checks
again, they may find that founders have moved on.
There is a founder community just as there's a VC community. They all know one
another, and techniques spread rapidly between them. If one tries a new
programming language or a new hosting provider and gets good results, 6 months
later half of them are using it. And the same is true for funding. The current
generation of founders want to raise money from VCs, and Sequoia specifically,
because Larry and Sergey took money from VCs, and Sequoia specifically.
Imagine what it would do to the VC business if the next hot company didn't
take VC at all.
VCs think they're playing a zero sum game. In fact, it's not even that. If you
lose a deal to Benchmark, you lose that deal, but VC as an industry still
wins. If you lose a deal to None, all VCs lose.
This recession may be different from the one after the Internet Bubble. This
time founders may keep starting startups. And if they do, VCs will have to
keep writing checks, or they could become irrelevant.
**Thanks** to Sam Altman, Trevor Blackwell, David Hornik, Jessica Livingston,
Robert Morris, and Fred Wilson for reading drafts of this.
July 2006
I've discovered a handy test for figuring out what you're addicted
to. Imagine you were going to spend the weekend at a friend's house
on a little island off the coast of Maine. There are no shops on
the island and you won't be able to leave while you're there. Also,
you've never been to this house before, so you can't assume it will
have more than any house might.
What, besides clothes and toiletries, do you make a point of packing?
That's what you're addicted to. For example, if you find yourself
packing a bottle of vodka (just in case), you may want to stop and
think about that.
For me the list is four things: books, earplugs, a notebook, and a
pen.
There are other things I might bring if I thought of it, like music,
or tea, but I can live without them. I'm not so addicted to caffeine
that I wouldn't risk the house not having any tea, just for a
weekend.
Quiet is another matter. I realize it seems a bit eccentric to
take earplugs on a trip to an island off the coast of Maine. If
anywhere should be quiet, that should. But what if the person in
the next room snored? What if there was a kid playing basketball?
(Thump, thump, thump... thump.) Why risk it? Earplugs are small.
Sometimes I can think with noise. If I already have momentum on
some project, I can work in noisy places. I can edit an essay or
debug code in an airport. But airports are not so bad: most of the
noise is whitish. I couldn't work with the sound of a sitcom coming
through the wall, or a car in the street playing thump-thump music.
And of course there's another kind of thinking, when you're starting
something new, that requires complete quiet. You never
know when this will strike. It's just as well to carry plugs.
The notebook and pen are professional equipment, as it were. Though
actually there is something druglike about them, in the sense that
their main purpose is to make me feel better. I hardly ever go
back and read stuff I write down in notebooks. It's just that if
I can't write things down, worrying about remembering one idea gets
in the way of having the next. Pen and paper wick ideas.
The best notebooks I've found are made by a company called Miquelrius.
I use their smallest size, which is about 2.5 x 4 in.
The secret to writing on such
narrow pages is to break words only when you run out of space, like
a Latin inscription. I use the cheapest plastic Bic ballpoints,
partly because their gluey ink doesn't seep through pages, and
partly so I don't worry about losing them.
I only started carrying a notebook about three years ago. Before
that I used whatever scraps of paper I could find. But the problem
with scraps of paper is that they're not ordered. In a notebook
you can guess what a scribble means by looking at the pages
around it. In the scrap era I was constantly finding notes I'd
written years before that might say something I needed to remember,
if I could only figure out what.
As for books, I know the house would probably have something to
read. On the average trip I bring four books and only read one of
them, because I find new books to read en route. Really bringing
books is insurance.
I realize this dependence on books is not entirely good—that what
I need them for is distraction. The books I bring on trips are
often quite virtuous, the sort of stuff that might be assigned
reading in a college class. But I know my motives aren't virtuous.
I bring books because if the world gets boring I need to be able
to slip into another distilled by some writer. It's like eating
jam when you know you should be eating fruit.
There is a point where I'll do without books. I was walking in
some steep mountains once, and decided I'd rather just think, if I
was bored, rather than carry a single unnecessary ounce. It wasn't
so bad. I found I could entertain myself by having ideas instead
of reading other people's. If you stop eating jam, fruit starts
to taste better.
So maybe I'll try not bringing books on some future trip. They're
going to have to pry the plugs out of my cold, dead ears, however.
March 2008, rev. June 2008
Technology tends to separate normal from natural. Our bodies
weren't designed to eat the foods that people in rich countries eat, or
to get so little exercise.
There may be a similar problem with the way we work:
a normal job may be as bad for us intellectually as white flour
or sugar is for us physically.
I began to suspect this after spending several years working
with startup founders. I've now worked with over 200 of them, and I've
noticed a definite difference between programmers working on their
own startups and those working for large organizations.
I wouldn't say founders seem happier, necessarily;
starting a startup can be very stressful. Maybe the best way to put
it is to say that they're happier in the sense that your body is
happier during a long run than sitting on a sofa eating
doughnuts.
Though they're statistically abnormal, startup founders seem to be
working in a way that's more natural for humans.
I was in Africa last year and saw a lot of animals in the wild that
I'd only seen in zoos before. It was remarkable how different they
seemed. Particularly lions. Lions in the wild seem about ten times
more alive. They're like different animals. I suspect that working
for oneself feels better to humans in much the same way that living
in the wild must feel better to a wide-ranging predator like a lion.
Life in a zoo is easier, but it isn't the life they were designed
for.
**Trees**
What's so unnatural about working for a big company? The root of
the problem is that humans weren't meant to work in such large
groups.
Another thing you notice when you see animals in the wild is that
each species thrives in groups of a certain size. A herd of impalas
might have 100 adults; baboons maybe 20; lions rarely 10. Humans
also seem designed to work in groups, and what I've read about
hunter-gatherers accords with research on organizations and my own
experience to suggest roughly what the ideal size is: groups of 8
work well; by 20 they're getting hard to manage; and a group of 50
is really unwieldy. [1]
Whatever the upper limit is, we are clearly not meant to work in
groups of several hundred. And yet—for reasons having more
to do with technology than human nature—a great many people
work for companies with hundreds or thousands of employees.
Companies know groups that large wouldn't work, so they divide
themselves into units small enough to work together. But to
coordinate these they have to introduce something new: bosses.
These smaller groups are always arranged in a tree structure. Your
boss is the point where your group attaches to the tree. But when
you use this trick for dividing a large group into smaller ones,
something strange happens that I've never heard anyone mention
explicitly. In the group one level up from yours, your boss
represents your entire group. A group of 10 managers is not merely
a group of 10 people working together in the usual way. It's really
a group of groups. Which means for a group of 10 managers to work
together as if they were simply a group of 10 individuals, the group
working for each manager would have to work as if they were a single
person—the workers and manager would each share only one
person's worth of freedom between them.
In practice a group of people are never able to act as if they were
one person. But in a large organization divided into groups in
this way, the pressure is always in that direction. Each group
tries its best to work as if it were the small group of individuals
that humans were designed to work in. That was the point of creating
it. And when you propagate that constraint, the result is that
each person gets freedom of action in inverse proportion to the
size of the entire tree. [2]
Anyone who's worked for a large organization has felt this. You
can feel the difference between working for a company with 100
employees and one with 10,000, even if your group has only 10 people.
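Here's a back-of-the-envelope way to see why the constraint comes out as
exactly inverse proportion. The derivation is mine, not the essay's, and it
assumes uniform groups of size g all the way up the tree.

```latex
% N people in a tree of depth d with uniform groups of size g, so N = g^d.
% If each group must act as one person, its members share one person's
% worth of freedom, i.e. each gets a factor of 1/g, compounded per level:
\[
  \text{freedom per person} = \left(\frac{1}{g}\right)^{d}
                            = \frac{1}{g^{d}} = \frac{1}{N}
\]
```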
**Corn Syrup**
A group of 10 people within a large organization is a kind of fake
tribe. The number of people you interact with is about right. But
something is missing: individual initiative. Tribes of hunter-gatherers
have much more freedom. The leaders have a little more power than other
members of the tribe, but they don't generally tell them what to
do and when the way a boss can.
It's not your boss's fault. The real problem is that in the group
above you in the hierarchy, your entire group is one virtual person.
Your boss is just the way that constraint is imparted to you.
So working in a group of 10 people within a large organization feels
both right and wrong at the same time. On the surface it feels
like the kind of group you're meant to work in, but something major
is missing. A job at a big company is like high fructose corn
syrup: it has some of the qualities of things you're meant to like,
but is disastrously lacking in others.
Indeed, food is an excellent metaphor to explain what's wrong with
the usual sort of job.
For example, working for a big company is the default thing to do,
at least for programmers. How bad could it be? Well, food shows
that pretty clearly. If you were dropped at a random point in
America today, nearly all the food around you would be bad for you.
Humans were not designed to eat white flour, refined sugar, high
fructose corn syrup, and hydrogenated vegetable oil. And yet if
you analyzed the contents of the average grocery store you'd probably
find these four ingredients accounted for most of the calories.
"Normal" food is terribly bad for you. The only people who eat
what humans were actually designed to eat are a few Birkenstock-wearing
weirdos in Berkeley.
If "normal" food is so bad for us, why is it so common? There are
two main reasons. One is that it has more immediate appeal. You
may feel lousy an hour after eating that pizza, but eating the first
couple bites feels great. The other is economies of scale.
Producing junk food scales; producing fresh vegetables doesn't.
Which means (a) junk food can be very cheap, and (b) it's worth
spending a lot to market it.
If people have to choose between something that's cheap, heavily
marketed, and appealing in the short term, and something that's
expensive, obscure, and appealing in the long term, which do you
think most will choose?
It's the same with work. The average MIT graduate wants to work
at Google or Microsoft, because it's a recognized brand, it's safe,
and they'll get paid a good salary right away. It's the job
equivalent of the pizza they had for lunch. The drawbacks will
only become apparent later, and then only in a vague sense of
malaise.
And founders and early employees of startups, meanwhile, are like
the Birkenstock-wearing weirdos of Berkeley: though a tiny minority
of the population, they're the ones living as humans are meant to.
In an artificial world, only extremists live naturally.
**Programmers**
The restrictiveness of big company jobs is particularly hard on
programmers, because the essence of programming is to build new
things. Sales people make much the same pitches every day; support
people answer much the same questions; but once you've written a
piece of code you don't need to write it again. So a programmer
working as programmers are meant to is always making new things.
And when you're part of an organization whose structure gives each
person freedom in inverse proportion to the size of the tree, you're
going to face resistance when you do something new.
This seems an inevitable consequence of bigness. It's true even
in the smartest companies. I was talking recently to a founder who
considered starting a startup right out of college, but went to
work for Google instead because he thought he'd learn more there.
He didn't learn as much as he expected. Programmers learn by doing,
and most of the things he wanted to do, he couldn't—sometimes
because the company wouldn't let him, but often because the company's
code wouldn't let him. Between the drag of legacy code, the overhead
of doing development in such a large organization, and the restrictions
imposed by interfaces owned by other groups, he could only try a
fraction of the things he would have liked to. He said he has
learned much more in his own startup, despite the fact that he has
to do all the company's errands as well as programming, because at
least when he's programming he can do whatever he wants.
An obstacle downstream propagates upstream. If you're not allowed
to implement new ideas, you stop having them. And vice versa: when
you can do whatever you want, you have more ideas about what to do.
So working for yourself makes your brain more powerful in the same
way a low-restriction exhaust system makes an engine more powerful.
Working for yourself doesn't have to mean starting a startup, of
course. But a programmer deciding between a regular job at a big
company and their own startup is probably going to learn more doing
the startup.
You can adjust the amount of freedom you get by scaling the size
of company you work for. If you start the company, you'll have the
most freedom. If you become one of the first 10 employees you'll
have almost as much freedom as the founders. Even a company with
100 people will feel different from one with 1000.
Working for a small company doesn't ensure freedom. The tree
structure of large organizations sets an upper bound on freedom,
not a lower bound. The head of a small company may still choose
to be a tyrant. The point is that a large organization is compelled
by its structure to be one.
**Consequences**
That has real consequences for both organizations and individuals.
One is that companies will inevitably slow down as they grow larger,
no matter how hard they try to keep their startup mojo. It's a
consequence of the tree structure that every large organization is
forced to adopt.
Or rather, a large organization could only avoid slowing down if
they avoided tree structure. And since human nature limits the
size of group that can work together, the only way I can imagine
for larger groups to avoid tree structure would be to have no
structure: to have each group actually be independent, and to work
together the way components of a market economy do.
That might be worth exploring. I suspect there are already some
highly partitionable businesses that lean this way. But I don't
know any technology companies that have done it.
There is one thing companies can do short of structuring themselves
as sponges: they can stay small. If I'm right, then it really
pays to keep a company as small as it can be at every stage.
Particularly a technology company. Which means it's doubly important
to hire the best people. Mediocre hires hurt you twice: they get
less done, but they also make you big, because you need more of
them to solve a given problem.
For individuals the upshot is the same: aim small. It will always
suck to work for large organizations, and the larger the organization,
the more it will suck.
In an essay I wrote a couple years ago
I advised graduating seniors
to work for a couple years for another company before starting their
own. I'd modify that now. Work for another company if you want
to, but only for a small one, and if you want to start your own
startup, go ahead.
The reason I suggested college graduates not start startups immediately
was that I felt most would fail. And they will. But ambitious
programmers are better off doing their own thing and failing than
going to work at a big company. Certainly they'll learn more. They
might even be better off financially. A lot of people in their
early twenties get into debt, because their expenses grow even
faster than the salary that seemed so high when they left school.
At least if you start a startup and fail your net worth will be
zero rather than negative. [3]
We've now funded so many different types of founders that we have
enough data to see patterns, and there seems to be no benefit from
working for a big company. The people who've worked for a few years
do seem better than the ones straight out of college, but only
because they're that much older.
The people who come to us from big companies often seem kind of
conservative. It's hard to say how much is because big companies
made them that way, and how much is the natural conservatism that
made them work for the big companies in the first place. But
certainly a large part of it is learned. I know because I've seen
it burn off.
Having seen that happen so many times is one of the things that
convinces me that working for oneself, or at least for a small
group, is the natural way for programmers to live. Founders arriving
at Y Combinator often have the downtrodden air of refugees. Three
months later they're transformed: they have so much more
confidence
that they seem as if they've grown several inches taller. [4]
Strange as this sounds, they seem both more worried and happier at the same
time. Which is exactly how I'd describe the way lions seem in the
wild.
Watching employees get transformed into founders makes it clear
that the difference between the two is due mostly to environment—and
in particular that the environment in big companies is toxic to
programmers. In the first couple weeks of working on their own
startup they seem to come to life, because finally they're working
the way people are meant to.
**Notes**
[1] When I talk about humans being meant or designed to live a
certain way, I mean by evolution.
[2] It's not only the leaves who suffer. The constraint propagates
up as well as down. So managers are constrained too; instead of
just doing things, they have to act through subordinates.
[3] Do not finance your startup with credit cards. Financing a
startup with debt is usually a stupid move, and credit card debt
stupidest of all. Credit card debt is a bad idea, period. It is
a trap set by evil companies for the desperate and the foolish.
[4] The founders we fund used to be younger (initially we encouraged
undergrads to apply), and the first couple times I saw this I used
to wonder if they were actually getting physically taller.
**Thanks** to Trevor Blackwell, Ross Boucher, Aaron Iba, Abby
Kirigin, Ivan Kirigin, Jessica Livingston, and Robert Morris for
reading drafts of this.
April 2004
To the popular press, "hacker" means someone who breaks
into computers. Among programmers it means a good programmer.
But the two meanings are connected. To programmers,
"hacker" connotes mastery in the most literal sense: someone
who can make a computer do what he wants—whether the computer
wants to or not.
To add to the confusion, the noun "hack" also has two senses. It can
be either a compliment or an insult. It's called a hack when
you do something in an ugly way. But when you do something
so clever that you somehow beat the system, that's also
called a hack. The word is used more often in the former than
the latter sense, probably because ugly solutions are more
common than brilliant ones.
Believe it or not, the two senses of "hack" are also
connected. Ugly and imaginative solutions have something in
common: they both break the rules. And there is a gradual
continuum between rule breaking that's merely ugly (using
duct tape to attach something to your bike) and rule breaking
that is brilliantly imaginative (discarding Euclidean space).
Hacking predates computers. When he
was working on the Manhattan Project, Richard Feynman used to
amuse himself by breaking into safes containing secret documents.
This tradition continues today.
When we were in grad school, a hacker friend of mine who spent too much
time around MIT had
his own lock picking kit.
(He now runs a hedge fund, a not unrelated enterprise.)
It is sometimes hard to explain to authorities why one would
want to do such things.
Another friend of mine once got in trouble with the government for
breaking into computers. This had only recently been declared
a crime, and the FBI found that their usual investigative
technique didn't work. Police investigation apparently begins with
a motive. The usual motives are few: drugs, money, sex,
revenge. Intellectual curiosity was not one of the motives on
the FBI's list. Indeed, the whole concept seemed foreign to
them.
Those in authority tend to be annoyed by hackers'
general attitude of disobedience. But that disobedience is
a byproduct of the qualities that make them good programmers.
They may laugh at the CEO when he talks in generic corporate
newspeech, but they also laugh at someone who tells them
a certain problem can't be solved.
Suppress one, and you suppress the other.
This attitude is sometimes affected. Sometimes young programmers
notice the eccentricities of eminent hackers and decide to
adopt some of their own in order to seem smarter.
The fake version is not merely
annoying; the prickly attitude of these posers
can actually slow the process of innovation.
But even factoring in their annoying eccentricities,
the disobedient attitude of hackers is a net win. I wish its
advantages were better understood.
For example, I suspect people in Hollywood are
simply mystified by
hackers' attitudes toward copyrights. They are a perennial
topic of heated discussion on Slashdot.
But why should people who program computers
be so concerned about copyrights, of all things?
Partly because some companies use mechanisms to prevent
copying. Show any hacker a lock and his first thought is
how to pick it. But there is a deeper reason that
hackers are alarmed by measures like copyrights and patents.
They see increasingly aggressive measures to protect
"intellectual property"
as a threat to the intellectual
freedom they need to do their job.
And they are right.
It is by poking about inside current technology that
hackers get ideas for the next generation. No thanks,
intellectual homeowners may say, we don't need any
outside help. But they're wrong.
The next generation of computer technology has
often—perhaps more often than not—been developed by outsiders.
In 1977 there was no doubt some group within IBM developing
what they expected to be
the next generation of business computer. They were mistaken.
The next generation of business computer was
being developed on entirely different lines by two long-haired
guys called Steve in a garage in Los Altos. At about the
same time, the powers that be
were cooperating to develop the
official next generation operating system, Multics.
But two guys who thought Multics excessively complex went off
and wrote their own. They gave it a name that
was a joking reference to Multics: Unix.
The latest intellectual property laws impose
unprecedented restrictions on the sort of poking around that
leads to new ideas. In the past, a competitor might use patents
to prevent you from selling a copy of something they
made, but they couldn't prevent you from
taking one apart to see how it worked. The latest
laws make this a crime. How are we
to develop new technology if we can't study current
technology to figure out how to improve it?
Ironically, hackers have brought this on themselves.
Computers are responsible for the problem. The control systems
inside machines used to be physical: gears and levers and cams.
Increasingly, the brains (and thus the value) of products is
in software. And by this I mean software in the general sense:
i.e. data. A song on an LP is physically stamped into the
plastic. A song on an iPod's disk is merely stored on it.
Data is by definition easy to copy. And the Internet
makes copies easy to distribute. So it is no wonder
companies are afraid. But, as so often happens, fear has
clouded their judgement. The government has responded
with draconian laws to protect intellectual property.
They probably mean well. But
they may not realize that such laws will do more harm
than good.
Why are programmers so violently opposed to these laws?
If I were a legislator, I'd be interested in this
mystery—for the same reason that, if I were a farmer and suddenly
heard a lot of squawking coming from my hen house one night,
I'd want to go out and investigate. Hackers are not stupid,
and unanimity is very rare in this world.
So if they're all squawking,
perhaps there is something amiss.
Could it be that such laws, though intended to protect America,
will actually harm it? Think about it. There is something
very American about Feynman breaking into safes during
the Manhattan Project. It's hard to imagine the authorities
having a sense of humor about such things over
in Germany at that time. Maybe it's not a coincidence.
Hackers are unruly. That is the essence of hacking. And it
is also the essence of Americanness. It is no accident
that Silicon Valley
is in America, and not France, or Germany,
or England, or Japan. In those countries, people color inside
the lines.
I lived for a while in Florence. But after I'd been there
a few months I realized that what I'd been unconsciously hoping
to find there was back in the place I'd just left.
The reason Florence is famous is that in 1450, it was New York.
In 1450 it was filled with the kind of turbulent and ambitious
people you find now in America. (So I went back to America.)
It is greatly to America's advantage that it is
a congenial atmosphere for the right sort of unruliness—that
it is a home not just for the smart, but for smart-alecks.
And hackers are invariably smart-alecks. If we had a national
holiday, it would be April 1st. It says a great deal about
our work that we use the same word for a brilliant or a
horribly cheesy solution. When we cook one up we're not
always 100% sure which kind it is. But as long as it has
the right sort of wrongness, that's a promising sign.
It's odd that people
think of programming as precise and methodical. Computers
are precise and methodical. Hacking is something you do
with a gleeful laugh.
In our world some of the most characteristic solutions
are not far removed from practical
jokes. IBM was no doubt rather surprised by the consequences
of the licensing deal for DOS, just as the hypothetical
"adversary" must be when Michael Rabin solves a problem by
redefining it as one that's easier to solve.
Smart-alecks have to develop a keen sense of how much they
can get away with. And lately hackers
have sensed a change
in the atmosphere.
Lately hackerliness seems rather frowned upon.
To hackers the recent contraction in civil liberties seems
especially ominous. That must also mystify outsiders.
Why should we care especially about civil
liberties? Why programmers, more than
dentists or salesmen or landscapers?
Let me put the case in terms a government official would appreciate.
Civil liberties are not just an ornament, or a quaint
American tradition. Civil liberties make countries rich.
If you made a graph of
GNP per capita vs. civil liberties, you'd notice a definite
trend. Could civil liberties really be a cause, rather
than just an effect? I think so. I think a society in which
people can do and say what they want will also tend to
be one in which the most efficient solutions win, rather than
those sponsored by the most influential people.
Authoritarian countries become corrupt;
corrupt countries become poor; and poor countries are weak.
It seems to me there is
a Laffer curve for government power, just as for
tax revenues. At least, it seems likely enough that it
would be stupid to try the experiment and find out. Unlike
high tax rates, you can't repeal totalitarianism if it
turns out to be a mistake.
This is why hackers worry. The government spying on people doesn't
literally make programmers write worse code. It just leads
eventually to a world in which bad ideas win. And because
this is so important to hackers, they're especially sensitive
to it. They can sense totalitarianism approaching from a
distance, as animals can sense an approaching
thunderstorm.
It would be ironic if, as hackers fear, recent measures
intended to protect national security and intellectual property
turned out to be a missile aimed right at what makes
America successful. But it would not be the first time that
measures taken in an atmosphere of panic had
the opposite of the intended effect.
There is such a thing as Americanness.
There's nothing like living abroad to teach you that.
And if you want to know whether something will nurture or squash
this quality, it would be hard to find a better focus
group than hackers, because they come closest of any group
I know to embodying it. Closer, probably, than
the men running our government,
who for all their talk of patriotism
remind me more of Richelieu or Mazarin
than Thomas Jefferson or George Washington.
When you read what the founding fathers had to say for
themselves, they sound more like hackers.
"The spirit of resistance to government,"
Jefferson wrote, "is so valuable on certain occasions, that I wish
it always to be kept alive."
Imagine an American president saying that today.
Like the remarks of an outspoken old grandmother, the sayings of
the founding fathers have embarrassed generations of
their less confident successors. They remind us where we come from.
They remind us that it is the people who break rules that are
the source of America's wealth and power.
Those in a position to impose rules naturally want them to be
obeyed. But be careful what you ask for. You might get it.
**Thanks** to Ken Anderson, Trevor Blackwell, Daniel Giffin,
Sarah Harlin, Shiro Kawai, Jessica Livingston, Matz,
Jackie McDonough, Robert Morris, Eric Raymond, Guido van Rossum,
David Weinberger, and
Steven Wolfram for reading drafts of this essay.
(The image shows Steves Jobs and Wozniak
with a "blue box."
Photo by Margret Wozniak. Reproduced by permission of Steve
Wozniak.)
May 2021
Noora Health, a nonprofit I've
supported for years, just launched
a new NFT. It has a dramatic name, Save Thousands of Lives,
because that's what the proceeds will do.
Noora has been saving lives for 7 years. They run programs in
hospitals in South Asia to teach new mothers how to take care of
their babies once they get home. They're in 165 hospitals now. And
because they know the numbers before and after they start at a new
hospital, they can measure the impact they have. It is massive.
For every 1000 live births, they save 9 babies.
This number comes from a study
of 133,733 families at 28 different
hospitals that Noora conducted in collaboration with the Better
Birth team at Ariadne Labs, a joint center for health systems
innovation at Brigham and Women's Hospital and Harvard T.H. Chan
School of Public Health.
Noora is so effective that even if you measure their costs in the
most conservative way, by dividing their entire budget by the number
of lives saved, the cost of saving a life is the lowest I've seen.
$1,235.
For this NFT, they're going to issue a public report tracking how
this specific tranche of money is spent, and estimating the number
of lives saved as a result.
NFTs are a new territory, and this way of using them is especially
new, but I'm excited about its potential. And I'm excited to see
what happens with this particular auction, because unlike an NFT
representing something that has already happened,
this NFT gets better as the price gets higher.
The reserve price was about $2.5 million, because that's what it
takes for the name to be accurate: that's what it costs to save
2000 lives. But the higher the price of this NFT goes, the more
lives will be saved. What a sentence to be able to write.
February 2020
What should an essay be? Many people would say persuasive. That's
what a lot of us were taught essays should be. But I think we can
aim for something more ambitious: that an essay should be useful.
To start with, that means it should be correct. But it's not enough
merely to be correct. It's easy to make a statement correct by
making it vague. That's a common flaw in academic writing, for
example. If you know nothing at all about an issue, you can't go
wrong by saying that the issue is a complex one, that there are
many factors to be considered, that it's a mistake to take too
simplistic a view of it, and so on.
Though no doubt correct, such statements tell the reader nothing.
Useful writing makes claims that are as strong as they can be made
without becoming false.
For example, it's more useful to say that Pike's Peak is near the
middle of Colorado than merely somewhere in Colorado. But if I say
it's in the exact middle of Colorado, I've now gone too far, because
it's a bit east of the middle.
Precision and correctness are like opposing forces. It's easy to
satisfy one if you ignore the other. The converse of vaporous
academic writing is the bold, but false, rhetoric of demagogues.
Useful writing is bold, but true.
It's also two other things: it tells people something important,
and that at least some of them didn't already know.
Telling people something they didn't know doesn't always mean
surprising them. Sometimes it means telling them something they
knew unconsciously but had never put into words. In fact those may
be the more valuable insights, because they tend to be more
fundamental.
Let's put them all together. Useful writing tells people something
true and important that they didn't already know, and tells them
as unequivocally as possible.
Notice these are all a matter of degree. For example, you can't
expect an idea to be novel to everyone. Any insight that you have
will probably have already been had by at least one of the world's
7 billion people. But it's sufficient if an idea is novel to a lot
of readers.
Ditto for correctness, importance, and strength. In effect the four
components are like numbers you can multiply together to get a score
for usefulness. Which I realize is almost awkwardly reductive, but
nonetheless true.
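Written out as a formula, the analogy looks like this. The factors are
subjective ratings, not measurements, but the multiplication captures
something real: a zero anywhere (false, obvious, or trivial) zeroes the
whole score.

```latex
% The four components of useful writing, multiplied into one score.
\[
  \text{usefulness} \approx \text{correctness} \times \text{novelty}
    \times \text{importance} \times \text{strength}
\]
```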
_____
How can you ensure that the things you say are true and novel and
important? Believe it or not, there is a trick for doing this. I
learned it from my friend Robert Morris, who has a horror of saying
anything dumb. His trick is not to say anything unless he's sure
it's worth hearing. This makes it hard to get opinions out of him,
but when you do, they're usually right.
Translated into essay writing, what this means is that if you write
a bad sentence, you don't publish it. You delete it and try again.
Often you abandon whole branches of four or five paragraphs. Sometimes
a whole essay.
You can't ensure that every idea you have is good, but you can
ensure that every one you publish is, by simply not publishing the
ones that aren't.
In the sciences, this is called publication bias, and is considered
bad. When some hypothesis you're exploring gets inconclusive results,
you're supposed to tell people about that too. But with essay
writing, publication bias is the way to go.
My strategy is loose, then tight. I write the first draft of an
essay fast, trying out all kinds of ideas. Then I spend days rewriting
it very carefully.
I've never tried to count how many times I proofread essays, but
I'm sure there are sentences I've read 100 times before publishing
them. When I proofread an essay, there are usually passages that
stick out in an annoying way, sometimes because they're clumsily
written, and sometimes because I'm not sure they're true. The
annoyance starts out unconscious, but after the tenth reading or
so I'm saying "Ugh, that part" each time I hit it. They become like
briars that catch your sleeve as you walk past. Usually I won't
publish an essay till they're all gone—till I can read through
the whole thing without the feeling of anything catching.
I'll sometimes let through a sentence that seems clumsy, if I can't
think of a way to rephrase it, but I will never knowingly let through
one that doesn't seem correct. You never have to. If a sentence
doesn't seem right, all you have to do is ask why it doesn't, and
you've usually got the replacement right there in your head.
This is where essayists have an advantage over journalists. You
don't have a deadline. You can work for as long on an essay as you
need to get it right. You don't have to publish the essay at all,
if you can't get it right. Mistakes seem to lose courage in the
face of an enemy with unlimited resources. Or that's what it feels
like. What's really going on is that you have different expectations
for yourself. You're like a parent saying to a child "we can sit
here all night till you eat your vegetables." Except you're the
child too.
I'm not saying no mistake gets through. For example, I added condition
(c) in "A Way to Detect Bias"
after readers pointed out that I'd
omitted it. But in practice you can catch nearly all of them.
There's a trick for getting importance too. It's like the trick I
suggest to young founders for getting startup ideas: to make something
you yourself want. You can use yourself as a proxy for the reader.
The reader is not completely unlike you, so if you write about
topics that seem important to you, they'll probably seem important
to a significant number of readers as well.
Importance has two factors. It's the number of people something
matters to, times how much it matters to them. Which means of course
that it's not a rectangle, but a sort of ragged comb, like a Riemann
sum.
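In symbols (again, notation added for illustration): if reader i
cares about the topic by some amount m_i, then

    \text{importance} = \sum_{i \in \text{readers}} m_i

Each reader contributes a bar of a different height, and the total is
the area under the comb rather than a simple width times height.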
The way to get novelty is to write about topics you've thought about
a lot. Then you can use yourself as a proxy for the reader in this
department too. Anything you notice that surprises you, who've
thought about the topic a lot, will probably also surprise a
significant number of readers. And here, as with correctness and
importance, you can use the Morris technique to ensure that you
will. If you don't learn anything from writing an essay, don't
publish it.

You need humility to measure novelty, because acknowledging the
novelty of an idea means acknowledging your previous ignorance of
it. Confidence and humility are often seen as opposites, but in
this case, as in many others, confidence helps you to be humble.
If you know you're an expert on some topic, you can freely admit
when you learn something you didn't know, because you can be confident
that most other people wouldn't know it either.

The fourth component of useful writing, strength, comes from two
things: thinking well, and the skillful use of qualification. These
two counterbalance each other, like the accelerator and clutch in
a car with a manual transmission. As you try to refine the expression
of an idea, you adjust the qualification accordingly. Something
you're sure of, you can state baldly with no qualification at all,
as I did the four components of useful writing. Whereas points that
seem dubious have to be held at arm's length with perhapses.

As you refine an idea, you're pushing in the direction of less
qualification. But you can rarely get it down to zero. Sometimes
you don't even want to, if it's a side point and a fully refined
version would be too long.

Some say that qualifications weaken writing. For example, that you
should never begin a sentence in an essay with "I think," because
if you're saying it, then of course you think it. And it's true
that "I think x" is a weaker statement than simply "x." Which is
exactly why you need "I think." You need it to express your degree
of certainty.

But qualifications are not scalars. They're not just experimental
error. There must be 50 things they can express: how broadly something
applies, how you know it, how happy you are it's so, even how it
could be falsified. I'm not going to try to explore the structure
of qualification here. It's probably more complex than the whole
topic of writing usefully. Instead I'll just give you a practical
tip: Don't underestimate qualification. It's an important skill in
its own right, not just a sort of tax you have to pay in order to
avoid saying things that are false. So learn and use its full range.
It may not be fully half of having good ideas, but it's part of
having them.

There's one other quality I aim for in essays: to say things as
simply as possible. But I don't think this is a component of
usefulness. It's more a matter of consideration for the reader. And
it's a practical aid in getting things right; a mistake is more
obvious when expressed in simple language. But I'll admit that the
main reason I write simply is not for the reader's sake or because
it helps get things right, but because it bothers me to use more
or fancier words than I need to. It seems inelegant, like a program
that's too long.

I realize florid writing works for some people. But unless you're
sure you're one of them, the best advice is to write as simply as
you can.

_____

I believe the formula I've given you, importance + novelty +
correctness + strength, is the recipe for a good essay. But I should
warn you that it's also a recipe for making people mad.

The root of the problem is novelty. When you tell people something
they didn't know, they don't always thank you for it. Sometimes the
reason people don't know something is because they don't want to
know it. Usually because it contradicts some cherished belief. And
indeed, if you're looking for novel ideas, popular but mistaken
beliefs are a good place to find them. Every popular mistaken belief
creates a dead zone of ideas around
it that are relatively unexplored because they contradict it.

The strength component just makes things worse. If there's anything
that annoys people more than having their cherished assumptions
contradicted, it's having them flatly contradicted.

Plus if you've used the Morris technique, your writing will seem
quite confident. Perhaps offensively confident, to people who
disagree with you. The reason you'll seem confident is that you are
confident: you've cheated, by only publishing the things you're
sure of. It will seem to people who try to disagree with you that
you never admit you're wrong. In fact you constantly admit you're
wrong. You just do it before publishing instead of after.

And if your writing is as simple as possible, that just makes things
worse. Brevity is the diction of command. If you watch someone
delivering unwelcome news from a position of inferiority, you'll
notice they tend to use lots of words, to soften the blow. Whereas
to be short with someone is more or less to be rude to them.

It can sometimes work to deliberately phrase statements more weakly
than you mean. To put "perhaps" in front of something you're actually
quite sure of. But you'll notice that when writers do this, they
usually do it with a wink.

I don't like to do this too much. It's cheesy to adopt an ironic
tone for a whole essay. I think we just have to face the fact that
elegance and curtness are two names for the same thing.

You might think that if you work sufficiently hard to ensure that
an essay is correct, it will be invulnerable to attack. That's sort
of true. It will be invulnerable to valid attacks. But in practice
that's little consolation.

In fact, the strength component of useful writing will make you
particularly vulnerable to misrepresentation. If you've stated an
idea as strongly as you could without making it false, all anyone
has to do is to exaggerate slightly what you said, and now it is
false.

Much of the time they're not even doing it deliberately. One of the
most surprising things you'll discover, if you start writing essays,
is that people who disagree with you rarely disagree with what
you've actually written. Instead they make up something you said
and disagree with that.

For what it's worth, the countermove is to ask someone who does
this to quote a specific sentence or passage you wrote that they
believe is false, and explain why. I say "for what it's worth"
because they never do. So although it might seem that this could
get a broken discussion back on track, the truth is that it was
never on track in the first place.

Should you explicitly forestall likely misinterpretations? Yes, if
they're misinterpretations a reasonably smart and well-intentioned
person might make. In fact it's sometimes better to say something
slightly misleading and then add the correction than to try to get
an idea right in one shot. That can be more efficient, and can also
model the way such an idea would be discovered.

But I don't think you should explicitly forestall intentional
misinterpretations in the body of an essay. An essay is a place to
meet honest readers. You don't want to spoil your house by putting
bars on the windows to protect against dishonest ones. The place
to protect against intentional misinterpretations is in end-notes.
But don't think you can predict them all. People are as ingenious
at misrepresenting you when you say something they don't want to
hear as they are at coming up with rationalizations for things they
want to do but know they shouldn't. I suspect it's the same skill.

_____

As with most other things, the way to get better at writing essays
is to practice. But how do you start? Now that we've examined the
structure of useful writing, we can rephrase that question more
precisely. Which constraint do you relax initially? The answer is,
the first component of importance: the number of people who care
about what you write.

If you narrow the topic sufficiently, you can probably find something
you're an expert on. Write about that to start with. If you only
have ten readers who care, that's fine. You're helping them, and
you're writing. Later you can expand the breadth of topics you write
about.

The other constraint you can relax is a little surprising: publication.
Writing essays doesn't have to mean publishing them. That may seem
strange now that the trend is to publish every random thought, but
it worked for me. I wrote what amounted to essays in notebooks for
about 15 years. I never published any of them and never expected
to. I wrote them as a way of figuring things out. But when the web
came along I'd had a lot of practice.

Incidentally, Steve Wozniak did the same thing. In high school he
designed computers on paper for fun. He couldn't build them because
he couldn't afford the components. But when Intel launched 4K DRAMs
in 1975, he was ready.

_____

How many essays are there left to write though? The answer to that
question is probably the most exciting thing I've learned about
essay writing. Nearly all of them are left to write.

Although the essay is an old form, it hasn't been assiduously
cultivated. In the print era, publication was expensive, and there
wasn't enough demand for essays to publish that many. You could
publish essays if you were already well known for writing something
else, like novels. Or you could write book reviews that you took
over to express your own ideas. But there was not really a direct
path to becoming an essayist. Which meant few essays got written,
and those that did tended to be about a narrow range of subjects.

Now, thanks to the internet, there's a path. Anyone can publish
essays online. You start in obscurity, perhaps, but at least you
can start. You don't need anyone's permission.

It sometimes happens that an area of knowledge sits quietly for
years, till some change makes it explode. Cryptography did this to
number theory. The internet is doing it to the essay.

The exciting thing is not that there's a lot left to write, but
that there's a lot left to discover. There's a certain kind of idea
that's best discovered by writing essays. If most essays are still
unwritten, most such ideas are still undiscovered.

Notes

[1] Put railings on the balconies, but don't put bars on the windows.

[2] Even now I sometimes write essays that are not meant for
publication. I wrote several to figure out what Y Combinator should
do, and they were really helpful.

Thanks to Trevor Blackwell, Daniel Gackle, Jessica Livingston, and
Robert Morris for reading drafts of this.
December 2014

If the world were static, we could have monotonically increasing
confidence in our beliefs. The more (and more varied) experience
a belief survived, the less likely it would be false. Most people
implicitly believe something like this about their opinions. And
they're justified in doing so with opinions about things that don't
change much, like human nature. But you can't trust your opinions
in the same way about things that change, which could include
practically everything else.

When experts are wrong, it's often because they're experts on an
earlier version of the world.

Is it possible to avoid that? Can you protect yourself against
obsolete beliefs? To some extent, yes. I spent almost a decade
investing in early stage startups, and curiously enough protecting
yourself against obsolete beliefs is exactly what you have to do
to succeed as a startup investor. Most really good startup ideas
look like bad ideas at first, and many of those look bad specifically
because some change in the world just switched them from bad to
good. I spent a lot of time learning to recognize such ideas, and
the techniques I used may be applicable to ideas in general.

The first step is to have an explicit belief in change. People who
fall victim to a monotonically increasing confidence in their
opinions are implicitly concluding the world is static. If you
consciously remind yourself it isn't, you start to look for change.

Where should one look for it? Beyond the moderately useful
generalization that human nature doesn't change much, the unfortunate
fact is that change is hard to predict. This is largely a tautology
but worth remembering all the same: change that matters usually
comes from an unforeseen quarter.

So I don't even try to predict it. When I get asked in interviews
to predict the future, I always have to struggle to come up with
something plausible-sounding on the fly, like a student who hasn't
prepared for an exam.
[1]
But it's not out of laziness that I haven't
prepared. It seems to me that beliefs about the future are so
rarely correct that they usually aren't worth the extra rigidity
they impose, and that the best strategy is simply to be aggressively
open-minded. Instead of trying to point yourself in the right
direction, admit you have no idea what the right direction is, and
try instead to be super sensitive to the winds of change.

It's ok to have working hypotheses, even though they may constrain
you a bit, because they also motivate you. It's exciting to chase
things and exciting to try to guess answers. But you have to be
disciplined about not letting your hypotheses harden into anything
more.
[2]

I believe this passive m.o. works not just for evaluating new ideas
but also for having them. The way to come up with new ideas is not
to try explicitly to, but to try to solve problems and simply not
discount weird hunches you have in the process.

The winds of change originate in the unconscious minds of domain
experts. If you're sufficiently expert in a field, any weird idea
or apparently irrelevant question that occurs to you is ipso facto
worth exploring.
[3]
Within Y Combinator, when an idea is described
as crazy, it's a compliment—in fact, on average probably a
higher compliment than when an idea is described as good.

Startup investors have extraordinary incentives for correcting
obsolete beliefs. If they can realize before other investors that
some apparently unpromising startup isn't, they can make a huge
amount of money. But the incentives are more than just financial.
Investors' opinions are explicitly tested: startups come to them
and they have to say yes or no, and then, fairly quickly, they learn
whether they guessed right. The investors who say no to a Google
(and there were several) will remember it for the rest of their
lives.

Anyone who must in some sense bet on ideas rather than merely
commenting on them has similar incentives. Which means anyone who
wants such incentives can have them, by turning their comments into
bets: if you write about a topic in some fairly durable and public
form, you'll find you worry much more about getting things right
than most people would in a casual conversation.
[4]

Another trick I've found to protect myself against obsolete beliefs
is to focus initially on people rather than ideas. Though the nature
of future discoveries is hard to predict, I've found I can predict
quite well what sort of people will make them. Good new ideas come
from earnest, energetic, independent-minded people.

Betting on people over ideas saved me countless times as an investor.
We thought Airbnb was a bad idea, for example. But we could tell
the founders were earnest, energetic, and independent-minded.
(Indeed, almost pathologically so.) So we suspended disbelief and
funded them.

This too seems a technique that should be generally applicable.
Surround yourself with the sort of people new ideas come from. If
you want to notice quickly when your beliefs become obsolete, you
can't do better than to be friends with the people whose discoveries
will make them so.

It's hard enough already not to become the prisoner of your own
expertise, but it will only get harder, because change is accelerating.
That's not a recent trend; change has been accelerating since the
paleolithic era. Ideas beget ideas. I don't expect that to change.
But I could be wrong.
Notes

[1] My usual trick is to talk about aspects of the present that
most people haven't noticed yet.

[2] Especially if they become well enough known that people start
to identify them with you. You have to be extra skeptical about
things you want to believe, and once a hypothesis starts to be
identified with you, it will almost certainly start to be in that
category.

[3] In practice "sufficiently expert" doesn't require one to be
recognized as an expert—which is a trailing indicator in any
case. In many fields a year of focused work plus caring a lot would
be enough.

[4] Though they are public and persist indefinitely, comments on
e.g. forums and places like Twitter seem empirically to work like
casual conversation. The threshold may be whether what you write
has a title.
Thanks to Sam Altman, Patrick Collison, and Robert Morris
for reading drafts of this.
November 2005

Does "Web 2.0" mean anything? Till recently I thought it didn't,
but the truth turns out to be more complicated. Originally, yes,
it was meaningless. Now it seems to have acquired a meaning. And
yet those who dislike the term are probably right, because if it
means what I think it does, we don't need it.

I first heard the phrase "Web 2.0" in the name of the Web 2.0
conference in 2004. At the time it was supposed to mean using "the
web as a platform," which I took to refer to web-based applications.
[1]

So I was surprised at a conference this summer when Tim O'Reilly
led a session intended to figure out a definition of "Web 2.0."
Didn't it already mean using the web as a platform? And if it
didn't already mean something, why did we need the phrase at all?

Origins

Tim says the phrase "Web 2.0" first
arose in "a brainstorming session between
O'Reilly and Medialive International." What is Medialive International?
"Producers of technology tradeshows and conferences," according to
their site. So presumably that's what this brainstorming session
was about. O'Reilly wanted to organize a conference about the web,
and they were wondering what to call it.

I don't think there was any deliberate plan to suggest there was a
new version of the web. They just wanted to make the point
that the web mattered again. It was a kind of semantic deficit
spending: they knew new things were coming, and the "2.0" referred
to whatever those might turn out to be.

And they were right. New things were coming. But the new version
number led to some awkwardness in the short term. In the process
of developing the pitch for the first conference, someone must have
decided they'd better take a stab at explaining what that "2.0"
referred to. Whatever it meant, "the web as a platform" was at
least not too constricting.

The story about "Web 2.0" meaning the web as a platform didn't live
much past the first conference. By the second conference, what
"Web 2.0" seemed to mean was something about democracy. At least,
it did when people wrote about it online. The conference itself
didn't seem very grassroots. It cost $2800, so the only people who
could afford to go were VCs and people from big companies.

And yet, oddly enough, Ryan Singel's article
about the conference in Wired News spoke of "throngs of
geeks." When a friend of mine asked Ryan about this, it was news
to him. He said he'd originally written something like "throngs
of VCs and biz dev guys" but had later shortened it just to "throngs,"
and that this must have in turn been expanded by the editors into
"throngs of geeks." After all, a Web 2.0 conference would presumably
be full of geeks, right?

Well, no. There were about 7. Even Tim O'Reilly was wearing a
suit, a sight so alien I couldn't parse it at first. I saw
him walk by and said to one of the O'Reilly people "that guy looks
just like Tim.""Oh, that's Tim. He bought a suit."
I ran after him, and sure enough, it was. He explained that he'd
just bought it in Thailand.

The 2005 Web 2.0 conference reminded me of Internet trade shows
during the Bubble, full of prowling VCs looking for the next hot
startup. There was that same odd atmosphere created by a large
number of people determined not to miss out. Miss out on what?
They didn't know. Whatever was going to happen—whatever Web 2.0
turned out to be.

I wouldn't quite call it "Bubble 2.0" just because VCs are eager
to invest again. The Internet is a genuinely big deal. The bust
was as much an overreaction as
the boom. It's to be expected that once we started to pull out of
the bust, there would be a lot of growth in this area, just as there
was in the industries that spiked the sharpest before the Depression.

The reason this won't turn into a second Bubble is that the IPO
market is gone. Venture investors
are driven by exit strategies. The reason they were funding all
those laughable startups during the late 90s was that they hoped
to sell them to gullible retail investors; they hoped to be laughing
all the way to the bank. Now that route is closed. Now the default
exit strategy is to get bought, and acquirers are less prone to
irrational exuberance than IPO investors. The closest you'll get
to Bubble valuations is Rupert Murdoch paying $580 million for
Myspace. That's only off by a factor of 10 or so.

1. Ajax

Does "Web 2.0" mean anything more than the name of a conference
yet? I don't like to admit it, but it's starting to. When people
say "Web 2.0" now, I have some idea what they mean. And the fact
that I both despise the phrase and understand it is the surest proof
that it has started to mean something.

One ingredient of its meaning is certainly Ajax, which I can still
only just bear to use without scare quotes. Basically, what "Ajax"
means is "Javascript now works." And that in turn means that
web-based applications can now be made to work much more like desktop
ones.

As you read this, a whole new generation
of software is being written to take advantage of Ajax. There
hasn't been such a wave of new applications since microcomputers
first appeared. Even Microsoft sees it, but it's too late for them
to do anything more than leak "internal"
documents designed to give the impression they're on top of this
new trend.

In fact the new generation of software is being written way too
fast for Microsoft even to channel it, let alone write their own
in house. Their only hope now is to buy all the best Ajax startups
before Google does. And even that's going to be hard, because
Google has as big a head start in buying microstartups as it did
in search a few years ago. After all, Google Maps, the canonical
Ajax application, was the result of a startup they bought.

So ironically the original description of the Web 2.0 conference
turned out to be partially right: web-based applications are a big
component of Web 2.0. But I'm convinced they got this right by
accident. The Ajax boom didn't start till early 2005, when Google
Maps appeared and the term "Ajax" was coined.

2. Democracy

The second big element of Web 2.0 is democracy. We now have several
examples to prove that amateurs can
surpass professionals, when they have the right kind of system to
channel their efforts. Wikipedia
may be the most famous. Experts have given Wikipedia middling
reviews, but they miss the critical point: it's good enough. And
it's free, which means people actually read it. On the web, articles
you have to pay for might as well not exist. Even if you were
willing to pay to read them yourself, you can't link to them.
They're not part of the conversation.

Another place democracy seems to win is in deciding what counts as
news. I never look at any news site now except Reddit.
[2]
I know if something major
happens, or someone writes a particularly interesting article, it
will show up there. Why bother checking the front page of any
specific paper or magazine? Reddit's like an RSS feed for the whole
web, with a filter for quality. Similar sites include Digg, a technology news site that's
rapidly approaching Slashdot in popularity, and del.icio.us, the collaborative
bookmarking network that set off the "tagging" movement. And whereas
Wikipedia's main appeal is that it's good enough and free, these
sites suggest that voters do a significantly better job than human
editors.

The most dramatic example of Web 2.0 democracy is not in the selection
of ideas, but their production.
I've noticed for a while that the stuff I read on individual people's
sites is as good as or better than the stuff I read in newspapers
and magazines. And now I have independent evidence: the top links
on Reddit are generally links to individual people's sites rather
than to magazine articles or news stories.

My experience of writing
for magazines suggests an explanation. Editors. They control the
topics you can write about, and they can generally rewrite whatever
you produce. The result is to damp extremes. Editing yields 95th
percentile writing—95% of articles are improved by it, but 5% are
dragged down. 5% of the time you get "throngs of geeks."

On the web, people can publish whatever they want. Nearly all of
it falls short of the editor-damped writing in print publications.
But the pool of writers is very, very large. If it's large enough,
the lack of damping means the best writing online should surpass
the best in print.
[3]
And now that the web has evolved mechanisms
for selecting good stuff, the web wins net. Selection beats damping,
for the same reason market economies beat centrally planned ones.

Even the startups are different this time around. They are to the
startups of the Bubble what bloggers are to the print media. During
the Bubble, a startup meant a company headed by an MBA that was
blowing through several million dollars of VC money to "get big
fast" in the most literal sense. Now it means a smaller, younger, more technical group that just
decided to make something great. They'll decide later if they want
to raise VC-scale funding, and if they take it, they'll take it on
their terms.

3. Don't Maltreat Users

I think everyone would agree that democracy and Ajax are elements
of "Web 2.0." I also see a third: not to maltreat users. During
the Bubble a lot of popular sites were quite high-handed with users.
And not just in obvious ways, like making them register, or subjecting
them to annoying ads. The very design of the average site in the
late 90s was an abuse. Many of the most popular sites were loaded
with obtrusive branding that made them slow to load and sent the
user the message: this is our site, not yours. (There's a physical
analog in the Intel and Microsoft stickers that come on some
laptops.)

I think the root of the problem was that sites felt they were giving
something away for free, and till recently a company giving anything
away for free could be pretty high-handed about it. Sometimes it
reached the point of economic sadism: site owners assumed that the
more pain they caused the user, the more benefit it must be to them.
The most dramatic remnant of this model may be at salon.com, where
you can read the beginning of a story, but to get the rest you have
to sit through a movie.

At Y Combinator we advise all the startups we fund never to lord
it over users. Never make users register, unless you need to in
order to store something for them. If you do make users register,
never make them wait for a confirmation link in an email; in fact,
don't even ask for their email address unless you need it for some
reason. Don't ask them any unnecessary questions. Never send them
email unless they explicitly ask for it. Never frame pages you
link to, or open them in new windows. If you have a free version
and a pay version, don't make the free version too restricted. And
if you find yourself asking "should we allow users to do x?" just
answer "yes" whenever you're unsure. Err on the side of generosity.In How to Start a Startup I advised startups
never to let anyone fly under them, meaning never to let any other
company offer a cheaper, easier solution. Another way to fly low
is to give users more power. Let users do what they want. If you
don't and a competitor does, you're in trouble.

iTunes is Web 2.0ish in this sense. Finally you can buy individual
songs instead of having to buy whole albums. The recording industry
hated the idea and resisted it as long as possible. But it was
obvious what users wanted, so Apple flew under the labels.
[4]
Though really it might be better to describe iTunes as Web 1.5.
Web 2.0 applied to music would probably mean individual bands giving
away DRMless songs for free.

The ultimate way to be nice to users is to give them something for
free that competitors charge for. During the 90s a lot of people
probably thought we'd have some working system for micropayments
by now. In fact things have gone in the other direction. The most
successful sites are the ones that figure out new ways to give stuff
away for free. Craigslist has largely destroyed the classified ad
sites of the 90s, and OkCupid looks likely to do the same to the
previous generation of dating sites.

Serving web pages is very, very cheap. If you can make even a
fraction of a cent per page view, you can make a profit. And
technology for targeting ads continues to improve. I wouldn't be
surprised if ten years from now eBay had been supplanted by an
ad-supported freeBay (or, more likely, gBay).
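To make "a fraction of a cent" concrete: assuming, hypothetically,
0.2 cents of ad revenue and 0.01 cents of serving cost per page view
(both figures invented for illustration),

    10^6 \text{ views/day} \times (\$0.002 - \$0.0001) \approx \$1900/\text{day}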
Odd as it might sound, we tell startups that they should try to
make as little money as possible. If you can figure out a way to
turn a billion dollar industry into a fifty million dollar industry,
so much the better, if all fifty million go to you. Though indeed,
making things cheaper often turns out to generate more money in the
end, just as automating things often turns out to generate more
jobs.

The ultimate target is Microsoft. What a bang that balloon is going
to make when someone pops it by offering a free web-based alternative
to MS Office.
[5]
Who will? Google? They seem to be taking their
time. I suspect the pin will be wielded by a couple of 20 year old
hackers who are too naive to be intimidated by the idea. (How hard
can it be?)

The Common Thread

Ajax, democracy, and not dissing users. What do they all have in
common? I didn't realize they had anything in common till recently,
which is one of the reasons I disliked the term "Web 2.0" so much.
It seemed that it was being used as a label for whatever happened
to be new—that it didn't predict anything.

But there is a common thread. Web 2.0 means using the web the way
it's meant to be used. The "trends" we're seeing now are simply
the inherent nature of the web emerging from under the broken models
that got imposed on it during the Bubble.

I realized this when I read an interview with
Joe Kraus, the co-founder of Excite.
[6]
Excite really never got the business model right at all. We fell
into the classic problem of how when a new medium comes out it
adopts the practices, the content, the business models of the old
medium—which fails, and then the more appropriate models get
figured out.
It may have seemed as if not much was happening during the years
after the Bubble burst. But in retrospect, something was happening:
the web was finding its natural angle of repose. The democracy
component, for example—that's not an innovation, in the sense of
something someone made happen. That's what the web naturally tends
to produce.

Ditto for the idea of delivering desktop-like applications over the
web. That idea is almost as old as the web. But the first time
around it was co-opted by Sun, and we got Java applets. Java has
since been remade into a generic replacement for C++, but in 1996
the story about Java was that it represented a new model of software.
Instead of desktop applications, you'd run Java "applets" delivered
from a server.

This plan collapsed under its own weight. Microsoft helped kill it,
but it would have died anyway. There was no uptake among hackers.
When you find PR firms promoting
something as the next development platform, you can be sure it's
not. If it were, you wouldn't need PR firms to tell you, because
hackers would already be writing stuff on top of it, the way sites
like Busmonster used Google Maps as a
platform before Google even meant it to be one.

The proof that Ajax is the next hot platform is that thousands of
hackers have spontaneously started building things on top
of it. Mikey likes it.

There's another thing all three components of Web 2.0 have in common.
Here's a clue. Suppose you approached investors with the following
idea for a Web 2.0 startup:
Sites like del.icio.us and flickr allow users to "tag" content
with descriptive tokens. But there is also a huge source of
implicit tags that they ignore: the text within web links.
Moreover, these links represent a social network connecting the
individuals and organizations who created the pages, and by using
graph theory we can compute from this network an estimate of the
reputation of each member. We plan to mine the web for these
implicit tags, and use them together with the reputation hierarchy
they embody to enhance web searches.
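If the graph-theory step in that pitch sounds hand-wavy, here is a
minimal sketch of the kind of reputation computation it describes:
each page's score is repeatedly passed along its outgoing links until
the values settle. Everything here is illustrative (the toy link
graph, the damping constant, the function name), a sketch rather than
anyone's actual algorithm.

    # Iteratively share each page's score with the pages it links to.
    # links maps a page to the list of pages it links to.
    def reputation(links, damping=0.85, rounds=50):
        pages = list(links)
        n = len(pages)
        score = {p: 1.0 / n for p in pages}
        for _ in range(rounds):
            # every page keeps a small base score, then receives shares
            new = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:
                    continue  # a dead-end page passes nothing on
                share = damping * score[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            score = new
        return score

    # Example: pages a and b both link to c, which links back to a.
    print(reputation({"a": ["c"], "b": ["c"], "c": ["a"]}))

Run it long enough and the scores stop moving, and the page everyone
links to ends up with the highest reputation, which is exactly the
property the pitch is selling.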
How long do you think it would take them on average to realize that
it was a description of Google?

Google was a pioneer in all three components of Web 2.0: their core
business sounds crushingly hip when described in Web 2.0 terms,
"Don't maltreat users" is a subset of "Don't be evil," and of course
Google set off the whole Ajax boom with Google Maps.

Web 2.0 means using the web as it was meant to be used, and Google
does. That's their secret. They're sailing with the wind, instead of sitting
becalmed praying for a business model, like the print media, or
trying to tack upwind by suing their customers, like Microsoft and
the record labels.
[7]

Google doesn't try
to figure out what's going to happen, and arrange to be standing
there when it does. That's the way to approach technology—and
as business includes an ever larger technological component, the
right way to do business.

The fact that Google is a "Web 2.0" company shows that, while
meaningful, the term is also rather bogus. It's like the word
"allopathic." It just means doing things right, and it's a bad
sign when you have a special word for that.
Notes

[1] From the conference
site, June 2004: "While the first wave of the Web was closely
tied to the browser, the second wave extends applications across
the web and enables a new generation of services and business
opportunities." To the extent this means anything, it seems to be
about
web-based applications.

[2] Disclosure: Reddit was funded by
Y Combinator. But although
I started using it out of loyalty to the home team, I've become a
genuine addict. While we're at it, I'm also an investor in
!MSFT, having sold all my shares earlier this year.

[3] I'm not against editing. I spend more time editing than
writing, and I have a group of picky friends who proofread almost
everything I write. What I dislike is editing done after the fact
by someone else.

[4] Obvious is an understatement. Users had been climbing in through
the window for years before Apple finally moved the door.

[5] Hint: the way to create a web-based alternative to Office may
not be to write every component yourself, but to establish a protocol
for web-based apps to share a virtual home directory spread across
multiple servers. Or it may be to write it all yourself.

[6] In Jessica Livingston's
Founders at
Work.

[7] Microsoft didn't sue their customers directly, but they seem
to have done all they could to help SCO sue them.

Thanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston, Peter
Norvig, Aaron Swartz, and Jeff Weiner for reading drafts of this, and to the
guys at O'Reilly and Adaptive Path for answering my questions.
April 2012

A palliative care nurse called Bronnie Ware made a list of the
biggest regrets
of the dying. Her list seems plausible. I could see
myself — can see myself — making at least 4 of these
5 mistakes.

If you had to compress them into a single piece of advice, it might
be: don't be a cog. The 5 regrets paint a portrait of post-industrial
man, who shrinks himself into a shape that fits his circumstances,
then turns dutifully till he stops.

The alarming thing is, the mistakes that produce these regrets are
all errors of omission. You forget your dreams, ignore your family,
suppress your feelings, neglect your friends, and forget to be
happy. Errors of omission are a particularly dangerous type of
mistake, because you make them by default.

I would like to avoid making these mistakes. But how do you avoid
mistakes you make by default? Ideally you transform your life so
it has other defaults. But it may not be possible to do that
completely. As long as these mistakes happen by default, you probably
have to be reminded not to make them. So I inverted the 5 regrets,
yielding a list of 5 commands
Don't ignore your dreams; don't work too much; say what you
think; cultivate friendships; be happy.
which I then put at the top of the file I use as a todo list.
December 2014

I've read Villehardouin's chronicle of the Fourth Crusade at least
two times, maybe three. And yet if I had to write down everything
I remember from it, I doubt it would amount to much more than a
page. Multiply this times several hundred, and I get an uneasy
feeling when I look at my bookshelves. What use is it to read all
these books if I remember so little from them?

A few months ago, as I was reading Constance Reid's excellent
biography of Hilbert, I figured out if not the answer to this
question, at least something that made me feel better about it.
She writes:
Hilbert had no patience with mathematical lectures which filled
the students with facts but did not teach them how to frame a
problem and solve it. He often used to tell them that "a perfect
formulation of a problem is already half its solution."
That has always seemed to me an important point, and I was even
more convinced of it after hearing it confirmed by Hilbert.

But how had I come to believe in this idea in the first place? A
combination of my own experience and other things I'd read. None
of which I could at that moment remember! And eventually I'd forget
that Hilbert had confirmed it too. But my increased belief in the
importance of this idea would remain something I'd learned from
this book, even after I'd forgotten I'd learned it.

Reading and experience train your model of the world. And even if
you forget the experience or what you read, its effect on your model
of the world persists. Your mind is like a compiled program you've
lost the source of. It works, but you don't know why.

The place to look for what I learned from Villehardouin's chronicle
is not what I remember from it, but my mental models of the crusades,
Venice, medieval culture, siege warfare, and so on. Which doesn't
mean I couldn't have read more attentively, but at least the harvest
of reading is not so miserably small as it might seem.

This is one of those things that seem obvious in retrospect. But
it was a surprise to me and presumably would be to anyone else who
felt uneasy about (apparently) forgetting so much they'd read.

Realizing it does more than make you feel a little better about
forgetting, though. There are specific implications.

For example, reading and experience are usually "compiled" at the
time they happen, using the state of your brain at that time. The
same book would get compiled differently at different points in
your life. Which means it is very much worth reading important
books multiple times. I always used to feel some misgivings about
rereading books. I unconsciously lumped reading together with work
like carpentry, where having to do something again is a sign you
did it wrong the first time. Whereas now the phrase "already read"
seems almost ill-formed.

Intriguingly, this implication isn't limited to books. Technology
will increasingly make it possible to relive our experiences. When
people do that today it's usually to enjoy them again (e.g. when
looking at pictures of a trip) or to find the origin of some bug in
their compiled code (e.g. when Stephen Fry succeeded in remembering
the childhood trauma that prevented him from singing). But as
technologies for recording and playing back your life improve, it
may become common for people to relive experiences without any goal
in mind, simply to learn from them again as one might when rereading
a book.

Eventually we may be able not just to play back experiences but
also to index and even edit them. So although not knowing how you
know things may seem part of being human, it may not be.
Thanks to Sam Altman, Jessica Livingston, and Robert Morris for reading
drafts of this.
September 2007

In high school I decided I was going to study philosophy in college.
I had several motives, some more honorable than others. One of the
less honorable was to shock people. College was regarded as job
training where I grew up, so studying philosophy seemed an impressively
impractical thing to do. Sort of like slashing holes in your clothes
or putting a safety pin through your ear, which were other forms
of impressive impracticality then just coming into fashion.

But I had some more honest motives as well. I thought studying
philosophy would be a shortcut straight to wisdom. All the people
majoring in other things would just end up with a bunch of domain
knowledge. I would be learning what was really what.

I'd tried to read a few philosophy books. Not recent ones; you
wouldn't find those in our high school library. But I tried to
read Plato and Aristotle. I doubt I believed I understood them,
but they sounded like they were talking about something important.
I assumed I'd learn what in college.

The summer before senior year I took some college classes. I learned
a lot in the calculus class, but I didn't learn much in Philosophy
101. And yet my plan to study philosophy remained intact. It was
my fault I hadn't learned anything. I hadn't read the books we
were assigned carefully enough. I'd give Berkeley's Principles
of Human Knowledge another shot in college. Anything so admired
and so difficult to read must have something in it, if one could
only figure out what.

Twenty-six years later, I still don't understand Berkeley. I have
a nice edition of his collected works. Will I ever read it? Seems
unlikely.

The difference between then and now is that now I understand why
Berkeley is probably not worth trying to understand. I think I see
now what went wrong with philosophy, and how we might fix it.

Words

I did end up being a philosophy major for most of college. It
didn't work out as I'd hoped. I didn't learn any magical truths
compared to which everything else was mere domain knowledge. But
I do at least know now why I didn't. Philosophy doesn't really
have a subject matter in the way math or history or most other
university subjects do. There is no core of knowledge one must
master. The closest you come to that is a knowledge of what various
individual philosophers have said about different topics over the
years. Few were sufficiently correct that people have forgotten
who discovered what they discovered.

Formal logic has some subject matter. I took several classes in
logic. I don't know if I learned anything from them.
[1]
It does seem to me very important to be able to flip ideas around in
one's head: to see when two ideas don't fully cover the space of
possibilities, or when one idea is the same as another but with a
couple things changed. But did studying logic teach me the importance
of thinking this way, or make me any better at it? I don't know.

There are things I know I learned from studying philosophy. The
most dramatic I learned immediately, in the first semester of
freshman year, in a class taught by Sydney Shoemaker. I learned
that I don't exist. I am (and you are) a collection of cells that
lurches around driven by various forces, and calls itself I. But
there's no central, indivisible thing that your identity goes with.
You could conceivably lose half your brain and live. Which means
your brain could conceivably be split into two halves and each
transplanted into different bodies. Imagine waking up after such
an operation. You have to imagine being two people.

The real lesson here is that the concepts we use in everyday life
are fuzzy, and break down if pushed too hard. Even a concept as
dear to us as I. It took me a while to grasp this, but when I
did it was fairly sudden, like someone in the nineteenth century
grasping evolution and realizing the story of creation they'd been
told as a child was all wrong.
[2]
Outside of math there's a limit
to how far you can push words; in fact, it would not be a bad
definition of math to call it the study of terms that have precise
meanings. Everyday words are inherently imprecise. They work well
enough in everyday life that you don't notice. Words seem to work,
just as Newtonian physics seems to. But you can always make them
break if you push them far enough.

I would say that this has been, unfortunately for philosophy, the
central fact of philosophy. Most philosophical debates are not
merely afflicted by but driven by confusions over words. Do we
have free will? Depends what you mean by "free." Do abstract ideas
exist? Depends what you mean by "exist."

Wittgenstein is popularly credited with the idea that most philosophical
controversies are due to confusions over language. I'm not sure
how much credit to give him. I suspect a lot of people realized
this, but reacted simply by not studying philosophy, rather than
becoming philosophy professors.

How did things get this way? Can something people have spent
thousands of years studying really be a waste of time? Those are
interesting questions. In fact, some of the most interesting
questions you can ask about philosophy. The most valuable way to
approach the current philosophical tradition may be neither to get
lost in pointless speculations like Berkeley, nor to shut them down
like Wittgenstein, but to study it as an example of reason gone
wrong.

History

Western philosophy really begins with Socrates, Plato, and Aristotle.
What we know of their predecessors comes from fragments and references
in later works; their doctrines could be described as speculative
cosmology that occasionally strays into analysis. Presumably they
were driven by whatever makes people in every other society invent
cosmologies.
[3]

With Socrates, Plato, and particularly Aristotle, this tradition
turned a corner. There started to be a lot more analysis. I suspect
Plato and Aristotle were encouraged in this by progress in math.
Mathematicians had by then shown that you could figure things out
in a much more conclusive way than by making up fine sounding stories
about them.
[4]

People talk so much about abstractions now that we don't realize
what a leap it must have been when they first started to. It was
presumably many thousands of years between when people first started
describing things as hot or cold and when someone asked "what is
heat?" No doubt it was a very gradual process. We don't know if
Plato or Aristotle were the first to ask any of the questions they
did. But their works are the oldest we have that do this on a large
scale, and there is a freshness (not to say naivete) about them
that suggests some of the questions they asked were new to them,
at least.

Aristotle in particular reminds me of the phenomenon that happens
when people discover something new, and are so excited by it that
they race through a huge percentage of the newly discovered territory
in one lifetime. If so, that's evidence of how new this kind of
thinking was.
[5]

This is all to explain how Plato and Aristotle can be very impressive
and yet naive and mistaken. It was impressive even to ask the
questions they did. That doesn't mean they always came up with
good answers. It's not considered insulting to say that ancient
Greek mathematicians were naive in some respects, or at least lacked
some concepts that would have made their lives easier. So I hope
people will not be too offended if I propose that ancient philosophers
were similarly naive. In particular, they don't seem to have fully
grasped what I earlier called the central fact of philosophy: that
words break if you push them too far.

"Much to the surprise of the builders of the first digital computers,"
Rod Brooks wrote, "programs written for them usually did not work."
[6]
Something similar happened when people first started trying
to talk about abstractions. Much to their surprise, they didn't
arrive at answers they agreed upon. In fact, they rarely seemed
to arrive at answers at all.

They were in effect arguing about artifacts induced by sampling at
too low a resolution.

The proof of how useless some of their answers turned out to be is
how little effect they have. No one after reading Aristotle's
Metaphysics does anything differently as a result.
[7]

Surely I'm not claiming that ideas have to have practical applications
to be interesting? No, they may not have to. Hardy's boast that
number theory had no use whatsoever wouldn't disqualify it. But
he turned out to be mistaken. In fact, it's suspiciously hard to
find a field of math that truly has no practical use. And Aristotle's
explanation of the ultimate goal of philosophy in Book A of the
Metaphysics implies that philosophy should be useful too.

Theoretical Knowledge

Aristotle's goal was to find the most general of general principles.
The examples he gives are convincing: an ordinary worker builds
things a certain way out of habit; a master craftsman can do more
because he grasps the underlying principles. The trend is clear:
the more general the knowledge, the more admirable it is. But then
he makes a mistake—possibly the most important mistake in the
history of philosophy. He has noticed that theoretical knowledge
is often acquired for its own sake, out of curiosity, rather than
for any practical need. So he proposes there are two kinds of
theoretical knowledge: some that's useful in practical matters and
some that isn't. Since people interested in the latter are interested
in it for its own sake, it must be more noble. So he sets as his
goal in the Metaphysics the exploration of knowledge that has no
practical use. Which means no alarms go off when he takes on grand
but vaguely understood questions and ends up getting lost in a sea
of words.

His mistake was to confuse motive and result. Certainly, people
who want a deep understanding of something are often driven by
curiosity rather than any practical need. But that doesn't mean
what they end up learning is useless. It's very valuable in practice
to have a deep understanding of what you're doing; even if you're
never called on to solve advanced problems, you can see shortcuts
in the solution of simple ones, and your knowledge won't break down
in edge cases, as it would if you were relying on formulas you
didn't understand. Knowledge is power. That's what makes theoretical
knowledge prestigious. It's also what causes smart people to be
curious about certain things and not others; our DNA is not so
disinterested as we might think.

So while ideas don't have to have immediate practical applications
to be interesting, the kinds of things we find interesting will
surprisingly often turn out to have practical applications.

The reason Aristotle didn't get anywhere in the Metaphysics was
partly that he set off with contradictory aims: to explore the most
abstract ideas, guided by the assumption that they were useless.
He was like an explorer looking for a territory to the north of
him, starting with the assumption that it was located to the south.

And since his work became the map used by generations of future
explorers, he sent them off in the wrong direction as well.
[8]
Perhaps worst of all, he protected them from both the criticism of
outsiders and the promptings of their own inner compass by establishing
the principle that the most noble sort of theoretical knowledge had
to be useless.

The Metaphysics is mostly a failed experiment. A few ideas from
it turned out to be worth keeping; the bulk of it has had no effect
at all. The Metaphysics is among the least read of all famous
books. It's not hard to understand the way Newton's Principia
is, but the way a garbled message is.

Arguably it's an interesting failed experiment. But unfortunately
that was not the conclusion Aristotle's successors derived from
works like the Metaphysics.
[9]
Soon after, the western world
fell on intellectual hard times. Instead of version 1s to be
superseded, the works of Plato and Aristotle became revered texts
to be mastered and discussed. And so things remained for a shockingly
long time. It was not till around 1600 (in Europe, where the center
of gravity had shifted by then) that one found people confident
enough to treat Aristotle's work as a catalog of mistakes. And
even then they rarely said so outright.

If it seems surprising that the gap was so long, consider how little
progress there was in math between Hellenistic times and the
Renaissance.

In the intervening years an unfortunate idea took hold: that it
was not only acceptable to produce works like the Metaphysics,
but that it was a particularly prestigious line of work, done by a
class of people called philosophers. No one thought to go back and
debug Aristotle's motivating argument. And so instead of correcting
the problem Aristotle discovered by falling into it—that you can
easily get lost if you talk too loosely about very abstract ideas—they
continued to fall into it.

The Singularity

Curiously, however, the works they produced continued to attract
new readers. Traditional philosophy occupies a kind of singularity
in this respect. If you write in an unclear way about big ideas,
you produce something that seems tantalizingly attractive to
inexperienced but intellectually ambitious students. Till one knows
better, it's hard to distinguish something that's hard to understand
because the writer was unclear in his own mind from something like
a mathematical proof that's hard to understand because the ideas
it represents are hard to understand. To someone who hasn't learned
the difference, traditional philosophy seems extremely attractive:
as hard (and therefore impressive) as math, yet broader in scope.
That was what lured me in as a high school student.

This singularity is even more singular in having its own defense
built in. When things are hard to understand, people who suspect
they're nonsense generally keep quiet. There's no way to prove a
text is meaningless. The closest you can get is to show that the
official judges of some class of texts can't distinguish them from
placebos.
[10]

And so instead of denouncing philosophy, most people who suspected
it was a waste of time just studied other things. That alone is
fairly damning evidence, considering philosophy's claims. It's
supposed to be about the ultimate truths. Surely all smart people
would be interested in it, if it delivered on that promise.

Because philosophy's flaws turned away the sort of people who might
have corrected them, they tended to be self-perpetuating. Bertrand
Russell wrote in a letter in 1912:
Hitherto the people attracted to philosophy have been mostly those
who loved the big generalizations, which were all wrong, so that
few people with exact minds have taken up the subject.
[11]
His response was to launch Wittgenstein at it, with dramatic results.

I think Wittgenstein deserves to be famous not for the discovery
that most previous philosophy was a waste of time, which judging
from the circumstantial evidence must have been made by every smart
person who studied a little philosophy and declined to pursue it
further, but for how he acted in response.
[12]
Instead of quietly
switching to another field, he made a fuss, from inside. He was
Gorbachev.

The field of philosophy is still shaken from the fright Wittgenstein
gave it.
[13]
Later in life he spent a lot of time talking about
how words worked. Since that seems to be allowed, that's what a
lot of philosophers do now. Meanwhile, sensing a vacuum in the
metaphysical speculation department, the people who used to do
literary criticism have been edging Kantward, under new names like
"literary theory," "critical theory," and when they're feeling
ambitious, plain "theory." The writing is the familiar word salad:
Gender is not like some of the other grammatical modes which
express precisely a mode of conception without any reality that
corresponds to the conceptual mode, and consequently do not express
precisely something in reality by which the intellect could be
moved to conceive a thing the way it does, even where that motive
is not something in the thing as such.
[14]
The singularity I've described is not going away. There's a market
for writing that sounds impressive and can't be disproven. There
will always be both supply and demand. So if one group abandons
this territory, there will always be others ready to occupy it.

A Proposal

We may be able to do better. Here's an intriguing possibility.
Perhaps we should do what Aristotle meant to do, instead of what
he did. The goal he announces in the Metaphysics seems one worth
pursuing: to discover the most general truths. That sounds good.
But instead of trying to discover them because they're useless,
let's try to discover them because they're useful.

I propose we try again, but that we use that heretofore despised
criterion, applicability, as a guide to keep us from wandering
off into a swamp of abstractions. Instead of trying to answer the
question:
What are the most general truths?
let's try to answer the question
Of all the useful things we can say, which are the most general?
The test of utility I propose is whether we cause people who read
what we've written to do anything differently afterward. Knowing
we have to give definite (if implicit) advice will keep us from
straying beyond the resolution of the words we're using.

The goal is the same as Aristotle's; we just approach it from a
different direction.

As an example of a useful, general idea, consider that of the
controlled experiment. There's an idea that has turned out to be
widely applicable. Some might say it's part of science, but it's
not part of any specific science; it's literally meta-physics (in
our sense of "meta"). The idea of evolution is another. It turns
out to have quite broad applications—for example, in genetic
algorithms and even product design. Frankfurt's distinction between
lying and bullshitting seems a promising recent example.
[15]

These seem to me what philosophy should look like: quite general
observations that would cause someone who understood them to do
something differently.

Such observations will necessarily be about things that are imprecisely
defined. Once you start using words with precise meanings, you're
doing math. So starting from utility won't entirely solve the
problem I described above—it won't flush out the metaphysical
singularity. But it should help. It gives people with good
intentions a new roadmap into abstraction. And they may thereby
produce things that make the writing of the people with bad intentions
look bad by comparison.

One drawback of this approach is that it won't produce the sort of
writing that gets you tenure. And not just because it's not currently
the fashion. In order to get tenure in any field you must not
arrive at conclusions that members of tenure committees can disagree
with. In practice there are two kinds of solutions to this problem.
In math and the sciences, you can prove what you're saying, or at
any rate adjust your conclusions so you're not claiming anything
false ("6 of 8 subjects had lower blood pressure after the treatment").
In the humanities you can either avoid drawing any definite conclusions
(e.g. conclude that an issue is a complex one), or draw conclusions
so narrow that no one cares enough to disagree with you.

The kind of philosophy I'm advocating won't be able to take either
of these routes. At best you'll be able to achieve the essayist's
standard of proof, not the mathematician's or the experimentalist's.
And yet you won't be able to meet the usefulness test without
implying definite and fairly broadly applicable conclusions. Worse
still, the usefulness test will tend to produce results that annoy
people: there's no use in telling people things they already believe,
and people are often upset to be told things they don't.

Here's the exciting thing, though. Anyone can do this. Getting
to general plus useful by starting with useful and cranking up the
generality may be unsuitable for junior professors trying to get
tenure, but it's better for everyone else, including professors who
already have it. This side of the mountain is a nice gradual slope.
You can start by writing things that are useful but very specific,
and then gradually make them more general. Joe's has good burritos.
What makes a good burrito? What makes good food? What makes
anything good? You can take as long as you want. You don't have
to get all the way to the top of the mountain. You don't have to
tell anyone you're doing philosophy.

If it seems like a daunting task to do philosophy, here's an
encouraging thought. The field is a lot younger than it seems.
Though the first philosophers in the western tradition lived about
2500 years ago, it would be misleading to say the field is 2500
years old, because for most of that time the leading practitioners
weren't doing much more than writing commentaries on Plato or
Aristotle while watching over their shoulders for the next invading
army. In the times when they weren't, philosophy was hopelessly
intermingled with religion. It didn't shake itself free till a
couple hundred years ago, and even then was afflicted by the
structural problems I've described above. If I say this, some will
say it's a ridiculously overbroad and uncharitable generalization,
and others will say it's old news, but here goes: judging from their
works, most philosophers up to the present have been wasting their
time. So in a sense the field is still at the first step.
[16]

That sounds like a preposterous claim to make. It won't seem so
preposterous in 10,000 years. Civilization always seems old, because
it's always the oldest it's ever been. The only way to say whether
something is really old or not is by looking at structural evidence,
and structurally philosophy is young; it's still reeling from the
unexpected breakdown of words.

Philosophy is as young now as math was in 1500. There is a lot
more to discover.

Notes
[1]
In practice formal logic is not much use, because despite
some progress in the last 150 years we're still only able to formalize
a small percentage of statements. We may never do that much better,
for the same reason 1980s-style "knowledge representation" could
never have worked; many statements may have no representation more
concise than a huge, analog brain state.

[2]
It was harder for Darwin's contemporaries to grasp this than
we can easily imagine. The story of creation in the Bible is not
just a Judeo-Christian concept; it's roughly what everyone must
have believed since before people were people. The hard part of
grasping evolution was to realize that species weren't, as they
seem to be, unchanging, but had instead evolved from different,
simpler organisms over unimaginably long periods of time.

Now we don't have to make that leap. No one in an industrialized
country encounters the idea of evolution for the first time as an
adult. Everyone's taught about it as a child, either as truth or
heresy.

[3]
Greek philosophers before Plato wrote in verse. This must
have affected what they said. If you try to write about the nature
of the world in verse, it inevitably turns into incantation. Prose
lets you be more precise, and more tentative.

[4]
Philosophy is like math's
ne'er-do-well brother. It was born when Plato and Aristotle looked
at the works of their predecessors and said in effect "why can't
you be more like your brother?" Russell was still saying the same
thing 2300 years later.

Math is the precise half of the most abstract ideas, and philosophy
the imprecise half. It's probably inevitable that philosophy will
suffer by comparison, because there's no lower bound to its precision.
Bad math is merely boring, whereas bad philosophy is nonsense. And
yet there are some good ideas in the imprecise half.

[5]
Aristotle's best work was in logic and zoology, both of which
he can be said to have invented. But the most dramatic departure
from his predecessors was a new, much more analytical style of
thinking. He was arguably the first scientist.

[6]
Brooks, Rodney, Programming in Common Lisp, Wiley, 1985, p.
94.

[7]
Some would say we depend on Aristotle more than we realize,
because his ideas were one of the ingredients in our common culture.
Certainly a lot of the words we use have a connection with Aristotle,
but it seems a bit much to suggest that we wouldn't have the concept
of the essence of something or the distinction between matter and
form if Aristotle hadn't written about them.

One way to see how much we really depend on Aristotle would be to
diff European culture with Chinese: what ideas did European culture
have in 1800 that Chinese culture didn't, in virtue of Aristotle's
contribution?

[8]
The meaning of the word "philosophy" has changed over time.
In ancient times it covered a broad range of topics, comparable in
scope to our "scholarship" (though without the methodological
implications). Even as late as Newton's time it included what we
now call "science." But the core of the subject today is still what
seemed to Aristotle the core: the attempt to discover the most
general truths.

Aristotle didn't call this "metaphysics." That name got assigned
to it because the books we now call the Metaphysics came after
(meta = after) the Physics in the standard edition of Aristotle's
works compiled by Andronicus of Rhodes three centuries later. What
we call "metaphysics" Aristotle called "first philosophy."

[9]
Some of Aristotle's immediate successors may have realized
this, but it's hard to say because most of their works are lost.

[10]
Sokal, Alan, "Transgressing the Boundaries: Toward a Transformative
Hermeneutics of Quantum Gravity," Social Text 46/47, pp. 217-252.

Abstract-sounding nonsense seems to be most attractive when it's
aligned with some axe the audience already has to grind. If this
is so we should find it's most popular with groups that are (or
feel) weak. The powerful don't need its reassurance.

[11]
Letter to Ottoline Morrell, December 1912. Quoted in:

Monk, Ray, Ludwig Wittgenstein: The Duty of Genius, Penguin, 1991,
p. 75.

[12]
A preliminary result, that all metaphysics between Aristotle
and 1783 had been a waste of time, is due to I. Kant.

[13]
Wittgenstein asserted a sort of mastery to which the inhabitants
of early 20th century Cambridge seem to have been peculiarly
vulnerable—perhaps partly because so many had been raised religious
and then stopped believing, so had a vacant space in their heads
for someone to tell them what to do (others chose Marx or Cardinal
Newman), and partly because a quiet, earnest place like Cambridge
in that era had no natural immunity to messianic figures, just as
European politics then had no natural immunity to dictators.

[14]
This is actually from the Ordinatio of Duns Scotus (ca.
1300), with "number" replaced by "gender." Plus ça change.

Wolter, Allan (trans), Duns Scotus: Philosophical Writings, Nelson,
1963, p. 92.

[15]
Frankfurt, Harry, On Bullshit, Princeton University Press,
2005.

[16]
Some introductions to philosophy now take the line that
philosophy is worth studying as a process rather than for any
particular truths you'll learn. The philosophers whose works they
cover would be rolling in their graves at that. They hoped they
were doing more than serving as examples of how to argue: they hoped
they were getting results. Most were wrong, but it doesn't seem
an impossible hope.

This argument seems to me like someone in 1500 looking at the lack
of results achieved by alchemy and saying its value was as a process.
No, they were going about it wrong. It turns out it is possible
to transmute lead into gold (though not economically at current
energy prices), but the route to that knowledge was to
backtrack and try another approach.

Thanks to Trevor Blackwell, Paul Buchheit, Jessica Livingston,
Robert Morris, Mark Nitzberg, and Peter Norvig for reading drafts of this.

April 2005

"Suits make a corporate comeback," says the New
York Times. Why does this sound familiar? Maybe because
the suit was also back in February,
September 2004, June 2004, March 2004, September 2003,
November 2002, April 2002, and February 2002.
Why do the media keep running stories saying suits are back? Because
PR firms tell
them to. One of the most surprising things I discovered
during my brief business career was the existence of the PR industry,
lurking like a huge, quiet submarine beneath the news. Of the
stories you read in traditional media that aren't about politics,
crimes, or disasters, more than half probably come from PR firms.

I know because I spent years hunting such "press hits." Our startup spent
its entire marketing budget on PR: at a time when we were assembling
our own computers to save money, we were paying a PR firm $16,000
a month. And they were worth it. PR is the news equivalent of
search engine optimization; instead of buying ads, which readers
ignore, you get yourself inserted directly into the stories. [1]

Our PR firm
was one of the best in the business. In 18 months, they got press
hits in over 60 different publications.
And we weren't the only ones they did great things for.
In 1997 I got a call from another
startup founder considering hiring them to promote his company. I
told him they were PR gods, worth every penny of their outrageous
fees. But I remember thinking his company's name was odd.
Why call an auction site "eBay"?
Symbiosis

PR is not dishonest. Not quite. In fact, the reason the best PR
firms are so effective is precisely that they aren't dishonest.
They give reporters genuinely valuable information. A good PR firm
won't bug reporters just because the client tells them to; they've
worked hard to build their credibility with reporters, and they
don't want to destroy it by feeding them mere propaganda.

If anyone is dishonest, it's the reporters. The main reason PR
firms exist is that reporters are lazy. Or, to put it more nicely,
overworked. Really they ought to be out there digging up stories
for themselves. But it's so tempting to sit in their offices and
let PR firms bring the stories to them. After all, they know good
PR firms won't lie to them.

A good flatterer doesn't lie, but tells his victim selective truths
(what a nice color your eyes are). Good PR firms use the same
strategy: they give reporters stories that are true, but whose truth
favors their clients.

For example, our PR firm often pitched stories about how the Web
let small merchants compete with big ones. This was perfectly true.
But the reason reporters ended up writing stories about this
particular truth, rather than some other one, was that small merchants
were our target market, and we were paying the piper.

Different publications vary greatly in their reliance on PR firms.
At the bottom of the heap are the trade press, who make most of
their money from advertising and would give the magazines away for
free if advertisers would let them. [2] The average
trade publication is a bunch of ads, glued together by just enough
articles to make it look like a magazine. They're so desperate for
"content" that some will print your press releases almost verbatim,
if you take the trouble to write them to read like articles.

At the other extreme are publications like the New York Times
and the Wall Street Journal. Their reporters do go out and
find their own stories, at least some of the time. They'll listen
to PR firms, but briefly and skeptically. We managed to get press
hits in almost every publication we wanted, but we never managed
to crack the print edition of the Times. [3]

The weak point of the top reporters is not laziness, but vanity.
You don't pitch stories to them. You have to approach them as if
you were a specimen under their all-seeing microscope, and make it
seem as if the story you want them to run is something they thought
of themselves.

Our greatest PR coup was a two-part one. We estimated, based on
some fairly informal math, that there were about 5000 stores on the
Web. We got one paper to print this number, which seemed neutral
enough. But once this "fact" was out there in print, we could quote
it to other publications, and claim that with 1000 users we had 20%
of the online store market.

This was roughly true. We really did have the biggest share of the
online store market, and 5000 was our best guess at its size. But
the way the story appeared in the press sounded a lot more definite.

Reporters like definitive statements. For example, many of the
stories about Jeremy Jaynes's conviction say that he was one of the
10 worst spammers. This "fact" originated in Spamhaus's ROKSO list,
which I think even Spamhaus would admit is a rough guess at the top
spammers. The first stories about Jaynes cited this source, but
now it's simply repeated as if it were part of the indictment.
[4]

All you can say with certainty about Jaynes is that he was a fairly
big spammer. But reporters don't want to print vague stuff like
"fairly big." They want statements with punch, like "top ten." And
PR firms give them what they want.

Wearing suits, we're told, will make us 3.6 percent more productive.

Buzz

Where the work of PR firms really does get deliberately misleading is in
the generation of "buzz." They usually feed the same story to
several different publications at once. And when readers see similar
stories in multiple places, they think there is some important trend
afoot. Which is exactly what they're supposed to think.

When Windows 95 was launched, people waited outside stores
at midnight to buy the first copies. None of them would have been
there without PR firms, who generated such a buzz in
the news media that it became self-reinforcing, like a nuclear chain
reaction.

I doubt PR firms realize it yet, but the Web makes it possible to
track them at work. If you search for the obvious phrases, you
turn up several efforts over the years to place stories about the
return of the suit. For example, the Reuters article
that got picked up by USA
Today in September 2004. "The suit is back," it begins.

Trend articles like this are almost always the work of
PR firms. Once you know how to read them, it's straightforward to
figure out who the client is. With trend stories, PR firms usually
line up one or more "experts" to talk about the industry generally.
In this case we get three: the NPD Group, the creative director of
GQ, and a research director at Smith Barney. [5] When
you get to the end of the experts, look for the client. And bingo,
there it is: The Men's Wearhouse.

Not surprising, considering The Men's Wearhouse was at that moment
running ads saying "The Suit is Back." Talk about a successful
press hit-- a wire service article whose first sentence is your own
ad copy.

The secret to finding other press hits from a given pitch
is to realize that they all started from the same document back at
the PR firm. Search for a few key phrases and the names of the
clients and the experts, and you'll turn up other variants of this
story.

Casual Fridays are out and dress codes are in, writes Diane E. Lewis
in The Boston Globe. In a remarkable coincidence, Ms. Lewis's
industry contacts also include the creative director of GQ.

Ripped jeans and T-shirts are out, writes Mary Kathleen Flynn in
US News & World Report. And she too knows the
creative director of GQ.

Men's suits are back, writes Nicole Ford in Sexbuzz.Com ("the
ultimate men's entertainment magazine").

Dressing down loses appeal as men suit up at the office, writes
Tenisha Mercer of The Detroit News.
Now that so many news articles are online, I suspect you could find
a similar pattern for most trend stories placed by PR firms. I
propose we call this new sport "PR diving," and I'm sure there are
far more striking examples out there than this clump of five stories.
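
The search can even be automated. Here's a minimal sketch in Common
Lisp (mine, purely illustrative; the six-word phrase length and the
idea that you've saved the article texts into a list of strings are
my assumptions): it flags word-for-word phrases shared by two or
more articles, which is roughly the fingerprint a common pitch
document leaves behind.

    ;; Toy "PR diving": report word-for-word phrases shared across articles.
    (defun tokenize (text)
      "Split TEXT into downcased words on spaces."
      (loop for start = 0 then (1+ end)
            for end = (position #\Space text :start start)
            collect (string-downcase (subseq text start end))
            while end))

    (defun ngrams (words n)
      "All runs of N consecutive WORDS, joined back into phrase strings."
      (loop for tail on words
            while (nthcdr (1- n) tail)
            collect (format nil "~{~a~^ ~}" (subseq tail 0 n))))

    (defun shared-phrases (articles &key (n 6))
      "Return the N-word phrases appearing in more than one of ARTICLES."
      (let ((seen (make-hash-table :test #'equal)))
        (loop for article in articles
              for id from 0
              do (dolist (phrase (remove-duplicates
                                   (ngrams (tokenize article) n)
                                   :test #'equal))
                   (push id (gethash phrase seen))))
        (loop for phrase being the hash-keys of seen
                using (hash-value ids)
              when (rest ids) collect phrase)))

Run something like this over a week's worth of trend stories and any
phrases that survive point back toward the pitch.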

Online

After spending years chasing them, it's now second nature
to me to recognize press hits for what they are. But before we
hired a PR firm I had no idea where articles in the mainstream media
came from. I could tell a lot of them were crap, but I didn't
realize why.

Remember the exercises in critical reading you did in school, where
you had to look at a piece of writing and step back and ask whether
the author was telling the whole truth? If you really want to be
a critical reader, it turns out you have to step back one step
further, and ask not just whether the author is telling the truth,
but why he's writing about this subject at all.

Online, the answer tends to be a lot simpler. Most people who
publish online write what they write for the simple reason that
they want to. You
can't see the fingerprints of PR firms all over the articles, as
you can in so many print publications-- which is one of the reasons,
though they may not consciously realize it, that readers trust
bloggers more than Business Week.

I was talking recently to a friend who works for a
big newspaper. He thought the print media were in serious trouble,
and that they were still mostly in denial about it. "They think
the decline is cyclic," he said. "Actually it's structural."

In other words, the readers are leaving, and they're not coming
back.
Why? I think the main reason is that the writing online is more honest.
Imagine how incongruous the New York Times article about
suits would sound if you read it in a blog:
The urge to look corporate-- sleek, commanding,
prudent, yet with just a touch of hubris on your well-cut sleeve--
is an unexpected development in a time of business disgrace.
The problem
with this article is not just that it originated in a PR firm.
The whole tone is bogus. This is the tone of someone writing down
to their audience.

Whatever its flaws, the writing you find online
is authentic. It's not mystery meat cooked up
out of scraps of pitch letters and press releases, and pressed into
molds of zippy
journalese. It's people writing what they think.

I didn't realize, till there was an alternative, just how artificial
most of the writing in the mainstream media was. I'm not saying
I used to believe what I read in Time and Newsweek. Since high
school, at least, I've thought of magazines like that more as
guides to what ordinary people were being
told to think than as
sources of information. But I didn't realize till the last
few years that writing for publication didn't have to mean writing
that way. I didn't realize you could write as candidly and
informally as you would if you were writing to a friend.

Readers aren't the only ones who've noticed the
change. The PR industry has too.
A hilarious article
on the site of the PR Society of America gets to the heart of the
matter:
Bloggers are sensitive about becoming mouthpieces
for other organizations and companies, which is the reason they
began blogging in the first place.
PR people fear bloggers for the same reason readers
like them. And that means there may be a struggle ahead. As
this new kind of writing draws readers away from traditional media, we
should be prepared for whatever PR mutates into to compensate.
When I think
how hard PR firms work to score press hits in the traditional
media, I can't imagine they'll work any less hard to feed stories
to bloggers, if they can figure out how.
Notes

[1] PR has at least
one beneficial feature: it favors small companies. If PR didn't
work, the only alternative would be to advertise, and only big
companies can afford that.

[2] Advertisers pay
less for ads in free publications, because they assume readers
ignore something they get for free. This is why so many trade
publications nominally have a cover price and yet give away free
subscriptions with such abandon.

[3] Different sections
of the Times vary so much in their standards that they're
practically different papers. Whoever fed the style section reporter
this story about suits coming back would have been sent packing by
the regular news reporters.

[4] The most striking
example I know of this type is the "fact" that the Internet worm
of 1988 infected 6000 computers. I was there when it was cooked up,
and this was the recipe: someone guessed that there were about
60,000 computers attached to the Internet, and that the worm might
have infected ten percent of them.

Actually no one knows how many computers the worm infected, because
the remedy was to reboot them, and this destroyed all traces. But
people like numbers. And so this one is now replicated
all over the Internet, like a little worm of its own.

[5] Not all were
necessarily supplied by the PR firm. Reporters sometimes call a few
additional sources on their own, like someone adding a few fresh
vegetables to a can of soup.
Thanks to Ingrid Basset, Trevor Blackwell, Sarah Harlin, Jessica
Livingston, Jackie McDonough, Robert Morris, and Aaron Swartz (who
also found the PRSA article) for reading drafts of this.

Correction: Earlier versions used a recent
Business Week article mentioning del.icio.us as an example
of a press hit, but Joshua Schachter tells me
it was spontaneous.
April 2001, rev. April 2003

(This article is derived from a talk given at the 2001 Franz
Developer Symposium.)
In the summer of 1995, my friend Robert Morris and I
started a startup called
Viaweb.
Our plan was to write
software that would let end users build online stores.
What was novel about this software, at the time, was
that it ran on our server, using ordinary Web pages
as the interface.

A lot of people could have been having this idea at the
same time, of course, but as far as I know, Viaweb was
the first Web-based application. It seemed such
a novel idea to us that we named the company after it:
Viaweb, because our software worked via the Web,
instead of running on your desktop computer.

Another unusual thing about this software was that it
was written primarily in a programming language called
Lisp. It was one of the first big end-user
applications to be written in Lisp, which up till then
had been used mostly in universities and research labs. [1]

The Secret Weapon

Eric Raymond has written an essay called "How to Become a Hacker,"
and in it, among other things, he tells would-be hackers what
languages they should learn. He suggests starting with Python and
Java, because they are easy to learn. The serious hacker will also
want to learn C, in order to hack Unix, and Perl for system
administration and cgi scripts. Finally, the truly serious hacker
should consider learning Lisp:
Lisp is worth learning for the profound enlightenment experience
you will have when you finally get it; that experience will make
you a better programmer for the rest of your days, even if you
never actually use Lisp itself a lot.
This is the same argument you tend to hear for learning Latin. It
won't get you a job, except perhaps as a classics professor, but
it will improve your mind, and make you a better writer in languages
you do want to use, like English.

But wait a minute. This metaphor doesn't stretch that far. The
reason Latin won't get you a job is that no one speaks it. If you
write in Latin, no one can understand you. But Lisp is a computer
language, and computers speak whatever language you, the programmer,
tell them to.

So if Lisp makes you a better programmer, like he says, why wouldn't
you want to use it? If a painter were offered a brush that would
make him a better painter, it seems to me that he would want to
use it in all his paintings, wouldn't he? I'm not trying to make
fun of Eric Raymond here. On the whole, his advice is good. What
he says about Lisp is pretty much the conventional wisdom. But
there is a contradiction in the conventional wisdom: Lisp will
make you a better programmer, and yet you won't use it.

Why not? Programming languages are just tools, after all. If Lisp
really does yield better programs, you should use it. And if it
doesn't, then who needs it?

This is not just a theoretical question. Software is a very
competitive business, prone to natural monopolies. A company that
gets software written faster and better will, all other things
being equal, put its competitors out of business. And when you're
starting a startup, you feel this very keenly. Startups tend to
be an all or nothing proposition. You either get rich, or you get
nothing. In a startup, if you bet on the wrong technology, your
competitors will crush you.

Robert and I both knew Lisp well, and we couldn't see any reason
not to trust our instincts and go with Lisp. We knew that everyone
else was writing their software in C++ or Perl. But we also knew
that that didn't mean anything. If you chose technology that way,
you'd be running Windows. When you choose technology, you have to
ignore what other people are doing, and consider only what will
work the best.

This is especially true in a startup. In a big company, you can
do what all the other big companies are doing. But a startup can't
do what all the other startups do. I don't think a lot of people
realize this, even in startups.

The average big company grows at about ten percent a year. So if
you're running a big company and you do everything the way the
average big company does it, you can expect to do as well as the
average big company-- that is, to grow about ten percent a year.

The same thing will happen if you're running a startup, of course.
If you do everything the way the average startup does it, you should
expect average performance. The problem here is, average performance
means that you'll go out of business. The survival rate for startups
is way less than fifty percent. So if you're running a startup,
you had better be doing something odd. If not, you're in trouble.

Back in 1995, we knew something that I don't think our competitors
understood, and few understand even now: when you're writing
software that only has to run on your own servers, you can use
any language you want. When you're writing desktop software,
there's a strong bias toward writing applications in the same
language as the operating system. Ten years ago, writing applications
meant writing applications in C. But with Web-based software,
especially when you have the source code of both the language and
the operating system, you can use whatever language you want.

This new freedom is a double-edged sword, however. Now that you
can use any language, you have to think about which one to use.
Companies that try to pretend nothing has changed risk finding that
their competitors do not.

If you can use any language, which do you use? We chose Lisp.
For one thing, it was obvious that rapid development would be
important in this market. We were all starting from scratch, so
a company that could get new features done before its competitors
would have a big advantage. We knew Lisp was a really good language
for writing software quickly, and server-based applications magnify
the effect of rapid development, because you can release software
the minute it's done.

If other companies didn't want to use Lisp, so much the better.
It might give us a technological edge, and we needed all the help
we could get. When we started Viaweb, we had no experience in
business. We didn't know anything about marketing, or hiring
people, or raising money, or getting customers. Neither of us had
ever even had what you would call a real job. The only thing we
were good at was writing software. We hoped that would save us.
Any advantage we could get in the software department, we would
take.

So you could say that using Lisp was an experiment. Our hypothesis
was that if we wrote our software in Lisp, we'd be able to get
features done faster than our competitors, and also to do things
in our software that they couldn't do. And because Lisp was so
high-level, we wouldn't need a big development team, so our costs
would be lower. If this were so, we could offer a better product
for less money, and still make a profit. We would end up getting
all the users, and our competitors would get none, and eventually
go out of business. That was what we hoped would happen, anyway.

What were the results of this experiment? Somewhat surprisingly,
it worked. We eventually had many competitors, on the order of
twenty to thirty of them, but none of their software could compete
with ours. We had a wysiwyg online store builder that ran on the
server and yet felt like a desktop application. Our competitors
had cgi scripts. And we were always far ahead of them in features.
Sometimes, in desperation, competitors would try to introduce
features that we didn't have. But with Lisp our development cycle
was so fast that we could sometimes duplicate a new feature within
a day or two of a competitor announcing it in a press release. By
the time journalists covering the press release got round to calling
us, we would have the new feature too.

It must have seemed to our competitors that we had some kind of
secret weapon-- that we were decoding their Enigma traffic or
something. In fact we did have a secret weapon, but it was simpler
than they realized. No one was leaking news of their features to
us. We were just able to develop software faster than anyone
thought possible.

When I was about nine I happened to get hold of a copy of The Day
of the Jackal, by Frederick Forsyth. The main character is an
assassin who is hired to kill the president of France. The assassin
has to get past the police to get up to an apartment that overlooks
the president's route. He walks right by them, dressed up as an
old man on crutches, and they never suspect him.

Our secret weapon was similar. We wrote our software in a weird
AI language, with a bizarre syntax full of parentheses. For years
it had annoyed me to hear Lisp described that way. But now it
worked to our advantage. In business, there is nothing more valuable
than a technical advantage your competitors don't understand. In
business, as in war, surprise is worth as much as force.

And so, I'm a little embarrassed to say, I never said anything
publicly about Lisp while we were working on Viaweb. We never
mentioned it to the press, and if you searched for Lisp on our Web
site, all you'd find were the titles of two books in my bio. This
was no accident. A startup should give its competitors as little
information as possible. If they didn't know what language our
software was written in, or didn't care, I wanted to keep it that
way. [2]

The people who understood our technology best were the customers.
They didn't care what language Viaweb was written in either, but
they noticed that it worked really well. It let them build great
looking online stores literally in minutes. And so, by word of
mouth mostly, we got more and more users. By the end of 1996 we
had about 70 stores online. At the end of 1997 we had 500. Six
months later, when Yahoo bought us, we had 1070 users. Today, as
Yahoo Store, this software continues to dominate its market. It's
one of the more profitable pieces of Yahoo, and the stores built
with it are the foundation of Yahoo Shopping. I left Yahoo in
1999, so I don't know exactly how many users they have now, but
the last I heard there were about 20,000.
The Blub Paradox

What's so great about Lisp? And if Lisp is so great, why doesn't
everyone use it? These sound like rhetorical questions, but actually
they have straightforward answers. Lisp is so great not because
of some magic quality visible only to devotees, but because it is
simply the most powerful language available. And the reason everyone
doesn't use it is that programming languages are not merely
technologies, but habits of mind as well, and nothing changes
slower. Of course, both these answers need explaining.

I'll begin with a shockingly controversial statement: programming
languages vary in power.

Few would dispute, at least, that high level languages are more
powerful than machine language. Most programmers today would agree
that you do not, ordinarily, want to program in machine language.
Instead, you should program in a high-level language, and have a
compiler translate it into machine language for you. This idea is
even built into the hardware now: since the 1980s, instruction sets
have been designed for compilers rather than human programmers.

Everyone knows it's a mistake to write your whole program by hand
in machine language. What's less often understood is that there
is a more general principle here: that if you have a choice of
several languages, it is, all other things being equal, a mistake
to program in anything but the most powerful one. [3]

There are many exceptions to this rule. If you're writing a program
that has to work very closely with a program written in a certain
language, it might be a good idea to write the new program in the
same language. If you're writing a program that only has to do
something very simple, like number crunching or bit manipulation,
you may as well use a less abstract language, especially since it
may be slightly faster. And if you're writing a short, throwaway
program, you may be better off just using whatever language has
the best library functions for the task. But in general, for
application software, you want to be using the most powerful
(reasonably efficient) language you can get, and using anything
else is a mistake, of exactly the same kind, though possibly in a
lesser degree, as programming in machine language.

You can see that machine language is very low level. But, at least
as a kind of social convention, high-level languages are often all
treated as equivalent. They're not. Technically the term "high-level
language" doesn't mean anything very definite. There's no dividing
line with machine languages on one side and all the high-level
languages on the other. Languages fall along a continuum [4] of
abstractness, from the most powerful all the way down to machine
languages, which themselves vary in power.

Consider Cobol. Cobol is a high-level language, in the sense that
it gets compiled into machine language. Would anyone seriously
argue that Cobol is equivalent in power to, say, Python? It's
probably closer to machine language than Python.

Or how about Perl 4? Between Perl 4 and Perl 5, lexical closures
got added to the language. Most Perl hackers would agree that Perl
5 is more powerful than Perl 4. But once you've admitted that,
you've admitted that one high level language can be more powerful
than another. And it follows inexorably that, except in special
cases, you ought to use the most powerful you can get.

This idea is rarely followed to its conclusion, though. After a
certain age, programmers rarely switch languages voluntarily.
Whatever language people happen to be used to, they tend to consider
just good enough.

Programmers get very attached to their favorite languages, and I
don't want to hurt anyone's feelings, so to explain this point I'm
going to use a hypothetical language called Blub. Blub falls right
in the middle of the abstractness continuum. It is not the most
powerful language, but it is more powerful than Cobol or machine
language.

And in fact, our hypothetical Blub programmer wouldn't use either
of them. Of course he wouldn't program in machine language. That's
what compilers are for. And as for Cobol, he doesn't know how
anyone can get anything done with it. It doesn't even have x (Blub
feature of your choice).

As long as our hypothetical Blub programmer is looking down the
power continuum, he knows he's looking down. Languages less powerful
than Blub are obviously less powerful, because they're missing some
feature he's used to. But when our hypothetical Blub programmer
looks in the other direction, up the power continuum, he doesn't
realize he's looking up. What he sees are merely weird languages.
He probably considers them about equivalent in power to Blub, but
with all this other hairy stuff thrown in as well. Blub is good
enough for him, because he thinks in Blub.

When we switch to the point of view of a programmer using any of
the languages higher up the power continuum, however, we find that
he in turn looks down upon Blub. How can you get anything done in
Blub? It doesn't even have y.

By induction, the only programmers in a position to see all the
differences in power between the various languages are those who
understand the most powerful one. (This is probably what Eric
Raymond meant about Lisp making you a better programmer.) You can't
trust the opinions of the others, because of the Blub paradox:
they're satisfied with whatever language they happen to use, because
it dictates the way they think about programs.

I know this from my own experience, as a high school kid writing
programs in Basic. That language didn't even support recursion.
It's hard to imagine writing programs without using recursion, but
I didn't miss it at the time. I thought in Basic. And I was a
whiz at it. Master of all I surveyed.

The five languages that Eric Raymond recommends to hackers fall at
various points on the power continuum. Where they fall relative
to one another is a sensitive topic. What I will say is that I
think Lisp is at the top. And to support this claim I'll tell you
about one of the things I find missing when I look at the other
four languages. How can you get anything done in them, I think,
without macros? [5]

Many languages have something called a macro. But Lisp macros are
unique. And believe it or not, what they do is related to the
parentheses. The designers of Lisp didn't put all those parentheses
in the language just to be different. To the Blub programmer, Lisp
code looks weird. But those parentheses are there for a reason.
They are the outward evidence of a fundamental difference between
Lisp and other languages.

Lisp code is made out of Lisp data objects. And not in the trivial
sense that the source files contain characters, and strings are
one of the data types supported by the language. Lisp code, after
it's read by the parser, is made of data structures that you can
traverse.

If you understand how compilers work, what's really going on is
not so much that Lisp has a strange syntax as that Lisp has no
syntax. You write programs in the parse trees that get generated
within the compiler when other languages are parsed. But these
parse trees are fully accessible to your programs. You can write
programs that manipulate them. In Lisp, these programs are called
macros. They are programs that write programs.

Programs that write programs? When would you ever want to do that?
Not very often, if you think in Cobol. All the time, if you think
in Lisp. It would be convenient here if I could give an example
of a powerful macro, and say there! how about that? But if I did,
it would just look like gibberish to someone who didn't know Lisp;
there isn't room here to explain everything you'd need to know to
understand what it meant. In
Ansi Common Lisp I tried to move
things along as fast as I could, and even so I didn't get to macros
until page 160.
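
Still, a toy macro can at least show the shape of the thing, if not
the power. The sketch below is mine, not anything from Viaweb: it
defines a while loop in three lines, as an ordinary definition,
because the definition merely builds the list of code that the
compiler goes on to compile.

    ;; A macro is a function from code to code. This one builds a WHILE
    ;; loop out of the primitive DO; the backquoted form is the code it
    ;; returns, and the commas splice in the caller's test and body.
    (defmacro while (test &body body)
      `(do ()
           ((not ,test))
         ,@body))

    ;; (let ((n 0)) (while (< n 3) (print n) (incf n)))
    ;; expands, before compilation, into
    ;; (let ((n 0)) (do () ((not (< n 3))) (print n) (incf n)))

After that, while is as much a part of the language as do itself.
Presumably the macros in the Viaweb editor did far more than this,
but by the same mechanism.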

But I think I can give a kind of argument that might be convincing.
The source code of the Viaweb editor was probably about 20-25%
macros. Macros are harder to write than ordinary Lisp functions,
and it's considered to be bad style to use them when they're not
necessary. So every macro in that code is there because it has to
be. What that means is that at least 20-25% of the code in this
program is doing things that you can't easily do in any other
language. However skeptical the Blub programmer might be about my
claims for the mysterious powers of Lisp, this ought to make him
curious. We weren't writing this code for our own amusement. We
were a tiny startup, programming as hard as we could in order to
put technical barriers between us and our competitors.

A suspicious person might begin to wonder if there was some
correlation here. A big chunk of our code was doing things that
are very hard to do in other languages. The resulting software
did things our competitors' software couldn't do. Maybe there was
some kind of connection. I encourage you to follow that thread.
There may be more to that old man hobbling along on his crutches
than meets the eye.

Aikido for Startups

But I don't expect to convince anyone
(over 25)
to go out and learn
Lisp. The purpose of this article is not to change anyone's mind,
but to reassure people already interested in using Lisp-- people
who know that Lisp is a powerful language, but worry because it
isn't widely used. In a competitive situation, that's an advantage.
Lisp's power is multiplied by the fact that your competitors don't
get it.

If you think of using Lisp in a startup, you shouldn't worry that
it isn't widely understood. You should hope that it stays that
way. And it's likely to. It's the nature of programming languages
to make most people satisfied with whatever they currently use.
Computer hardware changes so much faster than personal habits that
programming practice is usually ten to twenty years behind the
processor. At places like MIT they were writing programs in
high-level languages in the early 1960s, but many companies continued
to write code in machine language well into the 1980s. I bet a
lot of people continued to write machine language until the processor,
like a bartender eager to close up and go home, finally kicked them
out by switching to a risc instruction set.

Ordinarily technology changes fast. But programming languages are
different: programming languages are not just technology, but what
programmers think in. They're half technology and half religion. [6]
And so the median language, meaning whatever language the median
programmer uses, moves as slow as an iceberg. Garbage collection,
introduced by Lisp in about 1960, is now widely considered to be
a good thing. Runtime typing, ditto, is growing in popularity.
Lexical closures, introduced by Lisp in the early 1970s, are now,
just barely, on the radar screen. Macros, introduced by Lisp in the
mid 1960s, are still terra incognita.

Obviously, the median language has enormous momentum. I'm not
proposing that you can fight this powerful force. What I'm proposing
is exactly the opposite: that, like a practitioner of Aikido, you
can use it against your opponents.

If you work for a big company, this may not be easy. You will have
a hard time convincing the pointy-haired boss to let you build
things in Lisp, when he has just read in the paper that some other
language is poised, like Ada was twenty years ago, to take over
the world. But if you work for a startup that doesn't have
pointy-haired bosses yet, you can, like we did, turn the Blub
paradox to your advantage: you can use technology that your
competitors, glued immovably to the median language, will never be
able to match.

If you ever do find yourself working for a startup, here's a handy
tip for evaluating competitors. Read their job listings. Everything
else on their site may be stock photos or the prose equivalent,
but the job listings have to be specific about what they want, or
they'll get the wrong candidates.

During the years we worked on Viaweb I read a lot of job descriptions.
A new competitor seemed to emerge out of the woodwork every month
or so. The first thing I would do, after checking to see if they
had a live online demo, was look at their job listings. After a
couple years of this I could tell which companies to worry about
and which not to. The more of an IT flavor the job descriptions
had, the less dangerous the company was. The safest kind were the
ones that wanted Oracle experience. You never had to worry about
those. You were also safe if they said they wanted C++ or Java
developers. If they wanted Perl or Python programmers, that would
be a bit frightening-- that's starting to sound like a company
where the technical side, at least, is run by real hackers. If I
had ever seen a job posting looking for Lisp hackers, I would have
been really worried.
Notes

[1] Viaweb at first had two parts: the editor, written in Lisp,
which people used to build their sites, and the ordering system,
written in C, which handled orders. The first version was mostly
Lisp, because the ordering system was small. Later we added two
more modules, an image generator written in C, and a back-office
manager written mostly in Perl.

In January 2003, Yahoo released a new version of the editor
written in C++ and Perl. It's hard to say whether the program is no
longer written in Lisp, though, because to translate this program
into C++ they literally had to write a Lisp interpreter: the source
files of all the page-generating templates are still, as far as I
know, Lisp code. (See Greenspun's Tenth Rule.)

[2] Robert Morris says that I didn't need to be secretive, because
even if our competitors had known we were using Lisp, they wouldn't
have understood why: "If they were that smart they'd already be
programming in Lisp."

[3] All languages are equally powerful in the sense of being Turing
equivalent, but that's not the sense of the word programmers care
about. (No one wants to program a Turing machine.) The kind of
power programmers care about may not be formally definable, but
one way to explain it would be to say that it refers to features
you could only get in the less powerful language by writing an
interpreter for the more powerful language in it. If language A
has an operator for removing spaces from strings and language B
doesn't, that probably doesn't make A more powerful, because you
can probably write a subroutine to do it in B. But if A supports,
say, recursion, and B doesn't, that's not likely to be something
you can fix by writing library functions.
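
For what it's worth, the space-removing case really is a one-line
fix; in Common Lisp, for instance (my illustration, not part of the
original note), the standard remove already is that subroutine:

    (remove #\Space "s p a c e s")   ; => "spaces"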

[4] Note to nerds: or possibly a lattice, narrowing toward the top;
it's not the shape that matters here but the idea that there is at
least a partial order.

[5] It is a bit misleading to treat macros as a separate feature.
In practice their usefulness is greatly enhanced by other Lisp
features like lexical closures and rest parameters.

[6] As a result, comparisons of programming languages either take
the form of religious wars or undergraduate textbooks so determinedly
neutral that they're really works of anthropology. People who
value their peace, or want tenure, avoid the topic. But the question
is only half a religious one; there is something there worth
studying, especially if you want to design new languages.
October 2014

(This essay is derived from a guest lecture in Sam Altman's startup class at
Stanford. It's intended for college students, but much of it is
applicable to potential founders at other ages.)

One of the advantages of having kids is that when you have to give
advice, you can ask yourself "what would I tell my own kids?" My
kids are little, but I can imagine what I'd tell them about startups
if they were in college, and that's what I'm going to tell you.

Startups are very counterintuitive. I'm not sure why. Maybe it's
just because knowledge about them hasn't permeated our culture yet.
But whatever the reason, starting a startup is a task where you
can't always trust your instincts.

It's like skiing in that way. When you first try skiing and you
want to slow down, your instinct is to lean back. But if you lean
back on skis you fly down the hill out of control. So part of
learning to ski is learning to suppress that impulse. Eventually
you get new habits, but at first it takes a conscious effort. At
first there's a list of things you're trying to remember as you
start down the hill.

Startups are as unnatural as skiing, so there's a similar list for
startups. Here I'm going to give you the first part of it — the things
to remember if you want to prepare yourself to start a startup.
Counterintuitive

The first item on it is the fact I already mentioned: that startups
are so weird that if you trust your instincts, you'll make a lot
of mistakes. If you know nothing more than this, you may at least
pause before making them.

When I was running Y Combinator I used to joke that our function
was to tell founders things they would ignore. It's really true.
Batch after batch, the YC partners warn founders about mistakes
they're about to make, and the founders ignore them, and then come
back a year later and say "I wish we'd listened."

Why do the founders ignore the partners' advice? Well, that's the
thing about counterintuitive ideas: they contradict your intuitions.
They seem wrong. So of course your first impulse is to disregard
them. And in fact my joking description is not merely the curse
of Y Combinator but part of its raison d'etre. If founders' instincts
already gave them the right answers, they wouldn't need us. You
only need other people to give you advice that surprises you. That's
why there are a lot of ski instructors and not many running
instructors.
[1]

You can, however, trust your instincts about people. And in fact
one of the most common mistakes young founders make is not to
do that enough. They get involved with people who seem impressive,
but about whom they feel some misgivings personally. Later when
things blow up they say "I knew there was something off about him,
but I ignored it because he seemed so impressive."

If you're thinking about getting involved with someone — as a
cofounder, an employee, an investor, or an acquirer — and you
have misgivings about them, trust your gut. If someone seems
slippery, or bogus, or a jerk, don't ignore it.

This is one case where it pays to be self-indulgent. Work with
people you genuinely like, and you've known long enough to be sure.
Expertise

The second counterintuitive point is that it's not that important
to know a lot about startups. The way to succeed in a startup is
not to be an expert on startups, but to be an expert on your users
and the problem you're solving for them.
Mark Zuckerberg didn't succeed because he was an expert on startups.
He succeeded despite being a complete noob at startups, because he
understood his users really well.

If you don't know anything about, say, how to raise an angel round,
don't feel bad on that account. That sort of thing you can learn
when you need to, and forget after you've done it.

In fact, I worry it's not merely unnecessary to learn in great
detail about the mechanics of startups, but possibly somewhat
dangerous. If I met an undergrad who knew all about convertible
notes and employee agreements and (God forbid) class FF stock, I
wouldn't think "here is someone who is way ahead of their peers."
It would set off alarms. Because another of the characteristic
mistakes of young founders is to go through the motions of starting
a startup. They make up some plausible-sounding idea, raise money
at a good valuation, rent a cool office, hire a bunch of people.
From the outside that seems like what startups do. But the next
step after rent a cool office and hire a bunch of people is: gradually
realize how completely fucked they are, because while imitating all
the outward forms of a startup they have neglected the one thing
that's actually essential: making something people want.
Game

We saw this happen so often that we made up a name for it: playing
house. Eventually I realized why it was happening. The reason
young founders go through the motions of starting a startup is
because that's what they've been trained to do for their whole lives
up to that point. Think about what you have to do to get into
college, for example. Extracurricular activities, check. Even in
college classes most of the work is as artificial as running laps.

I'm not attacking the educational system for being this way. There
will always be a certain amount of fakeness in the work you do when
you're being taught something, and if you measure their performance
it's inevitable that people will exploit the difference to the point
where much of what you're measuring is artifacts of the fakeness.

I confess I did it myself in college. I found that in a lot of
classes there might only be 20 or 30 ideas that were the right shape
to make good exam questions. The way I studied for exams in these
classes was not (except incidentally) to master the material taught
in the class, but to make a list of potential exam questions and
work out the answers in advance. When I walked into the final, the
main thing I'd be feeling was curiosity about which of my questions
would turn up on the exam. It was like a game.

It's not surprising that after being trained for their whole lives
to play such games, young founders' first impulse on starting a
startup is to try to figure out the tricks for winning at this new
game. Since fundraising appears to be the measure of success for
startups (another classic noob mistake), they always want to know what the
tricks are for convincing investors. We tell them the best way to
convince investors is to make a startup
that's actually doing well, meaning growing fast, and then simply
tell investors so. Then they want to know what the tricks are for
growing fast. And we have to tell them the best way to do that is
simply to make something people want.

So many of the conversations YC partners have with young founders
begin with the founder asking "How do we..." and the partner replying
"Just..."

Why do the founders always make things so complicated? The reason,
I realized, is that they're looking for the trick.

So this is the third counterintuitive thing to remember about
startups: starting a startup is where gaming the system stops
working. Gaming the system may continue to work if you go to work
for a big company. Depending on how broken the company is, you can
succeed by sucking up to the right people, giving the impression
of productivity, and so on.
[2]
But that doesn't work with startups.
There is no boss to trick, only users, and all users care about is
whether your product does what they want. Startups are as impersonal
as physics. You have to make something people want, and you prosper
only to the extent you do.

The dangerous thing is, faking does work to some degree on investors.
If you're super good at sounding like you know what you're talking
about, you can fool investors for at least one and perhaps even two
rounds of funding. But it's not in your interest to. The company
is ultimately doomed. All you're doing is wasting your own time
riding it down.

So stop looking for the trick. There are tricks in startups, as
there are in any domain, but they are an order of magnitude less
important than solving the real problem. A founder who knows nothing
about fundraising but has made something users love will have an
easier time raising money than one who knows every trick in the
book but has a flat usage graph. And more importantly, the founder
who has made something users love is the one who will go on to
succeed after raising the money.

Though in a sense it's bad news in that you're deprived of one of
your most powerful weapons, I think it's exciting that gaming the
system stops working when you start a startup. It's exciting that
there even exist parts of the world where you win by doing good
work. Imagine how depressing the world would be if it were all
like school and big companies, where you either have to spend a lot
of time on bullshit things or lose to people who do.
[3]
I would
have been delighted if I'd realized in college that there were parts
of the real world where gaming the system mattered less than others,
and a few where it hardly mattered at all. But there are, and this
variation is one of the most important things to consider when
you're thinking about your future. How do you win in each type of
work, and what would you like to win by doing?
[4]
All-Consuming

That brings us to our fourth counterintuitive point: startups are
all-consuming. If you start a startup, it will take over your life
to a degree you cannot imagine. And if your startup succeeds, it
will take over your life for a long time: for several years at the
very least, maybe for a decade, maybe for the rest of your working
life. So there is a real opportunity cost here.

Larry Page may seem to have an enviable life, but there are aspects
of it that are unenviable. Basically at 25 he started running as
fast as he could and it must seem to him that he hasn't stopped to
catch his breath since. Every day new shit happens in the Google
empire that only the CEO can deal with, and he, as CEO, has to deal
with it. If he goes on vacation for even a week, a whole week's
backlog of shit accumulates. And he has to bear this uncomplainingly,
partly because as the company's daddy he can never show fear or
weakness, and partly because billionaires get less than zero sympathy
if they talk about having difficult lives. Which has the strange
side effect that the difficulty of being a successful startup founder
is concealed from almost everyone except those who've done it.

Y Combinator has now funded several companies that can be called
big successes, and in every single case the founders say the same
thing. It never gets any easier. The nature of the problems changes.
You're worrying about construction delays at your London office
instead of the broken air conditioner in your studio apartment.
But the total volume of worry never decreases; if anything it
increases.

Starting a successful startup is similar to having kids in that
it's like a button you push that changes your life irrevocably.
And while it's truly wonderful having kids, there are a lot of
things that are easier to do before you have them than after. Many
of which will make you a better parent when you do have kids. And
since you can delay pushing the button for a while, most people in
rich countries do.

Yet when it comes to startups, a lot of people seem to think they're
supposed to start them while they're still in college. Are you
crazy? And what are the universities thinking? They go out of
their way to ensure their students are well supplied with contraceptives,
and yet they're setting up entrepreneurship programs and startup
incubators left and right.

To be fair, the universities have their hand forced here. A lot
of incoming students are interested in startups. Universities are,
at least de facto, expected to prepare them for their careers. So
students who want to start startups hope universities can teach
them about startups. And whether universities can do this or not,
there's some pressure to claim they can, lest they lose applicants
to other universities that do.

Can universities teach students about startups? Yes and no. They
can teach students about startups, but as I explained before, this
is not what you need to know. What you need to learn about are the
needs of your own users, and you can't do that until you actually
start the company.
[5]
So starting a startup is intrinsically
something you can only really learn by doing it. And it's impossible
to do that in college, for the reason I just explained: startups
take over your life. You can't start a startup for real as a
student, because if you start a startup for real you're not a student
anymore. You may be nominally a student for a bit, but you won't even
be that for long.
[6]

Given this dichotomy, which of the two paths should you take? Be
a real student and not start a startup, or start a real startup and
not be a student? I can answer that one for you. Do not start a
startup in college. How to start a startup is just a subset of a
bigger problem you're trying to solve: how to have a good life.
And though starting a startup can be part of a good life for a lot
of ambitious people, age 20 is not the optimal time to do it.
Starting a startup is like a brutally fast depth-first search. Most
people should still be searching breadth-first at 20.

You can do things in your early 20s that you can't do as well before
or after, like plunge deeply into projects on a whim and travel
super cheaply with no sense of a deadline. For unambitious people,
this sort of thing is the dreaded "failure to launch," but for the
ambitious ones it can be an incomparably valuable sort of exploration.
If you start a startup at 20 and you're sufficiently successful,
you'll never get to do it.
[7]

Mark Zuckerberg will never get to bum around a foreign country. He
can do other things most people can't, like charter jets to fly him
to foreign countries. But success has taken a lot of the serendipity
out of his life. Facebook is running him as much as he's running
Facebook. And while it can be very cool to be in the grip of a
project you consider your life's work, there are advantages to
serendipity too, especially early in life. Among other things it
gives you more options to choose your life's work from.

There's not even a tradeoff here. You're not sacrificing anything
if you forgo starting a startup at 20, because you're more likely
to succeed if you wait. In the unlikely case that you're 20 and
one of your side projects takes off like Facebook did, you'll face
a choice of running with it or not, and it may be reasonable to run
with it. But the usual way startups take off is for the founders
to make them take off, and it's gratuitously
stupid to do that at 20.
Try

Should you do it at any age? I realize I've made startups sound
pretty hard. If I haven't, let me try again: starting a startup
is really hard. What if it's too hard? How can you tell if you're
up to this challenge?

The answer is the fifth counterintuitive point: you can't tell. Your
life so far may have given you some idea what your prospects might
be if you tried to become a mathematician, or a professional football
player. But unless you've had a very strange life you haven't done
much that was like being a startup founder.
Starting a startup will change you a lot. So what you're trying
to estimate is not just what you are, but what you could grow into,
and who can do that?

For the past 9 years it was my job to predict whether people would
have what it took to start successful startups. It was easy to
tell how smart they were, and most people reading this will be over
that threshold. The hard part was predicting how tough and ambitious they would become. There
may be no one who has more experience at trying to predict that,
so I can tell you how much an expert can know about it, and the
answer is: not much. I learned to keep a completely open mind about
which of the startups in each batch would turn out to be the stars.

The founders sometimes think they know. Some arrive feeling sure
they will ace Y Combinator just as they've aced every one of the (few,
artificial, easy) tests they've faced in life so far. Others arrive
wondering how they got in, and hoping YC doesn't discover whatever
mistake caused it to accept them. But there is little correlation
between founders' initial attitudes and how well their companies
do.

I've read that the same is true in the military — that the
swaggering recruits are no more likely to turn out to be really
tough than the quiet ones. And probably for the same reason: that
the tests involved are so different from the ones in their previous
lives.

If you're absolutely terrified of starting a startup, you probably
shouldn't do it. But if you're merely unsure whether you're up to
it, the only way to find out is to try. Just not now.
Ideas

So if you want to start a startup one day, what should you do in
college? There are only two things you need initially: an idea and
cofounders. And the m.o. for getting both is the same. Which leads
to our sixth and last counterintuitive point: that the way to get
startup ideas is not to try to think of startup ideas.

I've written a whole essay on this,
so I won't repeat it all here. But the short version is that if
you make a conscious effort to think of startup ideas, the ideas
you come up with will not merely be bad, but bad and plausible-sounding,
meaning you'll waste a lot of time on them before realizing they're
bad.

The way to come up with good startup ideas is to take a step back.
Instead of making a conscious effort to think of startup ideas,
turn your mind into the type that startup ideas form in without any
conscious effort. In fact, so unconsciously that you don't even
realize at first that they're startup ideas.

This is not only possible, it's how Apple, Yahoo, Google, and
Facebook all got started. None of these companies were even meant
to be companies at first. They were all just side projects. The
best startups almost have to start as side projects, because great
ideas tend to be such outliers that your conscious mind would reject
them as ideas for companies.

Ok, so how do you turn your mind into the type that startup ideas
form in unconsciously? (1) Learn a lot about things that matter,
then (2) work on problems that interest you (3) with people you
like and respect. The third part, incidentally, is how you get
cofounders at the same time as the idea.

The first time I wrote that paragraph, instead of "learn a lot about
things that matter," I wrote "become good at some technology." But
that prescription, though sufficient, is too narrow. What was
special about Brian Chesky and Joe Gebbia was not that they were
experts in technology. They were good at design, and perhaps even
more importantly, they were good at organizing groups and making
projects happen. So you don't have to work on technology per se,
so long as you work on problems demanding enough to stretch you.

What kind of problems are those? That is very hard to answer in
the general case. History is full of examples of young people who
were working on important problems that no
one else at the time thought were important, and in particular
that their parents didn't think were important. On the other hand,
history is even fuller of examples of parents who thought their
kids were wasting their time and who were right. So how do you
know when you're working on real stuff?
[8]

I know how I know. Real problems are interesting, and I am
self-indulgent in the sense that I always want to work on interesting
things, even if no one else cares about them (in fact, especially
if no one else cares about them), and find it very hard to make
myself work on boring things, even if they're supposed to be
important.

My life is full of case after case where I worked on something just
because it seemed interesting, and it turned out later to be useful
in some worldly way. Y
Combinator itself was something I only did because it seemed
interesting. So I seem to have some sort of internal compass that
helps me out. But I don't know what other people have in their
heads. Maybe if I think more about this I can come up with heuristics
for recognizing genuinely interesting problems, but for the moment
the best I can offer is the hopelessly question-begging advice that
if you have a taste for genuinely interesting problems, indulging
it energetically is the best way to prepare yourself for a startup.
And indeed, probably also the best way to live.
[9]

But although I can't explain in the general case what counts as an
interesting problem, I can tell you about a large subset of them.
If you think of technology as something that's spreading like a
sort of fractal stain, every moving point on the edge represents
an interesting problem. So one guaranteed way to turn your mind
into the type that has good startup ideas is to get yourself to the
leading edge of some technology — to cause yourself, as Paul
Buchheit put it, to "live in the future." When you reach that point,
ideas that will seem to other people uncannily prescient will seem
obvious to you. You may not realize they're startup ideas, but
you'll know they're something that ought to exist.

For example, back at Harvard in the mid 90s a fellow grad student
of my friends Robert and Trevor wrote his own voice over IP software.
He didn't mean it to be a startup, and he never tried to turn it
into one. He just wanted to talk to his girlfriend in Taiwan without
paying for long distance calls, and since he was an expert on
networks it seemed obvious to him that the way to do it was turn
the sound into packets and ship it over the Internet. He never did
any more with his software than talk to his girlfriend, but this
is exactly the way the best startups get started.

So strangely enough the optimal thing to do in college if you want
to be a successful startup founder is not some sort of new, vocational
version of college focused on "entrepreneurship." It's the classic
version of college as education for its own sake. If you want to
start a startup after college, what you should do in college is
learn powerful things. And if you have genuine intellectual
curiosity, that's what you'll naturally tend to do if you just
follow your own inclinations.
[10]

The component of entrepreneurship that really matters is domain
expertise. The way to become Larry Page was to become an expert
on search. And the way to become an expert on search was to be
driven by genuine curiosity, not some ulterior motive.

At its best, starting a startup is merely an ulterior motive for
curiosity. And you'll do it best if you introduce the ulterior
motive toward the end of the process.

So here is the ultimate advice for young would-be startup founders,
boiled down to two words: just learn.
Notes

[1]
Some founders listen more than others, and this tends to be a
predictor of success. One of the things I
remember about the Airbnbs during YC is how intently they listened.

[2]
In fact, this is one of the reasons startups are possible. If
big companies weren't plagued by internal inefficiencies, they'd
be proportionately more effective, leaving less room for startups.

[3]
In a startup you have to spend a lot of time on schleps, but this sort of work is merely
unglamorous, not bogus.

[4]
What should you do if your true calling is gaming the system?
Management consulting.

[5]
The company may not be incorporated, but if you start to get
significant numbers of users, you've started it, whether you realize
it yet or not.

[6]
It shouldn't be that surprising that colleges can't teach
students how to be good startup founders, because they can't teach
them how to be good employees either.

The way universities "teach" students how to be employees is to
hand off the task to companies via internship programs. But you
couldn't do the equivalent thing for startups, because by definition
if the students did well they would never come back.

[7]
Charles Darwin was 22 when he received an invitation to travel
aboard the HMS Beagle as a naturalist. It was only because he was
otherwise unoccupied, to a degree that alarmed his family, that he
could accept it. And yet if he hadn't we probably would not know
his name.

[8]
Parents can sometimes be especially conservative in this
department. There are some whose definition of important problems
includes only those on the critical path to med school.

[9]
I did manage to think of a heuristic for detecting whether you
have a taste for interesting ideas: whether you find known boring
ideas intolerable. Could you endure studying literary theory, or
working in middle management at a large company?

[10]
In fact, if your goal is to start a startup, you can stick
even more closely to the ideal of a liberal education than past
generations have. Back when students focused mainly on getting a
job after college, they thought at least a little about how the
courses they took might look to an employer. And perhaps even
worse, they might shy away from taking a difficult class lest they
get a low grade, which would harm their all-important GPA. Good
news: users don't care what your GPA
was. And I've never heard of investors caring either. Y Combinator
certainly never asks what classes you took in college or what grades
you got in them.
Thanks to Sam Altman, Paul Buchheit, John Collison, Patrick
Collison, Jessica Livingston, Robert Morris, Geoff Ralston, and
Fred Wilson for reading drafts of this.

April 2006

(This essay is derived from a talk at the 2006
Startup School.)

The startups we've funded so far are pretty quick, but they seem
quicker to learn some lessons than others. I think it's because
some things about startups are kind of counterintuitive.

We've now
invested
in enough companies that I've learned a trick
for determining which points are the counterintuitive ones:
they're the ones I have to keep repeating.

So I'm going to number these points, and maybe with future startups
I'll be able to pull off a form of Huffman coding. I'll make them
all read this, and then instead of nagging them in detail, I'll
just be able to say: number four!
1. Release Early.

The thing I probably repeat most is this recipe for a startup: get
a version 1 out fast, then improve it based on users' reactions.

By "release early" I don't mean you should release something full
of bugs, but that you should release something minimal. Users hate
bugs, but they don't seem to mind a minimal version 1, if there's
more coming soon.

There are several reasons it pays to get version 1 done fast. One
is that this is simply the right way to write software, whether for
a startup or not. I've been repeating that since 1993, and I haven't seen much since to
contradict it. I've seen a lot of startups die because they were
too slow to release stuff, and none because they were too quick.
[1]

One of the things that will surprise you if you build something
popular is that you won't know your users. Reddit now has almost half a million
unique visitors a month. Who are all those people? They have no
idea. No web startup does. And since you don't know your users,
it's dangerous to guess what they'll like. Better to release
something and let them tell you.

Wufoo took this to heart and released
their form-builder before the underlying database. You can't even
drive the thing yet, but 83,000 people came to sit in the driver's
seat and hold the steering wheel. And Wufoo got valuable feedback
from it: Linux users complained they used too much Flash, so they
rewrote their software not to. If they'd waited to release everything
at once, they wouldn't have discovered this problem till it was
more deeply wired in.

Even if you had no users, it would still be important to release
quickly, because for a startup the initial release acts as a shakedown
cruise. If anything major is broken-- if the idea's no good,
for example, or the founders hate one another-- the stress of getting
that first version out will expose it. And if you have such problems
you want to find them early.

Perhaps the most important reason to release early, though, is that
it makes you work harder. When you're working on something that
isn't released, problems are intriguing. In something that's out
there, problems are alarming. There is a lot more urgency once you
release. And I think that's precisely why people put it off. They
know they'll have to work a lot harder once they do.
[2]
2. Keep Pumping Out Features.

Of course, "release early" has a second component, without which
it would be bad advice. If you're going to start with something
that doesn't do much, you better improve it fast.

What I find myself repeating is "pump out features." And this rule
isn't just for the initial stages. This is something all startups
should do for as long as they want to be considered startups.

I don't mean, of course, that you should make your application ever
more complex. By "feature" I mean one unit of hacking-- one quantum
of making users' lives better.

As with exercise, improvements beget improvements. If you run every
day, you'll probably feel like running tomorrow. But if you skip
running for a couple weeks, it will be an effort to drag yourself
out. So it is with hacking: the more ideas you implement, the more
ideas you'll have. You should make your system better at least in
some small way every day or two.

This is not just a good way to get development done; it is also a
form of marketing. Users love a site that's constantly improving.
In fact, users expect a site to improve. Imagine if you visited a
site that seemed very good, and then returned two months later and
not one thing had changed. Wouldn't it start to seem lame?
[3]

They'll like you even better when you improve in response to their
comments, because customers are used to companies ignoring them.
If you're the rare exception-- a company that actually listens--
you'll generate fanatical loyalty. You won't need to advertise,
because your users will do it for you.

This seems obvious too, so why do I have to keep repeating it? I
think the problem here is that people get used to how things are.
Once a product gets past the stage where it has glaring flaws, you
start to get used to it, and gradually whatever features it happens
to have become its identity. For example, I doubt many people at
Yahoo (or Google for that matter) realized how much better web mail
could be till Paul Buchheit showed them.

I think the solution is to assume that anything you've made is far
short of what it could be. Force yourself, as a sort of intellectual
exercise, to keep thinking of improvements. Ok, sure, what you
have is perfect. But if you had to change something, what would
it be?

If your product seems finished, there are two possible explanations:
(a) it is finished, or (b) you lack imagination. Experience suggests
(b) is a thousand times more likely.
3. Make Users Happy.

Improving constantly is an instance of a more general rule: make
users happy. One thing all startups have in common is that they
can't force anyone to do anything. They can't force anyone to use
their software, and they can't force anyone to do deals with them.
A startup has to sing for its supper. That's why the successful
ones make great things. They have to, or die.

When you're running a startup you feel like a little bit of debris
blown about by powerful winds. The most powerful wind is users.
They can either catch you and loft you up into the sky, as they did
with Google, or leave you flat on the pavement, as they do with
most startups. Users are a fickle wind, but more powerful than any
other. If they take you up, no competitor can keep you down.

As a little piece of debris, the rational thing for you to do is
not to lie flat, but to curl yourself into a shape the wind will
catch.

I like the wind metaphor because it reminds you how impersonal the
stream of traffic is. The vast majority of people who visit your
site will be casual visitors. It's them you have to design your
site for. The people who really care will find what they want by
themselves.

The median visitor will arrive with their finger poised on the Back
button. Think about your own experience: most links you
follow lead to something lame. Anyone who has used the web for
more than a couple weeks has been trained to click on Back after
following a link. So your site has to say "Wait! Don't click on
Back. This site isn't lame. Look at this, for example."

There are two things you have to do to make people pause. The most
important is to explain, as concisely as possible, what the hell
your site is about. How often have you visited a site that seemed
to assume you already knew what they did? For example, the corporate
site that says the
company makes
enterprise content management solutions for business that enable
organizations to unify people, content and processes to minimize
business risk, accelerate time-to-value and sustain lower total
cost of ownership.
An established company may get away with such an opaque description,
but no startup can. A startup
should be able to explain in one or two sentences exactly what it
does.
[4]
And not just to users. You need this for everyone:
investors, acquirers, partners, reporters, potential employees, and
even current employees. You probably shouldn't even start a company
to do something that can't be described compellingly in one or two
sentences.

The other thing I repeat is to give people everything you've got,
right away. If you have something impressive, try to put it on the
front page, because that's the only one most visitors will see.
Though indeed there's a paradox here: the more you push the good
stuff toward the front, the more likely visitors are to explore
further.
[5]

In the best case these two suggestions get combined: you tell
visitors what your site is about by showing them. One of the
standard pieces of advice in fiction writing is "show, don't tell."
Don't say that a character's angry; have him grind his teeth, or
break his pencil in half. Nothing will explain what your site does
so well as using it.

The industry term here is "conversion." The job of your site is
to convert casual visitors into users-- whatever your definition
of a user is. You can measure this in your growth rate. Either
your site is catching on, or it isn't, and you must know which. If
you have decent growth, you'll win in the end, no matter how obscure
you are now. And if you don't, you need to fix something.
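If it helps to make that concrete, here is a minimal sketch (in
Python, with invented numbers; the definition of "user" is up to
you) of the two measurements this section keeps referring to,
conversion and compounded weekly growth:

    # Sketch: conversion and compounded weekly growth from two snapshots.
    # All numbers are invented for illustration.
    visitors, signups = 12_000, 480
    print(f"conversion: {signups / visitors:.1%}")    # 4.0%

    users_then, users_now, weeks = 1_000, 1_800, 8
    weekly_growth = (users_now / users_then) ** (1 / weeks) - 1
    print(f"weekly growth: {weekly_growth:.1%}")      # about 7.6%

Either number is only as meaningful as your definition of a user,
but tracked consistently it tells you which way the wind is blowing.
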
4. Fear the Right Things.

Another thing I find myself saying a lot is "don't worry." Actually,
it's more often "don't worry about this; worry about that instead."
Startups are right to be paranoid, but they sometimes fear the wrong
things.

Most visible disasters are not so alarming as they seem. Disasters
are normal in a startup: a founder quits, you discover a patent
that covers what you're doing, your servers keep crashing, you run
into an insoluble technical problem, you have to change your name,
a deal falls through-- these are all par for the course. They won't
kill you unless you let them.

Nor will most competitors. A lot of startups worry "what if Google
builds something like us?" Actually big companies are not the ones
you have to worry about-- not even Google. The people at Google
are smart, but no smarter than you; they're not as motivated, because
Google is not going to go out of business if this one product fails;
and even at Google they have a lot of bureaucracy to slow them down.

What you should fear, as a startup, is not the established players,
but other startups you don't know exist yet. They're way more
dangerous than Google because, like you, they're cornered animals.

Looking just at existing competitors can give you a false sense of
security. You should compete against what someone else could be
doing, not just what you can see people doing. A corollary is that
you shouldn't relax just because you have no visible competitors
yet. No matter what your idea, there's someone else out there
working on the same thing.

That's the downside of it being easier to start a startup: more people
are doing it. But I disagree with Caterina Fake when she says that
makes this a bad time to start a startup. More people are starting
startups, but not as many more as could. Most college graduates
still think they have to get a job. The average person can't ignore
something that's been beaten into their head since they were three
just because serving web pages recently got a lot cheaper.

And in any case, competitors are not the biggest threat. Way more
startups hose themselves than get crushed by competitors. There
are a lot of ways to do it, but the three main ones are internal
disputes, inertia, and ignoring users. Each is, by itself, enough
to kill you. But if I had to pick the worst, it would be ignoring
users. If you want a recipe for a startup that's going to die,
here it is: a couple of founders who have some great idea they know
everyone is going to love, and that's what they're going to build,
no matter what.

Almost everyone's initial plan is broken. If companies stuck to
their initial plans, Microsoft would be selling programming languages,
and Apple would be selling printed circuit boards. In both cases
their customers told them what their business should be-- and they
were smart enough to listen.

As Richard Feynman said, the imagination of nature is greater than
the imagination of man. You'll find more interesting things by
looking at the world than you could ever produce just by thinking.
This principle is very powerful. It's why the best abstract painting
still falls short of Leonardo, for example. And it applies to
startups too. No idea for a product could ever be so clever as the
ones you can discover by smashing a beam of prototypes into a beam
of users.
5. Commitment Is a Self-Fulfilling Prophecy.

I now have enough experience with startups to be able to say what
the most important quality is in a startup founder, and it's not
what you might think. The most important quality in a startup
founder is determination. Not intelligence-- determination.

This is a little depressing. I'd like to believe Viaweb succeeded
because we were smart, not merely determined. A lot of people in
the startup world want to believe that. Not just founders, but
investors too. They like the idea of inhabiting a world ruled by
intelligence. And you can tell they really believe this, because
it affects their investment decisions.

Time after time VCs invest in startups founded by eminent professors.
This may work in biotech, where a lot of startups simply commercialize
existing research, but in software you want to invest in students,
not professors. Microsoft, Yahoo, and Google were all founded by
people who dropped out of school to do it. What students lack in
experience they more than make up in dedication.

Of course, if you want to get rich, it's not enough merely to be
determined. You have to be smart too, right? I'd like to think
so, but I've had an experience that convinced me otherwise: I spent
several years living in New York.

You can lose quite a lot in the brains department and it won't kill
you. But lose even a little bit in the commitment department, and
that will kill you very rapidly.

Running a startup is like walking on your hands: it's possible, but
it requires extraordinary effort. If an ordinary employee were
asked to do the things a startup founder has to, he'd be very
indignant. Imagine if you were hired at some big company, and in
addition to writing software ten times faster than you'd ever had
to before, they expected you to answer support calls, administer
the servers, design the web site, cold-call customers, find the
company office space, and go out and get everyone lunch.

And to do all this not in the calm, womb-like atmosphere of a big
company, but against a backdrop of constant disasters. That's the
part that really demands determination. In a startup, there's
always some disaster happening. So if you're the least bit inclined
to find an excuse to quit, there's always one right there.

But if you lack commitment, chances are it will have been hurting
you long before you actually quit. Everyone who deals with startups
knows how important commitment is, so if they sense you're ambivalent,
they won't give you much attention. If you lack commitment, you'll
just find that for some mysterious reason good things happen to
your competitors but not to you. If you lack commitment, it will
seem to you that you're unlucky.

Whereas if you're determined to stick around, people will pay
attention to you, because odds are they'll have to deal with you
later. You're a local, not just a tourist, so everyone has to come
to terms with you.

At Y Combinator we sometimes mistakenly fund teams who have the
attitude that they're going to give this startup thing a shot for
three months, and if something great happens, they'll stick with
it-- "something great" meaning either that someone wants to buy
them or invest millions of dollars in them. But if this is your
attitude, "something great" is very unlikely to happen to you,
because both acquirers and investors judge you by your level of
commitment.

If an acquirer thinks you're going to stick around no matter what,
they'll be more likely to buy you, because if they don't and you
stick around, you'll probably grow, your price will go up, and
they'll be left wishing they'd bought you earlier. Ditto for
investors. What really motivates investors, even big VCs, is not
the hope of good returns, but the fear of missing out.
[6]
So if
you make it clear you're going to succeed no matter what, and the only
reason you need them is to make it happen a little faster, you're
much more likely to get money.

You can't fake this. The only way to convince everyone that you're
ready to fight to the death is actually to be ready to.

You have to be the right kind of determined, though. I carefully
chose the word determined rather than stubborn, because stubbornness
is a disastrous quality in a startup. You have to be determined,
but flexible, like a running back. A successful running back doesn't
just put his head down and try to run through people. He improvises:
if someone appears in front of him, he runs around them; if someone
tries to grab him, he spins out of their grip; he'll even run in
the wrong direction briefly if that will help. The one thing he'll
never do is stand still.
[7]
6. There Is Always Room.

I was talking recently to a startup founder about whether it might
be good to add a social component to their software. He said he
didn't think so, because the whole social thing was tapped out.
Really? So in a hundred years the only social networking sites
will be the Facebook, MySpace, Flickr, and Del.icio.us? Not likely.

There is always room for new stuff. At every point in history,
even the darkest bits of the dark ages, people were discovering
things that made everyone say "why didn't anyone think of that
before?" We know this continued to be true up till 2004, when the
Facebook was founded-- though strictly speaking someone else did
think of that.

The reason we don't see the opportunities all around us is that we
adjust to however things are, and assume that's how things have to
be. For example, it would seem crazy to most people to try to make
a better search engine than Google. Surely that field, at least,
is tapped out. Really? In a hundred years-- or even twenty-- are
people still going to search for information using something like
the current Google? Even Google probably doesn't think that.

In particular, I don't think there's any limit to the number of
startups. Sometimes you hear people saying "All these guys starting
startups now are going to be disappointed. How many little startups
are Google and Yahoo going to buy, after all?" That sounds cleverly
skeptical, but I can prove it's mistaken. No one proposes that
there's some limit to the number of people who can be employed in
an economy consisting of big, slow-moving companies with a couple
thousand people each. Why should there be any limit to the number
who could be employed by small, fast-moving companies with ten each?
It seems to me the only limit would be the number of people who
want to work that hard.

The limit on the number of startups is not the number that can get
acquired by Google and Yahoo-- though it seems even that should
be unlimited, if the startups were actually worth buying-- but the
amount of wealth that can be created. And I don't think there's
any limit on that, except cosmological ones.

So for all practical purposes, there is no limit to the number of
startups. Startups make wealth, which means they make things people
want, and if there's a limit on the number of things people want,
we are nowhere near it. I still don't even have a flying car.
7. Don't Get Your Hopes Up.

This is another one I've been repeating since long before Y Combinator.
It was practically the corporate motto at Viaweb.

Startup founders are naturally optimistic. They wouldn't do it
otherwise. But you should treat your optimism the way you'd treat
the core of a nuclear reactor: as a source of power that's also
very dangerous. You have to build a shield around it, or it will
fry you.

The shielding of a reactor is not uniform; the reactor would be
useless if it were. It's pierced in a few places to let pipes in.
An optimism shield has to be pierced too. I think the place to
draw the line is between what you expect of yourself, and what you
expect of other people. It's ok to be optimistic about what you
can do, but assume the worst about machines and other people.

This is particularly necessary in a startup, because you tend to
be pushing the limits of whatever you're doing. So things don't
happen in the smooth, predictable way they do in the rest of the
world. Things change suddenly, and usually for the worse.

Shielding your optimism is nowhere more important than with deals.
If your startup is doing a deal, just assume it's not going to
happen. The VCs who say they're going to invest in you aren't.
The company that says they're going to buy you isn't. The big
customer who wants to use your system in their whole company won't.
Then if things work out you can be pleasantly surprised.

The reason I warn startups not to get their hopes up is not to save
them from being disappointed when things fall through. It's
for a more practical reason: to prevent them from leaning their
company against something that's going to fall over, taking them
with it.

For example, if someone says they want to invest in you, there's a
natural tendency to stop looking for other investors. That's why
people proposing deals seem so positive: they want you to
stop looking. And you want to stop too, because doing deals is a
pain. Raising money, in particular, is a huge time sink. So you
have to consciously force yourself to keep looking.

Even if you ultimately do the first deal, it will be to your advantage
to have kept looking, because you'll get better terms. Deals are
dynamic; unless you're negotiating with someone unusually honest,
there's not a single point where you shake hands and the deal's
done. There are usually a lot of subsidiary questions to be cleared
up after the handshake, and if the other side senses weakness-- if
they sense you need this deal-- they will be very tempted to screw
you in the details.

VCs and corp dev guys are professional negotiators. They're trained
to take advantage of weakness.
[8]
So while they're often nice
guys, they just can't help it. And as pros they do this more than
you. So don't even try to bluff them. The only way a startup can
have any leverage in a deal is genuinely not to need it. And if
you don't believe in a deal, you'll be less likely to depend on it.

So I want to plant a hypnotic suggestion in your heads: when you
hear someone say the words "we want to invest in you" or "we want
to acquire you," I want the following phrase to appear automatically
in your head: don't get your hopes up. Just continue running
your company as if this deal didn't exist. Nothing is more likely
to make it close.

The way to succeed in a startup is to focus on the goal of getting
lots of users, and keep walking swiftly toward it while investors
and acquirers scurry alongside trying to wave money in your face.
Speed, not Money

The way I've described it, starting a startup sounds pretty stressful.
It is. When I talk to the founders of the companies we've funded,
they all say the same thing: I knew it would be hard, but I didn't
realize it would be this hard.

So why do it? It would be worth enduring a lot of pain and stress
to do something grand or heroic, but just to make money? Is making
money really that important?

No, not really. It seems ridiculous to me when people take business
too seriously. I regard making money as a boring errand to be got
out of the way as soon as possible. There is nothing grand or
heroic about starting a startup per se.

So why do I spend so much time thinking about startups? I'll tell
you why. Economically, a startup is best seen not as a way to get
rich, but as a way to work faster. You have to make a living, and
a startup is a way to get that done quickly, instead of letting it
drag on through your whole life.
[9]

We take it for granted most of the time, but human life is fairly
miraculous. It is also palpably short. You're given this marvellous
thing, and then poof, it's taken away. You can see why people
invent gods to explain it. But even to people who don't believe
in gods, life commands respect. There are times in most of our
lives when the days go by in a blur, and almost everyone has a
sense, when this happens, of wasting something precious. As Ben
Franklin said, if you love life, don't waste time, because time is
what life is made of.

So no, there's nothing particularly grand about making money. That's
not what makes startups worth the trouble. What's important about
startups is the speed. By compressing the dull but necessary task
of making a living into the smallest possible time, you show respect
for life, and there is something grand about that.

Notes

[1]
Startups can die from releasing something full of bugs, and not
fixing them fast enough, but I don't know of any that died from
releasing something stable but minimal very early, then promptly
improving it.

[2]
I know this is why I haven't released Arc. The moment I do,
I'll have people nagging me for features.

[3]
A web site is different from a book or movie or desktop application
in this respect. Users judge a site not as a single snapshot, but
as an animation with multiple frames. Of the two, I'd say the rate of
improvement is more important to users than where you currently
are.

[4]
It should not always tell this to users, however. For example,
MySpace is basically a replacement mall for mallrats. But it was
wiser for them, initially, to pretend that the site was about bands.

[5]
Similarly, don't make users register to try your site. Maybe
what you have is so valuable that visitors should gladly register
to get at it. But they've been trained to expect the opposite.
Most of the things they've tried on the web have sucked-- and
probably especially those that made them register.

[6]
VCs have rational reasons for behaving this way. They don't
make their money (if they make money) off their median investments.
In a typical fund, half the companies fail, most of the rest generate
mediocre returns, and one or two "make the fund" by succeeding
spectacularly. So if they miss just a few of the most promising
opportunities, it could hose the whole fund.
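A toy calculation (every number invented) shows why missing the
outliers is fatal:

    # Toy model of power-law fund returns; all numbers are made up.
    # A $100M fund: 20 investments of $5M each.
    multiples = [0] * 10 + [1] * 9 + [30]   # half fail, most tread water
    returned = sum(m * 5_000_000 for m in multiples)
    print(returned / 100_000_000)           # 1.95x for the whole fund
    # Drop the single 30x company and the same fund returns 0.45x.
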
[7]
The attitude of a running back doesn't translate to soccer.
Though it looks great when a forward dribbles past multiple defenders,
a player who persists in trying such things will do worse in the
long term than one who passes.

[8]
The reason Y Combinator never negotiates valuations
is that we're not professional negotiators, and don't want to turn
into them.

[9]
There are two ways to do
work you love: (a) to make money, then work
on what you love, or (b) to get a job where you get paid to work on
stuff you love. In practice the first phases of both
consist mostly of unedifying schleps, and in (b) the second phase is less
secure.

Thanks to Sam Altman, Trevor Blackwell, Beau Hartshorne, Jessica
Livingston, and Robert Morris for reading drafts of this.

May 2001
(I wrote this article to help myself understand exactly
what McCarthy discovered. You don't need to know this stuff
to program in Lisp, but it should be helpful to
anyone who wants to
understand the essence of Lisp both in the sense of its
origins and its semantic core. The fact that it has such a core
is one of Lisp's distinguishing features, and the reason why,
unlike other languages, Lisp has dialects.)

In 1960, John
McCarthy published a remarkable paper in
which he did for programming something like what Euclid did for
geometry. He showed how, given a handful of simple
operators and a notation for functions, you can
build a whole programming language.
He called this language Lisp, for "List Processing,"
because one of his key ideas was to use a simple
data structure called a list for both
code and data.

It's worth understanding what McCarthy discovered, not
just as a landmark in the history of computers, but as
a model for what programming is tending to become in
our own time. It seems to me that there have been
two really clean, consistent models of programming so
far: the C model and the Lisp model.
These two seem points of high ground, with swampy lowlands
between them. As computers have grown more powerful,
the new languages being developed have been moving
steadily toward the Lisp model. A popular recipe
for new programming languages in the past 20 years
has been to take the C model of computing and add to
it, piecemeal, parts taken from the Lisp model,
like runtime typing and garbage collection.

In this article I'm going to try to explain in the
simplest possible terms what McCarthy discovered.
The point is not just to learn about an interesting
theoretical result someone figured out forty years ago,
but to show where languages are heading.
The unusual thing about Lisp-- in fact, the defining
quality of Lisp-- is that it can be written in
itself. To understand what McCarthy meant by this,
we're going to retrace his steps, with his mathematical
notation translated into running Common Lisp code.
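As a taste of what that means in practice, here is a toy sketch of
my own (in Python rather than the Common Lisp the article uses, and
covering only a few of McCarthy's primitive operators) of an
evaluator that runs list-structured code directly:

    # Toy sketch: Lisp-style expressions as Python lists, evaluated
    # directly. Symbols are strings; everything else is a list.
    # This is illustrative only, not McCarthy's definition.
    def lisp_eval(x, env):
        if isinstance(x, str):            # a symbol: look up its value
            return env[x]
        op = x[0]
        if op == "quote":                 # (quote e) -> e, unevaluated
            return x[1]
        if op == "atom":                  # t if the value is not a list
            return "t" if not isinstance(lisp_eval(x[1], env), list) else []
        if op == "car":                   # first element
            return lisp_eval(x[1], env)[0]
        if op == "cdr":                   # everything after the first
            return lisp_eval(x[1], env)[1:]
        if op == "cons":                  # prepend an element
            return [lisp_eval(x[1], env)] + lisp_eval(x[2], env)
        if op == "cond":                  # (cond (p1 e1) (p2 e2) ...)
            for p, e in x[1:]:
                if lisp_eval(p, env) == "t":
                    return lisp_eval(e, env)

    print(lisp_eval(["cons", ["quote", "a"], ["quote", ["b", "c"]]], {}))
    # -> ['a', 'b', 'c']

Even at this scale the point is visible: the program being evaluated
is made of the same lists the evaluator itself manipulates.
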
Aaron Swartz created a scraped
feed
of the essays page.

October 2015

This will come as a surprise to a lot of people, but in some cases
it's possible to detect bias in a selection process without knowing
anything about the applicant pool. Which is exciting because among
other things it means third parties can use this technique to detect
bias whether those doing the selecting want them to or not.

You can use this technique whenever (a) you have at least
a random sample of the applicants that were selected, (b) their
subsequent performance is measured, and (c) the groups of
applicants you're comparing have roughly equal distribution of ability.

How does it work? Think about what it means to be biased. What
it means for a selection process to be biased against applicants
of type x is that it's harder for them to make it through. Which
means applicants of type x have to be better to get selected than
applicants not of type x.
[1]
Which means applicants of type x
who do make it through the selection process will outperform other
successful applicants. And if the performance of all the successful
applicants is measured, you'll know if they do.

Of course, the test you use to measure performance must be a valid
one. And in particular it must not be invalidated by the bias you're
trying to measure.
But there are some domains where performance can be measured, and
in those detecting bias is straightforward. Want to know if the
selection process was biased against some type of applicant? Check
whether they outperform the others. This is not just a heuristic
for detecting bias. It's what bias means.

For example, many suspect that venture capital firms are biased
against female founders. This would be easy to detect: among their
portfolio companies, do startups with female founders outperform
those without? A couple months ago, one VC firm (almost certainly
unintentionally) published a study showing bias of this type. First
Round Capital found that among its portfolio companies, startups
with female founders outperformed
those without by 63%.
[2]

The reason I began by saying that this technique would come as a
surprise to many people is that we so rarely see analyses of this
type. I'm sure it will come as a surprise to First Round that they
performed one. I doubt anyone there realized that by limiting their
sample to their own portfolio, they were producing a study not of
startup trends but of their own biases when selecting companies.

I predict we'll see this technique used more in the future. The
information needed to conduct such studies is increasingly available.
Data about who applies for things is usually closely guarded by the
organizations selecting them, but nowadays data about who gets
selected is often publicly available to anyone who takes the trouble
to aggregate it.
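To show how little machinery such a third-party check needs, here
is a sketch of the whole test in Python (the data, field names, and
numbers are all invented for illustration; a real study would also
want a significance test):

    # Minimal sketch of the outperformance test for selection bias.
    # Inputs: only the *selected* applicants, each with a group label
    # and a later performance measure.
    def mean(xs):
        return sum(xs) / len(xs)

    def outperformance(selected, in_group, performance):
        # Ratio of mean performance, group x vs. everyone else.
        # Well above 1.0 suggests the selection bar was higher for x.
        xs   = [performance(a) for a in selected if in_group(a)]
        rest = [performance(a) for a in selected if not in_group(a)]
        return mean(xs) / mean(rest)

    # Toy example: portfolio companies with a multiple on investment.
    portfolio = [
        {"female_founder": True,  "multiple": 4.2},
        {"female_founder": True,  "multiple": 3.1},
        {"female_founder": False, "multiple": 2.0},
        {"female_founder": False, "multiple": 2.4},
    ]
    ratio = outperformance(portfolio,
                           in_group=lambda c: c["female_founder"],
                           performance=lambda c: c["multiple"])
    print(f"group x outperforms the rest by {(ratio - 1) * 100:.0f}%")
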
Notes

[1]
This technique wouldn't work if the selection process looked
for different things from different types of applicants—for
example, if an employer hired men based on their ability but women
based on their appearance.

[2]
As Paul Buchheit points out, First Round excluded their most
successful investment, Uber, from the study. And while it
makes sense to exclude outliers from some types of studies,
studies of returns from startup investing, which is all about
hitting outliers, are not one of them.
Thanks to Sam Altman, Jessica Livingston, and Geoff Ralston for reading
drafts of this.

November 2005

In the next few years, venture capital funds will find themselves
squeezed from four directions. They're already stuck with a seller's
market, because of the huge amounts they raised at the end of the
Bubble and still haven't invested. This by itself is not the end
of the world. In fact, it's just a more extreme version of the
norm
in the VC business: too much money chasing too few deals.

Unfortunately, those few deals now want less and less money, because
it's getting so cheap to start a startup. The four causes: open
source, which makes software free; Moore's law, which makes hardware
geometrically closer to free; the Web, which makes promotion free
if you're good; and better languages, which make development a lot
cheaper.

When we started our startup in 1995, the first three were our biggest
expenses. We had to pay $5000 for the Netscape Commerce Server,
the only software that then supported secure http connections. We
paid $3000 for a server with a 90 MHz processor and 32 meg of
memory. And we paid a PR firm about $30,000 to promote our launch.

Now you could get all three for nothing. You can get the software
for free; people throw away computers more powerful than our first
server; and if you make something good you can generate ten times
as much traffic by word of mouth online than our first PR firm got
through the print media.

And of course another big change for the average startup is that
programming languages have improved-- or rather, the median language has. At most startups ten years
ago, software development meant ten programmers writing code in
C++. Now the same work might be done by one or two using Python
or Ruby.

During the Bubble, a lot of people predicted that startups would
outsource their development to India. I think a better model for
the future is David Heinemeier Hansson, who outsourced his development
to a more powerful language instead. A lot of well-known applications
are now, like BaseCamp, written by just one programmer. And one
guy is more than 10x cheaper than ten, because (a) he won't waste
any time in meetings, and (b) since he's probably a founder, he can
pay himself nothing.

Because starting a startup is so cheap, venture capitalists now
often want to give startups more money than the startups want to
take. VCs like to invest several million at a time. But as one
VC told me after a startup he funded would only take about half a
million, "I don't know what we're going to do. Maybe we'll just
have to give some of it back." Meaning give some of the fund back
to the institutional investors who supplied it, because it wasn't
going to be possible to invest it all.

Into this already bad situation comes the third problem: Sarbanes-Oxley.
Sarbanes-Oxley is a law, passed after the Bubble, that drastically
increases the regulatory burden on public companies. And in addition
to the cost of compliance, which is at least two million dollars a
year, the law introduces frightening legal exposure for corporate
officers. An experienced CFO I know said flatly: "I would not
want to be CFO of a public company now."

You might think that responsible corporate governance is an area
where you can't go too far. But you can go too far in any law, and
this remark convinced me that Sarbanes-Oxley must have. This CFO
is both the smartest and the most upstanding money guy I know. If
Sarbanes-Oxley deters people like him from being CFOs of public
companies, that's proof enough that it's broken.

Largely because of Sarbanes-Oxley, few startups go public now. For
all practical purposes, succeeding now equals getting bought. Which
means VCs are now in the business of finding promising little 2-3
man startups and pumping them up into companies that cost $100
million to acquire. They didn't mean to be in this business; it's
just what their business has evolved into.

Hence the fourth problem: the acquirers have begun to realize they
can buy wholesale. Why should they wait for VCs to make the startups
they want more expensive? Most of what the VCs add, acquirers don't
want anyway. The acquirers already have brand recognition and HR
departments. What they really want is the software and the developers,
and that's what the startup is in the early phase: concentrated
software and developers.

Google, typically, seems to have been the first to figure this out.
"Bring us your startups early," said Google's speaker at the Startup School. They're quite
explicit about it: they like to acquire startups at just the point
where they would do a Series A round. (The Series A round is the
first round of real VC funding; it usually happens in the first
year.) It is a brilliant strategy, and one that other big technology
companies will no doubt try to duplicate. Unless they want to have
still more of their lunch eaten by Google.

Of course, Google has an advantage in buying startups: a lot of the
people there are rich, or expect to be when their options vest.
Ordinary employees find it very hard to recommend an acquisition;
it's just too annoying to see a bunch of twenty year olds get rich
when you're still working for salary. Even if it's the right thing
for your company to do.

The Solution(s)

Bad as things look now, there is a way for VCs to save themselves.
They need to do two things, one of which won't surprise them, and
another that will seem anathema.

Let's start with the obvious one: lobby to get Sarbanes-Oxley
loosened. This law was created to prevent future Enrons, not to
destroy the IPO market. Since the IPO market was practically dead
when it passed, few saw what bad effects it would have. But now
that technology has recovered from the last bust, we can see clearly
what a bottleneck Sarbanes-Oxley has become.

Startups are fragile plants—seedlings, in fact. These seedlings
are worth protecting, because they grow into the trees of the
economy. Much of the economy's growth is their growth. I think
most politicians realize that. But they don't realize just how
fragile startups are, and how easily they can become collateral
damage of laws meant to fix some other problem.

Still more dangerously, when you destroy startups, they make very
little noise. If you step on the toes of the coal industry, you'll
hear about it. But if you inadvertently squash the startup industry,
all that happens is that the founders of the next Google stay in
grad school instead of starting a company.

My second suggestion will seem shocking to VCs: let founders cash
out partially in the Series A round. At the moment, when VCs invest
in a startup, all the stock they get is newly issued and all the
money goes to the company. They could buy some stock directly from
the founders as well.

Most VCs have an almost religious rule against doing this. They
don't want founders to get a penny till the company is sold or goes
public. VCs are obsessed with control, and they worry that they'll
have less leverage over the founders if the founders have any money.

This is a dumb plan. In fact, letting the founders sell a little stock
early would generally be better for the company, because it would
cause the founders' attitudes toward risk to be aligned with the
VCs'. As things currently work, their attitudes toward risk tend
to be diametrically opposed: the founders, who have nothing, would
prefer a 100% chance of $1 million to a 20% chance of $10 million,
while the VCs can afford to be "rational" and prefer the latter.
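
To see the asymmetry in numbers: a minimal sketch in Common Lisp, with log utility standing in for the founders' risk aversion. The log utility is my assumption for illustration, not anything the parties actually compute:

  ;; Expected dollar value favors the gamble:
  (* 1.0 1000000)         ; the sure thing        => 1000000.0
  (* 0.2 10000000)        ; 20% of $10M           => 2000000.0

  ;; But with a concave utility of wealth (log, say), the ranking
  ;; flips for someone starting from nothing:
  (log 1000000)           ; utility of the sure $1M        => ~13.8
  (* 0.2 (log 10000000))  ; expected utility of the gamble => ~3.2

On expected dollars the VCs are right to prefer the gamble; on any sharply concave utility the founders are right to prefer the sure million. Letting founders take a little off the table narrows that gap.
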
Whatever they say, the reason founders are selling their companies
early instead of doing Series A rounds is that they get paid up
front. That first million is just worth so much more than the
subsequent ones. If founders could sell a little stock early,
they'd be happy to take VC money and bet the rest on a bigger
outcome.

So why not let the founders have that first million, or at least
half million? The VCs would get the same number of shares for the
money. So what if some of the money would go to the
founders instead of the company?

Some VCs will say this is
unthinkable—that they want all their money to be put to work
growing the company. But the fact is, the huge size of current VC
investments is dictated by the structure
of VC funds, not the needs of startups. Often as not these large
investments go to work destroying the company rather than growing
it.

The angel investors who funded our startup let the founders sell
some stock directly to them, and it was a good deal for everyone.
The angels made a huge return on that investment, so they're happy.
And for us founders it blunted the terrifying all-or-nothingness
of a startup, which in its raw form is more a distraction than a
motivator.

If VCs are frightened at the idea of letting founders partially
cash out, let me tell them something still more frightening: you
are now competing directly with Google.
Thanks to Trevor Blackwell, Sarah Harlin, Jessica
Livingston, and Robert Morris for reading drafts of this. |
May 2021

There's one kind of opinion I'd be very afraid to express publicly.
If someone I knew to be both a domain expert and a reasonable person
proposed an idea that sounded preposterous, I'd be very reluctant
to say "That will never work."Anyone who has studied the history of ideas, and especially the
history of science, knows that's how big things start. Someone
proposes an idea that sounds crazy, most people dismiss it, then
it gradually takes over the world.

Most implausible-sounding ideas are in fact bad and could be safely
dismissed. But not when they're proposed by reasonable domain
experts. If the person proposing the idea is reasonable, then they
know how implausible it sounds. And yet they're proposing it anyway.
That suggests they know something you don't. And if they have deep
domain expertise, that's probably the source of it.
[1]

Such ideas are not merely unsafe to dismiss, but disproportionately
likely to be interesting. When the average person proposes an
implausible-sounding idea, its implausibility is evidence of their
incompetence. But when a reasonable domain expert does it, the
situation is reversed. There's something like an efficient market
here: on average the ideas that seem craziest will, if correct,
have the biggest effect. So if you can eliminate the theory that
the person proposing an implausible-sounding idea is incompetent,
its implausibility switches from evidence that it's boring to
evidence that it's exciting.
[2]

Such ideas are not guaranteed to work. But they don't have to be.
They just have to be sufficiently good bets — to have sufficiently
high expected value. And I think on average they do. I think if you
bet on the entire set of implausible-sounding ideas proposed by
reasonable domain experts, you'd end up net ahead.
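
A toy version of that bet, in Common Lisp, with invented numbers (the essay doesn't supply any):

  ;; Back n implausible-sounding ideas at unit cost each; a fraction
  ;; p succeed and pay off 50x.
  (defun portfolio-net (n p payoff)
    (- (* n p payoff) n))

  (portfolio-net 100 0.05 50)   ; => 150.0, ahead despite 95 failures
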
The reason is that everyone is too conservative. The word "paradigm"
is overused, but this is a case where it's warranted. Everyone is
too much in the grip of the current paradigm. Even the people who
have the new ideas undervalue them initially. Which means that
before they reach the stage of proposing them publicly, they've
already subjected them to an excessively strict filter.
[3]

The wise response to such an idea is not to make statements, but
to ask questions, because there's a real mystery here. Why has this
smart and reasonable person proposed an idea that seems so wrong?
Are they mistaken, or are you? One of you has to be. If you're the
one who's mistaken, that would be good to know, because it means
there's a hole in your model of the world. But even if they're
mistaken, it should be interesting to learn why. A trap that an
expert falls into is one you have to worry about too.

This all seems pretty obvious. And yet there are clearly a lot of
people who don't share my fear of dismissing new ideas. Why do they
do it? Why risk looking like a jerk now and a fool later, instead
of just reserving judgement?

One reason they do it is envy. If you propose a radical new idea
and it succeeds, your reputation (and perhaps also your wealth)
will increase proportionally. Some people would be envious if that
happened, and this potential envy propagates back into a conviction
that you must be wrong.

Another reason people dismiss new ideas is that it's an easy way
to seem sophisticated. When a new idea first emerges, it usually
seems pretty feeble. It's a mere hatchling. Received wisdom is a
full-grown eagle by comparison. So it's easy to launch a devastating
attack on a new idea, and anyone who does will seem clever to those
who don't understand this asymmetry.

This phenomenon is exacerbated by the difference between how those
working on new ideas and those attacking them are rewarded. The
rewards for working on new ideas are weighted by the value of the
outcome. So it's worth working on something that only has a 10%
chance of succeeding if it would make things more than 10x better.
Whereas the rewards for attacking new ideas are roughly constant;
such attacks seem roughly equally clever regardless of the target.

People will also attack new ideas when they have a vested interest
in the old ones. It's not surprising, for example, that some of
Darwin's harshest critics were churchmen. People build whole careers
on some ideas. When someone claims they're false or obsolete, they
feel threatened.

The lowest form of dismissal is mere factionalism: to automatically
dismiss any idea associated with the opposing faction. The lowest
form of all is to dismiss an idea because of who proposed it.

But the main thing that leads reasonable people to dismiss new ideas
is the same thing that holds people back from proposing them: the
sheer pervasiveness of the current paradigm. It doesn't just affect
the way we think; it is the Lego blocks we build thoughts out of.
Popping out of the current paradigm is something only a few people
can do. And even they usually have to suppress their intuitions at
first, like a pilot flying through cloud who has to trust his
instruments over his sense of balance.
[4]

Paradigms don't just define our present thinking. They also vacuum
up the trail of crumbs that led to them, making our standards for
new ideas impossibly high. The current paradigm seems so perfect
to us, its offspring, that we imagine it must have been accepted
completely as soon as it was discovered — that whatever the church thought
of the heliocentric model, astronomers must have been convinced as
soon as Copernicus proposed it. Far, in fact, from it. Copernicus
published the heliocentric model in 1543, but it wasn't till the
mid seventeenth century that the balance of scientific opinion
shifted in its favor.
[5]

Few understand how feeble new ideas look when they first appear.
So if you want to have new ideas yourself, one of the most valuable
things you can do is to learn what they look like when they're born.
Read about how new ideas happened, and try to get yourself into the
heads of people at the time. How did things look to them, when the
new idea was only half-finished, and even the person who had it was
only half-convinced it was right?

But you don't have to stop at history. You can observe big new ideas
being born all around you right now. Just look for a reasonable
domain expert proposing something that sounds wrong.

If you're nice, as well as wise, you won't merely resist attacking
such people, but encourage them. Having new ideas is a lonely
business. Only those who've tried it know how lonely. These people
need your help. And if you help them, you'll probably learn something
in the process.

Notes

[1]
This domain expertise could be in another field. Indeed,
such crossovers tend to be particularly promising.

[2]
I'm not claiming this principle extends much beyond math,
engineering, and the hard sciences. In politics, for example,
crazy-sounding ideas generally are as bad as they sound. Though
arguably this is not an exception, because the people who propose
them are not in fact domain experts; politicians are domain experts
in political tactics, like how to get elected and how to get
legislation passed, but not in the world that policy acts upon.
Perhaps no one could be.

[3]
This sense of "paradigm" was defined by Thomas Kuhn in his
Structure of Scientific Revolutions, but I also recommend his
Copernican Revolution, where you can see him at work developing the
idea.

[4]
This is one reason people with a touch of Asperger's may have
an advantage in discovering new ideas. They're always flying on
instruments.

[5]
Hall, Rupert. From Galileo to Newton. Collins, 1963. This
book is particularly good at getting into contemporaries' heads.

Thanks to Trevor Blackwell, Patrick Collison, Suhail Doshi, Daniel
Gackle, Jessica Livingston, and Robert Morris for reading drafts of this. |
May 2003

If Lisp is so great, why don't more people use it? I was
asked this question by a student in the audience at a
talk I gave recently. Not for the first time, either.

In languages, as in so many things, there's not much
correlation between popularity and quality. Why does
John Grisham (King of Torts sales rank, 44) outsell
Jane Austen (Pride and Prejudice sales rank, 6191)?
Would even Grisham claim that it's because he's a better
writer?

Here's the first sentence of Pride and Prejudice:
It is a truth universally acknowledged, that a single man
in possession of a good fortune must be in want of a
wife.
"It is a truth universally acknowledged?" Long words for
the first sentence of a love story.

Like Jane Austen, Lisp looks hard. Its syntax, or lack
of syntax, makes it look completely unlike
the languages
most people are used to. Before I learned Lisp, I was afraid
of it too. I recently came across a notebook from 1983
in which I'd written:
I suppose I should learn Lisp, but it seems so foreign.
Fortunately, I was 19 at the time and not too resistant to learning
new things. I was so ignorant that learning
almost anything meant learning new things.

People frightened by Lisp make up other reasons for not
using it. The standard
excuse, back when C was the default language, was that Lisp
was too slow. Now that Lisp dialects are among
the faster
languages available, that excuse has gone away.
Now the standard excuse is openly circular: that other languages
are more popular.

(Beware of such reasoning. It gets you Windows.)

Popularity is always self-perpetuating, but it's especially
so in programming languages. More libraries
get written for popular languages, which makes them still
more popular. Programs often have to work with existing programs,
and this is easier if they're written in the same language,
so languages spread from program to program like a virus.
And managers prefer popular languages, because they give them
more leverage over developers, who can more easily be replaced.

Indeed, if programming languages were all more or less equivalent,
there would be little justification for using any but the most
popular. But they aren't all equivalent, not by a long
shot. And that's why less popular languages, like Jane Austen's
novels, continue to survive at all. When everyone else is reading
the latest John Grisham novel, there will always be a few people
reading Jane Austen instead. |
January 2003

(This article is derived from a keynote talk at the fall 2002 meeting
of NEPLS.)

Visitors to this country are often surprised to find that
Americans like to begin a conversation by asking "what do you do?"
I've never liked this question. I've rarely had a
neat answer to it. But I think I have finally solved the problem.
Now, when someone asks me what I do, I look them straight
in the eye and say "I'm designing a
new dialect of Lisp."
I recommend this answer to anyone who doesn't like being asked what
they do. The conversation will turn immediately to other topics.

I don't consider myself to be doing research on programming languages.
I'm just designing one, in the same way that someone might design
a building or a chair or a new typeface.
I'm not trying to discover anything new. I just want
to make a language that will be good to program in. In some ways,
this assumption makes life a lot easier.

The difference between design and research seems to be a question
of new versus good. Design doesn't have to be new, but it has to
be good. Research doesn't have to be good, but it has to be new.
I think these two paths converge at the top: the best design
surpasses its predecessors by using new ideas, and the best research
solves problems that are not only new, but actually worth solving.
So ultimately we're aiming for the same destination, just approaching
it from different directions.

What I'm going to talk about today is what your target looks like
from the back. What do you do differently when you treat
programming languages as a design problem instead of a research topic?

The biggest difference is that you focus more on the user.
Design begins by asking, who is this
for and what do they need from it? A good architect,
for example, does not begin by creating a design that he then
imposes on the users, but by studying the intended users and figuring
out what they need.

Notice I said "what they need," not "what they want." I don't mean
to give the impression that working as a designer means working as
a sort of short-order cook, making whatever the client tells you
to. This varies from field to field in the arts, but
I don't think there is any field in which the best work is done by
the people who just make exactly what the customers tell them to.

The customer is always right in
the sense that the measure of good design is how well it works
for the user. If you make a novel that bores everyone, or a chair
that's horribly uncomfortable to sit in, then you've done a bad
job, period. It's no defense to say that the novel or the chair
is designed according to the most advanced theoretical principles.

And yet, making what works for the user doesn't mean simply making
what the user tells you to. Users don't know what all the choices
are, and are often mistaken about what they really want.

The answer to the paradox, I think, is that you have to design
for the user, but you have to design what the user needs, not simply
what he says he wants.
It's much like being a doctor. You can't just treat a patient's
symptoms. When a patient tells you his symptoms, you have to figure
out what's actually wrong with him, and treat that.

This focus on the user is a kind of axiom from which most of the
practice of good design can be derived, and around which most design
issues center.

If good design must do what the user needs, who is the user? When
I say that design must be for users, I don't mean to imply that good
design aims at some kind of
lowest common denominator. You can pick any group of users you
want. If you're designing a tool, for example, you can design it
for anyone from beginners to experts, and what's good design
for one group might be bad for another. The point
is, you have to pick some group of users. I don't think you can
even talk about good or bad design except with
reference to some intended user.

You're most likely to get good design if the intended users include
the designer himself. When you design something
for a group that doesn't include you, it tends to be for people
you consider to be less sophisticated than you, not more sophisticated.

That's a problem, because looking down on the user, however benevolently,
seems inevitably to corrupt the designer.
I suspect that very few housing
projects in the US were designed by architects who expected to live
in them. You can see the same thing
in programming languages. C, Lisp, and Smalltalk were created for
their own designers to use. Cobol, Ada, and Java, were created
for other people to use.

If you think you're designing something for idiots, the odds are
that you're not designing something good, even for idiots.
Even if you're designing something for the most sophisticated
users, though, you're still designing for humans. It's different
in research. In math you
don't choose abstractions because they're
easy for humans to understand; you choose whichever make the
proof shorter. I think this is true for the sciences generally.
Scientific ideas are not meant to be ergonomic.

Over in the arts, things are very different. Design is
all about people. The human body is a strange
thing, but when you're designing a chair,
that's what you're designing for, and there's no way around it.
All the arts have to pander to the interests and limitations
of humans. In painting, for example, all other things being
equal a painting with people in it will be more interesting than
one without. It is not merely an accident of history that
the great paintings of the Renaissance are all full of people.
If they hadn't been, painting as a medium wouldn't have the prestige
that it does.

Like it or not, programming languages are also for people,
and I suspect the human brain is just as lumpy and idiosyncratic
as the human body. Some ideas are easy for people to grasp
and some aren't. For example, we seem to have a very limited
capacity for dealing with detail. It's this fact that makes
programming languages a good idea in the first place; if we
could handle the detail, we could just program in machine
language.

Remember, too, that languages are not
primarily a form for finished programs, but something that
programs have to be developed in. Anyone in the arts could
tell you that you might want different mediums for the
two situations. Marble, for example, is a nice, durable
medium for finished ideas, but a hopelessly inflexible one
for developing new ideas.

A program, like a proof,
is a pruned version of a tree that in the past has had
false starts branching off all over it. So the test of
a language is not simply how clean the finished program looks
in it, but how clean the path to the finished program was.
A design choice that gives you elegant finished programs
may not give you an elegant design process. For example,
I've written a few macro-defining macros full of nested
backquotes that look now like little gems, but writing them
took hours of the ugliest trial and error, and frankly, I'm still
not entirely sure they're correct.

We often act as if the test of a language were how good
finished programs look in it.
It seems so convincing when you see the same program
written in two languages, and one version is much shorter.
When you approach the problem from the direction of the
arts, you're less likely to depend on this sort of
test. You don't want to end up with a programming
language like marble.

For example, it is a huge win in developing software to
have an interactive toplevel, what in Lisp is called a
read-eval-print loop. And when you have one this has
real effects on the design of the language. It would not
work well for a language where you have to declare
variables before using them, for example. When you're
just typing expressions into the toplevel, you want to be
able to set x to some value and then start doing things
to x. You don't want to have to declare the type of x
first. You may dispute either of the premises, but if
a language has to have a toplevel to be convenient, and
mandatory type declarations are incompatible with a
toplevel, then no language that makes type declarations
mandatory could be convenient to program in.
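
For concreteness, here is roughly what that interaction looks like at a Common Lisp toplevel. The asterisk is the prompt; an implementation may warn about the undeclared variable, but it will let you proceed:

  * (setf x 10)             ; no declaration of x beforehand
  10
  * (* x x)
  100
  * (setf x (list 1 2 3))   ; x can even change type mid-session
  (1 2 3)
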
In practice, to get good design you have to get close, and stay
close, to your users. You have to calibrate your ideas on actual
users constantly, especially in the beginning. One of the reasons
Jane Austen's novels are so good is that she read them out loud to
her family. That's why she never sinks into self-indulgently arty
descriptions of landscapes,
or pretentious philosophizing. (The philosophy's there, but it's
woven into the story instead of being pasted onto it like a label.)
If you open an average "literary" novel and imagine reading it out loud
to your friends as something you'd written, you'll feel all too
keenly what an imposition that kind of thing is upon the reader.

In the software world, this idea is known as Worse is Better.
Actually, there are several ideas mixed together in the concept of
Worse is Better, which is why people are still arguing about
whether worse
is actually better or not. But one of the main ideas in that
mix is that if you're building something new, you should get a
prototype in front of users as soon as possible.

The alternative approach might be called the Hail Mary strategy.
Instead of getting a prototype out quickly and gradually refining
it, you try to create the complete, finished, product in one long
touchdown pass. As far as I know, this is a
recipe for disaster. Countless startups destroyed themselves this
way during the Internet bubble. I've never heard of a case
where it worked.

What people outside the software world may not realize is that
Worse is Better is found throughout the arts.
In drawing, for example, the idea was discovered during the
Renaissance. Now almost every drawing teacher will tell you that
the right way to get an accurate drawing is not to
work your way slowly around the contour of an object, because errors will
accumulate and you'll find at the end that the lines don't meet.
Instead you should draw a few quick lines in roughly the right place,
and then gradually refine this initial sketch.

In most fields, prototypes
have traditionally been made out of different materials.
Typefaces to be cut in metal were initially designed
with a brush on paper. Statues to be cast in bronze
were modelled in wax. Patterns to be embroidered on tapestries
were drawn on paper with ink wash. Buildings to be
constructed from stone were tested on a smaller scale in wood.

What made oil paint so exciting, when it
first became popular in the fifteenth century, was that you
could actually make the finished work from the prototype.
You could make a preliminary drawing if you wanted to, but you
weren't held to it; you could work out all the details, and
even make major changes, as you finished the painting.

You can do this in software too. A prototype doesn't have to
be just a model; you can refine it into the finished product.
I think you should always do this when you can. It lets you
take advantage of new insights you have along the way. But
perhaps even more important, it's good for morale.

Morale is key in design. I'm surprised people
don't talk more about it. One of my first
drawing teachers told me: if you're bored when you're
drawing something, the drawing will look boring.
For example, suppose you have to draw a building, and you
decide to draw each brick individually. You can do this
if you want, but if you get bored halfway through and start
making the bricks mechanically instead of observing each one,
the drawing will look worse than if you had merely suggested
the bricks.

Building something by gradually refining a prototype is good
for morale because it keeps you engaged. In software, my
rule is: always have working code. If you're writing
something that you'll be able to test in an hour, then you
have the prospect of an immediate reward to motivate you.
The same is true in the arts, and particularly in oil painting.
Most painters start with a blurry sketch and gradually
refine it.
If you work this way, then in principle
you never have to end the day with something that actually
looks unfinished. Indeed, there is even a saying among
painters: "A painting is never finished, you just stop
working on it." This idea will be familiar to anyone who
has worked on software.

Morale is another reason that it's hard to design something
for an unsophisticated user. It's hard to stay interested in
something you don't like yourself. To make something
good, you have to be thinking, "wow, this is really great,"
not "what a piece of shit; those fools will love it."Design means making things for humans. But it's not just the
user who's human. The designer is human too.

Notice all this time I've been talking about "the designer."
Design usually has to be under the control of a single person to
be any good. And yet it seems to be possible for several people
to collaborate on a research project. This seems to
me one of the most interesting differences between research and
design.

There have been famous instances of collaboration in the arts,
but most of them seem to have been cases of molecular bonding rather
than nuclear fusion. In an opera it's common for one person to
write the libretto and another to write the music. And during the Renaissance,
journeymen from northern
Europe were often employed to do the landscapes in the
backgrounds of Italian paintings. But these aren't true collaborations.
They're more like examples of Robert Frost's
"good fences make good neighbors." You can stick instances
of good design together, but within each individual project,
one person has to be in control.

I'm not saying that good design requires that one person think
of everything. There's nothing more valuable than the advice
of someone whose judgement you trust. But after the talking is
done, the decision about what to do has to rest with one person.

Why is it that research can be done by collaborators and
design can't? This is an interesting question. I don't
know the answer. Perhaps,
if design and research converge, the best research is also
good design, and in fact can't be done by collaborators.
A lot of the most famous scientists seem to have worked alone.
But I don't know enough to say whether there
is a pattern here. It could be simply that many famous scientists
worked when collaboration was less common.

Whatever the story is in the sciences, true collaboration
seems to be vanishingly rare in the arts. Design by committee is a
synonym for bad design. Why is that so? Is there some way to
beat this limitation?

I'm inclined to think there isn't-- that good design requires
a dictator. One reason is that good design has to
be all of a piece. Design is not just for humans, but
for individual humans. If a design represents an idea that
fits in one person's head, then the idea will fit in the user's
head too. |
December 2001 (rev. May 2002)
(This article came about in response to some questions on
the LL1 mailing list. It is now
incorporated in Revenge of the Nerds.)

When McCarthy designed Lisp in the late 1950s, it was
a radical departure from existing languages,
the most important of which was Fortran.

Lisp embodied nine new ideas:
1. Conditionals. A conditional is an if-then-else
construct. We take these for granted now. They were
invented
by McCarthy in the course of developing Lisp.
(Fortran at that time only had a conditional
goto, closely based on the branch instruction in the
underlying hardware.) McCarthy, who was on the Algol committee, got
conditionals into Algol, whence they spread to most other
languages.

2. A function type. In Lisp, functions are first class
objects-- they're a data type just like integers, strings,
etc, and have a literal representation, can be stored in variables,
can be passed as arguments, and so on.

3. Recursion. Recursion existed as a mathematical concept
before Lisp of course, but Lisp was the first programming language to support
it. (It's arguably implicit in making functions first class
objects.)

4. A new concept of variables. In Lisp, all variables
are effectively pointers. Values are what
have types, not variables, and assigning or binding
variables means copying pointers, not what they point to.

5. Garbage-collection.

6. Programs composed of expressions. Lisp programs are
trees of expressions, each of which returns a value.
(In some Lisps expressions
can return multiple values.) This is in contrast to Fortran
and most succeeding languages, which distinguish between
expressions and statements.

It was natural to have this
distinction in Fortran because (not surprisingly in a language
where the input format was punched cards) the language was
line-oriented. You could not nest statements. And
so while you needed expressions for math to work, there was
no point in making anything else return a value, because
there could not be anything waiting for it.

This limitation
went away with the arrival of block-structured languages,
but by then it was too late. The distinction between
expressions and statements was entrenched. It spread from
Fortran into Algol and thence to both their descendants.

When a language is made entirely of expressions, you can
compose expressions however you want. You can say either
(using Arc syntax)

  (if foo (= x 1) (= x 2))

or

  (= x (if foo 1 2))

7. A symbol type. Symbols differ from strings in that
you can test equality by comparing a pointer.

8. A notation for code using trees of symbols.

9. The whole language always available.
There is
no real distinction between read-time, compile-time, and runtime.
You can compile or run code while reading, read or run code
while compiling, and read or compile code at runtime.

Running code at read-time lets users reprogram Lisp's syntax;
running code at compile-time is the basis of macros; compiling
at runtime is the basis of Lisp's use as an extension
language in programs like Emacs; and reading at runtime
enables programs to communicate using s-expressions, an
idea recently reinvented as XML.
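
Here, as one compressed sketch in Common Lisp, are a few of these ideas in action. The macro is a made-up example, named unless* only to avoid colliding with the built-in:

  ;; Idea 2: functions are ordinary values.
  (defvar *op* #'+)
  (funcall *op* 1 2 3)            ; => 6

  ;; Idea 7: symbols are interned, so equality is a pointer comparison.
  (eq 'foo 'foo)                  ; => T

  ;; Ideas 8 and 9: code is a tree of symbols, and the whole language
  ;; is there at compile time, so programs can write programs.
  (defmacro unless* (test &rest body)
    `(if ,test nil (progn ,@body)))

  (unless* (> 1 2) 'ran)          ; => RAN
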
When Lisp was first invented, all these ideas were far
removed from ordinary programming practice, which was
dictated largely by the hardware available in the late 1950s.

Over time, the default language, embodied
in a succession of popular languages, has
gradually evolved toward Lisp. 1-5 are now widespread.
6 is starting to appear in the mainstream.
Python has a form of 7, though there doesn't seem to be
any syntax for it.
8, which (with 9) is what makes Lisp macros
possible, is so far still unique to Lisp,
perhaps because (a) it requires those parens, or something
just as bad, and (b) if you add that final increment of power,
you can no
longer claim to have invented a new language, but only
to have designed a new dialect of Lisp ;-)

Though useful to present-day programmers, it's
strange to describe Lisp in terms of its
variation from the random expedients other languages
adopted. That was not, probably, how McCarthy
thought of it. Lisp wasn't designed to fix the mistakes
in Fortran; it came about more as the byproduct of an
attempt to axiomatize computation. |
August 2021

When people say that in their experience all programming languages
are basically equivalent, they're making a statement not about
languages but about the kind of programming they've done.

99.5% of programming consists of gluing together calls to library
functions. All popular languages are equally good at this. So one
can easily spend one's whole career operating in the intersection
of popular programming languages.

But the other .5% of programming is disproportionately interesting.
If you want to learn what it consists of, the weirdness of weird
languages is a good clue to follow.

Weird languages aren't weird by accident. Not the good ones, at
least. The weirdness of the good ones usually implies the existence
of some form of programming that's not just the usual gluing together
of library calls.

A concrete example: Lisp macros. Lisp macros seem weird even to
many Lisp programmers. They're not only not in the intersection of
popular languages, but by their nature would be hard to implement
properly in a language without turning it into a dialect of
Lisp. And macros are definitely evidence of techniques that go
beyond glue programming. For example, solving problems by first
writing a language for problems of that type, and then writing
your specific application in it. Nor is this all you can do with
macros; it's just one region in a space of program-manipulating
techniques that even now is far from fully explored.
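
As a sketch of that "write a language for the problem type first" pattern, here is a toy example in Common Lisp. The duration macro and its vocabulary are invented for illustration, not taken from any real library:

  ;; A one-macro language for writing down lengths of time. The unit
  ;; arithmetic happens at macroexpansion time, not at runtime.
  (defmacro duration (&rest spec)
    `(+ ,@(loop for (n unit) on spec by #'cddr
                collect (* n (ecase unit
                               (seconds 1)
                               (minutes 60)
                               (hours 3600))))))

  (duration 2 hours 30 minutes)   ; expands to (+ 7200 1800) => 9000

The macro itself is trivial; the shape is the point: define the notation the problem wants, then write the program in that notation.
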
So if you want to expand your concept of what programming can be,
one way to do it is by learning weird languages. Pick a language
that most programmers consider weird but whose median user is smart,
and then focus on the differences between this language and the
intersection of popular languages. What can you say in this language
that would be impossibly inconvenient to say in others? In the
process of learning how to say things you couldn't previously say,
you'll probably be learning how to think things you couldn't
previously think.
Thanks to Trevor Blackwell, Patrick Collison, Daniel Gackle, Amjad
Masad, and Robert Morris for reading drafts of this.
|
January 2015

Corporate Development, aka corp dev, is the group within companies
that buys other companies. If you're talking to someone from corp
dev, that's why, whether you realize it yet or not.

It's usually a mistake to talk to corp dev unless (a) you want to
sell your company right now and (b) you're sufficiently likely to
get an offer at an acceptable price. In practice that means startups
should only talk to corp dev when they're either doing really well
or really badly. If you're doing really badly, meaning the company
is about to die, you may as well talk to them, because you have
nothing to lose. And if you're doing really well, you can safely
talk to them, because you both know the price will have to be high,
and if they show the slightest sign of wasting your time, you'll
be confident enough to tell them to get lost.

The danger is to companies in the middle. Particularly to young
companies that are growing fast, but haven't been doing it for long
enough to have grown big yet. It's usually a mistake for a promising
company less than a year old even to talk to corp dev.

But it's a mistake founders constantly make. When someone from
corp dev wants to meet, the founders tell themselves they should
at least find out what they want. Besides, they don't want to
offend Big Company by refusing to meet.

Well, I'll tell you what they want. They want to talk about buying
you. That's what the title "corp dev" means. So before agreeing
to meet with someone from corp dev, ask yourselves, "Do we want to
sell the company right now?" And if the answer is no, tell them
"Sorry, but we're focusing on growing the company." They won't be
offended. And certainly the founders of Big Company won't be
offended. If anything they'll think more highly of you. You'll
remind them of themselves. They didn't sell either; that's why
they're in a position now to buy other companies.
[1]

Most founders who get contacted by corp dev already know what it
means. And yet even when they know what corp dev does and know
they don't want to sell, they take the meeting. Why do they do it?
The same mix of denial and wishful thinking that underlies most
mistakes founders make. It's flattering to talk to someone who wants
to buy you. And who knows, maybe their offer will be surprisingly
high. You should at least see what it is, right?

No. If they were going to send you an offer immediately by email,
sure, you might as well open it. But that is not how conversations
with corp dev work. If you get an offer at all, it will be at the
end of a long and unbelievably distracting process. And if the
offer is surprising, it will be surprisingly low.

Distractions are the thing you can least afford in a startup. And
conversations with corp dev are the worst sort of distraction,
because as well as consuming your attention they undermine your
morale. One of the tricks to surviving a grueling process is not
to stop and think how tired you are. Instead you get into a sort
of flow.
[2]
Imagine what it would do to you if at mile 20 of a
marathon, someone ran up beside you and said "You must feel really
tired. Would you like to stop and take a rest?" Conversations
with corp dev are like that but worse, because the suggestion of
stopping gets combined in your mind with the imaginary high price
you think they'll offer.

And then you're really in trouble. If they can, corp dev people
like to turn the tables on you. They like to get you to the point
where you're trying to convince them to buy instead of them trying
to convince you to sell. And surprisingly often they succeed.

This is a very slippery slope, greased with some of the most powerful
forces that can work on founders' minds, and attended by an experienced
professional whose full time job is to push you down it.

Their tactics in pushing you down that slope are usually fairly
brutal. Corp dev people's whole job is to buy companies, and they
don't even get to choose which. The only way their performance is
measured is by how cheaply they can buy you, and the more ambitious
ones will stop at nothing to achieve that. For example, they'll
almost always start with a lowball offer, just to see if you'll
take it. Even if you don't, a low initial offer will demoralize you
and make you easier to manipulate.

And that is the most innocent of their tactics. Just wait till
you've agreed on a price and think you have a done deal, and then
they come back and say their boss has vetoed the deal and won't do
it for more than half the agreed upon price. Happens all the time.
If you think investors can behave badly, it's nothing compared to
what corp dev people can do. Even corp dev people at companies
that are otherwise benevolent.

I remember once complaining to a
friend at Google about some nasty trick their corp dev people had
pulled on a YC startup.

"What happened to Don't be Evil?" I asked.

"I don't think corp dev got the memo," he replied.

The tactics you encounter in M&A conversations can be like nothing
you've experienced in the otherwise comparatively
upstanding world
of Silicon Valley. It's as if a chunk of genetic material from the
old-fashioned robber baron business world got incorporated into the
startup world.
[3]

The simplest way to protect yourself is to use the trick that John
D. Rockefeller, whose grandfather was an alcoholic, used to protect
himself from becoming one. He once told a Sunday school class
Boys, do you know why I never became a drunkard? Because I never
took the first drink.
Do you want to sell your company right now? Not eventually, right
now. If not, just don't take the first meeting. They won't be
offended. And you in turn will be guaranteed to be spared one of
the worst experiences that can happen to a startup.

If you do want to sell, there's another set of
techniques
for doing
that. But the biggest mistake founders make in dealing with corp
dev is not doing a bad job of talking to them when they're ready
to, but talking to them before they are. So if you remember only
the title of this essay, you already know most of what you need to
know about M&A in the first year.Notes[1]
I'm not saying you should never sell. I'm saying you should
be clear in your own mind about whether you want to sell or not,
and not be led by manipulation or wishful thinking into trying to
sell earlier than you otherwise would have.

[2]
In a startup, as in most competitive sports, the task at hand
almost does this for you; you're too busy to feel tired. But when
you lose that protection, e.g. at the final whistle, the fatigue
hits you like a wave. To talk to corp dev is to let yourself feel
it mid-game.

[3]
To be fair, the apparent misdeeds of corp dev people are magnified
by the fact that they function as the face of a large organization
that often doesn't know its own mind. Acquirers can be surprisingly
indecisive about acquisitions, and their flakiness is indistinguishable
from dishonesty by the time it filters down to you.

Thanks to Marc Andreessen, Jessica Livingston, Geoff
Ralston, and Qasar Younis for reading drafts of this. |
October 2010
(I wrote this for Forbes, who asked me to write something
about the qualities we look for in founders. In print they had to cut
the last item because they didn't have room.)

1. Determination

This has turned out to be the most important quality in startup
founders. We thought when we started Y Combinator that the most
important quality would be intelligence. That's the myth in the
Valley. And certainly you don't want founders to be stupid. But
as long as you're over a certain threshold of intelligence, what
matters most is determination. You're going to hit a lot of
obstacles. You can't be the sort of person who gets demoralized
easily.

Bill Clerico and Rich Aberman of WePay
are a good example. They're
doing a finance startup, which means endless negotiations with big,
bureaucratic companies. When you're starting a startup that depends
on deals with big companies to exist, it often feels like they're
trying to ignore you out of existence. But when Bill Clerico starts
calling you, you may as well do what he asks, because he is not
going away.
2. Flexibility

You do not however want the sort of determination implied by phrases
like "don't give up on your dreams." The world of startups is so
unpredictable that you need to be able to modify your dreams on the
fly. The best metaphor I've found for the combination of determination
and flexibility you need is a running back.
He's determined to get
downfield, but at any given moment he may need to go sideways or
even backwards to get there.

The current record holder for flexibility may be Daniel Gross of
Greplin. He applied to YC with
some bad ecommerce idea. We told
him we'd fund him if he did something else. He thought for a second,
and said ok. He then went through two more ideas before settling
on Greplin. He'd only been working on it for a couple days when
he presented to investors at Demo Day, but he got a lot of interest.
He always seems to land on his feet.
3. Imagination

Intelligence does matter a lot of course. It seems like the type
that matters most is imagination. It's not so important to be able
to solve predefined problems quickly as to be able to come up with
surprising new ideas. In the startup world, most good ideas
seem
bad initially. If they were obviously good, someone would already
be doing them. So you need the kind of intelligence that produces
ideas with just the right level of craziness.

Airbnb is that kind of idea.
In fact, when we funded Airbnb, we
thought it was too crazy. We couldn't believe large numbers of
people would want to stay in other people's places. We funded them
because we liked the founders so much. As soon as we heard they'd
been supporting themselves by selling Obama and McCain branded
breakfast cereal, they were in. And it turned out the idea was on
the right side of crazy after all.
4. Naughtiness

Though the most successful founders are usually good people, they
tend to have a piratical gleam in their eye. They're not Goody
Two-Shoes type good. Morally, they care about getting the big
questions right, but not about observing proprieties. That's why
I'd use the word naughty rather than evil. They delight in
breaking
rules, but not rules that matter. This quality may be redundant
though; it may be implied by imagination.

Sam Altman of Loopt
is one of the most successful alumni, so we
asked him what question we could put on the Y Combinator application
that would help us discover more people like him. He said to ask
about a time when they'd hacked something to their advantage—hacked in the sense of beating the system, not breaking into
computers. It has become one of the questions we pay most attention
to when judging applications.
5. Friendship

Empirically it seems to be hard to start a startup with just
one
founder. Most of the big successes have two or three. And the
relationship between the founders has to be strong. They must
genuinely like one another, and work well together. Startups do
to the relationship between the founders what a dog does to a sock:
if it can be pulled apart, it will be.

Emmett Shear and Justin Kan of Justin.tv
are a good example of close
friends who work well together. They've known each other since
second grade. They can practically read one another's minds. I'm
sure they argue, like all founders, but I have never once sensed
any unresolved tension between them.

Thanks to Jessica Livingston and Chris Steiner for reading drafts of this. |
October 2011

If you look at a list of US cities sorted by population, the number
of successful startups per capita varies by orders of magnitude.
Somehow it's as if most places were sprayed with startupicide.

I wondered about this for years. I could see the average town was
like a roach motel for startup ambitions: smart, ambitious people
went in, but no startups came out. But I was never able to figure
out exactly what happened inside the motel—exactly what was
killing all the potential startups.
[1]

A couple weeks ago I finally figured it out. I was framing the
question wrong. The problem is not that most towns kill startups.
It's that death is the default for startups,
and most towns don't save them. Instead of thinking of most places
as being sprayed with startupicide, it's more accurate to think of
startups as all being poisoned, and a few places being sprayed with
the antidote.

Startups in other places are just doing what startups naturally do:
fail. The real question is, what's saving startups in places
like Silicon Valley?
[2]

Environment

I think there are two components to the antidote: being in a place
where startups are the cool thing to do, and chance meetings with
people who can help you. And what drives them both is the number
of startup people around you.

The first component is particularly helpful in the first stage of
a startup's life, when you go from merely having an interest in
starting a company to actually doing it. It's quite a leap to start
a startup. It's an unusual thing to do. But in Silicon Valley it
seems normal.
[3]

In most places, if you start a startup, people treat you as if
you're unemployed. People in the Valley aren't automatically
impressed with you just because you're starting a company, but they
pay attention. Anyone who's been here any amount of time knows not
to default to skepticism, no matter how inexperienced you seem or
how unpromising your idea sounds at first, because they've all seen
inexperienced founders with unpromising sounding ideas who a few
years later were billionaires.

Having people around you care about what you're doing is an
extraordinarily powerful force. Even the
most willful people are susceptible to it. About a year after we
started Y Combinator I said something to a partner at a well known
VC firm that gave him the (mistaken) impression I was considering
starting another startup. He responded so eagerly that for about
half a second I found myself considering doing it.

In most other cities, the prospect of starting a startup just doesn't
seem real. In the Valley it's not only real but fashionable. That
no doubt causes a lot of people to start startups who shouldn't.
But I think that's ok. Few people are suited to running a startup,
and it's very hard to predict beforehand which are (as I know all
too well from being in the business of trying to predict beforehand),
so lots of people starting startups who shouldn't is probably the
optimal state of affairs. As long as you're at a point in your
life when you can bear the risk of failure, the best way to find
out if you're suited to running a startup is to try
it.

Chance

The second component of the antidote is chance meetings with people
who can help you. This force works in both phases: both in the
transition from the desire to start a startup to starting one, and
the transition from starting a company to succeeding. The power
of chance meetings is more variable than people around you caring
about startups, which is like a sort of background radiation that
affects everyone equally, but at its strongest it is far stronger.

Chance meetings produce miracles to compensate for the disasters
that characteristically befall startups. In the Valley, terrible
things happen to startups all the time, just like they do to startups
everywhere. The reason startups are more likely to make it here
is that great things happen to them too. In the Valley, lightning
has a sign bit.

For example, you start a site for college students and you decide
to move to the Valley for the summer to work on it. And then on a
random suburban street in Palo Alto you happen to run into Sean
Parker, who understands the domain really well because he started
a similar startup himself, and also knows all the investors. And
moreover has advanced views, for 2004, on founders retaining control of their companies.

You can't say precisely what the miracle will be, or even for sure
that one will happen. The best one can say is: if you're in a
startup hub, unexpected good things will probably happen to you,
especially if you deserve them.

I bet this is true even for startups we fund. Even with us working
to make things happen for them on purpose rather than by accident,
the frequency of helpful chance meetings in the Valley is so high
that it's still a significant increment on what we can deliver.

Chance meetings play a role like the role relaxation plays in having
ideas. Most people have had the experience of working hard on some
problem, not being able to solve it, giving up and going to bed,
and then thinking of the answer in the shower in the morning. What
makes the answer appear is letting your thoughts drift a bit—and thus drift off the wrong
path you'd been pursuing last night and onto the right one adjacent
to it.

Chance meetings let your acquaintance drift in the same way taking
a shower lets your thoughts drift. The critical thing in both cases
is that they drift just the right amount. The meeting between Larry
Page and Sergey Brin was a good example. They let their acquaintance
drift, but only a little; they were both meeting someone they had
a lot in common with.

For Larry Page the most important component of the antidote was
Sergey Brin, and vice versa. The antidote is
people. It's not the
physical infrastructure of Silicon Valley that makes it work, or
the weather, or anything like that. Those helped get it started,
but now that the reaction is self-sustaining what drives it is the
people.

Many observers have noticed that one of the most distinctive things
about startup hubs is the degree to which people help one another
out, with no expectation of getting anything in return. I'm not
sure why this is so. Perhaps it's because startups are less of a
zero sum game than most types of business; they are rarely killed
by competitors. Or perhaps it's because so many startup founders
have backgrounds in the sciences, where collaboration is encouraged.

A large part of YC's function is to accelerate that process. We're
a sort of Valley within the Valley, where the density of people
working on startups and their willingness to help one another are
both artificially amplified.

Numbers

Both components of the antidote—an environment that encourages
startups, and chance meetings with people who help you—are
driven by the same underlying cause: the number of startup people
around you. To make a startup hub, you need a lot of people
interested in startups.

There are three reasons. The first, obviously, is that if you don't
have enough density, the chance meetings don't happen.
[4]
The second is that different startups need such different things, so
you need a lot of people to supply each startup with what they need
most. Sean Parker was exactly what Facebook needed in 2004. Another
startup might have needed a database guy, or someone with connections
in the movie business.

This is one of the reasons we fund such a large number of companies,
incidentally. The bigger the community, the greater the chance it
will contain the person who has that one thing you need most.

The third reason you need a lot of people to make a startup hub is
that once you have enough people interested in the same problem,
they start to set the social norms. And it is a particularly
valuable thing when the atmosphere around you encourages you to do
something that would otherwise seem too ambitious. In most places
the atmosphere pulls you back toward the mean.

I flew into the Bay Area a few days ago. I notice this every time
I fly over the Valley: somehow you can sense something is going on.
Obviously you can sense prosperity in how well kept a
place looks. But there are different kinds of prosperity. Silicon
Valley doesn't look like Boston, or New York, or LA, or DC. I tried
asking myself what word I'd use to describe the feeling the Valley
radiated, and the word that came to mind was optimism.

Notes

[1]
I'm not saying it's impossible to succeed in a city with few
other startups, just harder. If you're sufficiently good at
generating your own morale, you can survive without external
encouragement. Wufoo was based in Tampa and they succeeded. But
the Wufoos are exceptionally disciplined.

[2]
Incidentally, this phenomenon is not limited to startups. Most
unusual ambitions fail, unless the person who has them manages to
find the right sort of community.

[3]
Starting a company is common, but starting a startup is rare.
I've talked about the distinction between the two elsewhere, but
essentially a startup is a new business designed for scale. Most
new businesses are service businesses and except in rare cases those
don't scale.

[4]
As I was writing this, I had a demonstration of the density of
startup people in the Valley. Jessica and I bicycled to University
Ave in Palo Alto to have lunch at the fabulous Oren's Hummus. As
we walked in, we met Charlie Cheever sitting near the door. Selina
Tobaccowala stopped to say hello on her way out. Then Josh Wilson
came in to pick up a take out order. After lunch we went to get
frozen yogurt. On the way we met Rajat Suri. When we got to the
yogurt place, we found Dave Shen there, and as we walked out we ran
into Yuri Sagalov. We walked with him for a block or so and we ran
into Muzzammil Zaveri, and then a block later we met Aydin Senkut.
This is everyday life in Palo Alto. I wasn't trying to meet people;
I was just having lunch. And I'm sure for every startup founder
or investor I saw that I knew, there were 5 more I didn't. If Ron
Conway had been with us he would have met 30 people he knew.

Thanks to Sam Altman, Paul Buchheit, Jessica Livingston, and
Harj Taggar for reading drafts of this. |
January 2006

To do something well you have to like it. That idea is not exactly
novel. We've got it down to four words: "Do what you love." But
it's not enough just to tell people that. Doing what you love is
complicated.

The very idea is foreign to what most of us learn as kids. When I
was a kid, it seemed as if work and fun were opposites by definition.
Life had two states: some of the time adults were making you do
things, and that was called work; the rest of the time you could
do what you wanted, and that was called playing. Occasionally the
things adults made you do were fun, just as, occasionally, playing
wasn't—for example, if you fell and hurt yourself. But except
for these few anomalous cases, work was pretty much defined as
not-fun.

And it did not seem to be an accident. School, it was implied, was
tedious because it was preparation for grownup work.

The world then was divided into two groups, grownups and kids.
Grownups, like some kind of cursed race, had to work. Kids didn't,
but they did have to go to school, which was a dilute version of
work meant to prepare us for the real thing. Much as we disliked
school, the grownups all agreed that grownup work was worse, and
that we had it easy.

Teachers in particular all seemed to believe implicitly that work
was not fun. Which is not surprising: work wasn't fun for most of
them. Why did we have to memorize state capitals instead of playing
dodgeball? For the same reason they had to watch over a bunch of
kids instead of lying on a beach. You couldn't just do what you
wanted.

I'm not saying we should let little kids do whatever they want.
They may have to be made to work on certain things. But if we make
kids work on dull stuff, it might be wise to tell them that tediousness
is not the defining quality of work, and indeed that the reason
they have to work on dull stuff now is so they can work on more
interesting stuff later.
[1]

Once, when I was about 9 or 10, my father told me I could be whatever
I wanted when I grew up, so long as I enjoyed it. I remember that
precisely because it seemed so anomalous. It was like being told
to use dry water. Whatever I thought he meant, I didn't think he
meant work could literally be fun—fun like playing. It
took me years to grasp that.

Jobs

By high school, the prospect of an actual job was on the horizon.
Adults would sometimes come to speak to us about their work, or we
would go to see them at work. It was always understood that they
enjoyed what they did. In retrospect I think one may have: the
private jet pilot. But I don't think the bank manager really did.

The main reason they all acted as if they enjoyed their work was
presumably the upper-middle class convention that you're supposed
to. It would not merely be bad for your career to say that you
despised your job, but a social faux-pas.

Why is it conventional to pretend to like what you do? The first
sentence of this essay explains that. If you have to like something
to do it well, then the most successful people will all like what
they do. That's where the upper-middle class tradition comes from.
Just as houses all over America are full of
chairs
that are, without
the owners even knowing it, nth-degree imitations of chairs designed
250 years ago for French kings, conventional attitudes about work
are, without the owners even knowing it, nth-degree imitations of
the attitudes of people who've done great things.

What a recipe for alienation. By the time they reach an age to
think about what they'd like to do, most kids have been thoroughly
misled about the idea of loving one's work. School has trained
them to regard work as an unpleasant duty. Having a job is said
to be even more onerous than schoolwork. And yet all the adults
claim to like what they do. You can't blame kids for thinking "I
am not like these people; I am not suited to this world."

Actually they've been told three lies: the stuff they've been taught
to regard as work in school is not real work; grownup work is not
(necessarily) worse than schoolwork; and many of the adults around
them are lying when they say they like what they do.

The most dangerous liars can be the kids' own parents. If you take
a boring job to give your family a high standard of living, as so
many people do, you risk infecting your kids with the idea that
work is boring.
[2]
Maybe it would be better for kids in this one
case if parents were not so unselfish. A parent who set an example
of loving their work might help their kids more than an expensive
house.
[3]It was not till I was in college that the idea of work finally broke
free from the idea of making a living. Then the important question
became not how to make money, but what to work on. Ideally these
coincided, but some spectacular boundary cases (like Einstein in
the patent office) proved they weren't identical.

The definition of work was now to make some original contribution
to the world, and in the process not to starve. But after the habit
of so many years my idea of work still included a large component
of pain. Work still seemed to require discipline, because only
hard problems yielded grand results, and hard problems couldn't
literally be fun. Surely one had to force oneself to work on them.

If you think something's supposed to hurt, you're less likely to
notice if you're doing it wrong. That about sums up my experience
of graduate school.

Bounds

How much are you supposed to like what you do? Unless you
know that, you don't know when to stop searching. And if, like most
people, you underestimate it, you'll tend to stop searching too
early. You'll end up doing something chosen for you by your parents,
or the desire to make money, or prestige—or sheer inertia.

Here's an upper bound: Do what you love doesn't mean, do what you
would like to do most this second. Even Einstein probably
had moments when he wanted to have a cup of coffee, but told himself
he ought to finish what he was working on first.

It used to perplex me when I read about people who liked what they
did so much that there was nothing they'd rather do. There didn't
seem to be any sort of work I liked that much. If I had a
choice of (a) spending the next hour working on something or (b)
being teleported to Rome and spending the next hour wandering about, was
there any sort of work I'd prefer? Honestly, no.

But the fact is, almost anyone would rather, at any given moment,
float about in the Caribbean, or have sex, or eat some delicious
food, than work on hard problems. The rule about doing what you
love assumes a certain length of time. It doesn't mean, do what
will make you happiest this second, but what will make you happiest
over some longer period, like a week or a month.

Unproductive pleasures pall eventually. After a while you get tired
of lying on the beach. If you want to stay happy, you have to do
something.

As a lower bound, you have to like your work more than any unproductive
pleasure. You have to like what you do enough that the concept of
"spare time" seems mistaken. Which is not to say you have to spend
all your time working. You can only work so much before you get
tired and start to screw up. Then you want to do something else—even something mindless. But you don't regard this time as the
prize and the time you spend working as the pain you endure to earn
it.

I put the lower bound there for practical reasons. If your work
is not your favorite thing to do, you'll have terrible problems
with procrastination. You'll have to force yourself to work, and
when you resort to that the results are distinctly inferior.

To be happy I think you have to be doing something you not only
enjoy, but admire. You have to be able to say, at the end, wow,
that's pretty cool. This doesn't mean you have to make something.
If you learn how to hang glide, or to speak a foreign language
fluently, that will be enough to make you say, for a while at least,
wow, that's pretty cool. What there has to be is a test.

So one thing that falls just short of the standard, I think, is
reading books. Except for some books in math and the hard sciences,
there's no test of how well you've read a book, and that's why
merely reading books doesn't quite feel like work. You have to do
something with what you've read to feel productive.

I think the best test is one Gino Lee taught me: to try to do things
that would make your friends say wow. But it probably wouldn't
start to work properly till about age 22, because most people haven't
had a big enough sample to pick friends from before then.

Sirens

What you should not do, I think, is worry about the opinion of
anyone beyond your friends. You shouldn't worry about prestige.
Prestige is the opinion of the rest of the world. When you can ask
the opinions of people whose judgement you respect, what does it
add to consider the opinions of people you don't even know?
[4]

This is easy advice to give. It's hard to follow, especially when
you're young.
[5]
Prestige is like a powerful magnet that warps
even your beliefs about what you enjoy. It causes you to work not
on what you like, but what you'd like to like.

That's what leads people to try to write novels, for example. They
like reading novels. They notice that people who write them win
Nobel prizes. What could be more wonderful, they think, than to
be a novelist? But liking the idea of being a novelist is not
enough; you have to like the actual work of novel-writing if you're
going to be good at it; you have to like making up elaborate lies.

Prestige is just fossilized inspiration. If you do anything well
enough, you'll make it prestigious. Plenty of things we now
consider prestigious were anything but at first. Jazz comes to
mind—though almost any established art form would do. So just
do what you like, and let prestige take care of itself.

Prestige is especially dangerous to the ambitious. If you want to
make ambitious people waste their time on errands, the way to do
it is to bait the hook with prestige. That's the recipe for getting
people to give talks, write forewords, serve on committees, be
department heads, and so on. It might be a good rule simply to
avoid any prestigious task. If it didn't suck, they wouldn't have
had to make it prestigious.

Similarly, if you admire two kinds of work equally, but one is more
prestigious, you should probably choose the other. Your opinions
about what's admirable are always going to be slightly influenced
by prestige, so if the two seem equal to you, you probably have
more genuine admiration for the less prestigious one.

The other big force leading people astray is money. Money by itself
is not that dangerous. When something pays well but is regarded
with contempt, like telemarketing, or prostitution, or personal
injury litigation, ambitious people aren't tempted by it. That
kind of work ends up being done by people who are "just trying to
make a living." (Tip: avoid any field whose practitioners say
this.) The danger is when money is combined with prestige, as in,
say, corporate law, or medicine. A comparatively safe and prosperous
career with some automatic baseline prestige is dangerously tempting
to someone young, who hasn't thought much about what they really
like.

The test of whether people love what they do is whether they'd do
it even if they weren't paid for it—even if they had to work at
another job to make a living. How many corporate lawyers would do
their current work if they had to do it for free, in their spare
time, and take day jobs as waiters to support themselves?

This test is especially helpful in deciding between different kinds
of academic work, because fields vary greatly in this respect. Most
good mathematicians would work on math even if there were no jobs
as math professors, whereas in the departments at the other end of
the spectrum, the availability of teaching jobs is the driver:
people would rather be English professors than work in ad agencies,
and publishing papers is the way you compete for such jobs. Math
would happen without math departments, but it is the existence of
English majors, and therefore jobs teaching them, that calls into
being all those thousands of dreary papers about gender and identity
in the novels of Conrad. No one does
that
kind of thing for fun.

The advice of parents will tend to err on the side of money. It
seems safe to say there are more undergrads who want to be novelists
and whose parents want them to be doctors than who want to be doctors
and whose parents want them to be novelists. The kids think their
parents are "materialistic." Not necessarily. All parents tend to
be more conservative for their kids than they would for themselves,
simply because, as parents, they share risks more than rewards. If
your eight year old son decides to climb a tall tree, or your teenage
daughter decides to date the local bad boy, you won't get a share
in the excitement, but if your son falls, or your daughter gets
pregnant, you'll have to deal with the consequences.

Discipline

With such powerful forces leading us astray, it's not surprising
we find it so hard to discover what we like to work on. Most people
are doomed in childhood by accepting the axiom that work = pain.
Those who escape this are nearly all lured onto the rocks by prestige
or money. How many even discover something they love to work on?
A few hundred thousand, perhaps, out of billions.

It's hard to find work you love; it must be, if so few do. So don't
underestimate this task. And don't feel bad if you haven't succeeded
yet. In fact, if you admit to yourself that you're discontented,
you're a step ahead of most people, who are still in denial. If
you're surrounded by colleagues who claim to enjoy work that you
find contemptible, odds are they're lying to themselves. Not
necessarily, but probably.

Although doing great work takes less discipline than people think—because the way to do great work is to find something you like so
much that you don't have to force yourself to do it—finding
work you love does usually require discipline. Some people are
lucky enough to know what they want to do when they're 12, and just
glide along as if they were on railroad tracks. But this seems the
exception. More often people who do great things have careers with
the trajectory of a ping-pong ball. They go to school to study A,
drop out and get a job doing B, and then become famous for C after
taking it up on the side.

Sometimes jumping from one sort of work to another is a sign of
energy, and sometimes it's a sign of laziness. Are you dropping
out, or boldly carving a new path? You often can't tell yourself.
Plenty of people who will later do great things seem to be disappointments
early on, when they're trying to find their niche.

Is there some test you can use to keep yourself honest? One is to
try to do a good job at whatever you're doing, even if you don't
like it. Then at least you'll know you're not using dissatisfaction
as an excuse for being lazy. Perhaps more importantly, you'll get
into the habit of doing things well.

Another test you can use is: always produce. For example, if you
have a day job you don't take seriously because you plan to be a
novelist, are you producing? Are you writing pages of fiction,
however bad? As long as you're producing, you'll know you're not
merely using the hazy vision of the grand novel you plan to write
one day as an opiate. The view of it will be obstructed by the all
too palpably flawed one you're actually writing.

"Always produce" is also a heuristic for finding the work you love.
If you subject yourself to that constraint, it will automatically
push you away from things you think you're supposed to work on,
toward things you actually like. "Always produce" will discover
your life's work the way water, with the aid of gravity, finds the
hole in your roof.

Of course, figuring out what you like to work on doesn't mean you
get to work on it. That's a separate question. And if you're
ambitious you have to keep them separate: you have to make a conscious
effort to keep your ideas about what you want from being contaminated
by what seems possible.
[6]

It's painful to keep them apart, because it's painful to observe
the gap between them. So most people pre-emptively lower their
expectations. For example, if you asked random people on the street
if they'd like to be able to draw like Leonardo, you'd find most
would say something like "Oh, I can't draw." This is more a statement
of intention than fact; it means, I'm not going to try. Because
the fact is, if you took a random person off the street and somehow
got them to work as hard as they possibly could at drawing for the
next twenty years, they'd get surprisingly far. But it would require
a great moral effort; it would mean staring failure in the eye every
day for years. And so to protect themselves people say "I can't."

Another related line you often hear is that not everyone can do
work they love—that someone has to do the unpleasant jobs. Really?
How do you make them? In the US the only mechanism for forcing
people to do unpleasant jobs is the draft, and that hasn't been
invoked for over 30 years. All we can do is encourage people to
do unpleasant work, with money and prestige.

If there's something people still won't do, it seems as if society
just has to make do without. That's what happened with domestic
servants. For millennia that was the canonical example of a job
"someone had to do." And yet in the mid twentieth century servants
practically disappeared in rich countries, and the rich have just
had to do without.

So while there may be some things someone has to do, there's a good
chance anyone saying that about any particular job is mistaken.
Most unpleasant jobs would either get automated or go undone if no
one were willing to do them.

Two Routes

There's another sense of "not everyone can do work they love"
that's all too true, however. One has to make a living, and it's
hard to get paid for doing work you love. There are two routes to
that destination:
The organic route: as you become more eminent, gradually to
increase the parts of your job that you like at the expense of
those you don't.

The two-job route: to work at things you don't like to get money
to work on things you do.
The organic route is more common. It happens naturally to anyone
who does good work. A young architect has to take whatever work
he can get, but if he does well he'll gradually be in a position
to pick and choose among projects. The disadvantage of this route
is that it's slow and uncertain. Even tenure is not real freedom.

The two-job route has several variants depending on how long you
work for money at a time. At one extreme is the "day job," where
you work regular hours at one job to make money, and work on what
you love in your spare time. At the other extreme you work at
something till you make enough not to
have to work for money again.

The two-job route is less common than the organic route, because
it requires a deliberate choice. It's also more dangerous. Life
tends to get more expensive as you get older, so it's easy to get
sucked into working longer than you expected at the money job.
Worse still, anything you work on changes you. If you work too
long on tedious stuff, it will rot your brain. And the best paying
jobs are most dangerous, because they require your full attention.

The advantage of the two-job route is that it lets you jump over
obstacles. The landscape of possible jobs isn't flat; there are
walls of varying heights between different kinds of work.
[7]
The trick of maximizing the parts of your job that you like can get you
from architecture to product design, but not, probably, to music.
If you make money doing one thing and then work on another, you
have more freedom of choice.

Which route should you take? That depends on how sure you are of
what you want to do, how good you are at taking orders, how much
risk you can stand, and the odds that anyone will pay (in your
lifetime) for what you want to do. If you're sure of the general
area you want to work in and it's something people are likely to
pay you for, then you should probably take the organic route. But
if you don't know what you want to work on, or don't like to take
orders, you may want to take the two-job route, if you can stand
the risk.

Don't decide too soon. Kids who know early what they want to do
seem impressive, as if they got the answer to some math question
before the other kids. They have an answer, certainly, but odds
are it's wrong.

A friend of mine who is a quite successful doctor complains constantly
about her job. When people applying to medical school ask her for
advice, she wants to shake them and yell "Don't do it!" (But she
never does.) How did she get into this fix? In high school she
already wanted to be a doctor. And she is so ambitious and determined
that she overcame every obstacle along the way—including,
unfortunately, not liking it.

Now she has a life chosen for her by a high-school kid.

When you're young, you're given the impression that you'll get
enough information to make each choice before you need to make it.
But this is certainly not so with work. When you're deciding what
to do, you have to operate on ridiculously incomplete information.
Even in college you get little idea what various types of work are
like. At best you may have a couple internships, but not all jobs
offer internships, and those that do don't teach you much more about
the work than being a batboy teaches you about playing baseball.

In the design of lives, as in the design of most other things, you
get better results if you use flexible media. So unless you're
fairly sure what you want to do, your best bet may be to choose a
type of work that could turn into either an organic or two-job
career. That was probably part of the reason I chose computers.
You can be a professor, or make a lot of money, or morph it into
any number of other kinds of work.

It's also wise, early on, to seek jobs that let you do many different
things, so you can learn faster what various kinds of work are like.
Conversely, the extreme version of the two-job route is dangerous
because it teaches you so little about what you like. If you work
hard at being a bond trader for ten years, thinking that you'll
quit and write novels when you have enough money, what happens when
you quit and then discover that you don't actually like writing
novels?

Most people would say, I'd take that problem. Give me a million
dollars and I'll figure out what to do. But it's harder than it
looks. Constraints give your life shape. Remove them and most
people have no idea what to do: look at what happens to those who
win lotteries or inherit money. Much as everyone thinks they want
financial security, the happiest people are not those who have it,
but those who like what they do. So a plan that promises freedom
at the expense of knowing what to do with it may not be as good as
it seems.

Whichever route you take, expect a struggle. Finding work you love
is very difficult. Most people fail. Even if you succeed, it's
rare to be free to work on what you want till your thirties or
forties. But if you have the destination in sight you'll be more
likely to arrive at it. If you know you can love work, you're in
the home stretch, and if you know what work you love, you're
practically there.

Notes

[1]
Currently we do the opposite: when we make kids do boring work,
like arithmetic drills, instead of admitting frankly that it's
boring, we try to disguise it with superficial decorations.

[2]
One father told me about a related phenomenon: he found himself
concealing from his family how much he liked his work. When he
wanted to go to work on a Saturday, he found it easier to say that
it was because he "had to" for some reason, rather than admitting
he preferred to work rather than stay home with them.

[3]
Something similar happens with suburbs. Parents move to suburbs
to raise their kids in a safe environment, but suburbs are so dull
and artificial that by the time they're fifteen the kids are convinced
the whole world is boring.

[4]
I'm not saying friends should be the only audience for your
work. The more people you can help, the better. But friends should
be your compass.

[5]
Donald Hall said young would-be poets were mistaken to be so
obsessed with being published. But you can imagine what it would
do for a 24 year old to get a poem published in The New Yorker.
Now to people he meets at parties he's a real poet. Actually he's
no better or worse than he was before, but to a clueless audience
like that, the approval of an official authority makes all the
difference. So it's a harder problem than Hall realizes. The
reason the young care so much about prestige is that the people
they want to impress are not very discerning.

[6]
This is isomorphic to the principle that you should prevent
your beliefs about how things are from being contaminated by how
you wish they were. Most people let them mix pretty promiscuously.
The continuing popularity of religion is the most visible index of
that.

[7]
A more accurate metaphor would be to say that the graph of jobs
is not very well connected.

Thanks to Trevor Blackwell, Dan Friedman, Sarah Harlin,
Jessica Livingston, Jackie McDonough, Robert Morris, Peter Norvig,
David Sloo, and Aaron Swartz
for reading drafts of this.
October 2015

When I talk to a startup that's been operating for more than 8 or
9 months, the first thing I want to know is almost always the same.
Assuming their expenses remain constant and their revenue growth
is what it has been over the last several months, do they make it to
profitability on the money they have left? Or to put it more
dramatically, by default do they live or die?

The startling thing is how often the founders themselves don't know.
Half the founders I talk to don't know whether they're default alive
or default dead.

If you're among that number, Trevor Blackwell has made a handy
calculator you can use to find out.
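The arithmetic behind the question is simple enough to sketch. Below is a minimal, illustrative version in Python, assuming flat expenses and revenue that compounds monthly at its recent growth rate, as described above. The function name, the variable names, and the 50-year cutoff are assumptions of this sketch, not the internals of Trevor's actual calculator.

```python
# A rough sketch of the default-alive test: does revenue catch up to
# (constant) expenses before the money in the bank runs out?

def default_alive(cash, monthly_expenses, monthly_revenue, monthly_growth):
    """Return True if the startup reaches profitability on its current cash.

    cash             -- money in the bank today
    monthly_expenses -- assumed to remain constant
    monthly_revenue  -- revenue this month
    monthly_growth   -- recent month-over-month growth rate (0.10 = 10%)
    """
    for _ in range(600):  # stop after 50 years; effectively "never"
        if monthly_revenue >= monthly_expenses:
            return True   # profitable with money left: default alive
        cash -= monthly_expenses - monthly_revenue  # burn the shortfall
        if cash < 0:
            return False  # out of money first: default dead
        monthly_revenue *= 1 + monthly_growth  # compound the growth
    return False

# Example: $400k in the bank, $80k/month expenses, $20k/month revenue
# growing 15% a month.
print(default_alive(400_000, 80_000, 20_000, 0.15))  # True, barely
```

With those example numbers the startup squeaks through to profitability with a few thousand dollars to spare; at 14% monthly growth the same company is default dead. Part of the reason to ask early is that the answer can be this sensitive.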
The reason I want to know first whether a startup is default alive
or default dead is that the rest of the conversation depends on the
answer. If the company is default alive, we can talk about ambitious
new things they could do. If it's default dead, we probably need
to talk about how to save it. We know the current trajectory ends
badly. How can they get off that trajectory?

Why do so few founders know whether they're default alive or default
dead? Mainly, I think, because they're not used to asking that.
It's not a question that makes sense to ask early on, any more than
it makes sense to ask a 3 year old how he plans to support
himself. But as the company grows older, the question switches from
meaningless to critical. That kind of switch often takes people
by surprise.

I propose the following solution: instead of starting to ask too
late whether you're default alive or default dead, start asking too
early. It's hard to say precisely when the question switches
polarity. But it's probably not that dangerous to start worrying
too early that you're default dead, whereas it's very dangerous to
start worrying too late.

The reason is a phenomenon I wrote about earlier: the
fatal pinch.
The fatal pinch is default dead + slow growth + not enough
time to fix it. And the way founders end up in it is by not realizing
that's where they're headed.

There is another reason founders don't ask themselves whether they're
default alive or default dead: they assume it will be easy to raise
more money. But that assumption is often false, and worse still, the
more you depend on it, the falser it becomes.

Maybe it will help to separate facts from hopes. Instead of thinking
of the future with vague optimism, explicitly separate the components.
Say "We're default dead, but we're counting on investors to save
us." Maybe as you say that, it will set off the same alarms in your
head that it does in mine. And if you set off the alarms sufficiently
early, you may be able to avoid the fatal pinch.

It would be safe to be default dead if you could count on investors
saving you. As a rule their interest is a function of
growth. If you have steep revenue growth, say over 5x a year, you
can start to count on investors being interested even if you're not
profitable.
[1]
But investors are so fickle that you can never
do more than start to count on them. Sometimes something about your
business will spook investors even if your growth is great. So no
matter how good your growth is, you can never safely treat fundraising
as more than a plan A. You should always have a plan B as well: you
should know (as in write down) precisely what you'll need to do to
survive if you can't raise more money, and precisely when you'll
have to switch to plan B if plan A isn't working.

In any case, growing fast versus operating cheaply is far from the
sharp dichotomy many founders assume it to be. In practice there
is surprisingly little connection between how much a startup spends
and how fast it grows. When a startup grows fast, it's usually
because the product hits a nerve, in the sense of hitting some big
need straight on. When a startup spends a lot, it's usually because
the product is expensive to develop or sell, or simply because
they're wasteful.

If you're paying attention, you'll be asking at this point not just
how to avoid the fatal pinch, but how to avoid being default dead.
That one is easy: don't hire too fast. Hiring too fast is by far
the biggest killer of startups that raise money.
[2]

Founders tell themselves they need to hire in order to grow. But
most err on the side of overestimating this need rather than
underestimating it. Why? Partly because there's so much work to
do. Naive founders think that if they can just hire enough
people, it will all get done. Partly because successful startups have
lots of employees, so it seems like that's what one does in order
to be successful. In fact the large staffs of successful startups
are probably more the effect of growth than the cause. And
partly because when founders have slow growth they don't want to
face what is usually the real reason: the product is not appealing
enough.

Plus founders who've just raised money are often encouraged to
overhire by the VCs who funded them. Kill-or-cure strategies are
optimal for VCs because they're protected by the portfolio effect.
VCs want to blow you up, in one sense of the phrase or the other.
But as a founder your incentives are different. You want above all
to survive.
[3]

Here's a common way startups die. They make something moderately
appealing and have decent initial growth. They raise their first
round fairly easily, because the founders seem smart and the idea
sounds plausible. But because the product is only moderately
appealing, growth is ok but not great. The founders convince
themselves that hiring a bunch of people is the way to boost growth.
Their investors agree. But (because the product is only moderately
appealing) the growth never comes. Now they're rapidly running out
of runway. They hope further investment will save them. But because
they have high expenses and slow growth, they're now unappealing
to investors. They're unable to raise more, and the company dies.

What the company should have done is address the fundamental problem:
that the product is only moderately appealing. Hiring people is
rarely the way to fix that. More often than not it makes it harder.
At this early stage, the product needs to evolve more than to be
"built out," and that's usually easier with fewer people.
[4]

Asking whether you're default alive or default dead may save you
from this. Maybe the alarm bells it sets off will counteract the
forces that push you to overhire. Instead you'll be compelled to
seek growth in other ways. For example, by doing
things that don't scale, or by redesigning the product in the
way only founders can.
And for many if not most startups, these paths to growth will be
the ones that actually work.

Airbnb waited 4 months after raising money at the end of Y Combinator
before they hired their first employee. In the meantime the founders
were terribly overworked. But they were overworked evolving Airbnb
into the astonishingly successful organism it is now.

Notes

[1]
Steep usage growth will also interest investors. Revenue
will ultimately be a constant multiple of usage, so x% usage growth
predicts x% revenue growth. But in practice investors discount
merely predicted revenue, so if you're measuring usage you need a
higher growth rate to impress investors.

[2]
Startups that don't raise money are saved from hiring too
fast because they can't afford to. But that doesn't mean you should
avoid raising money in order to avoid this problem, any more than
that total abstinence is the only way to avoid becoming an alcoholic.

[3]
I would not be surprised if VCs' tendency to push founders
to overhire is not even in their own interest. They don't know how
many of the companies that get killed by overspending might have
done well if they'd survived. My guess is a significant number.

[4]
After reading a draft, Sam Altman wrote:

"I think you should make the hiring point more strongly. I think
it's roughly correct to say that YC's most successful companies
have never been the fastest to hire, and one of the marks of a great
founder is being able to resist this urge."

Paul Buchheit adds:

"A related problem that I see a lot is premature scaling—founders
take a small business that isn't really working (bad unit economics,
typically) and then scale it up because they want impressive growth
numbers. This is similar to over-hiring in that it makes the business
much harder to fix once it's big, plus they are bleeding cash really
fast."
Thanks to Sam Altman, Paul Buchheit, Joe Gebbia, Jessica Livingston,
and Geoff Ralston for reading drafts of this.
September 2017

The most valuable insights are both general and surprising.
F = ma for example. But general and surprising is a hard
combination to achieve. That territory tends to be picked
clean, precisely because those insights are so valuable.

Ordinarily, the best that people can do is one without the
other: either surprising without being general (e.g.
gossip), or general without being surprising (e.g.
platitudes).

Where things get interesting is the moderately valuable
insights. You get those from small additions of whichever
quality was missing. The more common case is a small
addition of generality: a piece of gossip that's more than
just gossip, because it teaches something interesting about
the world. But another less common approach is to focus on
the most general ideas and see if you can find something new
to say about them. Because these start out so general, you
only need a small delta of novelty to produce a useful
insight.

A small delta of novelty is all you'll be able to get most
of the time. Which means if you take this route, your ideas
will seem a lot like ones that already exist. Sometimes
you'll find you've merely rediscovered an idea that did
already exist. But don't be discouraged. Remember the huge
multiplier that kicks in when you do manage to think of
something even a little new.

Corollary: the more general the ideas you're talking about,
the less you should worry about repeating yourself. If you
write enough, it's inevitable you will. Your brain is much
the same from year to year and so are the stimuli that hit
it. I feel slightly bad when I find I've said something
close to what I've said before, as if I were plagiarizing
myself. But rationally one shouldn't. You won't say
something exactly the same way the second time, and that
variation increases the chance you'll get that tiny but
critical delta of novelty.

And of course, ideas beget ideas. (That sounds
familiar.)
An idea with a small amount of novelty could lead to one
with more. But only if you keep going. So it's doubly
important not to let yourself be discouraged by people who
say there's not much new about something you've discovered.
"Not much new" is a real achievement when you're talking
about the most general ideas.

It's not true that there's nothing new under the sun. There
are some domains where there's almost nothing new. But
there's a big difference between nothing and almost nothing,
when it's multiplied by the area under the sun.
Thanks to Sam Altman, Patrick Collison, and Jessica
Livingston for reading drafts of this.
April 2009

I usually avoid politics, but since we now seem to have an administration that's open to suggestions, I'm going to risk making one. The single biggest thing the government could do to increase the number of startups in this country is a policy that would cost nothing: establish a new class of visa for startup founders.

The biggest constraint on the number of new startups that get created in the US is not tax policy or employment law or even Sarbanes-Oxley. It's that we won't let the people who want to start them into the country.

Letting just 10,000 startup founders into the country each year could have a visible effect on the economy. If we assume 4 people per startup, which is probably an overestimate, that's 2500 new companies. Each year. They wouldn't all grow as big as Google, but out of 2500 some would come close.

By definition these 10,000 founders wouldn't be taking jobs from Americans: it could be part of the terms of the visa that they couldn't work for existing companies, only new ones they'd founded. In fact they'd cause there to be
more jobs for Americans, because the companies they started would hire more employees as they grew.

The tricky part might seem to be how one defined a startup. But that could be solved quite easily: let the market decide. Startup investors work hard to find the best startups. The government could not do better than to piggyback on their expertise, and use investment by recognized startup investors as the test of whether a company was a real startup.

How would the government decide who's a startup investor? The same way they decide what counts as a university for student visas. We'll establish our own accreditation procedure. We know who one another are.

10,000 people is a drop in the bucket by immigration standards, but would represent a huge increase in the pool of startup founders. I think this would have such a visible effect on the economy that it would make the legislator who introduced the bill famous. The only way to know for sure would be to try it, and that would cost practically nothing.
Thanks to Trevor Blackwell, Paul Buchheit, Jeff Clavier, David Hornik, Jessica Livingston, Greg Mcadoo, Aydin Senkut, and Fred Wilson for reading drafts of this.
January 2012

A few hours before the Yahoo acquisition was announced in June 1998
I took a snapshot of Viaweb's
site. I thought it might be interesting to look at one day.

The first thing one notices is how tiny the pages are. Screens
were a lot smaller in 1998. If I remember correctly, our frontpage
used to just fit in the size window people typically used then.

Browsers then (IE 6 was still 3 years in the future) had few fonts
and they weren't antialiased. If you wanted to make pages that
looked good, you had to render display text as images.

You may notice a certain similarity between the Viaweb and Y Combinator logos. We did that
as an inside joke when we started YC. Considering how basic a red
circle is, it seemed surprising to me when we started Viaweb how
few other companies used one as their logo. A bit later I realized
why.

On the Company
page you'll notice a mysterious individual called John McArtyem.
Robert Morris (aka Rtm) was so publicity averse after the
Worm that he
didn't want his name on the site. I managed to get him to agree
to a compromise: we could use his bio but not his name. He has
since relaxed a bit
on that point.

Trevor graduated at about the same time the acquisition closed, so in the
course of 4 days he went from impecunious grad student to millionaire
PhD. The culmination of my career as a writer of press releases
was one celebrating
his graduation, illustrated with a drawing I did of him during
a meeting.

(Trevor also appears as Trevino
Bagwell in our directory of web designers merchants could hire
to build stores for them. We inserted him as a ringer in case some
competitor tried to spam our web designers. We assumed his logo
would deter any actual customers, but it did not.)

Back in the 90s, to get users you had to get mentioned in magazines
and newspapers. There were not the same ways to get found online
that there are today. So we used to pay a PR
firm $16,000 a month to get us mentioned in the press. Fortunately
reporters liked
us.

In our advice about
getting traffic from search engines (I don't think the term SEO
had been coined yet), we say there are only 7 that matter: Yahoo,
AltaVista, Excite, WebCrawler, InfoSeek, Lycos, and HotBot. Notice
anything missing? Google was incorporated that September.

We supported online transactions via a company called
Cybercash,
since if we lacked that feature we'd have gotten beaten up in product
comparisons. But Cybercash was so bad and most stores' order volumes
were so low that it was better if merchants processed orders like phone orders. We had a page in our site trying to talk merchants
out of doing real time authorizations.

The whole site was organized like a funnel, directing people to the
test drive.
It was a novel thing to be able to try out software online. We put
cgi-bin in our dynamic urls to fool competitors about how our
software worked.

We had some well
known users. Needless to say, Frederick's of Hollywood got the
most traffic. We charged a flat fee of $300/month for big stores,
so it was a little alarming to have users who got lots of traffic.
I once calculated how much Frederick's was costing us in bandwidth,
and it was about $300/month.

Since we hosted all the stores, which together were getting just
over 10 million page views per month in June 1998, we consumed what
at the time seemed a lot of bandwidth. We had 2 T1s (3 Mb/sec)
coming into our offices. In those days there was no AWS. Even
colocating servers seemed too risky, considering how often things
went wrong with them. So we had our servers in our offices. Or
more precisely, in Trevor's office. In return for the unique
privilege of sharing his office with no other humans, he had to
share it with 6 shrieking tower servers. His office was nicknamed
the Hot Tub on account of the heat they generated. Most days his
stack of window air conditioners could keep up.

For describing pages, we had a template language called RTML, which
supposedly stood for something, but which in fact I named after
Rtm. RTML was Common Lisp augmented by some macros and libraries,
and concealed under a structure editor that made it look like it
had syntax.

Since we did continuous releases, our software didn't actually have
versions. But in those days the trade press expected versions, so
we made them up. If we wanted to get lots of attention, we made
the version number an
integer. That "version 4.0" icon was generated by our own
button generator, incidentally. The whole Viaweb site was made
with our software, even though it wasn't an online store, because
we wanted to experience what our users did.

At the end of 1997, we released a general purpose shopping search
engine called Shopfind. It
was pretty advanced for the time. It had a programmable crawler
that could crawl most of the different stores online and pick out
the products.