In 2005, the writer David Foster Wallace opened his now-famous commencement address at Kenyon College with a short parable:
There are these two young fish swimming along, and they happen to meet an older fish swimming the other way, who nods at them and says, ‘Morning, boys, how’s the water?’ And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, ‘What the hell is water?’
Wallace assures his audience that he’s not about to present himself as “a wise old fish” waxing philosophic about water to the younger fish. Instead, he clarifies that the point of this story — and indeed, his speech — is that “the most obvious, ubiquitous, important realities” are often the ones that are hardest to see.
The water parable illustrates the concept of mental models, or the lens through which we understand the world. Our brains process enormous amounts of data — more than we can possibly keep straight. Mental models distill information into a form we can understand.
But here’s the thing: Most of us have only a few mental models at our disposal. As the adage goes, “To the man with a hammer, everything looks like a nail.” In other words, we see things only in the context of what we know. Farnam Street founder Shane Parrish sums this up in his book, The Great Mental Models:
Most of us study something specific and don’t get exposure to the big ideas of other disciplines. We don’t develop the multidisciplinary mindset that we need to accurately see a problem. And because we don’t have the right models to understand the situation, we overuse the models we do have, and use them even when they don’t belong.
Expanding your toolkit into what billionaire investor Charlie Munger calls “a latticework of mental models” will help you comprehend the world in a more multidimensional way, and make better decisions accordingly.
First principles
The idea behind first principles is to break down complicated problems into individual elements, then rebuild them from the ground up. One early adopter of first principles was Aristotle, who said:
In every systematic inquiry (methodos) where there are first principles, or causes, or elements, knowledge and science result from acquiring knowledge of these; for we think we know something just in case we acquire knowledge of the primary causes, the primary first principles, all the way to the elements.
A more modern example of a first principles thinker is Elon Musk, who, in the course of developing SpaceX, wasn’t afraid to ignore decades of conventional thinking when it came to space flight:
“I think people’s thinking process is too bound by convention or analogy to prior experiences,” Musk has said. “It’s rare that people try to think of something on a first principles basis. They’ll say, ‘We’ll do that because it’s always been done that way.’ Or they’ll not do it because ‘Well, nobody’s ever done that, so it must not be good.’ But that’s just a ridiculous way to think.”
Musk’s ability to shrug off pre-existing assumptions about space flight allowed him to reimagine what was possible — which, in his case, meant cutting the cost of rockets to a price that would have seemed incomprehensible just years before.
Circle of competence
“I’m no genius. I’m smart in spots — but I stay around those spots.” (IBM founder Tom Watson Sr.)
The circle of competence was originally conceived by Warren Buffett as a strategy to guide investment decisions — but it’s also highly relevant to business.
Each of us specializes in something, either through experience or study. The circle of competence entails recognizing our areas of expertise, as well as our limitations. And the truth is, we don’t actually need to understand every nook and cranny of a given situation.
I think about this constantly in the course of running my company, JotForm. With whatever task I’ve got ahead of me, I decide whether it falls within my circle of competence. Coding, for example, is a fairly specialized skill, one that I’ve cultivated over the years. Customer service, however, is much more broad-based. Knowing this has helped me figure out which tasks I should do myself, and which I should delegate to others better equipped to accomplish them.
Regret minimization
Developed by Amazon founder Jeff Bezos, regret minimization helps you frame your thinking beyond the immediate future, focusing instead on what will be most beneficial down the road. What are the projects that, at the end of your career, you’ll regret not having pursued?
Say, for instance, you’re offered a prestigious fellowship that’s going to mean logging several more hours of work each week. Should you take it? Well, it depends. Is the short-term stress worth it in the long run? Will dedicating extra time each week mean something relatively inconsequential, like cutting into your Netflix bingeing schedule? Or are the effects more serious, like missing out on an important vacation with your family?
Bezos used regret minimization to decide to leave his job and start the book selling business that eventually became Amazon. “I knew that when I was 80, I was not going to regret having tried this,” he explains. “I knew that if I failed, I wouldn’t regret that. But I knew the one thing I might regret was not ever having tried.”
Inversion
Think about what you want. Then, think of the opposite of what you want. That’s inversion.
It might seem counterintuitive, but inversion relies on the idea that rather than striving for brilliance, we’re better off avoiding stupidity. Munger, a proponent of inversion, holds that difficult problems need to be looked at both forward and backward, because, he says, “many problems can’t be solved forward.”
Say you want to foster more creativity among your team. Thinking forward, you’d consider all the strategies you could implement to improve creativity. With inversion, on the other hand, you’d ask what is inhibiting creativity in the first place.
It sounds simple, but you might be surprised by how often you’ll notice a roadblock you otherwise wouldn’t have. Inverting a problem isn’t always going to solve it, but it will help you avoid unnecessary missteps along the way.
This is only a small sampling of the many, many mental models out there. Some, like the Feynman technique, are helpful for anyone who wants to really learn something; others have less universal applications. But ultimately, the more you know — the stronger your latticework — the fewer blind spots you’ll have.