On Twitter, I’ve been chatting with my friend Julia Galef about tensions between thinking creatively and thinking in a way that reduces error.
Of course, all other things being equal, I’m in favour of reducing error in our thinking!
However, all other things are not always equal.
In particular, I believe “there’s a tension, too, between behaviours which maximize accuracy & which maximize creativity… A lot of important truths come from v. irrational ppl.”
Julia has summarized some of her thinking in a blog post, where she disagrees, writing: “I totally agree that we need more experimentation with ‘crazy ideas’! I’m just skeptical that rationality is, on the margin, in tension with that goal.”
Before getting to Julia’s arguments, I want to flesh out the idea of a tension between maximizing creativity and maximizing accuracy.
Consider the following statement of Feynman’s, on the need to fool himself into believing that he had a creative edge in his work. He’s talking about his early ideas on how to develop a theory of electrons and light (which became, after many years, quantum electrodynamics). The statement is a little jarring to modern sensibilities, but please look past that to the idea he’s trying to convey:
I told myself [of his competitors]: “They’re on the wrong track: I’ve got the track!” Now, in the end, I had to give up those ideas and go over to their ideas of retarded action and so on – my original idea of electrons not acting on themselves disappeared, but because I had been working so hard I found something. So, as long as I can drive myself one way or the other, it’s okay. Even if it’s an illusion, it still makes me go, and this is the kind of thing that keeps me going through the depths.
It’s like the African savages who are going into battle – first they have to gather around and beat drums and jump up and down to build up their energy to fight. I feel the same way, building up my energy by talking to myself and telling myself, “They are trying to do it this way, I’m going to do it that way” and then I get excited and I can go back to work again.
Many of the most creative scientists I know are extremely determined people, willing to explore unusual positions for years. Sometimes, those positions are well grounded. And sometimes, even well after the fact, it’s obvious they were fooling themselves, but somehow their early errors helped them find their way to the truth. They were, to use the mathematician Goro Shimura’s phrase, “gifted with the special capability of making many mistakes, mostly in the right direction”.
An extreme example is the physicist Joseph Weber, who pioneered gravitational wave astronomy. The verdict of both his contemporaries and of history is that he was fooling himself: his systems simply didn’t work the way he thought. On the other hand, even though he fooled himself for decades, the principals on the (successful!) LIGO project have repeatedly acknowledged that his work was a major stimulus for them to work on finding gravitational waves. In retrospect, it’s difficult to be anything other than glad that Weber clung so tenaciously to his erroneous beliefs.
For me, what matters here is that (a) much of Weber’s work was based on an unreasonable belief; and (b) on net, it helped speed up important discoveries.
Weber demonstrates my point in an extreme form. He was outright wrong, and remained so, and yet his erroneous example still served a useful purpose, helping inspire others to pursue ideas that eventually worked. In some sense, this is a collective (rather than individual) version of my point. More common is the case – like Feynman – of a person who may cling to mistaken beliefs for a long period, but ultimately uses that as a bridge to new discovery.
Turning to Julia’s post, she responds to my argument with: “In general, I think overconfidence stifles experimentation”, and argues that the great majority of people in society reject “crazy” ideas – say, seasteading – because they’re overconfident in conventional wisdom.
I agree that people often mistakenly reject unusual ideas because they’re overconfident in the conventional wisdom.
However, I don’t think it’s relevant to my argument. Being overconfident in beliefs that most people hold is not at all the same as being overconfident in beliefs that few people hold.
You may wonder whether the underlying cognitive mechanisms are the same – whether there’s some kind of broad disposition toward overconfidence. But if that were the case, you’d expect someone overconfident in their own unusual ideas to also be overconfident, in other areas, in the conventional wisdom.
However, my anecdotal experience is that a colleague willing to pursue unusual ideas of their own is often particularly sympathetic to unusual ideas from other people in other areas. This suggests that being overconfident in your own crazy ideas isn’t likely to stifle other experimentation.
Julia also suggests several variants on the “strategy of temporarily suspending your disbelief and throwing yourself headlong into something for a while, allowing your emotional state to be as if you were 100% confident.”
In a sense, Feynman and Weber were practicing an extreme version of this strategy. I don’t know Weber’s work well, but it’s notable that in the details of Feynman’s work he was good at ferreting out error, and not fooling himself. He wasn’t always rigorous – mathematicians have, for instance, spent decades trying to make the path integral rigorous – but there was usually a strong core argument. Indeed, Feynman delivered a very stimulating speech on the value of careful thought in scientific work.
How can this careful approach to the details of argument be reconciled with his remarks about the need to fool yourself in creative work?
I never met Feynman, and can’t say how he reconciled the two points of view. But my own approach in creative work, and I believe many others also take this approach, is to carve out a sort of creative cocoon around nascent ideas.
Consider Apple designer Jony Ive’s remarks at a memorial after Steve Jobs’ death:
Steve used to say to me — and he used to say this a lot — “Hey Jony, here’s a dopey idea.”
And sometimes they were. Really dopey. Sometimes they were truly dreadful. But sometimes they took the air from the room and they left us both completely silent. Bold, crazy, magnificent ideas. Or quiet simple ones, which in their subtlety, their detail, they were utterly profound. And just as Steve loved ideas, and loved making stuff, he treated the process of creativity with a rare and a wonderful reverence. You see, I think he better than anyone understood that while ideas ultimately can be so powerful, they begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished.
To be creative, you need to recognize those barely formed thoughts, thoughts which are usually wrong and poorly formed in many ways, but which have some kernel of originality and importance and truth. And if they seem important enough to be worth pursuing, you construct a creative cocoon around them, a set of stories you tell yourself to protect the idea not just from others, but from your own self-doubts. The purpose of those stories isn’t to be an airtight defence. It’s to give you the confidence to nurture the idea, possibly for years, to find out if there’s something really there.
And so, even someone who has extremely high standards for the final details of their work, may have an important component to their thinking which relies on rather woolly arguments. And they may well need to cling to that cocoon. Perhaps other approaches are possible. But my own experience is that this is often the case.
Julia finishes her post with:
One last point: Even if it turned out to be true that irrationality is necessary for innovators, that’s only a weak defense of your original claim, which was that I’m significantly overrating the value of rationality in general. Remember, “coming up with brilliant new ideas” is just one domain in which we could evaluate the potential value-add of increased rationality. There are lots of other domains to consider, such as designing policy, allocating philanthropic funds, military strategy, etc. We could certainly talk about those separately; for now, I’m just noting that you made this original claim about the dubious value of rationality in general, but then your argument focused on this one particular domain, innovation.
To clarify, I never intended my claim to apply to rationality in general: the tension I see is specifically between creativity and accuracy.
That said, this tension does leak into other areas.
If you’re a funder, say, trying to determine what to fund in AI research, you go and talk to AI experts. And many of those people are likely to have cultivated their own creative cocoons, which will inform their remarks. How a funder should deal with that is a subject for a separate essay. My point here is simply that this process of creative cocooning isn’t easily untangled from things like the evaluation of work.