In 1987, I wrote my Ph.D. dissertation on heuristic search. My specific topic was the use of statistical decision-making in two-player games. I drew concrete examples from Othello and Chess, ran simulation studies, and provided a theoretical (mathematical) proof of my thesis.
My basic thesis was simple: In the parts of a game that are too complex to search in their entirety, the best move to make against an expert opponent is the same as the best move to make against a random player.
Statistically speaking, it boils down to "If you can't be certain about the future, maximizing expected value is a good idea."
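To make the idea concrete, here is a minimal sketch of expected-value move selection: rate each candidate move by the average result of many random playouts, then pick the move with the best average. This is an illustration of the principle only, not the dissertation's actual algorithm, and the game interface it assumes (is_terminal, legal_moves, apply, result, player_to_move) is hypothetical.

```python
import random

def expected_outcome(state, player, playouts=200):
    """Estimate how good `state` is for `player` by averaging the
    results of many games played out with uniformly random moves."""
    total = 0.0
    for _ in range(playouts):
        s = state
        while not s.is_terminal():
            s = s.apply(random.choice(s.legal_moves()))
        total += s.result(player)   # +1 win, 0 draw, -1 loss for `player`
    return total / playouts

def best_move(state, playouts=200):
    """Choose the move leading to the highest expected outcome for the
    side to move -- expected-value play rather than worst-case minimax."""
    me = state.player_to_move
    return max(state.legal_moves(),
               key=lambda m: expected_outcome(state.apply(m), me, playouts))
```

The point of the sketch is the contrast with classical minimax: instead of assuming a worst-case expert opponent at every node, the position is scored by its expected value under random continuation, and the claim was that in positions too deep to search exhaustively, the two rankings of moves coincide.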
The chess programming community was up in arms. The mere concept that random play and expert play bore any similarity to each other was heretical. The luminaries of the time, including much of the team behind Deep Blue, insisted that it could not possibly be true; only deep, chess-specific knowledge, they maintained, could defeat a champion. (Personally, I thought the thesis was too blindingly obvious to warrant a Ph.D., but I guess I was wrong.)
Turns out, they were wrong. Statistics and I were right.
See here: Why Bruce was Right (not the actual headline).
Lesson of the day: When conventional wisdom runs against mathematics, stand with conventional wisdom if you want to get promoted. Stand with mathematics if you want to be right.