A slight edit and update of my May 2005 essay, "Why The Informationist," to accompany the blog's 2013 relaunch.
This blog’s seeds were planted in my mind in 1980, during my freshman orientation at Columbia. The core curriculum of Columbia’s undergraduate program introduced us to the great formative works of Western Civilization. In my era, though, required classes were considered passé. Most universities had dispensed with the notion entirely, and those few who clung to such outdated educational notions felt compelled to justify their tenacity. At my orientation, one of Columbia’s many professors dedicated to the program spoke to us about its rich history and tradition. He explained that “Contemporary Civilization” began as an inquiry into the causes of the First World War. He paused, looked at the audience, and sympathized: “Now, you’re probably thinking: I may not know what caused World War I, but I’m pretty sure that it wasn’t Plato.” I was hooked. Over the next few years, it all flowed together in my mind—some from my coursework, some from my own reading—as Plato, Jesus, Rav Ashi, Descartes, Locke, Jefferson, Smith, Marx, Mill, Darwin, von Neumann, and numerous others defined the civilization into which I was born.

About the same time, I discovered computers, computing, and computer science. I was never much of a gadget freak, but I fell in love with the stark beauty of algorithmic logic. The ability to focus entirely on process, and to convert an arbitrary set of inputs into logically necessary outputs just felt right. It struck me as the inherently right way to think about complex issues and to solve challenging problems. I next discovered heuristic programming, artificial intelligence, and Bayesian statistics, three related fields devoted to expanding algorithmic thinking from the logically necessary to the merely likely. Algorithmic thinking gave me a lens through which to view the philosophical big picture. It dawned upon me that every one of history’s great philosophers had asserted the commonality of the various areas of human inquiry. Some found the common source in theology, some in physics, some in biology, and some in economics—but all attempted to persuade their readers of the centrality of their preferred source. I also realized something else: our philosophers became increasingly formal over time. Plato hung his insights on a weak framework; Aristotle did much better on that account. Jesus, Paul, and Augustine were informal; Aquinas restored formalism. Maimonides did the same for Rav Ashi. Jefferson operationalized Locke. And so on and so on and so on.
Throughout most of this history, two unanswered questions hung in the air: How much formality is necessary to gain the insights we seek? and How much formality is possible? In the early 20th Century, Russell and Whitehead set out to solve these dilemmas once and for all. They would devise a set of logical rules sufficiently expressive and formal to reduce all human reasoning to mathematical formulae. By the middle of the 20th Century, though, we understood that their goals were unachievable. Gödel’s incompleteness theorem, Heisenberg’s uncertainty principle, and numerous related discoveries taught us that no matter how hard we tried, some things would elude formal treatment and remain unknowable. Some like to attribute the unknowable to God, others prefer to assign it less weighty titles, but one way or another, the ancient quest for the Universal Explanation of Everything came to a screeching halt. We were just going to have to learn to live with a certain amount of uncertainty.
Just about the time that we achieved that insight, technologists invented the digital computer. In short order (less than a decade into the computer age), a number of these technologists glommed onto the idea of growing their “computing machines” into “thinking machines.” Our ancient philosophical quest was reborn. Rather than trying to explain everything, it would work backwards. The “knowledge representation” tools of AI defined the starting point by telling us how formal our treatment had to be. Any area of human inquiry that we could translate into a formal knowledge base could drift into the realm of our computational thinking machines. Algorithms, typically augmented by probabilistic heuristics, could then manipulate the basic represented information and unlock its implications. But this newborn approach could prove to be useful even for problems that eluded that level of formalization, for the simple reason that it imposed a new discipline on our own thinking. Algorithms familiarized many of us with the centrality of logical thinking derived from compact axiom sets—an approach that had rarely before extended beyond academic mathematics.
I spent most of the 1980s getting these various observations and strands of my thinking to gel into something coherent. I saw AI, probability, algorithms, and logic combining into a powerful philosophical methodology with the potential to change the world. I wanted to understand how this methodology could help people make better decisions, businesses devise better strategies, and governments craft better public policies. I kept seeing ways that this methodology could inform my own areas of substantive interests—religion, politics, and foreign affairs (or if you prefer, God, America, and the World). I saw a world undergoing a confusing and often painful transition from industrial age to information age grasping for a way to understand the scope of the consequent changes. At one level, it all seemed so simple—we had computers. At a deeper level, though, we had entered a profoundly new era.
A single sentence defines the information age and reveals how it differs from all earlier epochs: Information is abundant, easy to collect and manipulate, and inexpensive to share. Everything else derives from this single change. Never before has any individual, no matter how erudite, had as ready access to as much information about as many topics as does the least-connected member of the information age. We, a thinking species mired forever in a world of information scarcity, have suddenly found ourselves thrust into a world of information abundance. That simple twist changes everything. Every aspect of life that involves the collection, combination, or communication of information—in other words, every aspect of life—must change to accommodate our new reality. By the mid-1980s I found myself thinking: “I may not know what caused the information age, but I’m pretty sure that it wasn’t Plato.” Unless maybe it was…
By the end of the decade (I can’t remember exactly when), I realized that I had derived my own philosophical approach: an information-centered, probabilistic, algorithmic view of the world. I began searching far and wide for others who had derived and developed similar approaches, coined the term “informationism,” declared myself “an informationist,” and proceeded to do nothing with either label until today. I ran a quick Google search to see if someone else had snagged my words in the intervening years. They don’t appear to have assumed a conventional meaning likely to cause confusion or anguish. And quite frankly, I still like them.
So why “The Informationist?”
There’s nothing new under the sun. (That’s Ecclesiastes 1:9, for those keeping score at home). As the years have gone by, I’ve expanded both my training and my reading. I have discovered a number of intellectual traditions whose axioms I generally accept—and many of whose fundamental insights I have derived on my own. In particular, I fell easily and naturally into the netherworld at the intersection of cognitive psychology, economics, law, and management. I found many of my own thoughts reflected in the work of the classic economic liberals; the psychologists who study heuristics and biases; the decision analysts who apply Bayesian probability to formal modeling and decision-making; and the scholars who defined the “law and economics” school of analysis. I worked my way into each of these fields, learned their basics, and made my own modest contributions. I consider them all to be dead-on right about many of their essential claims.
At the same time, though, I remain opposed to fundamentalism in all of its forms. I refuse to adopt every position that a brilliant, insightful writer advocated just because I find his or her writings to be brilliant and insightful. Besides, the mere fact that they wrote first means that I have access to more data than they did. I can see what they saw, contemplate what they said, and see what happened next. That should buy me something—and I tend to use that which I’ve bought.
To pick but one example, I’ve always described myself as a liberal—a label that I wear proudly, déclassé or not. In recent years, I have begun to plumb what that actually means. I would like to understand why many of my contemporaries who call themselves liberals are really illiberal social democrats, while those who consider themselves devotees of the great liberal writers tend to call themselves conservatives and ally themselves with anti-liberals of various stripes. Though I do have some thoughts about the linguistic confusion, I defer them to a different essay. For present purposes, I have distilled what liberalism means to me: a focus on individual freedom.
In practical terms, this belief makes me unabashedly pro-market and pro-democracy. I see the government’s primary role as the developer of infrastructures within which free citizens can make meaningful decisions. As a general rule, I support policies that spread opportunity, that increase choice, and/or that provide the information necessary for choices to have meaning; I typically oppose policies that do none of the three. More specifically, I advocate a foreign policy based upon muscular liberalism, free trade, and open borders; a simplified tax policy that minimizes distortions and promotes fairness by broadening the tax base; and social policies that recognize and support the individual’s right to make private decisions. These are all positions that enable individual choice. I also advocate policies that make individual choice either available or meaningful. I favor social safety nets that help people temper their natural risk aversion by avoiding the full consequences of catastrophe; investments in infrastructures that free private actors to improve the efficiency of their transactions; and regulations that improve markets by increasing transparency, information flow, and robust competition. In the areas closest to my own specialization, I advocate regulations that promote innovation by harnessing technology and oppose those that lock in obsolete technologies and the business models based upon them.
The first subset of these positions likely brands me as something of a “classical liberal” or a “19th Century liberal.” The second subset probably reveals a broader definition of necessary and enabling infrastructures than many earlier liberal writers would have tolerated. I see this broadening as an evolution of liberal ideas appropriate for the networked world of the information age. Those who prefer a more fundamentalist reading of classic liberalism would likely disagree. Which of us is correct? Perhaps not I. Perhaps I am not technically a classical liberal at all, but rather something of a fellow traveler. According to Ludwig von Mises, “liberalism is applied economics; it is social and political policy based on a scientific foundation.” I find myself in only partial agreement. Though I believe strongly in social and political policy based on a scientific foundation, the science upon which I draw is not economics—at least not in its strictest sense. My political philosophy derives from applied information science. Perhaps that’s why I needed to coin a new word. Could any word be more apropos than “informationism?” Informationism is applied information science; it is social and political policy based on a scientific foundation.
So why “The Informationist?”
After many years of publishing scholarly journal articles while refusing to do the legwork necessary to get my general-interest essays published, I finally bit the bullet a few years ago and poured myself into a project geared toward a broad audience. Digital Phoenix: Why the Information Economy Collapsed and How it Will Rise Again (MIT Press, 2005) hit the stands on May 1, 2005. Digital Phoenix is an informationist book, but I tried to keep the philosophy in the background. By and large, it tells the formative stories of the information economy: the Internet investment bubble, the Microsoft trial, the rise and fall of Napster, and the advent of open source. Along the way, it describes the fundamentals of intellectual property and antitrust law; industrial organization and network economics; and artificial intelligence and software engineering that make these formative stories intelligible. Nevertheless, I tried hard to make all of this material accessible to a general audience. A large part of my pride in the book stems from my conviction that I succeeded. Now that the book is available, I may soon learn whether or not such pride is warranted. More to the point though, Digital Phoenix motivated me to get my act together and to launch The Informationist.
That work continued through my publication of The Secret Circuit: The Little-Known Court where the Rules of the Information Economy Unfold (Rowman & Littlefield, 2007). That book played off my year clerking at the United States Court of Appeals for the Federal Circuit (CAFC). The history of that “little known” court, best known for hearing all patent appeals, plays a prominent part in the book. The rest of the book addresses the court’s docket: patents, international trade, government work, and a few other topics, or as I like to think of it, innovation, globalization, and government. If any of those things play important roles in your industry, you need to know more about the CAFC. The Secret Circuit is also an informationist book. Part of its approach teases out the underlying policy objectives and asks: are we working toward or away from desirable public policy?
The work also continued in the blog I maintained as "The Informationist" for more than eight years. Anyone interested in the archive of the thoughts amassed there should feel free to request a password. But that blog was emphatically political in its orientation. As the years went by, I began to wonder whether there was a point in publishing analyses of subject matter as diffuse as foreign policy, economics, and American politics. By mid-2013, I decided that it was time to narrow my focus. This relaunch as Informationism represents my attempt to narrow my subject matter to intellectual property and the technology economy. I may deviate at times, but I promise to try to stay focused.
So why “Informationism?”
Ecclesiastes answered that one as well, right up front in 1:2: “Vanity of vanities; all is vanity.”