Ray Dalio: Systemized and Computerized Decision Making
Guest Post
The following is an excerpt from Principles: Life & Work. I also recently discussed some of these same issues regarding the opportunities and risks of AI at Web Summit with Garry Kasparov, if you’re interested.
In the future, artificial intelligence will have a profound impact on how we make decisions in every aspect of our lives—especially when combined with the new era of radical transparency about people that’s already upon us. Right now, whether you like it or not, it is easy for anyone to access your digital data to learn a tremendous amount about what you’re like, and this data can be fed into computers that do everything from predicting what you’re likely to buy to inferring what you value in life. While this sounds scary to many people, at Bridgewater we have been combining radical transparency with algorithmic decision making for more than thirty years and have found that it produces remarkable results. In fact, I believe that it won’t be long before this kind of computerized decision making guides us nearly as much as our brains do now.
The concept of artificial intelligence is not new. Even back in the 1970s, when I first started experimenting with computerized decision making, it had already been around for nearly twenty years (the term “artificial intelligence” was first introduced in 1956 at a conference at Dartmouth College). While a lot has changed since then, the basic concepts remain the same.
To give you just one ultrasimple example of how computerized decision making works, let’s say you have two principles for heating your home: You want to turn the heat on when the temperature falls below 68 and you want to turn the heat off between midnight and 5:00 a.m. You can express the relationship between these criteria in a simple decision-making formula: If the temperature is less than 68 degrees and the time is not between midnight and 5:00 a.m., then turn on the heat. By gathering many such formulas, it’s possible to create a decision-making system that takes in data, applies and weighs the relevant criteria, and recommends a decision.
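The thermostat rule above maps directly onto code. Here is a minimal sketch in Python; the threshold and quiet hours come from the text, while the function and constant names are mine, chosen for illustration:

```python
from datetime import time

# The two-principle thermostat described above:
# heat on below 68 degrees, heat off between midnight and 5:00 a.m.
HEAT_THRESHOLD_F = 68
QUIET_START = time(0, 0)  # midnight
QUIET_END = time(5, 0)    # 5:00 a.m.

def heat_should_be_on(temp_f: float, now: time) -> bool:
    """Apply both criteria: cold enough, and outside the quiet hours."""
    in_quiet_hours = QUIET_START <= now < QUIET_END
    return temp_f < HEAT_THRESHOLD_F and not in_quiet_hours
```

At 66 degrees and 7:30 a.m. the rule turns the heat on; at 66 degrees and 2:00 a.m. it leaves the heat off.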
Specifying our investment decision-making criteria in algorithms and running historical data through them, or specifying our work principles in algorithms and using them to aid in management decision making, are just bigger and more complicated versions of that smart thermostat. They allow us to make more informed and less emotional decisions much faster than we could on our own.
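One way to picture “gathering many such formulas” into a system that takes in data, weighs the relevant criteria, and recommends a decision is a weighted rule list. The sketch below is hypothetical and not Bridgewater’s actual system; the criteria, weights, and names are invented for illustration:

```python
# Hypothetical rule-based recommender: each criterion scores the inputs,
# the scores are weighted and summed, and the total drives a recommendation.

def recommend(data, criteria):
    """criteria is a list of (weight, rule); each rule maps data to a
    score in [-1, 1]. Returns the weighted total and a recommendation."""
    total = sum(weight * rule(data) for weight, rule in criteria)
    return total, ("act" if total > 0 else "hold")

# Invented, investment-flavored criteria for illustration only.
criteria = [
    (0.5, lambda d: 1.0 if d["growth"] > 2.0 else -1.0),
    (0.3, lambda d: 1.0 if d["inflation"] < 3.0 else -1.0),
    (0.2, lambda d: 1.0 if d["sentiment"] > 0 else -1.0),
]

total, decision = recommend(
    {"growth": 2.5, "inflation": 2.0, "sentiment": -1}, criteria
)
# total is roughly 0.6 (0.5 + 0.3 - 0.2), so the system recommends "act"
```

The weights make the system’s reasoning explicit and auditable: you can see exactly which criteria drove the recommendation, which is the opposite of a black-box model.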
I believe that people will increasingly do this and that computer coding will become as essential as writing. In time, we will use machine assistants as much for decision making as we do for information gathering today. As these machines help us, they will learn about what we are like—what we value, what our strengths and weaknesses are—and they will be able to tailor the advice they give us by automatically seeking out the help of others who are strong where we are weak. It won’t be long before our machine assistants are speaking to others’ machine assistants and collaborating in this way. In fact, that’s beginning to happen already.
Imagine a world in which you can use technology to connect to a system in which you can input the issue you’re dealing with and have exchanges about what you should do and why with the highest-rated thinkers in the world. We’ll soon be able to do this. Before too long, you will be able to tap the highest-quality thinking on nearly every issue you face and get the guidance of a computerized system that weighs different points of view. For example, you will be able to ask what lifestyle or career you should choose given what you’re like, or how to best interact with specific people based on what they’re like. These innovations will help people get out of their own heads and unlock an incredibly powerful form of collective thinking. We are doing this now and have found it way better than traditional thinking.
While this kind of view often leads to talk of artificial intelligence competing with human intelligence, in my opinion human and artificial intelligence are far more likely to work together because that will produce the best results. It’ll be decades—and maybe never—before the computer can replicate many of the things that the brain can do in terms of imagination, synthesis, and creativity. That’s because the brain comes genetically programmed with millions of years of abilities honed through evolution. The “science” of decision making that underlies many computer systems remains much less valuable than the “art.” People still make the most important decisions better than computers do. To see this, you need look no further than the kinds of people who are uniquely successful. Software developers, mathematicians, and game-theory modelers aren’t running away with all the rewards; it is the people who have the most common sense, imagination, and determination.
Only human intelligence can apply the interpretations that are required to provide computer models with appropriate input. For example, a computer can’t tell you how to weigh the value of the time you spend with your loved ones against the time you spend at work, or the optimal mix of hours that will provide you with the best marginal utilities for each activity. Only you know what you value most, who you want to share your life with, what kind of environment you want to be in, and ultimately how to make the best choices to bring those things about. What’s more, so much of our thinking comes from the subconscious in ways we don’t understand that thinking we can model it fully is as unlikely as an animal that has never experienced abstract thinking attempting to define and replicate it.
Yet at the same time, the brain cannot compete with the computer in many ways. Computers have much greater “determination” than any person, as they will work 24/7 for you. They can process vastly more information, and they can do it much faster, more reliably, and more objectively than you could ever hope to. They can bring millions of possibilities that you never thought of to your attention. Perhaps most important of all, they are immune to the biases and consensus-driven thinking of crowds; they don’t care if what they see is unpopular, and they never panic. During those terrible days after 9/11, when the whole country was being whipsawed by emotion, or the weeks between September 19 and October 10, 2008, when the Dow fell 3,600 points, there were times I felt like hugging our computers. They kept their cool no matter what.
This combination of man and machine is wonderful. The process of man’s mind working with technology is what elevates us—it’s what has taken us from an economy where most people dig in the dirt to today’s Information Age. It’s for that reason that people who have common sense, imagination, and determination, who know what they value and what they want, and who also use computers, math, and game theory, are the best decision makers there are. At Bridgewater, we use our systems much as a driver uses a GPS in a car: not to substitute for our navigational abilities but to supplement them.
In contrast, the main thrust of machine learning in recent years has gone in the direction of data mining, in which powerful computers ingest massive amounts of data and look for patterns. While this approach is popular, it’s risky in cases when the future might be different from the past. Investment systems built on machine learning that is not accompanied by deep understanding are dangerous because when some decision rule is widely believed, it becomes widely used, which affects the price. In other words, the value of a widely known insight disappears over time. Without deep understanding, you won’t know if what happened in the past is genuinely of value and, even if it was, you will not be able to know whether or not its value has disappeared—or worse. It’s common for some decision rules to become so popular that they push the price far enough that it becomes smarter to do the opposite.
Remember that computers have no common sense. For example, a computer could easily misconstrue the fact that people wake up in the morning and then eat breakfast to indicate that waking up makes people hungry. I’d rather have fewer bets (ideally uncorrelated ones) in which I am highly confident than more bets I’m less confident in, and would consider it intolerable if I couldn’t argue the logic behind any of my decisions. A lot of people vest their blind faith in machine learning because they find it much easier than developing deep understanding. For me, that deep understanding is essential, especially for what I do.
I don’t mean to imply that these mimicking or data-mining systems, as I call them, are useless. In fact, I believe that they can be extremely useful in making decisions in which the future range and configuration of events are the same as they’ve been in the past. Given enough computing power, all possible variables can be taken into consideration. For example, by analyzing data about the moves that great chess players have made under certain circumstances, or the procedures great surgeons have used during certain types of operations, valuable programs can be created for chess playing or surgery. Back in 1997, the computer program Deep Blue beat Garry Kasparov, the world’s highest-ranked chess player, using just this approach. But this approach fails in cases where the future is different from the past and you don’t know the cause-effect relationships well enough to recognize them all. Understanding these relationships as I do has saved me from making mistakes when others did, most obviously in the 2008 financial crisis. Nearly everyone else assumed that the future would be similar to the past. Focusing strictly on the logical cause-effect relationships was what allowed us to see what was really going on.
When you get down to it, our brains are essentially computers that are programmed in certain ways, take in data, and spit out instructions. We can program the logic in both the computer that is our mind and the computer that is our tool so that they can work together and even double-check each other. Doing that is fabulous.
For example, suppose we were trying to derive the universal laws that explain how species change over time. Theoretically, with enough processing power and time, this should be possible. We would need to make sense of the formulas the computer produces, of course, to make sure that they are not data-mined gibberish, by which I mean based on correlations that are not causal in any way. We would do this by constantly simplifying these rules until their elegance is unmistakable.
Of course, given our brain’s limited capacity and processing speed, it could take us forever to achieve a rich understanding of all the variables that go into evolution. Is all the simplifying and understanding that we employ in our expert systems truly required? Maybe not. There is certainly a risk that changes not present in the tested data might still occur. But one might argue that if our data-mining-based formulas seem able to account for the evolution of all species through all time, then the risks of relying on them for just the next ten, twenty, or fifty years are relatively low compared to the benefits of having a formula that appears to work but is not fully understandable (and that, at the very least, might prove useful in helping scientists cure genetic diseases).
In fact, we may be too hung up on understanding; conscious thinking is only one part of understanding. Maybe it’s enough that we derive a formula for change and use it to anticipate what is yet to come. I myself find the excitement, lower risk, and educational value of achieving a deep understanding of cause-effect relationships much more appealing than a reliance on algorithms I don’t understand, so I am drawn to that path. But is it my lower-level preferences and habits that are pulling me in this direction or is it my logic and reason? I’m not sure. I look forward to probing the best minds in artificial intelligence on this (and having them probe me).
Most likely, our competitive natures will compel us to place bigger and bigger bets on relationships computers find that are beyond our understanding. Some of those bets will pay off, while others will backfire. I suspect that AI will lead to incredibly fast and remarkable advances, but I also fear that it could lead to our demise.
We are headed for an exciting and perilous new world. That’s our reality. And as always, I believe that we are much better off preparing to deal with it than wishing it weren’t true.
Article by Ray Dalio, LinkedIn