Bill Tulloh: The concept of “experimental economics” is an enticing one. For those not informed about it, could you encapsulate what “experimental economics” means?
Kevin McCabe: It’s easy to theorize, and it’s easy to be wrong when you theorize. You make assumptions about the world and even if your theories are logically consistent and largely derived from what you believe you already know, they’re still just theories. So you want to have a systematic way of testing theories. And that’s always what experiments have been about in every discipline, whether it’s physics or economics. That is, your theory has certain consequences that you believe will be true, but you don’t know until you test it.
So you may believe people will behave a certain way at an auction, and you might even go so far as to say, “Theory says this must be true because this is the only way people should behave in an auction,” only to find out this is not the way people behave at all. So you’re designing your experiment around a set of assumptions. And until you test those assumptions, you have no idea whether you’re designing something that will work or not. Our view is always that what you can design are the rules of the game, how people interact. What you can’t design is the way they actually behave: the messages they’ll send each other and what they’ll do within your system. It’s very similar when you’re trying to engineer an economic system. You think, OK, this auction should behave this way, and then you realize that this component of the auction may depend heavily on whether people tell the truth, for instance, or under-reveal, or whatever. So you want to test that piece before you go on. You’re trying to build from tests that you know work, so that when something fails you know it’s got to be the way you ultimately combined things, not the individual pieces. That allows more faith in a cumulative design process.
Tulloh: Can you tell us how you put this into practice with your students?
McCabe: When we started doing a workshop, we wanted the students to be building the computations, to actually do the programming, because … well, you can theorize, you can sort of see what a second-price auction would look like if you had multiple units to sell as opposed to one. But if you program it, you know exactly what that means, because you had to put the rules in to spell out what has to be done.
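McCabe’s point can be made concrete. Below is a minimal sketch, not from the interview, of a sealed-bid second-price auction generalized to multiple identical units, assuming each bidder wants at most one unit. Writing it down forces you to decide exactly what “second price” means when there is more than one unit (here: each winner pays the highest losing bid):

```python
def second_price_auction(bids, units=1):
    """Sealed-bid auction for `units` identical items, one unit per bidder.

    The top `units` bidders win; each pays the highest losing bid,
    which reduces to the classic second-price rule when units == 1.
    """
    if units >= len(bids):
        raise ValueError("need more bidders than units so a losing bid exists")
    # Rank bidder indices from highest to lowest bid.
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winners = ranked[:units]
    price = bids[ranked[units]]  # the highest losing bid sets the price
    return winners, price
```

For example, `second_price_auction([10, 40, 25])` awards the item to bidder 1 at a price of 25, and `second_price_auction([10, 40, 25, 30], units=2)` awards units to bidders 1 and 3, both at 25. The multi-unit pricing rule is one of several defensible choices, which is exactly the kind of ambiguity that only surfaces once you program the mechanism.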
When you run an experiment, especially in economics, you have to explain how things are going to run. Theorists often say, “Oh well, that’s…” and they hand-wave on that. But you can’t run an experiment until you’ve provided a computational form of the theory. That is, a way to compute things: the way for information to be processed and for contracts to be computed or whatever. And so experimentalists at that level have always been worried about computation as well.
Now, interestingly enough, that can create a divide. If I’m a theorist and you run an experiment testing my theory, and I don’t like the result because it doesn’t prove my theory right, one thing I could complain about is: yes, you implemented my theory, but you used one particular computational implementation, and there are a hundred others. Why didn’t you use one of those? And that complaint is right. But what it means is that the computations matter. You can’t ignore them when you’re building theory, and that’s what we’re trying to get the students to understand by building these models that will eventually run as experiments and will eventually be used as theoretical simulations to talk about the consequences.
Tulloh: Does experimental economics tend to be more macro- or micro-oriented, or is it the whole gamut?
McCabe: It tends to be more what I would call institutional-oriented. And you can think of that broadly. Money is an institution, so you can study money in the laboratory and you could even study something like hyperinflation. In that sense, you can study macro but you can’t study macro in the sense of, does it matter that a million people are interacting in a certain way with the monetary system at the moment? In other words, if you think scale is an essential element of the theory of macro that you’re studying, then you would have to run your experiment in that setting with a million people or whatever the scale is.
This has always been a point of debate between macroeconomists and experimental economists. They’ll propose a theory that has a representative consumer, and we’ll say, “We can experiment on this representative consumer; your claim is that real consumers will behave this way.” And then they’ll say, “Oh no, that’s just a fantasy we made up to describe a world with a million people.” Of course, then you say, “Wouldn’t it be nice if your fantasy were more of the theory you were actually trying to propose?” But that only goes so far. So the answer is: experiments apply as long as people are willing to say, “These are the rules of the game, this is the institution I want to study.” It’s not so much scale as how people interact with that institution that matters.
There’s a very famous experiment where Vernon Smith was looking at perfectly competitive markets. The theory at the time said that what makes a market perfectly competitive is an approach to an infinite number of buyers and sellers; the common term was “atomistic.” The idea behind that was that nobody should believe their behavior could influence price. So Vernon goes into the lab with a double auction that he’s largely copied, although simplified, from the New York Stock Exchange, brings six or eight people into the lab, and it converges to competitive equilibrium. It turns out that the institution itself plays a powerful role in making the market competitive. It’s not the scale of the number of people in the market that matters. It’s how the rules of the game affect the way people can strategize, and the way they can influence price in the market, that matters.
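The role of the institution’s rules can be illustrated with a toy simulation. The sketch below is a hypothetical simplification, closer in spirit to Gode and Sunder’s later “zero-intelligence” traders than to Smith’s human subjects: buyers bid at or below their private values, sellers ask at or above their private costs, and a trade clears at the midpoint whenever a bid crosses an ask. Even with random behavior, the trading rule keeps every transaction price bracketed by traders’ values and costs, showing how rules, not the number of traders, discipline prices:

```python
import random

def double_auction(values, costs, rounds=200, seed=0):
    """Toy continuous double auction with zero-intelligence traders.

    Each round, one random buyer posts a bid below their value and one
    random seller posts an ask above their cost; if bid >= ask, they
    trade at the midpoint and leave the market (one unit each).
    """
    rng = random.Random(seed)
    buyers, sellers = list(values), list(costs)
    prices = []
    while buyers and sellers and rounds > 0:
        rounds -= 1
        b = rng.randrange(len(buyers))
        s = rng.randrange(len(sellers))
        bid = rng.uniform(0, buyers[b])      # never bid above your value
        ask = rng.uniform(sellers[s], 100)   # never ask below your cost
        if bid >= ask:
            prices.append((bid + ask) / 2)   # clear at the midpoint
            buyers.pop(b)
            sellers.pop(s)
    return prices
```

With values `[80, 70, 60, 50]` and costs `[20, 30, 40, 50]`, every realized price falls between 20 and 80, and with richer (non-random) bidding rules like Smith’s, prices tend toward the competitive equilibrium. The value and cost numbers here are illustrative assumptions, not data from Smith’s experiment.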