In this episode of the "All Else Equal" podcast, Jules Van Binsbergen and Jonathan Berk celebrate their 50th episode milestone while reflecting on significant themes covered in previous episodes. They focus on "all else equal" thinking, which simplifies decision-making by holding variables constant but can lead to the oversight of critical externalities. The episode delves into the different categories of mistakes stemming from this mindset, such as ignoring the actions of the broader group or failing to anticipate how one's own decision directly changes others' behavior.

The hosts discuss several examples, including regulatory decisions in financial markets and private equity, challenges in implementing effective corporate tax policies, and issues surrounding regulation, such as the unintended consequences of demanding complete disclosure in financial transactions. They also explore topics like university reputations, short-term versus long-term incentive misalignment, and the credibility issues that arise when institutions fail to consider consequences thoroughly.

Main takeaways from the episode:

💡
Simplifying decision-making through "All Else Equal" thinking can lead to oversight of important externalities.
💡
The importance of understanding group behaviors and regulatory impacts on private equity and financial markets.
💡
Aligning incentives is crucial but challenging within systems, necessitating a comprehensive approach to effectively address issues like taxation and environmental policies.

Key Vocabularies and Common Phrases:

1. colander [ˈkɑːləndər] - (n.) - A kitchen utensil used to strain foods such as pasta or to rinse vegetables. It resembles a bowl with small holes. - Synonyms: (strainer, sieve, filter)

And you know, at some point you need to wonder if the water situation you're dealing with is a colander.

2. regulatory arbitrage [ˈrɛɡjʊlətɔːri ˈɑːrbəˌtrɑːʒ] - (n.) - The practice of taking advantage of inconsistencies or gaps in regulatory systems to circumvent regulations. - Synonyms: (regulatory gap exploitation, loophole exploitation, non-compliance tactics)

regulatory arbitrage is as old as regulation.

3. undermine [ˌʌndərˈmaɪn] - (v.) - To weaken or damage something, especially gradually or insidiously. - Synonyms: (weaken, sabotage, erode)

But in the long term, it greatly undermined people's confidence in vaccines and hugely undermined modern medicine.

4. equilibrium [ˌiːkwɪˈlɪbriəm] - (n.) - A state of balance or stability within a system. - Synonyms: (balance, stability, symmetry)

And if you make the devil's advocate's arguments and therefore you're associated with the devil, who wants that? And then I think softballing becomes the equilibrium automatically because the culture for admiring somebody, for being the devil's advocate, we've lost that a bit

5. conflate [kənˈfleɪt] - (v.) - To combine different concepts or information into one. - Synonyms: (merge, fuse, blend)

At some point you start to confound these two things.

6. profound [prəˈfaʊnd] - (adj.) - Having deep insight or understanding; intense in a significant way. - Synonyms: (deep, significant, insightful)

That's just such a hard truth to swallow, right?

7. empirical [ɛmˈpɪrɪkəl] - (adj.) - Based on observation, experience, or experiment rather than theory or pure logic. - Synonyms: (observational, experimental, factual)

empirical question that needs to constantly be asked is what's the probability that in this particular case the intervention will be successful.

8. incentives [ɪnˈsentɪvz] - (n.) - Things that motivate or encourage someone to do something. - Synonyms: (motivation, encouragement, inducement)

If you fix the incentives themselves, then people through their behavior will accomplish the goals.

9. reputation [ˌrɛpjʊˈteɪʃən] - (n.) - The beliefs or opinions that are generally held about someone or something. - Synonyms: (prestige, standing, respect)

You destroy all the reputation that you've built up.

10. constraints [kənˈstreɪnts] - (n.) - Limitations or restrictions that prevent something from happening. - Synonyms: (limitations, restrictions, confines)

You have to take the constraints as given and you have to say, look, let's align incentives to try to get the behavior we would want to get.

Ep51 Celebrating 50 Episodes - The Biggest All Else Equal Mistakes

Welcome to the Lauder Institute at the University of Pennsylvania. I'm Jules Van Binsbergen, director of the Institute and a finance professor at the Wharton School. And I'm Jonathan Berk, a finance professor at the Graduate School of Business at Stanford University. This is the All Else Equal podcast.

Welcome back, everybody. We have some exciting news to share. We have just hit our 50th episode. We're going to hit a million listens soon, and we thought we would use this episode to look back at some of the major themes we've hit. And of course, we're also looking forward to another exciting number of seasons. So, Jonathan, I think this is a moment for celebration, don't you think?

It is. It is amazing how time flies. I can't believe we've done 50 episodes. But you know, Jules, we named the podcast All Else Equal. And I think what we should do in this 50th episode is go through the highlights of the last 50 and focus on how the All Else Equal thinking has influenced each episode for sure.

Because I think that it's possible to group some of the episodes by certain themes that keep on coming back every single time. Because I do think that All Else Equal thinking is so prevalent and so many people do it all the time. And I think that the first thing to establish and to quickly talk about is why it happens that people do All Else Equal thinking.

And I think that the answer to that question is it's just easier. I don't think we need a more complicated explanation for it than that. I think that going from A to B in your thought process holding All Else Equal is just a much easier process than constantly taking into account all of the other responses and dynamic things that happen around you when you're thinking through a problem.

Let's start going through the groups of mistakes. The first area where I think we can group a set of episodes is cases where people engaged in all else equal thinking, but it wasn't directly the action of the decision that affected other people. It was the overall environment, and they failed to take that environment into account in their thinking. So that's very abstract, but let's be more specific.

An example of that is somebody who sees what they consider to be an underpriced stock and assumes they're the only person who has noticed. Whatever they've noticed, they've noticed how great the candy from this company tastes, say, and they ignore the fact that many other people will have tasted the same candy. Yeah.

So in fact, there's information out there and you think you have an informational advantage, but you're not the only one who has that information. Other people already have it and have already acted upon it, or may be acting upon it at the same time as you. And just for that reason, what you thought was a great decision, buying the stock thinking it was undervalued, actually turns out not to be such a great decision, because all of the information that was out there is already in the price today.

So it's of course, the group that makes that happen. Your decision is not affecting the group's decision, but you ignoring the fact that the group is making the same decision as you is going to make you think that you're going to make money when in fact you're not. Yeah.

And so let's be specific. The episodes we're talking about are the episode with Cliff Asness, which was then followed by the one with Pete Brega. In both cases, people were not thinking about how the group behaved. And then finally, the episode we did together on mutual funds, where the same thing occurs. You know, I look for a mutual fund manager. I see a manager that has performed really well. I invest. And of course, I ignore the fact that everybody else will have seen the same performance. They'll also invest in the mutual fund, and they'll drive up the assets of the mutual fund to the point that the mutual fund won't outperform in the future.

Indeed. And so I think the key question to ask in order to avoid that mistake, because we are also going to be talking about how you can force yourself to prevent these decisions, is this, and it's also what I always tell the students: if you think that you're so great and you think that you're the only one who knows this, what makes you think you have the competitive advantage to be the only one to know it?

And if you cannot come up with any good reason why you would have this competitive advantage, maybe you should think twice and probably you just don't have a competitive advantage. Exactly right. Okay. That's one set of all else equal mistakes.

And there's another set of all else equal mistakes where your decision affects the behavior directly. We've spoken about a lot of these things, but the most obvious place is a regulator who puts into place a regulation and ignores the fact that the people being regulated will now change their behavior. Yeah. And we have a whole bunch of episodes related to private equity and private credit that really go into this.

We've had Jay Clayton recently on the show. We've had Eric Zinterhofer on the show to talk about private equity. We have had Lawrence Gottlieb on the show to talk about private credit. And I think here, the moment that you start regulating public financial markets because you want to enforce a certain type of behavior, you should take into account the fact that people will want to try to avoid that regulation.

How can you avoid public market regulation? Well, by taking firms private, so what previously was publicly traded debt and publicly traded equity will just become privately traded debt and privately traded equity. Let's listen to a clip from a conversation with Jay Clayton, former SEC chair, where he talks about this.

regulatory arbitrage is as old as regulation. For years you had limitations on leverage in equity markets. And then somebody invented swaps. And it's the same type of economic exposure, you know, borrowing to take a long or a short position as it is to put a swap on.

One has a different capital charge to the purchaser than the other. And of course, the users and providers of swaps basically were able to make more money than was expected when you had regulation based purely on the margin rule. So these types of, I'll call them regulatory testing go on all the time.

And so I do think that a reasonable empirical question that needs to constantly be asked is what's the probability that in this particular case the intervention will be successful because the government has enough information to make things better? And maybe in most cases the answer will have to be maybe not.

And that's maybe a disappointing answer. We all wish that wasn't true, but if it is true, there's just not much we can do about it. Another example, which I'm sure most of our listeners have experienced, is disclosure. We have this knee-jerk reaction: well, how could disclosure be bad? Just force everybody to disclose.

But of course, if you've ever tried to get a mortgage today, what you find is that there are hundreds of pages of disclosures when you want to buy a house. And most of the disclosures are completely irrelevant. Buried deep in those disclosures are important disclosures, but they're buried so deep you can't find them.

So here's a perfect example where we say, okay, how can it be bad to disclose? Yes, it can be bad to disclose. Because when people are forced to disclose, what they'll do is disclose everything and in doing so be able to hide the important stuff.

Indeed. You know, the more I think about it, Jonathan, the best analogy, I think, is water, just because of the incentives that people have, right? Water just wants to go from the highest point to the lowest point. And once you start to block the water at various places, the only thing it's going to do is find a different way around whatever blockage you put there. Eventually it's going to get to the lowest point, unless you literally manage to block every possible path the water could take. And so I think we see a lot of examples of this all the time.

And you know, at some point you need to wonder if the water situation you're dealing with is a colander. There's so many holes that you have to stuff that maybe you should just give up and that this is not something that's fixable. Jules.

I think it's deeper than that. I would say that you cannot dam every water hole. And the reason is just the level of human capital that's going to be brought to bear. So let me give you an example.

We did a whole episode on corporate taxation with Larry Summers. Why am I opposed to the corporate tax? Because I think that corporations don't want to pay taxes, and they will invest a huge amount of human capital to figure out ways of not paying taxes.

Absolutely. And there is no way you're going to be able to match that level of human capital. No matter how many holes you plug, they will find a hole that you haven't plugged. And the holes you haven't plugged are basically infinite.

So again, it's an all else equal mistake to assume I can plug all the holes while not taking into account the level of human capital arrayed against me. No. And you're absolutely right.

I mean, if you know that the resources being used on the other side outnumber the resources you can put in by a factor of 100 to 1, the question is whether this is a battle that can ever be won. And I think it is often in that ratio. So that makes things very hard to regulate.

And we also talked about what you can do to fix things, because we shouldn't only talk about all else equal mistakes that make things hard; let's also think about solutions. The answer is changing the incentives themselves.

If you fix the incentives themselves, then people through their behavior will accomplish the goals. Make sure that the water goes where you want it to go. By making the point where you want to end up lower than the point where you started from, the water will go there automatically.

That is what setting incentives properly is all about. You're absolutely right. If you want things to work, you have to align incentives. But it's extremely difficult to align incentives, okay? And you have to make compromises.

And it may look like, well, the compromises I'm making are so bad I don't want to make them. But you're ignoring the fact that the holes exist. You can't just assume away the holes and say, well, I don't have to make these compromises, I can just assume the world will get better. You know, it won't.

So you have to take the constraints as given and you have to say, look, let's align incentives to try to get the behavior we would want to get. To add to that a bit, right? I mean, when we talked about the episode on agreeing to disagree and what optimal policies were, we did talk about value systems too.

We do have to be wary of the fact that some policymakers, when they set regulations, are not actually interested in stuffing all the holes. They just want to make it look like they're doing something, regardless of the eventual consequences.

They may have incentives of their own, like wanting to get elected, and other objectives that may not actually be consistent with truly solving the problem at hand. It should just look like they're doing something about it.

I mean, another example of an all else equal mistake, another whole category, is short-term versus long-term incentives. Right? You have an organization where people in the short term have an incentive to follow a strategy, but in following it, they undermine the ability of people in the long term to work effectively.

So let's stop talking about the abstract and let's talk about examples. We can start with, say, the vaccination of people during COVID. It was the episode with John Ioannidis. I think almost certainly most doctors who really understood the genetics would have understood, when the vaccine came out, that the vaccine itself was not going to stop the spread of COVID, but also that the vaccine would greatly help with the symptoms of COVID and the number of people in emergency rooms.

But rather than just say that, they knew that if they told everybody the vaccine would protect you and stop COVID, many more people would take the vaccine. So in the short term, essentially lying to people, telling them that if you take the vaccine you'll be protected from COVID, meant that there was far greater take-up.

And since that's the objective, it worked in the short term. But in the long term, it greatly undermined people's confidence in vaccines and hugely undermined modern medicine. Because some vaccines do in fact stop the spread. The measles vaccine, for example.

And so by undermining people's confidence in vaccines, you undermine people's confidence in the measles vaccine, and that's of great cost to society. No. And you just gave an example from the medical industry. But we had a number of episodes, also when we talked to John Etchemendy, where we can apply the same idea to universities.

Right? If you as a university ruthlessly pursue the truth, and people believe you and read you and think of you as the platform where truth is debated and where knowledge is generated, and then suddenly you change your objective to also achieving other objectives, like social objectives.

You can only do that for a very short period of time, pushing these social objectives while at the same time pretending to still be pursuing the truth. At some point you start to confound these two things. And as a consequence, I think right now, if you look at the surveys and at the number of Americans who are not that impressed anymore when they read a newspaper article that says "research at University X has found," you can almost wonder whether you should blame them.

Maybe universities have themselves contributed to their reduced reputation. So, yes, in the short term, you can achieve a lot by trying to push things and using your reputation as a truth-finding entity. Right.

And try to push these social objectives through. But in the long run, people are going to say, which one of the two are you? Are you just achieving the social objectives? Are you still the place where we can go to try and debate truth? And I think that that confusion and that confounding of those two things has been hugely damaging for the long term reputation of universities.

And I sincerely hope that we get that reputation back. This is a general reputation thing where you establish a reputation and then don't properly take into account that if you do something that goes against the reputation, you destroy all the reputation that you've built up.

You know, a classic all else equal mistake. Another example is forgiving student debt. We did a whole episode on forgiving student debt. If you forgive student debt now, many more people will take on student debt with the expectation that you're going to forgive it in the future, which totally undermines the ability to lend to people and the ability of the program to do good.

All right, so Jonathan, let's talk a bit about what are things that our episodes have illustrated we should be doing about it? What are the sort of procedures that we can put in place inside of institutions? What are the things that we can impose upon ourselves to try to not fall for these all else equal mistakes that we're seeing all the time?

You know, one of the best episodes I think we recorded was the episode with Ruth Porat, CFO of Google. And you know, she spoke about what Google does, and to me, it seemed very impressive. Let's hear what she said.

There's a whole protocol at Google around what's called blameless postmortems. The goal is to actually do a root cause analysis at the end of something that hasn't gone as well as you want, not blaming anyone, assuming that process can actually lead to bad outcomes.

People have good intent, but you need the right process. So what is it that we can learn through a blameless postmortem such that next time you're in a similar situation, you actually have a better outcome? And I'm very much of the view that one of the most valuable things each of us brings is pattern recognition from some of the toughest things that we've ever gone through.

But you need to take the time to deconstruct: what did I learn? Pattern recognition is important so that it sets you up better for the next time. But the problem with it is that it only works so long as the devil's advocate takes their job very seriously.

But if they don't and they pretend to be a devil's advocate without really being a devil's advocate, that undermines the next time somebody then is a devil's advocate. Because then they go, well, wait a minute, the last time somebody was a devil's advocate, they just softballed.

Now you're hardballing. And again, it's a similar thing, an all else equal mistake: when you softball, which is easy for you as the devil's advocate, you don't take into account the externality for the next guy. It'll be harder for them to play devil's advocate.

I completely agree with you, Jonathan. But I do think that to be able to maintain those systems culture is so incredibly important. Let me give you an example.

Ten years ago when we were teaching, you would just take the other side of an argument for the fun of it. You were having fun with the material, and the intellectual debate it generated made being the devil's advocate an enjoyable intellectual exercise.

It was like sport. You were testing out how strong your arguments were. You were having fun together and trying to figure it all out.

I think that a lot of instructors today, when you ask them, they're very hesitant and very scared to be the devil's advocate because they're worried that the devil's advocate's argument may be confused with the devil himself.

And if you make the devil's advocate's arguments and therefore you're associated with the devil, who wants that? And then I think softballing becomes the equilibrium automatically because the culture for admiring somebody, for being the devil's advocate, we've lost that a bit. And so I think it's important to get that back.

And Niall Ferguson, during the episode on the state of academia, essentially made the same point. A classroom is a very delicate space. A professor has a certain authority that shouldn't be abused.

This is what Max Weber was very right about. We shouldn't use that authority to indoctrinate students. We should be able to challenge them by asking difficult questions.

The Socratic dialogue in fact works best when the professor is empowered to say contrarian and challenging things. That was how I used to teach at Cambridge and at Oxford.

It would be quite wrong if one were to have those challenging questions taken out of context and posted on X or Instagram. So I think one has to create those protections because freedom is not some naturally occurring thing.

It requires an institutional framework and it requires constitutional protections both at the national level and at the level of a college in a classroom.

The other thing is being disciplined in your thinking. We did a whole episode with Guido Imbens on correlation and causation. You know, I think of confusing the two as an all else equal mistake.

Right. By thinking carefully about what is causal and what isn't causal, I think you can really avoid making all else equal mistakes. Yeah. And I think another way to really discipline your thinking is to recognize that there are two different perspectives you can take.

One perspective is the world is just chaos because everybody's stupid and everybody's doing it wrong. And I'm going to come in as this all knowing person. I'm going to fix it all by telling people what to do.

Or you can view the world as maybe this is already a very well converged system where a lot of things have already been optimized. And I'm first going to try to understand what everybody's incentive and everybody's actions together would lead to in an equilibrium. And then really understand why the world is the way it is.

Once you understand that complicated dynamic system, suddenly messing with it becomes much harder. Because now it isn't just you coming in to bring order to this organization. No, you have to contend with all of these other optimizing agents that really are not all that stupid.

They all know exactly what they're doing. A good example of this is raising children and psychology. As human beings, we've been raising children for hundreds of thousands of years.

How is it that somebody would come along now and say, we know how to raise children better? I mean, this is ignoring the fact that many smart people lived before us, all thinking about the same problem.

Why would we suddenly know more than anybody else? Why would we have a competitive advantage over them? And again, I think it disciplines your thinking to ask: how is it that I am able to think about something new that nobody before me has thought about?

Now, obviously, there could be technology. I could have special technology I didn't have before that I could use to do something great. But child rearing? I mean, what's the new technology there?

One thing that I really notice with the young people we teach is that people automatically assume that everything we do is progress. But if you look at how history has unfolded, it wasn't always forward. There were many periods of pure regression; it wasn't always progression.

And so we're not so sure that the new things that we're trying today are necessarily better, and we may very well come back from them. Obviously, on average, we've made progress, but it's been quite a bumpy ride.

Sometimes things go backwards, and sometimes we've had regimes and things that really didn't work and made everything worse for everybody.

I think that the downside, though, is that there seems to be very little recollection of these things historically, where old ideas are just slightly recycled and dressed up in different lipstick, and suddenly people don't recognize them anymore for the bad ideas that they really were.

That reminds me of the episode we did with John Cochrane on capitalism. You look at capitalism, you look around the world. Nothing else has worked better than capitalism. And so then people say, well, but capitalism is so unfair, let's make it more fair.

Ignoring the fact that every time we've tried to make capitalism fair, it's been a failure, and unwilling to face the all else equal constraint that it might be the very unfairness of capitalism that's causing capitalism to work.

We did a whole episode with John Cochrane on that all else equal mistake. Why does there have to be some inequality? Because there have to be incentives.

And some people will take the incentives and some people will not take the incentives. There has to be a reward for risk, and if it's from each according to their abilities, to each according to their need, and we all get the same amount, then nobody works hard, nobody starts a business, nobody takes hard classes, and we all take art history.

Which is a whole lot more fun than accounting. And then there's no one left to balance the books. So there's a budget constraint. It's a big word in economics. You can't have everything you want.

But that's just such a hard truth to swallow, right? That's the problem. If only we could just have all the benefits from it without it being unfair, wouldn't that be wonderful?

And so there, I think of this Thomas Sowell point, right: there are no solutions, there are only trade-offs. I think that's really thinking through dynamic systems, where you say, if I change something, maybe I'll make one thing better, but the system was the way it was for a reason.

So it probably means that I'm sacrificing something else somewhere else. And having the mindset that such a trade-off must exist, and that I just have to figure out what it is, rather than assuming that nothing else will change and I'll just make this one thing better, is a way of disciplined thinking that can quite directly prevent you from making all else equal mistakes.

Of course, the other area we spent time on was the environment, with Alex Edmans and Larry Cunningham.

In both cases, the idea was that there's an easy way out: we can make the environment better and make ourselves better off at the same time. And everybody assumed that was a possibility, of course ignoring the all else equal question: if in fact we could make ourselves better off, why aren't we already doing it?

Yeah, that we should already be happier. Oh, for sure. And indeed, you're hearing this more and more in the ESG space, right?

I mean, lots of people are saying that ESG is both good for the environment and good for profit maximization. As if all of these companies just had all of these wonderful profit maximizing opportunities lying around and were ignoring them all this time, until the ESG people came around to point out how profitable all of this was really going to be.

Maybe, Jonathan, maybe those companies were all wrong and maybe those ESG people have a point. But if I had to bet on it, it seems incredibly unlikely to me, because it would imply that the companies were being stupid beforehand, that they were not maximizing profits.

And I just think, while obviously there are many people around who don't maximize profits, and I'm not going to claim every single decision is profit maximizing, the claim that no decision is profit maximizing is too strong. Many, many people are looking to make profits.

And if it were so easy to make everybody better off, these suggestions are just people suggesting things that seem pretty obvious. And you see it right now with the discussion that a lot of pension plans are having.

A lot of these pension plans didn't have quite the expertise to pick undervalued stocks and make active investment decisions. But now suddenly they're arguing that making alpha is the easiest thing ever. The only thing you have to do is invest in ESG companies and suddenly the alpha is all over the place.

You'll end up getting a higher return and saving the environment at the same time. If active investing is that easy for them, why weren't they so successful at it before? It's a very interesting thought process to observe.

It's a clear all else equal mistake. Well, you know, Jules, I guess we have spent the last half hour talking about everything we did in the last two and a half years. I hope that our audience will be motivated to go back and listen to some of these episodes if they found the topics interesting, and hopefully they will be looking forward to many more like this.

Indeed, it's been such a great pleasure so far, Jonathan. I hope we can do many more of these. Thanks for listening to the All Else Equal Podcast.

Please leave us a review at Apple Podcasts. We love to hear from our listeners. Also, be sure to catch our next episode by subscribing or following our show wherever you listen to your podcasts. For more information and episodes, visit allelseequalpodcast.com or follow us on LinkedIn.

The All Else Equal Podcast is a joint production of Stanford University's Graduate School of Business and the Lauder Institute at the University of Pennsylvania. It is produced by University FM.

EDUCATION, ECONOMICS, BUSINESS, INCENTIVES, PODCAST, DECISION MAKING, STANFORD GRADUATE SCHOOL OF BUSINESS