On November 5, 2019, Tyler Cowen spoke about his book, Stubborn Attachments, at Stanford University. Cowen’s talk is part of the McCoy Family Center for Ethics in Society's Arrow Lecture Series on Ethics and Leadership, named in honor of the late Nobel Prize winning economist and Stanford professor, Kenneth Arrow. This transcript has been slightly edited for clarity.
Rebecca Hasdell: Welcome to the 2019 Arrow Lecture series on ethics and leadership. My name is Rebecca Hasdell, and I'm a postdoctoral fellow with the Basic Income Lab at the Center for Ethics in Society, which is the sponsor of tonight's Arrow Lecture. It is my pleasure to welcome you this evening to what I'm sure will be an interesting and provocative talk from Professor Tyler Cowen.
The Arrow Lectures were created in 2005 and have become among the most prestigious lectures at this university. Previous Arrow Lecturers include an impressive list of distinguished social scientists and philosophers, including Jennifer Doudna, Anthony Atkinson, Jonathan Glover, Jeffrey Sachs, Paul Collier, Dani Rodrik, Robert Reich, Thomas Piketty, and Nobel winners, including Amartya Sen and recent winner, Esther Duflo.
The Arrow Lectures are named in honor of Stanford Emeritus Professor in Economics Kenneth Arrow. Ken Arrow was one of the greatest economists of his time and one of the most renowned scholars to have taught at Stanford.
He died just over two years ago, leaving an incredible legacy in research and teaching. At 51, he was one of the youngest recipients ever of the Nobel Prize in economics, and as an early-career scholar, I can say that perhaps what is most remarkable and less known is that five of Professor Arrow's students have gone on to win the Nobel Prize in economics.
He was instrumental in setting up our very own ethics center, which provides a platform to bring ethical questions to bear on important social problems, and he was a member of its steering committee until his death. The question posed in the title of tonight's talk is very much in the spirit of Ken Arrow's legacy. Professor Arrow was committed to economics as a moral science that could and should address questions of societal well-being. We honor that legacy this evening with a lecture from Professor Tyler Cowen.
While the breadth of Professor Cowen's work defies easy summary, he consistently asks us to consider the role of economic growth in relation to pressing public policy problems of moral and ethical interest, topics that Professor Cowen writes on widely and prolifically. As we debate policy proposals on the existential threat of climate change, the role of tech corporations in our democracy and our social welfare, and—of interest to our research at the Basic Income Lab—how we should ensure economic benefits are equitably distributed, we're pressed to consider how action in these areas is compatible or not with economic progress.
Professor Cowen is the Holbert L. Harris professor at George Mason University, where he is also chairman and faculty director of the Mercatus Center, which advocates for free market approaches to public policy. He received his PhD in economics from Harvard University and his bachelor's in economics from George Mason University.
It would not be a stretch to say that Professor Cowen is one of the most widely read economists of our present time. He is extensively published in the academic and popular press, and his work has been called essential reading in economics and includes the New York Times bestseller The Great Stagnation: How America Ate All the Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Feel Better and his recent book, Big Business: A Love Letter to an American Antihero. He has a broad public audience for his popular economics blog, Marginal Revolution; podcast, Conversations With Tyler; online education platform, MR University; and frequent contributions to the broader press. He was named one of Foreign Policy's 2011 100 Global Thinkers, and a survey in the Economist counted him among the most influential economists of the last decade. Please join me in welcoming Tyler Cowen.
Tyler Cowen: Thank you for the very kind introduction. I'd like to do something a little different in this talk from what is usually done. Typically, someone comes and they present their book. My book here, Stubborn Attachments. But rather than present it or argue for it, I'd like to try to give you all of the arguments against my thesis. I want to invite you into my internal monologue of how I think about what are the problems. It's an unusual talk. I mean, I think talks are quite inefficient. Most of them I go to, I'm bored. Why are you all here? I wonder. I feel we should experiment more with how talks are presented, and this is one of my attempts to do that.
But first, I'll give you just a very brief overview of the main thesis of the book. And I do mean very brief. I'm not even trying to make the arguments, but the book starts from what philosophers would call a consequentialist framework. That's somewhat broader than utilitarianism. It just means that what happens matters. Human well-being matters. I assume that broadly speaking we can make comparisons of human well-being that, say, people in Palo Alto today are better off on average than people were in the Stone Age.
I'm an ethical pluralist, so a lot of different values matter, not just utility. But nonetheless, I argue that for human well-being, the correct social discount rate— And now we come to Ken Arrow. Ken Arrow is a major, major influence behind this book. Who wrote the seminal articles on the social discount rate? It was Ken Arrow. Anyway, I argue, not for dollars but for human well-being, that the social discount rate probably should be zero. Not 1%, not merely pretty low, but literally zero, because happiness cannot be reinvested over time: a unit of my well-being today and a unit of well-being for our descendants, say 80 years in the future, are—all other things being equal—morally equivalent.
Now, you set this up, those are premises. I'm not going to give you the arguments in the book, but that's the basic framework. If you accept those premises, then the value of boosting the rate of economic growth in a sustainable manner is very high because the discount rate is zero, right? There's a long time horizon. Say the rate of growth of human well-being, which does correlate mostly with prosperity—say you can get that rate of growth to be higher. You're creating phenomenal benefits throughout a pretty long future, and the argument of the book is that this will overwhelm other consequentialist considerations. And even if you hold other values to be dear—aesthetics or maybe some concern about inequality—take a long enough time horizon, zero discount rate, boom, raising the sustainable rate of growth of well-being is going to dominate the consequentialist calculus.
That's a radical implication. It suggests that when human rights don't enter the picture—human rights are a kind of absolute, binding side constraint—but when human rights are not in the picture, just maximize something that looks a lot like economic growth. Full steam ahead, a very kind of mono-conclusion. The organizers tweeted this event. This is what they said: "Greta Thunberg says the vision of eternal economic growth is a perverse fairytale. Tyler Cowen says it is a moral imperative."
I want to walk through what are the problems with thinking it's a moral imperative. In fact, in some regards, I think Greta is right. Maybe it's a moral imperative and a perverse fairytale. How about that? That would be weird. But anyway, you see at least how the basic argument works, and some of the bits and pieces of the argument might come out as I walk through some of the problems.
One of the problems I discussed at some length with Robert Wiblin in my three-hour marathon podcast with him, and that is, if you believe in maximizing the rate of sustainable economic growth, there are actually two variables in that short little sentence. One is economic growth, which can be a bigger number. And the other is that tricky word, sustainable. And by running it together, sustainable economic growth, only three words, it sounds like one simple thing, but it's not, right? It's two different variables, and what do you do when they conflict? You could imagine policies or actions that might boost economic growth that would make a society less sustainable, at least with some probability. And what do you do then because you were just told to maximize one thing?
Usually, as I've done promotions for the book, if people ask me a version of that question, what I've said is, well, the framework of the book doesn't answer every case. There's a large class of choices we can make where you can boost both growth and sustainability. Say you improve institutions in a society. Tends to help sustainability, but also tends to help economic growth. That doesn't have to cover every single class of choice. And I think that's fine.
No one is opposed to the case where you get everything moving in the right direction all at once, but you still come back to the question, what about when there's a trade-off? Let's say the government subsidizes artificial intelligence, as it does through the military. That may boost the rate of economic growth, but at the same time, if you listen to Elon Musk, there's maybe a higher chance that Skynet goes live, as we say in the Terminator movies, and we get some very bad outcome.
I think what's interesting here is exactly how long is the time horizon we're thinking about. The longer a time horizon you have in mind, the more you should be concerned with sustainability. Here's a way to think this through. Let's say you think the world literally can just keep on running forever. There's a zero discount rate if you have an infinite time horizon, or a very, very, very long time horizon. Well, there's no infinity, but they promised me the universe would last 2 trillion years. Sustainability is going to win out, right? Because there's so much at stake if the world ends. You've got to play it very safe.
My argument sounds like it's obsessed with growth, and under some cases it is, but if the time horizon gets too long, it isn't. Let's say alternatively the time horizon gets too short. Let's say we all know the world's going to end in a year—there's a big asteroid on its way. We can't do anything. They didn't listen to the economists about global public goods. I don't think it would really make sense as a recipe to maximize the rate of sustainable economic growth with the world ending in a year. The returns out there just are not very large. Maybe we should have a big party. Maybe with the world ending, we should just all do “the right thing,” read Immanuel Kant, become more deontological. That's fine. In that case, my recipe also would be wrong.
There's a funny way in which the maximize growth imperative is only true for some intermediate time horizon, that if you think it's approaching eternal, you become super safe, and I would say politics becomes boring and terrible. Somehow aesthetically, maybe we all become less. If you think the relevant time horizon is super short, again, nothing really to maximize. Things are going to end.
And then there's some intermediate phase. Some degree of pessimism actually is useful. Like I sat down and I asked myself, how long are we really in this thing for? You ever wonder that? Seriously. You ever go back and read the smart people after World War II, and they very often believe there's going to be a third world war pretty soon. These people were not stupid. Obviously they were wrong for their time, but they had some pretty good arguments. They had just seen two world wars of immense destruction, and to think there might be a third— The U.S. had just used nuclear weapons. Soviet Union, later communist China were not the most reliable partners in a game of mutually assured destruction. There were fears of proliferation. Imagine if someday a country like North Korea got nuclear weapons. Fortunately, that never happened.
So I thought about this for a long time, and I think it's a bit like a finance problem. If Steven Pinker is correct that at any moment the case for optimism is quite strong, the aggregate risks are fairly low, but it's like writing a naked put very much out of the money. If you let the clock tick for long enough, something very bad will happen. We have weapons of mass destruction. I think my actual view is probably we'll have advanced civilization for something like another 600 or 700 years. A very approximate view, just pulled out of a hat. It's an intuition. But it's not forever. The world's not going to end next year, but it means if we got 600 years of a higher growth rate, that's much, much better for the world, but there's not so much value out there that we should just play it safe across all margins.
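Cowen's intuition of roughly another 600 to 700 years can be given a rough probabilistic shape. The numbers below are illustrative assumptions (the annual risk figure is not his): with a constant annual catastrophe probability p, the expected time until catastrophe is 1/p years, and survival over T years is (1 - p)^T.

```python
# Illustrative sketch: a constant annual catastrophe probability p
# implies an expected time to catastrophe of 1/p years, and a
# survival probability of (1 - p)**T over a horizon of T years.
p = 0.0015                       # assumed annual risk of 0.15%

expected_horizon = 1 / p         # ~667 years
survive_100 = (1 - p) ** 100     # ~0.86: any given century looks safe
survive_700 = (1 - p) ** 700     # ~0.35: the long run does not

print(round(expected_horizon), round(survive_100, 2), round(survive_700, 2))
```

This is the point of the put-option analogy: per-year risk looks negligible, but survival decays geometrically as the clock ticks.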
The way our understanding of the relevant time horizon shapes how we weight flat-out value maximization versus safety—I've thought about that a great deal since writing the book, and I find I have this funny intermediate view, that I'm actually not that optimistic in a way. But it's the finiteness of our end which also allows us normatively to take some chances with the world. Those are some of the thoughts I've had after writing the book.
So to go back to Greta Thunberg, I'm a fan of hers, though I don't agree with all of her views. But the tweet issued by the people here said, "Greta says the vision of eternal economic growth is a perverse fairytale." I've got to agree with Greta. But I wonder if Greta would agree with me, that if we're down to our last 700 years, well, the imperative of maximizing growth is quite strong. And if you've never played around with compound returns, I mean their power is immense.
So if you have a child and the rate of well-being goes up at 3% a year, let's say, well, by the time that kid is in his or her 30s, the kid is twice as well off as you were. It's a pretty big difference, pretty quickly. But if the rate of growth of well-being is 1% a year, by the time the kid's in his or her 30s, the kid is maybe 35% better off than you were. And you just play that out over time.
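The compounding arithmetic here is easy to check; a quick sketch (the 24-year figure is the standard doubling time at 3%, an assumption about what "in his or her 30s" means):

```python
# Compounding well-being growth, as in the talk's example.
# At 3% a year, well-being doubles in roughly 24 years; at 1% a year,
# 30 years of growth buys only about a 35% improvement.
for g, years in [(0.03, 24), (0.03, 30), (0.01, 30)]:
    print(f"{g:.0%} for {years} years -> {(1 + g) ** years:.2f}x")
```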
If you want an extreme example, at a discount rate of 5%, which I'm not advocating we apply, one death today is equal to 39 billion deaths in 500 years. So if you had a social discount rate of 5% and you were comparing human lives over time—climate change or burying nuclear waste—the implied moral trade-off would be one life today against 39 billion lives out there. As a part-time moral intuitionist, that sounds wrong to me. If that example worries you, you end up getting forced down to a zero social discount rate for well-being, because if you push out the time horizon far enough, you're going to get a comparison like that. And no one's really willing to say, "Oh, the one life today is more important than the 39 billion lives out there in the future."
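The 39 billion figure follows directly from compounding a 5% discount rate over 500 years; a one-line check:

```python
# At a 5% annual discount rate, one unit of value today is weighted
# the same as (1.05)**500 units of value 500 years from now.
lives = 1.05 ** 500
print(f"{lives:.3g}")  # ~3.93e+10, i.e. roughly 39 billion
```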
500 years sounds like a long time. It's not that long. You look back 500 years, it's still a recognizable world. We know its history. All this got me thinking more about issues of time horizon and even the extent to which growth is manipulable. Someone asked me once, they said, "Well, your hypothesis of a moral imperative to boost economic growth, how does that apply to time periods that are not very good at economic growth? What about before the Industrial Revolution?" Many of you in the room, maybe all of you, know that for many centuries, some would say for millennia, there was not much sustained per capita economic growth in the world that we know of in most places. If you're living in 1320 and someone says, "Boost the rate of sustainable economic growth," is that really any different from someone telling me today to flap my wings and fly to Pluto, right? Maybe there's just simply nothing you can do.
Once your moral argument involves some notion of maximizing the rate of growth of well-being—and I think actually all moral arguments need to consider that—the application of your moral argument becomes time dependent. It depends on what the options are, in your era, for boosting or avoiding declines in the rate of economic growth. Again, if someone is in 1320 and Tyler Cowen comes along and says, "Boost the rate of sustainable economic growth," they look at me like I'm crazy. They say, "Come on. What should I really do?" What would I say? I'd say, "Well, I get it, you don't have too many options. Yes, invest a bit more in water power. Those water wheels, they're going to be important. But for the most part, you want to just follow deontology because those consequentialist values, they're not yet manipulable."
If they're not manipulable, if you can't create these big gains by boosting the rate of growth from 0% to 2%, well, you might as well just do the right thing. It's the old question, should you break eggs to make an omelet? You can debate that trade-off, but if you can't even get an omelet at the end, there's no reason to break the eggs. I guess I've come around to the conclusion that humans before the Industrial Revolution—or whatever you think the exact cutoff point is—should have been more deontological and just done the right thing. Their ability to do the wrong thing but create huge social benefits just really wasn't there very much, for the most part. That's a little tricky, though, because there's durability.
Say you're in ancient Athens and you're Socrates, Plato, whoever you think is the important figure there. I would say Plato. And, well, maybe you're doing some things wrong, but you're getting your manuscripts preserved. Probably you're not boosting the rate of growth in ancient Athens, but you might argue there's some long-term effect. Or say you're one of Aristotle's students. You don't make good on one of your ethical commitments because you want to go to Aristotle's class, take the notes, and leave the manuscript for future generations. Maybe that ends up helping late medieval times and the Renaissance. Then you still ought to do it.
But again, most of the past rates of growth seem fairly immutable. You have to wonder: to the extent morality is time dependent in this way, and the correct morality should have been different in the past, how might the correct morality be different in the future? The first question is—and this I always find scary—what if we're living in an age right now where we can't actually manipulate the rate of economic growth? Maybe various growth revolutions are over, at least for a while. We're stuck with our current crummy rate of increase in total factor productivity. In forty years it'll all be cleared up when there's a big breakthrough in AI or driverless cars, whatever, but in the meantime, we're stuck and we've got to go back to being more deontological. That would be something, right? Kind of a return of the morality of the ancients.
Another possibility is that actually the future will give us much grander growth opportunities. My utopian friends would say, "Well, we're going to settle other planets, other solar systems." I don't know. I don't actually believe in any of that. I think we'll destroy ourselves first, but let's just play along with that. If that's true, and if there's contact across these civilizations, the benefits of growth are actually much, much, much larger than we thought. You send out the self-replicating von Neumann probes, there's panspermia—life everywhere—you establish connections, whatever, quantum tunneling…
I don't know, but it seems with that future in mind, we should be much less deontological and be willing to do a lot more bad stuff to bring about that now really quite spectacular growth maximization. And they'll look back on us a bit like the ancients. Oh, those poor little people. They had some government statistics department. They argued, "Can we increase the rate of growth from 1.9% to 2.3%?" And that will just seem crazy. They'll be ruling solar systems and who knows what? People will be uploads. You might have trillions of uploads, kind of all financed by solar energy. And again, the current standards and debates will seem kind of crazy.
Another relevant question for the time horizon is whether economic growth of the sort relevant for human well-being actually has some degree of embedded mean reversion. And this gets back to the question, is there any recipe at all for boosting the rate of sustainable economic growth? There are plenty of works on this, but I think in economics the best known would be Mancur Olson's, The Logic of Collective Action and The Rise and Decline of Nations. In the Mancur Olson worldview, you have healthy economies, special interest groups accumulate, they pass bad policies, and they slow down growth. There's mean reversion in growth rates in the Olson model. He looked at Japan and Germany. He at least claimed an answer to, why did they do so well after World War II? Well, the war destroyed all their interest groups. The war was terrible for Japan and Germany, but they had growth miracles after the war because for a while, they were free again and somewhat unfettered.
Not everyone agrees, but the point is, there are plenty of models with mean-reverting growth. And it seems that mean-reverting growth, in a way, shortens the time horizon again. Because if you get wonderful growth now, and it's wonderful for 17 years, but all it does is pay for Silicon Valley NIMBYism—then the tech companies can't grow because they can't hire auxiliary staff because an apartment in Atherton costs whatever insane price it costs.
And Stanford faculty housing is crowded because they couldn't do the thing with— You all know about this. You're all living mean reverting economic growth, I suspect. I'm not sure the world or country as a whole is, but you must know what it means, right, if you live here and come here. And that's just shortening time horizons. It's not as tragic as the big asteroid coming and blowing us all up. It just means positive impacts get bounded and they come back on you.
Correspondingly, if you make mistakes—like in the Solow growth model; don't worry if you don't know it—there's some element of catch-up, and in some of these models, mistakes just matter way less. And I wonder, if you believe in a Solow growth model, should you be more deontological, because consequentialist mistakes matter less and benefits are less lasting? I guess I think you should.
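The catch-up point can be illustrated with a minimal Solow simulation (illustrative parameter values, not from the talk): destroy half the capital stock, and the economy converges back to the same steady state, so the "mistake" washes out.

```python
# Minimal Solow model: capital evolves as k' = s*k**alpha + (1 - delta)*k.
# A one-time destruction of capital is transitory; the economy converges
# back to the same steady state, so the loss eventually washes out.
alpha, s, delta = 0.3, 0.3, 0.1
k_star = (s / delta) ** (1 / (1 - alpha))  # steady-state capital

k = k_star * 0.5  # the "mistake": half the capital stock is destroyed
for _ in range(100):
    k = s * k ** alpha + (1 - delta) * k

print(round(k / k_star, 3))  # ~1.0: back at the steady state
```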
And it's interesting, if you look back at history of world per capita GDP, history of world population, there seems to be, over the long term, a clear pattern that at various points in time—this is, again, pre-Industrial Revolution—per capita GDP goes up. People are happier; there's more food. What happens when there's more food? There's better nutrition; fewer infants die; people live longer. But how much is the technology for producing food really much better? Well, not much.
Then there's some Malthusian correction. In essence, the world in earlier times, it can be argued, kept on creating rents. But then the rents are exhausted by growing population, and you bounce back down closer to the Malthusian setting. That's again a form of mean reversion. It could be that what's special about the modern age is that we're the first civilization that will exhaust our rents by something other than population growth, and this in a way could make us the stupidest of all civilizations collectively, because with population growth, at least there are some more happy people.
Imagine exhausting your rents just through bad policy. Again, you're all in California. You all, most of you, spend more time here than I do. It's like the world's best weather. Even here is much better than San Francisco, right? You know this. Much better. Maybe not quite as good as San Diego. And you pay for it with wildfires, power blackouts, other problems, right? Bad state government, that's just like rent exhaustion. You're not even getting more people for it. There's that study by Raj Chetty. He looked— Of all the communities you might move to, which move makes you most likely to have fewer kids? And it was here, right here. I don't mean like South San Francisco. I mean right here.
So again, to the extent you believe in a kind of mean reversion, that's significant. I'm not sure how much I believe in mean reversion. There's a research paper in economics—I would say it's somewhat known, but I think it should be much better known. Bill Easterly is the main author on it, and Bill Easterly looks at per capita income through time, and he finds remarkable persistence effects. He finds, over a number of periods of human history, that per capita income can predict a region's per capita income up to a thousand years later. A thousand. You need to adjust for settlers moving: if British people moved to New Zealand, you need to carry the British variable over to New Zealand. But if you do that, the predictive power of this model across various thousand-year time differences is really pretty good.
This, to me, is all of a sudden a very important moral fact. If Bill Easterly and his coauthors [Diego] Comin and [Erick] Gong are correct, we're back to economic growth being really important and deontology being kind of like, eh. You can do some wrong things, because we're going to make an omelet. Because if you do something good, on average, the benefits last for a thousand years. That's pretty good. I don't even think the whole world's going to last 700 years, right? That would imply a big role for consequentialism, zero discount rate. Oh, it won't last forever. But if we make this improvement, we're going to get gains for a thousand years—if you accept the Bill Easterly argument, which, as far as I know, has not been refuted. But it also hasn't been studied enough. It's not based on perfect data, but again, it suggests persistence.
The "deep roots" literature in political science, and also in economics—I wouldn't count it as fully established, but there are good arguments for it. If you accept the "deep roots" literature, that makes gains and losses quite persistent. And that again means we've got to jack up the rate of economic growth, that it's not about mean reversion. How much ethical or moral choices depend on the persistence of deep roots and high per capita income never occurred to me before writing this book. And part of me thinks it's kind of a reductio ad absurdum. Part of me thinks correct morality should be eternal, some kind of balance of consequentialist and deontological considerations—not a priori, but not dependent on whether you're in 1320 or 1956. But I guess now I think it does depend. I'm sort of arguing against myself. But again, I promised you this internal monologue of both sides, and that's what you're getting.
Here's another problem with my argument, and this one bugs me. I know I don't have any solution to it, and I've actually worried about it since 1984, which is now a long time ago. I have those simple three words—sustainable economic growth. And I left out: is that total, or is that per capita? In a lot of settings, again, it's not a problem. For a lot of human history, the two have gone together. Societies become wealthier, they support more people, it all moves together. You don't need to sit around and agonize about which is the correct concept. But of course, if you're a philosopher, the empirical record is not the main thing anyway. And besides, we're now in a time where it seems, in a lot of countries, there is a trade-off between per capita and total—that is, there are plenty of wealthy societies where population is shrinking.
I think South Korea has become the worst culprit, or possibly Singapore. Italy has a TFR [total fertility rate] of, I think, about 1.3. France and England seem to be back up at around 2. So it's a complex picture, but there's no doubt there are plenty of countries, typically wealthier countries, where population is shrinking. I looked at India a while ago—this number might be a little out-of-date—and India is not that wealthy. Their TFR was at 2.7. They are still growing, great. Maybe they're in this one zone of morality where their key variables, per capita and total, are still moving together.
You even have some poor countries, like Iran, Mexico falling off the fertility table. Mexico, I think, is close to being below replacement. Places that will have a hard time affording population aging won't have a demographic dividend. So in those cases, like my simple little formula, the one in this simple little book, it's like, come on, is it per capita? Is it total? I don't know. And it bugs me. So when it comes to weighting the value of total population against per capita income, I don't think I have a clear answer.
One implication of this is that in the wealthy countries with shrinking populations, there are moral dilemmas with prosperity that do not exist in the countries where per capita and total are still growing in tandem. And I don't know what to do about those moral dilemmas, but they’re cases I feel I cannot handle, and I wrote other articles about those cases way before I started on this book. I read Derek Parfit's book in 1984 when it came out. I have thought about it since then, and, again, I feel at this point I'm not going to solve that problem. But again, it's interesting the relativization of morality across those two classes of society. Moral dilemmas, where you have shrinking populations. And then you've got to worry a bit about the value of economic growth in a way you don't have to in many other countries.
Other problems with the argument—these are getting a little more obscure, but I'll close with one or two more obscure ones before we get to question and answer. This falls out of economic models. In economic models—take the Solow model—there's a big distinction between a once-and-for-all gain and a sustained increase in the growth rate. A once-and-for-all change would be if, say, a worker decides to work five extra hours one week, and that's it. Will that boost GDP? Well, yes, right? Is that the end of the story? Well, it depends, but in a lot of simple models, yes. You get a one-time gain, five hours' more worth of output, and end of story.
These things that we call increases in the growth rate, those might be increases in the rate of idea generation. If all the most brilliant scientists decide to work an extra five hours every week, they might invent more things, and then over time the economic growth rate will be higher than it otherwise would have been. And that will carry you through to these huge gains through time—all those scenarios we talked about, where things just compound and social discount rate is zero.
So it seems in this framework a once-and-for-all change is like, fine, I'll take it, and a change in the rate is like, oh my goodness, this is paradise. Utopia. My goodness. If we can keep it going forever, might even have an undefined value in some manner, unbounded. That rubs me a bit the wrong way. It bugs me because if I put my philosopher's hat back on, I can tell you in a Solow model what's the difference between a once-and-for-all gain and a boost in the rate of growth. I know that. I've taught it. At the metaphysical level, aren't these gains in the rate of growth just kind of a clumping together of a bunch of once-and-for-all gains?
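The Solow-style distinction can be made concrete with a small numerical sketch (my illustration, not from the talk; the numbers are arbitrary): a one-time level gain stays a fixed proportion of output forever, while even a tiny permanent increase in the growth rate compounds until it dominates.

```python
# A hypothetical illustration (not from the talk; numbers are arbitrary):
# contrast a once-and-for-all level gain with a sustained increase in the
# growth rate, as in a simple Solow-style setup.

def gdp_path(years, base=100.0, growth=0.02, level_bump=0.0, growth_bump=0.0):
    """GDP after `years` of compounding, with an optional one-time level
    gain and an optional permanent increase in the annual growth rate."""
    return (base + level_bump) * (1 + growth + growth_bump) ** years

baseline   = gdp_path(100)                     # 2% growth for a century
level_gain = gdp_path(100, level_bump=1.0)     # a one-off gain of 1 unit
rate_gain  = gdp_path(100, growth_bump=0.001)  # growth rate 2.0% -> 2.1%

# The level gain remains a constant 1% above the baseline path forever,
# while the small rate gain compounds and eventually dwarfs it.
print(baseline, level_gain, rate_gain)
```

Even a tenth of a percentage point on the growth rate overtakes the one-off gain within a century here, which is why the framework treats the two so differently, and why Cowen finds the metaphysics of the distinction uncomfortable.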
You look at Chinese economic growth, to make this a little more concrete. They've gone gangbusters since 1979. It's the most stunning 40-year period ever. For the most part, they have not invented new technologies the way new stuff comes out of Silicon Valley. They've been very effective mobilizing resources, but it's as if every year they kept on giving a new once-and-for-all gain. But the Chinese story, just phenomenal. Is that like these kinds of pitiful once-and-for-all gains, or did they manage a sustained increase in their rate of economic growth compared to Maoist times? It seems to me somehow like a semantic or an arbitrary distinction in the metaphysical sense. We know what they did. That's not a mystery. But what category does it belong to, and how morally should we evaluate it? It seems like some parts of my framework invoke that distinction, which makes sense in economic models but maybe doesn't make sense in philosophy.
The best piece on this is by John Cochrane, who wrote a long blog post challenging this distinction, and John, I think he mentioned China as one of his core cases. He's like, "Look, they didn't really invent much new stuff. They just kept on doing the right thing." And if you keep on doing the right thing, can't you model whatever process induces you to keep on doing the right thing as a sustained rate of increase in economic growth? There's some kind of principle of distinction or individuation of decisions, where how you treat those individuations seems to have too much moral force in my framework relative to what it ought to have. And I really haven't figured that out either.
Just one or two other things I'll mention, not talk through the whole thing, before we close. People have asked me, "Does the rate of social discount have to be zero? Can it just be really low? What about like 0.00001?" You can add on 30 more 0's if you want. And I'm like, "No, it's got to be zero." But that sounds a little weird, doesn't it? There's this funny moral discontinuity. You remember my example: if the rate of discount is 5%, one death today is morally equal to 39 billion deaths in 500 years, and you're like, oh my God, we can't do that. Can't have a rate of discount of 5%. But if your rate of discount is 0.00001, you've just got to up the number of years enough to get the same comparison. And time is either morally neutral or it isn't.
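The arithmetic here can be checked directly. A quick back-of-the-envelope sketch (my illustration, not from the talk): at 5% the discount factor over 500 years is indeed about 39 billion, and a rate of 0.00001 produces the very same factor once the horizon stretches to a couple of million years.

```python
# A hypothetical back-of-the-envelope check (not from the talk) of the
# discounting arithmetic: at a 5% annual rate, one present death "outweighs"
# roughly 39 billion deaths 500 years out, and any strictly positive rate
# reproduces the same ratio if you stretch the horizon far enough.
import math

def discount_factor(rate, years):
    """How many future units one present unit is worth under compounding."""
    return (1 + rate) ** years

factor_5pct = discount_factor(0.05, 500)  # roughly 3.9e10, i.e. ~39 billion

# Horizon needed for a tiny rate (0.00001) to reach the same factor:
years_needed = math.log(factor_5pct) / math.log(1 + 0.00001)
print(factor_5pct, years_needed)
```

The tiny rate gets to the same 39-billion-to-one comparison in roughly 2.4 million years, which is Cowen's point: any strictly positive discount rate eventually delivers the same morally troubling trade-off.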
You end up forced into this framework where you're saying the rate of discount has to be exactly zero, not a smidgen more, but you come back to what does human well-being mean? You might think it's partially cardinal. You might think it's partially comparable. But it's not this simple, clear cut thing where you measure utils like Jeremy Bentham perhaps once dreamed of. It's a highly imprecise concept. The notion that you're dealing with this pretty rough, imprecise concept and then this moral absolute of "my goodness, it's got to be zero, can't be a smidgen more than zero." I don't know, that bugs me, rubs me the wrong way. Makes me feel like I don't have the whole framework.
Final issue that bugs me, by no means the only extra issue, but simply the issue of animals, which people either find totally compelling or totally unconvincing. No one I think has the— Hardly anyone has the appropriate in-between concern. But in my view, it's a bad thing to torture a dog. We sort of all agree with that, but no one actually wants to put animals into the social welfare function. How many hamsters are equal in value to a person? I don't think there's an answer to that question. Is it 20? How about 3 trillion hamsters? Do we bid up or down? Somehow, it seems to me we're asking the wrong question. I don't know how to make those units comparable.
You can then move to the view that there's something not very comparable about humans and animals. It doesn't mean you've got to be nasty to animals to think they're somehow in different units. But once you think there are different units, you're like, well, why are there different units? We're alive, they're alive. We feel pain, they feel pain. It's like, no, we're different. Well, okay, we're smarter, but you don't want to weight people by how smart they are in the social welfare function. It seems to me there's some kind of vision where you can't make a lot of comparisons across people and animals because they just kind of inhabit different worlds.
Maybe it's not a contractarian framework but a contractualist framework. There aren't systematic ways we can make deals with nonhuman animals and be part of the same society. They're just outside the cone of moral relevance at the big, macro level. You can't say 3 million, 3 trillion, whatever billion hamsters for a person. But once you do that, which I'm kind of fine with on its own terms, you then realize these other consequentialist calculations are embedded in some bigger picture view. It might be contractualism. I'm not insisting it's that. Whatever else it is, you might wonder about that larger embeddedness. Why don't you take that embeddedness and apply it a bit more directly to all those moral comparisons you are trying to clear up with all your silly talk about maximizing the rate of sustainable economic growth? And again, I'm back to, I don't know.
With that, my talk ends. Those are my thoughts. I figured you're all sick of regular talks. I'll keep on thinking about them, and thank you all for coming and listening.
Audience Member: Thank you. Brilliant talk. It's great to be an ethical pluralist within consequentialism, but going back and forth from consequentialism to deontology feels uncomfortable because they’re such radically different kinds of thought. It's like the robber barons going to church on Sunday and nodding along to the Sermon on the Mount. Instead of invoking absolute deontological constraints, like human rights, wouldn't it be more tempting for you to say that human rights are binding when growth possibilities are 10% or less? But actually, the deontological constraints themselves have a consequentialist justification. If I can offer you a growth rate of 50%, 500%, those justifications for the constraints drop away, and then you would just go full bore consequentialist and get rid of that uncomfortable human rights stuff.
Cowen: That is a possible resolution and I don't want to reject it, but a big part of me is not ready to embrace it. It seems to me even consequentialism is somewhat parasitic on moral intuitionism. Part of our moral intuitions is that some rights constraints are truly binding, short of needing to prevent the destruction of the entire universe. Killing and torturing innocent babies in large numbers is something I wouldn't personally do, even if I could boost the rate of sustainable growth by whatever amount in a 2-trillion-year framework. And I'm not ready to toss that out the window. And then we get back to what are the trade-offs? But I fully get what you're saying, and it would solve a number of problems for me if I could just embrace it. But I can't push intuitionism out of my argument altogether because I don't find utilitarianism self-evident. And then I'm back to messy pluralism and trade-offs and morality being kind of contingent on how much you can boost growth at any point in time.
Audience Member: Thank you. In the news Microsoft Japan went to a four-day workweek and had a 40% increase in productivity. I'd like to ask about February 2009 and the length of the workweek. The original legislative response to the recession was the Emergency Economic Stabilization Act of 2008—$850 billion of stimulus that didn't do anything to the leading economic indicators. But in February, the American Recovery and Reinvestment Act led with a payroll tax cut, called the Making Work Pay Tax Credit, and it turned things around so sharply within a month of enactment that by the time it got to 2010 and the Republicans had taken control of Congress, they had decided not to reenact it, presumably because it would risk inflation. My question is, can we follow the Netherlands by reducing the workweek length and allow that to provide the kind of distribution of labor that would not harm the cost of labor?
Cowen: There are a number of distinct questions embedded in there. I would say we disagree on the history of the stimulus, but that's not essential to what I see you as asking. The Microsoft study— I've read about it, I haven't read it, as a word of caution. As I said to a student group before, more than half of the studies you read about are false. I face a number of competitors in what I do, and I would be delighted if they read this study and took it to heart. Keep in mind also, Microsoft is quite a mature company. It's now about 50 years old. It's probably not true, almost certainly not true for startups, that they should just move to four-day workweeks. I don't think in general working less is a way to get higher output. There's a large literature: Edward Prescott's piece is most notable, but there are also pieces by Charles Jones, who teaches here, and Pete Klenow on related issues. And it does seem when people work more, they produce more on net.
Not in every single case, and you can't have them work 24/7. But I don't think working less at current margins is in general a path to higher output, and a big chunk of the difference between U.S. and Western European per capita GDP stems from that. And Prescott and Paul Krugman would both agree on that point. I think at most margins, there's no free lunch. If people value the leisure more, fine. A proper measure of economic growth should actually count the value of leisure and household production, not just measured GDP. I advocate a kind of reformed measure for GDP, but still, for the most part, I think the American system of lower marginal tax rates on labor and higher work hours will get you a higher rate of sustainable economic growth. And this country has seen that, compared to Western Europe. And that I take to be your main question.
Audience Member: So I want to ask you about moral growth. Since the Stone Age, there hasn't just been economic growth, there's been big growth in coming to correct moral views. In the ancient world, you say they should have been deontologists. They weren't deontologists or consequentialists because Kant and [John Stuart] Mill hadn't come along yet. They had a different ethical theory that didn't include any human rights. Aristotle thought slavery was morally required. And so the question is, do you think that that kind of moral growth in our grasp of the correct moral view is still ongoing or not? If not, then Tyler Cowen's ethical hybrid of deontology and consequentialism is the end of all human thought about morality. Okay, that may be your argument, but it seems epistemically not humble.
And so then let's say we take the other view, right, which is that actually we're still working on morality, and over the next 600 or 700 years we're going to keep working on it, and we might change some of our thinking about morality. Then might we maybe not want to plan for economic growth that outstrips how long a moral theory is going to last? Because what if our moral theory changes in a hundred years? Maybe we shouldn't make any decisions that are only correct on the assumption that our morality doesn't change.
Cowen: I think on moral growth, my view is pretty close to Steven Pinker's, which is that there has been moral growth correlated with positive economic growth, at least until the world ends 700 years from now in a nuclear holocaust. Violent deaths are falling across the relevant time horizons. For the most part, people are nicer to each other. There's enough co-movement. I'm not sure we need to figure out the right weights between economic growth and moral growth. And it's not that people have all arrived at the correct Tyler Cowen view of the world. If everyone just listened to me, no one would drink alcohol and the murder rate would be zero, right? People are out there doing different things, but it does seem that the growing societies become more peaceful and safer on average, for the most part, in a somewhat predictable way.
Now that said, there's one area where I think you find it very hard to see moral growth, and that is how people treat animals. The wealthier societies have much more factory farming simply because they can, right? There's no factory farming in ancient Athens. There's just not the technology. We're much nastier to animals, and I don't think there you have co-movement, and we're back to my framework, not handling animals very well at all. That would be the case I worry about. Otherwise, I see a fair degree of co-movement.
Audience Member: Lovely talk. I actually messaged someone beforehand hoping, asking whether you'd be elaborating on Stubborn Attachments. My question is, to what extent do you feel that your book is predicated on an assumption that economic growth will continue to be correlated with well-being improvements? It seems to me that well-being improvements are likely to saturate, even as economic growth may continue to accelerate. Does that concern you?
Cowen: It concerns me. My view of the show so far, so to speak, is that in the short run wealth and happiness are barely correlated. And you see this within individual lives as well. But over the long run there's an extreme correlation. I think it's fair to say people in the United States are much happier than people, say, even in a peaceful Yemen or in much poorer countries. If per capita income in the United States could be 5X higher rather than 2X higher, my view is in expected value terms we would be much happier.
Now, getting back a bit to the issue of animal rights. If you think one of the next things growth will do is make us fundamentally different through something like genetic engineering, or maybe drugs will be so good, we'll feel pleasure but our minds will be obliterated. We won't even seem like the humans we know; we'll be plugged in somehow. We're not then animals, but it would seem to me we could become fundamentally different beings, where we're just outside the moral cone I'm used to operating within. And then I'm back to not knowing how to evaluate it.
There's some common moral cone, which has the humans, not the hamsters. Future growth could push what are now humans outside that cone, and then I think I'm back to a kind of incommensurability. So that does worry me, but in the meantime I think of it in terms of expected value. That happening is not a sure thing. If it doesn't happen, we'll just be better off. If it does happen, we're not sure. The case where we're better off is going to dominate the expected value calculus. So full steam ahead is where I'm at. But I do think about this quite a bit.
Audience Member: Hi, so thank you for your fascinating lecture. My question is about China. In 2019 alone China spent more than $19 billion on building infrastructure, but it doesn't seem like this investment is giving enough marginal benefit as China is experiencing kind of a slowing economy. I was wondering, do you think that after a couple of decades of rapid economic growth, China is now addressing the sustainability part? Or if not, just to open the question up, if not, what kind of markets or industries do you think have the potential of reaching that sweet spot between sustainability and economic growth?
Cowen: That's a very good question. I think about China a great deal. I make a point of going there at least once a year. Perhaps it's no surprise that I don't have particularly good China predictions, but I would say this: China experts I speak to seem to think that right now, China's steady state rate of economic growth might be something like 4%, a far cry from the former 10% but still not bad, right? The Chinese labor force is now shrinking. China is aging. It will end up as an older country than the United States. But I'm not so driven to pessimism because of a number of features about China.
There is still a lot of possible rural to urban migration that would raise the effective labor force even when the numbers are falling. There is room for China to use its female labor force much more effectively, and Chinese retirement ages are now often quite early. They can be as young as 50 to 55. You could extend those, especially as the quality of jobs in China rises and they become more fun, more creative, in a way that would boost output, increase the effective labor supply—even with the shrinking population—and forego the decline of China into a steady or stationary state.
With that all said, I really don't want to give you a China prediction, but I don't see any particular reason to view it as unsustainable. And it's also possible, I would say likely, China is now on the technological frontier in a number of areas. The clearest one is payment systems. Chinese payment systems, I don't have to tell you, they're way better than the U.S. You go and you're sticking your card in the thing, and oh, and then they're, like, "slide it," and that doesn't work. In China you're, like, QR code, half a second, done. The fact that China is on the frontier in at least some areas makes you think they're going to get TFP [total factor productivity] innovation as another input into their growth. Again, it seems to me in expected value terms, China ought to be sustainable. Not really a prediction, but why not?
Audience Member: Yeah, so I'm a student at the University of Utah. I'm not Mormon, but like you I really have a strong appreciation for them. I've noticed a lot of public intellectuals place a lot of emphasis on their high fertility rate, and seeing as you really also care about that, you can take this as a suggestion or just a thought experiment. Can you see a future where Mormonism becomes a refuge for public intellectuals? And can you even see yourself going so far as to pay the tithe even as a nonbeliever to increase the total fertility rate in this country?
Cowen: I'm not a believer in any particular religion. I'm somewhere between agnostic and atheist. I have a Straussian side. But I'm not going to contribute to population growth at this point, so I feel quite safe praising it without being responsible, right? I can up my quotient to praise, as one would do in a simple economic model. It seems to me that the macroeconomics of a shrinking population are probably worse than we used to think. And this is a pretty new research topic, for obvious reasons, but it's hard in some ways to stimulate your economy when both aggregate demand and aggregate supply are shrinking. You don't have to be an extreme Keynesian to have that worry.
It seems to me France has managed to reboost its rate of population growth. I've read about what they've done. I'm not sure how policy specific their success has been. I know of other cases, such as Singapore—the world's best economic policy, super smart people have tried to reboot, and it has completely failed. Government dating services and cruises and there's a subtle sign, "Do your job. Mother Singapore needs you." And the Chinese fertility rate in Singapore, I think like 1.1. It seems to me this should just be a much more central concern. I don't have the answers.
It seems to me religiosity is part of the picture. I'm very concerned. The Western world has become much less religious. Again, I don't know how to engineer that any differently. I don't feel it's a matter for the government, but still, there ought to be some kind of dialogue. Maybe we need more religious innovation in some way. My prediction would be in the long run something like religion will come back, but the pessimistic view is kids just aren't that fun. Amazon delivery and Netflix are better. This is the new equilibrium. Get used to it. I'm just not quite that pessimistic, but look, that could turn out to be right. Even young people today, they're having less sex. What's with that? But it seems to be true.
Audience Member: Thank you. While we are on the topic of— While we are on the theoretical and ethical leadership, why do we have to puzzle over and exercise our minds over what you just said about shrinking populations, and should we do GDP's total or GDP per capita? Just open— All borders should be open. Allow free movement of labor, one of the core bases of economics, and why do people— And change the composition of this room. Change the mix of this room. Why do people worry about having their own with them all the time? Open it up.
Cowen: Well, I definitely believe in higher levels of migration, including into the United States. It won't boost the world's population as a whole, which is the relevant variable. In fact, it will tend to spread lower fertility. Immigrants come to wealthier countries and typically their fertility rates fall very rapidly. I still want to get more people into the equation. I don't think fully open borders is feasible, but again, I'd like a lot more immigration. But it's only solving the problem for particular countries and not for humanity as a whole. And then we're back to needing to think through what can we do other than making children such an important economic asset that families try to have seven of them? That's a bad solution, right? People are then so impoverished. What can we do to make having kids more attractive? Is it robot nannies? I don't know. Is that good for kids? I don't know. I think it should be a much higher percentage of social science research.
Audience Member: Hi Tyler. Thanks for coming. My question is, would your framework— Does it have prescriptive advice for a lot of us early-career people thinking about their careers? Is the right way of choosing a career path thinking about what is the way to maximize my marginal contribution to sustainable, long-run economic growth? If so, what other assumptions might I need to believe for this also to be consistent with other common adages? Like do something that you are compensated for that you enjoy doing and are good at. More generally, just thinking about your framework as applied to particular acts rather than thinking about general social policy.
Cowen: That's a very good question. Obviously, it's going to depend on what is your prevalent theory of economic growth, but the old [N. Gregory] Mankiw, [David] Weil, and [David] Romer results, that human capital drives growth, I pretty much accept. And good institutions. Insofar as you're going to feel morally obliged to do anything, to invest in your own human capital would be the dominant recommendation. A lot of people should save more in my framework than they're currently saving. It doesn't boost rates of growth in every economic model. I get that, but overall I think that would be a positive. And there's a movement called effective altruism. I tend to advise a lot of people to earn a lot of money and give it away, rather than look for the best way of volunteering. A lot of talented people should consider the selfish route of just earning a lot.
And there's one fellow I know, he works at a hedge fund in New York City. Very successful, and people who know him claim he lives off of, like, $30,000 a year in Manhattan. He just doesn't spend money, even though he might be earning a few million dollars a year, and he gives the rest away. That's pretty remarkable. It seems to me more people should do that. I'm not sure it's sustainable for everyone or even most people, but it's astonishing that as far as I can tell, he's the only person in the whole country doing this. And I'd like to see more of that. He could live off even a bit more, but he's a very consistent Peter Singer-like utilitarian, and he's helped a great number of people by doing this.
Audience Member: So I have two questions. One is, what do you think is the best place to eat around here? And the other one is, I want to follow up on the saturation question. It seems like there's lots of data that shows that as income goes up, well-being sort of levels off after a while—not just for an individual in a single life, but across countries and within a single country across different percentiles of income. I guess I'm wondering, do you just not believe those results, and if not, why?
Cowen: The food, that's the easy one. Depends what you mean by around here. I find Palo Alto and Menlo Park to be depressing in the culinary sense, but I think there's a clear first choice, and that is to eat regional Indian cuisine in Mountain View, which is consistently excellent. I don't know the very latest best places, but I've done it many times. It's always been a home run. And that's my recommendation. In San Francisco, I think Burmese is on average the highest quality and best by quality, enjoyment, price gradient.
Now, the leveling off of wealth. I think a lot of that, but not all of that, is true in the short run. I'll just say a few things. First, not all of those studies are well done. Sometimes Puerto Rico or Ghana comes up as the happiest country. That gets a lot of publicity when things are okay in Puerto Rico, but if you look at, well, how has Puerto Rico coped with what happened? Why does such an immense percentage of the population of Puerto Rico live in the continental United States? It's just not true.
People used to say, once your country reaches the per capita income of Greece, then it flattens out. People don't use that example anymore, right? This was pre-euro crisis. I think there's a shortsightedness to some of it. Catastrophes come. They might even be random. They might be due to your bad institutions. But I would way rather be in a catastrophe in Switzerland than in Greece because, and it's not the only reason, Switzerland is wealthier. I think over the longer run, if we continue growing at a decent rate, the wealthier countries will enjoy health care innovations, lifestyle innovations, mental health innovations that for various short and medium runs will not be so available to poorer countries like Portugal and Greece, because they're poor.
I think any 10-year period, you're not going to find much of a correlation. There’s maybe a weak one in the [Betsey] Stevenson, [Justin] Wolfers sense, which if you measure it the right way, doesn't actually look that impressive. But I think over 50, a hundred years, never mind a thousand from the Bill Easterly paper, I think there's a very strong correlation between prosperity and just well-being and happiness and longevity and whatever other values you might want to put on the table.
Audience Member: Thanks so much for the talk. It seems like both a virtue and a vice of your moral philosophy is its sensitivity to certain facts about the world. Sometimes critics of consequentialism will say you're being held hostage to the empirics. My concern: shouldn't one criterion for whether we accept your philosophy as a political moral philosophy be whether it can scale?
So suppose I'm more of an optimist and I think we'll be around for five millennia, and therefore we should subsidize tons of particle accelerators, or rather we should not do AI. And you think 700 years and so we should do AI but maybe not double the particle accelerators, and then someone else even more pessimistic— We're going to get wildly different policy prescriptions. Alternative proposal, why not just say, "Here's the certain things that we should do." Kind of a deontological but collective sufficientarian thing, and then we all just agree on those things.
Cowen: I love being held hostage to the empirics. I worry in this case we're not even being held hostage to the empirics, however. If I take your examples, which I'm happy to accept in their own right, you're being held hostage to your speculations about how dangerous the future is, and I think we all would agree, those are not— Whatever you might think, I might think, those are not very reliable, right? But here's where I'm willing to bite the bullet, and this bullet I bite— And I had all these uncertainties about my arguments. This one I'm just going to bite.
If that's the relevant variable, we should in fact be so uncertain about a lot of our bigger macro views. People are not uncertain enough, and they want to complain about people who disagree with them for whatever reason. They're not looking enough at the bigger picture. They should be more agnostic. And that bullet remains fully bitten. But I get what you're saying. It is a weird place to end up, but I think it's the correct place.
Audience Member: Thank you for your lecture. I was wondering if you had ever thought about a world in which we could create virtual realities, in which we could design virtual economies, where the principles of economic theory would be very different. Yeah, that's pretty much my question. Have you given that thought that it could become reality, and then how that would essentially look very different from how economic theory functions today?
Cowen: I think about that some amount, and I know some people working in that space, but like a lot of questions, it's hard to make progress on until we have it. One of the closest things we have to virtual reality are these advanced computer games, which have embodied in them a lot of economics. They have monies, they have prices, they have markets, and what's striking to me is how little the economics changes in those games. I'm not convinced virtual reality will be just like those, but I guess my modal forecast would be, they'll shift the locus of economic trading, but the laws of economics are pretty much going to stay the same. And we'll see, but that's what I'm expecting.
Audience Member: Thank you for coming. I'm just curious, relative to your finite prediction for how long humanity has left, how much weight do you put on preventing extinction relative to economic growth? Hypothetically, if you think the world is going to end in 600 years, but if we prevent the thing that would cause it to end, we get another 2, 3,000 years, wouldn't it be worth it to stagnate growth for that 600 years to get your 3-4% for the next 2,000 years? Just purely mathematically.
Cowen: Here's how I think about that. I think we should devote many more resources to limiting the chance of nuclear war. And there are things we could do. We could help countries, even hostile countries, develop better early warning systems so there's not a false launch. We can improve our own defensive strategies. It may shape how we pursue our diplomacy. Nuclear weapons are already a big issue, but it seems to me a lot of the American public has simply assumed they won't be used again. Hasn't happened lately. Has vanished from the radar screen. Young people worry much more about climate change, a major concern in my view, but I still think nuclear war is much more important, and we can and should do much more to limit the chance.
But my fear is this: I'm very happy to invest in another 67 years for the billions who live on earth. Once the stuff is invented I tend to think it's going to happen sooner or later. It's hard for me to imagine there's something we could do that would postpone it another 5,000 years, and, indeed, you could imagine weapons becoming more destructive and nuclear weapons looking like a kind of toy gun at some point. Maybe it's biological weapons, but it seems it's just going to get worse and cheaper, and that will intensify. All the precautions we should take, which I would fully advocate, be willing to pay higher taxes for, I just don't really quite see we're in for such a major postponement.
Now you might think, well, what if we put Tyler in a time machine and send him back, does Einstein still write that letter to FDR? It does seem to me nuclear weapons themselves have boosted world peace on net, even if they're going to do us in 700 years from now. The world before nuclear weapons, we got a real good close look at that—World War I and World War II—and that was not pretty, to say the least. But in terms of where we are now, I think we're stuck and we have to hope that somehow human nature is better than it seems to be.
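The "purely mathematical" trade-off raised in the question can be sketched with made-up numbers (a minimal illustration; the 3% growth rate and the 600/2,000-year horizons are hypothetical stand-ins, not figures from the talk):

```python
# Compare two hypothetical futures under a zero discount rate on well-being,
# where total value is just the undiscounted sum of each year's output level:
#   (a) grow at 3% for 600 years, then extinction, versus
#   (b) stagnate at today's level for 600 years, then grow at 3% for 2,000 more.

def growth_path(years, rate, start=1.0):
    """Yield one output level per year, compounding at the given rate."""
    level = start
    for _ in range(years):
        yield level
        level *= 1 + rate

option_a = sum(growth_path(600, 0.03))
option_b = 600 * 1.0 + sum(growth_path(2000, 0.03))

# With these assumptions, the longer horizon dominates by many orders of
# magnitude: compounding for 2,000 years dwarfs 600 years of it, even
# after paying 600 years of stagnation up front.
assert option_b > option_a
```

This is the force behind the question: with no discounting of future well-being, extending the horizon of compound growth swamps almost any near-term sacrifice, which is why Cowen's answer turns on whether a postponement that long is actually achievable.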
Audience Member: All right, so as I understand your view, you picked maximization as the ultimate goal or value. And I get the—
Cowen: Not ultimate, but continue.
Audience Member: It's a very preeminent value, right? And I get the intuition because, for example, the trolley problem where there's five on one side, one on the other side, somebody has to die. Everybody picks the one to die, save the five, five greater than one. But those intuitions kind of change when it's not impersonal. When it's six strangers, the intuition is let's aggregate. When it's personal, everyone will save their child and kill the five. And then there are issues of justice and merit. If the five are convicted murderers and the one is an innocent child, people's intuitions might diverge, and we can fight over that. Then if we're going to put maximization as such a preeminent value, where does something like justice sit? And why aren't we thinking of maybe justice as the preeminent value, say, through Rawls's theory of justice, and then within that we think about maximizing what we can do that's just?
Cowen: In part, you're tossing me some more bullets I can bite. But in part I agree with you. If you think about Rawls's framework, it's really based on the idea of a Pareto improvement for everyone. And he's arguing to get a Pareto improvement, even the worst off people have to be better off. It's not exactly my framework, but I think in a long run sense, higher rates of economic growth will get you as close to that as you can get, noting that even in Rawls, the size of the worst off group, is it a person? Is it a social class? Is it people in the poorest county? That's left open, up for grabs. I think boosting growth gets you not so far from one reading of Rawls.
But that said, you can build deontological rights in as a constraint in my theory, but I don't view particular one-off injustices as nearly significant enough. And if you just look at choices people make in the world, I don't see that anyone is willing to devote, say, 80% of GDP to making sure an innocent person is never convicted. People will kind of say that, but I don't think anyone believes it. For me, the rights violations that count are on a large scale, and they're gross and they're obvious, but when it comes to one innocent conviction, I'm going to bite the bullet and let those compounding, maximized returns overwhelm it.
Audience Member: This is a sort of related question. Do you think that there's any moral imperative to address inequality or any situation where inequality is so severe that it would diminish the moral imperative to boost economic growth?
Cowen: No, not at all. That's one radical implication of my argument. Now you might say, well, if people resent the inequality too much, you get something like Santiago, Chile, and you should address that to maintain sustainability. I'm on board there 110%. Or you might say the real problem is not inequality, but, say, poor people have malnutrition, and if we send them money or help them out in some other way, they won't have malnutrition. They'll be better off. They'll also be more productive. I'm with that 110%.
But the mere fact that there is a measured, nominal difference between the wealth of Bill Gates and, say, the wealth of a very poor person in a rural area, that is not a morally relevant factor in my framework, though the well-being of poor people will get picked up very strongly by other considerations. I mean, there is human talent being vastly wasted there, and there is immense suffering, and those people are more than capable of doing phenomenally better, and it's a very strong priority for us to make that happen.
Audience Member: Hi. Hello. Thank you. My question is kind of a follow-up on this, and it's maybe a twofold question, also about the distribution of the benefits of that growth, because I think it depends very heavily (this is the second part of the question) on how you measure well-being or happiness or quality of life. These are very different measures. Look, for example, at quality-adjusted life years in the U.S., which are going down in some places. Infant mortality rates are going up again in some places, even though in general there is economic growth. And if you argue for a 0% social discount rate on well-being, this also goes the other way around, right? We can't sacrifice people today for later gains, in some ways.
Cowen: Just as a casual observation, it's striking to me that the areas with the biggest problems typically are poor. I wouldn't say that in every case, but on average, it's strongly true. But I also mentioned before this paper by Charles Jones and Pete Klenow, and they correlate measured GDP and real quality of life as best we can measure it, including public goods, health, and longevity, and they find the correlation is 0.98. That's astonishingly close. I'm not saying that in every case you've got to mechanically maximize measured GDP; there are obvious counterexamples, and I don't agree with that. But just as a rough guide, per capita GDP does way better at picking up these other factors than almost anyone thinks. Before I read this paper, I might've said, oh, I think the correlation will be 0.93. That's really high, and I'm me. I read the paper. It seems very well done. And it's 0.98. That really makes an impression on me.
Audience Member: Just a short question. So why do you think so many rich societies behave as if the social discount rate is quite high?
Cowen: People are selfish. But I think also: discount rate on what? The discount rate on dollars, or whatever your currency is, should not in general be zero. There are financial flows. In most, but not all, settings, you can reinvest dollars at a positive dollar return in real terms. Then when you calculate opportunity cost (and Arrow, by the way, worked this all out with Robert Lind and some others), for most financial cost-benefit analysis you should apply positive discounting to dollars. You can debate the exact rate, but there's general agreement on how this should be done: adjust for taxes and other matters.
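The dollar-discounting logic Cowen describes can be sketched as a present-value calculation (a minimal illustration; the 5% rate and the cash flows are hypothetical, chosen only to show how a positive rate can flip a project's sign):

```python
# A dollar arriving in year t is worth 1/(1+r)**t dollars today, because
# today's dollar could be reinvested at the real dollar return r meanwhile.

def present_value(cash_flows, rate):
    """Discount a list of (year, amount) pairs back to year 0 and sum them."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

# Hypothetical project: pay 100 now, receive 60 in year 10 and 60 in year 20.
flows = [(0, -100.0), (10, 60.0), (20, 60.0)]

zero_rate = present_value(flows, 0.0)   # +20: looks worthwhile undiscounted
five_pct = present_value(flows, 0.05)   # negative: the distant payoffs shrink
```

At a 0% rate the project nets +20, but at 5% its net present value turns negative, which is why cost-benefit analysis applies positive discounting to dollar flows even if, like Cowen, one discounts future well-being at zero.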
But again, for well-being if you look at, say, how people are behaving towards climate change, it seems to me it's a mix of selfishness and self-deception and just denial—and then a huge public goods problem piled on top of that. And that's not just, oh, Donald Trump, whatever. Hardly any country is really making any progress in a significant manner. Carbon emissions are still going up. I don't think it's pure selfishness, but when you pile the problems on top of each other, it becomes very difficult to solve, and we act in a way that just looks like pure selfishness. And some of it is just collective action dilemma, other problems. A lot of it's just citizens—it's genuinely hard for them to tell which are the scientists speaking the truth, and so on. And we get stuck.
Audience Member: I guess I have a question. Project us back to a time before the agricultural revolution, and consider the agricultural revolution an important technological advancement, a form of progress. There are two concerns with that. One is that, comparing before and after, population expanded tremendously. If you take moral value to correlate directly with head counts, then that is one variable that changed: head counts changed.
My question would be: suffering increased just as much as potential well-being increased, in the sense of wars, death from natural disasters, and so on. Those quantities changed after the technological revolution of agriculture. And increasingly so: we have new types of epidemics emerging, the superbugs, etc. How do you weigh the increased well-being against the increased suffering, the simple fact of more death, more pain, and also the head counts of those suffering?
Cowen: You probably know Jared Diamond has an article where he says the agricultural revolution was the worst mistake humanity ever made. I've never been convinced by that. I think one simple way of weighing it, not in itself a priori dispositive, is just to ask any of you in this room: how many of you would like to go back in time to the earlier era? I don't think we'd find any takers, actually. Even if you would know the language or have some skills, like how to sharpen a spear or create a fire. I'm not saying demonstrated preference is the only standard, but parents are happy when their kids are born. They don't view it as a tragic event. They don't think, oh, this is terrible; my kid will suffer more than have a meaningful life. Rates of suicide are still relatively low, even though they're going up a bit.
You look at different kinds of choices people make. Also, these day-to-day studies where you register your moods and moments. It seems to me a lot of different kinds of evidence suggest modern lives are pretty good and more of them is better, and they're better than earlier lives in the developing countries. I'm willing to double down on that. I don't even view it as biting a bullet. I don't think you can quite prove it, but when the time machine to go back comes and I ask you all to sign up, I will feel pretty good about that bet I'm making on modern life being a positive essentially.
Audience Member: Thanks for a great talk. My question is a follow-up on how your framework addresses social inequities. It stems from the idea that the history of humanity, and our present situation, tells us that certain groups of people uniquely bear the burden of the poor moral choices of humanity at large, while certain other groups uniquely benefit from economic privileges that come by virtue of those shortcuts and moral choices. So how does your framework address the distribution of the benefits of economic growth, and the distribution of the burden of its moral costs, even within humanity as an entity?
Cowen: Well, here's a way in which, admittedly indirectly, my framework is really quite egalitarian. If you think of the U.S., last quarter we were growing at 1.9%. I don't know any serious economist who thinks we can grow at 5%, no matter what we do. We could, in the super short run, do it with irresponsible stimulus that we would regret, but even getting the U.S. to grow at 3.5% on a sustainable basis would be very, very difficult. But poorer countries in Africa and South Asia, which can do a lot of catch-up, have the potential to grow in a range of, say, 4-10%. You can debate the exact number, but it's above our range.
If you're going to make something a priority as a philanthropist, as a human, as an economist, what should you study? As a donor, where should you send your money? This book, Stubborn Attachments, I donated all of the royalties to a poor individual in rural Ethiopia. He's getting all the money, and I thought I should practice what I preach. That will do more to boost the rate of growth in Ethiopia than if I had taken this money and spent it, my goodness, however I might have wasted it, I don't even want to tell you.
I think that's the implication. Growth maximization. In a world where catch-up happens more quickly than growing at the frontier, it should be a priority to help out poorer places. But you're not doing it because inequality is intrinsically bad. You're doing it because there's more consequentialist bang for the buck with poorer people whose talents and assets are really not being mobilized properly. In that sense, this is extreme egalitarianism, but not from any egalitarian premises. Anyway, thank you all for your questions. I have more to think about.