Jay Bhattacharya on Understanding the COVID-19 Virus
A furious debate is going on around the country about when and how we should reopen the economy. More than 40,000,000 Americans have filed for unemployment benefits, and more than 100,000 small businesses have permanently closed. Forty percent of those making less than $40,000 have lost their jobs.
But what does the scientific evidence actually tell us about the COVID-19 pandemic? And are we making the right tradeoffs between flattening the curve and flattening the economy? To help us answer these questions, Avik Roy is joined by a man who has been at the center of these debates, Dr. Jay Bhattacharya. Jay has four degrees from Stanford, including an MD from Stanford Medical School and a PhD from the Stanford Economics Department.
You may listen to the podcast here:
Below is a lightly edited transcript from the conversation.
Interpreting Infection vs. Fatality Ratios in COVID-19
Avik Roy: How are you? And I don’t just mean that in the conversational way, because you’ve had a very interesting experience over the last couple months as a guy who’s been plying your trade as a well-known and well-regarded health economist and policy thinker. And all of a sudden, you’ve been at the center of this furious debate about COVID-19. Tell us a little bit about that.
Jay Bhattacharya: It’s been a dramatic two months for me personally. I think we’ve done a lot of good scientific work in this period, and it’s just as you would expect in a science—when you have a set of results that surprise people, it’s led to a lot of people commenting on it. Despite the vociferous tone on places like Twitter, it’s actually been fine. It’s science, and I like it. That’s my job, and I don’t mind that part at all.
Personally, it’s been a little bit tough because there have been some newspaper reports going after my family, which I think is sort of below the belt, but I guess that’s what the media environment is like these days. I think it’s worth it. We want good science out there and we want to make good decisions based on good science. So that’s been my focus the last couple of months.
Avik Roy: You’ve mentioned Twitter a couple times, what’s been the environment for you in the academic context? The majority position, in fact, the dominant position in academia is supportive of keeping the lockdowns in place for a prolonged period of time. Very skeptical of research that might point to a different policy takeaway. What’s that been like?
Jay Bhattacharya: It’s been tough in some sense. I think people have very, very strong priors about how deadly the disease actually is. And they form those priors on the basis of very incomplete evidence. I think a lot of the reaction has to do with giving up those priors in the face of new evidence. I mean, I had to change my priors as well, so I can understand; I’m sympathetic a little to that. And I think we’re kind of seeing the process of people changing their priors as they see that the evidence from seroprevalence studies come out from all around the world.
Avik Roy: It’s interesting you mention that, because I was just remarking at a national security forum on that old adage that generals are always fighting the last war. And I feel like when it comes to the public health and epidemiology community, most of the off-the-shelf priors, as you put it, come from our experience with influenza pandemics, not just in academia but also at HHS and in a lot of policymaking roles. We basically took this package of shovel-ready ideas that we developed for influenza pandemics and plopped it onto COVID-19.
Jay Bhattacharya: Yeah. I think there were some mistakes there. Ironically, the idea I’ve been trying to test, that the disease is more widespread than the case counts suggest, came from an after-action report on the H1N1 swine flu epidemic. There was a paper published in 2013 reviewing every single published mortality estimate for H1N1. The first mortality estimates were alarming: 1%, 3%. There was an Argentinian report of something like a 13% case fatality rate. All of those were based on the number of reported cases, or PCR-confirmed cases, and the death rates were really, really high. I remember that because there was a panic over H1N1 back in 2009 around those case reports.
A full year and some later, the serology-based case estimates started to come out, and the number of cases was orders of magnitude higher than had been realized, which meant the infection fatality rates were orders of magnitude lower. So it turns out that, based on the serology for H1N1, the death rate was more like one in 10,000, not one in 100: two orders of magnitude lower.
So that is partly what fed my hypothesis that maybe the same is true here, because almost all of the testing, in fact all of the case report numbers up to the beginning of March, was PCR-based. The fancy Johns Hopkins map and the like count cases that are either PCR-confirmed or very strongly suspected to be COVID for other reasons. That is going to undercount the denominator. If the hypothesis is right, then there are many people walking around with evidence of COVID infection whom we’re not going to count, because they have relatively mild infections and don’t show up to the doctor. That was the hypothesis that led to the series of studies I’ve been working on.
Avik Roy: Yeah. And just to step back for people who are new to this particular topic: the debate you’ve found yourself in the middle of is this question of case fatality ratios versus infection fatality ratios. That is, when COVID-19 first started to appear on the scene, roughly 1 to 10 percent of the people who were diagnosed with COVID-19, depending on the exact demographics of the population, were dying. But what we didn’t know is how many more people were actually infected with the virus but had no symptoms, and never got symptoms, because their bodies cleared the virus without much trouble. And therefore we were overestimating the lethality of the virus, because we weren’t counting those people who were actually infected but never showed any real illness from the infection.
And what a lot of your work has shown—in particular, a survey of Santa Clara County in the Bay Area near San Francisco showed that in fact, that was true; that there was a much larger number of people who were infected who had not come down with severe illness due to the SARS-CoV-2 virus. Tell me if I’ve mis-summarized anything.
Jay Bhattacharya: That’s pretty close. It’s not just asymptomatic, it’s also cases with very, very mild presentations, too. They don’t show up to the doctor because they think they have a cold or something. I think what we learned from these studies—there’s the Santa Clara study, the LA County study, the Major League Baseball studies I’ve worked on. But since then, I think there’ve been 30-plus studies done around the world that basically confirmed the same finding. In Santa Clara, we found that for every person that had been identified with PCR case counts and showed up on the Johns Hopkins map, there were 50 cases of people who hadn’t been identified, that had antibody evidence that suggests they had been previously infected. Fifty.
Avik Roy: Which divides the lethality of the disease by 50.
Jay Bhattacharya: You have to be a little careful, because you have to count exactly when they were infected. You develop antibodies 6–10 days after you get infected, and deaths happen, say, three to four weeks after infection. So you have to be careful about the timing, but that’s about right. We got an infection fatality rate roughly between one in 1,000 and two in 1,000, which is vastly less than the 34 in 1,000 the World Health Organization famously reported at the beginning of the epidemic.
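The arithmetic Jay is describing can be sketched in a few lines. The population, death count, and lag below are illustrative placeholders, not the study’s published inputs:

```python
# Serology-based IFR: deaths (lagged to match when the infections occurred)
# divided by estimated total infections (seroprevalence x population).
# All numbers below are illustrative placeholders.
def infection_fatality_rate(seroprevalence, population, lagged_deaths):
    estimated_infections = seroprevalence * population
    return lagged_deaths / estimated_infections

# e.g. 2.8% prevalence in a county of 1.9 million, with deaths counted
# roughly three to four weeks after the infections occurred
ifr = infection_fatality_rate(0.028, 1_900_000, 100)
print(f"IFR = {ifr:.4f}")  # on the order of one to two in 1,000
```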
Avik Roy: And I tremble to say this, but one per 1000 is about the fatality rate of the infection from the flu.
Jay Bhattacharya: Actually, you know what, I thought that before the epidemic started, but I don’t actually know what the fatality rate for the flu is either, because those numbers aren’t based on serology either. It could well be lower: based on the serology numbers for H1N1, it’s something like one in 10,000 for the H1N1 flu, which was supposedly the more deadly flu. I’m now baffled as to why these serology studies are not the gold standard for understanding the infection fatality rate.
Avik Roy: Well, I think it probably will be going forward because of this whole debate.
Jay Bhattacharya: I think some folks have adopted it, put it into their models and their thinking, but it’s been puzzling why a greater fraction of the public health community hasn’t attached itself to this kind of evidence. It seems like much better evidence than the sort of guesswork we make about the denominators all the time.
Avik Roy: Do you remember what the date was when you published that first Santa Clara serology study? Because I remember it was all over the news, a lot of interest in it. When did it come out? What was the actual date?
Jay Bhattacharya: I don’t remember the specific date, but I think it was the second or third week of April.
Avik Roy: So it was around mid-April, actually. Now I remember, because we wrote about it in our FREOPP plan to reopen the economy. We published our plan on April 14, and I think your analysis came out right around then, maybe even the day after. And the reason I ask is because you published that report on medRxiv, which is itself interesting; we could have a whole podcast on the way academic research is now being sped to the public domain in preprint format.
You published this survey with a number of your colleagues, got a lot of attention; and then came this kind of counter reaction from a lot of corners. I’m just going to quote Andrew Gelman from Columbia, who was quoted in the San Jose Mercury News as saying, “I think the authors of the Santa Clara Paper owe us all an apology. We wasted time and effort discussing this paper, whose main selling point was some numbers that were essentially the product of a statistical error.” How would you respond to Professor Gelman, Director of the Applied Statistics Center at Columbia?
Jay Bhattacharya: He’s walked that back, and I forgive him for that. It’s not a problem; scientists are people too. And Andrew has actually been somewhat constructive after he calmed down. I’m okay with him. Trying to be constructive about it, I heard two critiques in all of that. One is that we used this Facebook sampling scheme to identify our population, and therefore our population includes a lot of people who thought they had COVID, so we overestimated prevalence. That’s actually false. It’s true we used the Facebook scheme, but I think we actually underestimated prevalence.
Avik Roy: You used a targeted ad that you sent to people in Santa Clara County, presumably who you wanted to target—people who were residents of Santa Clara County. Did you try to target them demographically as well to get a demographically representative population? Or how did you think about that when you did the initial—
Jay Bhattacharya: Yeah. When we did the ad, we knew that we were going to have some trouble. The reason we used this technique is that it’s difficult in a lockdown to do random sampling, and we wanted a population that we could access rapidly. Our goal was essentially to get people evenly distributed across the various zip codes, and we kept very careful track of the zip codes of people who were signing up. The ad went viral in the richer zip codes, so we ended up with an oversample of relatively healthy people from those zip codes. And so we reweighted so that the demographics matched Santa Clara County more closely. Even with the reweighting, I think that probably led us to underestimate the prevalence.
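The reweighting Jay describes is a form of post-stratification. A minimal sketch, with hypothetical strata, counts, and population shares (the real study used finer strata than this):

```python
# Post-stratification sketch: weight each stratum's raw positive rate by
# its share of the county population, not its share of the sample.
# Strata, counts, and shares below are hypothetical.
def poststratified_prevalence(sample_counts, population_shares):
    total = 0.0
    for stratum, (positives, n) in sample_counts.items():
        total += population_shares[stratum] * positives / n
    return total

sample_counts = {               # (positives, number tested) per stratum
    "richer_zips": (10, 1000),  # oversampled, lower raw positive rate
    "other_zips": (12, 300),    # undersampled, higher raw positive rate
}
population_shares = {"richer_zips": 0.4, "other_zips": 0.6}

prev = poststratified_prevalence(sample_counts, population_shares)
print(f"weighted prevalence = {prev:.3f}")  # higher than the raw 22/1300
```

In this toy example the oversampled richer zip codes drag the raw rate down, so the weighted estimate comes out above the unweighted one, which is the direction of bias Jay describes.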
There are three kinds of sampling schemes I’ve seen in these studies now, and as I said, there are something like 30 of them. One is what we did. The second is to go to grocery stores; I think that’s what they did in New York. And the third is what we did in LA County: we hired a marketing firm with a panel that is representative of LA County, and reached out to that panel. All these schemes produce very similar multipliers and IFRs (infection fatality rates), so I don’t think the scheme explains the result. And if anything, as I said, I think the selection bias… I did a serology study of the Major League Baseball employee population as well, again with a different sampling scheme.
I mean, we’re producing results that look similar to one another. Broadly speaking, I don’t think we overestimated in Santa Clara. LA County had higher prevalence.
Avik Roy: So in terms of that criticism, you mentioned there were two criticisms, and this was one of them. Was there anything from that, let’s call it informal peer review process that you did take to heart and thought was a useful way to improve the work?
Jay Bhattacharya: The second one was, again, I think false but statistically beguiling. The idea was that we were using a test kit that has errors. Every medical test has errors; the key questions are what those errors are and, in epidemiologic work, how you account for them. Broadly speaking, there are two kinds of errors when using a test kit for antibodies: false positives and false negatives. A false positive means that you’re truly antibody-negative, and yet the kit reports you as positive. That could happen, for instance, if the kit is picking up antibodies to other viruses that it’s not supposed to pick up.
It turns out that our kit is incredibly good at not picking up those other viruses. If you’re negative, 99.5% of the time the kit will show that you’re negative. Now, how do we know that? The manufacturer, and other labs around the world, have taken negative samples drawn from before COVID was around, 2018 and earlier, and tested them. When we did the Santa Clara study, we had about 400 negative-sample reports: literally 401 samples, of which 399 had tested negative. Pretty accurate, right? But if you take the left edge of the confidence band around that specificity and multiply it out over our sample, you get around 60 false positives, and we only had 50-some positives among the 3,300 people we tested.
So people said: okay, if the test is really as bad as the left edge of that band, then every single positive could be a false positive. This is misleading, because you also have to account for the false negatives. A false negative means that you’re truly antibody-positive, but the test kit shows you as negative. At the time we did the study, the validation samples we had for the test kit, I think 60-some validations, I forget the exact number, showed it was about 80% accurate. So 80% of the time, if you’re positive, it’ll show up as positive; 20% of the time it won’t. You have to make an adjustment for the false negatives and false positives, as well as the sampling variation in the study.
When you do that, you get a confidence bound that’s correct, and it does not include zero. The mistake people made was to entertain only the false positives. You have to account for all of the variation, in the false positive rate, the false negative rate, and the sampling, to get a standard error, or a confidence bound, around the prevalence estimate. And when you do that, it does not include zero. We could exclude zero as a possibility.
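The point-estimate version of the adjustment Jay describes is the standard Rogan-Gladen correction; the full study also propagated the uncertainty in sensitivity, specificity, and sampling jointly. A sketch with illustrative rates:

```python
# Rogan-Gladen point estimate: the raw positive rate mixes true positives
# (prev * sensitivity) with false positives ((1 - prev) * (1 - specificity));
# solving that identity for prev gives the adjusted prevalence.
# The rates below are illustrative, not the study's final published figures.
def adjusted_prevalence(raw_rate, sensitivity, specificity):
    return (raw_rate + specificity - 1.0) / (sensitivity + specificity - 1.0)

raw = 50 / 3300  # ~1.5% raw positive rate, roughly the figures discussed
prev = adjusted_prevalence(raw, sensitivity=0.80, specificity=0.995)
print(f"adjusted prevalence = {prev:.4f}")
```

Plugging the left edge of the specificity band in place of 0.995 is exactly the move the critics made; the argument above is that sensitivity, specificity, and sampling variation have to be propagated together before concluding the interval reaches zero.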
A lot of people made that mistake on Twitter and elsewhere. But if you think about it, if that’s right, if you truly entertain the idea that the lower bound could include zero, then you’re saying this is not a very contagious disease at all. That’s just not plausible. I think that was the error being made, but you can see—
Avik Roy: In the sense that if almost no one had the disease, because there were only a few true positives, then the disease wasn’t actually that infectious, because you would otherwise expect a higher rate of infection.
Jay Bhattacharya: Even if you accept that critique that they were making, you’re left in a box. Either you have a disease that’s incredibly deadly but not infectious at all, or you have a disease, like we said, which is much more widespread than people thought, but not quite as deadly as people thought. Those are the two. You can’t, based on our evidence, conclude that we have an incredibly deadly, incredibly infectious disease. That’s not possible, not consistent with the data.
Avik Roy: One thing I could imagine that’s a different form of catch-22 is, well, you have a false negative rate, you have a false positive rate, and only about 2% of the population is infected. So, what can we learn from that anyway, because any time there’s a very small percentage of the population that’s infected, those tiny little errors in terms of false positive and false negative, it’s very difficult to draw any conclusions from that, which is—
Jay Bhattacharya: That’s essentially what those people were saying, right? They were saying we should include zero prevalence in our error bound. It turns out that’s not true when you do the numbers, but that’s fine; you can see why it’s superficially plausible.
Since then, and this is the constructive thing the Twitter war did, it led a lot of folks who had been validating the test kit from around the world—
Avik Roy: Remind me the name of the company that you used for the test kit?
Jay Bhattacharya: Premier Biotech. This Twitter war partly led them, and others around the world, to evaluate the test kit further. And we got more than 3,000 samples that confirm the 99.5% specificity number we used; a false positive rate of 0.5% is exactly what was found in those 3,000-plus samples. And because there are 3,000 samples, we know that number much more precisely, so the confidence bound around it is tighter. If you go to the left edge, you no longer get 60 false positives; you get far fewer. Once you incorporate that improved precision in the specificity, you get an error bound that’s even tighter. It changes the total estimate slightly as well, because the false positive rate matters, but so does the false negative rate. We basically have the same number: about 2.8% prevalence.
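The effect Jay describes, a larger validation sample pulling in the left edge of the specificity band, can be illustrated with a simple normal-approximation interval. The counts are illustrative (399/401 echoes the original validation set; 2,985/3,000 stands in for the expanded one), and exact binomial intervals behave similarly:

```python
import math

# Lower 95% bound on specificity via a normal approximation to the
# binomial. More validation samples at the same ~99.5% point estimate
# shrink the standard error and raise the left edge of the band.
def spec_lower_bound(correct_negatives, n, z=1.96):
    p = correct_negatives / n
    return p - z * math.sqrt(p * (1 - p) / n)

small = spec_lower_bound(399, 401)    # original-sized validation set
large = spec_lower_bound(2985, 3000)  # expanded validation set

# Worst-case implied false positives among ~3,300 tests shrink as the
# left edge rises.
print((1 - small) * 3300, (1 - large) * 3300)
```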
Avik Roy: In other words, so the takeaway was similar when you did that recut of the data in terms of the ratio of infections to cases. Is that fair to say?
Jay Bhattacharya: The infection fatality rate is between 0.1% and 0.2%, just like in version one of the report. And the multiplier, the number of people walking around with the disease who don’t show up in the case counts, is roughly 50 to one. Qualitatively, the results are exactly the same; we just know them with much more precision than we did after version one.
My takeaway on this is: science worked. It’s not the most fun process to go through, and I’d have preferred to get all these criticisms at once, but it actually worked. I think the open science model actually worked here. It made the paper better, and we know the result with more confidence than we did when we released version one.
Avik Roy: It’s interesting, because in the public policy world this is what we do all the time. Some people in the public policy world publish in academic journals, but people like myself, who publish in, say, Forbes or at FREOPP.org or the New York Times or wherever, have our analyses critiqued in real time. That’s something we’ve just gotten used to; it’s been true my entire time in this world, the 10 years I’ve been doing this. Now it’s interesting that that has come to academia. But as I said, that’s almost a separate conversation.
Jay Bhattacharya: I think it’s a good thing. I think we ought to be more open. If I have to name a negative, it’s that the process has exposed scientists as human beings in some sense. They have the same huge, overwrought reactions initially, but they’re, I think—
Avik Roy: The amount of vetting that your study got, relative to the amount of vetting that your typical random policy article in the New England Journal of Medicine gets, is interesting to measure. Let’s just put it that way.
Jay Bhattacharya: I assume I’ll never have a study that gets that kind of scrutiny ever again. You know what, like I said, I think it was constructive and actually fun.
COVID-19 in New York
Avik Roy: Let me ask you another statistical question—and I don’t want to put the listeners to sleep with statistical questions—but I do have this other question to ask about this because it’s important. This issue of the ratio of the number of cases that we’re seeing to the actual number of people who were infected really has a big policy takeaway for how we deal with this whole situation, this whole pandemic, and we’re going to get more into that in a minute.
But I wanted to ask you about New York. In New York, and Manhattan in particular, according to Governor Cuomo, about a quarter of the population is positive for antibodies to SARS-CoV-2. So presumably a lot of the statistical challenges you faced with false negatives and false positives in California would be less of a problem in New York. Did they also use a Premier Biotech test? How should we think about the New York analysis versus what you did?
Jay Bhattacharya: I haven’t seen a paper that goes through their methods, so I don’t know the answer to that. I don’t think they’ve released one as yet; in fact, that’s true for most of the other serology studies. I wish they would, because it would make people feel better about the results being reported. If they’re using a test with low sensitivity and didn’t adjust for it, the number could be an underestimate. If their false positive rate is higher than the 0.5% rate we have, they could be over-reporting. It’s hard to say. I assume they’re correcting for it, but they haven’t released a report, so I can’t comment one way or the other on the test kits they’re using.
But the numbers themselves are striking: 21–25% of the population infected, with an infection fatality rate of about 5 in 1,000. That’s higher than the rest of the country.
Avik Roy: Right, yeah.
Jay Bhattacharya: Actually, that brings up a very important point. The infection fatality rate is not a biological constant. It’s not a single number; it’s a function of the virus itself, of course, but also of the host and of the healthcare system managing the patients.
Avik Roy: It can vary by pre-existing conditions, by your age, by how well the disease is managed by your doctor, by the hospital, all that stuff.
Jay Bhattacharya: I think the variation we see in the infection fatality rate around the world is to be expected. You see a higher IFR in Italy; the population infected in Italy is older than in much of the rest of the world. In Italy, I think almost a quarter of the population is elderly, compared to a substantially lower share in the US, so you would expect a higher accrued IFR in Italy than in other parts of the world. Similarly, Spain: I think they just reported a serology study with an IFR of almost one in 100, which is stunning. But again, they’re an older population, so you would expect a higher IFR.
I think the other thing is that the management of patients is really important. In New York City, for instance, we’ve seen reports of people being sent back to nursing homes that are not equipped to cope with large numbers of COVID-positive patients, and so the disease spread there. Things like that are going to be very important to understand. Just as we do risk-adjusted mortality after cardiac bypass in health policy, we need to do risk-adjusted IFRs so that we can understand what the contributors to mortality from COVID really are.
Avik Roy: It’s interesting you bring that up; this goes to the core of your entire academic career, measuring this kind of stuff. I’ve been reading some papers recently asking whether we’ve been doing the right thing by putting these patients on ventilators. One of the big concerns early on was, “oh, gosh, we don’t have enough ventilators.” And now there appears to be at least some evidence that ventilators are making these patients worse off; that mortality rates for people on ventilators are higher than for people who weren’t ventilated or intubated. Tell me about that. What’s your assessment of how we’re managing these patients, and what are you seeing in the emerging evidence?
Jay Bhattacharya: That ventilator hypothesis is very interesting. I don’t know yet what to make of it, because I haven’t seen any comprehensive analysis of the cases. I have seen numbers suggesting that if you’re on a ventilator, it’s very likely the way you’re going to get off it is by dying. So I think there’s at least some plausibility to the hypothesis; I’d like to see it more rigorously checked. We shouldn’t be surprised that we’re learning a lot about how to manage the disease. It’s a new disease, and we’re only a few months into this crisis. Our initial instincts about how to manage it clinically are almost certainly going to be modified very sharply over time as we learn more.
That’s another thing: the IFR is likely to come down as we learn more. Even if an effective therapeutic isn’t found immediately, better management of patients will reduce the IFR over time. I think we’re starting to see that, if I’m not mistaken. But again, the data need to be collected and analyzed to really answer that well.
Learnings from Iceland
Avik Roy: We know about this interesting distribution—and tragic for those who are affected—of the disease, the illness and the hospitalization and death towards the elderly. We know that people who are obese, people with diabetes, people with cardiovascular conditions are at greater risk. Are there other aspects of the risk distribution that you’re seeing emerging in the literature that are interesting that aren’t as widely known?
Jay Bhattacharya: There was a New England Journal piece that was fascinating, a report out of Iceland, from a company called deCODE genetics, I think.
Avik Roy: Yeah, deCODE, I know them well.
Jay Bhattacharya: I hadn’t heard of them before, but they seem like fantastically great scientists. I think 12% of the Icelandic population was PCR-tested. They took every single positive case, sequenced the genome of the virus in every single positive case, and traced mutations in the genomes. So they could say, okay, most of the cases arose from people who had been visiting the UK, or ski resorts in Austria, or something; they can tell because the mutation patterns matched genomes sampled in the UK. And they could also make some guesses about how people spread the disease based on the mutation patterns of the viruses. And what they concluded was that there was not one single case in the data of a child passing the virus to a parent. Not one.
Avik Roy: Which is incredible, because that’s not what we assume about any other infectious disease. We all get sick because our kids give us all the diseases, right?
Jay Bhattacharya: Yeah, but not this one apparently.
Avik Roy: Not this one.
Jay Bhattacharya: Yeah, if that’s true, that has very, very big consequences for how we think about policy around this. Essentially, children are a separate component.
Avik Roy: And when you say children in their study, what age ranges were they looking at? How did they define it, under 18 or under five or what are we talking about?
Jay Bhattacharya: I think it was under 18, but I’d have to go look at it.
Avik Roy: Yeah, it’s okay. I’ll look at it myself. Actually, just as a side note, deCODE Genetics—really interesting company. I looked at them back in my investment career days; they are an Icelandic company that, because of certain quirks in the Icelandic healthcare system, were able to sequence the genomes of basically everyone in Iceland. And because of the sagas, these old ancestral tales about the Icelandic people, and because they didn’t have a lot of immigration in Iceland, basically everyone is loosely related to each other.
And so they were able to build these family trees going back to the ninth century AD, where they could trace the familial inheritance of all sorts of diseases and compare that with genomic and genetic data. They were trying to use that platform to develop drugs and treatments. They didn’t succeed at that, but eventually Amgen, the Southern California-based biotech giant, bought them; now they’re a subsidiary of Amgen, still doing interesting scientific work.
Jay Bhattacharya: I was struck by this paper. I didn’t even realize it would be possible to do that kind of sequencing on such a scale so quickly.
Avik Roy: Yeah, yeah. They already have the database, basically, which is what I think helps them. Not the SARS-CoV-2 genomes, obviously, but the genetic database of the Icelandic population, who’s related to whom and all that. So I’m sure that work will continue to be very interesting to follow.
Okay, so you’re right: if it turns out that children cannot or do not infect their parents, and presumably other adults as well (because if you’re living with your parents and not infecting them, you’re probably not going to infect other adults), that matters for policy. Now, if you were just an epidemiologist, maybe I wouldn’t ask you this, but you’re a guy who thinks a lot about health economics and health policy. In particular, I think an area that might be fruitful to spend time on with you is: in your preliminary assessment, how has the lockdown affected public health? This has been an area of increasing conversation, as you know.
Public Health Trade-offs with Lockdowns
Avik Roy: Based on the experience you have, the research you’ve done, and what others have done, what’s your assessment of the things we should be concerned about when it comes to trade-offs, and I mean specifically the public health components of the trade-off? As more people become unemployed, and as fewer people go to the doctor, what are the elements of illness, death, and declining public health that we should be worried about, that we should be seeing right now and are seeing?
Jay Bhattacharya: I think the lockdowns are an utter disaster for public health, a complete and utter disaster. We’ll start domestically and then move around the world. Domestically, in the Great Recession, we saw the phenomenon that Anne Case and Angus Deaton highlighted of deaths of despair: deaths from suicide, deaths from depression leading to suicide. I think the opioid epidemic is connected to this. That was with a recession that was deep, but nowhere near as deep as the one we’re about to experience, in fact the one we’re already in. I expect a redux of that. Thirty percent unemployment is absolutely stunning. We’re going to see a lot of those pathologies come back.
You mentioned delayed doctor visits. That’s going to be absolutely enormous. I fully expect a measles epidemic next year from those delays in vaccinations. Probably a lot of kids won’t get vaccinated at all, because their parents are too afraid to take them in because of COVID. I fully expect cancer mortality to start to rise again. There are all these stories, and I think they’re borne out in the data, of people delaying chemotherapy, radiation therapy, or other cancer treatment, or avoiding it altogether, because they’re afraid of COVID. So people with cancer will die at higher rates. But people are also skipping screening for cancer: lower rates of colorectal cancer screening, lower rates of breast cancer screening, lower rates of prostate cancer screening. I think we’re going to see a rise in cancer mortality across the board.
What’s important about that is that it won’t manifest itself immediately; it’ll manifest over the course of years. Over the last decade, I think, we finally started to see some reductions in cancer-related mortality rates in the United States. I think that’s going to be reversed.
Avik Roy: You highlight, Jay, an interesting element of this, which is, one of the great challenges of science in general, but particularly the social sciences, that the things that we measure are not the same things as reality. And it’s easy for us to measure right now, okay, somebody died of COVID-19. We can put that on a chart. But the person who dies five years from now because they didn’t get a mammogram this year, that’s a harder thing to tease out, that’s a harder thing to measure. Are you confident that we have the tools necessary to have some sense of how to measure those costs?
Jay Bhattacharya: I fully anticipate that my fellow health policy wonks will make a career over the next five years of using this natural experiment, if you will, of COVID to isolate those effects. I have a lot of confidence in my peers on this. Yes, I think we can measure it. Actually, it’s interesting, can I go back to something you said earlier in the conversation about how we should deal with the next pandemic? I said we should put serology at the center of it. In addition to that, we should put the other side, the cost of these policies, at the center as well. Why is it that we have these fantastic SIR models of COVID deaths all over the place?
Avik Roy: Can you tell people what an SIR model is?
Jay Bhattacharya: These are standard epidemiologic models for forecasting epidemics. They assume that people randomly run into each other and spread the disease. If you have a lot of infected people running into a lot of susceptible people, you get a lot of infections, and then people die or recover. S is for susceptible, I is for infected, and R is for recovered. There are other variations on it, but that’s the basic—
Avik Roy: The basic tool of these models that everyone’s been talking about in the press.
Jay Bhattacharya: Yeah, the Imperial College model, for instance, would be an example of this. These models carry a great amount of uncertainty, and yet we’re relying on them for all kinds of decision making, especially early in the epidemic and, I think, still today. Meanwhile, the criticism on the other side is that we don’t have accurate models of the long-term effects of the lockdowns, and therefore we shouldn’t model them at all. I’d rather have one noisy model put up against another noisy model, so we have a full picture of the long-term effects of the policy, than one noisy model put up against nothing.
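A minimal sketch of the SIR dynamics Jay describes, in Python; the parameter values here are illustrative assumptions for demonstration, not COVID-19 estimates:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
    """One Euler step of the SIR model, with s, i, r as population fractions.

    beta is the transmission rate and gamma the recovery rate; both
    values are illustrative assumptions, not COVID-19 estimates.
    """
    new_infections = beta * s * i * dt  # susceptible -> infected
    new_recoveries = gamma * i * dt     # infected -> recovered
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def simulate(days=300, i0=1e-4):
    """Run the epidemic forward and return the (s, i, r) trajectory."""
    s, i, r = 1.0 - i0, i0, 0.0
    trajectory = [(s, i, r)]
    for _ in range(days):
        s, i, r = sir_step(s, i, r)
        trajectory.append((s, i, r))
    return trajectory
```

With these parameters the basic reproduction number is beta/gamma = 3: infections rise, peak, and burn out as the susceptible pool is depleted, which is the qualitative behavior the forecasting models Jay mentions build on.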
Avik Roy: Well, that’s what we do with fiscal policy, we have the Congressional Budget Office, and they’re the only ones who have a model that anybody cares about; nobody else’s model matters.
Jay Bhattacharya: CBO tries, I mean, I’m—
Avik Roy: They try hard. Yes, I know. Yeah.
Jay Bhattacharya: I have some buddies who work for them. Look, I think we’ve got to do the best we can. The future is always uncertain. The question is what we put in front of us when making decisions about the future. In this case, we put in front of us noisy models of COVID deaths and morbidity, but we have no models whatsoever of the effects of the lockdown policy on long-term health outcomes. That is not right. It treats the lives of the people who are going to die from cancer, from depression, from suicide, and so on as if they don’t matter at all in our policymaking.
Avik Roy: I was really struck by the hearing where Dr. Fauci spoke in the Senate a couple of days ago. Rand Paul was challenging Dr. Fauci, who had said that if we open the economy prematurely, that’s going to lead to death. And Rand Paul said, look, I have great respect for you, Dr. Fauci, but there are a bunch of experts on the other side talking about the costs of the lockdown; shouldn’t we take their assessments into account as well? And Dr. Fauci basically said, I can’t speak to those economic costs; I’m focused purely on the evidence here. What’s your take on that? Dr. Fauci is making pretty strong pronouncements about what he thinks we should do from a policy standpoint, but he’s openly saying, don’t talk to me about the economics, that’s not my field.
Jay Bhattacharya: Yeah. Dr. Fauci is a great scientist, and we should respect his opinion, but at least he’s being honest: he doesn’t know about the other side, the deaths that will be directly attributable to the lockdown, and he doesn’t represent those people. I think it is immoral for policy to be made without someone on the other side representing the people who will die as a result of this policy. We absolutely need a counterpart to Dr. Fauci sitting in the room saying, fine, you have these COVID deaths, but if you do the lockdowns, here are the deaths you’re also going to have over time. That is absolutely vital, and it has been missing, for reasons I do not understand, from the national discussion.
Avik Roy: One thing that’s interesting, and you’ve mentioned this a couple of times, is the lives-versus-lives point. The lockdown is going to cost lives too, in terms of deaths of despair and people not seeking treatment. But take that to the next ethical level. When it comes to traffic accidents, we accept a certain level every year to keep the roads open. Thirty thousand people die in traffic accidents every year. We could shut down the roads and stop traffic fatalities, but we don’t, because we accept that there’s an economic value to keeping the roads open.
So, one thing I’m wondering about is, at what point does that become part of the conversation too: not merely deaths of despair but lives of despair? I understand the argument out there in the COVID-19 context: “All you economists, all you people who want the economy to open up, you just care about money, you don’t care about people.” I totally get that rhetorically, and it is absolutely important to assess the lives being lost or damaged by the lockdown. But I’m particularly interested in the damage piece. How do you think about the point at which lives of despair matter as much as deaths of despair?
Jay Bhattacharya: That absolutely matters. And as you know, there’s a huge infrastructure in economics for trying to price some of that. Even in the example you used, traffic deaths, there is a literature on how much we should spend per life saved per year. That is important when we’re talking about policymaking, absolutely. But in this context you don’t really even need it. The lives lost from the COVID lockdowns are enormous, and they should be counted. Even if you underestimate the total harm from the lockdowns by focusing on the lives alone, you’ll make better decisions than if you ignore those lives entirely.
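The pricing literature Jay alludes to works with the “value of a statistical life” (VSL). A toy back-of-the-envelope calculation for the traffic example; the $10 million figure is an assumption in the ballpark of US regulatory benchmarks, not a number from this conversation:

```python
# Hypothetical illustration of the VSL framework. The $10M value is an
# assumed benchmark, not a figure from the interview.
VSL = 10_000_000           # assumed dollars per statistical life
traffic_deaths = 30_000    # annual US traffic fatalities cited above

# Implied annual cost, in dollars, of accepting this level of road risk
implied_cost = VSL * traffic_deaths
print(f"${implied_cost / 1e9:.0f} billion")  # prints "$300 billion"
```

The point of the framework is comparability: once both the deaths a policy prevents and the deaths it causes are priced in common units, the trade-off Jay describes can at least be stated explicitly.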
We didn’t get a chance yet to talk about the global health consequences that are coming: the resurgence of diseases like tuberculosis and malaria, parasitic infections that were under control, and huge numbers of people, kids, starving around the world as a consequence of the global economic collapse we’re about to face, in large part because of the lockdowns.
Now, in a context like this, where those kinds of direct health effects are utterly evident, I’d rather not sit here arguing about the dollars. If your point is that focusing only on lives underestimates the total harm, I’m happy to accept that. But even that underestimate is absolutely colossal, and we’re not counting it.
Avik Roy: It’s interesting, some researchers at UNICEF just put out a paper in The Lancet suggesting that the worst-case scenario would be a 45% increase in child mortality, with 1.2 million children dying around the world, primarily as a result of policy interventions that lead to starvation, lack of healthcare, more poverty, and so on. A really startling estimate. Although, if you think about the world having roughly eight billion people, it’s actually not that big a number.
Let me ask you this, since you brought up the international picture. A lot of us have been pointing to Sweden as a model of a country that was gentler in its lockdown, and therefore has avoided a lot of these economic harms, with a manageable public health outcome. Is that the right country to look at? When you look around the world, who do you think is doing a good job of managing those trade-offs between the economy and direct COVID infection?
Jay Bhattacharya: I’m grateful that Sweden picked the policy it did, because we’ll learn a lot about what the lockdowns did from the comparison between Sweden and, say, Norway. In fact, in the United States we now have our own Swedens. Essentially half the country, or more, has started to lift its lockdowns, and half has kept them in place. So we’re going to have these comparisons to make all over the place. Like I said, I have full confidence in my fellow health economists to go to town on these little natural experiments.
I think it’s going to be interesting. If I had to pick a policy, I think Sweden’s was the right one. Their policy wasn’t to do no lockdown at all: people voluntarily self-distanced, large gatherings were shut down, and they did all kinds of things that fell short of shutting the economy down altogether. In the short run they’ve absolutely had higher COVID death rates than Norway. But the question isn’t the short run; it’s the long run. We’ll learn from Sweden what the long-run consequences of a much more relaxed lockdown policy are. And if my hypothesis is right, they’re going to come out better in the long run, in lives, than their neighbors that adopted draconian lockdown policies.
It’s a little complicated, because the economy is a global economy. The shutdown orders worldwide affect Sweden, India, Africa, and the United States alike. So there are massive external effects we’ll have to think about when doing the analysis; these aren’t perfectly clean natural experiments. Still, I think we’re going to learn a lot. The reason I like the Swedish policy better is that they clearly thought about the long-run effects; they counted some of the lives on the other side of the policy. Did they make the right decision? It’s hard to say in the midst of the battle. We’ll learn soon; I don’t know how soon.
Avik Roy: You mentioned some of what we might call emerging-market or third-world countries. Are there particular ones you’re concerned about, whether because of the lockdowns or because the pandemic itself is having a major impact that we’re not yet observing or looking at closely enough?
Jay Bhattacharya: Even in places where COVID has not yet affected the population in any directly measurable way, the poor, in every country around the world but certainly in every poor country, are going to bear the brunt of the cost of this lockdown policy. They’re the ones who are going to starve, who are going to suffer from diseases we thought we had addressed, and who are going to die in large numbers as a result of this lockdown.
Avik Roy: My last question for you is, if you had told me at the beginning of this crisis that Stanford would emerge as the hotbed of contrarian thinking on economic lockdowns, I would have been pretty surprised. I wouldn’t have been surprised at some of the specific people like yourself who were involved in it, but it’s been very interesting that Stanford has become this place where in various different ways, a lot of this contrarian thinking has emerged. Is it something in the water? Is it a coincidence? What’s going on over there?
Jay Bhattacharya: I don’t know. There’s a lot of my colleagues that don’t agree with me, and that’s okay.
Avik Roy: Well, thanks for what you’re doing, and I hope we can have you back on when there’s more to discuss. What are you working on right now? You’ve published a lot recently on this topic, and I’m sure you have more in the pipeline. What are the interesting things you’re trying to investigate, and what are you looking at in other people’s pipelines whose results you’re anxiously awaiting?
Jay Bhattacharya: So, a couple of things. I’m working with a group that’s trying to get a seroprevalence study started in Mumbai, and I’ve been learning about the regulatory environment in India and the difficulties in getting studies like this going. I’m hopeful we’ll get something done, but it won’t be fast, I don’t think. The other thing I’ve been focused on is trying to get some of the modeling groups to incorporate the seroprevalence numbers into their estimates and see what a difference it makes to their projections. I suspect it’ll make a huge one. In fact, I’m working with a group here at Stanford that’s doing good work on that. We’ll see what we find.
The other thing is that we’re going to keep doing seroprevalence studies in LA, but as a panel study: we’ll go back to the same set of people. The idea is partly to track the spread of the epidemic, of course, but also to see whether people get reinfected and how long immunity lasts. We’ll start to address questions like that once we have a panel up and running.
Avik Roy: That’ll be very interesting. Great. Well, listen, Jay, thank you so much for joining us, giving us an hour of your time. It’s been incredibly enlightening.
Jay Bhattacharya: Thank you, Avik. It was really fun to talk with you.
Avik Roy: This has been another episode of American Wonk. I’m your host, Avik Roy, and we’ve been joined by Jay Bhattacharya, professor of medicine at the Stanford University School of Medicine. If you like what we’re doing on the show, please subscribe to American Wonk on Apple Podcasts or your favorite podcast service, and better yet, rate us there; ratings help spread the word about what we’re doing so even more people can learn from great guests like Jay. Thank you for listening, and we’ll see you next time.