White Paper

Why Nuclear Power is the Path to Low-Carbon Energy: Part 2

Instead of closing nuclear plants, as California and New York are doing, we should invest in next-generation nuclear technologies.
November 23, 2020

(This piece is the second part of an edited excerpt from Robert Bryce’s 2020 book, A Question of Power: Electricity and the Wealth of Nations. The first part can be found here.)

Nearly all of the machinery that generates electricity at the Indian Point Energy Center in Buchanan, New York, is hidden from view. Sure, the domed reactor buildings can be seen from miles away by boats traveling on the Hudson River, as well as from other vantage points. But the reactor buildings — where small amounts of uranium are fissioned to produce the heat that drives the generators — are strictly off-limits to visitors. The turbine generators — the bladed machines, spun by steam from the reactors, that convert that heat into electricity — are shrouded by big metal covers. Nevertheless, visitors lucky enough to get inside the turbine hall of the Unit 2 reactor can glimpse the guts of the massive machine.

Near the middle of the aircraft-hangar-size turbine hall, the shrouds, pipes, and safety barriers have been stripped away to expose a short segment of the solid-steel generator shaft. A light coating of oil gives it a faint glimmer as it spins at about 1,800 rpm. The shaft, which is perhaps six inches in diameter, doesn’t make any noise. Instead, its spinning registers as a faint hum and vibration that can be felt throughout the turbine hall.

The 200-foot-long drive shaft is the steel spinal cord that allows the 3,200 megawatts of heat energy produced by the reactor to be converted into 1,000 megawatts of electric energy. That electricity drives much of the Big Apple. By itself, that single shaft energizes fully one-eighth of New York City and its 8.6 million residents. It’s rather astounding to consider. Over 1 million New Yorkers — as well as one out of every eight lights, elevators, subway cars, and electric teapots in the five boroughs — depend on the electricity that is being generated thanks to that one finely balanced piece of spinning steel.

Since the time of Edison, the business of producing electricity on a large scale has depended on machines similar to the one in the turbine hall at Unit 2. The objective has remained the same: turn a drive shaft and spin copper coils inside a clutch of magnets to produce electric current. And when it comes to spinning a drive shaft, Unit 2 and Unit 3 have few peers. The two reactors at Indian Point are among the largest engines in North America. Nuclear opponents like to crack wise that nuclear energy is just an expensive way of boiling water. Maybe so, but the amount of boiling water that can be produced by splitting uranium atoms staggers the imagination. The two reactors produce so much heat that each of them requires 600,000 gallons of Hudson River water to flow through its heat exchangers every minute to keep the power plant’s equipment operating at the proper temperature.

During my career in journalism, I’ve visited factories, mines, refineries, and numerous power plants. Indian Point leaves nearly all of them in the shade. It is a marvel of engineering, architecture, and ingenuity that should be appreciated alongside other iconic American landmarks like Hoover Dam, the Gateway Arch in St. Louis, or the Washington Monument. Alas, it is not. Instead, Indian Point is treated as a relic from another age.

Indian Point was launched when government and companies were thinking big. In the wake of the New Deal and World War II, the federal government (and state governments) took on big public works projects, like the interstate highways, bridges, and big hydropower dams. During the 1930s, ’40s, ’50s, and ’60s, the US built lots of dams — structures that architecture critic Lewis Mumford would later call “democratic pyramids.” Work on Indian Point began in 1956, the same year President Dwight Eisenhower launched the interstate highway system. (Two years earlier, in 1954, work began on the first commercial nuclear reactor in the US, at Shippingport, Pennsylvania.)

Indian Point equals or surpasses any of the great dams ever built in America. But unlike the sprawling reservoirs that are impounded by those dams, Indian Point represents the apogee of densification.¹ The twin reactors perfectly illustrate what may be nuclear energy’s single greatest virtue: its unsurpassed power density, which, in turn, allows us to spare land for nature.

To illustrate that point, let’s compare the footprint of Indian Point with the footprint needed to accommodate renewables. Indian Point covers 239 acres (about 1 square kilometer), or less than 0.4 square miles. From that small footprint, Indian Point reliably pumps out about 16.4 terawatt-hours of zero-carbon electricity per year. To put Indian Point’s footprint into context, think of it this way: you could fit three Indian Points inside New York City’s Central Park.

Now, let’s compare Indian Point’s footprint and output with what would be required to replace it with electricity produced by wind turbines. Based on projected output from offshore wind projects, producing that same amount of electricity — 16.4 terawatt-hours per year — would require installing about 4,005 megawatts of wind turbines.² That much capacity would require hundreds of turbines spread over some 1,335 square kilometers (515 square miles) of territory. Thus, from a land-use (or ocean-use) perspective, wind energy requires about 1,300 times as much territory as Indian Point to produce the same amount of energy.³

Those numbers are almost too big to imagine. Therefore, let’s look again at Central Park. Recall that three Indian Points could fit inside the confines of the famed park in Manhattan. That means that replacing the energy production from Indian Point would require paving a land area equal to 400 Central Parks with nothing but forests of wind turbines.

Powering New York City: Land-Area Needed by Nuclear and Wind Energy To Produce 16.4 Terawatt-hours of Electricity Per Year

Sources: Natural Resources Defense Council, author calculations.
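
For readers who want to check the arithmetic behind this comparison and the chart above, here is a minimal sketch in Python. The Indian Point and wind figures come from the text and its footnotes; the area of Central Park (843 acres, roughly 3.41 square kilometers) is an assumed reference value that is not given in the article.

    # Back-of-envelope check of the land-use comparison in the text.
    ANNUAL_OUTPUT_TWH = 16.4          # Indian Point's yearly output (from the text)
    INDIAN_POINT_AREA_KM2 = 1.0       # ~239 acres, roughly 1 square kilometer
    WIND_CAPACITY_MW = 4_005          # offshore capacity needed (footnote 2)
    WIND_AREA_KM2 = 1_335             # territory required for that capacity
    CENTRAL_PARK_KM2 = 3.41           # assumed: 843 acres

    avg_power_mw = ANNUAL_OUTPUT_TWH * 1e6 / 8_760                     # ~1,870 MW average output
    land_ratio = WIND_AREA_KM2 / INDIAN_POINT_AREA_KM2                 # ~1,300x
    parks_covered_by_wind = WIND_AREA_KM2 / CENTRAL_PARK_KM2           # ~390-400
    indian_points_per_park = CENTRAL_PARK_KM2 / INDIAN_POINT_AREA_KM2  # ~3

    print(f"Average power: {avg_power_mw:,.0f} MW")
    print(f"Wind needs ~{land_ratio:,.0f}x the land of Indian Point")
    print(f"Wind footprint ~ {parks_covered_by_wind:,.0f} Central Parks")
    print(f"Central Park could hold ~{indian_points_per_park:.1f} Indian Points")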

Despite its tiny footprint, despite its importance to New York City’s electricity supply, despite its zero-carbon emissions, Indian Point is headed for premature shutdown. By 2021, the drive shafts at Unit 2 and Unit 3 will stop spinning. The reactors are not being shuttered due to decrepitude. They could continue operating for decades to come. Instead, they are being shuttered for political reasons. Anti-nuclear groups, including Riverkeeper and the Natural Resources Defense Council, fought for years to have the plant closed, claiming it was harming fish in the Hudson River and posing a danger to residents in and around New York City. New York Governor Andrew Cuomo, a Democrat, was convinced to join the anti-nuclear crusade, and in early 2017, he gleefully announced the two reactors would be closed.

The looming closure of Indian Point is part of a rash of nuclear reactor retirements across the US that are impeding efforts to reduce greenhouse gas emissions. Indeed, despite nuclear’s critical role in reducing emissions, the US nuclear sector is in the midst of a full-blown crisis. Between 2013 and 2018, American utilities closed, or announced the closure of, 15 nuclear plants. The combined output of those nuclear plants is about 133 terawatt-hours per year. That’s about 70 percent more zero-carbon electricity than was produced by all of the solar facilities in the US in 2017. Not only are US reactors being retired early, but no new nuclear plants (aside from a small reactor slated to be built in Idaho in the mid-2020s) are even being considered by electric utilities in the United States.

The result of all these closures is obvious: the US — which has led the world in the development and deployment of nuclear energy since the days of the Manhattan Project — has become a nuclear laggard. That may please anti-nuclear activists at Greenpeace and the Sierra Club, but it’s a big loss in the effort to fight climate change. In addition, the loss of America’s nuclear fleet could mean higher electricity prices, a less stable electric grid, and an escalation in the land-use battles over the siting of renewable energy projects.

Before going further, let me be clear about where I stand on nuclear energy: If you are anti-carbon dioxide and anti-nuclear, you are pro-blackout. There is simply no way to slash global carbon-dioxide emissions without big increases in our use of nuclear energy.

That fact has been made clear by numerous scientists. In 2011, James Hansen, one of the world’s most famous climate scientists, wrote that “suggesting that renewables will let us phase rapidly off fossil fuels in the United States, China, India, or the world as a whole is almost the equivalent of believing in the Easter Bunny and Tooth Fairy.” He went on to say that politicians and environmental groups “pay homage to the Easter Bunny fantasy, because it is the easy thing to do … They are reluctant to explain what is actually needed to phase out our need for fossil fuels.”

In late 2013, Hansen and three other climate scientists wrote an open letter to environmentalists encouraging them to support nuclear. They wrote that “continued opposition to nuclear power threatens humanity’s ability to avoid dangerous climate change…Renewables like wind and solar and biomass will certainly play roles in a future energy economy, but those energy sources cannot scale up fast enough to deliver cheap and reliable power at the scale the global economy requires.” In 2015, at the UN Climate Change Conference in Paris, Ken Caldeira, a climate scientist at the Carnegie Institution for Science who was one of the co-authors of the 2013 letter, reiterated his belief that nuclear must be part of any emissions-reduction effort. “The goal is not to make a renewable energy system. The goal is to make the most environmentally advantageous system that we can, while providing us with affordable power,” Caldeira said. “And there’s only one technology I know of that can provide carbon-free power when the sun’s not shining and the wind’s not blowing at the scale that modern civilization requires. And that’s nuclear power.”

Also in 2015, the International Energy Agency declared that “Nuclear power is a critical element in limiting greenhouse gas emissions.” It went on to say that global nuclear generation capacity, which in 2018 totaled about 375 gigawatts, must more than double by 2050 if the countries of the world are to have any hope of limiting temperature increases to 2 degrees Celsius, the threshold widely agreed to be the acceptable limit.

In May 2019, the International Energy Agency reiterated its support for nuclear by declaring that, without more nuclear energy, global carbon dioxide emissions will surge and “efforts to transition to a cleaner energy system will become drastically harder and more costly.” How costly? The agency estimated that $1.6 trillion in additional investment would be required in the electricity sector in advanced economies from 2018 to 2040 if the use of nuclear energy continues to decline. That, in turn, will mean higher prices, as “electricity supply costs would be close to $80 billion higher per year on average for advanced economies as a whole.” The report also makes it clear that solar and wind energy cannot fill the gap because of growing land-use conflicts. The Paris-based agency said that “resistance to siting wind and, to a lesser extent, solar farms is a major obstacle to scaling up renewables capacity.”

The importance of nuclear energy in reducing greenhouse gas emissions can be seen, again, by looking at Indian Point. In 2017, the New York Independent System Operator, the non-profit entity that manages the grid in the Empire State, issued a report which concluded that if the two reactors at Indian Point are closed as scheduled by 2021, the electricity produced by the plant will largely be replaced by three gas-fired power plants then under construction. That’s no surprise. Whenever nuclear reactors are shuttered, they almost always get replaced by plants that burn hydrocarbons, and that means increased emissions of carbon dioxide. By one estimate, New York’s electricity-sector emissions will increase by 29 percent when Indian Point is shuttered and its output is replaced by gas-fired power plants.

In 2017, the New England Independent System Operator reported that greenhouse-gas emissions increased by nearly 3 percent in the year following the 2014 closure of the 604-megawatt Vermont Yankee nuclear plant. Why did emissions increase? The percentage of gas-fired electricity in New England jumped by six points after the plant shutdown, to nearly 49 percent.

Similar results occurred in California after state officials negotiated the premature shutdown of the San Onofre Nuclear Generating Station in 2013. After the shutdown, Lucas Davis, a professor at UC Berkeley’s Energy Institute at Haas, along with Catherine Hausman, who works at the Gerald R. Ford School of Public Policy at the University of Michigan, published a report which found that in the first year after San Onofre closed, California’s carbon-dioxide emissions jumped by about 9 million tons. That is roughly the equivalent of putting 2 million additional automobiles on the road.

The closures of Vermont Yankee and San Onofre — along with the looming closure of Indian Point — are part of a grim outlook. By the mid-2020s, the US could prematurely retire as much as a third of its installed nuclear capacity. What’s driving the retirements? Low-cost natural gas is a major factor. In addition, nuclear plants must compete in the wholesale market with heavily subsidized electricity produced from wind and solar. Add in aging reactors, post-Fukushima regulations, and the never-ending opposition from big environmental groups, and the US nuclear sector has been taking a beating.

The closure of these plants has been cheered by the well-funded opponents of nuclear energy. For decades, nuclear energy’s foes have relied on three main criticisms to justify their opposition: radiation, waste, and cost. Let’s look at those in order.

From a nuclear-safety standpoint, it’s difficult to imagine a scarier scenario than what happened on March 11, 2011. A magnitude-9.0 earthquake struck about 130 kilometers off the Japanese coast. Less than an hour later, a series of seven tsunami waves slammed into the Fukushima Daiichi nuclear plant. The backup diesel generators, designed to keep the nuclear plant’s cooling water pumps operating, quickly failed. A day later, a hydrogen explosion blew the roof off the Unit 1 reactor building. Over the next few days, similar explosions hit Units 2 and 3. Three reactors melted down. It was the worst nuclear accident since Chernobyl in 1986. In the wake of the accident at Fukushima, Greenpeace did its utmost to instill fear of radiation. In a March 22, 2011 op-ed in the New York Times, the head of Greenpeace International, Kumi Naidoo, declared that “Nuclear energy is an expensive and deadly distraction from the real solutions.” Naidoo then claimed that nuclear energy is “inherently unsafe” and that the list of possible illnesses caused by “radiation is horrifying: genetic mutations, birth defects, cancer, leukemia.”

Despite Greenpeace’s efforts to instill “radiophobia” in the minds of consumers, the reality is that nuclear energy remains the safest form of electricity production. The facts show that the accident at Fukushima led to exactly two deaths. About three weeks after the tsunami hit the reactor complex, the bodies of two workers were recovered at the plant. They didn’t die of radiation. They drowned.

I am not minimizing the seriousness of the accident at Fukushima. Cleaning up the mess there will take decades and cost hundreds of billions of dollars. Nevertheless, despite all the worry, anxiety, and distribution of iodide pills, exactly zero deaths at Fukushima have been attributed to radiation. I repeat: no one in Japan or anywhere else has been killed by radiation from the accident at Fukushima. You won’t hear that from the world’s biggest anti-nuclear groups, including Friends of the Earth, Greenpeace, or the Sierra Club. All have made radiophobia a central tenet of their campaigns against nuclear energy. Nevertheless, the facts are clear.

In 2013, the UN’s Scientific Committee on the Effects of Atomic Radiation released a report which found that “No radiation-related deaths have been observed among nearly 25,000 workers involved at the accident site. Given the small number of highly exposed workers, it is unlikely that excess cases of thyroid cancer due to radiation exposure would be detectable in the years to come.” (Thyroid cancer is among the most common maladies caused by excessive exposure to radiation.) The UN committee was made up of 80 scientists from 18 countries.

In 2018, Gerry Thomas, a professor at Imperial College London, said that radiation fears at Fukushima are overblown. In an interview on 60 Minutes Australia, Thomas said she had been to Fukushima many times and would have no hesitation about going back to what she called “a beautiful part of the country.” Thomas, who runs the Chernobyl Tissue Bank and is an expert on the effects of radiation, also said no more than 160 people will die from radiation poisoning due to the Chernobyl accident. That’s far fewer than the thousands of deaths that were predicted. What about deaths from radiation due to Fukushima? Thomas said there have been “Absolutely none. No one has died from radiation poisoning.” During the same report on 60 Minutes Australia, Thomas told reporter Tom Steinfort that “the one thing that we have learnt from both Chernobyl and Fukushima is that it actually wasn’t radiation that’s done the health damage to the people in the surrounding areas. It’s their fear of radiation. There’s been far more psychological damage than there has actually physical damage because of the two accidents.”

Anti-nuclear groups and others have fanned the fears of radiation by claiming there is no safe dosage level. The truth is that we are exposed to radiation all the time. Not only that, radiation can be therapeutic and is widely used in medical treatments for numerous conditions, including cancer. Despite these facts, the nuclear industry has been constrained by the policy known as ALARA, which requires it to keep radiation levels As Low As Reasonably Achievable. But as energy analyst James Conca pointed out in a 2018 article in Forbes, following ALARA means that the nuclear industry must now spend billions of dollars “protecting against what was once background levels” of radiation. Radiation is “one of the weakest mutagenic and cytogenic agents on Earth,” he continued. “That’s why it takes so much radiation to hurt anyone.”

Conca’s point is consistent with a 2016 analysis, published by the Genetics Society of America, of the radiation impacts on human health after the bombings of Hiroshima and Nagasaki. The study concluded that “public perception of the rates of cancer and birth defects among survivors and their children is greatly exaggerated when compared to the reality revealed by comprehensive follow-up studies.”

In addition to hyping fears about radiation, anti-nuclear campaigners routinely claim that the radioactive waste produced by nuclear reactors cannot be disposed of safely, and therefore, no new nuclear plants should be built. That’s simply not true. As Michael Shellenberger, one of the world’s staunchest proponents of nuclear energy, points out, nuclear energy’s waste stream is actually one of its greatest virtues. While sitting in the airy front room of Environmental Progress’s office on Telegraph Avenue in Berkeley, a few blocks from the University of California’s campus, Shellenberger told me that nuclear energy is “the only way to make electricity production that contains all of its toxic waste. All of it.” He continued, saying that nuclear energy prevents its waste “from going into the environment and yet people think that the waste from nuclear plants is a big problem.”

Shellenberger and others have pointed out that the nuclear waste issue is not a technical problem; it’s a political problem. That can be seen by looking, one more time, at Indian Point. During my visit to the plant, two Entergy employees, Jerry Nappi and Brian Vangor, showed me where the company stores the spent fuel from the reactors. On the north side of the facility, in an area maybe the size of two tennis courts, there were about 30 large steel-and-concrete cylinders, known in the industry as dry casks. Each cask stands about 15 feet tall, measures 8 feet in diameter (4.5 meters by 2.4 meters), and weighs about 100 tons.

As I looked at the row of casks, I was struck by the fact that throughout the entire operating history of the plant, which began producing electricity in 1962, two years after I was born, the bulk of its spent fuel could fit inside such a small area. And what if some terrorist wannabe decided he wanted to cart off one of the casks? When I asked Nappi about that possibility, his reply was, “It’s impossible, basically.” He then pointed to the massive machine that Entergy was using to maneuver the dry casks around the site. The machine, which ran on heavy metal tracks, moved at about 1 mile per hour — not exactly the kind of getaway vehicle that would be needed by someone hoping to pilfer a bit of radioactive material.

The Indian Point Energy Center, Buchanan, New York, 2018. The dry-cask storage area can be seen on the lower right. Photo by Tyson Culver

The dry casks at Indian Point are part of the nuclear waste that has been created by the US nuclear-energy sector. Since the 1950s, when construction began on Indian Point, the domestic sector has produced about 80,000 tons of high-level waste. That may sound like a lot. But consider this fact: If you collected all of that waste in one place and stacked it about 10 meters (33 feet) high, all of that material would cover an area the size of a single soccer pitch.
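
As a rough check on that claim, here is a minimal sketch, assuming a standard soccer pitch of about 105 by 68 meters (an assumption; the article does not give pitch dimensions). The waste tonnage and stacking height come from the text.

    # Rough check of the soccer-pitch claim under the stated assumptions.
    WASTE_TONNES = 80_000            # cumulative US high-level waste (from the text)
    PITCH_AREA_M2 = 105 * 68         # assumed pitch size, ~7,140 m^2
    STACK_HEIGHT_M = 10              # stacking height used in the text

    available_volume_m3 = PITCH_AREA_M2 * STACK_HEIGHT_M          # ~71,400 m^3
    implied_bulk_density = WASTE_TONNES / available_volume_m3     # ~1.1 t/m^3

    print(f"Available volume: {available_volume_m3:,} m^3")
    print(f"Implied bulk density: {implied_bulk_density:.2f} tonnes per cubic meter")
    # Spent-fuel assemblies are considerably denser than ~1 tonne per cubic meter,
    # so the claimed footprint is, if anything, conservative.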

The key to nuclear waste is proper management, and France provides one of the world’s best examples of it. France gets about 75 percent of its electricity from its fleet of about 60 nuclear reactors. Furthermore, according to the World Nuclear Association, France has the highest degree of reactor standardization in the world. All of the high-level radioactive waste from those plants has been collected, compacted or vitrified, and is now being safely stored at a single facility near the town of La Hague.

While France provides an example of the political will needed to deal with nuclear waste, the US Congress has demonstrated decades of political cowardice. Since 1982, when Congress passed the Nuclear Waste Policy Act — a law that requires the federal government to take all of the nuclear waste off the hands of the nuclear utilities — the US has held numerous presidential elections, conducted about 130 space shuttle missions, and successfully landed robotic rovers on Mars. Despite those many terrestrial and extra-terrestrial feats, the US still doesn’t have a place for long-term storage and disposal of the spent fuel coming from places like Indian Point. The result is that nuclear waste from power plants continues to be stored at dozens of sites across the country, including Indian Point.

In 2011, a presidential panel, the Blue Ribbon Commission on America’s Nuclear Future, summarized the situation, saying that America’s policy toward spent nuclear fuel is “all but completely broken down.” Over the past few decades, the federal government spent more than $13 billion on a waste repository at Yucca Mountain, Nevada. But politics have kept the facility from opening. In 2008, while campaigning in Nevada, Barack Obama, in a bow to the state’s powerful Democratic senator, Harry Reid, promised to cancel federal funding for the Yucca Mountain site. After Obama was elected, he did just that. Although it’s not clear if, or when, Yucca Mountain will ever be opened, the federal government has a viable option: the dry casks now sitting at Indian Point and other nuclear facilities could be put into interim storage on land already owned by the federal government. The Department of Energy has several nuclear-focused locations that are excellent candidates for interim storage, including Savannah River Site in South Carolina, Oak Ridge National Laboratory in Tennessee, and Hanford Site in Washington state. Another location, the Waste Isolation Pilot Plant (WIPP) in New Mexico, is already being used by the federal government for disposal of radioactive waste generated by the Defense Department. It, too, could be used to store nuclear waste if Congress can muster the political will to deal with the issue.

These federal locations already have security and safety systems in place to monitor the waste. The workers at those national laboratories have decades of experience with nuclear materials and the communities near the labs are nuclear-savvy and want to keep the jobs that the sites provide. In addition, the sites are plenty big. For instance, WIPP alone covers 16 square miles. Using those federally owned sites for interim storage of nuclear waste will give Congress plenty of time to either open Yucca Mountain or find another disposal site. In the meantime, most of the used fuel from America’s nuclear energy sector will continue to be stored at the same locations where it was used to generate electricity.

While radiation fears and waste disposal have hampered the nuclear sector, the biggest single problem facing the future of nuclear energy is cost. That can be seen by looking at recent history in the US. In 2012, the US Nuclear Regulatory Commission approved the construction license for the Vogtle 3 and 4 reactors, near Augusta, Georgia. The Vogtle reactors, which are primarily owned by Southern Company, will be capable of producing 2,200 megawatts of electricity. The two reactors were the first to get a construction permit in the US since 1978. They use Westinghouse’s AP1000 design, which allows passive cooling and is therefore more resistant to the type of meltdown that occurred at Fukushima. When the reactors were announced, the total cost of the project was estimated at $14 billion. The project was financed, in part, by an $8.3 billion loan guarantee from the Department of Energy. Shortly after the Vogtle reactors got the nod from the NRC, the agency granted a construction license for two more AP1000 reactors — Summer 2 and 3 — in South Carolina.

But in 2017, Westinghouse, a subsidiary of the Japanese company Toshiba, filed for bankruptcy, citing losses of some $9 billion on the Vogtle and Summer projects. Shortly after the Westinghouse bankruptcy, the owners of the Summer plant, SCANA and Santee Cooper, announced they would abandon the project, a move that will likely require consumers to pay billions of dollars for the unfinished reactors. Similar cost overruns hit Vogtle. By 2018, the projected cost of the two reactors at Vogtle had soared to some $25 billion, nearly double the original estimate. Despite the soaring costs, the owners of the Vogtle project decided to continue construction.

The enormous cost of building large nuclear reactors like the AP1000, which will produce about the same amount of electricity as the Unit 2 reactor at Indian Point, isn’t the only expense. In addition to the sky-high construction costs, companies that are trying to commercialize new reactor designs face exorbitant permitting costs. In 2015, the Government Accountability Office concluded that obtaining certification from the US Nuclear Regulatory Commission for a new reactor is “a multi-decade process, with costs up to $1 billion to $2 billion, to design and certify or license.” Venture capitalists may be interested in nuclear technologies, but with permitting costs alone measured in the billions of dollars, it appears unlikely that any new nuclear reactor designs will be brought to market — or take significant market share — unless they are backed by central governments.

In fact, when looking at the global nuclear-energy sector, it’s clear that state-owned companies are the only ones building significant amounts of new nuclear capacity. The state-backed model is particularly obvious in China, which is building more nuclear plants than any other country in the world. Those efforts are being led by China National Nuclear Corporation and China General Nuclear Power Group. By early 2019, China had 15 reactors under construction and several more in the development pipeline. The state-backed nuclear-energy model is also observable in South Korea, which has become an exporter of nuclear technology through the state-owned Korea Electric Power Corporation. KEPCO is building the 5,600-megawatt Barakah Nuclear Energy Plant in Abu Dhabi. When completed, it will be the world’s largest single nuclear energy project. The first reactor at Barakah is expected to begin producing electricity in 2020. But South Korea’s politicians are planning to phase out the country’s own nuclear program over the next few decades and rely more on renewables instead.

The Indian government has said it, too, will construct more capacity. In early 2018, it announced plans for 12 new reactors with a combined capacity of 9,000 megawatts, meaning that India will more than double the size of its nuclear fleet over the next decade or so. Ten of the reactors will use India’s own Pressurized Heavy Water Reactor design and two will use Russia’s reactor design.

State ownership has allowed Russia to become the undisputed leader in nuclear-energy deployment around the world. By mid-2018, Rosatom, the state-owned nuclear firm, had contracts for nearly three dozen new nuclear plants, with about a dozen under construction, including projects in Bangladesh and India. In 2018, the company began building a $20 billion nuclear plant in Turkey, that country’s first; it is slated to come online in 2023. In all, Rosatom had contracts worth about $130 billion, and many of those deals were enhanced by the Russian government’s willingness to provide financing for the projects.

In addition to the many reactors it is building onshore, Rosatom has also deployed the world’s first nuclear power ship. In 2018, the state-owned Russian company began testing the power ship at the port of Murmansk. The ship carries two submarine-style reactors with a total electric generation capacity of 70 megawatts. According to one report, Rosatom officials plan to “tow the vessel to coastal cities in need of power, either for short-term boosts or longer-term additions to electricity supply.” The ship holds enough enriched uranium to supply the two onboard reactors for 12 years. After that time, the ship will be towed back to Russia, where the spent fuel and radioactive waste will be processed. The first location for the power ship will be Pevek, a remote port in Siberia. When Rosatom moved the vessel from St. Petersburg to Murmansk, it was tailed by a Greenpeace sailboat carrying a banner that read “Floating Nuclear Reactor? Srsly?”

Those Greenpeacers conveniently ignore the decades-long history of nuclear marine propulsion. The first nuclear-powered submarine, the USS Nautilus, began patrolling the world’s oceans in 1955. By 1962, the US Navy was operating more than two dozen nuclear submarines, with 30 more under construction. Since then, the nuclear fleets of countries like the US, China, India, Russia, and France have accumulated more than 12,000 reactor-years of operating time. While power ships similar to the one built by Rosatom are interesting and could provide an alternative to the fuel-oil-fired vessels that have been used in Lebanon and Iraq, they will only provide a part of the electricity needed to boost living standards around the world.

For the nuclear industry to gain greater traction in the global electricity market, it must develop reactors that are cheaper and safer than the ones now being built. Much of the effort has been aimed at designing reactors that are inherently safe, meaning that the cooling and containment systems are designed to prevent accidents and major releases of radioactive materials. Nuclear proponents believe that much of the potential lies in SMRs, short for small modular reactors. Generally defined as plants that have capacities of 300 megawatts or less, SMRs could be deployed as single or multiple units. In theory, SMRs could be cheaper than the reactors now being built because many of the components could be fabricated in a factory rather than on the construction site. Having a centralized production facility could allow a dedicated workforce at one location to test, build, and ship the reactors — by barge, rail, or truck — to the final destination. Concentrating the workforce in one place should also accelerate the learning curve and allow the company (or companies) producing the reactor to streamline production, reduce costs, and therefore, build more reactors faster.

NuScale Power, a US-based company that is owned by construction giant Fluor Corp., is planning to build a smaller version of the light-water reactors now commonly used around the world. All of the commercial reactors now operating in the US, and all of those now being built here, are light-water reactors, meaning they use ordinary water to cool the core and moderate the nuclear reaction. The electrical output of each NuScale reactor is projected to be 60 megawatts. By contrast, the Westinghouse AP1000, the reactor type now being built at Plant Vogtle, has an electrical output of about 1,110 megawatts.

In theory, that smaller size gives NuScale’s customers more flexibility. If a NuScale customer wants more generation at a future date, it can add capacity in 60-megawatt increments. NuScale has garnered some $226 million in grants from the Department of Energy. After it gets licensing from the NRC, it plans to build its first reactor at Idaho National Laboratory and sell the electricity it produces to Utah Associated Municipal Power Systems. But even though it has a financially secure parent company, federal grant money, a federally owned site for its project, and a customer for its electricity, NuScale is unlikely to begin producing electricity from its first reactor until the mid- to late 2020s.

Among the most prominent — and perhaps most promising — SMR designs are ones that use molten salt. Rather than using fuel rods like conventional reactors, this design mixes the nuclear fuel into a salt mixture. Molten-salt reactors have a proven track record: the federal government tested the design in the 1960s at Oak Ridge National Laboratory, where an experimental reactor ran for several years. Terrestrial Energy, a Canadian company, is developing a molten-salt reactor that it hopes to deploy in the mid-2020s. The company’s sealed reactor units are designed to run for seven years without having to be refueled. Terrestrial plans to build a 190-megawatt reactor in Ontario by 2030, and it says the power plant will be cost-competitive with ones fueled by natural gas.

Another company with a promising molten-salt reactor design is ThorCon International, which hopes to use shipyards to build its reactor, a 250-megawatt model that will be deployed on ocean-going hulls. ThorCon wants to build the vessels to compete with the powerships that Rosatom stationed in Murmansk in 2018 and the fuel-oil-fired ships that have been deployed by Karadeniz Holdings in Iraq and Lebanon. The problem is that ThorCon needs about $1 billion to build the first copy of its design and it hasn’t been able to raise the money.

The problem with the designs being promoted by NuScale, Terrestrial, ThorCon, and the other nuclear startups can be summed up in one word: commercialization. The reactor designs may sound appealing on paper, but unless or until those reactors can be built — and by that, I mean built by the dozens or hundreds — they cannot, will not, make a significant contribution to the Terawatt Challenge. Furthermore, the longer it takes for those companies to get their products into commercial use, the less likely it is that nuclear energy will make a big contribution to the electricity grids of the future. Electricity producers need to make prompt decisions about the type of generators they will be deploying in the decades ahead. They can’t wait for decades while new nuclear reactors are developed, tested, and permitted.

Robert Hargraves, the co-founder of ThorCon, told me in a telephone interview that his company believes it can build its reactors for a cost of about $1 per watt, a price point that would allow the company’s molten-salt reactors to compete with natural-gas-fired generators on initial capital costs. Hargraves said that for companies like his — which need hundreds of millions of dollars to build and deploy a first-of-its-kind reactor — “it’s hard to find people willing to put that much money in and have an 8-year payback. Investors are afraid there will be a regulatory roadblock that will prevent them from getting their money back.” In addition, he said, “So many people oppose nuclear energy it makes regulators reluctant to say yes to new reactor designs.” And then, of course, there’s the problem of financing. “The World Bank won’t touch a project like ours,” Hargraves told me.
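
To make those figures concrete, here is a minimal sketch of what a $1-per-watt cost and an 8-year payback would imply for a hypothetical 250-megawatt unit; the 90 percent capacity factor is my assumption, not a figure from Hargraves.

    # Implications of the quoted "$1 per watt" and "8-year payback" figures.
    CAPACITY_MW = 250          # illustrative unit size (ThorCon's model is 250 MW)
    COST_PER_WATT = 1.0        # dollars, figure quoted by Hargraves
    PAYBACK_YEARS = 8          # payback period quoted by Hargraves
    CAPACITY_FACTOR = 0.90     # assumed for illustration

    capex_dollars = CAPACITY_MW * 1_000_000 * COST_PER_WATT      # $250 million
    annual_mwh = CAPACITY_MW * CAPACITY_FACTOR * 8_760           # ~1.97 million MWh/year
    cash_needed_per_year = capex_dollars / PAYBACK_YEARS         # ~$31 million/year
    margin_per_mwh = cash_needed_per_year / annual_mwh           # ~$16/MWh

    print(f"Capital cost: ${capex_dollars/1e6:.0f} million")
    print(f"Net margin needed: about ${margin_per_mwh:.0f} per MWh above operating "
          f"costs to pay back in {PAYBACK_YEARS} years")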

In addition to SMRs, which use fission, some ambitious companies continue chasing the promise of fusion. Among them is TAE Technologies, a California-based company that says it is on “a purposeful path to commercial fusion energy.” In early 2019, the company predicted it would begin commercialization of its design in 2023. The company has raised $600 million in investment capital and counts former Energy Secretary Ernest Moniz among its board members. But fusion faces many challenges, including building containment systems that can handle the enormous amounts of heat generated by the reaction.

In short, deploying new nuclear capacity — that is, reactors that use chemistries other than the light-water designs that dominate today’s market — will be difficult and costly. New nuclear technologies will have to overcome the public’s long-standing distrust. But they deserve the chance to.

Footnotes

  1. The combined output of the two reactors — Unit 2 and Unit 3 — is 2,069 megawatts. That’s roughly the same output as Hoover Dam. But Indian Point’s footprint — just one square kilometer — is a tiny fraction of the territory covered by Lake Mead, the body of water created by Hoover Dam. Lake Mead’s surface area is about 640 square kilometers.
  2. This estimate is based on the proposed South Fork wind project, a 90-megawatt offshore facility that is expected to produce 370 gigawatt-hours per year. That works out to about 4.1 gigawatt-hours per year for each megawatt of offshore capacity. Thus, matching the energy output of Indian Point, 16,400 gigawatt-hours per year, would require about 4,005 megawatts of offshore wind capacity. (A short calculation sketch reproducing the arithmetic in this footnote and the next appears after the footnotes.)
  3. Note that the projected output of the offshore wind project, at 4.1 gigawatt-hours per megawatt of installed capacity, implies a capacity factor of about 0.47. That is significantly higher than the recorded output of onshore wind projects. For instance, in 2017, Texas had 22,637 megawatts of installed wind capacity, which produced 67,092 gigawatt-hours of electricity. Thus, in one of America’s best states for wind, 1 megawatt of wind capacity produces about 3 gigawatt-hours of electricity per year, implying a capacity factor of roughly 0.34.

For 2017 wind capacity in Texas, see: https://windexchange.energy.gov/maps-data/321

For 2017 wind output in Texas, see: https://bit.ly/2EwmlRU

Therefore, replacing Indian Point with onshore wind energy produced in Texas would require 5,533 megawatts of wind capacity. That much capacity would occupy 1.844 billion square meters (1,844 square kilometers), or about 711 square miles, of territory.
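
The calculation sketch referenced in footnote 2 follows. It reproduces the arithmetic in footnotes 2 and 3; the 8,760 hours in a year and the siting density of roughly 3 megawatts per square kilometer (4,005 megawatts over 1,335 square kilometers, from the main text) are the only inputs not stated in the footnotes, and small differences from the figures above reflect rounding.

    # Reproduces the arithmetic in footnotes 2 and 3.
    HOURS_PER_YEAR = 8_760
    INDIAN_POINT_GWH = 16_400

    # Footnote 2: offshore wind, based on the South Fork project
    south_fork_mw, south_fork_gwh = 90, 370
    gwh_per_mw_offshore = south_fork_gwh / south_fork_mw           # ~4.1 GWh/MW/yr
    offshore_mw_needed = INDIAN_POINT_GWH / gwh_per_mw_offshore    # ~4,000 MW
    offshore_cf = gwh_per_mw_offshore * 1_000 / HOURS_PER_YEAR     # ~0.47

    # Footnote 3: onshore wind, based on Texas in 2017
    texas_mw, texas_gwh = 22_637, 67_092
    gwh_per_mw_onshore = texas_gwh / texas_mw                      # ~3.0 GWh/MW/yr
    onshore_cf = gwh_per_mw_onshore * 1_000 / HOURS_PER_YEAR       # ~0.34
    onshore_mw_needed = INDIAN_POINT_GWH / gwh_per_mw_onshore      # ~5,533 MW

    # Siting density implied by the main text: 4,005 MW over 1,335 km^2
    mw_per_km2 = 4_005 / 1_335                                     # ~3 MW per km^2
    onshore_area_km2 = onshore_mw_needed / mw_per_km2              # ~1,844 km^2

    print(f"Offshore: {offshore_mw_needed:,.0f} MW, capacity factor ~{offshore_cf:.2f}")
    print(f"Onshore:  {onshore_mw_needed:,.0f} MW, capacity factor ~{onshore_cf:.2f}, "
          f"~{onshore_area_km2:,.0f} km^2 of territory")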