Thursday, March 5, 2015

Conventional Deterrence of Russia


Most of my writing on conventional deterrence topics since 2011 has focused on China. I’m hardly alone in that regard within the broader Western security studies community. Until recently, the most plausible (albeit presently low likelihood) scenarios for a great power war involved hypothetical American interventions to defend Taiwan, Japan, or the Philippines from major Chinese aggression, or perhaps a cascading crisis on the Korean peninsula that brought the U.S. and China into direct conflict. Each of these potential fault lines represents a differing set of political objectives that might be held by, and strategic circumstances that might confront, the parties involved. All the same, the underlying condition that makes each of these scenarios conceivable is the lack of a foundational political consensus between China and its neighbors regarding regional security principles.
The exact same condition has long existed between Russia and Europe. Although the Helsinki Final Act and subsequent OSCE Vienna Document series of political commitments were intended to establish a foundational European security consensus, Russia and the Euro-Atlantic bloc have never quite interpreted those commitments in the same ways. The declining security situation between the two sides over the past decade stems from that fundamental values-based disconnect.
For much of the post-Cold War era, Western policy and opinion elites (excepting those in Eastern Europe) largely glossed over the incremental decay of the Helsinki system. Hope reigned in the West that increased economic interdependence would incentivize Russian liberalization. When it became obvious that the desired liberalization would not come to pass under the Putin regime, continued economic interdependence was justified on the grounds that it would serve as an inducement for Russian restraint. Neither the ‘gas wars’ against Ukraine after the 2004-2005 ‘Orange Revolution’ nor other subsequent Russian coercive economic efforts against its neighbors, nor the 2007 Russian ‘suspension’ of its adherence to the Conventional Forces in Europe treaty, nor the 2007 cyberattacks against Estonian institutions, nor the 2008 Russo-Georgian War, nor the Russian regime’s increasingly bellicose rhetoric and military posturing after 2009 resulted in any serious Western reevaluation of the security situation.
The Russian invasion of Crimea and Eastern Ukraine last year has changed the equation somewhat. Unlike the case with ‘distant’ Georgia, the Putin regime forcibly revised the borders of a sovereign state on the EU’s and NATO’s geographic doorstep. This, combined with the Putin regime’s bald-faced propaganda offensives against the West, the surge in Russian military demonstrations along Europe’s eastern borders and northern and western maritime peripheries, the growing recognition of the ideological and financial ties between Europe’s political fringe and the Putin regime, and the regime’s intimations that other neighboring countries (especially those containing ethnic Russian minorities) should be brought back within Russia’s ‘historical’ sphere of influence notwithstanding their EU or NATO membership, is making it impossible to continue ignoring Russian revanchism.
It is therefore refreshing to see the Western security studies community’s increased exploration over the past year of potential strategies for countering Russian aggression. All the same, even though these strategies all lean on conventional deterrence to some degree, the level of analysis regarding how that deterrent should be designed and potentially employed is nowhere near as detailed as the work done during the same period on conventional deterrence of China or even Iran. I’ve seen an alarming number of articles and commentaries asserting that NATO’s combined strength or U.S. military superiority renders any Russian aggression against the alliance futile. This ignores the many conceivable campaigns Russia might opt to wage in order to seize small yet strategically useful portions of victims' sovereign territory as faits accomplis, to politically ‘Finlandize’ Europe’s east, or to irreversibly split NATO politically. The geographically confined nature of these potential combat zones, their sheer proximity to Russia, and Russia’s unquestioned quantitative military superiority in theater create a litany of conventional deterrence challenges that must not be papered over.
It is useful to illustrate how the Eastern European and East Asian theaters’ dissimilar geostrategic and military circumstances make for considerable differences in conventional deterrence approaches. The basic principle that conventional deterrence credibility requires forward presence is true in both cases. So is the idea that constabulary forces fill critical deterrence roles along the lower end of the conventional conflict spectrum. Beyond that, whereas the East Asian theater is predominantly maritime, the Eastern European theater would predominantly involve land warfare. Whereas China and America’s treaty allies in East Asia are separated by the sea or otherwise do not share land borders, lines in the soil separate Russia from NATO’s Baltic members (not to mention Norway). The maritime approaches to NATO’s Baltic and Black Sea members are far narrower than is the case with any U.S. treaty ally in East Asia. Similarly, the land areas available for operational maneuver in the Baltics are minuscule compared to the vastness of the Western Pacific—and also represent those NATO members’ homelands in their entirety. The political aspects are likewise very different: U.S. alliances in East Asia are bilateral and therefore comparatively simple to manage in a crisis or conflict, whereas NATO operates by political consensus—and U.S. lines of operation (as well as logistics) would wholly depend upon the support of NATO’s Central and Western European members. Militarily, while the PLA has not fought a war since 1979, Russian forces have fought in multiple conflicts in their ‘Near Abroad’ over the past two decades—and there is considerable evidence they made great efforts to learn from their 2008 invasion of Georgia and their ongoing operations in Ukraine. Last in this by no means comprehensive comparison, the Russian approach to the potential uses of nuclear forces in crises and war is vastly more assertive than the Chinese approach.
I plan to write more comprehensively on conventional deterrence of Russia as my time allows, and I highly encourage the security studies community to do the same. The subject requires lengthy treatment, to be sure. In the interim I strongly recommend reading Jakub Grygiel’s and A. Wess Mitchell’s article in the December 2014 American Interest; Forrest Morgan’s prescient 2012 IFRI monograph on the challenges of managing escalation in a NATO-Russian conflict; the January 2015 CSIS European Trilateral Nuclear Consensus Statement resulting from Track II discussions amongst U.S., U.K., and French participants; the summary of the October 2014 SWP-CNA meeting on Baltic Security; an excellent set of articles by Finnish security commentators regarding Russian threats to Scandinavian security; and Elbridge Colby’s discussion of Russian concepts for the use of theater nuclear forces in his January 2015 CNAS monograph Nuclear Weapons in the Third Offset Strategy. Together these serve as primers for the much-needed discussions and debate.
Suffice it to say, while the military-strategic challenges involved in deterring Russia may actually be more complex than is the case with China, they are not insurmountable. Rather, as was true throughout the Cold War, the single greatest question mark will almost certainly center on the Euro-Atlantic bloc’s political resolve to create—and if necessary wield—sufficiently capable military forces that are sized, positioned, and postured to support deterrence. We in the security studies community have an important role to play in buttressing that resolve by providing leaders and the interested public with serious empirical studies of how conventional deterrence of Russian aggression might best be achieved. We're long overdue in getting started.

The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.  

Tuesday, March 3, 2015

The Price of Failure in Naval Innovation



German Fleet on its way to surrender in 1918

     On 21 November 1918 Great Britain’s Royal Navy (RN) stood at the pinnacle of naval power and success. Its greatest and most dangerous opponent to date, the German Empire’s High Seas Fleet, sailed to Scapa Flow to surrender and be interned under the terms of the armistice. Grand Fleet Commander Admiral Sir David Beatty declared “the German flag will be hauled down at sunset and is not to be hoisted again without permission”. No nation on earth had ever assembled a larger or more technologically advanced fleet. The Grand Fleet boasted 35 battleships, 11 battlecruisers, the world’s only operational aircraft carriers, and steam-driven, high-speed submarines capable of operating with the battle fleet. It had overcome the deficiencies in heavy ordnance and tactical doctrine that had been evident at the May 1916 Battle of Jutland in the loss of three battlecruisers and the failure to inflict greater damage on the German battle fleet. No one doubted that the Royal Navy continued to rule the waves as it had done without serious challenge since the defeat of Napoleon’s naval forces at Trafalgar in October 1805. Within just 20 years, however, the mighty RN had fallen significantly behind the other great powers in naval technological innovation. Britain had failed to keep pace in both the extraordinary and the mundane features of naval innovation. The mundane failures, in particular, proved costly. They put the Royal Navy at a considerable disadvantage throughout the Second World War and imposed additional costs on the service that hobbled its attempts to create a post-World War 2 force. The following three failures in naval innovation were particularly crippling.

1)  Naval propulsion equipment. The Royal Navy led the world in the development of naval propulsion technology from the 1860s into the second decade of the 20th century. Its achievements included the introduction of fuel oil for propulsion and of the turbine engine. By 1915, however, this lead had begun to slip. It was the U.S. Navy, rather than the RN, that introduced turbo-electric drive for warships and began significant work on improving warship fuel economy. The trend continued in the interwar period. By the late 1930s, British propulsion machinery was leakier, heavier, bulkier, and decidedly less fuel efficient than comparable American marine propulsion installations. Standard Royal Navy boilers, for example, had to be cleaned every 750 hours of operating time, as compared with 2,000 hours for comparable U.S. boilers.[i] The British naval constructor and historian David K. Brown attributed this to the U.S. use of boiler water treatment chemicals to prevent scale buildup in water tube installations. Senior British naval leaders, who in most cases had little or no engineering background or experience, refused to allow the use of such chemicals in Royal Navy vessels. British Pacific Fleet engineers, after seeing the U.S. practice in 1945, disobeyed official instructions and began using the chemicals themselves. The fleet engineer was threatened with court martial for disobeying orders, but the improved performance trumped traditional naval justice and he was later promoted.[ii] In addition, British fuel oil nozzles were specially configured to burn the very “sweet” oil purchased from the Persian Gulf. When supplied with U.S. oil, RN boilers made more smoke and were less efficient.[iii]
USS Washington (foreground) with HMS King George V

     These faults significantly reduced the fuel economy of British warships in comparison with their American counterparts. The American battleship USS Washington initially operated with the British Home Fleet in 1942 before her transfer to the Pacific. Washington burned 39% less fuel at standard bell speed than the comparable British battleship HMS King George V. Washington’s fuel efficiency advantage was even greater at speeds above 15 knots, and in practice she had double the endurance of her British counterpart.[iv]
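     A rough back-of-envelope check helps put that figure in perspective (the arithmetic here is mine, not drawn from the cited source, and it assumes equal fuel loads with ‘consumption’ read as fuel burned per mile steamed): burning 39% less fuel per mile yields a range ratio of roughly 1 / (1 - 0.39) ≈ 1.6, so the cruising-speed figure alone accounts for about a 60% endurance advantage. The doubling observed in practice presumably reflects the still-wider efficiency gap above 15 knots and differences in bunker capacity.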

2)  Failure to Adopt Alternating Current (AC) for Warship Electrical Systems:  As with main propulsion machinery, the Royal Navy was a pioneer in the installation of electrical equipment on board its warships. The first electric searchlights were installed on HMS Minotaur in 1876. The first fatality from electric shock aboard a warship occurred on HMS Inflexible in 1881. The Royal Navy continued to advance shipboard electrical development throughout the 19th and early 20th centuries, introducing in 1904 the “ring main” system, the forerunner of the electrical distribution systems aboard current warships. This advance allowed electric power to be supplied to large shipboard equipment, such as the electrically trained main gun turrets of the Invincible class battlecruisers of 1906. These warships also set a new standard of development with their introduction of the 220 volt direct current (DC) system, which would remain an RN standard for the next 40+ years.[v]
1920's General Electric Ad for the turbo-electric battleship

HMS Ark Royal
    As with propulsion equipment, British development of shipboard electrical systems lagged during the interwar period. The U.S. Navy adopted the alternating current (AC) system in 1932 and the attendant small circuit breakers later in the decade. AC systems proved to be much lighter and more efficient than their DC counterparts, allowing smaller ships to support more electrical equipment. The Royal Navy, by contrast, did not begin installing AC systems until the end of the Second World War, and did not fully adopt them for larger warships until the mid-1950s. Given the significant electrical requirements of postwar naval equipment such as radars (search and fire control), sonar, computers, and communications gear, this failure to adopt AC power put the British at a considerable disadvantage in equipping their warships in the postwar era. D.K. Brown cited the limited capacity of DC power afloat as one of the most costly factors in keeping Great Britain’s last fixed-wing aircraft carrier, HMS Ark Royal, in commission into the 1970s.[vi]

Mess Deck HMCS Iroquois, 1944
3)  Habitability on board Warships: D.K. Brown noted that, with regard to habitability, a sailor from the time of Lord Nelson would still have felt at home on a British warship of the 1930s and 1940s. In his 1946 book HM Frigate, the naval author Nicholas Monsarrat noted the stark differences between the British and U.S. destroyer escorts he commanded. British ships still sent food from the galley to individual “messes” in or near sailors’ living spaces; the food often spilled or went cold en route. British ships also retained hammocks for sleeping accommodation well into the 1940s, long after other navies had discarded these relics of the age of sail. Ventilation and insulation likewise remained poor on British warships in comparison with other navies. The famous Royal Navy antisubmarine Flower class corvettes fighting in the North Atlantic lacked heating and insulation, which made their sailors susceptible to tuberculosis.[vii] Some senior Royal Navy commanders dismissed such concerns, suggesting that the addition of such comforts would make sailors “soft”.
President Truman enjoys cafeteria messing, 1945
     The U.S. Navy, by contrast,  had adopted cafeteria-style, centralized “mess decks”, shipboard laundry services, bunks for sleeping accommodation, ice water in each mess space, potato peelers, good insulation and ventilation, and an excellent internal communication system a whole decade before the RN considered these advancements. In addition, American sailors had access to other comforts their British counterparts did not, including electric ice cream machines that were in use on U.S. warships as early as 1916.[viii]
Monsarrat noted all of these features and concluded by saying, “no one would consider U.S. sailors of World War 2 as soft”.

     While its failure to immediately embrace innovative advances in propulsion machinery, electrical capacity, and habitability did not prevent the Royal Navy from achieving victory over its German and Italian opponents during the Second World War, it certainly made the effort more costly. Money spent fueling inefficient engines could have bought more ships or better sustained the existing order of battle. More advanced electrical plants might have allowed the Royal Navy to preserve more of its war-built force after 1945, especially its aircraft carriers. Finally, habitability matters because “the combat efficiency of the crew is increased if they are well fed and can rest properly when off duty.”[ix]
     To their great credit, however, Royal Navy senior leaders, engineers, and warship designers rebounded smartly from these problems in the post-World War 2 era. They belatedly adopted AC electrical systems, devised some of the most important new naval systems found in postwar combatants, and were the first to take gas turbine propulsion, fin stabilizers, and helicopters to sea on small warships.
     What does the RN’s failure to adopt new supporting technologies mean for the present U.S. Navy? Innovations in systems other than armament, sensors, and communications can have significant impacts on the development of a ship’s combat capability, and those developments are often long and costly. Electric drive development in the DDG 1000 has been slow and expensive, but it will likely lead to fuel savings, better internal ship subdivision, and the ability to deploy directed energy weapons. The modularity of the Littoral Combat Ship may take time to reach full operational capability, but it offers the promise of future reconfigurable warships with multiple mission combinations on a common hull. Less cramped living spaces, dedicated exercise facilities, and improved internet access have gone a long way toward improving the individual sailor’s lot at sea. These advances, while difficult to achieve and perhaps not as flashy as new weapons and sensors, can make a significant difference both in the next war and in what comes after the guns fall silent.
 

[i] David K. Brown, Nelson to Vanguard: Warship Design and Development, 1923-1945, Annapolis, MD: Naval Institute Press, 2000, p. 101.
[ii] Rear Admiral Louis Le Bailly, From Fisher to the Falklands, London: Institute of Marine Engineers, 1991, pp. 71-73.
[iii] Le Bailly, pp. 71-73.
[iv] Brown, Nelson to Vanguard, p. 33.
[v] John M. Maber, “Electrical Supply in Warships, A Brief History”, Crown Copyright/MoD (1980).
[vi] David K. Brown, Rebuilding the Royal Navy, Annapolis, MD: Naval Institute Press, 2005, p. x.
[vii] Brown, Nelson to Vanguard, p. 134.
[viii] Ronald Spector, “The U.S. Navy’s Sea Change”, Military History Quarterly, 1 February 2010.
[ix] Brown, Nelson to Vanguard, p. 134.

Always Have Branching Actions Prepared


There’s an excellent article in the Spring ’15 Naval War College Review by Anthony Tully and Lu Yu on how Yamamoto’s and Nagumo’s faulty assumption that the Japanese Combined Fleet had achieved operational surprise at Midway directly led to their decisive defeat in that battle. The article builds on the similarly excellent work Tully and Jonathan Parshall did in their book Shattered Sword, which I’m ashamed to say I’ve skimmed and referenced but have yet to read in its entirety.
I believe Tully and Lu make a highly convincing case in their article. I found the discussion of U.S. and Japanese maritime surveillance tactics during the first months of the war particularly fascinating; clearly both sides were still experimenting to find a good balance between timely wide-area coverage and the efficient use of limited scouting resources. Nagumo’s decisions heading into the morning of 4 June 1942 make more sense in this context, however disastrous they proved in the end.
The moral of the story as I see it is that a force commander must always have multiple well-developed branching actions: alternative plans that account for operational situations different from the one assumed when the plan in progress was designed. It is hubristic to assume that the situational assumptions governing an operational plan are fully correct, or that they won’t change significantly over time against an intelligent and adaptive adversary. Tully and Lu highlight multiple intelligence data points that should have led Yamamoto or Nagumo to question the soundness of their scheme of maneuver or otherwise employ more aggressive surveillance tactics. Instead, despite the existence of some evidence to the contrary, they continued to assume, incorrectly, that they still possessed the advantage of operational surprise.
I’ve said it before and I’ll say it again: deception and concealment can be terrific force-multipliers, but only if the operational commander doesn’t blindly assume that their use will be effective when needed. An operational plan under development should be thoroughly red-teamed to identify planning assumptions that, if mistaken, could lead to unacceptable risk of losses or failure. Branching actions should then be developed—and clearly briefed to the units in the force—that account for such contingencies. If intelligence data begins to suggest that surprise may not be in the offing, or that an otherwise unalerted adversary is not behaving as expected, there should always be a branching action ready to go that matches the outline of the apparent situation at hand.

The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.  

Monday, March 2, 2015

Call for (Sharing) Papers and Books: Political Histories of Building a Strong U.S. Navy



Carl Vinson (Image courtesy Library of Congress)
There have been several interesting articles of late that touch on the politics of U.S. naval strength. I’ve discussed the topic with a number of friends and colleagues, and unsurprisingly I’ve heard a wide range of views. One thing I think they’re all in agreement on is that we were clearly approaching a strategic precipice even before the Budget Control Act of 2011.
It strikes me that any political strategy for preserving a strong Navy ought to be informed by how that very strength was politically achieved in the first place. We know that the political path to a global U.S. Navy began with the naval authorization acts of the 1880s and 1890s, was amplified by the ‘second to none’ Naval Act of 1916, and was cemented by the Naval Authorization Acts of 1934-1940. I personally can’t say I know much about how the sponsors of these acts or their navalist backers achieved what they did, though.
For example, while it’s well understood that Carl Vinson was the driving political force behind the pre-Second World War U.S. naval rearmament, how exactly did he gain the support of those in other positions of Congressional and Executive power who were necessary for passage? Granted, his efforts benefitted from the fact that President Franklin D. Roosevelt was an unabashed navalist, but Roosevelt was not always fully on board with Vinson’s initiatives. How did Vinson obtain Roosevelt’s active cooperation when possible and his constitutionally required assent when necessary? What specific roles did the Navy’s leaders of the era play? The media? Advocacy groups? How did global events factor in? Did the general public play any role, and if so, to what degree did navalists reach out to the public to obtain its support or otherwise get it engaged?
I find what Vinson achieved in 1934 particularly remarkable. Amidst substantial American political opposition to rearmament and overseas entanglements, Vinson and his Senate counterpart Park Trammell got the first of the major interwar naval authorization acts passed through Congress. It seems likely that selling naval investment as a Great Depression jobs program helped, but it’s not clear to me just how much that offset the arguments of those opposed.
Therefore, if you’ve read (or written) books or journal articles that contribute to answering questions similar to the ones I outlined for any of the aforementioned periods, please share the titles in the comments thread. And if you’re in college or grad school and are searching for historical naval policy topics of great contemporary relevance to write about for coursework—and then perhaps get published—I don’t think you can go wrong exploring the late 19th and early 20th Century political paths to U.S. naval strength.

The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.