
Tuesday, September 22, 2015

Some Observations about Network-Enabled Over-the-Horizon Attacks


Norman Friedman’s 2009 book Network-Centric Warfare is one of the principal influences upon my thinking about 21st Century maritime combat. It is a seminal recounting of the evolution of modern maritime warfare systems, the ‘systems of systems’ they fit into, and the doctrines developed for employing them. It also serves as a core reference for researchers seeking to discover the fine (declassified) technical and operational details of the Cold War competition between U.S. and Soviet maritime ‘battle networks.’
One of Friedman’s most interesting observations in the book pertains to network-enabled attacks, especially from ‘over-the-horizon.’ A ship targeted using remote surveillance sensors, for example, might not realize it had been targeted until it detected inbound weapons. Friedman notes that the multi-source Soviet Ocean Surveillance System (SOSS) couldn’t enable true surprise attacks because Soviet anti-ship missile doctrine was predicated on the use of ‘pathfinder’ and ‘tattletale’ scouts for visual confirmation and classification of targets. Detection of these scouts by U.S. Navy or NATO battleforces (or theater/national surveillance systems) would provide the defenders warning that Soviet anti-ship missile platforms were nearby or that a raid was inbound. (Pg. 217-239)
In contrast, the U.S. Navy of the late 1970s and early 1980s sought to use its Ocean Surveillance Information System (OSIS) network of signals intelligence sensors and fusion centers to provide targeting cues to Tomahawk Anti-Ship Missile (TASM)-armed submarines via an effort dubbed Outlaw Shark. Since its advent a decade earlier, OSIS had been used to detect, classify, and develop “track histories” for Soviet ships in support of Navy operational-level planning. The experimental Outlaw Shark targeting capability stemmed from using OSIS’s track histories to dead-reckon Soviet ships’ geolocations at future times, then transmitting those cues to patrolling submarines. Unlike SOSS, though, OSIS did not use active surveillance or reconnaissance sensors to supplement its passive ones. As a result, Outlaw Shark targeting would have been unavailable if Soviet ships maintained disciplined Emissions Control (EMCON). (Pg. 206-209)
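To make the dead-reckoning idea concrete, here is a minimal sketch in Python, using entirely notional numbers and function names of my own invention (it is not a depiction of OSIS or Outlaw Shark), of how a contact’s last known position, course, and speed might be projected forward to build a targeting cue:

    import math

    def dead_reckon(lat_deg, lon_deg, course_deg, speed_kts, hours_elapsed):
        """Project a contact's last fixed position forward along a constant
        course and speed (flat-earth approximation, adequate for short runs)."""
        distance_nm = speed_kts * hours_elapsed
        # Resolve the course (degrees true) into north/east displacement.
        d_north_nm = distance_nm * math.cos(math.radians(course_deg))
        d_east_nm = distance_nm * math.sin(math.radians(course_deg))
        # One degree of latitude is ~60 nm; a degree of longitude shrinks with latitude.
        new_lat = lat_deg + d_north_nm / 60.0
        new_lon = lon_deg + d_east_nm / (60.0 * math.cos(math.radians(lat_deg)))
        return new_lat, new_lon

    # Notional cue: contact last fixed at 34.0N 140.0E on course 090 at 16 knots,
    # projected ahead 5 hours (a stale intercept plus the weapon's flyout time).
    print(dead_reckon(34.0, 140.0, 90.0, 16.0, hours_elapsed=5.0))

Note that the entire computation rests on the contact holding its course and speed; that fragility is the crux of the shortcomings discussed below.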
In the event of exploitable Soviet EMCON indiscipline, however, Friedman observes that Outlaw Shark targeting would in theory have denied a Soviet surface force any warning of an impending U.S. anti-ship attack. This is because the OSIS-TASM tandem’s lack of a scout meant that there would have been no discernible U.S. Navy ‘behavior’ to tip Soviet ships off that they had been targeted. Friedman concludes with the thought that even if a TASM attack had landed no blows, it nevertheless might have disrupted a Soviet surface force’s plans or driven it to take rash actions that could have been exploited offensively or defensively by other U.S. or NATO forces. (Pg. 210)
The obvious limitations of relying almost entirely upon non-real-time signals intelligence for over-the-horizon targeting contributed greatly to the Navy shelving its TASM ambitions during the early 1980s. The Navy’s own mid-to-late Cold War countertargeting doctrine and tactics made great use of EMCON and deceptive emissions against SOSS, so there was no fundamental reason why the Soviets could not have returned the favor against OSIS. Moreover, TASM employment depended upon a Soviet ship maintaining roughly the same course and speed it was on at the time of an OSIS-generated targeting cue. If the targeted Soviet ship maneuvered such that it would not be within the TASM’s preset ‘search basket’ at the anticipated time, then the TASM would miss. Nor could Navy shooters have been sure that the TASM would have locked on to a valid and desirable Soviet ship vice a lesser Soviet ship, a Soviet decoy ship, or even a non-combatant third-party’s ship.
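One crude way to see why a maneuvering target defeats a preset ‘search basket’: if the target may have turned in any direction since its last fix, the radius of the circle guaranteed to contain it grows linearly with elapsed time, and the area grows with its square. The sketch below (again Python, with purely hypothetical figures for cue age, flyout time, and target speed) illustrates the geometry; it is not a model of any actual weapon.

    import math

    def containment_radius_nm(cue_age_hr, flyout_hr, target_speed_kts, fix_error_nm=5.0):
        """Radius of the circle guaranteed to contain a target that may steam
        in any direction at up to target_speed_kts since its last fix."""
        total_hr = cue_age_hr + flyout_hr
        return fix_error_nm + target_speed_kts * total_hr

    # Hypothetical case: a 2-hour-old signals intelligence cue, a 1.5-hour
    # missile flyout, and a target capable of 25 knots.
    r = containment_radius_nm(cue_age_hr=2.0, flyout_hr=1.5, target_speed_kts=25.0)
    print(f"Containment radius: {r:.0f} nm, area: {math.pi * r ** 2:.0f} square nm")

A seeker’s preset search basket covers only a small fraction of an area that size, which is why an uncooperative or maneuvering target so easily turns the shot into a miss.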
Friedman’s point remains, though: a network-enabled attack that results in a physical miss could nevertheless theoretically produce significant tactically-exploitable psychological effects. This concept has long been used to forestall attacks by newly-detected nearby hostile submarines, even when the submarine’s precise position is not known. An anti-submarine weapon launched towards the submarine’s vicinity at minimum complicates the latter’s tactical situation and potentially forces it into a reactive and defensive posture. This can buy time for more effective anti-submarine measures including better-aimed attacks.
It therefore might be reasonable to use some longer-ranged weapons to “shock” an opponent’s forces along the lines Friedman outlines, even if the weapons’ hit probabilities are not high, if it is deemed likely that the targeted forces will react in ways that friendly forces armed with more plentiful and producible weapons could exploit. For example, an opponent’s force might light off its air defense radars upon detecting the attacker’s weapons’ own homing radars. Or perhaps the opponent’s units might distinguish themselves from non-combatant vehicles/aircraft/ships in the battlespace by virtue of their maneuvers once they detect inbound weapons. Either reaction might provide the attacker with definitive localization and classification of the opponent’s platforms, which in turn could be used to provide more accurate targeting support for follow-on attacks. Depending on the circumstances, expenditure of a few advanced weapons to ‘flush’ an opponent’s forces in these ways might be well worth it even if none hit.
But would doing so really be the best use of such weapons in most cases? We must bear in mind the advanced ordnance inventory management dilemma: higher-capability (and especially longer-range) guided weapons expended during a conflict likely will not be replaced in the attacker’s arsenal in a timely manner unless they are readily and affordably wartime-producible. Nor will weapons launched from surface ships’ or submarines’ launchers be quickly reloadable, as these platforms will have to retire from the contested zone and expend several days of transit time cycling through a rearward base for rearmament. The force-level operational tempo effects of this cycle time will not be insignificant. A compelling argument can be made that advanced weapons should be husbanded for attacks in which higher-confidence targeting is available…unless of course the responsible commander assesses that the situation at hand justifies firing based on lower-confidence targeting.
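A back-of-the-envelope illustration of that cycle-time effect, using purely notional numbers drawn from no actual platform or theater, is below; the point is simply that transit and reload time come straight out of the fraction of the force that can actually be on station shooting.

    def on_station_fraction(days_on_station, transit_days_each_way, reload_days):
        """Steady-state fraction of a shooter's patrol-rearm cycle spent in the
        contested zone with weapons available (notional)."""
        cycle_days = days_on_station + 2 * transit_days_each_way + reload_days
        return days_on_station / cycle_days

    # Notional example: magazines empty after 10 days on station, 3 days of
    # transit each way to a rearward rearmament site, 2 days to reload.
    frac = on_station_fraction(days_on_station=10, transit_days_each_way=3, reload_days=2)
    print(f"On-station availability: {frac:.0%} of each cycle")  # roughly 56%

Empty the magazines faster with low-confidence shots and the cycle shortens, dragging that percentage, and the force’s sustainable weight of fire, down with it.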
There is another option, however. Instead of expending irreplaceable advanced weapons, a network-enabled attacker might instead use decoy weapons that simulate actual weapons’ trajectories, behaviors, and emissions in order to psychologically jar an opponent’s forces or otherwise entice them to react in exploitable ways. This would be especially useful when the attacker’s confidence in his targeting picture is fairly low. SCATHE MEAN comes to mind in this respect. This is probably more practical for aircraft and the deep munitions inventories aboard aircraft carriers or at land bases. Still, it might be worth exploring how a small number of decoy weapons sprinkled within a Surface Action Group or amongst some submarines might trade operationally and tactically against using those launcher spots for actual weapons.
As for the defender, there are four principal ways to immunize against (but not decisively counter) the use of actual or decoy weapons for network-enabled ‘shock or disrupt’ attacks:

  • Distribute multi-phenomenology sensors within a defense’s outer layers in order to detect and discriminate decoy platforms or weapons at the earliest opportunity. The sensors must be able to communicate with their operators using means that are highly resistant to detection and exploitation by the attacker.

  • Institute routine, realistic, and robust training regimes that condition crews psychologically and tactically for sudden shocks such as inbound weapons “out of nowhere” or deception. This might also lead to development of tactics or operating concepts in which some or all of the defender’s units gain the ability to maintain restrictive emissions, maneuvering, and firing discipline even when an adversary’s inbound weapons are detected unless certain criteria are met.

  • Field deep (and properly positioned) defensive ordnance inventories. Note that this ordnance does not just include guns and missiles, but also electronic warfare systems and techniques.

  • Embrace tactical flexibility and seize the tactical initiative, or in other words take actions that make it far harder for an adversary to attack first. A force’s possession of preplanned branching actions that cover scenarios in which it is prematurely localized or detected by an adversary can help greatly in this regard.

Friedman’s observations regarding the psychological angles of network-enabled targeting are subtle as they require thinking about how the technological aspects of a tactical scenario might interplay with its human aspects. We tend to fixate on the former and overlook the latter. That’s an intellectual habit we’re going to need to break if we’re going to restore the capacity and conditioning we possessed just a quarter century ago for fighting a great power adversary’s networked forces.

The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.

Thursday, September 10, 2015

A Definitive Article about Information Age Naval Warfare


Earlier this week I discussed two superb articles in the July 2015 Naval Institute Proceedings that examined aspects of cyber and networking resiliency. Today I’m going to talk about the issue’s third article on cyber-electromagnetic warfare: LCDR DeVere Crooks’s and LCDR Mateo Robertaccio’s “The Face of Battle in the Information Age.”
Usually when I read a journal article I mark it up with a pen to highlight key passages or ideas so that I can revisit them later. Doing so with their article proved pointless in retrospect, as I ended up highlighting just about every one of their paragraphs.
LCDRs Crooks and Robertaccio touch on virtually every major aspect of operating under cyber-electromagnetic opposition. They correctly argue that cyber-electromagnetic warfare is integral to 21st Century naval warfare, and that we ignore that truism at our peril. They observe that while our pre-deployment training exercises are generally designed to test how well units perform particular tasks, or to test or troubleshoot plans and operating concepts, they don’t generally allow for freeplay experimentation that might uncover new insights about fighting at sea in the information age. “What will tactical-level decision-makers experience, what will they be able to understand about the battlefield around them, and how will that lead them to employ the tactics and equipment they’ve been handed?” ask the authors.
They also highlight the centrality of emissions control to combat survival, with the added observation that the Navy must learn to accept “electromagnetic silence” as its “default posture.” They decry the fact that the Navy rarely is “forced to operate in a silent (or reduced) mode for any sort of extended period or while conducting complex operations.” They allude to the fact that we were able to regularly perform at such a level as recently as a quarter century ago.
They then go into great detail asking questions about whether our training, preferred communications methods, doctrine, tactics, and tactical culture are fully aligned with the realities of fighting under cyber-electromagnetic opposition. When I was on active duty at sea in 2001-2004, I only recall one exercise in which a destroyer I served on practiced performing combat tasks while using only our passive sensor systems—and that was done at the initiative of my destroyer’s Commanding Officer. I don’t remember ever conducting a drill in any of my ships in which our connectivity with external intelligence, surveillance, and reconnaissance assets was deliberately manipulated, degraded, or severed by simulated electronic attacks. Evidently LCDRs Crooks and Robertaccio had similar experiences on their sea tours as well. The issues they raise along these lines in the middle sections of their article are worth the “price of admission” alone.
Their concluding recommendations are most commendable:
  • Begin conducting a “series of extended free play Fleet Problems with minimal scripting and objectives beyond the generation of a large body of direct, honest lessons learned and questions for further investigation.” These Fleet Problems should “allow either side to win or lose without intervention to drive a planned outcome” and should “apply as many of the atmospherics and limitations of an Information Age A2/AD environment as possible, challenging participants to work within the constraints of a battlefield that is contested in all domains.”
  • Use these experiments and other forms of analysis to generate “a set of assumptions about the conditions that are likely to apply in Information Age naval combat (in specified time frames) and mandate that they be applied to all tactics development, fleet training requirements and scenarios, manning plans, and training requirements for individual personnel” as well as “to the development of requirements for future payloads and platforms.”
  • Acknowledge at every level that the cyber and electromagnetic domains will be hotly contested. This means no longer treating the confidentiality, availability, and integrity of information “as a given,” or assuming that those domains would be only “lightly contested.” Tactical-level commanders should treat the need for temporary localized cyber-electromagnetic superiority as just as integral to sea control as is the case with the physical domains of war. As they observe, “this may often largely amount to the monitoring of operations coordinated at higher levels of command, but it is critically relevant even to individual watchstanders.” I would add that qualitative observations of the cyber-electromagnetic situation will likely be just as important as quantitative measurements of that situation.
LCDRs Crooks and Robertaccio have written a definitive thought-piece regarding modern naval warfare under cyber-electromagnetic opposition. I commend it to all naval professionals and enthusiasts alike. It should be considered a point-of-departure reference for the naval debates of our time. 
And my thanks to LCDR Crooks for sharing a follow-on surface force-centric piece here at ID last week. I truly hope his and LCDR Robertaccio’s messages percolate within the fleet. Much in the future depends upon it.
 
The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.

Tuesday, September 8, 2015

Thinking About Cyber and Networking Resiliency


I’m well over a month late writing about the July 2015 issue of USNI Proceedings. Simply put, it contains three of the finest pieces about operating under cyber-electromagnetic opposition I’ve read in a long time. I’ll be talking about two of them today and the third one later this week.
First up is LCDR Brian Evans’s and Pratik Joshi’s outstanding article “From Readiness to Resiliency.” Evans and Joshi note that past Navy cyberdefense efforts primarily focused on unit-level compliance with information assurance measures such as firewall configurations, network configuration management and behavior monitoring, physical security protections, and regular ‘hygiene’ training for users. While these kinds of measures continue to be critically important in that they deny adversaries ‘cheap and easy’ attack vectors for exploiting Navy networks and systems, the authors observe that no cyberdefense can hope to keep an intelligent, determined, and adequately resourced adversary out forever. According to the authors, last fall the Navy’s nascent Task Force Cyber Awakening concluded (correctly, I might add) that the Navy’s systems, networks, and personnel must be able to continue operating effectively, albeit with graceful degradation, in the face of cyberattacks. In other words, they must become resilient.
Evans and Joshi essentially outline a concept for shipboard “cyber damage control.” They describe how the longstanding shipboard material readiness conditions X-RAY, YOKE, and ZEBRA can also be applied to shipboard networks: crews can proactively shut down selected internal and external network connections as tactical circumstances warrant, or they can do so reactively if cyber exploitation is suspected. The authors outline how crews will be able to segment networks and isolate mission-critical systems from less-critical systems, or isolate compromised systems from uncompromised systems, much like damaged compartments can be isolated to prevent the spread of fire, smoke, or flooding. The authors go on to discuss how damage isolation must be followed by repair efforts, and how knowledge of a system’s or network segment’s last known good state can be used to recognize what an attacker has exploited and how in order to aid restoration. It stands to reason that affected systems and network segments might additionally be restorable by crews to a known good state, or at least into a “safe state” that trades gracefully degraded non-critical functionality for sustainment of critical functions.
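As a purely illustrative sketch of what condition-based ‘cyber damage control’ might look like in software (my own notional construct, not a description of any fielded Navy system or of the authors’ design), one could imagine a policy table that maps material-condition-style postures to the permitted connections between shipboard network segments:

    from enum import Enum

    class CyberCondition(Enum):
        """Notional network postures patterned on material conditions XRAY/YOKE/ZEBRA."""
        XRAY = 1   # permissive: external links and inter-segment routing open
        YOKE = 2   # heightened: non-mission-critical external connectivity closed
        ZEBRA = 3  # combat or suspected intrusion: maximum segmentation

    # Hypothetical policy table: which notional shipboard network segments may
    # exchange traffic with which others under each condition.
    POLICY = {
        CyberCondition.XRAY:  {"combat": {"nav", "admin", "offship"},
                               "nav":    {"combat", "admin"},
                               "admin":  {"combat", "nav", "offship"}},
        CyberCondition.YOKE:  {"combat": {"nav", "offship"},
                               "nav":    {"combat"},
                               "admin":  set()},
        CyberCondition.ZEBRA: {"combat": {"nav"},
                               "nav":    {"combat"},
                               "admin":  set()},
    }

    def allowed(condition, src, dst):
        """Return True if traffic from segment src to segment dst is permitted."""
        return dst in POLICY[condition].get(src, set())

    # Setting ZEBRA isolates the administrative LAN and severs off-ship
    # connectivity while preserving the combat-system/navigation link.
    print(allowed(CyberCondition.ZEBRA, "combat", "offship"))  # False
    print(allowed(CyberCondition.ZEBRA, "combat", "nav"))      # True

As with the material conditions it mimics, the table itself matters less than crews drilling the transitions until setting them is reflexive.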
It’s important to keep in mind, though, that resilience requires more than just technological and procedural measures. When I was an Ensign on my first ship in 2001, many crewmembers would tell me of the “Refresher Training” at Guantanamo Bay that Atlantic Fleet ships went through up until budget cutbacks ended the program in the mid-1990s or so. At REFTRA, the assessors would put ships through exacting combat drills in which chaotic attacks, major damage, and grievous casualties were simulated in order to expose crews to the most stress possible short of actual battle. According to some of the senior enlisted I served with, it wasn’t unusual for the assessors to “cripple” a ship’s fighting capacity or “kill off” much of a watchteam or a damage control party to see how the “survivors” reacted. Some ships were supposedly tethered to Guantanamo for weeks on end until the assessors were convinced that the crews had demonstrated adequate combat conditioning—and thus a greater potential for combat resilience. This kind of training intensity must be restored, preferably by shipboard leaders themselves, with the 21st Century addition of exposing their crews to the challenges of fighting through cyberattacks. Perhaps a scenario might involve intensive simulation of system malfunctions as a pierside ship rushes to prepare to sortie during an escalating crisis. Or perhaps it might involve simulated malfunctions at sea as “logic bombs” or an “insider attack” are unleashed. Evans and Joshi allude to the cyber-conditioning angle in the fictional future shipboard training drill they use to close their article. One hopes that Task Force Cyber Awakening is in fact exploring how to develop the psychological aspect of resilience within the fleet.
This leads nicely into the July issue’s other excellent technical article on network resilience, CDR John Dahm’s “Next Exit: Joint Information Environment.” CDR Dahm argues that even if the Defense Department were to successfully consolidate and standardize the services’ information infrastructures within the most hardened of citadels, this Joint Information Environment (JIE) would still only be as combat-effective as the security of the communication pathways connecting that citadel to forces in the field. He relates a fictional saga in which a near-peer adversary wins a limited war by severing the U.S. military’s satellite communications pathways as well as the oceanic fiber optic cables connecting Guam and Okinawa to the internet. He correctly notes that the “transmission layer” connecting deployed U.S. forces and theater/national intelligence, surveillance, and reconnaissance assets with the JIE presents the most vulnerable segment of the entire JIE concept. He alludes to the fact that a force that is dependent upon exterior lines of networking is essentially setting itself up for ruin if an adversary lands effective physical, electronic, or cyber attacks against any critical link in the communications chain. He closes by observing that “the communications necessary to support a cloud-based network architecture cannot simply be assumed,” with the implication being that the JIE concept must be expanded to encompass the transmission layer if it is to be successful in a major war.
We know that just as there can never be such a thing as an impregnable “information citadel,” there is no way to make any communications pathway completely secure from disruption, penetration, or exploitation. We can certainly use measures such as highly-directional line-of-sight communications systems and low probability of intercept communications techniques to make it exceedingly difficult for an adversary to detect and exploit our communications pathways. We can also use longstanding measures such as high frequency encoded broadcast as a one-way method of communicating operational orders and critical intelligence from higher-level command echelons to deployed forces. But both reduce the amount of information flowing to those forces to a trickle compared to what they are used to receiving when unopposed, and the latter cuts off the higher echelon commander from knowledge that the information he or she had transmitted has been received, correctly interpreted, and properly implemented. And neither method is unequivocally free from the risk of effective adversary attack. What’s needed, then, is a foundation of resilience built upon a force-wide culture of mission command. That may be outside the JIE concept’s scope, but it will be integral to its success.


The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.