
Thursday, September 10, 2015

A Definitive Article about Information Age Naval Warfare


Earlier this week I discussed two superb articles in the July 2015 Naval Institute Proceedings that examined aspects of cyber and networking resiliency. Today I’m going to talk about the issue’s third article on cyber-electromagnetic warfare: LCDR DeVere Crooks’s and LCDR Mateo Robertaccio’s “The Face of Battle in the Information Age.”
Usually when I read a journal article I mark it up with a pen, highlighting key passages or ideas so that I can revisit them later. Doing so with this article proved pointless in retrospect: I ended up highlighting just about every paragraph.
LCDRs Crooks and Robertaccio touch on virtually every major aspect of operating under cyber-electromagnetic opposition. They correctly argue that cyber-electromagnetic warfare is integral to 21st Century naval warfare, and that we ignore that truism at our peril. They observe that while our pre-deployment training exercises are generally designed to test how well units perform particular tasks, or to test or troubleshoot plans and operating concepts, they don’t generally allow for freeplay experimentation that might uncover new insights about fighting at sea in the information age. “What will tactical-level decision-makers experience, what will they be able to understand about the battlefield around them, and how will that lead them to employ the tactics and equipment they’ve been handed?” ask the authors.
They also highlight the centrality of emissions control to combat survival, adding that the Navy must learn to accept “electromagnetic silence” as its “default posture.” They decry the fact that the Navy is rarely “forced to operate in a silent (or reduced) mode for any sort of extended period or while conducting complex operations,” and they note that the fleet routinely operated that way as recently as a quarter century ago.
They then ask, in great detail, whether our training, preferred communications methods, doctrine, tactics, and tactical culture are fully aligned with the realities of fighting under cyber-electromagnetic opposition. When I was on active duty at sea in 2001-2004, I recall only one exercise in which a destroyer I served on practiced performing combat tasks using only our passive sensor systems, and even that was done at the initiative of my destroyer’s Commanding Officer. I don’t remember any of my ships ever conducting a drill in which our connectivity with external intelligence, surveillance, and reconnaissance assets was deliberately manipulated, degraded, or severed by simulated electronic attacks. Evidently LCDRs Crooks and Robertaccio had similar experiences on their sea tours. The issues they raise along these lines in the middle sections of their article are worth the “price of admission” alone.
Their concluding recommendations are most commendable:
  • Begin conducting a “series of extended free play Fleet Problems with minimal scripting and objectives beyond the generation of a large body of direct, honest lessons learned and questions for further investigation.” These Fleet Problems should “allow either side to win or lose without intervention to drive a planned outcome” and should “apply as many of the atmospherics and limitations of an Information Age A2/AD environment as possible, challenging participants to work within the constraints of a battlefield that is contested in all domains.”
  • Use these experiments and other forms of analysis to generate “a set of assumptions about the conditions that are likely to apply in Information Age naval combat (in specified time frames) and mandate that they be applied to all tactics development, fleet training requirements and scenarios, manning plans, and training requirements for individual personnel” as well as “to the development of requirements for future payloads and platforms.”
  • Acknowledge at every level that the cyber and electromagnetic domains will be hotly contested. This means no longer treating the confidentiality, availability, and integrity of information “as a given” or otherwise that it would be “lightly contested.” Tactical-level commanders should treat the need for temporary localized cyber-electromagnetic superiority as just as integral to sea control as is the case with the physical domains of war. As they observe, “this may often largely amount to the monitoring of operations coordinated at higher levels of command, but it is critically relevant even to individual watchstanders.” I would add that qualitative observations of the cyber-electromagnetic situation will likely be just as important as quantitative measurements of that situation.
LCDRs Crooks and Robertaccio have written a definitive thought-piece on modern naval warfare under cyber-electromagnetic opposition. I commend it to naval professionals and enthusiasts alike. It should be considered a point-of-departure reference for the naval debates of our time.
And my thanks to LCDR Crooks for sharing a follow-on surface force-centric piece here at ID last week. I truly hope his and LCDR Robertaccio’s messages percolate within the fleet. Much in the future depends upon it.
 
The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.

Tuesday, September 8, 2015

Thinking About Cyber and Networking Resiliency


I’m well over a month late writing about the July 2015 issue of USNI Proceedings. Simply put, it contains three of the finest pieces about operating under cyber-electromagnetic opposition I’ve read in a long time. I’ll be talking about two of them today and the third one later this week.
First up is LCDR Brian Evans’s and Pratik Joshi’s outstanding article “From Readiness to Resiliency.” Evans and Joshi note that past Navy cyberdefense efforts focused primarily on unit-level compliance with information assurance measures such as firewall configurations, network configuration management and behavior monitoring, physical security protections, and regular ‘hygiene’ training for users. While these measures remain critically important in that they deny adversaries ‘cheap and easy’ attack vectors for exploiting Navy networks and systems, the authors observe that no cyberdefense can hope to keep an intelligent, determined, and adequately resourced adversary out forever. According to the authors, last fall the Navy’s nascent Task Force Cyber Awakening concluded (correctly, I might add) that the Navy’s systems, networks, and personnel must be able to continue operating effectively, albeit with graceful degradation, in the face of cyberattacks. In other words, they must become resilient.
Evans and Joshi essentially outline a concept for shipboard “cyber damage control.” They describe how the longstanding shipboard material readiness conditions X-RAY, YOKE, and ZEBRA can also be applied to shipboard networks: crews can proactively shut down selected internal and external network connections as tactical circumstances warrant, or they can do so reactively if cyber exploitation is suspected. The authors outline how crews will be able to segment networks and isolate mission-critical systems from less-critical systems, or isolate compromised systems from uncompromised systems, much like damaged compartments can be isolated to prevent the spread of fire, smoke, or flooding. The authors go on to discuss how damage isolation must be followed by repair efforts, and how knowledge of a system’s or network segment’s last known good state can be used to recognize what an attacker has exploited and how in order to aid restoration. It stands to reason that affected systems and network segments might additionally be restorable by crews to a known good state, or at least into a “safe state” that trades gracefully degraded non-critical functionality for sustainment of critical functions.
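To make the damage control analogy concrete, here is a minimal sketch of what “cyber readiness conditions” might look like in software. The segment names, condition labels, and mappings below are entirely my own illustration; they are not drawn from Evans’s and Joshi’s article or any actual shipboard system:

```python
# A minimal sketch of "cyber readiness conditions" loosely analogous to
# material conditions X-RAY/YOKE/ZEBRA. All names and mappings here are
# hypothetical illustrations, not an actual Navy implementation.
from dataclasses import dataclass, field

# Hypothetical shipboard network segments, from most to least critical.
SEGMENTS = {"combat_systems", "navigation", "logistics", "crew_services",
            "external_satcom"}

# Segments that remain connected under each condition. Tighter conditions
# isolate more of the network, the way ZEBRA seals more fittings than X-RAY.
CONDITIONS = {
    "CYBER XRAY": SEGMENTS,                            # fully open
    "CYBER YOKE": SEGMENTS - {"crew_services"},        # reduced connectivity
    "CYBER ZEBRA": {"combat_systems", "navigation"},   # mission-critical only
}

@dataclass
class ShipNetwork:
    condition: str = "CYBER XRAY"
    quarantined: set = field(default_factory=set)

    def set_condition(self, condition: str) -> None:
        """Proactively sever segments not permitted under the new condition."""
        self.condition = condition
        for segment in SEGMENTS - CONDITIONS[condition]:
            print(f"Severing links to {segment}")

    def quarantine(self, segment: str) -> None:
        """Reactively isolate a segment suspected of compromise, much as a
        damaged compartment is sealed to stop the spread of fire or flooding."""
        self.quarantined.add(segment)
        print(f"Isolating {segment} pending inspection and restoration")

net = ShipNetwork()
net.set_condition("CYBER ZEBRA")   # tactical circumstances warrant isolation
net.quarantine("logistics")        # suspected exploitation detected
```

The deeper point is the authors’: crews need pre-planned, drilled conditions of this sort so that isolating a network segment in combat becomes as reflexive as setting material condition ZEBRA.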
It’s important to keep in mind, though, that resilience requires more than just technological and procedural measures. When I was an Ensign on USS First Ship in 2001, many crewmembers would tell me of the “Refresher Training” at Guantanamo Bay that Atlantic Fleet ships went through up until budget cutbacks ended the program in the mid-1990s or so. At REFTRA, the assessors would put ships through exacting combat drills in which chaotic attacks, major damage, and grievous casualties were simulated in order to expose crews to the most stress possible short of actual battle. According to some of the senior enlisted I served with, it wasn’t unusual for the assessors to “cripple” a ship’s fighting capacity or “kill off” much of a watchteam or a damage control party to see how the “survivors” reacted. Some ships were supposedly tethered to Guantanamo for weeks on end until the assessors were convinced that the crews had demonstrated adequate combat conditioning—and thus a greater potential for combat resilience. This kind of training intensity must be restored, preferably by shipboard leaders themselves, with the 21st Century addition of exposing their crews to the challenges of fighting through cyberattacks. Perhaps a scenario might involve intensive simulation of system malfunctions as a pierside ship rushes to prepare to sortie during an escalating crisis. Or perhaps it might involve simulated malfunctions at sea as “logic bombs” or an “insider attack” are unleashed. Evans and Joshi allude to the cyber-conditioning angle in the fictional future shipboard training drill they use to close their article. One hopes that Task Force Cyber Awakening is in fact exploring how to develop the psychological aspect of resilience within the fleet.
This leads nicely into the July issue’s other excellent technical article on network resilience, CDR John Dahm’s “Next Exit: Joint Information Environment.” CDR Dahm argues that even if the Defense Department were to successfully consolidate and standardize the services’ information infrastructures within the most hardened of citadels, this Joint Information Environment (JIE) would still only be as combat-effective as the security of the communication pathways connecting that citadel to forces in the field. He relates a fictional saga in which a near-peer adversary wins a limited war by severing the U.S. military’s satellite communications pathways as well as the oceanic fiber optic cables connecting Guam and Okinawa to the internet. He correctly notes that the “transmission layer” connecting deployed U.S. forces and theater/national intelligence, surveillance, and reconnaissance assets with the JIE is the most vulnerable segment of the entire JIE concept. He alludes to the fact that a force dependent upon exterior lines of networking is setting itself up for ruin if an adversary lands effective physical, electronic, or cyber attacks against any critical link in the communications chain. He closes by observing that “the communications necessary to support a cloud-based network architecture cannot simply be assumed,” with the implication that the JIE concept must be expanded to encompass the transmission layer if it is to be successful in a major war.
We know that just as there can never be such a thing as an impregnable “information citadel,” there is no way to make any communications pathway completely secure from disruption, penetration, or exploitation. We can certainly use measures such as highly directional line-of-sight communications systems and low-probability-of-intercept techniques to make it exceedingly difficult for an adversary to detect and exploit our communications pathways. We can also use longstanding measures such as encoded high frequency broadcast as a one-way method of communicating operational orders and critical intelligence from higher-level command echelons to deployed forces. But both reduce the flow of information to those forces to a trickle compared with what they are used to receiving when unopposed, and the latter cuts the higher-echelon commander off from any confirmation that the information he or she transmitted has been received, correctly interpreted, and properly implemented. Nor is either method unequivocally free from the risk of effective adversary attack. What’s needed, then, is a foundation of resilience built upon a force-wide culture of mission command. That may be outside the JIE concept’s scope, but it will be integral to its success.
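To illustrate the feedback problem in miniature, here is a toy sketch. It assumes nothing about actual fleet broadcast systems; it simply models the information flow of a one-way channel, in which the sender can add redundancy but never learns what arrived:

```python
import random

def one_way_broadcast(message: str, repeats: int = 3, loss_rate: float = 0.4) -> list:
    """Simulate sending several copies of an order over a lossy one-way
    channel. The return value models what the *receiver* ends up holding;
    the sender gets nothing back, because no return path exists."""
    return [message for _ in range(repeats) if random.random() > loss_rate]

received = one_way_broadcast("PROCEED TO STATION ALFA")
print(f"Receiver holds {len(received)} copies")

# Whether that prints 0 or 3, the sender's picture is identical: silence.
# The order must therefore carry the commander's intent on its own, which
# is why a culture of mission command has to underpin the whole concept.
```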


The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.

Monday, March 23, 2015

Honeypots: An Overlooked Cyberweapon


Most discussions of the use of ‘cyber’ as ‘fires’ supporting conventional forces focus on penetrating an enemy’s systems or networks to ‘see’ or manipulate what he ‘sees,’ disrupt or corrupt his communications, disable or damage select systems, and so on. However, there is no assurance that the specific vulnerabilities an attack is designed to exploit will still be available when needed in combat. Vulnerabilities are discovered and patched all the time (though as a practical matter, it is impossible to identify every vulnerability that actually exists in a complex system). An adversary can also change his network topology, or close off access points the attacker needs, at inopportune times. Lastly, an exploit is a precious thing: a single use may alert the adversary to a particular vulnerability, and may even hand him new techniques or components to reuse in his own arsenal of exploits. Penetrative cyberattacks therefore cannot be assured under all conditions, and may not be worth burning a relevant exploit under some conditions. This hardly means they are impossible or not worth the costs. It does mean we must be sober about their combat potential.
It is a given that adversaries will attempt their own wartime penetrative cyberattacks on our military systems and networks. We generally view this as a defensive problem. We often forget that their attacks can also provide us with (passive) offensive opportunities.
Counterintelligence operations and military deception efforts have long used the tactic of feeding disinformation to an adversary’s intelligence collection apparatus. This generally involves knowing at least some of an adversary’s preferred intelligence collection points as well as what kind of ‘evidence’ is best suited to sell the adversary the desired deceptive ‘story.’ Or if it isn’t clear how to convincingly sell a story, the deceiver can conceal accessible ‘real’ information (or make it appear fake) by surrounding it with ‘haystacks’ of false information.
The tactic made a seamless transition into the network age via the honeypot concept. One of the earliest honeypot examples I know of dates back to 1986 when astronomer Cliff Stoll populated one of the mainframes he administered at Lawrence Berkeley Laboratory with entire directories of fake files made to appear related to the Strategic Defense Initiative to help entrap a KGB-sponsored hacker. Stoll had monitored the hacker for quite some time, so he knew exactly what kinds of disinformation would serve as ideal bait. As computing and networking technology has advanced, so have the honeypots (and honeynets).
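As a bare-bones illustration of the Stoll-style approach (plant plausible bait, then log every touch), consider the following sketch. The file names and contents are invented for this example; Stoll’s actual bait was a set of fake “SDINET” office files:

```python
import logging
from pathlib import Path

# Any access to bait is suspicious by construction, so logging it both
# detects the intruder and reveals what he is shopping for.
logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

BAIT_DIR = Path("sdinet_project")   # hypothetical bait directory
BAIT_FILES = {
    "funding_request.txt": "Draft FY87 budget request for sensor array...",
    "site_contacts.txt": "Program office points of contact (working copy)...",
}

def plant_bait() -> None:
    """Seed the directory with files an intruder hunting SDI material might grab."""
    BAIT_DIR.mkdir(exist_ok=True)
    for name, text in BAIT_FILES.items():
        (BAIT_DIR / name).write_text(text)

def serve_bait(name: str) -> str:
    """Wrapper a monitored service would use: hand over the bait, log the touch."""
    logging.info("bait accessed: %s", name)
    return (BAIT_DIR / name).read_text()

plant_bait()
serve_bait("funding_request.txt")   # a simulated intruder access, now on record
```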
Honeypots could be outstanding assets for helping to thwart an adversary’s military surveillance and reconnaissance efforts. I outlined how this might be done in my 2013 maritime deception and concealment article; a peer reviewer suggested that I call the technique “Computer Network Charade” (CNC) to line up with the Defense Department computer network operations terminology of the time:
CNC takes advantage of the fact that timely fusion of intelligence into a situational picture is exceptionally difficult, even when aided by data mining and other analytical technologies, since a human generally has to assess each piece of “interesting” information. Once counterintelligence reveals an adversary’s intelligence exploitation activities within friendly forces’ networks, CNC can feed manipulative information tied to a deception story or worthless information meant to saturate. This can be done using the existing exploited network elements, or alternatively by introducing “honeypots.” Massive amounts of such faked material as documents, message traffic, e-mails, chat, or database interactions can be auto-generated and populated with unit identities, locations, times, and even human-looking errors. The material can be either randomized to augment concealment or pattern-formed to reinforce a deception story, as appropriate. A unit can similarly manipulate its network behavior to defeat traffic analysis, or augment the effectiveness of a decoy group by simulating other units or echelons. All this leaves the adversary the task of discriminating false content from any real items he might have collected… this hypothetical CNC tactic is envisioned for the Nonsecure Internet Protocol Router Network (NIPRNet) and perhaps also the Secure Internet Protocol Router Network (SIPRNet). It is not envisioned for operational or tactical data-link or distributed fire-control networks.
Regardless of CNC method, it can be determined whether or not planted disinformation has been captured by the adversary. The commonalities of CNC with many communication-deception tactics are not coincidental. In fact, civilian mass media, social networks, and e-mail pathways can also be used as disinformation channels in support of forward forces.
CNC’s relative immaturity means that its viability must be proved in war games, battle experiments, and developmental tests before it can be incorporated in doctrine and operational plans. CNC may well prove more useful for concealment (saturating adversary collection systems and overwhelming decision makers with sheer volume and ambiguity) than for outright deception. A potentially useful way to estimate its combat efficacy would be to study historical cases of equivalent communications deception. For example, in spring 1942, U.S. naval intelligence used a false, unencrypted radio message about Midway Island’s water-purification system to elicit enemy communications activity that helped verify that Midway was indeed the Imperial Japanese Navy’s next target. There is little conceptual difference between this episode and how CNC might be used in the future. (Pg. 94, 111-112)
CNC (or whatever else you might prefer to call it) therefore represents a form of anti-intelligence/surveillance/reconnaissance.
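To give a feel for how CNC’s auto-generation might work, here is a toy sketch. The unit identities, positions, and message format are invented, and a real implementation would have to mimic genuine traffic formats far more faithfully to survive adversary scrutiny:

```python
import random
from datetime import datetime, timedelta

FAKE_UNITS = ["DDG-00", "CG-99", "LCS-98"]               # hypothetical hulls
OPAREA = [(24.5, 122.0), (25.1, 123.4), (23.8, 121.6)]   # decoy station positions

def fake_message(when: datetime) -> str:
    """Generate one plausible but false position report."""
    unit = random.choice(FAKE_UNITS)
    lat, lon = random.choice(OPAREA)
    lat += random.uniform(-0.2, 0.2)   # jitter so tracks look organic
    lon += random.uniform(-0.2, 0.2)
    text = f"{when:%d%H%M}Z {unit} POSIT {lat:.2f}N {lon:.2f}E OPS NORMAL"
    if random.random() < 0.05:         # inject occasional human-looking errors
        text = text.replace("NORMAL", "NORMAl")
    return text

# Pattern-formed traffic to reinforce a deception story: a steady comms
# rhythm suggesting a (nonexistent) surface group holding station.
start = datetime(2015, 3, 23, 6, 0)
haystack = [fake_message(start + timedelta(minutes=30 * i)) for i in range(6)]
print("\n".join(haystack))
```

Randomizing the same generator’s output rather than patterning it would serve the concealment (saturation) role instead, per the excerpt above.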
Another potential use of honeypots is to attack the adversary’s warfare systems or military support infrastructure indirectly and over time. As CFR’s Adam Segal pointed out earlier this month, during the early 1980s French intelligence granted the CIA use of a KGB defector-in-place to funnel disinformation into the Soviet program for collecting sensitive Western technologies. This ‘Farewell dossier’ not only led to the rolling up of the KGB’s technology transfer operations against European targets, but also induced the Soviets to use flawed designs and defective components in a wide range of military and industrial systems. It has long been rumored that a section of a Trans-Siberian gas pipeline suffered a massive explosion in 1982 due to ‘tailored’ industrial control software deliberately exposed to KGB collection assets.
Segal is absolutely correct about how Farewell could apply in the network age. If a given opponent is striving to advance its national technology base by stealing U.S. data, then it makes great sense to use honeypots and honeynets to feed that opponent false information. The opponent’s use of such reverse-engineered technologies in his own systems could create vulnerabilities the U.S. could later exploit. Similarly, if an opponent’s collections against U.S. military technologies are intended to find exploitable vulnerabilities for use in a crisis or war, then the U.S. could disclose false vulnerabilities in order to induce the opponent to waste precious resources developing and stockpiling worthless exploits. Even if the opponent discovered that planted data was deliberately misleading, his realization of the scale on which honeypots were being used might cause him to doubt the legitimacy of other ‘true’ data collected by his hacking and exfiltration operations. The return on investment could be incalculable.
Honeypots and honeynets may not be as direct as penetrative cyberattacks, and their effects would most definitely not be immediately observable. All the same, they would likely be more available in war, as they have the advantage of the adversary ‘running straight into the weapon.’ The nascent Long Range Research and Development Planning Program (LRRDPP) under the ‘Third Offset Strategy’ initiative ought to encourage development of technologies supporting honeypots and honeynets that exhibit highly realistic behaviors and can automatically generate massive amounts of realistic but misleading, useless, or fault-laden information, all while drawing attention away from a network’s actual elements of value.


The views expressed herein are solely those of the author and are presented in his personal capacity. They do not reflect the official positions of Systems Planning and Analysis, and to the author’s knowledge do not reflect the policies or positions of the U.S. Department of Defense, any U.S. armed service, or any other U.S. Government agency.