Battlefield theory meets cybersecurity.
The expression "fog of war" refers to the dramatic increase in uncertainty—a decrease in situational awareness—encountered by soldiers and commanders in military operations. Where is the enemy? What does it consist of? Where is my own army in relation? This was a very literal limitation prior to aircraft and, later, satellite surveillance. Intelligence came slowly, if at all.
Your picture of the other side was however much of it you could see firsthand, scaled up by best guesses. In this fog, tens of thousands of soldiers could be lost in a single all-but-blind WWI battlefield offensive.
A trio of computer scientists at the US Army Research Laboratory see the fog of war as a useful metaphor for a powerful new form of data security, which they describe in the current issue of Computer as "cyberfog." They imagine data hacked apart and embedded in a fog network, where it's fragmented into tiny pieces and distributed across not just servers but end-user devices like the one you're currently reading this story on. Even if such data is partially compromised, the information as a whole would remain opaque to the adversary while staying useful to its owners.
This is the argument, anyway. Fog computing or fog networking is not in itself a completely new idea. It became an official thing last year with the formation of the OpenFog Consortium, with membership including ARM, Cisco, Dell, Intel, Microsoft, and Princeton University. Fog computing is an architectural pattern meant to enable the devices making up the Internet of Things. It's kind of a way of having your cloud cake and eating it too for industrial real-time systems that depend on having data ready at shorter timescales than can be provided by distant data centers, but that would still benefit from the fragmentation and distribution of stored data.
The fog fix, the essence of a fog network, is that this fragmented data is at least in part maintained within a network that's local to end-user devices (whether it's an industrial control system or an iPad) rather than centralized cloud locations (a data center on the other side of the country, perhaps). This is enabled by things like ad hoc peer-to-peer networks, mesh networks, and self-healing networks.
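To make that placement idea concrete, here's a minimal sketch of how shards might be assigned to nearby devices instead of a remote data center. Everything here is hypothetical illustration, not the researchers' actual scheme: `place_shards`, the node names, and the latency-only placement rule are assumptions (a real fog platform would also weigh capacity, churn, and trust).

```python
def place_shards(shards, nodes, max_latency_ms=20):
    """Assign each shard to a nearby device, round-robin over the
    lowest-latency local nodes.

    shards: list of shard identifiers.
    nodes: dict mapping node name -> measured latency in milliseconds.
    Nodes slower than max_latency_ms (e.g., a distant data center)
    are excluded, keeping data within the local fog.
    """
    local = sorted(
        (n for n in nodes if nodes[n] <= max_latency_ms),
        key=nodes.get,  # nearest first
    )
    if not local:
        raise ValueError("no local nodes within the latency budget")
    return {shard: local[i % len(local)] for i, shard in enumerate(shards)}
```

With a household iPad, a router, and a cross-country data center as candidate nodes, every shard lands on one of the two local devices; the data center never receives a copy.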
The idea behind fog storage is similar to that of distributed storage generally. Here, units of information are broken apart into what are known as shards, which are usually imagined to be rows broken out from database tables and moved into other databases. So, we can imagine a single database containing not entire data tables but thin slices of many different tables all stashed together. Moving from the cloud to the fog is then a matter of moving those shards onto devices within a local network.
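The row-slicing described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; `shard_table` and `reassemble` are names I've invented, and a production system would use deterministic placement and replication rather than random scatter.

```python
import random

def shard_table(table_name, rows, n_stores):
    """Break a table into row-level shards and scatter them across stores.

    Each store ends up holding thin slices of (potentially) many tables
    rather than any table in full.
    """
    stores = [[] for _ in range(n_stores)]
    for i, row in enumerate(rows):
        shard = {"table": table_name, "index": i, "row": row}
        stores[random.randrange(n_stores)].append(shard)
    return stores

def reassemble(stores, table_name):
    """The legitimate owner, who can query every store, restores the
    original row order from the shards' indices."""
    shards = [s for store in stores for s in store if s["table"] == table_name]
    return [s["row"] for s in sorted(shards, key=lambda s: s["index"])]
```

The asymmetry is the point: reassembly is trivial for whoever can reach all the stores, but any single store holds only a fragment.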
How can that enhance security? Well, here we need to return to the original metaphor. An adversary looking to hack our network sees these bundles of shards littered across a number of devices, but each individual shard yields little value because it's been stripped of its context (the original table). A battlefield commander may gain some intelligence about the movement of a single enemy unit, but within the fog of war, they will have difficulty in determining what that means. It is likewise intelligence without context, which may be little better than noise.
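Context-stripping can be pushed further than the sketch above suggests: label each shard with an opaque random identifier and keep the manifest (which id belongs where) with the data owner. The functions below are my own illustrative names, not anything from the paper.

```python
import secrets

def fog_shards(rows):
    """Strip shards of their context by keying them with opaque random ids.

    Returns (manifest, shards): the manifest records the ids in original
    order and stays with the owner; the shards dict is what gets
    distributed. A node holding shards alone sees neither table names,
    row positions, nor which shards belong together.
    """
    manifest = []
    shards = {}
    for row in rows:
        sid = secrets.token_hex(8)
        manifest.append(sid)
        shards[sid] = row
    return manifest, shards

def defog(manifest, shards):
    """Only the manifest holder can restore order and meaning."""
    return [shards[sid] for sid in manifest]
```

This mirrors the battlefield analogy: the adversary can capture individual pieces, but without the manifest the pieces are intelligence without context.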
So, fogging increases uncertainty from the perspective of an adversary. This is further compounded in the cyberfog approach through intentional deception and obfuscation. "Obfuscation subjects information to multiple, equally possible interpretations, whereas deception aims to induce an incorrect interpretation that thwarts the adversary's goals," the Army researchers write. "Obfuscation and deception can be achieved in many ways—for example, by providing a misleading view of the network's topology, traffic, and behavior."
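One simple way to get the "multiple, equally possible interpretations" the researchers describe is to mix real shards with decoys. This sketch is my own illustration of the obfuscation idea, not a technique from the paper (which also mentions misleading views of topology and traffic); real and decoy shards would need identical formats to be truly indistinguishable.

```python
import secrets

def obfuscate(real_shards, n_decoys):
    """Store real shards alongside random decoys under opaque ids.

    real_shards: list of bytes payloads. Only the owner keeps real_ids,
    so an adversary who captures the store cannot tell which of the
    equally plausible entries carry actual information.
    """
    store = {}
    real_ids = set()
    for shard in real_shards:
        sid = secrets.token_hex(8)
        real_ids.add(sid)
        store[sid] = shard
    for _ in range(n_decoys):
        store[secrets.token_hex(8)] = secrets.token_bytes(16)
    return store, real_ids
```

From the adversary's side, the store's size reveals an upper bound on the data but nothing about which fraction of it is real.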
The researchers caution, however, that none of this is particularly easy to implement: "Fogging/defogging must take into account the size, density, complexity, and tempo of the network, the mobility and geographic proximity of users and nodes where data shards are stored, how soon sharded information will become stale, how soon stored information might be needed, and so on."
Handling all of this will likely require some deep, and as yet undeveloped, computer science.