Jumping the Airgap

One of the most useful tools in the network security toolkit is the airgapped network, used to store and protect data away from wide area networks like the Internet. “Airgapping” a network means disconnecting it from any gateway (router/modem) that bridges its connection to the larger internet; in the simplest case, it is a computer that cannot connect, or never connects, to the internet. If you’re not connected to the internet, your chances of being attacked drop considerably. Automated programs can still operate and propagate within an airgapped network, but they cannot reach command and control entities to receive instructions or exfiltrate data. These networks operate with a gap of air between themselves and the networks connected to the internet, hence the name “airgap”. Jumping the airgap refers to the ability of malicious attacks to traverse this gap, infect computers on the separated network, and exfiltrate the data found there.

What constitutes an airgapped network? A wifi connection to your laptop is not an airgap; it represents a bridge between a transmitter (the wireless router) and a receiver (the wifi antenna in the laptop). An airgapped laptop would have its wireless receiver removed and be connected to an isolated network via an Ethernet cord. A laptop with secured wifi credentials is not an airgapped machine, in the sense that it is one exploit away from bridging the gap to the wider internet. Nor is a computer on a LAN that connects at any point to the larger internet. A computer sitting in a soundproof room, running on a generator or some other mechanism to defeat ethernet-over-power attacks, behind the NATO-recommended wall thickness to prevent radiation leakage, and without any windows to leak visual cues, would be a conceptually perfect airgap. That is, until the next technique is discovered, possibly including some kind of defeat of the computer/biologic barrier.

In what situations would an airgapped network be appropriate? According to Wikipedia, military, government, banking, industrial, aviation, and medical networks would all benefit from the security of an airgap. Say the US military was using a supposedly secure network of Windows 7 PCs to manage data on troop locations and documented strategy policies. The network is locked down from a systems administration standpoint: all the programs are up to date, all the group policies are set correctly, and access is audited. Now suppose a Windows 7 exploit is found that allows attackers to subvert the security measures in place. All that work is for naught once the exploited system behaves like a public node on the larger internet. The point of the airgap is to ensure that such exploits aren’t devastating to the security of the data and its users. Essentially, a computer on a traditional, non-airgapped network is one misconfiguration or one exploit away from being bidirectionally compromised.

Unidirectional networks are a large part of operational security when dealing with airgapped systems. Much as classified information moves within an organization, easily promoted to higher levels of clearance but declassified to lower levels only under extreme restriction, data can move onto the airgapped system with relatively little scrutiny compared to being moved off of it. This unidirectional flow creates a fail-safe: even when a machine is compromised, the malicious actor fails to exfiltrate data simply because the medium to transport it back is not there. Some inbound flow is necessary because computers need to be updated and need new data moved to them, both of which require moving data that has touched the outside internet onto the machine. The idea is that once data is on these airgapped machines, it never returns to machines that maintain an external internet connection. Imagine a spy who gets into a vault containing state secrets, on the condition that once inside he may never leave: he is unable to report back what he has found, and his services are ultimately useless. The creation of airgap-jumping malware is essentially the invention of unorthodox methods that let this spy communicate what he has found without ever leaving the vault. The strictest airgap regimes may prohibit transferring this data to internet-capable machines at all, relying instead on human beings to interpret, query, curate, and move the data to its applicable uses. Unidirectional data flow does allow malicious activity to enter an airgapped machine. However, it mitigates the damage by preventing exfiltration: the malicious software, and all the data it wants to communicate to its handler, stays on the airgapped network, isolated from the internet.
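The fail-safe that unidirectional flow provides can be modeled as a toy "data diode" object. The class and method names below are hypothetical illustrations, and a real data diode is enforced in hardware (a physically one-way link), not in software:

```python
class DataDiode:
    """Toy model of a unidirectional transfer policy (hypothetical names).

    Data may flow from the internet-connected "low side" onto the
    airgapped "high side", never the reverse.
    """

    def __init__(self):
        self._high_side = []  # storage living on the airgapped network

    def ingest(self, payload: bytes) -> None:
        # Low -> high: updates and new data are allowed in.
        self._high_side.append(payload)

    def exfiltrate(self) -> bytes:
        # High -> low: the return medium simply does not exist.
        raise PermissionError("unidirectional link: no return path")


diode = DataDiode()
diode.ingest(b"security patch")   # fine: data moves onto the airgap
try:
    diode.exfiltrate()            # fails: the spy may never leave the vault
except PermissionError as err:
    print(err)
```

The point of the sketch is the asymmetry: compromise of the high side leaves the attacker holding data with no channel to carry it out, which is exactly the gap that airgap-jumping malware tries to bridge.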

Imagine being in a room while two dogs communicate via a dog whistle: you would be unaware of the exchange. This is the situation when attackers employ acoustic measures to exfiltrate and infiltrate data. Recall the movie Inception, in which someone’s dreams would be, technically, airgapped. The premise of the movie is that data in the dream state can be easily exfiltrated from the dreaming person, but data cannot easily be infiltrated, or “incepted”. Exfiltrating data and infiltrating data are often two different conceptual problems when approaching an airgapped network. Within an airgapped network, data is not easily exfiltrated, so imagine the process of moving data off the system as “EXception”, the opposite of the premise of Inception.

Using acoustic elements of a computer’s operations, attackers can exfiltrate data from an airgapped machine. You’ve likely heard the noises a computer makes; these noises can be controlled and interpreted by a listener to convey information beyond traditional means. Moving data acoustically is not a new idea: you may recall the noises carrying data when you picked up a phone that shared a line with the internet back in the days of dial-up. The methods being used today, however, are getting more and more sophisticated. Of course, these methods require malware to be on the airgapped computer in the first place, and getting malware onto an airgapped computer that employs unidirectional data flow is not especially difficult today. Once on the machine, the malware begins creating sounds a malicious receiver can pick up. DiskFiltration is one of these acoustic exfiltration methods: the malware uses hard drive movement to create sounds that a receiver can decode. This suits a situation where an airgapped machine sits next to another machine that has internet connectivity and a microphone; once dropped onto the airgapped machine, the malware exfiltrates data to the machine capable of phoning home. The method is especially useful when the airgapped machine has no speakers an attacker could otherwise use to transmit audio, typically beyond the range of human hearing, to a receiver.
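The encoding idea behind acoustic channels like DiskFiltration can be sketched as simple on-off keying: each bit of the payload becomes a timed burst of drive activity or silence. The function names, symbol duration, and schedule format below are illustrative assumptions, not the researchers’ actual implementation, and the sketch only builds the transmission schedule rather than driving real hardware:

```python
def to_bits(data: bytes):
    """Expand bytes into individual bits, most significant bit first."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]


def ook_schedule(data: bytes, symbol_ms: int = 100):
    """On-off keying: a '1' becomes a burst of audible seek activity,
    a '0' a stretch of silence. symbol_ms is an assumed symbol length."""
    return [("seek" if bit else "idle", symbol_ms) for bit in to_bits(data)]


# Two bytes of payload produce 16 timed symbols for the drive to "play".
schedule = ook_schedule(b"pw")
print(len(schedule))  # 16
```

A microphone on a nearby internet-connected machine would sample the sound, recover the seek/idle pattern, and reassemble the bits, which is why symbol rates for these channels are so low.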

What if the airgapped computer uses solid state drives, which can be practically silent? DiskFiltration would be defeated before it could even begin its operation. This is an important reason to keep the technical specs of an airgapped system private and to employ good operational security when communicating them. If an attacker manages to compromise a system with DiskFiltration, the lack of exfiltrated data will tell him the attack was unsuccessful, but he won’t know whether the issue lies with the listening device, the method of exfiltration, or incompatible hardware. Keeping attackers in the dark like this grants security professionals an advantage.

Fansmitter is capable of defeating the airgap on systems immune to DiskFiltration. The method uses the computer’s fans to communicate acoustically, creating, like other acoustic methods, a bridge across the “audio gap”. By controlling the speed of the fans and, as a result, the sound waves they emit, the malware lets a malicious receiver, such as a smartphone or a compromised internet-capable computer, relay data off the airgapped system. The method is slow, at roughly 900 bits per hour (0.25 bits/second), but that is enough to gradually transfer passwords, encryption keys, and other sensitive information stored in text files.
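At the reported rate of roughly 900 bits per hour, it is easy to work out how long a given payload would take to leak. The function name and example file sizes in this back-of-the-envelope sketch are illustrative assumptions:

```python
def exfil_hours(n_bytes: int, bits_per_hour: int = 900) -> float:
    """Hours needed to leak n_bytes at Fansmitter's reported ~900 bits/hour."""
    return n_bytes * 8 / bits_per_hour


print(round(exfil_hours(32), 2))    # a 256-bit encryption key: 0.28 hours
print(round(exfil_hours(4096), 1))  # a 4 KB password file: 36.4 hours
```

Slow, but for a patient attacker parked within microphone range, a day and a half for a password file is entirely workable.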

AirHopper is an electromagnetic rather than acoustic exfiltration technique: it turns monitors and video components into FM transmitters, capable of transmitting data 1 to 7 meters away. This might not seem like a long distance, but it could mean the difference between rooms if an airgapped machine is kept in a room by itself, away from computers with internet connectivity. The technique only allows about 60 bytes of information to be transferred per second, given the nature of the radio signal. However, 60 bytes a second is 3.6 kilobytes a minute, enough to transfer a txt file with hundreds of passwords in minutes, or an expansive Excel document in a matter of hours.
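The arithmetic above can be checked with a tiny sketch. The 60-byte-per-second ceiling comes from the text; the assumed per-password size is an illustration:

```python
RATE_BPS = 60  # bytes per second, AirHopper's reported ceiling


def transfer_minutes(n_bytes: int) -> float:
    """Minutes to move n_bytes over the FM channel at RATE_BPS."""
    return n_bytes / RATE_BPS / 60


print(round(transfer_minutes(500 * 20), 1))        # 500 ~20-byte passwords: 2.8 minutes
print(round(transfer_minutes(2_000_000) / 60, 1))  # a 2 MB spreadsheet: 9.3 hours
```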

GSMem is another electromagnetic technique: it uses emissions from the compromised machine’s memory bus, at cellular frequencies, to communicate data from a compromised airgapped machine to a compromised cell phone, which is then able to use the cell network to phone the information home. Using cellphone frequencies allows much more data to be transferred, making this method exceptionally dangerous. Attacks like this are responsible for the policies disallowing people from carrying cellphones into sensitive areas.

Recently, visual elements have been proven capable of bridging the airgap. We’ve all seen the LEDs used for indicating disk activity and power status on desktop and laptop computers. Researchers at Ben-Gurion University in Beersheba, Israel, were able to interpret communications expressed through these LEDs by malicious software, effectively exfiltrating data from an airgapped machine through a window on the 3rd floor of an office building using a drone. This may seem like an extreme method, but it could be useful where acoustic and other options are not available. It requires only a view of the computer’s LEDs, direct or indirect; a line of sight to the LED itself is not necessary, just a detectable change in light conveying a message in a binary code the receiver understands. This method can easily be defeated by eliminating windows from an airgapped environment. Even stranger is malware like BitWhisper, which communicates by using thermal emissions to exfiltrate data.
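The binary light code the receiver reads can be modeled as a toy encoder/decoder pair. The function names and the MSB-first bit order are assumptions for illustration; the actual LED toggling, timing, and camera sampling are left out:

```python
def led_encode(data: bytes):
    """Encode bytes as LED states, one state per bit: 1 = on, 0 = off.
    A real implant would toggle the HDD LED; a camera samples the light."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]


def led_decode(states) -> bytes:
    """Rebuild bytes from a sampled sequence of light levels."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for bit in states[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)


blinks = led_encode(b"key")          # 24 on/off states for the LED
assert led_decode(blinks) == b"key"  # the drone-side receiver recovers them
```

Everything else in the published attack is engineering around this core: blinking faster than the eye notices, and sampling reliably through a window from a distance.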

The most advanced attacks will always require airgap jumping to execute, simply because the most advanced security applications will include airgaps to protect sensitive data. We’ve entered an era where creating an airgap doesn’t ensure protection for data. With the advent of IoT devices and the philosophy of constant connectivity, the industry seems set on eliminating the airgap for practical, pragmatic reasons. I remain unfazed until malware can jump the airgap between a computer and a physical notebook.

Privacy in the Age of Technology

“Thanks to technological progress, Big Brother can now be almost as omnipresent as God.” – Aldous Huxley, 1961


We have never been more connected than we are today, and the steady march of technological innovation continues to expand on this connectivity. The social paradigm shift of constant connectivity reinforces behaviors that aren’t particularly privacy-oriented: sharing passing thoughts on social media, sharing locations with vendors and data brokers, divulging details of everyday life that are valuable to advertisers, and surrendering constitutional rights to various national security apparatuses. Social media itself encourages sharing thoughts and feelings, sometimes without proper 21st-century discipline, indiscriminately with acquaintances, friends, and family. This isn’t necessarily a bad thing for collectivist-minded individuals, but it may seem like a rapidly approaching privacy-extinguishing singularity to the individualists among us. Sitting at a computer for an afternoon, we now have access to more individuals than a person in the 18th century may have encountered in an entire lifetime. This wide breadth of reach is just one part of the communications revolution still ongoing after the dramatic arrival and mass adoption of the World Wide Web in the 1990s.

Privacy is described by Wikipedia as “the ability [for] an individual or group to seclude themselves, or information about themselves, and thereby express themselves selectively”. This is a concept many people take for granted as they go about their daily lives, accustomed as we’ve become to connectivity, the extraordinary access to information it provides, and its quality-of-life benefits. Privacy is also an ancient concept, as comparing behavior in modern societies with behavior in ancient ones shows. The notion of confidants, the selective sharing of certain pieces of information with only a chosen few, stretches far back into antiquity and has been employed since time immemorial. When humans existed as pack entities, controlling the flow of information about where your pack maintained food stores, weapons, safe houses, and other pertinent locations could be the difference between life and death. Privacy in the current world is no different and is, I’d argue, more valuable considering its comparative scarcity. Some may argue that we live in a safer world due to the very technologies that others argue are eroding the traditional definitions of privacy.

Is privacy a human right? Should people have the right to separate themselves from the public domain? The creation of privacy laws has skyrocketed since data collection and data brokering became staples of modern business. People expect privacy when dealing with health records, and the law affords them this confidence. Financial records are also protected by law; people may not want others to know how much they earn. Is this desire to keep financial records private ethical or not? What about health records? Should people be required to disclose this information when asked, if they have nothing to hide? Should Donald Trump be required to disclose his taxes? Should Facebook and Google be required to disclose the data they collect about their customers to the customers themselves? Should individuals be able to opt out of this data collection for a price? Google partially discloses the information it collects on users through its “Takeout” service, in an effort to “not be evil”, a slogan dropped with the rebranding under its parent company Alphabet, whose new motto is “do the right thing.” Are Google and Alphabet suggesting that failing to disclose the data collected from customers who use their services would be evil? Do data brokers like Google and Facebook expect privacy for themselves, shielding their data collection technologies from scrutiny about the data they collect? Is that definition of privacy something they afford and extend to their users? Are mass surveillance programs justified by their contribution to national security? All of these questions inevitably lead back to the ethics of privacy, a rapidly evolving branch of philosophy.

I argue that the ability to conceal information, and to maintain a part of your consciousness as a personal “safe space” known and accessible only to you, is a basic component of normal psychology. We see the creation of community safe spaces in colleges; is this not just an expression of privacy? Should personal safe spaces, in the form of privacy, be offered to individuals in an increasingly connected world? Should the right to solitude be considered a human right? If I’m keeping a secret for someone, being able to keep it reliably, because my private thoughts are protected, maintains, I argue, a healthy psychology that is essential for the human condition and for healthy relationships.

In the modern era, technology is everywhere, and its ubiquitous reach creates the ability to monitor and maintain records of almost anything that travels along the internet and its infrastructure. How this data is utilized is up to the power brokers and data brokers. Mass surveillance in the name of national security has escalated since the September 11th attacks on New York City in 2001. The National Security Agency enjoys extreme latitude in how it collects data; most of the time this collection is warrantless, and the judicial proceedings concerning the utilization of the data are not conducted in a public court. The policy of data collection for national security seems to be one of “collect now, parse later”: these agencies run a dragnet on all communications, collecting everything without a warrant whether it is useful or not. This is in line with many modern theories of data curation and warehousing, in which it’s better to hold too much data that never gets utilized than to be caught in the dark without it if and when its availability becomes critical. In the realm of national security, utilizing this data requires a warrant, but the collection is typically fair game. Your whole online life, and any fact of it accessible to the ever-permeating reach of technology, exists in a database ready to be subpoenaed at any moment. This data is also likely to outlive the individual it concerns, meaning that this generation, whose lives exist in large part online, will likely be the first to have the entirety of its online activities logged for security reasons in the present and preserved into the future. Will we eventually have to signify whether we’d like to be data donors, the way we signify whether we’d like to be organ donors?

Big Data’s working definition of privacy concerns sharing information with other (human) individuals: since an algorithm is sorting your data, nobody is technically invading your privacy. But should that be the definition? Is the creation of metadata itself a violation of privacy? If I bake a cake with stolen eggs, is the cake itself stolen? Is it an invasion of your privacy when you record your thoughts in a journal, or only when someone reads those thoughts without permission? I’d argue that the creation of metadata is in itself an invasion of privacy. Proponents of big data may counter that, historiographically, not collecting data is a disservice to future historians and sociologists who would like to tap into the vast quantities of data created by this age of maddening data creation and curation. If we could go back and examine big data at today’s scale for ancient civilizations like Greece or Rome, we would be able to draw conclusions with much more certainty and arguably advance our collective body of knowledge further, improving the understanding of ourselves and of civilization as a whole. Should we, with our relatively feeble early-twenty-first-century understanding of the world, be making these information-denying decisions for those who might desperately need this data in the future to understand themselves, improve their quality of life, or bolster their knowledge base? Today, some of the most fascinating material to come from ancient Rome is arguably the graffiti perfectly preserved at Pompeii, for the enlightening social commentary it offers on the period. Would it be right to deny inquiring minds of the future the multitudes of social commentary we have in the form of Twitter feeds that people voluntarily opt into sharing today? Why would we not share this information? What would make your information so important to you that sharing it would damage your psychology or well-being? If you wouldn’t mind sharing this information with people centuries from now, what’s stopping you from sharing it with entities in the present who may use it for the greater good, like national security or medical research? Why should you care whether Facebook shares your likes with a third-party data broker who uses them to tailor ads that suit your interests?

The fact that people advocate for privacy yet publish incriminating or implicating information about themselves online creates a separation between what people claim to want regarding privacy and the behavior they actually express. This phenomenon is known as the privacy paradox. If I claim that I want people to respect my privacy concerning a recent breakup, then proceed to post intimate details about it on Facebook, then complain about people posting undesired comments, that would be considered paradoxical in the privacy sense. The privacy paradox can also be observed in the shift of information sharing online. When the internet was first adopted by the masses and children were allowed to browse, many parents told their children to be wary about what they shared online. This is completely contrary to how online presences are conducted today: people routinely post their phone numbers, addresses, current locations, and intimate thoughts. This isn’t inherently bad; it only shows the paradigm of information sharing shifting from safety toward presence and ubiquity.

The “nothing to hide” argument is used against privacy activists and is, in my opinion, a cancer on the progress of privacy-minded individuals and concepts. The argument states that if you have nothing to hide, you should not inherently oppose surveillance, because you’re not likely to be its subject. This, in my opinion, is akin to allowing someone to go through the trash at your house looking for something incriminating, then being told to go back inside and not worry about anything, as if you have nothing to hide. Not only would this violate your 4th Amendment rights in America, it would probably raise an ethical concern about someone poking around in your garbage without your permission. The intangibility of digital assets presents a conceptual leap that many people are not willing to make from material assets of a more tangible nature. However, your data assets, I argue, are just as valuable as your physical assets, if not more so. Edward Snowden chimes in on the nothing-to-hide argument: “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say”.

What is the future of privacy? Is it dead? Is it destined to become a relic of the past, something earlier, more basic humans practiced as an evolutionary behavior to secure their existence in a harsher and more dangerous world? Is privacy even necessary? Does its absence dehumanize people and erode how we exist as individuals today? Can the absence of privacy be reconciled with technology? Can the powers that be use metadata and mass surveillance programs responsibly within the appropriate security apparatuses?

Do you have something to hide?