Working with Untangle Firewall

Untangle Firewall is a network security solution that provides a robust platform to control and observe network operations. The suite includes a firewall, web content blocker, routing capabilities, and many other traffic-shaping features. I was interested in trying it out because I was looking for peace of mind regarding home network security, and I'm pleased with how my Untangle box has been working so far. In this write-up I briefly explain my experience with the different apps included in the software.

The hardware requirements for Untangle version 13 are pretty light for a small home network. To avoid any hassle I tried out a Protectli Vault, fitted with a J1900 processor, 8 GB of RAM, a 120 GB SSD, and a 4-port Intel NIC, for $350 at the time of this writing. It's a workhorse and perfect for my network of about 8-12 devices. It handles a 300/20 connection with the upload constantly redlined, and the CPU has clocked in at 50% under the heaviest load, so there is definitely room to scale with this setup. If I wanted to get brave I could swap the 8 GB memory stick for 16 GB, if the board allows it, and the SSD swapfile should carry me plenty if things get rough.

Installation can be done using just a USB keyboard. In this case Untangle was loaded from a USB stick into one of the two USB ports on the Vault. Untangle charges different rates for commercial and home users. Right out of the gate, Untangle comes with a 14-day free trial; after the grace period it's $50/year for the home version, which includes all the "apps". One thing I wish it had, though, was a screenshot feature.

 

collage-2017-08-08.png

 

Out of the box: simple and productive. The homepage can be customized to include a plethora of different visualized reports.

Network management took a second to get used to. At first I wanted to get my bearings by googling every session I saw pop up, then slowly expanding the network to more devices as I felt more comfortable. This led me to some interesting whois websites that provide useful domain data to compare with the built-in Untangle resolution. I noted the IPs I didn't recognize, using the session viewer in real time, until I became familiar with the addresses and ranges that services on the network typically use. This kind of experience with network behavior lets an administrator quickly check the status of the network by looking at geographic or other visual representations of the data. I feel the at-a-glance data visualization is a key advantage of Untangle and software like it. I chose to investigate the different apps individually so understanding their functions became easier. At first the amount of information available was overwhelming, but the software has a reasonable learning curve, so that feeling was short-lived.

I apologize for the pictures of the screen. In this particular instance I wanted to know what the OCSP connection was; Google suggested it checks the validity of the certificates installed on the machine. I like the at-a-glance functionality a home screen with contextually selected apps offers. The map tickles my geographic fancy; sometimes it's easier to work with spatial data, and glancing at a map and noting the locations of the connections can assist with interpretation on the fly. It would be even better if you could export the state of the dashboard to a static image. Exporting the configuration of the dashboard would be beneficial too, allowing an administrator to quickly restore the last configuration. I might be missing something, but it doesn't seem to allow moving visualization tiles once they've been placed on the dashboard, which could be a major inconvenience when reorganizing or grouping visualizations after the fact.

At first it's easy to underestimate the number of connections a computer can make in a browsing session. The web page loads, the 10 or so ads and marketing services connect, the DNS is queried. With three internet devices browsing and interfacing with media, the number of sessions can easily reach the hundreds. I worked with each app individually until I felt I had a solid understanding of the underlying systems. Approaching the software in this manner made it easier to understand at a functional level.

 

800px-1600x1080_apps.png

 

First up was the firewall. Through trial and error, I figured out which connected sessions were important to my computing. This was the most critical component I needed security-wise. Being able to see all of the open sessions, in real time and retroactively, gave me enough data to get the hang of the system and understand the routine sessions on my network. The firewall lets you set rules that block traffic. Say I own a business and want to block all traffic that appears to be from Facebook; that's possible with custom firewall rules that block the Facebook domain. In my case I wanted to identify exactly what was going on with the background connections: Windows telemetry data, time synchronization efforts, and web sessions being kept alive by a browser. I identified the major, constant connections, like the cloud migration to Amazon Cloud Drive I'm currently running, along with the constant odrive connection that brokers the upload. Accounting for connections like this lets the administrator get comfortable with the network and see how it is normally shaped. Connections I had personally accounted for were set to bypass the firewall entirely so I could reconfigure the rules without worrying about taking them offline. The peace of mind this device provides when auditing or performing network forensics feels priceless.
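
Untangle builds rules like this in its GUI, but the concept is the same as on any Linux gateway. Here's a rough iptables sketch of what a "block Facebook" rule amounts to; it's my own simplification rather than Untangle's actual rule engine, and since resolved addresses change constantly, real products match on more than raw IPs:

# Resolve the domain's current addresses and drop any forwarded
# traffic headed to them. A crude stand-in for a GUI domain rule.
for ip in $(dig +short facebook.com); do
  iptables -A FORWARD -d "$ip" -j DROP
done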

Untangle includes two web traffic shaping apps: Web Filter and Web Monitor. A few of the apps have "lite" versions (free) and full versions (paid); the application library has a Virus Blocker Lite and a Virus Blocker, one free and the other included in the subscription. According to Untangle's developers, the lite and paid versions provide additional protection when run in tandem; they might be using different databases or heuristics to identify threats between the two apps.

Web Monitor is the free app; it lets you monitor web traffic, including its origination, destination, size, and associated application. Web Filter is required to actually shape the traffic. Out of the box, Web Filter blocks several categories of web traffic: pornography, malware distributors, known botnets, and anonymizing software are all blocked by default, and several hundred additional categories exist to make the selection as precise as an administrator would like. There was one instance where the filter warned me before I was redirected to a malware site while sifting through freeware; that alone makes it a necessity for me. The ad blocker, which functions similarly to a Pi-hole, catches ads before they even make it to the client computer. Normally you would expect the browser to do the blocking, but that's not the case with this in-line device; catching ads over the wire adds a line of defense beyond a traditional browser ad blocker.
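
For comparison, here is roughly how a Pi-hole-style blackhole works on a generic Linux gateway running dnsmasq. This is my own sketch of the idea, not Untangle's actual mechanism, and the domains are just examples:

# dnsmasq serves /etc/hosts entries to every client it answers, so
# blackholed ad domains resolve to nowhere for the whole network.
echo "0.0.0.0 doubleclick.net" >> /etc/hosts
echo "0.0.0.0 googlesyndication.com" >> /etc/hosts
# Ask the running dnsmasq to re-read its hosts file.
kill -HUP "$(pidof dnsmasq)"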

Intrusion prevention is another app I couldn't live without. Intrusion prevention systems (IPS) use behavioral and signature analysis to inspect packets as they move across the network. If the signature of a communication or a behavior registers as malicious, the IPS logs and, according to the user-set rules, blocks the attempted misbehavior. The intrusion detection was quiet while I was messing with it, which is a good sign. There were several UDP portscans and distributed portscans originating from the Untangle box itself. These might be functions of the Untangle install, or the intrusion detection app scanning the public IP for vulnerabilities, but I'm not 100% sure; it could always be a malicious actor over the wire. Whatever the cause, these portscans were the only behaviors the intrusion prevention system picked up.
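
If I wanted to pin down the source, one option would be a packet capture on the box itself. A generic sketch, with eth0 and the address as placeholders for your external interface and public IP:

# Capture outbound UDP sourced from the box's own address to confirm
# whether the portscans really originate locally.
tcpdump -ni eth0 'udp and src host 203.0.113.5'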

The question becomes: how thorough do you want to be when setting up rules for the apps? Let's say a Chromecast is portscanning itself for benevolent reasons, like troubleshooting a connection. Should you allow this? Should you follow the principle of least privilege? Should a Chromecast have the ability to recon your network? Security and convenience tend to be mutually exclusive to a certain degree, and knowing where your sweet spot of productivity lies will allow better administration of the box.

 

collage-2017-08-09.png

Bandwidth control is something I'm still getting the hang of. One question I have is why the readings from the bandwidth monitor app and the interface readings seem to be off by a factor of 10; both appear to present results in MB/s, and I couldn't find an obvious unit conversion error.
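
My best guess, and it's only a guess, is a bits-versus-bytes mix-up in one of the displays: 300 Mbit/s works out to 300 / 8 = 37.5 MB/s, so a reading that treats megabits as megabytes would be off by a factor of 8, which eyeballs as roughly 10 on a dashboard graph.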

I can't speak for the bandwidth app itself, but there are additional apps for traffic shaping. WAN Balancer makes sure a serving load is balanced across a number of assets; if you were running a server that needs high availability and maximized performance, you would get some use out of the feature. WAN Failover activates a backup connection in case the primary WAN is unreachable. Again, these features are geared toward users who need traffic shaping and high-availability solutions.

There is an app for both IPsec VPN and OpenVPN. I didn't have a chance to mess around with these, but there is a webinar on the IPsec VPN hosted by Untangle on YouTube. I'm curious about the particularities because I'm eager to get this feature operational as soon as possible.

I had an interesting time with the SSL inspector. This app allows you to decrypt HTTPS sessions and intercept traffic before encrypting it again and sending it on its way. Turning it on threw SSL errors on almost all devices in the house; things like the Roku couldn't connect to YouTube because the certificate chain was incomplete with the Untangle box middle-manning the connection. Luckily, it comes with a certificate creator that can serve certificates to client computers so browsers won't think it's a malicious redirect.
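
A handy way to confirm what clients are actually being served is to pull the certificate chain from behind the box with stock openssl (example.com stands in for whatever site throws the error):

# Print the served chain; while the SSL inspector is active, the
# issuer should be the Untangle-generated CA, not the site's real CA.
openssl s_client -connect example.com:443 -showcerts </dev/null | grep -i issuer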

Transferring the root certificate around was comically difficult. It couldn't be sent through Gmail because of security issues; those might have been because Google thought the attachment was malicious, or because it's just not good OpSec to email root CA installers around, even if this one was for a client computer. The SSL app can generate an installer for Windows machines in addition to the plain cert.

I was able to move it around by putting it on Google Drive. Downloading it with Edge set off all sorts of bells and whistles. First SmartScreen said it didn't recognize the file and threw the "are you sure you want to download" prompt. Then came the browser warning that "this file could harm your computer." Then Kaspersky prompted about the file. Finally, UAC was triggered. This is all in good measure; installing bogus certs on computers this way can be compromising.

SSL inspector needed to be turned off while this configuration was being done; the internet was unusable in browsers like Edge with SmartScreen because of the certificate errors. Devices with hardcoded certs were set, by MAC address, to bypass the SSL inspector altogether so they wouldn't throw errors.

 

stuntsec_ca.png

 

The internet was practically unusable until the correct certs were installed on the network devices.

Captive Portal and Brand Manager were nice touches to include, and probably the most fun I had playing around. Brand Manager lets you supply your own logos to replace the default Untangle logo throughout the software. I designed a mockup logo for fun and really enjoyed how thorough this functionality was.

The captive portal seems to function in a similar way to the SSL inspector, though I think it uses a different certificate, because it throws certificate errors on machines with the SSL inspector cert installed. The captive portal page can include your Brand Manager content, display a terms of service and solicit agreement to it, offer the option to download the certificate and/or the installer, log a user in, and broker a number of other useful functions. Very cool if you're trying to administer web usage.

 

Stuntman Security 2.png

 

Web Cache is something you want to consider if you've got the resources for it. A web cache monitors traffic and puts frequently visited elements in a cache it can serve locally. If I'm logging into Facebook every day, it's easier, and arguably safer, to store the Facebook logo locally and serve the local copy instead of asking the website for it. The web cache presents an attractive target for attackers, but luckily keeping tabs on its operation with the Untangle reporting system is easy.

Then there are the features you would expect to see in home security software. Untangle's advantage is catching threats over the wire, theoretically before they hit the client box. The complete package includes the two virus-scanning apps and Phish Blocker, which I assume uses some kind of DNS functionality to check URLs for foul play. There are also the two spam blocker apps, which I believe work with a cloud threat database. These tools provide the same functionality as a security suite on your desktop, and if you start seeing unusual malware activity you can leverage the firewall against it to really turn up the heat.

In addition to the virus and malware protection, an ad blocker is included. As above, the advantage is that Untangle sees the advertising domains and blocks them before they hit the boxes behind it. I know for certain the ad blocker has been busy on my box.

Active Directory support is available to further expand your capability on the local network. I didn't have a chance to mess around with it; most home networks don't run directory services, but some power users out there should get a kick out of it. I played around with Policy Manager for a bit. It's useful if you want to run SSL inspection on one group of devices and ignore others, like streaming devices. Essentially each policy runs its own set of apps and generates its own reports. Very useful for compartmentalizing your network.

A lot of the Untangle apps demand more resources as you connect more devices to the network, so you need to be conscious of the box running Untangle and how scalable it is. A Web Cache serving 100 users requires dramatically more resources than one serving 10, depending on their workflows. SSL inspector can be a problem if resources are limited while the workload increases, and intrusion detection is another relative resource hog.

I learned about DHCP and routing the hard way, which is always the most effective way. I realized I wasn't resolving hostnames for devices that were connected to the wireless router. A router, typically by default, sends all traffic upstream from one IP address. The reasons are twofold: first, there aren't enough IPv4 addresses to issue one to every device, and second, it's safer to have the router acting as a firewall so each home device doesn't directly face the internet.

By switching the wireless router behind the Untangle box to "access point" mode, DHCP duties were deferred to the Untangle box. Untangle was then able to resolve the hostname of each device connected to the wifi, which allows for fine tuning of access rules and traffic shaping.

The remote functionality is robust and well supported. Access can be tailored to the user: users who only need reports can safely be granted that access without enabling access to system settings. Multiple boxes can be administered from a single interface, and phone administration is possible through the browser. HTTP administration must be enabled to allow configuration from a client box.

The Reports app, though more of a service, is probably the most important app on the box. Reports act as the liaison between the administrator and the Untangle utilities. Graphs are easily generated and data is visualized so it can be digested on the fly. Reports can be stored on the box for up to 365 days, though you will have to account for the resource usage of maintaining this database. Reports can also be sent to your email inbox at an interval of your choosing; the summary contains much of the top-level information about the box's performance, allowing remote administration to be conducted confidently and quickly.

The configuration of each Untangle install can be backed up with the Configuration Backup app. It has built-in Google Drive functionality and can send backups to and restore from the cloud, eliminating the need for panic if a box becomes physically compromised. Another use for this functionality would be sending a configuration template to new boxes: after installing a new box, you would just select the loadout from Google Drive and hours of configuration could be avoided. The same backup functionality is available for reports. Essentially, if a box burns up, you just replace the hardware and it's off to the races again thanks to the automated backups.

I had a great time messing around with this software, and I'm very pleased with the hardware purchase. The all-in-one computer plus a year's subscription to Untangle at home was $400. I'm enjoying it so much I'm considering a second box that I can administer remotely. It has definitely provided a peace of mind that client-side software solutions couldn't. Hopefully in the future I can use some of the data for geographic projects; I've already started projecting some of the geographic data in ArcMap. Here's to hoping for more positive experiences working with the Untangle box.

Mapping Malicious Access Attempts

Data provides an illuminating light in the dark world of network security. In computer forensics assessments, the more data available, the better; the difference between being clueless and having a handle on a situation may come down to one critical datapoint that an administrator may or may not have. When the metrics that accompany malicious activity are missing, performing proper forensics becomes dramatically more difficult.

Operating a media server in the cloud has taught me a lot about the use and operation of internet-facing devices. The server is provided by a third party that leases machines in a data center, and it runs Lubuntu, a distribution of Linux. While I'm not in direct control of the network this server operates on, I do have a lot of leeway in what data can be collected, since it is "internet facing", meaning it connects directly to the WAN and can be interacted with as if it were a standalone server.

If you've ever managed an internet-facing service, you'll be immediately familiar with the number of attacks targeted at your machine, seemingly out of the blue. These aren't usually manual attempts to gain access or disrupt services; they are normally automated and persistent. Someone only has to designate a target, and botnets and other malicious actors, tasked with the heavy lifting, carry out an attack that operates on its own, persistently, without human interaction.

While learning to operate the server, I found myself face to face with a number of attacks directed at my IP address, seeking to brute-force the root password and establish an SSH connection to the server. Success would mean the attacker gaining complete control of the machine; a strong password is the only thing standing between the vicious world of the internet and the controlled environment of the server. The log provided a list of IP addresses and, like any good geographer, I was eager to put the data on a map to analyze what part of the world these attacks were coming from, and to glean some information on who was targeting my media server, an entity with little to no tangible value beyond the equipment itself, and why.

Screenshot_20170527-000900

This log of unauthorized access attempts can be found on many mainstream Linux distributions in the /var/log/auth.log file, and the following bash command counts how many failed attempts were made by each unique IP and ranks them by count.

grep "Failed password for" /var/log/auth.log | grep -Po "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" \ | sort | uniq -c


Parsing operations like this allow a system administrator to quickly see which IP addresses failed to authenticate and how many times they failed to do so. This is one of the steps that turn raw data into actionable knowledge: by transforming the raw log into interpretable data, we actively improve its usability.

This list is easily exported to an Excel spreadsheet where the IPs can be georeferenced using sources like abuseipdb.com. Using that service I was able to link each IP address, and its number of access attempts, to an associated geographic location at the municipal, state, and national level.
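
Getting the ranked list into spreadsheet-friendly shape only takes one more pipe. A sketch, where attempts.csv is just my name for the output file:

# Emit a header row, then "ip,count" rows ordered by attempt count.
echo "ip,count" > attempts.csv
grep "Failed password for" /var/log/auth.log \
  | grep -Po "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" \
  | sort | uniq -c | sort -rn \
  | awk '{print $2 "," $1}' >> attempts.csv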

After assigning each IP address a count and a geographic location, I was ready to put the data on a map. Looking over the spreadsheet showed some obvious trends out of the gate: China appears to be the source of a majority of the access attempts. I decided to create three maps. The first would be based on the city each attack originated from, with a graduated symbology expressing the number of attacks originating from each data point. This would let me see at a glance where the majority of the attacks originated, globally and spatially.

The first map was going to be tricky. Since the geocoding built into ArcMap requires a subscription to the ArcGIS Online service, I decided to parse my own data. I grouped the entries, consolidated them by city, then went through and manually entered the coordinates for each one. This is something I'd like to find an easier solution for in the future. When working with coordinates, it's also important to use matching coordinate systems for all features in ArcMap to avoid geographic inaccuracies.

map2b

Full resolution – http://i.imgur.com/sY0c7IJ.jpg

Something I'd like to get better at is reconciling the graduated symbology between the editing frame and the data frame. Size inaccuracies can throw off the visualization of the data, which is important to consider when working with graduated symbology, like in this case, where the largest symbols are limited to 100 pts.

The second map included just the countries of origination, disregarding the city metric. This choropleth map was quick to create, requiring just a few tweaks in the spreadsheet. It provides a quick and concise visualization of the national origins of these attacks in a visually interpretable format, appropriate where including cities would be too noisy for the reader.

The following is a graphical representation of the unauthorized access attempts on a media server hosted in the cloud, with the IPs resolved to their country of origin. Of the roughly 53,000 access attempts between May 15 and May 17, over 50,000 originated from China.

To produce this choropleth map I saved the data into a .csv file and imported it into ArcMap. Then came the georeferencing, easily done with a join operation against a basemap listing all the countries. The blank map shapefile was added twice: once for the join and once for the background. During the join I removed all the countries I didn't have a count for, then moved the joined layer to the top so the colorless, empty countries would appear behind the countries with data. This is one thing I continue to love and be fascinated with about ArcMap: the number of ways to accomplish a task. You could use a different methodology for every task and find a new approach each time.

map3

Full resolution – http://i.imgur.com/XyqOexM.png

I decided the last map should show the provinces of China, to better represent where attacks were coming from within this area of the world. The data was already assembled, so I sorted the spreadsheet by the country column and created a new sheet with just the Chinese entries. I was able to refer to the GIS database at Harvard, which I wrote about in an earlier article concerning the ChinaX MOOC they offered; this was reassuring considering my familiarity with the source. The spreadsheet was then consolidated, and a quick join operation to the newly downloaded shapefile was all it took to display the data, with a choropleth again being appropriate for the presentation. I had to double check the province names to make sure no major administrative changes had been missed by the dataset, considering the shapefile was from 1997.

map4

Full resolution – http://i.imgur.com/ZhJpHLM.png

While the data might suggest the threats originate from China, the entities with a low number of connections might be the most dangerous. If someone attempts to connect only once, they might already have a password retrieved by means of a Trojan horse or a password leak; those are the entries that may be worth investigating. All of these IPs were listed in the abuseipdb database, so they all had malicious associations. While such threats aren't the persistent, automated kind, they might suggest an advanced threat or threat actor.

Some of the data retrieval might be geographically inaccurate. While georeferencing IP addresses has come a long way, it's still not an exact science, and some extra effort might be required to make sure the data is as accurate as possible.

How does this data help? I can turn around, take the most incessant threats, and blacklist them on the firewall so they'll be unable to even attempt to log in. Using this methodology I can build a blacklist of malicious IPs that I can keep adding to in the future, creating a geographically referenced network of IPs that might be associated with malicious entities.
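
On a Linux host like this one, the blacklist idea might look like the following sketch (assuming ipset and iptables are available; blacklist.txt is a hypothetical one-IP-per-line export from the spreadsheet):

# Create a named set of banned addresses and block them with one rule.
ipset create blacklist hash:ip
while read -r ip; do
  ipset add blacklist "$ip"
done < blacklist.txt
iptables -I INPUT -m set --match-set blacklist src -j DROP

The nice part of a set is that it can keep growing without adding new firewall rules.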

The internet can be a dangerous place, especially for internet-facing devices that aren't protected by a router or another firewall-enabled device. But nothing is impossible to mitigate and understand for a system administrator armed with the correct data. The epistemological beauty of geography is the interdisciplinary application it finds with almost anything; even something as insignificant as failed access attempts can be used to paint a data-rich picture.

WannaCry/Wanacrypt0r/Wcry Worm: The Origin Story

Today, May 12, 2017, a massive ransomware attack was detected affecting unpatched Windows machines via a previously NSA-exclusive SMB exploit. The attack combines several leaked tools with a ransomware component and seeks to infect as many unpatched Windows systems as possible, maliciously encrypting them for profit. It is being called several names: WannaCry, Wanacrypt0r 2.0, Wcry, Wanacrypt, and Wana Decrypt0r.

unnamed

In April 2017, a group calling themselves "The Shadow Brokers" leaked several tools belonging to the United States National Security Agency's computer exploitation arsenal. Among these tools was an exploit called EternalBlue. EternalBlue is, or was, a 0day exploit affecting versions of Windows from XP to Windows 10. It relied on a vulnerability in the SMB (Server Message Block) component of Windows, which is normally used for network file sharing, and used this vector to gain access to systems. It is believed the NSA enjoyed almost exclusive access to this exploit for years, likely using it to compromise its targets.


What is being called version 1.0 of the wcry ransomware component of the attack was detected in February by Malwarebytes developer S!Ri. Today's attack is being called version 2.0 of the Wanacrypt0r methodology, and it is spreading like wildfire thanks to a retooled vector using the EternalBlue exploit.

Six weeks later, on March 14, 2017, Microsoft included a patch for the SMB vulnerability in its monthly rollout of Windows updates. Anyone who has yet to patch their system is exploitable by this worm, which includes internet-facing, end-of-life systems like Windows XP. Attackers are targeting unpatched systems with the EternalBlue exploit and are rumored to be using another component of the Shadow Brokers' leak, DoublePulsar, to drop the malware onto vulnerable systems.
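
For administrators wondering whether anything on their network is still exposed, recent nmap releases ship a detection script for the underlying vulnerability (MS17-010). A sketch, with the subnet swapped in for your own:

# Sweep a subnet for hosts still vulnerable to the MS17-010 SMB flaw.
nmap -p445 --script smb-vuln-ms17-010 192.168.1.0/24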

The geography of the attacks is very diverse, affecting people, businesses, and infrastructure all over the world, with the majority of infections so far in Russia. Since the attack relies on a ransom message to extort bitcoin from infected users, the legibility of the ransom message is critical. The malware authors have thought ahead: the ransomware supports more than two dozen languages, increasing the linguistic scope and capability of the attack. Reportedly the malware has been spotted in 74 different countries.

WannaCrypt.png

This map shows Wanacrypt0r 2.0 infections in real time. 

In Spain, the malware has successfully infected several industrial and infrastructure providers: Telefonica, a telecommunications provider; Gas Natural, a natural gas supplier; and the utility company Iberdrola.

In the UK, the NHS and several other healthcare providers have been successfully infected by the malware.

The most effective way to protect yourself from this attack and others like it is to keep your Windows systems up to date. Every system affected by this attack had failed to apply the available updates. This is a cardinal sin of systems administration, and it is surprising to see targets like hospitals and utility companies buckling under such easily preventable attacks.

This also raises ethical questions about the developers of these tools. Who is ultimately responsible for this attack? Is the NSA responsible because it created the tools? Are the Shadow Brokers responsible for leaking them into the wild? Is Microsoft responsible for not detecting and patching the vulnerability for years?

And remember, never pay the ransom if you happen to be infected! That's the computing equivalent of negotiating with terrorists!

Jumping the Airgap

One of the most useful tools in the network security toolkit is an airgapped network, used to store and protect data away from wide area networks like the internet. "Airgapping" a network essentially means disconnecting it from any gateway (router/modem) that bridges its connection to the larger internet; an airgapped computer is one that can't connect, and/or never connects, to the internet. If you're not connected to the internet, your chances of being attacked drop considerably. Automated programs can still operate and propagate within an airgapped network, but they cannot reach command-and-control entities to receive instructions or exfiltrate data. These networks operate with a gap of air between themselves and the networks connected to the internet, hence the name "airgap". Jumping the airgap refers to the ability of malicious attacks to traverse this gap, infect computers on the separated network, and exfiltrate the data found there.

What constitutes an airgapped network? A wifi connection to your laptop is not an airgap; it represents a bridge between a transmitter (the wireless router) and a receiver (the wifi antenna in the laptop). An airgapped laptop would have its wireless hardware removed and be connected to an isolated network via an Ethernet cord. A laptop with secured wifi credentials is not an airgapped machine, in the sense that it is one exploit away from bridging the gap to the wider internet. A computer connected to a LAN that touches the larger internet at any point is not airgapped. A computer sitting in a soundproof room, running on a generator or some other mechanism to defeat Ethernet-over-power attacks, behind the NATO-recommended wall thickness to prevent radiation leakage, and without any windows to communicate visual cues, would be a conceptually perfect airgap. That is, until the next technique is discovered, possibly including some kind of defeat of the computer/biological barrier.

In what kinds of situations would an airgapped network be appropriate? According to Wikipedia: military, government, banking, industrial, aviation, and medical networks would all benefit from the security of an airgap. Let's say the US military was using a supposedly secure network of Windows 7 PCs to manage data associated with troop locations and documented strategy policies. The network is locked down from a sysadmin standpoint: all the programs are up to date, all the group policies are set correctly, access is audited. Now say a Windows 7 exploit is found that allows attackers to maliciously subvert the security measures in place. All that work is for naught when the system is exploited to behave like a public node on the larger internet. The point of the airgap is to ensure that such exploits aren't devastating to the security of the data and the users. Essentially, a computer on a traditional, non-airgapped network is one misconfiguration or one exploit away from being bidirectionally compromised.

Unidirectional networks are a large part of operational security when dealing with airgapped networks. Similar to how classified information moves within an organization, data can move onto the airgapped system with relatively little scrutiny compared to being moved off of it, much as information can be transferred to higher levels of security clearance with minimal concern while declassifying data to lower levels is extremely restricted. This unidirectional flow creates a fail-safe: if a computer is compromised, the malicious actor fails to exfiltrate data simply because the medium to transport that data is not there. The unidirectional flow is necessary because computers need to be updated and need data moved onto them, both of which require data from machines that have touched the outside internet. The idea is that once data is on these airgapped machines, it never returns to machines that maintain an external internet connection. Imagine a spy who gets into a vault containing state secrets, on the condition that once inside he may never leave; he is unable to report back what he's found, ultimately rendering his services useless. The creation of airgap-jumping malware is essentially the invention of unorthodox methods that let this spy communicate what he's found without leaving the vault. The most intense airgapping regimes may prohibit transferring data back to internet-capable machines at all, relying on human beings to interpret, query, curate, and move the data to its applicable uses. Unidirectional data flow does allow malicious activity to enter an airgapped machine; the mitigation is that the malicious software, and all the data it wants to communicate to its handler, stays on the airgapped network, isolated from the internet.

Imagine being in a room where two dogs are communicating via a dog whistle; you would be unaware of the conversation going on around you. This is the case when people employ acoustic measures to exfiltrate and infiltrate data. Recall the movie Inception: someone's dreams would be, technically, airgapped. The premise of the movie is that data in the dream state can be easily exfiltrated from the dreaming person, but cannot be easily infiltrated, or "incepted". Exfiltrating data and infiltrating data are often two different conceptual problems when approaching an airgapped network, and within such a network data is not easily exfiltrated. So imagine the process of moving data off the system as "exception", the opposite of the premise of Inception.

Using the acoustic side effects of a computer's operation, malicious attackers can exfiltrate data from an airgapped machine. You've likely heard a computer and the noises it makes; those noises can be controlled and interpreted by a listener to convey information beyond their traditional meaning. The idea of moving data acoustically is not new; you may recall the noises used to convey data when picking up a phone that was sharing a line with the internet back in the days of dial-up. However, the methods being used today are getting more and more sophisticated. Of course, these methods require malware to be on the airgapped computer in the first place, and getting malware onto an airgapped computer that employs a unidirectional data flow is not difficult today. Once on the machine, the malware begins creating sounds a malicious receiver can pick up. Diskfiltration is one of these acoustic exfiltration methods: the malware uses hard drive movement to create sounds that can be captured by a receiver, which is useful when an airgapped machine sits next to another machine with internet connectivity and a microphone. Once dropped onto the airgapped machine, the malware uses this technique to exfiltrate data to a machine capable of phoning home. The method suits situations where the airgapped machine has no speakers an attacker could otherwise use to transmit audio, typically beyond the range of human hearing, to a receiver.

What if the airgapped computer uses solid state drives, which are practically silent? The diskfiltration method would be defeated before it could even begin. This is an important reason to keep the technical specs of an airgapped system private and to employ good operational security when communicating about them. If an attacker manages to compromise a system with diskfiltration, the lack of exfiltrated data will tell him the attack was unsuccessful, but he won't know whether the issue lies with the listening device, the method of exfiltration, or incompatible hardware. Keeping attackers in the dark like this grants security professionals an advantage.

Fansmitter is capable of defeating the airgap on systems that are immune to diskfiltration. The method uses the computer's fans to communicate acoustically, creating a bridge across the "audio gap" to exfiltrate data. By controlling the speed of the fans, and as a result the sound waves they emit, a malicious receiver such as a smartphone or a compromised internet-capable computer can relay data off an airgapped system. The method is slow, at 900 bits per hour (0.25 bits/second), but that is enough to slowly transfer passwords, encryption keys, and other sensitive information stored in text files.
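
To make the modulation idea concrete, here is a toy sketch of my own (not Fansmitter's actual code) that turns a short secret into the kind of two-state transmission schedule an acoustic channel would play out:

# Toy illustration: map each bit of a secret to a low or high carrier
# state (fan RPM, disk seeks, a tone). Real attacks add framing,
# timing, and error correction on top of this.
secret="hunter2"
bits=$(echo -n "$secret" | xxd -b -c1 | awk '{print $2}' | tr -d '\n')
for (( i=0; i<${#bits}; i++ )); do
  if [ "${bits:$i:1}" = "1" ]; then
    echo "slot $i: HIGH (e.g. spin the fans up)"
  else
    echo "slot $i: LOW  (e.g. spin the fans down)"
  fi
done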

AirHopper trades sound for radio: it turns monitors and video components into FM transmitters capable of sending data 1 to 7 meters away. That might not seem like a long distance, but it could mean the difference between rooms when an airgapped machine is kept by itself, away from computers with internet connectivity. The technique only allows about 60 bytes of information to be transferred per second due to the nature of the covert channel, but 60 bytes a second is 3.6 kilobytes a minute, enough to transfer text files with hundreds of passwords or expansive Excel documents in a matter of hours.

GSMem is a related technique that transmits data from a compromised airgapped machine over cellular frequencies to a compromised cell phone, which then uses the cell network to phone home the information. Reportedly, using cellphone frequencies allows much more data to be transferred, making this method exceptionally dangerous. Attacks like this are responsible for the policies disallowing people from carrying cellphones into sensitive areas.

Recently, visual elements have been shown capable of bridging the airgap. We've all seen the LEDs used to indicate disk activity and power status on desktop and laptop computers. Researchers at Ben-Gurion University in Beersheba, Israel were able to read the signals expressed by malicious software blinking those LEDs, effectively exfiltrating data from an airgapped machine through a window on the 3rd floor of an office building using a drone. This may seem like an extreme method, but it could be useful where acoustic and other options are unavailable. It only requires a view of the computer's LEDs, directly or indirectly; a view of the LED itself is not strictly necessary, just a view of the change in light, which conveys a message in a binary code the receiver can understand. The method can easily be defeated by eliminating windows from an airgapped environment. Stranger still is malware like BitWhisper, which communicates by using thermal emissions to exfiltrate data.

The most advanced attacks will always require airgap jumping, simply because the most advanced security postures will include airgaps to protect sensitive data. We've entered an era where creating an airgap doesn't ensure protection. With the advent of IoT devices and the philosophy of constant connectivity, the industry seems set on eliminating the airgap for practical and pragmatic reasons. I'll remain unfazed until malware can jump the airgap between a computer and a physical notebook.

0days – Monetizing Mistakes

In computing industries, especially software development, a 0day (pronounced "zero day") is a flaw in code that can be exploited in a malicious manner. The name comes from the number of days a developer has had to fix the error: 0days are often, though not always, unknown to the developers, so they have had zero days to develop and apply a fix. It is the equivalent of a surprise attack in software development. 0days vary in the scope of the vulnerabilities they allow an attacker to exploit.

0days have monetary value associated with them; both black and traditional markets have a niche carved out for 0day exploits. Prices can range from $10,000 to over a million dollars depending on how many people are aware of the exploit, how hard it is to mitigate, and the nature of the program in which it was found. Typically these transactions occur on the black market, usually behind anonymization software like Tor to protect the identities of buyers and sellers. These marketplaces broker exchanges of 0day exploits, typically using anonymous currency systems like bitcoin, and some even offer escrow to protect buyers from faulty goods. 14% of all Microsoft, Apple, and Adobe 0day exploits came from white markets like Italy's Hacking Team and Israel's NSO Group; these transactions often come with hefty regulations, allowing the sale of prepackaged exploits to government and law enforcement entities only. The remainder of the exploits come from black or gray market sources.

You may have heard of the "bug bounty" programs offered by software companies. These are an attempt to secure 0day exploits by offering penetration testers the opportunity to "sell" an exploit to the developers before it can be maliciously applied. Companies like Google and Facebook run bug bounty programs: in 2014, Facebook paid out more than $1.3 million to bug bounty hunters, and in 2016, Google offered $100,000 to anyone who could hack one of its Chromebook laptops. These payouts are typically lowballed compared to what buyers would offer for the same exploits on dark web black markets, but bounty programs remain an important part of the software development ecosystem because they let white hat hackers monetize the exploits they find and keep those exploits off the black markets.

Having a robust stash of 0day exploits at hand is important for any group of hackers. The fewer people who know about a particular exploit, the more dangerous it becomes. A well-maintained 0day can allow access to systems for years at a time, sometimes transcending product updates, essentially giving an attacker a personal backdoor into systems running programs with that specific weakness. According to a study presented at the Hawaii International Conference on System Sciences in 2009, roughly 2,500 0day exploits were active ("in the wild") at any given time. A particularly elusive exploit called "Dirty Cow", which affected the Linux kernel, went nine years before being patched. As with other information goods, the viability of, and demand for, a 0day exploit drop sharply as the number of people who know about and use it increases. According to the same 2009 study, a typical 0day remains undetected for between 112 and 160 days.

The real fun begins when you combine, or "chain", 0days together. These attacks are typically so powerful that mitigation becomes a serious problem, and they often go relatively undetected compared to other exploits because of the resources needed to acquire and utilize them. A famous example is NSO Group's Pegasus spyware package, which took advantage of three 0days in Apple's iOS 9 software. The exploit chain started with a bug in the Safari browser's WebKit that let a malicious link execute code on the device. A second exploit then located the randomly positioned kernel in the device's memory, and a third silently and remotely jailbroke the phone. The chain was particularly devastating because it allowed an attacker to jailbreak an iPhone with a simple phishing link. The exploits were eventually patched, but not before many instances of the spyware had been found on iPhones all over the world. Apple offers a bug bounty program, but its payments max out at $200,000, pushing exploits like these toward the black markets, where this chain was developed and sold for millions.

The most notorious exploit chain in history came from the dreaded Stuxnet malware attack against Iran's nuclear centrifuges in 2010. The malware used an unprecedented four 0day exploits to sabotage centrifuges that operated on an airgapped network. The effort was allegedly a joint American-Israeli operation, inviting speculation that governments were actively involved in procuring 0days for offensive capability. Stuxnet is still not fully understood: early analysis suggested parts of it were written in an unfamiliar programming language, it had a time-based kill switch that shut the virus down in 2012, and it utilized resources that many believe are beyond the scope of private companies. The operational security employed by the authors suggests that a nation-state may indeed have been involved.

The marketplaces for 0day exploits are alive and well, and everyone seems to want to get their hands on these exploits for both offensive and defensive capability. Eliminating the threat 0days pose requires software developers to be diligent about making sure the software they write doesn't repeat vulnerabilities that have been exploited in the past, and proactive about possible vulnerabilities in the future. Raising the prices paid to hackers who discover 0days, so they are competitive with the black markets, may curb the use of those marketplaces and bring some portion of the market share back into traditional, regulated mediums.

Despite what the future may hold, 0days have carved out their place in computer science and represent another round of the cat and mouse game between developers and hackers.