Mapping the Historic Capitals of Myanmar

It’s been some time since I’ve written about or created GIS content outside of a professional environment. I figured it was time to take on a quick project to get back in the swing of things.

I came across an article on Wikipedia about the rather lengthy list of capitals the current state of Myanmar (Burma) has had in roughly the last millennium. This struck me as a unique opportunity to quickly create a project that explores historical GIS, that is, the analysis of historical data using geographic information systems. This was also an opportunity to explore additional tools in the realm of GIS production. In this case QGIS, the open source alternative to some of the mainstream, proprietary products on the market today, was what I needed exposure to.

Below is the finished product of the data sourced on the Wikipedia article, with graduated symbology related to the time a location spent as the capital proper. Included is an inset of the crowded central region for clarity.


Full resolution


The methodology was straightforward. Getting to know the suite of tools included in the QGIS environment was quick and painless after reading the documentation and looking up answers to questions as they arose.

The first step was importing a basemap for the project. Unlike the creation of a new document in ArcMap, QGIS doesn't present the user with a list of basemaps to choose from out of the gate, though there are plugins that add this. Since I wanted to get a feel for the program, I decided to download a basemap from the selection of maps in this blog. Once the basemap was in place it was time to parse the article and create the tabulated data.

CSV data was the preferred format for the project. It's quick to write up and easily imported as a delimited data layer. There was a design choice to make when curating the data regarding capitals that occupied the same place during different, noncontinuous periods of time. Since the data was being represented in two dimensions, as opposed to three or three plus time, it would have been messy to stack different symbology in the same spot. Offsetting these symbols manually would have displayed the intricacies of the data correctly, but for the sake of simplicity I decided to use one symbol for each location, summing the total time it spent as the capital. Since the symbology was going to be divided into five classes, extreme precision wasn't important; some of the figures were quickly eyeballed, as you'll notice if you crosscheck the data. The integrity of the data suffers in the long run, but the end product, as displayed in this project, is the same.
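The merging step described above is simple to sketch in Python. The period records here are illustrative stand-ins, not the full Wikipedia table, and the year figures are invented for the example:

```python
from collections import defaultdict

# Each record is (name, lat, lon, years) for one continuous period as capital.
# A city that served multiple noncontinuous terms appears in several rows,
# which get collapsed into a single symbol's worth of data.
periods = [
    ("Ava", 21.86, 95.98, 135),
    ("Pegu", 17.33, 96.48, 53),
    ("Ava", 21.86, 95.98, 76),   # a later, separate term at the same site
]

def merge_periods(rows):
    """Collapse repeated locations into one record with summed duration."""
    totals = defaultdict(int)
    coords = {}
    for name, lat, lon, years in rows:
        totals[name] += years
        coords[name] = (lat, lon)
    return [(name, *coords[name], years) for name, years in totals.items()]

merged = merge_periods(periods)
```

With the two Ava terms summed, the merged list carries one row per location, ready for a single graduated symbol each.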

Below is an example of the formatted data:




The data was separated into four columns: one each for the name, the latitude, the longitude, and the length of time the location served as the capital.
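As a sketch, a CSV in that four-column shape can be parsed into typed records before importing it as a layer. The header names and rows here are my own illustrations, not the original file:

```python
import csv
import io

# Illustrative rows in the four-column layout described above;
# header names are hypothetical, not from the project's actual file.
raw = """name,lat,lon,years
Bagan,21.17,94.86,250
Sagaing,21.88,95.98,50
"""

def read_capitals(f):
    """Parse the delimited layer into (name, lat, lon, years) tuples."""
    return [
        (row["name"], float(row["lat"]), float(row["lon"]), int(row["years"]))
        for row in csv.DictReader(f)
    ]

capitals = read_capitals(io.StringIO(raw))
```

QGIS does this parsing itself when importing a delimited text layer; the point of the sketch is just that the format is trivial to generate and validate by hand.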

Exponentially graduated symbology is something I would have liked to use for this project, but it didn't seem possible with the base functionality of QGIS without plugins, so linear graduated symbology was used. One of the capitals of the Myinsaing period, Myinsaing itself, wasn't included in the data due to insufficient information regarding its location. The city has since been abandoned, and while the archaeological site might have been used to represent its location, it was not easy to find. The data for Pinya was included by manually cross-referencing this map with the basemap.
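The equal-interval classification QGIS applies to the graduated symbology is easy to reproduce by hand; a minimal sketch with five classes (the year values are illustrative):

```python
def equal_interval_breaks(values, classes=5):
    """Upper break points for equal-interval classification."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / classes
    return [lo + width * i for i in range(1, classes + 1)]

def classify(value, breaks):
    """Return the 0-based symbol class for a value."""
    for i, b in enumerate(breaks):
        if value <= b:
            return i
    return len(breaks) - 1

years = [4, 26, 53, 135, 211, 250]   # hypothetical durations as capital
breaks = equal_interval_breaks(years)
```

Equal interval simply slices the data range into bands of identical width, which is why precise year figures mattered less than which band a capital landed in.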

Modern country borders were added to put the data into a modern perspective. Major ocean features and country names were included. An inset was added to display the congested central region at a different scale, and a legend was included to explain the symbology. Equal intervals were used to delineate the graduated symbology. Below is a picture of the final table of contents for the project.




One major difference between QGIS and ArcMap is the design mode, which is used to arrange how the data will be displayed once it's projected properly in the data view. ArcMap uses a feature called the layout view to organize data, while QGIS uses a feature called the print composer. Both include similar functionality but are presented through the framework of their respective clients.

How could this data be useful? Spatially representing this data allows a researcher to quickly view and interpret different characteristics. For example, the early capitals were inland, reflecting the inland empires they ruled, while the capitals around the Andaman Sea represent a different type of state as power became more maritime in nature. This is the at-a-glance functionality maps provide that I often like to cite as an important part of representing data spatially. Including dates would have been an interesting touch to add to the map but, as stated above, intermittent reigns of the same capitals would have been difficult to represent.

I enjoyed working on this quick project and feel like it was a good primer for the QGIS environment. Hopefully this knowledge can be applied to future projects as I continue to make content in QGIS.

Mapping Computer Networks

A network map represents the relationship between objects. This representation can be 2-dimensional or 3-dimensional depending on how the data is structured. Network maps are useful for mapping social relationships, supply chains, and, as I’ll demonstrate in this post, computer networks.

Creating maps of cyberspace is inherently unintuitive. The instantaneous and global nature of networks like the internet defies traditional spatial interpretation. By depicting these networks on, for example, a 2-dimensional plane, the relationships between devices in a network become easier to interpret at a glance.

Below is a network topology map I created to illustrate the relationship of the devices I personally manage. For the creation of this map I used a free online diagramming tool; the free tier is limited to 60 elements, including line features.

network map
Full Resolution

The network consists of 8 servers, 2 desktops, 2 laptops, 2 firewalls, and 8 media devices across 2 sites. By using a combination of symbology and labels, each computer and its function can be quickly interpreted.

I'd like to take a moment to stress what I mean when I say "at a glance" or "on the fly" in reference to data visualizations. Data in its rawest form can be difficult to interpret quickly. Visualizations aid analysis by making data easier to communicate in presentation and faster and more reliable for an analyst to read. When I refer to elements of a visualization, like a map, that help convey data at a glance, I'm addressing the things that improve conceptual and spatial accessibility, speed of interpretation, and reliability in terms of distinction and ease of identification.

Stylistically, the above network map is radial in nature, with the internet occupying the space near the center. In networks built around intranets, or private networks, this central position might instead hold the main routers, switches, domains, or any other device that sees the most traffic or performs a key role in the network. The network is split into three parts, all communicating with the other devices through the internet. For this reason the internet becomes the central feature of the map, the backbone of the network. It's accentuated by its position, and since the center of a map tends to draw the eye, it's easier to, you guessed it, interpret at a glance.

There are 3 sections to the general network structure. We'll call the line going from the internet symbol to the top of the diagram site A, and the one drawn towards the bottom site B. The three separate lines drawn from the internet symbol towards the left represent assets that are in the "cloud," or hardware I don't have physical access to. These machines aren't on the same network, which is represented by the separate, non-intersecting lines, but they're grouped according to the remote nature of their access.
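The structure above can be modelled as a simple graph, which makes the "internet as hub" observation concrete. The device names here are hypothetical stand-ins for the real assets:

```python
# A toy version of the topology: three sections (site A, site B, cloud)
# reach each other only through the internet node.
edges = [
    ("internet", "site_a_firewall"),
    ("site_a_firewall", "site_a_server"),
    ("internet", "site_b_firewall"),
    ("site_b_firewall", "site_b_server"),
    ("internet", "vps_1"),
    ("internet", "vps_2"),
    ("internet", "vps_3"),
]

def degree(node, edges):
    """Number of direct links a node has in the diagram."""
    return sum(node in e for e in edges)

# The node with the most connections is the natural center of a radial layout.
hub = max({n for e in edges for n in e}, key=lambda n: degree(n, edges))
```

Placing the highest-degree node at the center is exactly the radial design choice described above: every section hangs off the internet symbol.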

I tried to make the symbology as intuitive as possible, labeling the different devices by their role, technical specifications, and operational capacity. For example, the brick wall represents a firewall unit. At the top we see the all-in-one Untangle unit I wrote about in this article (Working with Untangle Firewall). Site A utilizes a two-network setup: all the server assets sit behind the firewall, while all the personal devices operate off their own router. This is a network security concept called compartmentalization. If a personal device ever became compromised, it couldn't be leveraged against the rest of the network. The server farm is made more operationally secure by the extra layer of security the firewall provides. This also allows the personal devices to bypass firewall rules that might interrupt leisurely "workflows," while at the same time simplifying firewall operation by not requiring additional rules and conditions.

Site B utilizes a different strategy: the Untangle box featured in this article (Building an Untangle Box) routes and shapes all traffic. The traffic is compartmentalized internally by two separate wifi networks and a hardwired network. The server built in this article (Building a 50TB Workstation/Server) operates off of this box via ethernet. Everything handling sensitive operations like SSH work or banking operates on one wifi network with rules tailored specifically for that heightened level of security; home media and leisure devices use the other wifi. The idea is that if a router ever becomes compromised, it won't have leverage over all the devices on the network. This is in addition to the routers being in access point mode, sending all traffic to the Untangle box for rules and routing. It never hurts to have these fail-safes. All traffic going to site B sits behind a firewall, as opposed to site A, which sits behind a modem and router combo unit. This is inherently safer considering all traffic must pass through the Untangle box as it moves to or from the internet or, theoretically, between devices.

In the cloud there are 3 VPS servers. These host a variety of functions, with the core functionality listed beside them on the map. As mentioned earlier, these servers aren't on the same network, or even in the same country for that matter. This network relationship is conveyed by the individual lines that do not intersect on their way to the internet symbol.

Creating a network map consists of a few design elements, with plenty left up to the author. It's easy to begin with a radial design in mind, placing the devices that serve as central points in the network at the center. Grouping devices by role or location helps the reader spatially interpret assets on the fly. Using easily understandable symbology and verbose labeling helps clarify the finer details. Like all maps, computer network maps change over time, and a program that allows you to update and edit features is useful for making those changes.

The future of maps will include an abundance of cyberspace assets, and being able to map these networks will be a key component in the toolkits of future cartographers.

Working with Vantrue X2 Dashcam and Dashcam Viewer

Dashcams are becoming more and more affordable as they become easier to manufacture and their use becomes more ubiquitous. I had previously used a Rexing R2 dashcam but was looking for something with more robust data collection capability. The Rexing R2 served as good initial exposure to dashcam operation and the associated workflow (storage, editing). I was able to incorporate dashcam operation into my working theory of data curation: any dashcam data that is collected, even if it is not inherently valuable, may prove valuable in the future, and thus should be stored indefinitely.


As a quick example of the usefulness of this kind of dashcam ubiquity, we can look to the meteor that came down near Chelyabinsk, Russia in February 2013. Almost all of the footage is from CCTVs and dashcams, which are near-universal in the country as protection against insurance fraud. I'm not saying I'm likely to catch a meteor coming down to Earth and that it's my responsibility as a dashcam owner to be prepared for that moment, but I'd rather be caught with the camera on than off. The footage can also become the medium for other creative expression.

I enjoy working with the footage, speeding it up and putting it alongside music. Driving is something I enjoy and editing driving footage provides a similar satisfaction. Unfortunately, the Rexing R2 and its fish-eyed convex lens was destined to end badly. The lens protruded beyond the safety of the bezel and all it took was one instance of accidentally setting it lens-side down on an abrasive surface for the lens to be slightly cracked, enough to blemish the picture.

Finding a camera immune to this kind of operator error was my first priority. Also important was the incorporation of a GPS unit with exportable data. I found a reasonable solution in the Vantrue X2. It was a steal on Amazon for $99, though it seems to be out of stock now. It comes out of the box with 2K filming capability, expertly tailored night vision, 64GB microSD support, and an optional GPS mount. This cam checked all the boxes. Two days later I had it installed and took it for a test drive.

A couple things to consider right off the bat: I do quite a bit of driving on average, and I'm not one who wants to dismount the camera and export all the footage several times a week. I also thought it would be irresponsible, since I'm storing this footage for whatever future opportunities might arise, to film in less than the camera's full 2K resolution. 64GB of SD card capacity becomes just "OK" at this point, storing between 6 and 7 hours of footage before the card needs to be hooked up to the computer and emptied. 128GB or greater might be something I look for in the future, although I'm definitely not in the market for another camera. The 6 hours hasn't been a problem except for a handful of times I've been driving long distances and found myself needing to offload the footage temporarily onto another device before delivering it to the storage server. The average user won't have these problems if they're not meticulously hoarding this data: the camera can overwrite its oldest footage when the card becomes full, and relying on this rolling recording will always assure you have the last 6 hours of driving footage, no maintenance required.
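The 6-to-7-hour figure is easy to sanity-check. As a sketch, assume a recording bitrate of roughly 20 Mbps, a plausible value for 2K dashcam footage (the X2's actual bitrate isn't stated here, so treat this as an estimate):

```python
def recording_hours(card_gb, bitrate_mbps):
    """Hours of footage a card holds at a given video bitrate."""
    bits = card_gb * 1e9 * 8              # card capacity in bits
    seconds = bits / (bitrate_mbps * 1e6) # seconds of video at that bitrate
    return seconds / 3600

hours = recording_hours(64, 20)           # roughly 7 hours on a 64GB card
```

At that assumed bitrate a 64GB card holds about 7 hours, consistent with the 6-to-7 observed; doubling the card to 128GB would double the window the same way.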

Armed with the camera and GPS mount, I was ready to collect the data, which came naturally over the following months. The next step in this geographic exploration was to incorporate the data into some sort of map. This led me to Dashcam Viewer by Earthshine Software. This program extracts the GPS data from the videos, plots it on a map, and allows you to cartographically examine your driving. Dashcam Viewer is available for Windows and Mac. Sadly, there is no official Linux version, although I haven't tried running it on a Linux machine with Wine.




The first video I thought to make was a realtime video with two maps of different scales, showing where the vehicle is in relation to surroundings that might not be visible on film. Dashcam Viewer includes lines that show differences in relative speed, which is a nice touch and saves time compared to crunching this data manually in something like ArcGIS.

Capturing the map footage required a little ingenuity. I couldn't export a video of the Dashcam Viewer coordinate route, so I thought capturing video of the desktop and then cropping it to the window in question would be the easiest route to a result. The finer details could be ironed out afterwards. I was able to create the two cropped videos of the maps and, using the Filmora editor, combine them with the actual footage. A little editing flare and some music was all it took to assemble this rough draft, which served as a proof of concept for future projects.



Next I wanted to move on to timelapse videos that incorporated these new map perspectives. The length of the editing process is something I'm still trying to reduce in this workflow. Capturing the 2 maps in realtime using XSplit to record the desktop adds twice the length of the original footage to the process. For the next project I wanted to use a 4-hour segment of footage, which would require 8 hours of desktop capture. That's not acceptable for a productive workflow, but at this early proof-of-concept stage, getting results matters more than the workflow.
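The capture-time arithmetic above generalizes easily; a small sketch of the budget, where the `speedup` parameter is hypothetical, modeling a tool that could render the map playback faster than real time:

```python
def capture_hours(footage_hours, maps, speedup=1):
    """Desktop-capture time needed for the map overlays.

    Real-time capture costs the full footage length once per map;
    speedup > 1 models a (hypothetical) faster-than-realtime export.
    """
    return footage_hours * maps / speedup

# Two maps over a 4-hour trip: 8 extra hours of capture before editing starts.
extra = capture_hours(4, 2)
```

This is why a faster export path matters so much: a tool that rendered the route at even 4x would cut those 8 hours of capture down to 2.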

I started running into limitations in the Filmora video editor. Editing with multiple video sources was limited, and I couldn't export the final production in glorious 2K resolution due to the 1080p limit. Filmora also isn't native to Linux, which is the ecosystem I'm trying to move all my production towards, and Wine emulation is poor. For the future, I'm looking towards DaVinci Resolve by Blackmagic Design. This, I assume, is an intermediate video editing application, whereas Filmora is focused more on entry-level editing.

The idea for the second project using the new dashcam was based around a 4-hour trip. I captured all the media and moved it over to my Windows machine to edit with Filmora. To make the editing process easier, I focused on one source at a time. First, I merged all the dashcam footage into one video. This machine is working with a Q9650 processor, so all rendering had to be done overnight. Once the dashcam footage was one video, it was easily muted, sped up by a factor of ten, then rendered again. This gave me finalized footage I wouldn't have to edit when piecing all the sources together.

I then booted up Dashcam Viewer and started the desktop capture of the maps in realtime. This took over 8 hours for both maps. Once the capture was complete, the recordings were put through some quick editing so post-production would just be piecing the sources together: they were sped up 10x and rendered individually at custom resolutions so they could sit on top of the original footage seamlessly.

The first map was set to “follow” the GPS signal at a small scale. The second map would show a majority of the trip and often the starting point and destination in the same frame. These provided two different perspectives for the footage in case the viewer wants supplementary geographic information.

Syncing all the footage turned out to be more complicated than expected. I originally wanted the final editing procedure to be just piecing together the three sources: the dashcam footage and the two maps. However, the maps were often out of sync with the footage and had to be adjusted manually every few minutes. This led to chopping up the footage, which created errors in the maps partway through, thanks to Filmora quirks and operator error.

Post-production included adding the music, adding the song information, and fading in and out where appropriate. The final product is not perfect, as there are map errors in the middle of the video and at the end, but I'm happy with how the workflow and the product ended up.


In the future, I hope to choose a different editor and see if I can find a better way to capture and render the maps, with a focus on speed of production. I'd also love to find ways to incorporate other GPS information, like bearing and speed, into the video. Until then, it's off to add to the ever-growing collection of dashcam footage.

Building an Untangle Box

A few weeks ago I did a quick write-up about the Untangle firewall system and my experience installing and using it on a Protectli Vault all-in-one mini PC. Today I'd like to describe a box I set up as an alternative to that model. For this box I used an old OptiPlex 780 purchased on Amazon for $87. I've been using the OptiPlex 780 as the starting point for a lot of projects recently due to the fact that it's modular by nature, easily upgradable, and has components powerful enough to tackle moderately resource-intensive modern tasks.



The OptiPlex made a great jumping-off point for this project. I wanted an Untangle box with a small form factor so it could be easily incorporated into the physical environment where it would be operating, though not quite as small as the Protectli Vault setup I had used before. I tried to keep the budget around $370, the price of the original Protectli Vault setup, and wanted the build to be at least as powerful as the Vault.

First I took a look at the RAM. The OptiPlex 780 has 4 dual-channel DDR3 slots onboard, more than enough to match the RAM loadout on the Vault. I was able to find an 8GB kit of two DDR3 1600MHz sticks for $56 on Amazon, plenty for what I was building. The 780 came with 4GB of RAM preinstalled, allowing some cost to be recouped; that 4GB might even be enough if the number of services running in the Untangle installation were minimal.

Next was the storage solution. The Vault comes with 120GB of solid-state storage, so I figured a 2.5″ SSD would be a suitable match for the OptiPlex. I found a SanDisk 120GB SSD, again on Amazon, for $60. This would provide quick read/write speeds for typical Untangle operation and open up the possibility of using disk space for swap if the need arose. The 780 comes with a hard drive already installed, ranging between 160GB and 250GB; after the SSD installation, it could be salvaged for other projects or sold to recoup some cost.

Arguably the most important part of this particular build is the network interface. The 780 comes equipped with just 1 network interface onboard out of the box. This, by itself, isn't enough for a functional firewall: there need to be at least 2 ethernet ports, one for the internal connection and one for the external connection. I decided it would be appropriate to incorporate a 4-port 1000Mbps NIC, one-upping the Vault's 3 ports by allowing an additional connection. I purchased the Intel PRO/1000 PT quad-port card from Amazon for $56 (now $50) and, in turn, freed up a 4-port switch I had been using to route local traffic, allowing additional cost reclamation by selling the now-redundant equipment. The NIC had to be low-profile to accommodate the reduced room in the small form-factor OptiPlex. I additionally included a single-port card in the spare PCI slot, bringing the port count to an unprecedented 5.

Finally, I wanted to include a beefy quad-core CPU to again one-up the Protectli Vault. The Q9650 was a workhorse Core 2 Quad chip in its day and still packs a wallop. It can hang with newer processing solutions and would be more than enough for this build, theoretically capable of routing over a gigabit of traffic at any time, and possibly much more depending on how many local services Untangle is running. I was able to secure one from Amazon for $49. Installing the chip, however, was tricky.


During the install I periodically powered up the build to ease troubleshooting if problems arose. The assembly did turn out to be problematic when I installed the NIC and the new processor. Replacing the CPU was probably the most time-intensive step in the process. It involved removing the existing E8500 chip from the OptiPlex, another redundant part that could be sold. The process was made easier by the easily removable heatsink, secured by two screws, and its attached shroud, which detaches easily from the HDD assembly. Thermal paste was then applied to the new Q9650 and the heatsink was reattached. The system did not boot, and the OptiPlex showed the error code "2 3 4" on the lights at the bottom front of the chassis. These lights were accompanied by a solid amber glow from the power button, indicative of CPU issues.

Troubleshooting was easy enough. I had a spare OptiPlex 780 laying around with identical specs, so I removed the Q9650 from the Untangle build and installed it there. Luckily, it booted up, eliminating the possibility that the chip was faulty. I then tried the spare OptiPlex's chip, another Q9650, in the new build. This attempt also failed to boot, producing the same error indicators for a faulty chip. This confirmed the problem was local to the new build and narrowed it down to the board or some part of the CPU assembly. Luckily, the problem was in how the heatsink was mounted, so no faulty hardware was involved. I reattached the heatsink by tightening the screws nearest the DVD drive first instead of the opposite ones. This pressure differential must have seated the CPU properly, because the machine booted up on the next attempt.


The assembly of all the components was relatively painless apart from the CPU hiccup. With the machine up and running and the software configured, we were off to the races. The physical environment was prepared with a small shelf so the box could sit out of the way, anchored to the wall with some wire to prevent any nudges from sending it crashing to the floor. The build was officially ordained with an Untangle sticker on the case.



The final price was $308, and with current prices, this total is just below $300, putting us about $70 below budget.

OptiPlex 780 $87

2x4GB DDR3 1600MHz RAM $56

SanDisk 120GB SSD $60

4-Port NIC $56

Q9650 Processor $49

Total: $308
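A quick check that the parts list above sums to the stated total:

```python
# Prices as purchased; the "current prices" figure in the post would differ.
parts = {
    "OptiPlex 780": 87,
    "2x4GB DDR3 1600MHz RAM": 56,
    "SanDisk 120GB SSD": 60,
    "4-Port NIC": 56,
    "Q9650 Processor": 49,
}

total = sum(parts.values())   # 308, matching the stated build cost
```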

If the micro form factor provided by the Protectli Vault isn't a necessity, a box with a superior CPU and network solution can be built for around $70 less. This box can handle anything that will be thrown at it in the foreseeable future and is powerful enough to utilize all of the features in the Untangle software suite. In this scenario the OptiPlex once again proves to be an optimal solution.

Building a 50TB Workstation/Server

Collecting data is a passion of mine. I've always enjoyed collecting things, and I believe the act of collection is a critical component of the human psyche and experience. Maintaining, curating, and growing collections of data is personally and professionally therapeutic and fun. Collecting data and applying it to everyday situations is a critical part of approaching life in the 21st century, and the better your tools, the better your efficacy. Being able to build these tools yourself puts you in even greater control over your data management solutions and opens the door to unique opportunities to engage with interesting cutting-edge technology.


Building servers to store all the data I've collected over the years is a big priority for me. I don't like to delete things. I don't like to delete different versions of the same thing. Having hardware capable of scaling with and storing my ever-expanding repository of documents, movies, music, data, pictures, games, books, and programs is very important to me. I've seen the detrimental effects of not having this data easily available this year, with 30TB split between the cloud and a physical box at home, an arrangement that isn't particularly integral to or useful for my workflow.

Having the data available and easily accessible is only one part of the equation. Security is the second part. Computer operation is always a trade off between convenience and security. When it comes to this bulk storage, I’ve come to the conclusion that my personal needs would be better met by having this server offline. By having this server airgapped, I feel like I would have more control over what is ingested and egressed and would be better situated to deal with malicious threats like ransomware.

The planned server is only one part of the solution. I hope it can function as a backup, while another server, to be built in the future, handles all the internet-facing and production activity. This would fulfill the offsite-backup requirement of data integrity, making the data that much more secure in the long run and giving the administrator more peace of mind.

I decided to run a non-Windows operating system on this machine. I feel it would require less upkeep, in the form of updates and daily maintenance, and eliminate some of the security woes I've had in the past with Windows machines. I also decided I wanted to use the ZFS filesystem for its data integrity controls and redundancy features, which are superior to traditional RAID; there is no native ZFS support on Windows. First I looked at OpenIndiana, a Solaris-derived distribution that has ZFS baked in, but I was worried about hardware support and future expandability, so it unfortunately might not be an option. I then looked at FreeNAS, a FreeBSD-based distribution for network-attached storage. I wasn't sure it had the capability under the hood I was looking for in a workstation, and since the box wouldn't be connected to a network, a lot of its functionality would go unused. FreeNAS was also limited by its user interface: while it has a robust web interface, the local desktop environment is lacking for use as a workstation.

Securing the hard drives was my first concern when setting up this build. A great deal was found in the form of Western Digital Easystore 8TB external hard drives from Best Buy. These external enclosures house WD80EFAX drives that can be easily "shucked" from the enclosure and used for other projects. They hit the shelves at $159.99 apiece, which is about $50 cheaper than the cheapest standalone internal drive on the consumer market. I decided to buy as many as I could afford, taking an extra 10% off by opening a Best Buy credit card. This is a storage deal you only see once every few years, though the drives do come with some drawbacks.
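The per-terabyte math behind calling this a once-in-years deal, with the 10% new-cardholder discount applied (the $210 bare-drive comparison price is approximate, inferred from the "$50 cheaper" figure above):

```python
shelf_price = 159.99          # WD Easystore 8TB at Best Buy
discount = 0.10               # from opening a Best Buy credit card
size_tb = 8

paid = shelf_price * (1 - discount)   # about $144 per drive
per_tb = paid / size_tb               # about $18/TB

# A comparable bare internal drive at roughly $210 works out to ~$26/TB.
bare_per_tb = 209.99 / size_tb
```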


I started mounting the hard drives in the Nanoxia Deep Silence 1 case and realized that the mounting holes were not in the standard position: I was only able to secure two of the four mounting points in each drive tray. This was concerning, because drives that can give and move in their enclosures will have shorter lifespans. The case would have to stand vertically, so hopefully gravity would provide the same service as the two missing tray mount points. The 1-year warranty is also something to consider compared to the 3-year warranty on most bare drives.

The PSU from a previous build was then put in the tower. Shipped, the Nanoxia DS1 comes with 11 internal 3.5″ slots in the form of two 3-drive cages and one 5-drive cage. One of the 3-drive cages had to be removed for the 750W modular PSU to be installed. This build screams overkill and the PSU is definitely part of that. My reasoning is future-proofing, but it's also nice to find a use for extra parts laying around. The highest load this machine is likely to experience would be several hundred watts less than 750, though all 8 hard drives spinning up at once does create a load that needs to be considered. In addition to installing the PSU, I went ahead and screwed in the motherboard standoffs and did some early wire management to make installation easier.
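With eight 8TB drives, the usable capacity depends on the ZFS pool layout. The post doesn't state which layout was finally chosen, so this sketch just compares the common single-vdev raidz candidates, ignoring ZFS metadata overhead:

```python
def raidz_usable_tb(drives, size_tb, parity):
    """Approximate usable capacity of a single raidz vdev.

    parity=1 is raidz1, 2 is raidz2, 3 is raidz3. Real pools lose a
    little more to metadata and padding, ignored here.
    """
    return (drives - parity) * size_tb

options = {
    "raidz1": raidz_usable_tb(8, 8, 1),   # 56 TB, survives one drive failure
    "raidz2": raidz_usable_tb(8, 8, 2),   # 48 TB, survives two failures
    "raidz3": raidz_usable_tb(8, 8, 3),   # 40 TB, survives three failures
}
```

Any of these lands in the neighborhood of the 50TB in the title; the tradeoff is how many simultaneous drive failures the pool can absorb.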


The motherboard was dry-fitted, assembled, and tested outside of the case to prevent any troublesome troubleshooting. The CPU and RAM were easily fitted and popped in, respectively. The heatsink was dry-fitted to make sure it fit the AM4 socket, despite the box only mentioning AM3. Thermal paste was then applied to the processor and spread to an even coat with a piece of cardboard before the heatsink was applied for a final time. The 2 sticks of 8GB RAM were double-checked to make sure the proper dual-channel slots were being utilized; the slots were staggered on this board.


Installing the M.2 SSD was interesting to do for the first time. I had never had the pleasure of working with one before. The motherboard includes a special standoff for the M.2 SSD and a screw to secure it in place.


After everything was installed it was time to power on the motherboard assembly. This would be done outside of the case on a static-resistant surface first. The PSU needs to power the main ATX connector, the 8-pin CPU power, and at least the power switch on the case. At first it didn’t display. Luckily the B350 motherboard comes with 4 debugging lights which indicate what component is preventing the system from POSTing.


The GPU debug light was on and I did a quick facepalm. I had forgotten that Ryzen series chips do not include integrated GPUs and need a discrete graphics card in order to display. Luckily I was able to cannibalize a GT 1030 from another computer I had laying around. There is a FirePro W4100 on the way for another project that might have to be adopted for this build, but the GT 1030 will do for now. This is definitely something to consider: I might not have bought this Ryzen originally had I foreseen the cost of a discrete video card. Still, I’m satisfied with my purchase so far; $300 for 8 cores is a great deal no matter how you slice it. If I decide to keep the GT 1030, I will need to get a full-profile bracket so it sits flush with the slots on the back of the machine.


With the motherboard POSTing and fitted, the IO shield was installed on the back of the case. Wires in the case were further arranged for management later, and the DVD optical drive was hooked up. FreeNAS was booted up to try out an OS. The system booted fine into the operating system after installation, which is always nice, and installing the OS to the M.2 SSD was humorously fast. I decided to switch over to Ubuntu after seeing FreeNAS’s lack of a desktop environment. OpenIndiana, my other choice, needed some Solaris shell knowledge that I was not particularly in the mood to figure out. “Just Working”™ is something I look for in an OS, and Ubuntu should support everything out of the box, has a DE, and can run ZFS.

I then encrypted the disk and the home folder. These are two basic hardening steps for the OS, and Ubuntu offers to perform both during the installation process. With these two encryption options in place, no one will be able to boot into the system using a rescue CD, DVD, or USB without the password. The M.2 SSD makes this constant encryption work transparent and almost unnoticeable thanks to its roughly 3 GB/s read/write speeds, something that might bottleneck performance on other drive technologies. The speed of this little device is shocking: an install that can take as long as fifteen minutes was done in less than three, including the time-intensive encryption operations. This is a fantastic form factor that makes SATA SSDs seem like they crawl.


After the basics were up and functioning it was time to connect everything on the board: audio ports, USB ports, HDD lights, power lights, reset switch, fans. The SAS controller card went in next, followed by the HDD array. The SAS card booted up properly the first time and occupied the second PCIe x16 slot on the motherboard. I decided it would be best to install the drives one at a time. This way I could erase the preinstalled partition left over from the WD Easystore software, label the drives, and test them individually. Another issue arose over the form factor of these drives: they would not clear the back of the cage, which only allowed one side of the clips to secure the drive in place, further adding to the instability problems. It would be possible to alleviate this by modifying the cages themselves, but that is not something I wanted to jump straight into. After everything was checked out and noted it was time to install the ZFS filesystem.


ZFS has to be downloaded from the Ubuntu repository. I wanted to create a whitelist that only allowed communication from the server to the Ubuntu repository, but messing with iptables was not providing the URL-based functionality I was used to with other solutions like Untangle, so I decided it was easier to deal with it on the hardware firewall later. sudo apt-get install zfs is all it took to get the filesystem utility ready to operate. I still need to explore ZFS as a system; this server will give me a platform to experiment before I bring the 25TB of data down from the Amazon cloud.

The wiring for the drives was an extremely tight fit. There was not enough room for the cable management I wanted to perform. The side panel was barely able to latch into place, and even then it was bulging where the wires were most crowded. Most of the slack wiring is on the open side of the case. A possible mod would be cutting a hole where the crowding occurs and installing some kind of distended chamber for the excess wiring. This is something to consider in the future.

Below is the list of parts and a link to the list on PCPartPicker.

PCPartPicker link

There are definitely some things I want to address with this project in the near future. The case either needs to be modified to allow more cable room, or the drives need to be refitted so they dump cables into the front side of the case. This might also alleviate the crowding against the drive cages.

I want to find a good use for the Ryzen 7. Video capture was one of the first things that came to mind. I’d like to include a capture card in this build; having a second system to capture video greatly increases the intensity of operations that can be performed on a primary machine without the processing overhead of recording on the same machine.

I need to install the 2 hard drive hot-swap bays. This will fill all the remaining 5.25″ slots on the case. Having two hot-swap bays makes the ingest process easier, allowing two drives to be ingested or egressed at once, as well as enabling duplication operations.

I’d like to investigate additional uses for the build. It hasn’t been completely put into production, so the finer details of operation are still up in the air. This is one of the most powerful machines I’ve ever had the opportunity to put together. I can’t wait to begin sorting and curating the data on this machine and expanding its functionality in the future. Hopefully “Ratnest” has many years of hoarding data ahead of it.


After rereading this post I realized I forgot to mention the roughly 50TB of total storage: 8 x 8TB is 64TB of raw storage, which shrinks to about 47TB of usable space when using ZFS with 1 drive of redundancy.
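For the curious, the back-of-envelope math works out roughly like this. This is a sketch, not the exact ZFS allocator behavior, which also reserves space for metadata and internal overhead, hence the slightly lower real-world figure:

```python
# Rough estimate of usable RAIDZ1 capacity (hypothetical helper; real ZFS
# reserves additional space for metadata, so actual usable space is lower).
def raidz1_usable_tib(drive_count, drive_tb):
    # Drives are sold in decimal terabytes (10**12 bytes).
    raw_bytes = drive_count * drive_tb * 10**12
    # RAIDZ1 gives up one drive's worth of space to parity.
    usable_bytes = raw_bytes * (drive_count - 1) / drive_count
    # Operating systems report binary tebibytes (2**40 bytes).
    return usable_bytes / 2**40

print(round(raidz1_usable_tib(8, 8), 1))  # 50.9 TiB before metadata overhead
```

The gap between 64TB on the box and ~47TB in practice is mostly parity plus the decimal-versus-binary unit difference, with ZFS bookkeeping taking the rest.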

By reversing the direction of the drives in the cage, I was able to route the cords in a manner that allowed the side panel to fit on the case. This mounting orientation also let the drives clear the back of the cage, alleviating the need for case modification, always a plus when it is not completely necessary.


Being able to situate these drives in the case and close it without a visible bulge in the side panel effectively completed this build. It is now operational and should provide enough storage for all the data I’m ingesting for the next couple of years at least.

All the dense drives made this the heaviest build I have ever constructed, weighing in at almost 50 pounds, nearly a pound for every terabyte.

Here’s to hoping for a successful archival workflow in “Ratnest’s” future.


Writing a RAID Calculator in Python

RAIDr is a RAID calculator written in Python. It accepts input from the user and calculates how a certain configuration of hard drives will be allocated across different RAID levels. This is the first program I ever wrote and the project that got me interested in programming. It’s not the most efficient, and there are alternate ways to approach this problem, but I’m happy with how it turned out. It is still incomplete, but hopefully someone can find it useful. The code is commented with thoughts on how it should function and things that need to be done. I’m not a professional Python programmer, and my methodology might not be completely “pythonic”, but this was a great project for me to gain exposure to programming and syntactical logic. Any constructive criticism is welcome.

## Josh Dean
## RAIDr
## Created: 2/14/2017
## Last Edit: 3/21/2017
## Known bugs:

## Global Declarations

hddnumvar = 0
hddsizevar = 0
raidvar = 0
hddwritevar = 0 ## used to mitigate reference error in RAID 10 calculation

## Functions

def hdd_num():
	global hddnumvar
	print ("\nHow many drives are in the array?") ## eventual formatting errors will come from here
	hddnumvar = input() ## necessary if variable is global?
	if hddnumvar == 1:
		print ("Error: Can't create a RAID with 1 disk.")
	elif hddnumvar > 1:
		print hddnumvar, "drives in the array"
		print "----------------------- \n"
	else:
		print ("I don't know what you entered but it's incorrect.")

def hdd_size(): ##needs error parsing
	global hddsizevar
	print ("What is the capacity of the drives? (gigabyte)")
	hddsizevar = input() ## possible to use line break with input
	print hddsizevar, "raw GiB per disk"
	print "----------------------- \n"
	print ("%s drives in the array of %s GiB each." % (hddnumvar, hddsizevar)) ## there was a return value here; seemed to hang with a syntax error
	## removed the % format for something else; including the arguments inside the parentheses fixed it

def raid_prompt(): ## update this to reflect actual raid configurations; calls raid_calculation; all edits and calls should start here
	print ("\n1 - RAID 0")
	print ("2 - RAID 1")
	print ("3 - RAID 5")
	print ("4 - RAID 5E")
	print ("5 - RAID 5EE")
	print ("6 - RAID 6")
	print ("7 - RAID 10")
	print ("8 - RAID 50")
	print ("9 - RAID 60 \n")
	raidvar = input("What raid configuration? \n")
	raid_calculation(raidvar)

def raid_calculation(raidvar): ## just handles the menu
	if raidvar == 1:
		hddtotal = hddsizevar * hddnumvar ## variables need to go first
		print "\n-----------------------" ## \n doesn't need a space to separate; best to put this in front
		print ("RAID 0 - Striped Volume")
		print hddnumvar, "drives in the array"
		print hddsizevar, "raw GiB in the array per disk"
		print "%s raw GiB in the array total" % hddtotal
		print "Total of", hddnumvar * hddsizevar, "GiB in the RAID array." ## this need alternative wording throughout the program
		print "%s times write speed" % hddnumvar ## Can I put these two prints on one line? Multiple % variables?
		print "%s times read speed" % hddnumvar
		print "No redundancy"
		print "No hot spare"
		print "----------------------- \n"
	elif raidvar == 2:
		print "\n-----------------------"
		print ("RAID 1 - Mirrored Volume")
		print hddnumvar, "drives in the array"
		print hddsizevar, "raw GiB per disk"
		print "Total of", hddsizevar, "GiB in the array."
		print "%s times read speed" % hddnumvar
		print "No write speed increase"
		hddredunvar = hddnumvar - 1
		print "%s disk redundancy" % hddredunvar
		print "No hot spare"
		print "----------------------- \n"
	elif raidvar == 3:
		if hddnumvar < 3:
			print "\nYou need at least 3 disks to utilize RAID 5"
		else:
			print "\n-----------------------"
			print ("RAID 5 - Parity")
			print hddnumvar, "drives in the array"
			print hddsizevar, "raw GiB per disk"
			print "Total of", (hddnumvar - 1) * hddsizevar, "GiB in the array."
			hddreadvar = hddnumvar - 1
			print "%s times read speed" % hddreadvar
			print "No write speed increase"
			print "1 disk redundancy"
			print "No hot spare"
			print "----------------------- \n"
	elif raidvar == 4:
		if hddnumvar < 4:
			print "\nYou need at least 4 disks to utilize RAID 5E\n"
		else:
			print "\n-----------------------"
			print ("RAID 5E - Parity + Spare")
			print hddnumvar, "drives in the array"
			print hddsizevar, "raw GiB per disk"
			print "Total of", (hddnumvar - 2) * hddsizevar, "GiB in the array."
			hddreadvar = hddnumvar - 1
			print "%s times read speed" % hddreadvar
			print "No write speed increase"
			print "1 disk redundancy"
			print "1 hot spare"
			print "----------------------- \n"
	elif raidvar == 5:
		if hddnumvar < 4:
			print "\nYou need at least 4 disks to utilize RAID 5EE\n"
		else:
			print "\n-----------------------"
			print ("RAID 5EE - Parity + Spare")
			print hddnumvar, "drives in the array"
			print hddsizevar, "raw GiB per disk"
			print "Total of", (hddnumvar - 2) * hddsizevar, "GiB in the array."
			hddreadvar = hddnumvar - 2
			print "%s times read speed" % hddreadvar
			print "No write speed increase"
			print "1 disk redundancy"
			print "1 hot spare (distributed)"
			print "----------------------- \n"
	elif raidvar == 6:
		if hddnumvar < 4:
			print "\nYou need at least 4 disks to utilize RAID 6\n"
		else:
			print "\n-----------------------"
			print ("RAID 6 - Double Parity")
			print hddnumvar, "drives in the array"
			print hddsizevar, "raw GiB per disk"
			print "Total of", (hddnumvar - 2) * hddsizevar, "GiB in the array."
			hddreadvar = hddnumvar - 2
			print "%s times read speed" % hddreadvar
			print "No write speed increase"
			print "2 disk redundancy"
			print "No hot spare"
			print "----------------------- \n"
	elif raidvar == 7:
		if hddnumvar < 4:
			print "\nYou need at least 4 disks to utilize RAID 10\n"
		elif (hddnumvar % 2 == 1):
			print "\nYou need an even number of disks to utilize RAID 10\n"
		else:
			print "\n-----------------------"
			print ("RAID 10 - Stripe + Mirror")
			print hddnumvar, "drives in the array"
			print hddsizevar, "raw GiB per disk"
			print "Total of", (hddnumvar / 2) * hddsizevar, "GiB in the array."
			hddwritevar = hddnumvar / 2 ## actual write speed multiplier
			print "%s times read speed" % hddnumvar
			print "%s times write speed" % hddwritevar
			print "At least 1 disk redundancy"
			print "No hot spare"
			print "----------------------- \n"
	elif raidvar == 8: ## bookmark, need formulas
		if hddnumvar < 6:
			print "\nYou need at least 6 disks to utilize RAID 50\n"
		else:
			print "\n-----------------------"
			print ("RAID 50 - Parity + Stripe")
			print hddnumvar, "drives in the array"
			print hddsizevar, "raw GiB per disk"
			print "Total of", (hddnumvar - 2) * hddsizevar, "GiB in the array." ## assumes two parity groups
			hddreadvar = hddnumvar - 2
			##print "%s times read speed" % hddreadvar
			##print "No write speed increase" ## Although overall read/write performance is highly dependent on a number of factors, RAID 50 should provide better write performance than RAID 5 alone.
			print "2 disk redundancy"
			print "No hot spare"
			print "----------------------- \n"
	elif raidvar == 9: ## bookmark, need formulas
		if hddnumvar < 8:
			print "\nYou need at least 8 disks to utilize RAID 60\n"
	elif raidvar > 9:
		print ("Error: Please select a number between 1 and 9")
	elif raidvar == 0: ## additional error parsing required here
		print ("Error: Please select a number between 1 and 9")
	menu_prompt() ## ubiquitous for all loop items that aren't errors

def disk_num_prompt(): ## this will eventually need to accept arguments that are context sensitive for raid type and disk requirements; perhaps handle this in the raid_calculation function
	global hddnumvar
	print "Adjust number of disks?"
	print "1 - Yes"
	print "2 - No"
	disknummenuvar = input()
	if disknummenuvar == 1:
		hddnumvar = input("\nHow many drives are in the array? \n")
		if hddnumvar == 1:
			print "Error: Can't create a RAID with 1 disk."
		elif hddnumvar > 1:
			print "\nUpdated"
			print hddnumvar, "drives in the array" ## displays once for every loop, hdd_num_input for mitigation
		else:
			print ("I don't know what you entered but it's incorrect.")
	elif disknummenuvar != 2:
		print ("I don't know what you entered but it's incorrect.")

#below is the menu for the end of the selected operations
def menu_prompt(): ## need additional option to go to GiB to GB converter?
	print "1 - RAID menu"
	print "2 - Quit"
	print "3 - Start Over"
	menu = input()
	if menu == 1:
		raid_prompt() ## looping; quit() function is ending script, will need revision
	elif menu == 2:
		print "Cya"
	elif menu == 3:
		start() ## start over from the top
	elif menu == 0:
		print "Error: Please select 1, 2, or 3 \n"
	elif menu > 3:
		print "Error: Please select 1, 2, or 3 \n"
		print "quit fucking around" ## formatting

def data_transfer_jumpoff(): ## BOOKMARK
	print "What is the transfer speed? Gigabytes, please"
	transfervar = input()
	print "What denomination of data size?"
	print "1 - byte"
	print "2 - kilobyte"
	print "3 - megabyte"
	print "4 - gigabyte"
	print "5 - terabyte"
	transferunit = input()
	print "How much data?"
	transferamount = input()

## Start Prompt, this needs to be expanded upon
def start(): ##easier way to reset all these variables?
	startmenu = 0
	raid_var = 0 ## should be an inline solution for this in its own function, it just works
	hddnumvar = 0
	hddsizevar = 0
	raidvar = 0
	print("\nChoose an operation") ## line break might cause formatting errors look here first
	print "1 - RAID calculator"
	print "2 - Data Transfer Calculator"
	startmenu = input()
	if startmenu == 1:
		hdd_num() ## these need to be called in a more functional manner
		hdd_size()
		raid_prompt()
	elif startmenu == 2:
		print "Not supported\n" ## will require edit

#main scripting

start()

The operation of the program begins by calling the start function. I put this function call at the bottom of the script so it would be easily accessible; start() is the last function defined before the initial call. From this menu the user is asked which of the two currently implemented operations they wish to perform: the RAID calculator or the Data Transfer Calculator. The Data Transfer Calculator is still a work in progress.



When the RAID calculator is selected, the user is queried about the number of hard drives and their capacity through two functions: hdd_num and hdd_size. These functions are called several times during a session of the program, so I thought it was appropriate to make them their own functions. They read input from the user and set the appropriate global variables for use during the calculation.

Next, the available RAID formats are listed, and the user chooses which calculation to perform on the previously set variables. In this version of the code, RAID 0 through 10 work fine, but the functions for RAID 50 and 60 are missing their capacity calculations since the formulas are not as straightforward. Once the selection is made, the results of the calculations are displayed.
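For comparison, the per-level arithmetic the calculator performs can be sketched much more compactly in modern Python 3. This is an illustrative rewrite, not the code as published, and it covers only the levels whose formulas are straightforward:

```python
# Compact sketch of RAIDr's capacity arithmetic (illustrative rewrite).
# Returns usable GiB for a level, or raises on invalid configurations.
def usable_gib(level, disks, size_gib):
    if disks < 2:
        raise ValueError("Can't create a RAID with fewer than 2 disks")
    if level == 0:            # striping: every disk contributes
        return disks * size_gib
    if level == 1:            # mirroring: one disk's worth of capacity
        return size_gib
    if level == 5:            # one disk's worth lost to parity
        if disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (disks - 1) * size_gib
    if level == 6:            # two disks' worth lost to double parity
        if disks < 4:
            raise ValueError("RAID 6 needs at least 4 disks")
        return (disks - 2) * size_gib
    if level == 10:           # striped mirrors: half the disks
        if disks < 4 or disks % 2:
            raise ValueError("RAID 10 needs an even number of disks, at least 4")
        return disks // 2 * size_gib
    raise ValueError("Unsupported RAID level")

print(usable_gib(5, 8, 8000))  # 56000
```

Raising exceptions instead of printing error strings is one of the "alternate ways to approach this problem" mentioned above: it separates the math from the menu handling.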

At the end of the operation, users are presented with several options. They can change the variables and recalculate, or change the RAID calculation itself. The main menu can also be called to perform a data transfer calculation in the future. It might be beneficial to pass along the size of the array and calculate the transfer time by just asking the user for the connection speed; this data could be appended to the RAID data. It might also be beneficial to include a memory function that remembers specific RAID configurations, writing them to a text file that can be loaded on subsequent runs.
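A minimal sketch of how that memory function could look, assuming JSON as the text format. The file name and structure here are hypothetical, not part of RAIDr:

```python
# Hypothetical "memory function": persist named RAID configurations as JSON.
import json

def save_config(path, name, disks, size_gib, level):
    # Load any existing configurations, tolerating a missing or empty file.
    try:
        with open(path) as f:
            configs = json.load(f)
    except (OSError, ValueError):
        configs = {}
    configs[name] = {"disks": disks, "size_gib": size_gib, "level": level}
    with open(path, "w") as f:
        json.dump(configs, f, indent=2)

def load_config(path, name):
    with open(path) as f:
        return json.load(f)[name]

save_config("raidr_configs.json", "ratnest", 8, 8000, 5)
print(load_config("raidr_configs.json", "ratnest")["disks"])  # 8
```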

Another function that might be useful is the reconciliation of GiB values and GB values. This would help if users are working with an NTFS filesystem. It might also be useful to include other filesystem types in the calculations to get the most accurate numbers possible and maximum compatibility.
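The reconciliation itself is a one-liner in each direction; the only subtlety is remembering that drive makers count in decimal and operating systems count in binary:

```python
# Decimal GB (what drive makers advertise) versus binary GiB (what most
# operating systems report).
def gb_to_gib(gb):
    return gb * 10**9 / 2**30

def gib_to_gb(gib):
    return gib * 2**30 / 10**9

print(round(gb_to_gib(8000), 1))  # an "8 TB" drive reports about 7450.6 GiB
```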

Again, this was fun to make and I find myself using it from time to time. There is still a lot of work to do before the program can stand on its own. Taking user input comes with an interesting set of problems that could allow certain inputs to change the functioning of the program. If the user isn’t intentionally trying to break the program this shouldn’t be an issue; the instructions and commands are very clear when user input is necessary. There is also some formulaic work to be done on the two newest RAID formats.
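One concrete example of those input problems: Python 2's input() passes the typed text through eval(), so an expression typed at a prompt actually executes as code. A safer pattern, sketched here in Python 3 as an illustration rather than a patch to RAIDr, parses and validates explicitly:

```python
# Safer numeric prompt: parse the text explicitly and re-prompt on garbage,
# instead of letting input() evaluate arbitrary expressions (Python 2 behavior).
def ask_int(prompt, minimum=1):
    while True:
        raw = input(prompt)
        try:
            value = int(raw)
        except ValueError:
            print("I don't know what you entered but it's incorrect.")
            continue
        if value < minimum:
            print("Please enter a number of at least %d." % minimum)
            continue
        return value
```

With something like this, hdd_num and hdd_size collapse into single calls and the scattered "I don't know what you entered" branches disappear.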

Python was a great language for grasping the beginning intricacies of programming, and I feel capable of making even more intricate programs now. Combining the operating structure of something like RAIDr with the GIS functions illustrated below would allow easy semi-automatic scripting of tasks. The sky is once again the limit.

Mapping Botnets

A botnet is a collection of compromised computers controlled by an individual or a group of malicious actors. You may have heard the term “owned” thrown around online or in gaming communities; the origin of the word comes from taking control of another computer. If I get you to run a Trojan that places a backdoor on your computer, leverage this backdoor to escalate my privileges on the system, then deescalate yours, you no longer own your computer. Your computer is “owned” and administered by someone else, likely remotely. A botnet is simply a collection of these compromised machines working in tandem: commanding and controlling each other, pushing updates, mining cryptocurrency, running port scans, launching DDoS attacks, proxying an attacker’s connection, and hosting payloads for attacks.

The larger the botnet the better, but other features like connection speed, hardware, and topology on a network ultimately define a computer’s usefulness in a botnet. Don’t get me wrong, a hacker is not going to be picky about the computers that are added. There is a job for every piece of hardware.

The larger the botnet, though, the more attention it draws. The larger botnets can be leased out on black markets for attacks and other malicious activity, so there is a sweet spot where a botnet is powerful enough to be leveraged but low-key enough to fly under the radar. Flying under the radar is something Mirai, a botnet from late 2016, did relatively well. Mirai took control of IoT (Internet of Things) devices with weak passwords, including TV boxes, closed-circuit TV equipment, home thermostats, and other “things”. These devices are set up to run without administration, so once they were owned by an attacker, they were likely to remain owned and under the radar. vDos, a DDoS-for-hire operation, holds the record for the largest such service; its owners were arrested in Israel in 2016.

I’ve been dealing with what I suspect to be a botnet on my home network, and I got lucky the other day after installing a home firewall. After blocking a suspect connection I was swarmed with thousands of attempted sessions from all over the world. My working theory is that this is a botnet using P2P networking for its command and control infrastructure, and it was trying to see where the computer it had lost contact with went. I was able to export this 5-minute period of connections to a CSV file and plot it in ArcMap. The following map is what was produced.


botnet activity 1.png

I’m a firm believer that every problem should be approached, at least hypothetically, from a geographic perspective. By putting this data on a map, an additional perspective is provided that can be analyzed. Looking at this map for the first time was also surprisingly emotional for me: I have been chasing this ghost through the wire since December 2016 and, through the geographic perspective, was finally able to see and size up the possible culprit.

I had to filter the United States out of the dataset because I was running an upload to an Amazon web service at the time, which would have added extraneous United States coordinates and skewed the data. This data would later be parsed and included.
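That filtering step only takes a few lines of Python before the data ever reaches ArcMap. This is a sketch: the column names ("country", "latitude", "longitude") are assumptions, since Untangle's actual export headers may differ:

```python
# Sketch: drop rows for one country from a session CSV before mapping.
# Column names are assumed, not taken from Untangle's real export format.
import csv

def filter_sessions(in_path, out_path, exclude_country="United States"):
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            # Keep every session that did not originate in the excluded country.
            if row["country"] != exclude_country:
                writer.writerow(row)
```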

Immediately I was drawn to the huge cluster in Europe. If this is truly the botnet I’ve been looking for, Europe would be a good place to start looking. There were 7000 sessions in the dataset. I’m grateful that the Untangle firewall includes longitude and latitude coordinates in the data it produces; this made the data migration easy and painless.

I got lucky again two weeks later when I got another swarm of sessions from what I assume to be the same botnet. This was, again, after I terminated a suspect connection, suggesting that this experiment is repeatable, which would provide an avenue for reliable data collection. I then took to the new ArcGIS Pro 2.0 to plot some more maps. With 2 sets of data, analysis could be taken to the next level through comparison.


Full Resolution


First I have to say that this new ArcGIS interface is beautiful. It’s reminiscent of Microsoft Office due to the ribbon toolbar layout. I found the adjustment period quick and the capability expanded compared to earlier versions and standalone ArcMap. After using ArcMap I was surprised to see how responsive and non-frustrating this was to use. I ran ArcMap on a bleeding-edge system with 16GB of RAM and saw substantial slowdown, yet I was able to run this suite on an old OptiPlex with 4GB of RAM with no noticeable slowdown. It is truly a pleasure to work with.


Full Resolution


Using the second set of data I was able to produce the map above. I went ahead and created a high-resolution image so I could see the finer geographic details of the entities involved. This dataset includes the United States because I wasn’t running any background processes at the time the data was collected, so I can safely assume this map represents only suspected botnet connections. I was glad to see a similar distribution, with Europe continuing to produce the majority of the connections. The real fun begins when we combine these two datasets, but first let’s take a moment to look over the patterns in the above map.

Just at a glance we can see there is a disproportionate number of connections originating in Europe. There seem to be 4 discernible areas of concentration there: the United Kingdom, the Benelux region, the Balkans, and Moscow. Looking at the United States, we see a majority of connections coming from the Northeast, and across the Saint Lawrence in Canada. Florida is represented, as are the Bay Area and Los Angeles, and Vancouver seems to have a strong representation. Connections in South America are concentrated along the mouth of the Río de la Plata, where the major population centers are, and the coast of Brazil. A lot of South American tech operations happen in this region; if there were compromised computers on the continent, this would be an appropriate area to find them.

China seems to be underrepresented. The last network security maps I made were overwhelmingly populated by Chinese IPs, yet this map seems to feature only Beijing of the three major coastal cities. The Korean peninsula has a strong representation. Central and Southern Asia are not strongly represented except for India, which, like China, seems underrepresented considering the population and the number of internet-connected devices in the country.

It turns out Singapore is a large player in the network, though it’s not immediately apparent given Singapore’s small footprint. These point maps don’t properly represent the volume of connections in areas where many connections originate from a small area. By using heatmaps we can combine the spatial and volume elements in an interesting way.

Next we’ll look at the combination of these two point databases.




I included the lower-resolution map above so the points could be easily seen. A level of detail is lost, but it allows the map to be easily embedded in resolution-sensitive media like this webpage.

The idea here was that, since a majority of the points overlap, a comparison of changes could be made across this two-week period. I parsed the United States data from the first dataset so it could be included and compared. By focusing on which dataset is layered on top, we can infer which computers were removed from the botnet, either through being cleaned up or going offline, and which computers were added in this two-week period. I’m operating under the assumption that this is a P2P botnet, so any membership queries are being performed by almost every entity in the system. I’m also assuming this data represents the entirety of the botnet.

When we put the original dataset created on 7-31-17 above the layer containing the activity on 8-13-17 we’re presented with an opportunity for temporal as well as spatial analysis.


Full Resolution


By putting the 7-31-17 dataset on top, we’re presented with a temporal perspective in addition to the geographic one. Visible purple dots are not included in the first dataset, or else they would be overlapped by a green dot; these visible purple dots indicate machines that have presumably been added to the botnet. With more datasets it would be possible to track the growth of these networks.


Above is a reprojection of the data with the 8-13-17 dataset on the top layer. The temporal perspective shifts when we change the ordering. Visible green dots from the first set may indicate machines that were no longer part of the botnet when the second dataset was created. Machines leaving a botnet is plausible, but it’s also possible they were simply offline or unable to establish a session. It’s entirely possible that, even with a P2P networking scheme, the botnet does not ping every system that appears to go offline from every machine on the network; that would seem like a serious operational-security error by the botnet operator. Then again, it’s also entirely possible they’re not trying to cover their tracks and are employing a “spray and pray” tactic, running the botnet at full capability and not worrying about the consequences. A full-resolution image is linked in the caption.

By looking at both sets under the assumption that the entire botnet revealed itself, we can see whether the botnet is growing or shrinking. If there are more visible purple dots on the map where green dots are layered on top than visible green dots on the map where purple dots are layered on top, the botnet is growing. If the opposite is true, the botnet is shrinking.
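That layering comparison boils down to set arithmetic. A sketch with made-up addresses, assuming each dataset has first been reduced to its unique source hosts:

```python
# The layering comparison as set arithmetic on source hosts.
# Addresses below are illustrative placeholders, not from the real captures.
day1 = {"5.2.3.4", "31.0.0.7", "91.8.8.8"}                # 7-31-17 hosts
day2 = {"31.0.0.7", "91.8.8.8", "77.1.2.3", "88.4.5.6"}   # 8-13-17 hosts

added = day2 - day1     # purple dots visible when green is layered on top
removed = day1 - day2   # green dots visible when purple is layered on top

print(len(added), len(removed))  # 2 1
print("growing" if len(added) > len(removed) else "shrinking or stable")
```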

Full Resolution


The most interesting feature of these comparison maps is the predilection for certain countries and regions. Looking at the rotation of computers, we see the Northeast United States and Florida as hotspots for this activity. The reason is not clear, but this serves as a starting point for additional research. It’s important to remember that data reflects population: major cities all show signs of activity, and major activity concentrations can only be empirically defined by normalizing for population. The activity seems to proliferate from areas where activity is already established; perhaps some kind of localized worm activity is used for propagation. Now let’s take a look at the real elephant in the room: Europe.


botnet_activity_both_days_8_13_17_top_europe extent_marked.png


The majority of machines seem to be in Europe, and certain regions have concentrated activity. They are marked in red above: from left to right, the UK, the Netherlands, and Hungary. There are also concentrations in Switzerland, Northern Italy, Romania, and Bulgaria.

The main three concentrations pose interesting questions. Why is there so much activity in the UK? The Netherlands concentration can be explained by the number of commercial datacenters and VPS operations; a lot of for-rent services operate out of the Netherlands, making it a regular on IP address queries. Hungary is an interesting and befuddling find, as there is no dominating information systems industry there like in the Netherlands. What do all these countries have in common? Why are the concentrations so specific to borders? Answering these questions will be critical to solving the mystery. Next we’ll try our hand at some spatial analysis.




A kernel density map, also known as a heatmap, shows the volume of data in geographic space. It is an appropriate spatial analysis to run alongside the point map because it reveals the volume of connections that may be buried under a single point: if one point initiates 100 sessions, it is still represented as one point. These heatmaps reveal spatial perspectives that the point maps cannot.
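The idea can be sketched in a few lines: a kernel density surface sums a smoothing kernel centered on every point, so repeated sessions at one coordinate raise the surface there even though the point map draws a single dot. A minimal NumPy illustration with hypothetical points (QGIS's heatmap tool does the real rasterization):

```python
import numpy as np

def kernel_density(points, grid, bandwidth=2.0):
    """Sum an isotropic Gaussian kernel from every point at each grid cell."""
    # Squared distance from each grid cell to each point, shape (cells, points).
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)

# Hypothetical sample: three sessions from one Amsterdam point, one from
# London, two from Moscow (lon, lat pairs).
points = np.array([[4.90, 52.37]] * 3 + [[-0.13, 51.51]] + [[37.62, 55.75]] * 2)
cells = np.array([[4.90, 52.37], [-0.13, 51.51], [37.62, 55.75]])
density = kernel_density(points, cells)
# The stacked Amsterdam sessions dominate the surface, even though the
# point map would render all three locations as one dot each.
```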


Full Resolution


Immediately we see some interesting volumes that were hidden in the point map. Moscow lights up in this representation; the tight circular pattern indicates that many connections came from a small geographic area. By dividing the data using standard deviation, the biggest players show up in red. There is strong representation in Toronto, Canada that wasn't apparent on the other maps. Our focus areas of the UK and the Netherlands are also represented. Peripheral areas like Northern France and Western Germany light up on this map, suggesting concentrated activity, perhaps in the large metro areas. Seoul, South Korea lights up, suggesting large volumes of connections, and there is notable activity in Tokyo. As mentioned before, Singapore lights up on this map. Singapore is a small city-state on the tip of peninsular Malaysia on the Malacca Strait, and connections there would be difficult to distinguish given the city's small area. This raises a peculiar question: why is this botnet so particular about boundaries? Singapore is crawling with connections, but neighboring Malaysia, which possibly shares some of the same internet infrastructure, is quiet on the heatmap.




As with the other maps, I created small and large resolution versions. For these kernel density maps there are several options for representing the data; I chose standard deviation and geometric delineations. Each provides a unique perspective, and every additional perspective might reveal something new. The geometric map "smooths" the distribution of the data, showing areas that might not have been significant enough to appear in the standard deviation representation.
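The two schemes differ only in where they place the class edges: standard-deviation breaks sit at fixed multiples of sigma around the mean, so only outliers reach the top (red) class, while geometric breaks grow by a constant ratio between the minimum and maximum, spreading a skewed distribution across more classes. QGIS computes these for you; the sketch below, with made-up density values, just illustrates the difference:

```python
import numpy as np

# Hypothetical per-cell density values with a heavy right skew.
values = np.array([1, 2, 2, 3, 5, 8, 13, 40, 90, 250], dtype=float)

# Standard-deviation breaks: class edges at mean + k*sigma.
mean, sigma = values.mean(), values.std()
std_breaks = [mean + k * sigma for k in (-1, 0, 1, 2)]

# Geometric-interval breaks: edges grow by a constant ratio, so the
# crowded low end of the distribution is split more finely.
n_classes = 5
ratio = (values.max() / values.min()) ** (1 / n_classes)
geo_breaks = [values.min() * ratio ** k for k in range(1, n_classes)]
```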


Full Resolution


In the future it might be beneficial to select by country borders and make a choropleth map showing the number of sessions per country. This would reveal countries with multiple sessions originating from the same coordinates.
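The table behind such a choropleth is just a per-country session count. A minimal sketch, with hypothetical country codes standing in for the values resolved from each session's source coordinates:

```python
from collections import Counter

# Hypothetical two-letter country codes, one per session; the real values
# would come from a spatial join of the point layer to country boundaries.
session_countries = ["NL", "NL", "NL", "GB", "HU", "HU", "SG"]
sessions_per_country = Counter(session_countries)
# Each count would then be joined back to the boundary layer and used as
# the choropleth's fill value.
```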

It might also be beneficial to parse the data further, adding appropriate symbology and additional maps for the points present in both sets as well as the points unique to each set. This set of three maps would present the data in an additional spatial context, allowing another perspective for analysis.

As always, I will be on the hunt for additional data. The next step for this project is finding the conditions for reproducing this swarm of connections. If it does turn out to be easily reproducible, the real fun begins: additional data would be collected at regular intervals and mapped accordingly. With more data comes more insight. Automating the data collection and mapping would be the final step. At some point the geographic perspective will be so apparent that the next steps become clear.

Until then I’m still on the warpath. Never has research been so personal to me.

Imgur Album