Collecting data is a passion of mine. I’ve always enjoyed collecting things, and I believe the act of collection is a core part of the human psyche and experience. Maintaining, curating, and growing collections of data is personally and professionally therapeutic and fun. Collecting data and applying it to everyday situations is a critical part of life in the 21st century, and the better your tools, the better your efficacy. Being able to build those tools yourself puts you in even greater control of your data management solutions and opens the door to unique opportunities to engage with interesting cutting-edge technology.
Building servers to store all the data I’ve collected over the years is a big priority for me. I don’t like to delete things. I don’t even like to delete different versions of the same thing. Having hardware capable of scaling with my ever-expanding repository of documents, movies, music, data, pictures, games, books, and programs is very important to me. I’ve seen the detrimental effects of not having this data easily available this year, with 30TB split between the cloud and a physical box at home, an arrangement that isn’t particularly integrated or useful for my workflow.
Having the data available and easily accessible is only one part of the equation. Security is the second part. Computer operation is always a trade-off between convenience and security. When it comes to this bulk storage, I’ve come to the conclusion that my personal needs would be better met by keeping this server offline. With the server airgapped, I would have more control over what is ingested and egressed and would be better situated to deal with malicious threats like ransomware.
The planned server is only one part of the solution. I hope this server can function as a backup, while another server, to be built in the future, handles all the internet-facing and production activity. This would fulfill the offsite-backup requirement of data integrity, making the data that much more secure in the long run and providing more peace of mind for the administrator.
I decided to run a non-Windows operating system on this machine. I feel it would require less maintenance, in the form of updates and daily upkeep, and eliminate some of the security woes I’ve had in the past with Windows machines. I also decided I want to use the ZFS filesystem for its data-integrity guarantees and redundancy operations, which are superior to traditional RAID, and there is no native ZFS support on Windows. First I looked at OpenIndiana, a Solaris-derived distribution with ZFS baked in, but I was worried about hardware support and future expandability, so it unfortunately might not be an option. I then looked at FreeNAS, a BSD-based distribution for network-attached storage. I wasn’t sure it had the capability under the hood I was looking for in a workstation, and since the box wouldn’t be connected to a network, much of its functionality would go unused. FreeNAS was also limited by its user interface: while it has a robust web interface, the local desktop environment is lacking for workstation use.
Securing the hard drives was my first concern when setting up this build. I found a great deal in the form of Western Digital Easystore 8TB external hard drives from Best Buy. These external enclosures house WD80EFAX drives that can easily be “shucked” from the enclosure and used for other projects. They hit the shelves at $159.99 apiece, about $50 cheaper than the cheapest standalone internal drive on the consumer market. I bought as many as I could afford, taking an extra 10% off by opening a Best Buy credit card. This is a storage deal you only see once every few years, though the drives do come with some drawbacks.
I started mounting the hard drives in the Nanoxia Deep Silence 1 case and realized that the mounting holes were not in the standard position. I was only able to secure two of the four mounting points in each drive tray. This was concerning, because drives that can give and move in their enclosures tend to have shorter lifespans. The case would have to sit vertically, so hopefully gravity would provide the same service as the two missing tray mount points. The 1-year warranty is also something to consider compared to the 3-year warranty on most bare internal drives.
The PSU from a previous build was put in the tower. As shipped, the Nanoxia DS1 comes with 11 internal 3.5″ slots in the form of two 3-drive cages and one 5-drive cage. One of the 3-drive cages had to be removed for the 750W modular PSU to be installed. This build screams overkill, and this PSU is definitely part of that. My reasoning is future-proofing, but it’s also nice to find a use for extra parts laying around. The highest load this machine will likely experience is several hundred watts below 750, though all 8 hard drives spinning up at once does create a load that needs to be considered. In addition to installing the PSU, I went ahead and screwed in the motherboard standoffs and did some early wire management to make installation easier.
The motherboard was dry-fitted, assembled, and tested outside of the case to head off troublesome troubleshooting later. The CPU and RAM were both easily seated. The heatsink was dry-fitted to make sure it fit the AM4 socket, despite the box only listing AM3. Thermal paste was then applied to the processor and spread into an even coat with a piece of cardboard before the heatsink was applied for a final time. The two 8GB sticks of RAM were double-checked to make sure the proper dual-channel slots were being used; the slots are staggered on this board.
Installing the M.2 SSD was interesting to do for the first time; I had never had the pleasure of working with one before. The motherboard includes a special standoff for the M.2 SSD and a screw to secure it in place.
After everything was installed, it was time to power on the motherboard assembly. This was done outside of the case on a static-resistant surface first. The PSU needs to power the main 24-pin mainboard connector, the 8-pin CPU power, and at least the power switch on the case. At first it didn’t display. Luckily the B350 motherboard comes with 4 debug lights that indicate which component is preventing the system from POSTing.
The GPU debug light was on, and I did a quick facepalm: I had forgotten that Ryzen series chips do not include integrated GPUs and need a discrete graphics card in order to display. Luckily I was able to cannibalize a GT 1030 from another computer I had laying around. There is a FirePro W4100 on the way for another project that might have to be adopted for this one, but the GT 1030 will do for now. It’s definitely something to consider; I might not have bought this Ryzen had I foreseen the cost of a discrete video card. Still, I’m satisfied with my purchase so far. $300 for 8 cores is a great deal no matter how you slice it. If I decide to keep using the GT 1030, I will need to get a full-profile bracket so it sits flush with the slots on the back of the machine.
With the motherboard POSTing and fitted, the IO shield was installed in the back of the case. Wires in the case were further arranged for management later, and the DVD optical drive was hooked up. FreeNAS was booted up to try out an OS. The system booted fine into the operating system after installation, which is always nice, and installing the OS to the M.2 SSD was humorously fast. I decided to switch over to Ubuntu after seeing FreeNAS’s lack of a desktop environment. OpenIndiana, my other choice, needed some Solaris shell knowledge that I was not particularly in the mood to figure out. “Just Working”™ is something I look for in an OS, and Ubuntu should support everything out of the box, has a DE, and can run ZFS.
I then encrypted the disk and encrypted the home folder. These are two basic hardening steps for the OS, and Ubuntu offers to perform both during the installation process. With these two encryption options in place, no one can read the system’s data by booting it from a rescue CD, DVD, or USB without the passphrase. The M.2 SSD makes this constant encryption work transparent and almost unnoticeable thanks to roughly 3 GB/s read/write speeds, where other storage technologies might bottleneck. The speed of this little device is shocking: an install that can take as long as fifteen minutes was done in less than three, including the time-intensive encryption operations. This is a fantastic form factor that makes SATA SSDs seem like they crawl.
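A quick way to confirm both encryption layers actually took effect is to inspect the block devices after the first boot. This is a sketch: the device name /dev/nvme0n1p3 is an assumption and will differ per install, and the home-folder encryption here is assumed to be the eCryptfs option Ubuntu’s installer offered at the time.

```shell
# List block devices with their types; an encrypted root shows a "crypt"
# layer stacked on the underlying partition
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT

# Inspect the LUKS container directly (partition name is an assumption)
sudo cryptsetup luksDump /dev/nvme0n1p3

# The installer's home-folder option uses eCryptfs; verify the mount exists
mount | grep ecryptfs
```

If the lsblk output shows no crypt layer, the full-disk option was not applied and the data is readable from any live USB.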
After the basics were up and functioning, it was time to connect everything on the board: audio ports, USB ports, HDD lights, power lights, reset switch, fans. The SAS controller card went in next, followed by the HDD array. The SAS card booted up properly the first time and occupied the second PCIe x16 slot on the motherboard. I decided it would be best to install the drives one at a time. This way I could erase the preinstalled partition left over from the WD Easystore software, label the drives, and test them individually. Another issue arose from the form factor of these drives: they would not clear the back of the cage, which only allowed one side of the clips to secure each drive in place, further adding to the instability problems. It would be possible to alleviate this by modifying the cages themselves, but that is not something I wanted to jump straight into. After everything was checked out and noted, it was time to install the ZFS filesystem.
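The per-drive prep described above can be sketched as a few commands. Everything here is destructive and the device name /dev/sdb is an assumption; the first step is always confirming which device is the newly inserted drive.

```shell
# Confirm which device is the new drive before touching anything
lsblk -o NAME,SIZE,MODEL

# CAUTION: destructive. Remove the leftover Easystore partition table
# and filesystem signatures (device name is an assumption)
sudo wipefs --all /dev/sdb

# Give the drive a fresh GPT label
sudo parted --script /dev/sdb mklabel gpt

# Quick SMART health check to test the drive individually
sudo smartctl -H /dev/sdb
```

Repeating this one drive at a time, as described, makes it easy to physically label each disk with its serial number as it passes.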
ZFS had to be downloaded from the Ubuntu repository. I wanted to create a whitelist that only allowed communication from the server to the Ubuntu repository, but messing with iptables was not providing the URL-based functionality I was used to with other solutions like Untangle, so I decided it was easier to deal with it on the hardware firewall later. sudo apt-get install zfsutils-linux is all it took to get the filesystem utilities ready to operate. I still need to explore ZFS as a system; this server will give me a platform to experiment before I bring the 25TB of data down from the Amazon cloud.
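As a first experiment with the utilities, a pool with one drive of redundancy across the 8 disks could be sketched like this. The pool name “ratnest” and the /dev/disk/by-id globbing are assumptions; referencing disks by ID rather than /dev/sdX keeps the pool stable if drive letters shuffle between boots.

```shell
# Build the drive list (pattern is an assumption; confirm the exact
# names with: ls /dev/disk/by-id). Exclude partition entries.
DISKS=$(ls /dev/disk/by-id/ata-WDC_WD80EFAX* | grep -v part)

# Create a RAIDZ1 pool (hypothetical name "ratnest"): 8 drives with one
# drive of parity; ashift=12 matches 4K-sector disks like these
sudo zpool create -o ashift=12 ratnest raidz1 $DISKS

# Verify pool health and usable space
zpool status ratnest
zfs list ratnest
```

RAIDZ1 matches the single-drive-redundancy goal here; with 8 drives this size, RAIDZ2 would trade another 8TB for tolerance of a second failure during the long resilver an 8TB drive requires.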
The wiring for the drives was an extremely tight fit. There was not enough room for the cable management I wanted to perform. The side panel was barely able to latch into place, and even then the panel was bulging where the wires were most crowded. Most of the slack wiring is in the open side of the case. A possible mod would be cutting a hole where the crowding occurs and installing some kind of distended chamber for the excess wiring. This is something to consider in the future.
Below is the list of parts and the link to this list on PCPartPicker.
There are definitely some things I want to handle with this project in the near future. The case either needs to be modified to allow more cable room, or the drives need to be refitted so they dump cables into the front side of the case. This might also alleviate the crowding against the drive cages.
I want to find a good use for the Ryzen 7. Video capture was one of the first things that came to mind. I’d like to include a capture card in this build: having a second system to capture video greatly increases the intensity of work that can be done on a primary machine, without the processing overhead of recording on the same machine.
I need to install the 2 hard-drive hot-swap bays. These will fill all the remaining 5.25″ slots on the case. Having two hot-swap bays makes the ingest process easier, allowing two drives to be ingested or egressed at once, as well as drive-to-drive duplication operations.
I’d like to investigate additional uses for the build. It hasn’t been completely put into production, so the finer details of operation are still up in the air. This was one of the most powerful machines I’ve ever had the opportunity to put together. I can’t wait to begin sorting and curating the data on this machine and expanding its functionality in the future. Hopefully “Ratnest” has many years of hoarding data ahead of it.
After rereading this post, I realized I forgot to mention the total storage. 8 x 8TB is 64TB of raw storage, which shrinks to roughly 47TB of usable space when using ZFS with one drive of redundancy.
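The back-of-the-envelope math behind those numbers, as a sketch: one drive’s worth of space goes to parity, and the rest of the gap to the ~47TB figure comes from the marketing-terabyte vs. tebibyte difference plus ZFS metadata overhead.

```shell
DRIVES=8
SIZE_TB=8

# Raw capacity across all drives
RAW=$(( DRIVES * SIZE_TB ))             # 64 TB raw

# RAIDZ1 sacrifices one drive's worth of space to parity
USABLE=$(( (DRIVES - 1) * SIZE_TB ))    # 56 TB before overhead

# 56 marketing TB (10^12 bytes) is about 50.9 TiB (2^40 bytes), and
# ZFS metadata and allocation overhead trim it further toward ~47
echo "$RAW TB raw, $USABLE TB after parity"
```

Running this prints “64 TB raw, 56 TB after parity”; the figure tools like zfs list report will be lower for the reasons in the comments.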
By reversing the direction of the drives in the cage, I was able to route the cords in a manner that allowed the side panels to fit on the case. This mounting technique also let the drives clear the back of the cage, alleviating the need for case modification, which is always a plus when it’s not completely necessary.
Being able to situate these drives in the case and close it without a visible bulge in the side panel effectively completed this build. It is now operational and should provide enough storage for all the data I’m ingesting for at least the next couple of years.
All the dense drives made this the heaviest build I have ever constructed, weighing in at almost 50 pounds, nearly a pound for every terabyte.
Here’s to hoping for a successful archival workflow in “Ratnest”’s future.
Josh Dean Concord Charlotte