Tea & Tech (🍵)

ZFS File Server: Build and Initial Setup

September 18, 2020

I am pleased to report that, after a grueling 18-day journey, my $163 Threadripper 1900X arrived, by way of the Netherlands, all the way from Berlin to my humble abode in the suburbs of northern Virginia.

First though, I must come clean: I have made additional purchases since part one of this blog series.

  • Threadripper CPUs do not ship with coolers, so I bought a Noctua U14S TR4-SP3.
  • My modular PSU was one cable short of being able to power the X399 motherboard, but replacement cables were easy to find on Amazon.

Keep in mind (if you’re attempting a similar build) that X399 motherboards do not have on-board video, so you’re going to need to add a video card for initial setup and installation. This is not something I spent money on, because I am swimming in a sea of (now-retired) GPUs from my cryptocurrency mining ventures.

Finally, because I am actually at risk of permanently losing data, I decided to purchase some mechanical drives to tide me over until NVMe prices come down. In addition to addressing pressing concerns over data loss, the mechanical drives can exist in a separate zpool and serve as ZFS snapshot storage, as well as off-site storage for my father.

They’re loud though; it’s driving me nuts! But I’m getting ahead of myself.

I looked at the Backblaze Hard Drive Stats to get an idea of what the reliability of mechanical drives looks like these days, and decided to go with two HGST drives in a mirrored configuration. Could I have gotten away with a single drive? Yes. Absolutely I could have. But that is not something I considered until writing this paragraph.

Build Pictures

Ziggy was most helpful

We apply thermal paste according to the instructions

The completed build

Setting it up

  1. First, I downloaded Ubuntu
  2. I used Etcher to write the ISO to a bootable USB stick
  3. I closed up the case and put in every screw to ensure that it would definitely work on the first try
  4. I moved the server to the cellar with the network gear and plugged it in
  5. I pressed the power button. Vindication! It posted and detected all hardware correctly. First try!
  6. I plugged in the USB stick and installed Ubuntu 20.04
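As an aside, step 2 can also be done from a terminal with dd instead of Etcher. The ISO filename and /dev/sdX below are placeholders, and writing to the wrong device will destroy its contents, so identify the stick with lsblk first:

```shell
# Find the USB stick's device node before writing anything.
lsblk

# Write the ISO; /dev/sdX is a placeholder for the stick's device node.
# conv=fsync flushes writes so it's safe to unplug when dd exits.
sudo dd if=ubuntu-20.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync
```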

After the initial installation was complete, the absolute very first command I ran on the server was to install the (lowercase) holy trinity of tmux, git, and vim.

$ sudo apt-get update && sudo apt-get install tmux git vim

Then, because it’s connected to a network, I decided to set up some firewall rules. Ubuntu 20.04 ships with ufw, so I went with that, allowing SSH from anywhere and RDP only from my LAN:

$ sudo ufw allow from 192.168.1.0/24 to any port 3389
$ sudo ufw allow ssh
$ sudo ufw enable
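After enabling it, a quick status check confirms the rules took; this is just a verification step, and the exact output format varies a bit by ufw version:

```shell
# Show firewall state and the active rule list.
sudo ufw status verbose
```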

Speaking of SSH and RDP, let’s install those, too:

$ sudo apt install xrdp openssh-server

With all the essentials out of the way (and really, I’m so, so surprised how fast it went), it was time to set up ZFS. Luckily, Ubuntu 20.04 just ships with it. It was already installed! Magic.
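If you want to double-check that on your own box (assuming the OpenZFS 0.8 series that 20.04 ships), the tooling will report its version; if it turns out to be missing, zfsutils-linux is the package to install:

```shell
# Report the OpenZFS userland and kernel module versions.
zfs version

# Confirm the kernel module is loaded.
lsmod | grep zfs
```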

I checked the sector sizes (logical and physical) of my drives:

# fdisk -l

...

Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HGST HUS728T8TAL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HGST HUS728T8TAL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Based on this information, it looks like we need to set ashift=12 at pool creation time. So I created the pool,

# zpool create -o ashift=12 vault mirror sda sdb

…and that was it!
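For the record, ashift=12 falls out of the sector size: ashift is the base-2 exponent, and 2^12 = 4096. A throwaway sh loop to double-check the arithmetic:

```shell
# Compute log2 of the physical sector size by repeated halving.
sector=4096
ashift=0
while [ "$sector" -gt 1 ]; do
  sector=$((sector / 2))
  ashift=$((ashift + 1))
done
echo "ashift=$ashift"   # prints ashift=12

# On the server itself, "zpool get ashift vault" should report the same value.
```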

I have rsync going now, copying my data from the old server. It would be much faster to export the old zpool, plug the drives into the new server, and transfer the data over SATA, but at this point I’m terrified of anything else happening to the old drives; they’re on their way out the door to buy a bucket-kicking farm.

Actually, I’ve already had the old server lock up on me once, at about 83GB into the transfer. I had to do a hard reboot and wait for the zpool resilvering to finish before resuming rsync.

We’re now up to ~207GB transferred at an average cruising speed of 25MB/s over the network. Obviously we’re I/O bound right now, because the network supports 5x that, so I am (again) very much looking forward to getting all this data onto NVMe drives.
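For reference, the transfer itself is a single long-running rsync over SSH. The hostname and paths below are placeholders rather than my real ones; --partial matters here because it keeps interrupted files around, so a restart after a lockup resumes roughly where it left off instead of starting from zero:

```shell
# Archive mode (-a) preserves permissions, ownership, and timestamps;
# --partial keeps partially transferred files for resumption.
# "oldserver" and both paths are placeholders.
rsync -a --partial --info=progress2 oldserver:/tank/data/ /vault/data/
```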

Next steps for me are to install Plex Media Server and see how I feel about using it to replace my “use SMB to mount a drive on Kodi” HTPC solution. That might need to wait until all the data is copied, which I expect to take a couple of days (barring catastrophic failures).

Fortunately, the data will copy while I sleep. Thanks for reading; I will report back soon!


Andrew J. Pierce collects Jian Shui teapots and lives in Virginia with his wife, son, and Ziggy the cat. You can follow him on Twitter.

BTC: 121NtsFHrjjfxTLBhH1hQWrP432XhNG7kB
© 2020 Andrew J. Pierce