ZFS File Server: Build and Initial Setup
September 18, 2020
I am pleased to report that, after a grueling 18-day journey, my $163 Threadripper 1900X arrived, by way of the Netherlands, all the way from Berlin to my humble abode in the suburbs of northern Virginia.
First though, I must come clean: I have made some additional purchases since part one of this blog series.
- Threadripper CPUs do not ship with coolers, so I bought a Noctua NH-U14S TR4-SP3.
- My modular PSU was one cable short of being able to power the X399 motherboard, but replacement cables were available on Amazon.
Keep in mind (if you're attempting a similar build) that X399 motherboards do not have on-board video, so you're going to need to add a video card for initial setup and installation. This is not something I spent money on, because I am swimming in a sea of (now-retired) GPUs from my cryptocurrency mining ventures.
Finally, because I am actually at risk of permanently losing data, I decided to purchase some mechanical drives to tide me over until NVMe prices come down. In addition to addressing pressing concerns over data loss, the mechanical drives can exist in a separate zpool and serve as ZFS snapshot storage, as well as off-site storage for my father.
They're loud though; it's driving me nuts! But I'm getting ahead of myself.
I looked at the Backblaze Hard Drive Stats to get an idea of what the reliability of mechanical drives looks like these days, and decided to go with two HGST drives in a mirrored configuration. Could I have gotten away with a single drive? Yes. Absolutely I could have. But that is not something I considered until writing this paragraph.
Build Pictures
Setting it up
- First, I downloaded Ubuntu
- I used Etcher to write the ISO to a bootable USB stick
- I closed up the case and put in every screw to ensure that it would definitely work on the first try
- I moved the server to the cellar with the network gear and plugged it in
- I pressed the power button. Vindication! It posted and detected all hardware correctly. First try!
- I plugged in the USB stick and installed Ubuntu 20.04
After the initial installation was complete, the very first command I ran on the server was to install the (lowercase) holy trinity of tmux, git, and vim.
$ sudo apt-get update && sudo apt-get install tmux git vim
Then, because it's connected to a network, I decided to set up some firewall rules.
Ubuntu 20.04 ships with ufw, so I went with that, allowing SSH and RDP:
$ sudo ufw allow from 192.168.1.0/24 to any port 3389
$ sudo ufw allow ssh
$ sudo ufw enable
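As a sanity check, ufw will echo the active rule set back at you, which is a nice way to confirm the rules above actually took:
$ sudo ufw status verbose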
Speaking of SSH and RDP, let's install those, too:
$ sudo apt install xrdp openssh-server
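If either of those gives you trouble later, systemd will tell you whether the services actually came up:
$ systemctl status ssh xrdp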
With all the essentials out of the way (and really, I'm so, so surprised how fast it went), it was time to set up ZFS. Luckily, Ubuntu 20.04 just ships with it. It was already installed! Magic.
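If your installation somehow lacks the userland tools, they live in the zfsutils-linux package (sudo apt install zfsutils-linux). Either way, a quick version check confirms that the utilities and the kernel module are both present:
$ zfs version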
I checked the logical and physical sector sizes of my drives:
# fdisk -l
...
Disk /dev/sda: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HGST HUS728T8TAL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: HGST HUS728T8TAL
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Based on this information, it looks like we need to set ashift=12 at pool creation time: the drives report 4096-byte physical sectors, and 2^12 = 4096.
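As an aside, if you'd rather not fish through fdisk output, lsblk can print the same sector sizes in a compact table:
$ lsblk -o NAME,SIZE,LOG-SEC,PHY-SEC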
So I created the pool,
# zpool create -o ashift=12 vault mirror sda sdb
…and that was it!
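To double-check that the mirror looks the way I intended and that the ashift value stuck, a couple of read-only commands do the trick:
# zpool status vault
# zpool get ashift vault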
I have rsync going now, copying my data from the old server. It would be much faster to export the old zpool, plug the drives into the new server, and transfer the data over SATA, but I'm basically terrified now of anything else happening to the old drives, because they are on their way out the door to buy a bucket-kicking farm.
Actually, I've already had the old server lock up on me once, at about 83GB into the transfer. I had to do a hard reboot and wait for the zpool resilvering to finish before resuming rsync.
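If you find yourself in the same boat, zpool status reports the resilver's progress and an estimated time to completion, so at least you know how long you'll be pacing around:
# zpool status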
We're now up to ~207GB transferred at an average cruising speed of 25MB/s over the network. Obviously we're I/O bound right now, because the network supports 5x that, so I am (again) looking forward very much to getting all this data onto NVMe drives.
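For the curious, the transfer itself is nothing exotic. The invocation looks roughly like the one below, where oldserver and the paths stand in for my actual hostname and mountpoints:
$ rsync -avh --progress --partial oldserver:/tank/ /vault/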
Next steps for me are to install Plex Server and see how I feel about using it to replace my "use SMB to mount a drive on Kodi" HTPC solution. That might need to wait until all the data is copied, which I expect to take a couple days (barring catastrophic failures).
Fortunately, the data will copy while I sleep. Thanks for reading; I will report back soon!