Overkill NAS Build – 60TB Threadripper NAS
Inside this box is a whopping 60TB of hard disk space, and I’m going to be putting them into what I’m calling “OVERKILL”. See, I’ve got a few problems with my current network storage solution, starting with the speed, or lack thereof. It’s a single gigabit connection to a QNAP NAS running four 8TB drives in what is effectively RAID 5. That does mean I have some redundancy: one drive can fail at a time with, in theory, no data loss, although in practice rebuilding a RAID array can cause other drives to fail, so I’m on relatively thin ice there. The drives aren’t all the same either; one is a shingled Seagate Archive drive while two are IronWolf Pros. And being a QNAP box, it has very little processing power: enough for Plex, but nowhere near enough to render my videos. Overall, hardly a perfect solution.
So, I went and bought these drives and am going to throw them into this, my Threadripper system. I’ll be using the 24-core 2970WX, as I can provision 4 cores for UNRAID and the other 20 for a Windows VM running Adobe Media Encoder. I’ll also be using 10GbE between this and my PC, plus a 1TB Gen 3 NVMe SSD as a cache drive. If all goes to plan, I’ll be able to edit footage directly from this, then have it render the project out for me so I don’t have to stop using my PC while it renders. Plus, I’ve had some instability with my PC, so that should help there too.
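As a rough sanity check on whether editing over the network is even feasible, here’s some back-of-the-envelope maths. The bitrates are illustrative guesses rather than measurements of my actual footage:

```python
# Back-of-the-envelope check: can the network link keep up with editing?
# The stream bitrates below are illustrative assumptions, not measurements.

LINKS_MBPS = {"1GbE": 1_000, "10GbE": 10_000}      # raw line rate, megabits/s
STREAMS_MBPS = {
    "4K H.264 camera file": 150,                   # assumed bitrate
    "4K ProRes 422 HQ": 730,                       # assumed bitrate
}

for link, link_mbps in LINKS_MBPS.items():
    usable = link_mbps * 0.9                       # rough allowance for protocol overhead
    for name, stream_mbps in STREAMS_MBPS.items():
        streams = usable / stream_mbps
        print(f"{link}: ~{streams:.0f} simultaneous {name} streams")
```

Even with those guesses, the takeaway is clear: gigabit is marginal for a single heavy stream, while 10GbE leaves plenty of headroom for multicam timelines.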
To make it clear, I’ll be using 2 of these 10TB WD Gold drives for parity, meaning I’ll be able to sustain 2 drive failures without losing data, and the drives are all identical, which helps a lot too. So, let’s get building. I need to swap the CPU out, as it’s got the 16-core 2950X in there at the moment. While I’m sure that’d be fine, if I have a 24 core… well, I may as well use it, eh?
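For anyone doing the maths on how 60TB of raw disk becomes rather less usable space, here’s the quick version:

```python
# Quick sanity check on the array layout: six 10TB drives, two assigned to parity.
drive_tb = 10
total_drives = 6
parity_drives = 2

data_drives = total_drives - parity_drives
usable_tb = data_drives * drive_tb        # parity drives hold no user data
print(f"Raw capacity:    {total_drives * drive_tb} TB")   # 60 TB
print(f"Usable capacity: {usable_tb} TB")                  # 40 TB
print(f"Tolerates {parity_drives} simultaneous drive failures")
```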
So, it’s been about a month since I actually put this together. It took me a week to copy all 6 or so terabytes of existing footage over to it, thanks to the network cable between my router and my office negotiating at 100Mbps instead of 1Gbps. But after installing new cables, using an M.2 SSD enclosure and a 2TB SSD, and setting up my 10GbE switch between an Asus XG-C100C in my main PC and the 10GbE card that came in the box with the Zenith Extreme, it’s now up and running.
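To put some numbers on why that cable mattered, here’s a rough estimate of the transfer times (it assumes the link is the only bottleneck):

```python
# Rough copy-time estimates for ~6 TB of footage at different link speeds.
# Assumes the link is the bottleneck and ignores protocol/disk overhead.

data_tb = 6
data_bits = data_tb * 1e12 * 8           # terabytes -> bits

for label, mbps in [("100 Mbps (bad cable)", 100),
                    ("1 Gbps", 1_000),
                    ("10 GbE", 10_000)]:
    seconds = data_bits / (mbps * 1e6)
    print(f"{label:>22}: ~{seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")
# 100 Mbps: ~133 hours (~5.6 days); 1 Gbps: ~13 hours; 10 GbE: ~1.3 hours
```

Five and a half days at 100Mbps, plus the inevitable interruptions, is exactly how a copy job turns into a week.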
The only thing I technically haven’t figured out yet is using it to render my videos in the background, automatically. I can open the project in Premiere over remote desktop, locate the ‘missing’ footage and export to Media Encoder, but what I want is to just copy the folder to a watch folder and have it render without any input from me. Still, I am now editing off it directly, which has sped up my editing experience since all the clips are in one place, with quick access.
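Something like the sketch below is roughly what I’m after for the copy step. Both paths are placeholders for illustration, and it doesn’t solve the relinking problem; it just drops a finished folder into the watch folder so Media Encoder can pick it up:

```python
# Minimal sketch: copy a finished folder of source files into a Media Encoder
# watch folder so it gets picked up automatically. Both UNC paths below are
# placeholders, not my actual share layout.
import shutil
from pathlib import Path

SOURCE = Path(r"\\OVERKILL\footage\this-weeks-video")   # hypothetical share path
WATCH_FOLDER = Path(r"\\OVERKILL\render\watch")          # hypothetical watch folder

destination = WATCH_FOLDER / SOURCE.name
shutil.copytree(SOURCE, destination)
print(f"Copied {SOURCE} -> {destination}; Media Encoder should take it from here.")
```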
The way I’ve got everything set up in UNRAID is two drives as parity, with the other four as data drives in the array. The SSD is set up as the cache, which flushes to the array once a week. It’s a 1TB drive, and in a normal week I only write maybe 100GB of data, perhaps a touch more, so that week’s videos stay on the cache for quicker editing, then get flushed to the still relatively fast array.
As for the rendering VM, that’s running fine. It’s got the GPU passed through to it, and via the Community Applications plugin (which is amazing) I use a VM hot-plug tool that lets me connect a keyboard and mouse to the VM without having to restart it to attach them. The VM is set to run Media Encoder on startup, and barring the dreaded ‘Get even more out of Windows’ popup that I physically cannot disable (despite even changing the registry key that’s meant to disable it), it’s pretty reliable.
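For the curious, the registry value usually pointed to for that screen is ScoobeSystemSettingEnabled under HKCU. Here’s a rough Python sketch of setting it from inside the VM; treat it as a “should work” rather than a guaranteed fix, since it clearly hasn’t been a silver bullet in my case:

```python
# Sets the registry value commonly cited as disabling the
# "Get even more out of Windows" (SCOOBE) screen. Run inside the Windows VM.
import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\UserProfileEngagement"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 0 = don't show the second-chance "finish setting up" upsell screen
    winreg.SetValueEx(key, "ScoobeSystemSettingEnabled", 0, winreg.REG_DWORD, 0)

print("ScoobeSystemSettingEnabled set to 0 under HKCU.")
```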
I think the next purchase for this will be a UPS. I very, very rarely have any issues with power here; outages and surges aren’t something I really experience. But it’s good practice to have one for when that inevitably does happen, so I don’t lose data.
A final note on UNRAID: it’s really easy to set up. Like, insanely easy. You just click which drives you want as parity and data drives, let it build, and it’s done. Create a share, connect to it, and that’s literally it. You can run Plex in a Docker container with a couple of clicks, and set up a VM with a couple more. It’s mental. The only thing I’m not fully sold on yet is the filesystem, and the fact that the data you write is stored, in full, on a single drive (with parity ‘backup’, but still). It’s not like ZFS, where a single large file can be striped across multiple drives; in fact, UNRAID lets you see exactly what data is stored on which drive through the web interface. It’s a little different from what I’m used to, so I’m still a touch uncomfortable with the idea, although I’m not entirely sure why.
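You can see that per-disk layout from the command line too: UNRAID mounts each data disk at /mnt/diskN alongside the merged /mnt/user view, so it’s easy to check which physical disk actually holds a file. A rough sketch, run on the server itself (the share and file names are made up for illustration):

```python
# Find which physical UNRAID data disk holds a given file.
# UNRAID exposes each data disk at /mnt/diskN; shares are merged under /mnt/user.
from pathlib import Path

relative = Path("footage/2019/some-project/clip001.mov")   # path inside the share

for disk in sorted(Path("/mnt").glob("disk*")):
    candidate = disk / relative
    if candidate.exists():
        print(f"{relative} lives on {disk.name}")
        break
else:
    print("Not found on any data disk (maybe it's still on the cache).")
```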
Also, the cache, it seems, is just a write cache, and it doesn’t auto-clear. You have to either press the ‘Move’ button in the interface or schedule the mover to run hourly/daily/weekly/monthly, and once data has left the cache it doesn’t seem to repopulate based on usage. Any data on the cache is also considered ‘vulnerable’, since it hasn’t been written to the array yet and won’t be until it’s ‘Moved’. I was hoping for more of a read/write cache that buffers bursts of writes to the array and keeps recently/most-used files around for faster access, but that doesn’t seem to be the case. If you know how to set that style of cache up on UNRAID, please do let me know; my search of the forum didn’t yield great results and I’m still new to the UNRAID game.
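In the meantime, here’s how I keep an eye on how much data is still sitting on the cache, and therefore still ‘vulnerable’. A rough sketch, assuming UNRAID’s standard /mnt/cache mount point and run on the server:

```python
# Total up the data still sitting on the cache pool, i.e. not yet protected
# by parity. Assumes UNRAID's standard /mnt/cache mount point.
from pathlib import Path

cache = Path("/mnt/cache")
total_bytes = sum(f.stat().st_size for f in cache.rglob("*") if f.is_file())
print(f"{total_bytes / 1e9:.1f} GB on the cache is still waiting for the mover")
```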