Warning: This post is unapologetically geeky. The more techie aspects may not make much sense to anyone unfamiliar with Linux or PC hardware. Sorry!
The Initial Goal
For a while I’ve wanted a shared drive on my home network. It’d be a location where my wife and I could save our data that’s not tied to the local storage on any of our laptops or desktops.
Why? Well, a couple of reasons…
It’s a bit annoying if you switch computers and don’t have access to certain files because they live on another computer.
I want to have backups of our most important files in case any of our computers die, get stolen, or destroyed in a house fire.
If you’ve ever worked in a corporate office then you’re probably familiar with the terms “shared drive” or “mapped network drive”. That’s basically what I’m doing here, but the shared drive will exist on a small box under our TV in the lounge, rather than in a climate-controlled server room.
Requirements
Any good IT project will start with a list of requirements. This is no exception.
High storage capacity
I suppose the term “high” is relative. Fortunately my wife and I aren’t data hoarders.
For context, my wife’s main laptop has 1TB of internal storage, plus she often uses an external SSD that holds an additional 1TB.
She has a powerful work laptop too, but obviously we don’t store any of our private data on that.
My main gaming PC has 3x 1TB M.2 NVMe SSDs. But I actually use less than half of that 3TB.
My guess was that a shared drive of 1 or 2TB (terabytes) would be fine. This network drive won’t hold absolutely all of our files and data, it’ll just be the most important stuff.
Cheap to run
A little while ago I calculated how much power my main gaming PC uses. I was quite shocked to realise it accounts for about 10% of our entire household electricity bill. I don’t want to add more devices that guzzle electricity. That rules out large rack-mounted servers, and even another desktop PC would be pushing it.
Quiet
This shared drive will live under the TV in our lounge. It should be as close to silent as possible. This rules out any mechanical hard drives or noisy fans.
Inexpensive
I’m from Yorkshire, UK, and we don’t like spending lots of money! Cheaper is better. And I’m more than happy to buy second-hand hardware.
Fast data transfer speeds
My wife and I both move around quite large video files. So I don’t want horribly slow transfer times. This means identifying any bottlenecks and fixing them, if it’s not too expensive.
Bottlenecks
Before I bought any new kit, I wanted to be clear in my own mind about where the likely bottlenecks would be in this solution.
My main PC is wired to our router using a 1Gb ethernet cable. I realised that this would likely be the bottleneck when transferring data between my PC and the shared drive.
For reference, a 1Gb (gigabit) network connection tops out at about 125MB/s theoretically. But in practice the maximum speeds are slightly lower than this.
I knew I’d be using SSDs for storage. Even older SATA SSDs can typically read and write at around 500MB/s when shifting large sequential files. So the storage devices aren’t going to be the bottleneck.
My initial idea was to use an external SSD enclosure attached to our router’s USB 3.0 (5Gb) port. That port is capable of roughly 500MB/s once you account for USB’s encoding overhead, so again it’s not going to be a bottleneck.
Upgrade to 2.5Gb Ethernet?
My PC actually has a 2.5Gb Ethernet NIC in it. It won’t surprise you to learn that this is roughly 2.5 times faster than a 1Gb NIC!
But the limiting factor is our router - it only has 1Gb ports. That means that we’re limited to 1Gb (125MB/s) transfer speeds… unless I buy a router that also has 2.5GbE LAN ports.
But… our existing router works perfectly well, and I can’t quite justify buying an expensive new router just for faster wired Ethernet.
And, to be honest, 125MB/s is not THAT slow! Let’s say you’ve got 200GB of video files to transfer. That’ll take roughly half an hour. I can’t imagine us ever being desperate for it to be any faster than this.
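The back-of-envelope arithmetic is easy to sketch. Here it is in Python, assuming a sustained real-world rate of about 110MB/s (a little under the 125MB/s theoretical maximum; pick your own figure):

```python
def transfer_minutes(size_gb: float, speed_mb_s: float = 110.0) -> float:
    """Estimate transfer time in minutes for a file set of size_gb gigabytes
    at a sustained speed of speed_mb_s megabytes per second."""
    return (size_gb * 1000) / speed_mb_s / 60

# 200GB of video files over 1Gb Ethernet:
print(f"{transfer_minutes(200):.0f} minutes")  # → 30 minutes
```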
My wife’s laptops both usually connect to the network using WiFi, not wired Ethernet. And even when sat just a few feet away from the router, we’re unlikely to get faster than 125MB/s transfer speeds.
So, 1Gb Ethernet will have to do, and we can keep our current router.
Initial Ideas
I considered buying a pre-built NAS (network attached storage) device. But they’re fairly expensive at £300+ before you’ve even bought any disks.
Raspberry Pi devices look fun, and they’re cheap at about £100. But they’re not very powerful and not really suitable if you want to do more than absolute basics with them.
Router’s USB-attached Storage
Then I remembered that our router has a USB 3.0 (5Gb) port and you can attach external drives to it. I have a couple of M.2 NVMe SSD enclosures lying around from previous projects. So this seemed like a good first experiment.
I popped a spare 1TB NVMe M.2 SSD into an enclosure and attached it to the router via USB.
I logged into the router and set up the external SSD as a Samba (SMB) shared drive. If you have Windows servers at work, then SMB is how they will often share their network drives.
Performance Testing
Read speeds were great when copying large video files from the external SSD to my PC over 1Gb Ethernet: around 100MB/s, which is quite close to the theoretical maximum.
However, the write speeds on this external SSD were disappointing, at a mere 60-65MB/s. So, I set about trying to determine what caused this bottleneck.
Turns out it’s the router’s CPU. It gets absolutely hammered when trying to write data to an external disk. 3 of the 4 CPU cores get maxed out, or close enough.
Geeky note: I found that formatting the external SSD as NTFS actually gave faster write speeds than ext3. That was surprising, given that the router runs Linux: ext3 is native to Linux, whereas NTFS needs an extra compatibility layer. The explanation is that the router’s NTFS writes are a multi-threaded workload spread across 3 CPU cores, whereas its ext3 writes are single-threaded. That accounts for the unexpected speed boost when writing to NTFS drives.
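If you want to reproduce this kind of measurement yourself, a rough sequential-write test can be sketched in a few lines of Python. This is a simplified stand-in for proper benchmarking tools like fio or dd; the path and sizes are placeholders:

```python
import os
import time

def write_speed_mb_s(path: str, total_mb: int = 256, chunk_mb: int = 4) -> float:
    """Sequentially write total_mb of random data and return achieved MB/s.
    os.fsync ensures the figure reflects actual disk writes, not just the
    OS page cache."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return total_mb / elapsed
```

Run it against a file on the share (e.g. `write_speed_mb_s("/mnt/share/bench.bin")`) and compare with a local disk to see where the bottleneck lies.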
One Step Further
I’ve had some nerdy chats with a guy on here, J.M. Gooding I think he’s called. He’s got a mega-powerful server at home, with multiple VMs and comically huge amounts of storage. To be fair, he has a lot of media (music, videos) to store, so he needs a lot more storage capacity than me.
Anyway, he sowed the seed for me that maybe I want more than just a shared network drive.
Maybe I want a self-hosted version of Google Photos. Basically, very similar to the real thing, but just hosted on a VM at home, rather than handing my data to a tech giant.
Mr Gooding got me thinking about a home server…
Choosing the right hardware
Bearing in mind the requirements I listed earlier, I started a hunt for a suitable mini PC.
Many of them are woefully underpowered with puny dual-core CPUs. That’s fine for a single-purpose server, but not for a VM host with several VMs.
Enter the Dell Optiplex 3060 Mini PC.
CPU: Intel Core i3-8100T (8th gen, the first generation in which i3 parts became quad-core).
RAM: 8GB DDR4
Storage: 128GB SATA M.2 SSD
An empty 2.5” SATA drive bay
1GbE LAN - fine for my needs.
This little cutie sips about 15W at idle, and something like 35W at full throttle. It’s got monster performance for such a tiny box.
Importantly, the CPU in this Mini PC is much more powerful than the one in my router, meaning it should easily cope with 1GbE data transfers.
A second-hand Optiplex 3060 on eBay cost me just £80 (about $107 USD).
I’ve also bought 16GB RAM (2x 8GB SODIMMs). I may not need that much RAM, but it was only about £13 ($16) on eBay so I don’t regret it. Also it means that the Mini PC will now operate in dual-channel memory mode, boosting performance a little.
And I bought a 2TB Samsung 850 EVO 2.5” SATA SSD as the main storage for about £65 ($85). It’s slower than the NVMe drive, but still plenty fast bearing in mind the 1GbE network bottleneck.
I think I could have gone for a 4TB drive, but they’re much more expensive.
Total price: about £158 ($211).
That’s for the Mini PC, 16GB RAM and a 2TB SATA SSD, all second-hand.
Not bad, though I am cheating a little as I already had a spare 512GB NVMe SSD lying around unused.
Oh and I think I worked out that it should only cost about £30 ($40) in electricity per year, with 24/7 usage. Not bad at all, and certainly much cheaper to run than my main desktop PC!
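The running-cost sum is simple enough to check. A Python sketch, assuming an average draw of around 13W (mostly idle, occasional load) and a UK tariff of roughly 27p/kWh; both figures are my guesses, so plug in your own:

```python
def annual_cost_gbp(avg_watts: float, pence_per_kwh: float = 27.0) -> float:
    """Estimate the annual electricity cost in pounds for a device that
    draws avg_watts continuously, 24/7."""
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return kwh_per_year * pence_per_kwh / 100

print(f"£{annual_cost_gbp(13):.0f} per year")  # → £31 per year
```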
Maybe at some point I’ll configure a script to shut it down during the hours when we’ll be asleep. I’ll see if I can get WOL (Wake on LAN) working to start it up again.
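WOL itself is easy to script: a “magic packet” is just 6 bytes of 0xFF followed by the target machine’s MAC address repeated 16 times, broadcast over UDP. A minimal Python sketch (the MAC address in any real use would be the Mini PC’s own; nothing here is specific to proxmox):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

WOL usually also needs enabling in the BIOS and on the NIC (e.g. via `ethtool`) before the machine will respond.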
Drive layout
I’ve got a spare 512GB NVMe M.2 SSD that I could use to replace the 128GB SATA model it came with.
After discussing options with ChatGPT, I decided on the following drive layout…
The 512GB NVMe drive would be used to install the VM host (proxmox) and the main VM disks.
The 2TB SATA drive would be for the SMB network share.
I still haven’t decided where my photos will live. I only have about 12GB of them myself, and my wife probably has a similar volume. So potentially the photos can live on the 512GB NVMe drive and save all of the 2TB SATA drive for the SMB network share.
Practical Hiccups
I’ve never used proxmox (VM host) before. Back when I had a proper job in IT, I was a VMware boy, complete with relevant professional certifications.
But these days I’m strongly leaning towards FOSS (Free and Open Source Software) rather than be locked into any particular vendor’s closed-source ecosystem. Proxmox fits nicely here as they have a fully capable free version for scenarios like mine.
ISO Troubles
The old-fashioned way to install an OS was to burn an ISO to a USB stick and boot from it. But that’s a bit of a pain because you need a separate USB stick for every OS you might ever install.
That’s where Ventoy comes in. It’s a utility that lets you dump loads of ISOs onto a single USB stick (or a 128GB NVMe drive in an external enclosure, in my case).
You boot your machine from the Ventoy USB and you’re presented with a menu to choose which OS ISO you want to install. And you can just keep adding ISOs on the same drive for as long as you have disk space. Very handy.
On my Ventoy drive, I’ve probably got a dozen different OS install ISOs, including a few versions of Windows (which I hope to almost never use!) and quite a few different Linux desktop distros. And now, proxmox too!
Anyway, it turns out that Ventoy and proxmox don’t play well together. On first boot after installation, proxmox shits the bed and has a kernel panic. Something to do with an invalid configuration file.
I couldn’t be bothered to try to properly troubleshoot this, editing various Linux files. Instead I just decided to burn the proxmox ISO to a good old fashioned USB stick.
Side rant: why the fuck is there still not a decent GUI tool in Linux for burning ISOs? I basically want Rufus (which only runs on Windows), but for Linux. Rufus is amazing, it’s incredibly comprehensive and handles virtually any ISO. But sadly it just doesn’t exist on Linux.
I tried a KDE ISO burner in CachyOS, but it wouldn’t work. So, reluctantly I booted into Windows 11 so I could use Rufus to burn the proxmox ISO. It worked first time, like a charm.
When I installed proxmox on my mini PC this way, it booted just fine after installation, no more kernel panic.
Apparently this incompatibility with Ventoy has been ongoing for several years, but the devs still haven’t fixed it. That’s a bit odd given how simple the fix should be. What have they got against Ventoy?!
Oh, another random hiccup: My Optiplex 3060 Mini PC seemed quite temperamental when I used a wireless keyboard with it. It simply refused to recognise the keyboard half the time. Luckily I had a wired keyboard in a cupboard upstairs, which works just fine.
The proxmox installation is pretty simple. You need an FQDN (fully qualified domain name), which confused me initially. It’s been over 12 years since I worked in corporate IT, so I couldn’t remember how to specify an FQDN if you don’t use a domain (e.g. at home).
But a quick bit of searching online revealed that you can simply use “pve.local” and proxmox is happy with that.
Just as with most servers, proxmox expects to have a static IPv4 address, not one that can change via DHCP. So I just picked a memorable IP address that’s outside of the DHCP range my router hands out. Easy enough.
Aaaand, that’s about as far as I’ve got so far…
I’ve got my Virtual Machine host installed, but without yet making any effort to configure it fully.
Next Steps
I’ve never used virtual containers before - I don’t think they existed back when I had a proper job in IT. But I’m looking forward to having a play. Essentially they seem like cut-down versions of a full VM, with some fairly tight integration into the VM host. Great way to cut down on resource usage.
I think the first VM/container I’ll set up will be for my Samba (SMB) network share on the 2TB SATA drive. Looks fairly simple… we shall see!
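For reference, a Samba share definition is only a handful of lines in `/etc/samba/smb.conf`. This is just a sketch of what mine might end up looking like; the share name, path and users below are placeholders:

```
[shared]
   path = /srv/shared
   valid users = alice, bob
   read only = no
   browseable = yes
```

Users still need adding to Samba’s own password database with `smbpasswd -a <user>` before they can connect.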
And after that I’ll have a stab at the self-hosted Google Photos-type VM. The software is called Immich.
Watch this space! And if you have any questions, please feel free to ask!
P.S. A bit about my IT background
I don’t like to brag, but I do have some in-depth IT infrastructure skills. I’ve got some high-level qualifications under my belt…
… though it’s questionable how relevant they still are, given that I’ve not worked in corporate IT for over a decade!
At the peak of my career, I was Technical Director for a cloud computing start-up company. I designed, built and supported the entire infrastructure.
I basically did almost everything myself, except for some of the networking (router and switch) config.
We used software from Microsoft, VMware and Citrix as core components in the platform.
It was an epic project and pushed my technical skills to the limit. I remain incredibly proud of what we achieved.
@Yellow Tail Tech , @J.M. Gooding , @Mahmoud Owies - you may find this somewhat interesting!
EXCELLENT work, my dude. I'll have a cuppa Yorkshire black in your honor (yeah, we can get that over here in the colonies).
So yeah, LXC containers are just like sandboxed virtual environments, really. Think anaconda or Python venv, sort of. The advantage to these is you can nuke them without blowing away your entire box, unlike installing stuff on bare metal. I mean, aside from the security advantages. Proxmox also allows VMs, which means you can even run a full Linux distro (or windows) on the same machine!