So, I'm thinking about getting a NAS, mainly to host Immich and Plex. A couple of questions for the experienced folk:

  • Is Synology the best/easiest way to start? If not, what are the closest alternatives?
  • What OS should I go for? OMV, Synology's OS, or UNRAID?
  • Mainly gonna host Plex/Jellyfin and Synology Photos/Immich; haven't quite decided which solutions to go for.

Appreciate any tips ✨

  • Synapse · 25 points · 9 months ago

    If you want a “set up and forget” type of experience, Synology will serve you well, if you can afford it. If you are more of a tinkerer and see yourself experimenting and upgrading in the future, then I recommend a custom build. OMV is a solid OS for a novice, but any Linux distro you fancy can do the job very well!

    I started my NAS journey with a very humble 1-bay Synology. For the last few years I've been using a custom-built ARM NAS (NanoPi M4V2) with 4 bays, running Armbian. All my services run on Docker; I have Jellyfin, *arr, Bitwarden, and several other services running very reliably.

    • @[email protected] · 4 points · 9 months ago

      And if you're not sure how much tinkering you want to do, a Synology with Docker support is a good option.

    • @[email protected] · 2 points · edited · 9 months ago

      ^ This. I have an M1 Mac mini running Asahi Linux with a bunch of Docker containers and it works great. I run Jellyfin off a separate stick PC with an Intel Celeron running Ubuntu MATE. Basically I just keep Docker Compose files on those two machines and occasionally SSH in from my phone to run sudo apt update && sudo apt upgrade -y (on Ubuntu) or sudo pacman -Syu (on Asahi), and then docker compose pull && docker compose up -d
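      The routine above can be sketched as a tiny helper. This is only an illustration of the commenter's workflow; the distro IDs are the standard /etc/os-release values, and nothing here is specific to their machines.

```shell
# pick_update_cmd: map a distro ID (the ID field of /etc/os-release) to the
# package-update command described above. Just an illustrative sketch.
pick_update_cmd() {
  case "$1" in
    ubuntu|debian) echo "sudo apt update && sudo apt upgrade -y" ;;
    arch)          echo "sudo pacman -Syu" ;;  # Asahi Linux is Arch-based
    *)             echo "unsupported: $1" ;;
  esac
}

pick_update_cmd ubuntu

# After the OS update, the container refresh is the same everywhere:
#   docker compose pull && docker compose up -d
```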

  • @[email protected] · 15 points · 9 months ago

    Synology is generally a great option if you can afford the premium.

    Unraid is a good alternative if you're on a budget. Check this list of cases to build in. I personally have a Fractal R5, which supports up to 13 HDD slots.

    Unraid is generally a better bang for your buck imo. It’s got great support from the community.

    • @[email protected] · 4 points · 9 months ago

      Can definitely confirm this. I started with a Proxmox system that had a TrueNAS VM, though TrueNAS just used a USB HDD for storage. Setting everything up and getting the permissions set correctly so I could connect my computers was a pain in the ass.

      Later I bought a Synology and it just works. The only thing I'd recommend is getting good HDDs. I bought Toshiba MG08 16TB drives, and while they work great, they are obnoxiously loud during read and write operations. They are so loud that even though the NAS is in a separate room, I have to shut it off at night.

      Meanwhile, the Seagate IronWolf drive I used for TrueNAS sat next to my bed for multiple months and was basically silent.

  • @[email protected] · 13 points · 9 months ago

    I have Proxmox on bare metal, with an HBA card passed through to TrueNAS Scale. I've had good luck with this setup.

    The HBA card is passed through to TrueNAS so it gets direct control of the drives for ZFS. I got mine on eBay.

    I’m running proxmox so that I can separate some of my processes (e.g. plex LXC) into a different VM.

    • thejevans · 5 points · 9 months ago

      This is a great way to set this up, and I'm moving over to it in a few days. Right now I have a temporary setup with ZFS directly on Proxmox and an OMV VM handling shares, because my B450 motherboard's IOMMU groups won't let me pass through my GPU and an HBA to separate VMs. (Note for OP: if you cannot pass through your HBA to a VM, this setup is not a good idea.) I ordered an ASRock X570 Phantom Gaming motherboard as a replacement ($110 on Amazon right now; it's a great deal) that will have more separate IOMMU groups.

      My old setup was similar but used ESXi instead of Proxmox. I also went nuts and virtualized pfSense on the same PC. It was surprisingly stable, but I’m keeping my gateway on a separate PC from now on.

      • Yote.zip · 3 points · 9 months ago

        If you can’t pass through your HBA to a VM, feel free to manage ZFS through Proxmox instead (CLI or with something like Cockpit). While TrueNAS is a nice GUI for ZFS, if it’s getting in the way you really don’t need it.

        • thejevans · 3 points · 9 months ago

          TrueNAS has nice defaults for managing snapshots and the like that make it a bit safer, but yeah, as I said, I run ZFS directly on Proxmox right now.

          • Yote.zip · 1 point · 9 months ago

            Oh, sorry; for some reason I read “OMV VM” and assumed the ZFS pool was set up there. The Cockpit ZFS Manager extension that I linked has good snapshot management as well, which may be sufficient depending on how much power you need.

    • @[email protected] · 2 points · 9 months ago

      I’d love to find out more about this setup. Do you know of any blogs/wikis explaining that? Are you separating the storage from the compute with the HBA card?

      • Yote.zip · 3 points · 9 months ago

        This is a fairly common setup and it’s not too complex - learning more about Proxmox and TrueNAS/ZFS individually will probably be easiest.

        Usually:

        • Proxmox on bare metal

        • TrueNAS Core/Scale in a VM

        • Pass the HBA PCI card through to TrueNAS and set up your ZFS pool there

        • If you run your app stack through Docker, set up a minimal Debian/Alpine host VM (you can technically run Docker under an LXC, but experienced people keep saying it eventually causes problems, and I'll take their word for it)

        • If you run your app stack through LXCs, just set them up through Proxmox normally

        • Set up an NFS share through TrueNAS, and connect your app stack to that NFS share

        • (Optional): Just run your ZFS pool on Proxmox itself and skip TrueNAS
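        For the NFS step, here is a sketch of what the mount might look like on the app VM. The server address, dataset path, and mountpoint are all made-up examples, not anything from the thread.

```shell
# nfs_fstab_line: build the /etc/fstab entry that points an app VM at the
# TrueNAS NFS export. All names here are hypothetical.
nfs_fstab_line() {
  # $1 = NFS server, $2 = exported dataset path, $3 = local mountpoint
  echo "$1:$2 $3 nfs defaults,_netdev 0 0"
}

nfs_fstab_line 192.168.1.50 /mnt/tank/media /mnt/media

# On the VM: append that line to /etc/fstab, then
#   sudo apt install -y nfs-common && sudo mount /mnt/media
```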

        • @[email protected] · 2 points · 9 months ago

          This is 100% my experience and setup (though I run Debian for my Docker VM).

          I did run Docker in an LXC but ran into some weird permission issues that shouldn't have existed. I ran it again in a VM with the same setup and had no issues, so I decided to keep it that way.

          I do run my Plex and Jellyfin in an LXC though. No issues with that so far.

        • rentar42 · 2 points · 9 months ago

          So theoretically, if someone has already set up their NAS (custom Debian with ZFS root instead of TrueNAS, but that shouldn't matter), it sounds like it should be relatively straightforward to migrate all of that into a Proxmox VM by installing Proxmox “under” it, right? The only thing I'd need right now is an SSD for Proxmox itself.

          • Yote.zip · 0 points · edited · 9 months ago

            Proxmox would be the host on bare metal, with your current install as a VM under it. I'm not sure how to migrate an existing bare-metal install into a VM, so it might require backing up configs and reinstalling.

            You shouldn’t need any extra hardware in theory, as Proxmox will let you split up the space on a drive to give to guest VMs.

            (I’m probably misunderstanding what you’re trying to do?)

            • rentar42 · 1 point · 9 months ago

              I just thought that if all storage can easily be “passed through” to a VM then it should in theory be very simple to boot the existing installation in a VM directly.

              Regarding the extra storage: sharing disk space between Proxmox and my current installation would imply passing through “half of a drive”, which I don't think works like that. Also, I'm using ZFS for my OS disk and I don't feel comfortable trying to figure out if I can resize those partitions without breaking anything ;-)

              • Yote.zip · 0 points · 9 months ago

                That should work, but I don’t have experience with it. In that case yeah you’d need another separate drive to store Proxmox on.

        • @[email protected] · 2 points · 9 months ago

          I already run Proxmox but not TrueNAS. I'm really just confused about the HBA card. Probably a stupid question, but why can't TrueNAS access regular drives connected to SATA?

          • Yote.zip · 1 point · 9 months ago

            The main problem is just getting TrueNAS access to the physical disks via IOMMU groups and passthrough. HBA cards are a super easy way to get a dedicated IOMMU group that has all your drives attached, so it’s common for people to use them in these sorts of setups. If you can pull your normal SATA controller down into the TrueNAS VM without messing anything else up on the host layer, it will work the same way as an HBA card for all TrueNAS cares.

            (To my knowledge, SATA controllers usually pass through all at once, so if your host system runs off some part of that controller, it probably won't work to unhook it from the host and give it to the guest.)
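            A quick way to see how your controllers are grouped before committing to passthrough is to walk /sys/kernel/iommu_groups. The helper below just extracts the group number from such a path; the full listing loop (which needs a real host with lspci) is shown in a comment.

```shell
# iommu_group_of: pull the group number out of a sysfs IOMMU device path.
iommu_group_of() {
  g=${1#/sys/kernel/iommu_groups/}   # drop the fixed prefix
  echo "${g%%/*}"                    # keep everything before the next slash
}

# On a real host, the usual listing loop looks like:
#   for d in /sys/kernel/iommu_groups/*/devices/*; do
#     printf 'group %s: ' "$(iommu_group_of "$d")"; lspci -nns "${d##*/}"
#   done
iommu_group_of /sys/kernel/iommu_groups/13/devices/0000:01:00.0
```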

  • rentar42 · 11 points · 9 months ago

    Just throwing out an option, not saying it’s the best:

    If you are comfortable with Linux (or you want to become intimately familiar with it), then you can just run your favorite distribution. Running a couple of Docker containers can be done easily on anything.

    What you’re losing is usually the simple configuration GUI and some built-in features such as automatic backups. What you gain is absolute control over everything. That tradeoff is definitely not for everyone, but it’s what I picked and I’m quite happy with it.

    • @[email protected] (OP) · 4 points · 9 months ago

      Yeah, already quite familiar; I've already got a server, but I'm looking for something more premium that essentially delivers the easiest platform for the rest of the family to use.

      • @[email protected] · 1 point · 9 months ago

        Also, you could run Linux on a real CPU. My experience is that my DS916+ is way underpowered, even with 8 GB of memory. I use my NAS for actual storage and an old Intel mainboard with 16 GB RAM for the actual CPU work.

  • @[email protected] · 9 points · 9 months ago

    My Synology NAS was super easy to set up and has been very solid. Very happy with it. I'm sure there are other solutions though.

    • @[email protected] · 3 points · 9 months ago

      This was the route I went with when I started, and I’ve never had cause to regret it. For people near the start of their self-hosting journey, it’s the no-hassle, reliable choice.

  • Dark Arc · 8 points · 9 months ago

    TrueNAS Scale is a pretty easy-to-use option (based on Debian), backed by the excellent ZFS file system.

      • rentar42 · 1 point · 9 months ago

        I agree with the learning curve (personally I found it worthwhile, but that’s subjective).

        But how does ZFS limit easy backup options? IMO it only adds options (like zfs send/receive), but any backup solution that works with other file systems should work just as well with ZFS (potentially better, since you can use snapshots to make sure any backup is internally consistent).
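        The snapshot trick mentioned above can be sketched like this: snapshot first, then point any ordinary backup tool at the frozen .zfs/snapshot view. The dataset name and paths are made-up examples.

```shell
# snap_name: build a dated snapshot name like tank/data@backup-2023-09-24.
snap_name() { echo "$1@backup-$2"; }

snap_name tank/data 2023-09-24

# On a real pool this would be:
#   zfs snapshot "$(snap_name tank/data "$(date +%F)")"
#   # any ordinary tool can then back up the frozen view:
#   rsync -a /tank/data/.zfs/snapshot/backup-2023-09-24/ /mnt/usb/backup/
```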

        • @[email protected] · 1 point · 9 months ago

          Because you can't use typical backup software. If you do it the right way, you're using ZFS send and receive to another machine running ZFS, which significantly adds to the cost.

          • rentar42 · 1 point · 9 months ago

            That's an extremely silly reason not to use a specific tool: tool A provides an alternative way to do X, but I want to do X with some other tool B (which also works with tool A), so I won't use tool A.

            Send/receive may or may not be the right answer for backing up even on ZFS, depending on what exactly you want to achieve. It's really nice when it is what you want, but it's no panacea (and certainly no reason to avoid ZFS, since its use is 100% optional).

            • @[email protected] · 1 point · 9 months ago

              I really don't get why you consider my reason silly. You can't use Acronis, Veeam, or other typical backup products with ZFS. My point is that this is a barrier to entry. And I do think it's silly to expect a home user to build another expensive NAS just to do ZFS send and receive, which would be the proper way.

              I don’t consider backups optional.

      • Dark Arc · 1 point · edited · 9 months ago

        Eh… the TrueNAS UI basically takes care of any ZFS learning curve. The main thing I'd note is that RAID 5 & 6 equivalents can't currently be expanded incrementally. So you either need to use mirroring, configure the system upfront to be as big as you expect you'll need for years to come, or use smaller RAID 5 sets of disks (e.g. create two RAID 5 volumes with 3 disks each instead of one RAID 5 volume with 6 disks).
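        As a rough illustration of the capacity trade-off between those two layouts, assuming six hypothetical 4 TB disks and ignoring filesystem overhead:

```shell
# usable capacity in TB for the two layouts mentioned above
disk_tb=4

one_big=$(( (6 - 1) * disk_tb ))        # one 6-disk RAID-5-style set: 1 disk of parity
two_small=$(( 2 * (3 - 1) * disk_tb ))  # two 3-disk sets: 2 disks of parity

echo "$one_big $two_small"   # the split layout gives up one extra disk to parity
```

The split layout costs capacity but lets you grow the pool one small set at a time.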

        Not sure what you're referring to as an easy backup option that ZFS excludes, but maybe I'm just ignorant 🙂

  • @[email protected] · 7 points · 9 months ago

    The most common software choices are TrueNAS and UNRAID.

    Depending on your use-case, one is better than the other:

    TrueNAS uses ZFS, which is great if you want to be absolutely sure the irreplaceable data on your disks is safe, like your personal photos. UNRAID offers more flexible expansion and is more power-efficient, but doesn't protect against bit flips, which isn't really an issue if you only store multimedia for streaming.

    If you prefer a ready-to-use hardware solution, Synology and QNAP are great choices, as long as you remember to use ZFS (QNAP) or Btrfs (Synology) as the filesystem.

    • @[email protected] · 5 points · edited · 9 months ago

      Unraid 6.12 and higher has full support for ZFS pools. You can even use ZFS in the Unraid array itself, allowing you to use many, but not all, of ZFS's extended features. Self-healing isn't one of them, though; it would be incompatible with Unraid's parity approach to data integrity.

      I just switched my cache pool from Btrfs to ZFS with RAID 1 and encryption; it was a breeze.

      I generally recommend TrueNAS for projects where speed and security are more important than anything else, and Unraid where hardware and software flexibility, power efficiency, ease of use, and a very extensive and healthy ecosystem are more pressing concerns.

    • @[email protected] (OP) · 3 points · edited · 9 months ago

      Does either of them matter in terms of hard-disk lifespan? My server just had one of its HDDs reach EoL :| I kind of want to buy something that will last a very long time. Also, I'm not familiar with ZFS, but I read that Synology uses “Butterfs” (Btrfs), which always sounds good to my ears; I've been getting a taste of the filesystem with Garuda on my desktop.

      • @[email protected] · 3 points · 9 months ago

        Yes, ZFS is commonly known for heavy disk I/O and high RAM usage; the rule of thumb used to be “1 GB of RAM for every TB of disk”, but that's not compulsory.

        Meanwhile, regarding Btrfs, keep in mind that Synology uses a mixed recipe, because Btrfs's RAID code is still immature and not considered production-ready. Here's an interesting read about how Synology filled the gaps: https://daltondur.st/syno_btrfs_1/

        • Monkey With A Shell · 2 points · 9 months ago

          The only place ZFS seems to use a sizable amount of RAM is the ARC cache, which is a really nice feature when you have lots of small-file access going on. For me, some of the highest-access data is the image stores for Lemmy and Mastodon, which together come to just under 200 GB right now but are a crazy number of files. Letting the system eat up idle RAM so it doesn't have to pull all those from disk constantly is awesome.
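          On a Linux ZFS host, the current ARC size is exposed in /proc/spl/kstat/zfs/arcstats. Here the same one-liner is fed a canned "size" row so the parsing is visible without a ZFS box; 8589934592 is a sample byte count (8 GiB), not a real reading.

```shell
# arcstats rows are "name type value"; the awk below converts the byte
# count in column 3 to GiB. On a real host, replace the printf with:
#   awk '/^size/ {...}' /proc/spl/kstat/zfs/arcstats
printf 'size 4 8589934592\n' |
  awk '/^size/ { printf "ARC size: %.1f GiB\n", $3 / 2^30 }'
```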

      • @[email protected] · 2 points · 9 months ago

        Something kind of unique about Unraid is its JBOD-plus-parity array. With it, you can keep most disks spun down while only the actively read/written disks spin up. Combine that with an SSD cache for your Docker containers, databases, and recent data, and Unraid will put far fewer hours (heat, vibration) on your disks than any RAID-equivalent system that spins up the whole array for any disk activity. Performance won't be as high as a comparably sized RAID array, but as bulk network storage for backups, media libraries, etc., it's still plenty fast.

  • @[email protected] · 5 points · 9 months ago

    Do you have any old hardware that doesn't have a job? That's a great place to start. Take some time to try out different solutions (Proxmox, Unraid, CasaOS). Then, as you nail down your needs, you can better pick hardware.

    • @[email protected] (OP) · 1 point · 9 months ago

      Yeah, this is what I've been doing so far: loads of spare parts, running Debian atm. So I'm kind of looking for “the next step” right now.

  • @[email protected] · 4 points · 9 months ago

    I use Unraid. I didn't want to pay for a license originally, but having the option to mix and match drives and still have redundancy is nice.

    I also use the built-in Docker feature to host most of my services.

    • @[email protected] · 2 points · 9 months ago

      I run most of my stuff on k8s, but I really enjoy the simple Docker app ecosystem that Home Assistant Supervisor provides. Unraid's app approach looks similar: preconfigured and working together. Even though I don't need a fancy NAS, I might try Unraid just to evaluate the app ecosystem. How do you find their community apps?

      • @[email protected] · 2 points · 9 months ago

        I usually search through the apps and they install as Docker containers; I can edit the configs after the fact, which is pretty nice. There's also a terminal, so I can run regular docker commands too.

    • @[email protected] · 2 points · 9 months ago

      Unraid is also awesome for places with high energy costs: unlike a typical RAID / standard NAS, it lets you spin down all drives that aren't in active use, at a relatively minor write-speed penalty.

      That’s pretty ideal for your typical Plex-server where most data is static.

      I built a 10-HDD + 2-SSD Unraid server that idles well below 30 W, and I could have lowered that even further had I been more selective about certain hardware. In a medium- to high-energy-cost country, Unraid's license cost is recouped through energy savings within a year or two.

      Mixing & matching older drives means even more savings.

      Simple array expansion, single or dual parity, powerful cache-pool tools, and easily the best plugin and Docker app store make it just such a cool tool.

      • @[email protected] (OP) · 1 point · 9 months ago

        This sounds very good; I like what I am reading and hearing about Unraid! And I do live somewhere with very high energy costs…

  • @[email protected] (bot) · 2 points · edited · 9 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    ESXi           VMware virtual machine hypervisor
    LXC            Linux Containers
    NAS            Network-Attached Storage
    Plex           Brand of media server package
    RAID           Redundant Array of Independent Disks for mass storage
    SATA           Serial AT Attachment interface for mass storage
    SSD            Solid State Drive mass storage
    SSH            Secure Shell for remote terminal access
    k8s            Kubernetes container management package

    [Thread #164 for this sub, first seen 24th Sep 2023, 20:25] [FAQ] [Full list] [Contact] [Source code]

  • @[email protected] · 2 points · 9 months ago

    First I chose a Pi; now I'm using a NUC as a NAS.
    The reason why: the price was too much for a Synology with a transcode-capable CPU, and it wasn't clear what type of processor was being used.

  • @[email protected] · 2 points · 9 months ago

    A NAS serves data to clients. I know this is tilting conventional wisdom on its head, but hear me out: go for the most inexpensive, lowest-power, storage-only NAS that you can tolerate, and instead put your money into your data transport (network) and into your clients.

    As much as possible, simplify your life: move processing out of the middle tiers and into the client tiers.

  • @[email protected] · 2 points · 9 months ago

    I wouldn't recommend a Synology NAS if you intend to stream content with Plex/Jellyfin; it simply lacks the horsepower most of the time. I would just go with a DIY solution, imo. If you want to throw together components you have lying around, I would go with Unraid. Unraid doesn't really care what you throw at it hardware-wise.

  • @[email protected] · 2 points · 9 months ago

    Unraid is great. Don’t let the FOSS heads say otherwise.

    I paid $100 3 years ago, ONCE. Best purchase I’ve ever made.

    I've tried the FOSS alternatives after getting familiar with Unraid, and I still prefer Unraid.

    • @[email protected] · 1 point · 9 months ago

      Seconded. But for more details: it's great because you can throw in many drives of different sizes, unlike RAID setups where every drive has to be the same size. You can also specify how many drives you want to use for parity (redundancy).
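      The idea behind single-parity redundancy is plain XOR, which a toy example shows (the byte values are arbitrary, just for illustration):

```shell
# Single parity: the parity block is the XOR of the data blocks, so any
# one lost block can be rebuilt from parity plus the surviving blocks.
d1=170 d2=204 d3=240
parity=$(( d1 ^ d2 ^ d3 ))

# pretend the disk holding d2 died; rebuild its block
rebuilt=$(( parity ^ d1 ^ d3 ))
echo "$rebuilt"   # matches the lost d2
```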

      It has a nice web interface that you can access from any other PC on your LAN. I also have mine set up with Unraid Connect, which lets me access it from the open web as well. It has a strong password and 2FA, so I'm not concerned about security.

      It also makes it easy to serve Docker containers and full-blown VMs. You can set them up right in the UI, or you can SSH in and use it as a normal Linux OS if you're a power user. The web UI also has a button that launches an SSH terminal in a separate window.

      You can just use it as a NAS if you want, but Unraid makes it easy to expand your capabilities if you later feel like it. For example, you are only a few button clicks away from running Jellyfin to provide a nice UI for all your media files that you may be storing on your NAS.