Okay, I’ve seen this posted a lot and apparently it’s pretty common, but why do people virtualize their NAS in, for example, a Proxmox server/cluster? If that goes down, doesn’t it get much harder to get your data back than if you run it on bare metal? Are people only doing it to save on separate devices, or are my concerns unreasonable?

  • cybersandwich@lemmy.world

    Usually if you are doing something like that you are passing a SATA controller card/HBA directly to the VM and the drives are connected through that.

    So all of the data would still be on those drives even if you blew up that VM entirely or Proxmox corrupted itself.
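
    On Proxmox, the passthrough itself is basically one qm call. A rough sketch (the VM ID and PCI address below are placeholders, not values from this thread; look up your HBA’s real address with lspci, and IOMMU has to be enabled first):

        # Sketch: hand an HBA to a NAS VM on a Proxmox host via PCI passthrough.
        # The VM ID and PCI address are placeholders.
        import subprocess

        VMID = "100"          # hypothetical NAS VM
        HBA_ADDR = "01:00.0"  # find the real address with lspci

        # qm is Proxmox's VM management CLI; -hostpci0 gives the guest the whole
        # controller, so the NAS OS talks to the drives directly.
        subprocess.run(["qm", "set", VMID, "-hostpci0", HBA_ADDR], check=True)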

    On some levels it would be easier to recover from a system failure because you could have VM backups or snapshots to rely on.

    It’s a more advanced setup, but it’s not inherently “bad” or more risky. And yes, usually people are doing that type of thing because they don’t need an entire separate physical server if they already have one with some spare cores and RAM.

    For example, I have TrueNAS running on a server that has a 6700K and 16GB of RAM. That’s overkill CPU for a NAS, and I already have a 32-core, 128GB-RAM server running. I’m migrating TrueNAS to that server and giving it 4 cores and 32GB (still overkill for my needs). That will let me shut the other server down and save on my electric bill.

    • imperator@sh.itjust.works

      Yeah, this is what I do. Pass the HBA card through to OMV and have a union filesystem with SnapRAID. I then periodically back up to an external HDD.
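
      The “periodically back up” part can be a small cron job. A sketch, assuming SnapRAID and rsync are installed; the pool and external-drive paths below are made up:

          # Sketch of the periodic backup step: refresh SnapRAID parity, then
          # mirror the union filesystem onto an external HDD. Paths are placeholders.
          import subprocess

          POOL = "/srv/mergerfs-pool/"   # hypothetical union filesystem mount
          EXTERNAL = "/srv/backup-hdd/"  # hypothetical external HDD mount

          subprocess.run(["snapraid", "sync"], check=True)
          # -a preserves permissions/timestamps; --delete keeps the copy exact
          subprocess.run(["rsync", "-a", "--delete", POOL, EXTERNAL], check=True)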

    • fuser@quex.cc

      okay you actually made me laugh. that’s not easy - take an upvote.

    • rufus@lemmy.sdf.org

      I actually did this for a few months until I saved up enough for a decent dedicated firewall appliance. I got a cheap dual-port 1Gb PCIe NIC off Amazon and passed it directly to an OPNsense VM.

      Honestly, it wasn’t that bad. The only downside was that the Proxmox server was just an old repurposed desktop PC that was super underpowered, so the VM only had like 2GB of RAM, and that ended up being a bit of a bottleneck under load.

      • vividspecter@lemmy.world

        I’m doing it with OpenWrt x86, since I need SQM + WireGuard (and at least the former still isn’t supported on *sense, last time I checked). Works fine in all honesty, and I can reboot the VM much faster than real hardware.

      • icy_mal@kbin.social

        Another vote for the virtualized router! I keep a set of core VMs on that host where uptime is the highest priority. I’ve upgraded RAM, downgraded the CPU, and eventually switched to an entirely new host with zero downtime over the past few months. I’d rather not have to wait until everyone else on the network is sleeping before doing any tinkering on the hardware. It’s pretty neat to be streaming some video and then live migrate the router to another physical host with zero interruption.
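
        On a Proxmox cluster, the live move is essentially one command. A sketch with a made-up VM ID and node name (assumes shared or replicated storage between nodes):

            # Sketch: live-migrate a router VM to another cluster node.
            # The VM ID and node name are placeholders.
            import subprocess

            VMID = "110"          # hypothetical router VM
            TARGET = "pve-node2"  # hypothetical destination node

            # --online keeps the guest running during the move, so traffic keeps flowing
            subprocess.run(["qm", "migrate", VMID, TARGET, "--online"], check=True)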

    • Greyscale@lemmy.sdf.org

      That was me for about 2 weeks until the ESXi box took a shit. Never again. I basically went “fuck this shit” and bought a Ubiquiti UDM.

    • arkcom@kbin.social

      I did it for years. The only problem is, if you mess up your OPNsense config, you’re gonna need to get the keyboard and monitor out.

    • sneakyninjapants@sh.itjust.works

      I’m all for this, actually. Though I’d be doing that on a dedicated machine with just pfSense/OPNsense on it. Any other way would be kinda dumb, right?

  • Katrina@lemmy.blahaj.zone

    My NAS is a FreeBSD virtual machine. The drives are passed through directly to the VM, so it is possible to take the drives out and attach them to another virtual machine or a bare metal computer running FreeBSD or TrueNAS and read them.
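
    If the host happens to be Proxmox/QEMU, whole-disk passthrough is roughly the following (the VM ID and disk ID are placeholders; using /dev/disk/by-id keeps the mapping stable across reboots):

        # Sketch: attach a whole physical disk to the NAS guest on a Proxmox/QEMU host.
        # The VM ID and disk ID are placeholders; list real ones with `ls /dev/disk/by-id`.
        import subprocess

        VMID = "120"
        DISK = "/dev/disk/by-id/ata-EXAMPLE_SERIAL"  # hypothetical disk

        # The guest sees the real drive, so it can be pulled and read on other hardware.
        subprocess.run(["qm", "set", VMID, "-scsi1", DISK], check=True)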

  • digitallyfree@kbin.social

    I virtualize my NAS because it’s small (only several TB), so it can be backed up like any other VM with PBS or dumped as a qcow image. A full restoration is extremely easy because I can simply have another node pull the backup from PBS. I can also migrate the entire NAS to another node so it stays up when its host needs downtime.
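
    The backup side of that is just vzdump pointed at the PBS datastore. A sketch with a made-up VM ID and storage name:

        # Sketch: back up a NAS VM to Proxmox Backup Server like any other guest.
        # The VM ID and the "pbs" storage name are placeholders for a datastore
        # that is already configured on the node.
        import subprocess

        VMID = "130"
        PBS_STORAGE = "pbs"

        # Snapshot mode backs up the running VM without stopping it;
        # restoring elsewhere is the reverse (qmrestore against the same datastore).
        subprocess.run(["vzdump", VMID, "--storage", PBS_STORAGE, "--mode", "snapshot"], check=True)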

  • Pulsar@lemmy.world

    Just because it can be done doesn’t mean it’s a good idea to do it. It all depends on what you are trying to do and your risk profile. For me, networking, storage, and compute are all on separate devices.

  • johntash@eviltoast.org

    It mostly depends on your comfort level. Generally it is frowned upon. I have a Proxmox cluster with two dedicated machines: one for TrueNAS and one for Unraid.

    They both get basically the entire resources of their physical servers and have dedicated HBAs. I haven’t run into any performance issues, and I actually prefer running it this way, because if the NAS VM dies for some reason I can still log into the hypervisor to fix it.

    That said, I don’t think I’d ever use virtual disks in a NAS, and I also wouldn’t run VMs off of the storage in the NAS VM, at least not for the same cluster. I also make sure to have backups in case anything bad does happen.

  • Monkey With A Shell@lemmy.socdojo.com

    Hard to guess for any given situation, but a few pluses come to mind depending on the drive arrangement: it takes out any network latency between a client and the NAS, and if you have a VM using the NAS as its operational drive, it removes the ‘oops, lost the link and my OS drive went away’ mid-run factor. Keep all your container/VM traffic internal and have that single VM sync back to a bare metal box… Might have to consider some ideas on that front myself now that I think of it.

  • arkcom@kbin.social

    On my Proxmox host, I make a mergerfs filesystem that I just mount into the LXC containers that need shared storage.
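
    Sharing that pool into a container is one pct call per container. A sketch with a made-up container ID and paths:

        # Sketch: bind-mount the host's mergerfs pool into an LXC container on Proxmox.
        # The container ID and both paths are placeholders.
        import subprocess

        CTID = "140"
        HOST_POOL = "/mnt/pool"      # hypothetical mergerfs mountpoint on the host
        GUEST_PATH = "/mnt/storage"  # where the container should see it

        # mp0 is a Proxmox LXC mount point: host path first, in-container path after mp=
        subprocess.run(["pct", "set", CTID, "-mp0", f"{HOST_POOL},mp={GUEST_PATH}"], check=True)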

  • HTTP_404_NotFound@lemmyonline.com

    I honestly think it’s pretty bad practice.

    Hey, I got this big fancy server-

    Let me install VMware/Proxmox on it and create some VMs.

    I want a fancy dashboard to click and install my apps. And I need storage. Let me put a TrueNAS/Unraid VM on my Proxmox.

    Oh right, I need storage for another VM. Let me connect VMware/Proxmox to TrueNAS/Unraid via iSCSI/NFS.

    Oh this is the pinnacle of technology /s.

    (Rather than just using the hypervisor built into Unraid/TrueNAS…)

    Or, my favorite: installing a full-blown storage OS just because you need a Windows file share…

    I don’t miss the TrueNAS community, and all of the stupid crap coming from it.

    • ProfessionalBoofis@lemmy.world

      I agree; Proxmox or TrueNAS by itself on bare metal should cover a lot of applications. Both can do most things the other can do to some extent, but each has its own specialties and focus: Proxmox more for VMs, TrueNAS primarily for storage/NAS. But both can do either.

    • icy_mal@kbin.social

      Running the hypervisor built into Unraid or TrueNAS is certainly an option, but Proxmox/VMware are just easier. If you’re learning about virtualization, you’re going to find a lot more resources for Proxmox/VMware. Conversely, the storage capabilities of Proxmox/VMware are either severely limited (in the case of VMware) or just not particularly user-friendly (in the case of Proxmox). By virtualizing your storage OS you can get the best of both worlds in some situations. Sure, there are situations where it’s a bad idea, but if you’ve only got one machine and it has plenty of resources, it can be very effective.

      Heck, even if the main function of the NAS is just Windows shares, that full-blown storage OS is going to give you redundancy, snapshots, and replication. I’d say those are pretty important even for Windows shares.
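
      For a concrete picture of what that buys you, snapshot plus replication on a ZFS-backed share is roughly this (dataset, snapshot, and target names are all made up):

          # Sketch: snapshot a ZFS-backed SMB share and replicate it to a backup pool.
          # Dataset, snapshot, and target names are placeholders.
          import subprocess

          SNAP = "tank/shares/windows@nightly"  # hypothetical dataset@snapshot

          # Point-in-time snapshot of the share; clients keep working while it runs.
          subprocess.run(["zfs", "snapshot", SNAP], check=True)

          # Replicate by piping `zfs send` into `zfs receive` on a second pool.
          send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
          subprocess.run(["zfs", "receive", "backup/shares/windows"], stdin=send.stdout, check=True)
          send.stdout.close()
          send.wait()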