• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • I’ll second this. For what it’s worth (and I may be tarred and feathered for saying this here), I prefer commercial software for my backups.

    I’ve used many, including:

    • Acronis
    • Arcserve UDP
    • Datto
    • Storagecraft ShadowProtect
    • Unitrends Enterprise Backup (pre-Kaseya, RIP)
    • Veeam B&R
    • Veritas Backup Exec

    What was important to me was:

    • Global (not inline) deduplication to disk storage
    • Agent-less backup for VMware/Hyper-V
    • Tape support with direct granular restore
    • Ability to have multiple destinations on a backup job (e.g. disk to disk to tape)
    • Encryption
    • Easy to set up
    • Easy to make changes (GUI)
    • Easy to diagnose
    • Not having to faff about with it; it should be the one thing in my lab that just works

    Believe it or not, I landed on Backup Exec. Veeam was the only other one to even get close. I’ve been using BE for years now and it has never skipped a beat.

    This most likely isn’t the solution for you, but I’m mentioning it just so you can get a feel for the sort of considerations I made when deciding how my setup would work.


  • As others have mentioned, it’s important to highlight the difference between a sync (basically a replica of the source) and a true backup, which retains historical versions of the data.

    As far as tools go, if the device is running OMV you might want to start by looking at the options within OMV itself to achieve this. A quick Google search hinted at a backup plugin that some people seem to be using.

    If you’re going to be replicating to a remote NAS over the Internet, use a site-to-site VPN for this and do not expose file sharing services to the Internet (for example by port forwarding). It’s not safe to do so these days.

    The questions you need to ask first are:

    1. What exactly needs to be backed up? Some of it? All of it?
    2. How much space does the data I need backed up consume? Do I have enough to fit this plus some headroom for retention?
    3. How many backups do I want to retain, and for how long? (For example, you might keep 2 weeks of daily backups, 3 months of weekly backups and 1 year of monthly backups; see the sketch after these lists.)
    4. How feasible is it to run a test restore? How often am I going to do so? (I can’t emphasise test restores enough - your backups are useless if they aren’t restorable)
    5. Do you need/want to encrypt the data at rest?
    6. Does the Internet bandwidth between the two locations allow you to send a full backup in a reasonable amount of time, or would you be better off manually seeding the data across somehow?

    Once you know that, you will be able to determine:

    1. What tool suits your needs
    2. How you will configure the tool
    3. How to set up the interconnects between sites
    4. How to set up the destination NAS
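    On the retention question (3), most tools implement some variant of a grandfather-father-son rotation. Here’s a rough Python sketch of the idea only (the `keep` function and its defaults are mine for illustration, not any particular tool’s behaviour), using the 2-weeks/3-months/1-year example from above:

    ```python
    from datetime import date, timedelta

    def keep(backup_dates, daily=14, weekly=13, monthly=12):
        """Pick which backups to retain: the newest `daily` backups, the
        newest backup in each of the last `weekly` weeks, and the newest
        backup in each of the last `monthly` months."""
        backup_dates = sorted(backup_dates, reverse=True)  # newest first
        kept = set(backup_dates[:daily])                   # daily tier
        weeks_seen, months_seen = set(), set()
        for d in backup_dates:
            week = d.isocalendar()[:2]                     # (year, week number)
            if week not in weeks_seen and len(weeks_seen) < weekly:
                weeks_seen.add(week)
                kept.add(d)                                # weekly tier
            month = (d.year, d.month)
            if month not in months_seen and len(months_seen) < monthly:
                months_seen.add(month)
                kept.add(d)                                # monthly tier
        return kept

    # Example: one backup per night for the last 400 days
    history = [date.today() - timedelta(days=i) for i in range(400)]
    print(f"retaining {len(keep(history))} of {len(history)} backups")
    ```

    Real tools (restic, Borg and most commercial products) have this built in; the point is just that retention is a pruning policy, separate from how the data actually gets copied.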

    I hope I haven’t overwhelmed, discouraged or confused you further. Feel free to ask as many questions as you need. Protecting your data isn’t fun, but it is important, and it’s a good choice you’re making to look into it.


  • Back in the day when the self-hosted $10 license existed I was using JIRA Service Desk to do this. As far as ticketing systems go it was very easy to work with and didn’t slow me down too much.

    I know you don’t want a ticket system but I’m just curious what other people will suggest because I’m in the same boat as you.

    Currently I haphazardly use Joplin to take very loose notes and sync them to Nextcloud.

    If you want a very simple option with minimal setup and overhead you could use Joplin to create separate notes for each “part” of your lab and just add a new line with a date, time and summary of the change.
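    For example (the entries here are invented), a note called “NAS” might accumulate lines like:

    ```
    2023-06-23 21:40 - Replaced failed disk in bay 3, RAID rebuild completed
    2023-07-02 10:15 - Upgraded OMV and rebooted, all shares back up
    ```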

    I do also use SnipeIT to track all my hardware and parts, which allows you to add notes and service history against the hardware asset.

    Other than that, I’m keen to see what everyone else says.


  • Power

    • 2x feeds into the rack (same circuit but we’ll work on that)
    • Eaton 2000VA double conversion UPS on Feed A
    • APC 1500VA line interactive UPS on Feed B (bypassed, replacing it with another double conversion 2kVA eventually)

    Network

    • 2x Dell N2048P, stacked (potentially getting replaced with 2x stacked Cisco 9300)
    • FortiGate firewall
    • 1000/50 FTTP primary Internet link
    • 4G backup Internet link using a different Telco (the dream is to replace this with Starlink)

    Storage

    • Synology 4-bay NAS with 4x4TB in RAID-10 (for overflow storage from Virtual SAN cluster)
    • HP MSL2024 8Gb Fibre Channel LTO5 tape autoloader for off-site backup

    Compute

    • Dell R520 running VMware ESX for Production (2x Xeon E5-2450L, 80GB DDR3, 4x500GB SSD RAID-10 for Virtual SAN, 1x10TB SATA “scratch” disk, 2x10G fibre storage NICs, 2x1G copper NICs for VM traffic)
    • Dell R330 running VMware ESX for backups and DR (1x Xeon E3-1270v5, 32GB DDR4, 2x512GB SSD RAID-1, 2x4TB HDD RAID-1, 8G FC card for tape library)

    A second prod host will join the R520 soon to add some redundancy and mirror the Virtual SAN.

    All VMs are backed up and kept in an encrypted on-site data store for at least 4 weeks. They’re duplicated to tape (encrypted) once a month and taken off site. Those are kept for 1 year minimum. Cloud backup storage will never replace tape in my setup.

    Services

    As far as “public facing” goes, the list is very short:

    Though I do run around 30-40 services all up on this setup (not including actual non-prod lab things that are on other servers or various SBCs around the place).

    If I had unlimited free electricity and no functioning ears I’d be using my Cisco UCS chassis and Nexus 5K switch/fabric extenders. But it just isn’t meant to be (for now, haha).



  • I worked for an MSP; we had a large storage array which was the cloud backup repository for all of our clients. It was locking up semi-regularly, so we decided to run an “OS reinstall”. Basically, these things install the OS across all of the disks, on a separate partition from where the data lives. “OS reinstall” clones the OS from the flash drive plugged into the mainboard back to all the disks and retains all configuration and data. “Factory default”, however, does not.

    This array was particularly… special… in that you booted it up, held a paperclip into the reset pin, and the LEDs would flash a pattern to let you know you were in the boot menu. You clicked the pin to move through the boot menu options; each time you clicked it, the lights flashed a different pattern to tell you which option was selected. The first option was normal boot, the second or third was OS reinstall, and the very next option was factory default.

    I headed into the data centre. I had the manual, I watched those lights like a hawk and verified the “OS reinstall” LED flash pattern matched up, then I held the pin in for a few seconds to select the option.

    All the disks lit up, away we go. 10 minutes pass. Nothing. Not responding on its interface. 15 minutes. 20 minutes, I start sweating. I plug directly into the NIC and head to the default IP filled with dread. It loads. I enter the default password, it works.

    There it was, staring back at me: “0B of 45TB used”.

    Fuck.

    This was in the days when 50M fibre was rare and most clients had 1-20M ADSL. Yes, asymmetric. We had to send guys out on trips of up to 3 hours with portable hard disks to re-seed the backups, over a painful 30-ish days of re-ingesting them into the NAS.

    The worst part? Years later I discovered that, completely undocumented, you can plug a VGA cable in and you get a text menu on the screen that shows you which option you have selected.

    I (somehow) did not get fired.


  • > Because prospective customers get shy when the browser says that your site is “insecure”

    Because it factually is insecure: it is not encrypted and is trivial to inspect.

    > Because it makes for better google ranking.

    No, in this day and age it is permission to play. Firefox has a built-in feature to only load HTTPS sites, which I have enabled. This has nothing to do with Google. Your issue is with expensive CAs, for which there is a free solution (Let’s Encrypt), not with HTTPS itself.

    > So there you go. Mob hype and googlian dictatorship.

    Incorrect. It is a matter of safety and security, and a trivial thing to implement. You are free to not use HTTPS if you want, just as people are free to not consume your service if you don’t.

    Calling it a “dictatorship” is hyperbole and demonstrates that you clearly have no idea what you’re talking about and won’t listen to people who do.


  • Some do. It depends on the type of certificate. Thankfully we now have Let’s Encrypt, so there is a free alternative to the big CAs.

    To answer your initial question: yes, it is necessary. Without HTTPS, or encryption in general, anybody who can intercept your connection can see everything you’re doing.

    A real-world example: let’s say you’re connected to a WiFi network that has no password and you’re browsing a plain HTTP site. Open WiFi networks are unencrypted, as is HTTP.

    I can sit across the road in a vehicle, unseen, on a laptop and sniff the traffic to view what you’re doing. If you log into your bank, I now have your credentials and can do what I like, and you don’t even know.

    This is why we need encryption. It (almost) guarantees that your traffic is viewable only by you and the other end of whatever you’re connecting to, and not by anyone in the middle.
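    If you want to see that guarantee in action, here is a small Python sketch using only the standard library (example.com is just a stand-in for any HTTPS site):

    ```python
    import socket, ssl

    # The ssl module verifies the server's certificate chain and hostname,
    # then encrypts everything on the wire; a sniffer in the middle sees
    # only ciphertext.
    ctx = ssl.create_default_context()  # uses the system's trusted CA bundle

    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())                 # e.g. TLSv1.3
            print(tls.getpeercert()["subject"])  # the identity we just verified
            tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(256))                 # decrypted only at our end
    ```

    Swap the port for 80 and drop the wrap_socket() call, and that same request, credentials and all, crosses the network in plain text.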

    Edit: for anyone downvoting OP, remember this is nostupidquestions. Take the time to educate if you know better, but don’t downvote “stupid” questions lol.


  • Nothing manual required; you can federate with any other instance as long as you’re not on their ban list.

    You basically use your instance’s search to find a community on the remote instance, and your instance then requests the top (5?) posts from that community. Once a user subscribes, all new posts going forward will be sent to your server via federation.

    At least I think that’s how it works, haha.
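    (Under the hood this is ActivityPub. Roughly speaking, subscribing makes your instance deliver a Follow activity to the remote community’s inbox; a sketch, with every URL below made up for illustration:)

    ```python
    import json

    # Roughly the ActivityPub Follow a subscribing instance delivers to
    # the remote community's inbox (all URLs here are invented):
    follow = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://instance-a.example/activities/follow/123",
        "type": "Follow",
        "actor": "https://instance-a.example/u/alice",
        "object": "https://instance-b.example/c/selfhosted",
    }
    print(json.dumps(follow, indent=2))

    # The remote instance replies with an Accept, then pushes new posts
    # and comments to the follower's inbox as further activities.
    ```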


  • Same here! My background is in systems architecture, so I love this stuff.

    Though I run mine on my own “private cloud”. Even though it sounds like an amateur operation, I’ve got the proper safety nets in place (backups, redundant power, firewalls, etc.). A lot of instances are on public cloud, which is cool and I have nothing against it, I just wanted to do something a little different.

    I have no idea how to get people to join but I hope to have some friends in here some day :D




  • Authelia is popular, as is Keycloak. I believe Red Hat develops Keycloak or at least has a hand in it.

    I’m on this journey as well, figuring out what I’m going to use. Currently most of my services just use LDAP back to AD, but I’m looking to do something more modern like SAML, OAuth or OpenID Connect so that I can reduce the number of MFA tokens I have.
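    One nice thing about OIDC over per-app LDAP is discovery: the provider publishes its endpoints at a well-known URL, so apps mostly just need the issuer plus a client ID and secret. A quick Python sketch (the issuer URL is a made-up Keycloak-style example):

    ```python
    import json
    import urllib.request

    # OIDC providers publish their configuration at a standard path.
    # The issuer below is invented; recent Keycloak uses /realms/<realm>.
    issuer = "https://keycloak.example/realms/homelab"

    with urllib.request.urlopen(f"{issuer}/.well-known/openid-configuration") as resp:
        cfg = json.load(resp)

    # Apps redirect users to these endpoints for login and token exchange.
    print(cfg["authorization_endpoint"])
    print(cfg["token_endpoint"])
    print(cfg["userinfo_endpoint"])
    ```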

    Just as an anecdote you may find useful: I used to run Active Directory for my Windows machines and FreeIPA for my Linux machines, and I’ve managed to simplify this to just AD. Linux machines can be joined to AD too, and you can still use sudo and all the other good stuff while having only one source of truth for identity.