I’m a retired Unix admin. It was my job from the early '90s until the mid '10s, and I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works - after I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • MostlyGibberish@lemm.ee · 7 months ago

    One of the things I like about containers is how central the IaC (infrastructure-as-code) methodology is. There are certainly tools to codify VMs, but with Docker, right out of the gate, you’ll be defining your containers through a Dockerfile, a docker-compose.yml, or the config format of whatever other orchestration platform you use. With a VM, I’m always tempted to just make on-the-fly config changes directly on the box, since rebuilding is so heavy, but with containers I’m more driven to properly update the container definition and then rebuild the container. Because of that, you have an inherent backup that you can easily push to a remote git server or something similar. Maybe that’s not as much of a benefit if you already have a good system, but containers make it easier imo.
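    To make the “everything is defined in text” point concrete, here’s a minimal, purely illustrative Dockerfile sketch (the app, file names, and port are made up, not from this thread) that captures an entire build in a file you can commit to git:

    ```dockerfile
    # Hypothetical example: package a small Python app as an image.
    # Every build step lives in this file, so the image is reproducible
    # and the file itself can be version-controlled.
    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    EXPOSE 8080
    CMD ["python", "app.py"]
    ```

    Rebuilding from scratch is then just docker build, so there’s much less temptation to hand-edit a running container.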

    • Dyskolos@lemmy.zip · 7 months ago

      Actually, I’ve only tried a Docker container once, tbh. Haven’t put much time into it and was kinda forced to do it. So, if I got you right, I define the container with things like the NIC setup, IP, or CPU/RAM limits, and that’s it? And the configuration of the app in the container? Is that IN the container or applied “onto it” for easy rebuild purposes? Right now I just have a ton of (big) backups of all VMs. If I screw up, I’m going back to this morning. Takes like 2 minutes tops. Would I even see a benefit from Docker? Besides saving much of the overhead, of course.

      • felbane@lemmy.world · 7 months ago

        You don’t actually have to care about defining IP, cpu/ram reservations, etc. Your docker-compose file just defines the applications you want and a port mapping or two, and that’s it.

        Example:

        ---
        version: "2.1"
        services:
          adguardhome-sync:
            image: lscr.io/linuxserver/adguardhome-sync:latest
            container_name: adguardhome-sync
            environment:
              - CONFIGFILE=/config/adguardhome-sync.yaml
            volumes:
              - /my/appdata/config:/config
            ports:
              - 8080:8080
            restart: unless-stopped
        

        That’s it: you run docker-compose up and the container starts, reads your config from your config folder, and exposes port 8080 to the rest of your network.

        • Dyskolos@lemmy.zip · 7 months ago

          Oh… But that means I need another server with a reverse-proxy to actually reach it by domain/ip? Luckily caddy already runs fine 😊

          Thanks man!

          • felbane@lemmy.world · 7 months ago

            Most people set up a reverse proxy, yes, but it’s not strictly necessary. You could certainly expose the application port directly, or change the mapping to something like 443:8080 so it answers on the standard HTTPS port, but then you’d obviously have to jump through some extra hoops for certificates, etc.

            Caddy is a great solution (and there’s even a container image for it 😉)
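            Since Caddy is already running, pointing it at the container can be a two-line site block. A minimal Caddyfile sketch, assuming the container’s port 8080 is published on the same host (the hostname is a placeholder, not from this thread):

            ```
            # Hypothetical Caddyfile entry: Caddy obtains and renews the
            # TLS certificate automatically and forwards traffic to the
            # container's published port.
            sync.example.home {
                reverse_proxy localhost:8080
            }
            ```

            That way the container keeps its plain-HTTP port mapping and Caddy handles certificates for you.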