  • My recommendation would be to use LVM. Set up a PV on the new drive and create an LV filling the drive (with a filesystem on it), then move all the data off of one of the old drives onto this new drive, reformat that old drive as a second PV in the volume group, and expand the LV. Repeat the process for the second old drive, except that instead of extending the LV, you set the parity option on the LV to 1. You can add further disks in the future, increasing the LV size or adding parity or mirroring as needed. This also gives you the advantage that you can (once you have some free space) create another LV with different mirroring or parity requirements.
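
    A rough sketch of the process, with illustrative device names (and note that converting to parity may require an intermediate raid1 step, depending on the layout):

    # Turn the new drive into a PV and build a VG and LV on it
    pvcreate /dev/sdc
    vgcreate data /dev/sdc
    lvcreate --extents 100%FREE --name storage data
    mkfs.ext4 /dev/data/storage

    # Copy everything off the first old drive, then absorb it into the VG
    pvcreate /dev/sda
    vgextend data /dev/sda
    lvextend --extents +100%FREE data/storage
    resize2fs /dev/data/storage

    # Empty the second old drive, add it, and convert to parity (RAID5)
    # instead of extending further
    pvcreate /dev/sdb
    vgextend data /dev/sdb
    lvconvert --type raid5 data/storage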






  • I followed the guide found here, with a few modifications.

    Notably, I did not encrypt the Borg repository, and I heavily modified the backup script.

    #!/bin/bash -ue
    
    # The udev rule is not terribly accurate and may trigger our service before
    # the kernel has finished probing partitions. Sleep for a bit to ensure
    # the kernel is done.
    #
    # This can be avoided by using a more precise udev rule, e.g. matching
    # a specific hardware path and partition.
    sleep 5
    
    #
    # Script configuration
    #
    
    # The backup partition is mounted there
    MOUNTPOINT=/mnt/external
    
    # This is the location of the Borg repository
    TARGET=$MOUNTPOINT/backups/backups.borg
    
    # Archive name schema
    DATE=$(date '+%Y-%m-%d-%H-%M-%S')-$(hostname)
    
    # This is the file that will later contain UUIDs of registered backup drives
    DISKS=/etc/backups/backup.disk
    
    # Find whether the connected block device is a backup drive
    for uuid in $(lsblk --noheadings --list --output uuid)
    do
            if grep --quiet --fixed-strings "$uuid" "$DISKS"; then
                    break
            fi
            uuid=
    done
    
    if [ -z "${uuid:-}" ]; then
            echo "No backup disk found, exiting"
            exit 0
    fi
    
    echo "Disk $uuid is a backup disk"
    partition_path=/dev/disk/by-uuid/$uuid
    # Mount file system if not already done. This assumes that if something is already
    # mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
    # it was mounted somewhere else.
    (mount | grep --quiet "$MOUNTPOINT") || mount "$partition_path" "$MOUNTPOINT"
    drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
    echo "Drive path: $drive"
    
    # Log Borg version
    borg --version
    
    echo "Starting backup for $DATE"
    
    # Make sure all data is written before creating the snapshot
    sync
    
    
    # Options for borg create
    BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"
    
    # No one can answer if Borg asks these questions, it is better to just fail quickly
    # instead of hanging.
    export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
    export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no
    
    
    #
    # Create backups
    #
    
    function backup () {
      local DISK="$1"   # name of the LVM volume to snapshot (in VG "data")
      local LABEL="$2"  # human-readable label; accepted but not used below
      shift 2
    
      local SNAPSHOT="$DISK-snapshot"
      local SNAPSHOT_DIR="/mnt/snapshot/$DISK"
    
      local DIRS=""
      while (( "$#" )); do
        DIRS="$DIRS $SNAPSHOT_DIR/$1"
        shift
      done
    
      # Make and mount the snapshot volume
      mkdir -p $SNAPSHOT_DIR
      lvcreate --size 50G --snapshot --name $SNAPSHOT /dev/data/$DISK
      mount /dev/data/$SNAPSHOT $SNAPSHOT_DIR
    
      # Create the backup
      borg create $BORG_OPTS $TARGET::$DATE-$DISK $DIRS
    
    
      # Check the snapshot usage before removing it
      lvs
      umount $SNAPSHOT_DIR
      lvremove --yes /dev/data/$SNAPSHOT
    }
    
    # usage: backup <lvm volume> <label> <folders to back up, relative to the volume root>
    backup photos immich immich
    # Other backups listed here
    
    echo "Completed backup for $DATE"
    
    # Just to be completely paranoid
    sync
    
    if [ -f /etc/backups/autoeject ]; then
            umount $MOUNTPOINT
            udisksctl power-off -b $drive
    fi
    
    # Send a notification
    curl -H 'Title: Backup Complete' -d "Server backup for $DATE finished" 'http://10.30.0.1:28080/backups'
    

    Most of my services are stored on individual LVM volumes, all mounted under /mnt, so immich is completely self-contained under /mnt/photos/immich/. The last line of my script sends a notification to my phone using ntfy.
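
    For reference, "registering" a backup drive just means appending its partition UUID to the file the script checks (the device name here is a placeholder):

    lsblk --noheadings --output uuid /dev/sdX1 >> /etc/backups/backup.disk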


  • Here are a few more details of my setup:

    Components:

    • server
    • clients (phone/laptop)
    • domain name (we’ll call it custom.domain)
    • home router
    • dynamic DNS provider

    The home router has the WireGuard port forwarded to the server, with no re-mapping (I'm using the default 51820). It also provides DHCP for my home network, using 192.168.1.0/24.

    The server runs the dynamic DNS client (keeping the dynamic domain name updated with my public IP), and I have a CNAME record on vpn.custom.domain pointing to the dynamic DNS name (which is an awful random string of characters). I also have server.custom.domain with an A record pointing to 10.30.0.1. All my DNS records are in public DNS, so there is no need to change DNS settings on the computer or phone, or to use DNS overrides in WireGuard.
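
    In zone-file terms, the records look something like this (the dynamic DNS target is made up):

    vpn.custom.domain.     IN CNAME  xxxxxxxx.dynamic-dns.example.
    server.custom.domain.  IN A      10.30.0.1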

    Immich config:

    version: "3.8"
    
    services:
      immich-server:
        container_name: immich_server
        image: ghcr.io/immich-app/immich-server:release
        entrypoint: ["/bin/sh", "./start-server.sh"]
        volumes:
          - ${UPLOAD_LOCATION}:/usr/src/app/upload
        env_file:
          - .env
        ports:
          - target: 3001
            published: 2283
            host_ip: 10.30.0.1
        depends_on:
          - redis
          - database
        restart: always
        networks:
          - immich
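
    One detail worth calling out: host_ip: 10.30.0.1 binds the published port to the WireGuard address only, so Immich is never exposed on the LAN or a public interface. You can confirm the bind on the server with:

    ss --tcp --listening --numeric | grep 2283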
    

    WireGuard is configured using wg-quick (/etc/wireguard/wg0.conf):

    [Interface]
    Address = 10.30.0.1/16
    PrivateKey = <server-private-key>
    ListenPort = 51820
    
    [Peer]
    PublicKey = <phone-public-key>
    AllowedIPs = 10.30.0.12/32
    
    [Peer]
    PublicKey = <laptop-public-key>
    AllowedIPs = 10.30.0.11/32
    

    Start WireGuard with systemctl enable --now wg-quick@wg0.
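
    Once the interface is up, you can verify both peers are loaded (a latest-handshake line appears after each client connects):

    wg show wg0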

    Phone WireGuard configuration (iOS):

    [Interface]
    Name = vpn.custom.domain
    
    Private Key = <phone private key>
    Public Key = <phone public key>
    
    Addresses = 10.30.0.12/32
    Listen port = <blank>
    MTU = <blank>
    DNS servers = <blank>
    
    [Peer]
    Public Key = <server public key>
    Pre-shared key = <blank>
    Endpoint = vpn.custom.domain:51820
    Allowed IPs = 10.30.0.0/16
    Persistent Keepalive = 25
    
    [On Demand Activation]
    Cellular = On
    Wi-Fi = On
    SSIDs = Any SSID
    

    This connection is then left always enabled, and comes on whenever my phone has any kind of network connection.

    My laptop (running Linux) is also using wg-quick (/etc/wireguard/wg0.conf):

    [Interface]
    Address = 10.30.0.11/32
    PrivateKey = <laptop private key>
    
    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.custom.domain:51820
    AllowedIPs = 10.30.0.0/16
    

    My wife’s Windows laptop is configured using the official WireGuard Windows app, with similar settings.

    No matter where we are (at home, on a WiFi hotspot, or using cellular data), we access Immich over the VPN at http://server.custom.domain:2283/.

    Let me know if you have any further questions.



  • I use WireGuard; it’s set to on-demand for any network or cellular data, so it’s effectively always on, and there are no local DNS overrides (I just use public DNS records that resolve to private-range IP addresses). It doesn’t make any sort of dent in my battery life. Also, only traffic for the WireGuard network is routed through the tunnel, so if my server is down the phone’s and laptop’s internet continues to work. I borrowed my wife’s phone and laptop for 15 minutes to set it up, and now no one has to think about it.



  • I back up to an external hard disk that I keep in a fireproof, water-resistant safe at home. Each service has its own LVM volume, which I snapshot and then back up with Borg, all into one repository. The backup is triggered by a udev rule (sketched below), so it happens automatically when I plug the drive in; the backup script uses ntfy (running locally) to let me know when it is finished, so I can put the drive back in the safe. I can share the script later, if anyone is interested.
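
    The trigger itself looks roughly like this (file names are illustrative; the rule deliberately matches any new block device and relies on the script's UUID check to filter):

    # /etc/udev/rules.d/99-local-backup.rules
    ACTION=="add", SUBSYSTEM=="bdi", DEVPATH=="/devices/virtual/bdi/*", TAG+="systemd", ENV{SYSTEMD_WANTS}="automatic-backup.service"

    # /etc/systemd/system/automatic-backup.service
    [Service]
    Type=oneshot
    ExecStart=/etc/backups/run.sh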