• 0 Posts
  • 28 Comments
Joined 3 months ago
Cake day: April 5th, 2024


  • Not without good logs or debugging tools.

    You need to know what to observe. You are not going to get the information you are looking for directly from ZFS or even the system logs.

    What I suggest stands. You have to understand the behavior of the USB controller. That information is acquired from researching USB itself.
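
    If you want a concrete starting point for that observation, a couple of stock commands will at least show which disks sit behind a USB transport and how the USB tree is shared (nothing ZFS-specific here, just plain lsblk and lsusb; install usbutils if lsusb is missing):

    # show each block device and the transport it uses (usb vs sata/nvme)
    lsblk -o NAME,TRAN,MODEL,SIZE
    # show the USB topology: which devices hang off which root hub and at what speed
    lsusb -t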

    Now, if you intend to use something like a USB enclosure, you would indeed be better off with something like ext4. However, keep in mind that this is not strictly a file system issue; it’s an issue with how USB controllers interact with file systems.

    That has been my experience from researching this matter. ZFS is simply more sensitive.

    In my experience, even on motherboards that have port limitations, it’s possible to take advantage of PCIe lanes and install an HBA with an onboard SATA controller. They also make PCIe cards that will accept NVMe drives.

    Good luck with your experimentation and research.


  • This takes a degree of understanding of what you are doing and why it fails.

    I’ve done some research on this myself and the answer is the USB controller. Specifically, the way the USB controller “shares” bandwidth, which is not how a SATA controller or a PCIe lane handles it.

    ZFS expects direct control of the disk to operate correctly and anything that gets in between the file system and the disk is a problem.

    In the case of USB, let’s say you have two USB-to-NVMe adapters plugged into the same system in a basic ZFS mirror. ZFS will expect to mirror operations between these devices but will constantly be interrupted by the USB controller sharing bandwidth between them.

    A better, but still bad, solution would be something like a USB-to-SATA enclosure. In that situation, if you installed a couple of disks in a mirror inside the enclosure, they would be using a single USB port and the controller would at least keep the data on one lane instead of constantly switching.

    Regardless, if you want to dive deeper you will need to read up on USB controllers and bandwidth sharing.

    If you want a stable system, give ZFS direct access to your disks; if you do not, accept that ZFS operations will degrade over time.
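
    If you do go the direct-attach route, a minimal sketch of handing ZFS whole disks by stable ID looks something like this (the pool name and disk IDs are placeholders):

    # use /dev/disk/by-id paths so ZFS gets whole, consistently named disks
    sudo zpool create tank mirror \
      /dev/disk/by-id/ata-DISK_A_SERIAL \
      /dev/disk/by-id/ata-DISK_B_SERIAL
    # verify both sides of the mirror are online
    zpool status tank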


  • That doesn’t make any sense to me. It can be installed directly from pacman. It may be something silly like adding your user to the docker group. Have you done something like the following for Docker?

    1. Update the package index:

    sudo pacman -Syu

    2. Install Docker:

    sudo pacman -S docker

    3. Enable and start the Docker service:
    sudo systemctl enable docker.service
    sudo systemctl start docker.service
    
    4. Add your user to the docker group to run Docker commands without sudo:

    sudo usermod -aG docker $USER

    5. Log out and log back in for the group changes to take effect.

      Verify that Docker is installed correctly by running:

    docker --version

    If you get the above working, Docker Compose is just

    sudo pacman -S docker-compose
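
    As a quick sanity check once that is installed, something like this should confirm the daemon and your group membership are working (hello-world is just Docker’s test image):

    # should run without sudo once the group change has taken effect
    docker run --rm hello-world
    # depending on the package version, compose is a plugin or a standalone binary
    docker compose version || docker-compose version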


  • I’ll be honest, OP: if it’s on a TV I use the newer Fire Sticks with the Jellyfin app. They already support various codecs and stream from my server just fine. They’re cheap too and come with a remote.

    If I were just trying to get a home-made client up, I would consider Debian Bookworm and just use the .deb from the GitHub link here…

    https://jellyfin.org/downloads/clients/

    Personally I’d throw on Cockpit to make remote administration a bit easier and set up an autostart at login for Jellyfin Media Player with the startup apps. You can even add a launch flag to start it full screen like…

    jellyfin --fullscreen
    

    The media player doesn’t really need special privileges so you could create a basic user account just for jellyfin.
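
    As a rough sketch of that autostart, assuming the .deb from the link above and the standard freedesktop autostart directory, the dedicated user could have a file like ~/.config/autostart/jellyfinmediaplayer.desktop:

    [Desktop Entry]
    Type=Application
    Name=Jellyfin Media Player
    # binary name assumed from the jellyfin-media-player package; adjust if yours differs
    Exec=jellyfinmediaplayer --fullscreen
    X-GNOME-Autostart-enabled=true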


  • Setups for hardware decoding depend on the underlying OS. A quite common example is Docker on Debian or Ubuntu. You will need to pass the appropriate /dev/ directories, and at times individual files, into your Jellyfin Docker container with the device option (--device on the CLI, devices: in a Compose file). Commonly that would be /dev/dri.

    It gets more complicated with a VM because you are likely going to be passing the hardware directly into the VM, which will prevent anything outside the VM from using it.

    You can get around this by placing Docker directly on the OS, or by placing Docker in a Linux container with appropriate permissions and the same devices passed into that container. In this manner system devices and other services will still have access to the video card.

    All this to say: how you pass the hardware into Jellyfin depends on your setup and where you have Docker installed. However, Jellyfin on Docker will need you to pass the video card into the container with the device option, and Docker itself needs to see the device to be able to do that.
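
    As a rough sketch on a Debian/Ubuntu host where Docker can see /dev/dri, a run command might look like this (the paths, port, and container name are only illustrative):

    docker run -d \
      --name jellyfin \
      --device /dev/dri:/dev/dri \
      -p 8096:8096 \
      -v /path/to/config:/config \
      -v /path/to/media:/media \
      jellyfin/jellyfin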



  • Most filesystems, as mentioned in the guide, that exist within qcow2 files, zvols, or even raw images living on a ZFS dataset would benefit from a ZFS recordsize of 64K. By default the recordsize will be 128K.

    I would never use 1M for any dataset that has VM disks inside it.

    I would create a new dataset for media off the pool and set a recordsize of 1M. You can only really get away with this if the dataset holds media files directly: pics, music, videos.

    The cool thing is you can set these options on a per-dataset basis, so one dataset can have one recordsize and another dataset can have a different one.
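
    A sketch with hypothetical pool and dataset names:

    # one dataset tuned for VM disk images, one for plain media files
    sudo zfs create -o recordsize=64K tank/vmdisks
    sudo zfs create -o recordsize=1M tank/media
    # confirm what each dataset ended up with
    zfs get recordsize tank/vmdisks tank/media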


  • It looks like you are using legacy BIOS. Mine is using UEFI with a ZFS rpool:

    proxmox-boot-tool status
    Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
    System currently booted with uefi
    31FA-87E2 is configured with: uefi (versions: 6.5.11-8-pve, 6.5.13-5-pve)
    

    However, as with everything, a method always exists to get it done. Or not, if you are concerned.

    If you are interested it would look like…

    Pool Upgrade

    sudo zpool upgrade <pool_name>
    

    Confirm Upgrade

    sudo zpool status
    
    

    Refresh boot config

    sudo proxmox-boot-tool refresh
    
    

    Confirm Boot configuration

    cat /boot/grub/grub.cfg
    

    You are looking for directives like this to see if they are indeed pointing at your existing rpool

    root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
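
    If you just want to pull those lines out quickly, a simple grep works (assuming the default grub.cfg path):

    grep -n 'root=ZFS' /boot/grub/grub.cfg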
    

    Here is my file if it helps you compare…

    #
    # DO NOT EDIT THIS FILE
    #
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    #
    
    ### BEGIN /etc/grub.d/000_proxmox_boot_header ###
    #
    # This system is booted via proxmox-boot-tool! The grub-config used when
    # booting from the disks configured with proxmox-boot-tool resides on the vfat
    # partitions with UUIDs listed in /etc/kernel/proxmox-boot-uuids.
    # /boot/grub/grub.cfg is NOT read when booting from those disk!
    ### END /etc/grub.d/000_proxmox_boot_header ###
    
    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
      set have_grubenv=true
      load_env
    fi
    if [ "${next_entry}" ] ; then
       set default="${next_entry}"
       set next_entry=
       save_env next_entry
       set boot_once=true
    else
       set default="0"
    fi
    
    if [ x"${feature_menuentry_id}" = xy ]; then
      menuentry_id_option="--id"
    else
      menuentry_id_option=""
    fi
    
    export menuentry_id_option
    
    if [ "${prev_saved_entry}" ]; then
      set saved_entry="${prev_saved_entry}"
      save_env saved_entry
      set prev_saved_entry=
      save_env prev_saved_entry
      set boot_once=true
    fi
    
    function savedefault {
      if [ -z "${boot_once}" ]; then
        saved_entry="${chosen}"
        save_env saved_entry
      fi
    }
    function load_video {
      if [ x$feature_all_video_module = xy ]; then
        insmod all_video
      else
        insmod efi_gop
        insmod efi_uga
        insmod ieee1275_fb
        insmod vbe
        insmod vga
        insmod video_bochs
        insmod video_cirrus
      fi
    }
    
    if loadfont unicode ; then
      set gfxmode=auto
      load_video
      insmod gfxterm
      set locale_dir=$prefix/locale
      set lang=en_US
      insmod gettext
    fi
    terminal_output gfxterm
    if [ "${recordfail}" = 1 ] ; then
      set timeout=30
    else
      if [ x$feature_timeout_style = xy ] ; then
        set timeout_style=menu
        set timeout=5
      # Fallback normal timeout code in case the timeout_style feature is
      # unavailable.
      else
        set timeout=5
      fi
    fi
    ### END /etc/grub.d/00_header ###
    
    ### BEGIN /etc/grub.d/05_debian_theme ###
    set menu_color_normal=cyan/blue
    set menu_color_highlight=white/blue
    ### END /etc/grub.d/05_debian_theme ###
    
    ### BEGIN /etc/grub.d/10_linux ###
    function gfxmode {
            set gfxpayload="${1}"
    }
    set linux_gfx_mode=
    export linux_gfx_mode
    menuentry 'Proxmox VE GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sdc3' {
            load_video
            insmod gzio
            if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
            insmod part_gpt
            echo    'Loading Linux 6.5.13-5-pve ...'
            linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
            echo    'Loading initial ramdisk ...'
            initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
    }
    submenu 'Advanced options for Proxmox VE GNU/Linux' $menuentry_id_option 'gnulinux-advanced-/dev/sdc3' {
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-advanced-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.13-5-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-recovery-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.13-5-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-advanced-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.11-8-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-recovery-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.11-8-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
            }
    }
    
    ### END /etc/grub.d/10_linux ###
    
    ### BEGIN /etc/grub.d/20_linux_xen ###
    
    ### END /etc/grub.d/20_linux_xen ###
    
    ### BEGIN /etc/grub.d/20_memtest86+ ###
    ### END /etc/grub.d/20_memtest86+ ###
    
    ### BEGIN /etc/grub.d/30_os-prober ###
    ### END /etc/grub.d/30_os-prober ###
    
    ### BEGIN /etc/grub.d/30_uefi-firmware ###
    menuentry 'UEFI Firmware Settings' $menuentry_id_option 'uefi-firmware' {
            fwsetup
    }
    ### END /etc/grub.d/30_uefi-firmware ###
    
    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries.  Simply type the
    # menu entries you want to add after this comment.  Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###
    
    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f  ${config_directory}/custom.cfg ]; then
      source ${config_directory}/custom.cfg
    elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
      source $prefix/custom.cfg
    fi
    ### END /etc/grub.d/41_custom ###
    

    You can see the lines in the linux entries of each menu section.