• MentalEdge@sopuli.xyz

    I manage a machine that runs both media transcodes and some video game servers.

    The video game servers have to run in real-time, or very close to it. Otherwise players using them suffer noticeable lag.

    Achieving this while an ffmpeg process was running was completely impossible, no matter what I did to limit ffmpeg’s use of CPU time. Even when running it at the lowest priority, it impacted the game server processes running at top priority. Even when I limited it to one thread, it still affected things.

    I couldn’t understand the problem. There was enough CPU time to go around for both, and the transcode wasn’t even time-sensitive while the game server was, so why couldn’t the Linux kernel just figure it out and schedule things in a way that made sense?

    So, for the first time I read up on how computers actually handle processes, multi-tasking and CPU scheduling.

    FFMPEG is an application that will use ALL available CPU time until a task is done. I came to the conclusion that, due to how context switching works (a CPU core can only do one thing at a time; it just switches between tasks really fast, and that switching itself takes time), the system was falling behind on the video game processes whenever it operated with zero processing headroom. The scheduler wasn’t smart enough to maintain a real-time process in the face of FFMPEG, which would occupy ALL available cycles.

    I learned the solution was core pinning: manually assigning processes to run on specific cores of the CPU. I set FFMPEG to use only one core, since it doesn’t matter how fast it completes, and I set the game processes to use all but that one core, so they never end up queueing for CPU time on a core that doesn’t have the headroom to let them run within a reasonable time frame.

    This has completely solved the problem, as the game processes and FFMPEG no longer wait for CPU cycles in the same queue.
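
    In practice the pinning is just a couple of commands. A minimal sketch using taskset, assuming an 8-core machine where core 0 is sacrificed to the transcode (the file names, server binary, and PID are placeholders):

    # Pin ffmpeg to core 0 only; it doesn't matter how fast it finishes
    taskset -c 0 ffmpeg -i input.mkv output.mkv

    # Pin the game server to the remaining cores 1-7
    taskset -c 1-7 ./game_server

    # Or re-pin an already-running process by PID (12345 is an example)
    taskset -cp 1-7 12345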

    • flambonkscious@sh.itjust.works

      Well that’s interesting… I’d have thought, possibly naively, that as long as a thread had work to do it would essentially behave like ffmpeg does?

      Perhaps there’s something about the type of work though, that it’s very CPU-bound or something?

      • MentalEdge@sopuli.xyz

        I think the difference is simply that most processes only have a certain amount of work to accomplish in a given unit of time. As long as they can get enough CPU time, and get it soon enough after queueing for it, they can maintain real-time execution.

        Very few workloads have that much to do for that long. But I would expect other similar workloads to present the same problem.

        There is a useful stat that Linux tracks in addition to the simple CPU usage percentage. The “load average” represents the average number of processes that are either running on a core or queued waiting for one.

        As long as the number is lower than the available number of cores, this essentially means that whenever one process is done running a task, the next in line can get right on with theirs.

        If the load average is less than the number of cores available, that means the cores have idle time where they are essentially just waiting for a process to need them for something. Good for time-sensitive processes.

        If the load average is above the number of cores, some processes have to wait through several rounds of other processes taking their turn before they can execute their tasks. Interestingly, the load average can cross this threshold well before the CPU hits 100% usage.
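
        You can watch this number yourself; the three values are the 1-, 5- and 15-minute averages (the example output here is made up):

        cat /proc/loadavg    # e.g. "12.48 9.73 4.20 3/612 48271"
        nproc                # e.g. 8, so anything much above 8 means queueing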

        I found that my system can get to a load average of about 1.5 times the number of cores available before you start noticing it when playing on one of the servers I run.

        And whenever ffmpeg was running, the load average would spike to 10-20 times the number of cores. Not good.

        • flambonkscious@sh.itjust.works

          That makes complete sense - if you’ve got something ‘needy’, as soon as it’s queuing up, I imagine it snowballs, too…

          10-20 times the core count is crazy, but I guess a lot of development effort has gone into parallelizing its execution, which of course goes against your use case :)

          • MentalEdge@sopuli.xyz

            Theoretically the load average can go as high as it likes; it’s essentially just the length of the task queue, after all.

            Processes having to queue to get executed is no problem at all for lots of workloads. If you’re not running anything latency-sensitive, a huge load average isn’t a problem.

            Also it’s not really a matter of parallelization. Like I mentioned, ffmpeg impacted other processes even when restricted to running in a single thread.

            That’s because most other processes do work in small chunks that complete in tiny fractions of a second: send a network request, parse some data, decode an image, poll a HID device, etc.

            A transcode, meanwhile, can easily keep a CPU running full tilt for well over a second, working on just that one thing. Most processes show up and go “I need X amount of CPU time”, while ffmpeg shows up and goes “give me all available CPU time”, which is something the scheduler can’t actually quantify.

            It’s like if someone showed up at a buffet and asked for all the food that no-one else is going to eat. How do you determine exactly how much that is, and thereby how much it is safe to give this person without giving away food someone else might’ve needed?

            You don’t. Without CPU headroom it becomes very difficult for the task scheduler to maintain low system latency. It’ll do a pretty good job, but inevitably some CPU time that should have gone to other stuff will go to the process asking for as much as it can get.

  • DefederateLemmyMl@feddit.nl

    I’ve found that the silliest desktop problems are usually the hardest to solve, and the “serious” Linux system errors are the easiest.

    System doesn’t boot? Look at error message, boot from a rescue disk, mount root filesystem and fix what you did wrong.
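
    The whole procedure usually fits in a handful of commands. A sketch from a live USB, assuming the root filesystem is on /dev/sda2:

    mount /dev/sda2 /mnt          # mount the broken root filesystem
    mount --bind /dev /mnt/dev    # bring the device tree along
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt                   # now fix fstab, reinstall the bootloader, etc.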

    Wrong mouse cursor theme in some Plasma applications, ignoring your settings? Some weird font rendering issue? Bang your head against a wall exploring various dotfiles and rc files in your home directory for two weeks, and eventually give up and nuke your profile and reconfigure your whole desktop from scratch.

    • ccunix@lemmy.world

      A couple of weeks ago I moved Firefox to one side. The window disappeared, but Firefox was still running “somewhere” on my desktop, just not actually being rendered to the screen. Killing the process and relaunching just resulted in it being rendered into this weird black hole. Log out of GNOME and log back in? Same! Reboot? Same!

      Ended up deleting its config folder and re-attaching to Firefox sync to get it working again. No idea what went wrong, nor will I ever, most likely.

      • dejected_warp_core@lemmy.world

        There really should be a hotkey for “move window to primary display” or somesuch. The worst is when just the top “cleat” of the window is inaccessible, making it impossible to simply move the window yourself.

        Alternately, a CLI tool to just trash a specific app’s window settings, or a system control panel that lets you browse these settings, would be incredible.
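
        On X11 there is at least a CLI escape hatch: wmctrl can yank a window back into view even when its title bar is unreachable (the window title here is just an example):

        # Move the window whose title matches "Firefox" to the top-left corner;
        # the -e arguments are gravity,x,y,width,height, and -1 keeps the current size
        wmctrl -r "Firefox" -e 0,0,0,-1,-1

        # Or act on whatever window currently has focus
        wmctrl -r :ACTIVE: -e 0,0,0,-1,-1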

        • Captain Aggravated@sh.itjust.works

          In every GUI I’ve used, there are tiling or snapping hotkeys, something like Super + Arrow keys, that will usually put the window somewhere sane.

      • slurpeesoforion@startrek.website

        I feel like I had a disappearing window like that a lifetime ago, and the fix was to change the resolution. I don’t know if that uncovered the void to the right or forced the window to reassign itself to usable space, but it worked then. Hell, it could have been Windows for all I recall.

    • marilynia@discuss.tchncs.de

      Yeah, for some reason a single game ignores the system sound settings and goes straight to a line out. My system doesn’t see that the game is outputting sound, and I can’t change it. (Arch with KDE)

      • Corr@lemm.ee

        Somewhat related, on Windows 11: for some reason Teams volume will desync from the system volume. I’ll put the system volume to 0 and still be hearing Teams, with the same audio device selected. I don’t understand why it would ever work that way, but here we are.

    • fossphi@lemm.ee

      Oh my god, you’ve put into (really nice) words something I’ve felt for quite some time now. I have no trouble (in fact, even joy) when something major is fucked up. But with all these GUI shenanigans, I usually have no idea where to even begin. The lack of structure and hierarchy completely flummoxes me. Or maybe I just don’t have enough experience debugging userland stuff.

  • Hyrulian@lemmy.world

    Around 2017 I spent three days on and off trying to diagnose why my laptop running elementary OS had no wifi support. I reinstalled the wifi drivers and everything countless times. It had worked for many days initially, then one day it just didn’t. Turns out I had accidentally flipped the wifi toggle switch while the laptop was in my bag. I forgot it even had one. Womp womp.

    • Hawke@lemmy.world

      Womp womp.

      I used to bullseye womp rats in my T-16 back home, they’re not much bigger than 2 meters.

    • passepartout@feddit.de

      I had a friend come over to my place to fix her laptop’s wifi. After about an hour searching for any setting in Windows that I could have missed, I coincidentally found a forum where someone pointed out this could be due to a hardware wifi switch…

  • Treczoks@lemmy.world

    My first Linux machine crashing. This was way before Redhat, Ubuntu, Arch, or OpenSUSE. This was installed from 60+ floppy disks on a 386-40 with 8MB of RAM.

    This machine ran happily, but it crashed under heavy load. I tried generating the load with different applications, but couldn’t nail it down to any particular piece of software. So the next thing I checked was the RAM: Memtest86 ran for a day without any problems, but the crashes still came. Then I got the infrared camera from the lab to see if some piece of hardware was overheating. Nope, that went nowhere either.

    Then I tested the hard disk. A read test of the whole HD went without problems. I copied the data to a backup medium and did a write-and-read test by dd’ing /dev/zero over the whole disk, and then dd’ing the disk to /dev/null. Nothing showed up.
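
    In today’s terms that write-and-read test was essentially this (destructive, and the device name is an assumption):

    dd if=/dev/zero of=/dev/hda bs=1M    # overwrite the whole disk (destroys all data!)
    dd if=/dev/hda of=/dev/null bs=1M    # then read every block back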

    I reinstalled Linux, and it crashed again. But this time, I noticed that something was odd with the hard disk. I added a second swap partition, disabled the first, and the machine ran without problems. Strange…

    So I wrote a small program that tested the part of the disk occupied by the old swap space: write data, read data, and log everything with timestamps. And there was the culprit. There was an area on the HD where I could write any data, but when I read blocks back from that area, a) the read took a very long time, b) the blocks came back all zeros regardless of what I had written, and worst of all, c) there was no error indication whatsoever from the controller or drive. Down at the kernel level, the zeroed blocks were happily served up by the HD with an “OK”. And the faulty area was right in the middle of the original swap partition.
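
    For anyone curious, that test program boils down to something like this modern shell sketch (the device and paths are assumptions, and it destroys whatever is stored in that region):

    DEV=/dev/hda3                        # the old swap partition
    BLOCKS=$(blockdev --getsz "$DEV")    # device size in 512-byte sectors
    for ((blk = 0; blk < BLOCKS; blk++)); do
        # build a unique 512-byte pattern for this sector (conv=sync pads to 512)
        printf 'sector %010d' "$blk" | dd of=/tmp/pat bs=512 count=1 conv=sync status=none
        # write it, then read it back, bypassing the page cache
        dd if=/tmp/pat of="$DEV" bs=512 seek="$blk" oflag=direct conv=notrunc status=none
        dd if="$DEV" of=/tmp/chk bs=512 skip="$blk" count=1 iflag=direct status=none
        cmp -s /tmp/pat /tmp/chk || echo "$(date '+%T') silent corruption at sector $blk"
    done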

  • ChojinDSL@discuss.tchncs.de

    Around 2003-2004. I was still a bit of a Linux noob, just getting to grips with Gentoo.

    Had two no-name WiFi adapters that weren’t directly supported under Linux. Found some obscure forum thread that mentioned them, along with which lines in which driver’s source code to change to make the adapters work.

  • rowinxavier@lemmy.world

    Working for a VoIP company in the early 2010s I rm -rf’d the /bin/ directory. As root. On a production server. On site.

    I ended up booting from my phone (an Android app for ISO booting), then manually copied over the files from another machine. After chrooting, some stuff was still broken, but rebuilding with the package manager reinstalled everything that was missing. Got the system back up around 40 minutes after that colossal screw-up. Good fun and a great learning experience. Honestly, my manager should not have had me doing anything in a root shell with no training.

  • Maxxus@sh.itjust.works

    Maybe this goes a bit deeper than the question intended, but I’ve made and shared two patches that I had to apply locally for years before they were merged into the base packages.

    The first was a patch in 2015 for SDL2 to stop the Sixaxis and other misbehaving controllers from using uninitialized axes and overwriting initialized ones. Merged in 2018.

    The second was a patch in the spring of 2021 for Xft, so it wouldn’t assume all the glyphs in a monospaced font are the same size. Some fonts have ligatures, which are glyphs that represent multiple characters together, so they’re actually some multiple of the base glyph size. Merged in the fall of 2022.

  • johannesvanderwhales@lemmy.world

    Back in the day, I upgraded a Slackware install from kernel 1.3 to 2.0. That was a fucking adventure.

    The fun part about back then was that if your machine wouldn’t boot, or you couldn’t get your modem or pppd working, you probably didn’t have another internet-connected device, so you might have to drive somewhere with a computer… or try to figure it out through books.

    • megabat@lemm.ee

      You probably remember the libc5 to glibc swap. Bad times for DIY distros.

      • johannesvanderwhales@lemmy.world

        Yep. I remember at the time I saw a lot of advice saying “you know you might want to seriously consider just installing your distro from scratch with a newer version.” Tracking down all of the dependencies (some of which had to be installed as binaries) was a very manual process.

        Edit: Oh and another fun aspect of that time period was that since downloads were so slow on a modem, if you wanted a newer version or to try out another distro, you would go and order a cdrom from a place like Walnut Creek.

  • T4V0@lemmy.world

    Not a Linux problem per se, but I had a 128GB disk image in an unknown .bin format belonging to a proprietary application. The application only ran on Windows.

    I tried a few things, but nothing except Windows-based programs seemed able to identify the partitions, and while I could run the application in Wine, it kept hitting unimplemented functions. After a bit of googling and probing the file, it turned out the format just had a 512-byte header, which some Windows-based software ignored. After accounting for that single-block offset, all the usual Linux tools started working flawlessly.

    • Hadriscus@lemm.ee

      This is so arcane to me. Like, I more or less understand your high-level explanation, but then you gloss over “including the block offset”. How would one even do that??

      • DickFiasco@lemm.ee

        Inspecting the file with a hex editor would give you lots of useful info in this case. If you know approximately what the data should look like, you can just see where the garbage (header) ends and the data starts. I’ve reverse engineered data files from an oscilloscope like this.
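
        Even without a dedicated hex editor, the first look can be as simple as this (the file name is a placeholder):

        # Dump the first kilobyte; filesystem magic numbers, a partition
        # table, or readable strings often mark where the real data begins
        xxd -l 1024 disk_image.bin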

      • T4V0@lemmy.world

        Well, in this scenario the image file is divided into 512-byte sections, each called a block. If you have a KiB (a kibibyte = 1024 bytes), it occupies 2 blocks, and so on…

        Since this image file had a 512-byte header (i.e. one block), I could tell any of the relevant Linux mounting tools (e.g. mount, losetup) to use an offset of the header size plus the byte position of the partition’s starting block. The command would look like this:

        sudo mount -o loop,offset=$((header+partition)) img_file /mnt
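
        Note that the offset is given in bytes: the 512-byte header plus where the partition starts inside the image. With made-up numbers for illustration, say a partition beginning at sector 2048, losetup works the same way:

        # 512-byte header + partition starting at sector 2048 (512 bytes per sector)
        sudo losetup --find --show --offset $((512 + 2048*512)) img_file
        # losetup prints the loop device it picked, e.g. /dev/loop0
        sudo mount /dev/loop0 /mnt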
        
  • Swordgeek@lemmy.ca

    Not Linux, but Solaris, back in the day.

    We had a system with a mirrored boot disk. One of the disks failed, and we were unable to boot from the other, because the boot device in OBP (~BIOS) pointed to a device-specific partition. When we manually booted from the live device, it lacked the boot sector code and wouldn’t boot. When we booted from CDROM, the partitions wouldn’t mount, because the virtual device mapping pointed to the dead drive.

    This was a gas futures trading system, and a rebuild wasn’t an option. Restoring from backup would have lost four hours of trades, so that was an extreme last resort.

    A coworker and I spent all night on the box. We had a whiteboard covered with every stage of the boot sequence broken down, and every redirection we needed to (a) boot and (b) repair the system. The issue started mid-afternoon, and we finally got it back up by around 6:30 am.

  • Diplomjodler@feddit.de

    Fixed a typo in my /etc/fstab that prevented the NAS from mounting. I am a bear of little brain. But I’m also proof that you don’t have to be some master hacker to successfully run Linux.
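
    For reference, this is the kind of line that has to be exactly right. An example NFS entry (the host, export, and mountpoint are made up):

    # source                   mountpoint   type  options           dump  pass
    nas.local:/export/media    /mnt/media   nfs   defaults,_netdev  0     0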

    • IninewCrow@lemmy.ca

      Sometimes… usually I just hit a wall because I don’t know enough, but I know enough to get myself into trouble… so I just stop, reformat, reinstall, and start all over.

      About the biggest lesson I’ve learned from Linux is not to mess with too many things, unless you want to learn about them and have lots of time on your hands.

      Otherwise, if you find a good distro for your needs, stick with it, don’t change it, and update and back up regularly.