• pycorax@lemmy.world · 1 month ago

    There’s nothing stopping x86-64 processors from being power efficient. This article is pretty technical but gives a really good explanation of why that’s the case: https://chipsandcheese.com/2024/03/27/why-x86-doesnt-need-to-die/

    It’s just that Intel and AMD have traditionally earned most of their money from the server and enterprise sectors, where high performance matters more than very low power usage. Even so, AMD’s Z1 Extreme gets within striking distance of the M3 at a similar power draw. It also helps that Apple is generally one process node ahead.

    • SquiffSquiff@lemmy.world · 1 month ago

      If there’s ‘nothing stopping’ it, then why has nobody done it? Apple moved from x86 to ARM. Mobile is all ARM. All the big cloud providers are doing their own ARM chips. Intel killed off much of the architectural competition with Itanic in the early 2000s. Why stop?

      • pycorax@lemmy.world · 1 month ago

        Their primary money makers are what’s stopping them, I reckon. Apple’s move to ARM worked because they already had a ton of experience building their own in-house processors for their mobile devices. ARM also licenses stock chip designs, which makes it easier for other companies to come up with their own custom chips, whereas there really isn’t any equivalent for x86-64. There were some disagreements between Intel and AMD over patents on the x86 instruction set too.

    • QuaternionsRock@lemmy.world · 1 month ago

      This article fails to mention the single biggest differentiator between x86 and ARM: their memory models. Considering how much everyday software is going multithreaded, this is a huge issue, and it’s the reason ARM can drastically outperform x86 when running software like modern web browsers.

      • pycorax@lemmy.world · 1 month ago

        Do you mind elaborating on what it is about their memory models that makes such a difference?

        • sunbeam60@lemmy.one · 1 month ago

          On the x86 architecture, main RAM is used by the CPU, and the GPU pays a huge penalty when accessing it. It therefore has its own onboard graphics memory.

          On ARM this is unified, so the GPU and CPU can both access the same memory at the same cost. This means a huge class of embarrassingly parallel problems can be solved more quickly on this architecture.

        • QuaternionsRock@lemmy.world · 1 month ago

          Here is a great article on the topic. Basically, x86 spends a comparatively enormous amount of energy ensuring that its strong memory guarantees are not violated, even in cases where such violations would not affect program behavior. As it turns out, the majority of modern multithreaded programs only occasionally rely on these guarantees, so providing them only when needed, via special (expensive) instructions, is still a net win for performance and efficiency in the long run.

          For additional context, the special sauce behind Apple’s Rosetta 2 is that the M family of SoCs actually implements an x86-style memory-ordering mode that is selectively enabled when executing dynamically translated multithreaded x86 programs.