• pdxfed@lemmy.world · 5 months ago

    And by super intelligence we mean connected to a lot of things and able to wreak significant havoc, but absolutely fucking worthless for complex thought.

    #Skynub

  • Sanctus@lemmy.world · 5 months ago

    And what are the power consumption rates on these super AIs? Can we afford that with our power grids? Our current situation with existing AI is already getting murky.

  • tal@lemmy.today · edited · 5 months ago

    I mean, theoretically, yes.

    Sure, it could be that some guy in his basement has been working on it in total secrecy and it shows up tomorrow.

    But my guess is that the likely timeline is further out than either.

    I seriously doubt that what we’re going to see is a single “Eureka” moment that gives us both AGI and manages to greatly surpass humans.

    I would expect to see a more incremental process, where publicly visible systems get closer and closer to that. And what OpenAI and friends are doing isn’t close. It’s cool and useful for a lot of things, but it isn’t a generalized system for solving problems.

    • sugar_in_your_tea@sh.itjust.works · 5 months ago

      Exactly. If you look at timelines of significant human achievement, innovation seems to come in waves. But if you zoom in a bit, it’s really a bunch of ripples adding up to pretty steady innovation.

      For example, EVs exploded with Tesla, but they had been around for decades; they just didn’t catch on. The innovation to get there was steady, but adoption was quick once a viable product was available and marketed well.

      The same is true for AI. I learned about generative AI in college over a decade ago, and the source material was already old (IIRC the old Lisp machines were supposed to be used for AI). It exploded because it got just good enough to be viable, and it was marketed well. The actual innovation was quite gradual.

  • brsrklf@jlai.lu · edited · 5 months ago

    Beneficial AGI Summit

    Oh good, they’re the ones who want a nice AI overlord.

    • Uriel238 [all pronouns]@lemmy.blahaj.zone · 5 months ago

      To be fair, current human overlords are presenting a strong case that human beings cannot govern themselves at large scale (e.g. more than 500 people in a society), so a nice, public-serving AI overlord is a pretty good pipe dream.

      I don’t know if it’s feasible at all, but man, we’d be lucky if we made one.