• Kbin_space_program@kbin.social · 6 months ago

    It’s an exaggeration, but it’s not far off, given that Google literally has the entire web crawled and parsed at least once a day.

    Reddit just sold Google the rights to harvest all of its content for AI training.

    The problem is no longer model size. The problem is interpretation.

    You can ask almost anyone on earth a simple deterministic math problem and you’ll get the right answer almost every time, because they understand the principles behind it.

    Until you can show deterministic understanding in AI, you have a glorified chatbot.

  • AggressivelyPassive@feddit.de · 6 months ago

      It is far off. It’s like saying you have the entire knowledge of all physics because you skimmed a textbook once.

      Interpretation is also a problem that can be solved; current models already understand quite a lot of nuance, subtext and implicit context.

      But you’re moving the goalposts here. We started at “models aren’t getting better, they’ve plateaued,” and now you’re demanding perfection.

    • Kbin_space_program@kbin.social · 6 months ago

        You’re building beautiful straw men. They’re lies, but great job.

        I said originally that we need to improve how AI interprets its models, not just build ever-bigger models that will invariably have the same flaws they do now.

        Deterministic reliability is the end goal of that.