• bobburger@fedia.io · 3 months ago

    To be fair, it’s a pretty terrible dataset. The AI is just going to say “this” to every question you ask.

    • lol@discuss.tchncs.de · 3 months ago

      You’re exaggerating, of course, but I don’t think it’s terrible at all; quite the opposite, really. It’s likely incredibly useful for creating LLMs with specific knowledge or behavior.

      The categorization into subreddits alone opens up so many possible applications. Imagine, for example, training a conversational AI on data from specific subreddits like science, askscience, biology, physics, astronomy, … or on posts by users who frequent such subreddits, in order to create a sort of academic AI.

      You could do the same for all sorts of topics: want a sports commentator AI, use sports-related subreddits; want an AI that helps you write a novel, use creative-writing subreddits; and so on. Don’t want your AI to spew political opinions? Exclude political subreddits from your data. Don’t want it to use offensive language? Only use well-moderated subreddits.
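
      As a rough sketch of what that curation step could look like (assuming the dump is a JSONL file with “subreddit” and “body” fields; those field names are a guess, not the actual schema):

      ```python
      import json

      # Hypothetical whitelist for an "academic" corpus; pick whatever subs you want.
      ACADEMIC_SUBS = {"science", "askscience", "biology", "physics", "astronomy"}

      def filter_posts(path, keep_subs):
          """Yield post bodies from whitelisted subreddits only."""
          with open(path, encoding="utf-8") as f:
              for line in f:
                  post = json.loads(line)
                  # "subreddit" and "body" are assumed field names, not the real schema.
                  if post.get("subreddit", "").lower() in keep_subs:
                      yield post.get("body", "")

      corpus = list(filter_posts("reddit_dump.jsonl", ACADEMIC_SUBS))
      print(f"kept {len(corpus)} posts for fine-tuning")
      ```

      The same pattern works in reverse as a blacklist: flip the membership test to drop political or poorly moderated subs instead.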

      • Adderbox76@lemmy.ca · 3 months ago

        This presumes that Reddit is populated by actual experts answering questions and posting in those subs.

        But the overwhelming truth is that most people posing as experts are just regurgitating answers they read in another Reddit post, and so on, and so on.

        You might as well just train your AI on the “confidently incorrect” sub and call it a day.