• The Rabbit R1 AI box is essentially an Android app on limited $200 hardware, running AOSP without Google Play.
  • Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware as it provides essential functionalities without the need for Google Play.
  • De_Narm@lemmy.world · 2 months ago

    Why are there AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I’ve gained nothing from asking this glorified e-waste something and then pulling out my phone to verify it.

    • cron@feddit.de · 2 months ago

      What I don’t get is why anyone would want to buy a new gadget for some AI features. Just develop a nice app and let people run it on their phones.

      • no banana@lemmy.world · 2 months ago (edited)

        That’s why, though: they can monetize hardware. They can’t monetize something a free app does.

    • MxM111@kbin.social · 2 months ago

      The best convincing answer is the correct one. The correlation of AI answers with correct answers is fairly high; numerous tests show that. The models have also improved significantly (especially the paid versions) since their introduction just two years ago.
      Of course that doesn’t mean they can be trusted as much as Wikipedia, but they are probably a better source than Facebook.

      • De_Narm@lemmy.world · 2 months ago

        “Fairly high” is still useless (and doesn’t actually quantify anything, depending on context both 1% and 99% could be ‘fairly high’). As long as these models just hallucinate things, I need to double-check. Which is what I would have done without one of these things anyway.

        • TrickDacy@lemmy.world · 2 months ago

          1% correct is never “fairly high” wtf

          Also if you want a computer that you don’t have to double check, you literally are expecting software to embody the concept of God. This is fucking stupid.

          • De_Narm@lemmy.world · 2 months ago (edited)

            > 1% correct is never “fairly high” wtf

            It’s all about context. If you asked a bunch of 4-year-olds questions about trigonometry, 1% of answers being correct would be fairly high. ‘Fairly high’ basically only means ‘as high as expected’ or ‘higher than expected’.

            > Also if you want a computer that you don’t have to double check, you literally are expecting software to embody the concept of God. This is fucking stupid.

            Hence, it is useless. If I cannot expect it to be more or less always correct, I can skip using it and just look stuff up myself.

            • TrickDacy@lemmy.world · 2 months ago

              Obviously the only contexts that would apply here are ones where you expect a correct answer. Why would we be evaluating software that claims to be helpful against 4-year-olds asked to do calculus? I have to question your ability to reason for insinuating this.

              So confirmed. God or nothing. Why don’t you go back to quills? Computers cannot read your mind and write this message automatically, hence they are useless

              • De_Narm@lemmy.world · 2 months ago

                > Obviously the only contexts that would apply here are ones where you expect a correct answer.

                That’s the whole point, I don’t expect correct answers. Neither from a 4 year old nor from a probabilistic language model.

                • TrickDacy@lemmy.world · 2 months ago

                  And you don’t expect a correct answer because it isn’t 100% of the time. Some lemmings are basically just clones of Sheldon Cooper

                  • FlorianSimon@sh.itjust.works · 2 months ago

                    Something seems to fly above your head: quality is not optional and it’s good engineering practice to seek reliable methods of doing our work. As a mature software person, you look for tools that give less room for failure and want to leave as little as possible for humans to fuck up, because you know they’re not reliable, despite being unavoidable. That’s the logic behind automated testing, Rust’s borrow checker, static typing…
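
                    The borrow-checker point can be sketched in a few lines of Rust (an illustrative example, not anything from the thread; names are made up): a whole class of human error, use-after-move, is rejected at compile time instead of being left for a reviewer to catch.

```rust
// Takes ownership of the String; the caller can no longer use it afterwards.
fn consume(s: String) -> usize {
    s.len() // `s` is dropped when this function returns
}

fn main() {
    let greeting = String::from("hello");
    let n = consume(greeting); // ownership moves into `consume`

    // Uncommenting the next line is a use-after-move, and the compiler
    // rejects it outright (error[E0382]: borrow of moved value: `greeting`),
    // so this bug can never reach code review:
    // println!("{greeting}");

    assert_eq!(n, 5);
}
```

                    The point is that the unreliable step (a human remembering not to touch `greeting` again) is replaced by a mechanical check, which is the same logic as automated testing and static typing.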

                    If you’ve done code review, you know it’s not very efficient at catching bugs. It’s not efficient because you don’t pay as much attention to details when you’re not actually writing the code. With LLMs, you have to do code review to ensure you meet quality standards, because of the hallucinations, just like you’ve got to test your work before committing it.

                    I understand the actual software engineers who care about delivering working code and would rather write it themselves in order to be more confident in the quality of the output.