• ameancow@lemmy.world

      We should leave AI to the realm of producing fringe/impossible porn, which is what it was meant for and what everyone actually wants from it. All this “search engine” stuff is just cover, like when you buy some non-lube products like groceries along with the tube of Astroglide at 1:00 AM.

    • miridius@lemmy.world

      If you read the whole thing, it’s not wrong. It just highlighted a part that is wrong when taken out of context.

      • intensely_human@lemm.ee

        What you’re referring to as “highlighting” here is what most of us consider the thing “answering the question”.

        “Where are you from?”

        “Connecticut. I was born and raised in Utah …”

        That first sentence is the answer to the question.

  • thejml@lemm.ee

    I thought this was fake or a one-off bad result or something, but I totally just reproduced it. Wow.

    If you read the block of text… it doesn’t make sense either.

    • FaceDeer@fedia.io

      I expect that if you followed the references, you’d find one of them to be one of those “if Earth were a grain of sand” analogies.

      People like laughing at AI but usually these silly-sounding answers accurately reflect the information the search returned.

      • conciselyverbose@sh.itjust.works

        It’s in the quote that they scaled it.

        The point is that the entire alleged value is the ability to parse the reading material and extract the key points, but because it doesn’t resemble intelligence in any way, it isn’t actually capable of meaningfully doing so.

        Yes, not being able to distinguish between the real answer and a “banana for scale” analogy is a big problem that shows how fucking useless the technology is.

        • btaf45@lemmy.world (OP)

          It’s in the quote that they scaled it.

          Yes, but they supposedly scaled it to “one meter per meter”. A “scale where the distance from the Sun to Earth is 150 million km” is just the actual, unscaled distance.
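
          To put rough numbers on it (a quick sketch using commonly cited figures, not anything from the screenshot): Alpha Centauri is about 4.37 light-years away, and a “scale” where the Sun-Earth distance is 150 million km is just 1:1, so the “scaled” distance is the real one.

          ```python
          # Rough check of the distances being discussed (commonly cited values, not from the post).
          LIGHT_YEAR_KM = 9.461e12      # kilometres in one light-year
          EARTH_SUN_KM = 150e6          # ~1 astronomical unit, the "scale" the answer quotes

          alpha_centauri_ly = 4.37      # distance to Alpha Centauri in light-years
          alpha_centauri_km = alpha_centauri_ly * LIGHT_YEAR_KM
          print(f"Actual distance: {alpha_centauri_km:.2e} km")  # ~4.1e13 km, i.e. ~41 trillion km

          # A "scale where the Sun-Earth distance is 150 million km" is a 1:1 scale,
          # so nothing is actually shrunk; the answer's 13.6 km is off by ~12 orders of magnitude.
          print(f"Scale factor: {EARTH_SUN_KM / 150e6}")         # 1.0
          ```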

          • conciselyverbose@sh.itjust.works

            lol, I did miss that, but it was still enough to tell that its source was scaling things for comparison rather than guessing.

            My whole point was the same as your OP, though. A condom that’s 95% effective isn’t worth shit. You can’t let a toy without reading comprehension do your reading for you.

        • WhatAmLemmy@lemmy.world

          *Dangerous! Don’t forget how dangerous it is — considering all the tech bros and corps are acting as though LLMs are on the verge of real intelligence, instead of being stochastic parrots that are essentially a mathematical magic trick.

          “Now watch as I, the great mathemagician, make a statistical algorithm appear to hold general intelligence!”

          Our “intelligence” agencies already kill innocent people based entirely on metadata — because they simply live or work around areas that known terrorists occupy — now imagine if an AI was calling the shots. The more LLMs are integrated into our day-to-day lives, the more people will trust them and disregard their own logic, and the more dangerous they become.

          • FaceDeer@fedia.io

            Our “intelligence” agencies already kill innocent people based entirely on metadata — because they simply live or work around areas that known terrorists occupy — now imagine if an AI was calling the shots.

            So by your own scenario, intelligence agencies are already getting stuff wrong and making bad decisions using existing methodologies.

            Why do you assume that new methodologies that involve LLMs will be worse at that? Why could they not be better? Presumably they’re going to be evaluating their results when deciding whether to make extensive use of them.

            “Mathematical magic tricks” can turn out to be extremely useful. That phrase can be used to describe all manner of existing techniques that are undeniably foundational to civilization.

        • FaceDeer@fedia.io

          Except it is capable of meaningfully doing so, just not in 100% of every conceivable situation. And those rare flubs are the ones that get spread around and laughed at, such as this example.

          There’s a nice phrase I commonly use, “don’t let the perfect be the enemy of the good.” These AIs are good enough at this point that I find them to be very useful. Not perfect, of course, but they don’t have to be as long as you’re prepared for those occasions, like this one, where they give a wrong result. Like any tool you have some responsibility to know how to use it and what its capabilities are.

          • btaf45@lemmy.world (OP)

            AIs are definitely not “good enough” to give correct answers to science questions. I’ve seen lots of other incorrect answers before seeing this one. While it was easy to spot that this answer is incorrect, how many incorrect answers are not obvious?

            • FaceDeer@fedia.io

              Then go ahead and put “science questions” into one of the areas that you don’t use LLMs for. That doesn’t make them useless in general.

              I would say that a more precise and specific restriction would be “they’re not good at questions involving numbers.” That’s narrower than “science questions” in general; they’re still pretty good at dealing with the concepts involved. LLMs aren’t good at math, so don’t use them for math.

                • btaf45@lemmy.world (OP)

                AI doesn’t seem to be good at anything in which there is a right answer and a wrong answer. It works best for things where there are no right/wrong answers.

          • conciselyverbose@sh.itjust.works

            No, it isn’t.

            You’re allowing a simple tool with literally zero reading comprehension to do your reading for you. It’s not surprising your understanding of what the tech is is lacking.

            • FaceDeer@fedia.io

              Your comment is simply counterfactual. I do indeed find LLMs to be useful. Saying “no you don’t!” is frankly ridiculous.

              I’m a computer programmer. Not directly experienced with LLMs themselves, but I understand the technology around them and have written programs that make use of them. I know what their capabilities and limitations are.

              • conciselyverbose@sh.itjust.works

                Your claim that it’s capable of doing what it claims isn’t just false.

                It’s an egregious, massively harmful lie, and repeating it is always extremely malicious and inexcusable behavior.

                • FaceDeer@fedia.io

                  I have genuinely found LLMs to be useful in many contexts. I use them to brainstorm and flesh out ideas for tabletop roleplaying adventures, to write song lyrics, to write Python scripts to do various random tasks. I’ve talked with them to learn about stuff, and verified that they were correct by checking their references. LLMs are demonstrably capable of these things. I demonstrated it.

                  Go ahead and refrain from using them yourself if you really don’t want to, for whatever reason. But exclaiming “no it doesn’t!” in the face of them actually doing the things you say they don’t is just silly.

    • gaterush@lemmy.world

      I just tried it and got “about 40,000 billion kilometers”. Also, the references are completely different from the ones in the post, so I guess it was a ranking issue.

      AI is just too unpredictable; it’s hard to know what’s accurate, and you end up doing the work yourself anyway.
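
      For what it’s worth, “about 40,000 billion kilometers” is at least the right ballpark (a minimal sanity check, assuming the usual ~4.37 light-year figure):

      ```python
      # Quick check that ~40,000 billion km is roughly the accepted distance to Alpha Centauri.
      LIGHT_YEAR_KM = 9.461e12
      answer_km = 40_000e9                  # "about 40,000 billion kilometers"
      print(answer_km / LIGHT_YEAR_KM)      # ~4.23 light-years; the accepted value is ~4.37
      ```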

  • Gsus4@mander.xyz

    Like every tool, it has its uses…but they are not those being advertised. LLMs are great for things where mistakes don’t detract from the result (or even add to it) like brainstorming, art, music, disinformation…all that good stuff.

    • buddascrayon@lemmy.world

      Yeah that’s why it would be very nice if they would stop integrating it into fucking search engines.

      • Gsus4@mander.xyz

        They wanna fucking integrate it in everything, dumbfucks. This is why meritocracy is dead: the people with the means to determine where we go as a society are “number go up” people.

    • btaf45@lemmy.world (OP)

      That’s what I think too. AI is mainly useful for things that don’t have right or wrong answers.

      Although this incorrect answer is obvious, what about all the times when an incorrect answer from AI is not obvious?

      • contrefeu@akko.contref.eu

        @Gsus4 @btaf45 That’s true for AI that has been trained for the general public to provide an answer to any question it’s given, meaning it is forced to respond to a prompt even when it is wrong, and maybe even “knows” it is wrong. It just doesn’t know the answer and can’t say so, because that would be commercially bad.

        I do believe that for scientific research, AI models are much more precise, because they have been trained on the right datasets and are tasked with answering specific questions.

    • jj4211@lemmy.world

      brainstorming

      Sure thing, but you have to remember to include “no bad ideas” in the prompt for best results.

      • Gsus4@mander.xyz

        That’s the point of brainstorming: all ideas are allowed; filter later.

    • ulkesh@lemmy.world

      I suspect there’s a quite-overlapping Venn diagram of people who rely on LLMs for their “facts” with people who believe the earth is flat and people who believe ancient aliens are real.

    • IphtashuFitz@lemmy.world

      It’s ~~126 miles to Chicago~~ 13.6 kilometers to Alpha Centauri, we’ve got a full tank of gas, half a pack of cigarettes, it’s dark, and we’re wearing sunglasses.

    • rhsJack@lemmy.world

      I’ll be the non-jokey one here and bring us all down with the hard math. 13.6 kilometers converted into American is pretty much, like, way more than a half tank of gas unless you have a Prius. But you do you. Can you get me a slushie on the way back? You know I’m good for it.

    • ulkesh@lemmy.world

      13.6 km is 44,619 ft.

      So nearly every time one flies commercial, yes, since cruising altitude is between 30,000 and 40,000 feet. I think a large triple-star system would be quite visible at that point.
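
      The conversion itself checks out (a quick sketch; 1 ft = 0.3048 m):

      ```python
      # Convert 13.6 km to feet and compare with typical cruise altitude.
      feet = 13.6 * 1000 / 0.3048
      print(round(feet))        # 44619 ft
      print(feet > 40_000)      # True: just above the usual 30,000-40,000 ft cruise band
      ```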

  • sinkingship@mander.xyz

    It probably grabbed the info off some random number-confusing dude like me, who recently posted that the Earth’s diameter would be about 6 km instead of 6,000.

    Edit: oops, did it again. Meant radius, not diameter…

  • Cyber Yuki@lemmy.world

    When techbros said “you can type a question and the AI will answer”, they seem to have forgotten that we expect the answers to be true and accurate.

    And they seem to have forgotten that to do that, they actually need a database of facts.