How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t manage this consistently with CRUD apps and people think that this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

  • IHeartBadCode@kbin.run · 13 days ago

    I had my fun with Copilot before I decided that it was making me stupider - it’s impressive, but not actually suitable for anything more than churning out boilerplate.

    This. Many of these tools are good at incredibly basic boilerplate, the kind that’s just a step beyond what, say, a project wizard would generate. But to hear some of these AI grifters talk, this stuff is going to render programmers obsolete.

    There’s a reality to these tools, and that reality is that they’re helpful at times but hardly transformative at the level the grifters go on about.

    • 0x0@programming.dev · 13 days ago

      I use them like Wikipedia: it’s a good starting point and that’s it (and this comparison is a disservice to Wikipedia).

    • Zikeji@programming.dev · 13 days ago

      Copilot / LLM code completion feels like having a somewhat intelligent helper who can think faster than I can. They have no real understanding of how to actually code, but they’re good at mimicry.

      So it’s helpful for saving time typing some stuff, and sometimes the absolutely weird suggestions make me think of other scenarios I should consider, but it’s not going to do the job itself.

      • deweydecibel@lemmy.world · 13 days ago

        So it’s helpful for saving time typing some stuff

        Legitimately, this is the only use I found for it. If I need something extremely simple and I’m feeling too lazy to type it all out, it’ll do the bulk of it, and then I just go through and edit out all the little mistakes.

        And what gets me is that anytime I read all of the AI wank about how people are using these things, it kind of just feels like they’re leaving out the part where they have to edit the output too.

        At the end of the day, we’ve had this technology for a while; it’s just been in the form of predictive suggestions on a keyboard app or code editor. You still had to steer it in the right direction. Now it’s just smart enough to make it from start to finish without going off a cliff, but you still have to go back and fix it, the same way you had to steer it before.

    • AIhasUse@lemmy.world · 13 days ago

      Yes, and then you take the time to dig a little deeper and use something agent-based like aider or crewai or autogen. It is amazing how many people are stuck in the mindset of “if the simplest tools from over a year ago aren’t very good, then there’s no way there are any good tools now.”

      It’s like seeing the original Planet of the Apes and then arguing about how realistic the apes are in the new movies without ever having seen them. Sure, you can convince people who really want the apes to be unrealistic, and people who only saw the original, but you’ll do nothing for anyone who actually saw the new movies.

      • foenix@lemm.ee · 13 days ago

        I’ve used crewai and autogen in production… And I still agree with the person you’re replying to.

        The two main problems with agentic approaches I’ve discovered thus far:

        • One mistake or hallucination will propagate to the rest of the agentic task. I’ve even tried adding a QA agent to catch this, but those agents aren’t reliable either, which also leads to the main issue:

        • It’s very expensive to run and rerun agents at scale. Because each agent is able to call another agent, you can end up with an exponentially growing number of calls. My colleague at one point ran a job that cost $15 for what could have been a simple task (see the sketch below for the kind of hard call cap you end up wanting).
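
        For illustration only, here’s a minimal, self-contained sketch of that second problem and the blunt fix for it. None of these names come from crewai or autogen (they’re hypothetical); the point is just that when every agent can fan out into more agents, you want a hard budget on the total number of calls.

        class CallBudget:
            """Hard cap on the total number of agent/LLM calls for one task."""

            def __init__(self, max_calls: int = 20):
                self.remaining = max_calls

            def spend(self) -> None:
                if self.remaining <= 0:
                    raise RuntimeError("agent call budget exhausted")
                self.remaining -= 1

        def run_agent(task: str, budget: CallBudget, depth: int = 0, max_depth: int = 2) -> str:
            budget.spend()  # stand-in for a real LLM call; every invocation spends budget
            if depth >= max_depth:
                return f"leaf({task})"
            # A fan-out of two delegated sub-tasks per level is exactly where the
            # exponential call growth (and the surprise bill) comes from.
            left = run_agent(task + "/a", budget, depth + 1, max_depth)
            right = run_agent(task + "/b", budget, depth + 1, max_depth)
            return f"combine({left}, {right})"

        # max_depth=2 makes 7 calls; add a couple more levels of delegation and
        # even a budget of 20 calls is blown immediately.
        print(run_agent("summarise report", CallBudget(max_calls=10)))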

        One last consideration: the current LLM providers are very aware of these issues, or they wouldn’t be as concerned with finding “clean” data to scrape from the web rather than using agents to train agents.

        If you’re using crewai, by the way, be aware that there is some built-in telemetry in the library. I have a wrapper to remove that telemetry if you’re interested in the code; the gist of it is below.
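
        The short version, assuming crewai’s telemetry still rides on OpenTelemetry (that assumption, and the exact switch, may not hold for every version, so verify against your installed release):

        import os

        # Assumed mechanism: crewai reports through OpenTelemetry, so the
        # standard OTEL kill switch should silence it. It has to be set before
        # crewai is imported for the first time.
        os.environ["OTEL_SDK_DISABLED"] = "true"

        from crewai import Agent, Crew, Task  # imported only after opting out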

        Personally, I’m kinda done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.

  • deweydecibel@lemmy.world · 13 days ago

    Another friend of mine was reviewing software intended for emergency services, and the salespeople were not expecting someone handling purchasing in emergency services to be a hardcore programmer. It was this false sense of security that led them to accidentally reveal that the service was ultimately just some dude in India. Listen, I would just be some random dude in India if I swapped places with some of my cousins, so I’m going to choose to take that personally and point out that using the word AI as some roundabout way to sell the labor of people that look like me to foreign governments is fucked up, you’re an unethical monster, and that if you continue to try { thisBullshit(); } you are going to catch (theseHands)

    This aspect of it isn’t getting talked about enough. These companies are presenting these things as fully-formed AI, while completely neglecting the people behind the scenes constantly cleaning it up so it doesn’t devolve into chaos. All of the shortcomings and failures of this technology are being masked by the fact that there are actual people working round the clock pruning and curating it.

    You know, humans, with actual human intelligence, without which these miraculous “artificial intelligence” tools would not work as they seem to.

  • Spesknight@lemmy.world · 13 days ago

    I don’t fear Artificial Intelligence, I fear Administrative Idiocy. The managers are the problem.

  • madsen@lemmy.world · 13 days ago

    This is such a fun and insightful piece. Unfortunately, the people who really need to read it never will.

    • AIhasUse@lemmy.world · 13 days ago

      It blatantly contradicts itself. I would wager good money that you read the headline and didn’t go much further because you assumed it was agreeing with you. Despite the subject matter, this is objectively horribly written. It lacks a cohesive narrative.

      • Alphane Moon@lemmy.world (OP) · 13 days ago

        I don’t think it’s supposed to have a cohesive narrative structure (at least not in the sense of a structured, more formal critique). I read the whole thing, and it’s more like a long shitpost with a lot of snark.

      • AIhasUse@lemmy.world · 13 days ago

        There is literally not a chance that anyone downvoting this actually read it. It’s just a bunch of idiots that read the title, like the idea that LLMs suck, and so they downvoted. This paper is absolute nonsense that doesn’t even attempt to make a point. I seriously think it is probably AI-generated and just taking the piss out of idiots that love anything they think is anti-AI, whatever that means.

        • decivex@yiffit.net · 13 days ago

          It’s not a paper, it’s a stream-of-consciousness style blog post.

  • Rumbelows@lemmy.world · 13 days ago

    I feel like some people in this thread are overlooking the tongue-in-cheek nature of this humour post and taking it weirdly personally.

    • Eccitaze@yiffit.net · 13 days ago

      Yeah, that’s what happens when the LLM they use to summarize these articles strips all nuance and comedy.

    • amio@kbin.run · 13 days ago

      Even for the internet, this place is extremely fond of doing that.

  • jaaake@lemmy.world · 13 days ago

    After reading that entire post, I wish I had used AI to summarize it.

    I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I’m no longer as confident that I know what’s going on.

    This pull quote feels like it’s antithetical to their entire argument and makes me feel like all they’re doing is whinging about the fact that people who don’t know what they’re talking about have loud voices. Which has always been true and has little to do with AI.

    • AIhasUse@lemmy.world · 13 days ago

      Yeah, this paper is wasted time. It is hilarious that they think three years is a long time as a data scientist and that this somehow gives them such wisdom. Then they can’t even accurately extract the data from the chart that they posted in the article. On top of all this, like you pointed out, they can’t even keep a clear narrative, and they blatantly contradict themselves on their main point. They want to piledrive people who come to the same conclusion as they do. What a strange take.

  • AIhasUse@lemmy.world · 13 days ago

    I don’t know how much stock to put in this author. They can’t even read the chart that they shared. They saw that 8% didn’t get use from gen AI and so assumed that 92% did, but there are also 7% that haven’t tried using it yet, which puts the real figure closer to 85%. Ironically, pretty much any LLM with vision would have done a better job of comprehending the chart than this author did.

  • BarbecueCowboy@lemmy.world · 12 days ago

    It’s consistently pretty good for writing items with low technical importance and minimal need for accuracy.

    I’ll never write a job description myself again, and my need to loop in communications for mass correspondence is almost gone.

  • tron@midwest.social · 12 days ago

    Oh my god this whole post is amazing, thought I’d share my favorite excerpt:

    This entire class of person is, to put it simply, abhorrent to right-thinking people. They’re an embarrassment to people that are actually making advances in the field, a disgrace to people that know how to sensibly use technology to improve the world, and are also a bunch of tedious know-nothing bastards that should be thrown into Thought Leader Jail until they’ve learned their lesson, a prison I’m fundraising for. Every morning, a figure in a dark hood, whose voice rasps like the etching of a tombstone, spends sixty minutes giving a TedX talk to the jailed managers about how the institution is revolutionizing corporal punishment, and then reveals that the innovation is, as it has been every day, kicking you in the stomach very hard.

    Where the fuck do I donate???

      • pyldriver@lemmy.world · 12 days ago

        “Right” as in the actual definition of the word, not political:

        Conforming with or conformable to justice, law, or morality.

        In accordance with fact, reason, or truth; correct.

        Fitting, proper, or appropriate.

        • WldFyre@lemm.ee · 12 days ago

          I get that, didn’t think it was a political meaning. Just seems like an iffy word to me personally, hard to put my finger on it.

          Maybe since the inverse would be “wrong-think”?

          • Cryophilia@lemmy.world · 12 days ago

            English your second language? Phrases that seem common to natives may seem off to those who learned English later in life. 'Tis a silly language.

  • Shadowcrawler@discuss.tchncs.de · 13 days ago

    The Author’s Frustration with the Overhyped Use of AI in Businesses

    • The author, a former data scientist, expresses frustration with the excessive hype surrounding AI and its implementation in businesses.

    • They argue that most companies lack the expertise and infrastructure to effectively utilize AI and should focus on addressing fundamental issues like testing database backups and developing basic applications.

    • The author criticizes the lack of genuine understanding and competence among many individuals promoting AI initiatives, leading to a culture of grifters and incompetents.

    • They emphasize the importance of solving basic operational and cultural problems before attempting to implement complex technologies like AI.

    • The author warns against the blind adoption of AI without a clear understanding of its benefits and feasibility, likening it to a recipe for disaster.

    https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/

    Yes, I’m fully aware of the irony that I used AI for this summary.