A new global study, conducted in partnership with The Upwork Research Institute, surveyed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face: the study identifies a disconnect between managers’ high expectations and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it’s also hampering productivity and contributing to employee burnout.

  • Sk1ll_Issue@feddit.nl

    The study identifies a disconnect between the high expectations of managers and the actual experiences of employees

    Did we really need a study for that?

  • barsquid@lemmy.world

    Wow shockingly employing a virtual dumbass who is confidently wrong all the time doesn’t help people finish their tasks.

    • Etterra@lemmy.world

      It’s like employing a perpetually high idiot, but more productive while also being less useful. Instead of slow medicine you get fast garbage!

    • demizerone@lemmy.world

      My dumbass friend, who is overconfidently smart, is switching to Linux because of open-source AI. I can’t wait to see what he learns.

  • iAvicenna@lemmy.world

    Because on top of your duties, you now have to check whatever the AI is doing in place of the employee it replaced.

  • GreatAlbatross@feddit.uk

    The workload that’s starting now is spotting bad code written by colleagues using AI, and persuading them to rewrite it.

    “But it works!”

    ‘It pulls in 15 libraries, 2 of which you need to manually install beforehand, to achieve something you can do in 5 lines using this default library’
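
    A hypothetical instance of the pattern being described (the tools named here are illustrative, not from the original comment): pretty-printing JSON with a module that already ships with Python 3, instead of pulling in and installing anything extra.

```shell
# Illustrative only: json.tool is part of the Python 3 standard library,
# so no extra installs are needed to pretty-print JSON from a pipeline.
echo '{"widgets": 3}' | python3 -m json.tool
```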

    • JackbyDev@programming.dev

      I was trying to find out how to get human-readable timestamps from my shell history. The AI gave me this crazy script. It worked, but it was super slow. Later I learned you could just do history -i.
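
      For reference, the slow generated script was presumably doing by hand what a one-liner covers: converting the raw epoch timestamps stored in the history file. A minimal sketch (GNU date assumed; the epoch value is just an example):

```shell
# Convert a raw epoch timestamp, as stored in the shell history file,
# into a human-readable UTC date. GNU date's -d flag shown here.
epoch=1700000000
date -u -d "@$epoch" '+%Y-%m-%d %H:%M:%S'   # 2023-11-14 22:13:20
```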

      • GreatAlbatross@feddit.uk

        Turns out, a lot of the problems in *nix land were solved 3 decades ago with a single flag on a built-in utility.

    • ILikeBoobies@lemmy.ca

      I asked it to spot a typo in my code. It worked, but it rewrote my classes for each function that called them.

      • morbidcactus@lemmy.ca

        I gave it a fair shake after my team members were raving about it saving time last year. I tried an SFTP function and some Terraform modules, and man, both of them just didn’t work. It did, however, do a really solid job of explaining some data-operation functions I wrote, which I was really happy to see. I do try to add a detail block to my functions and be explicit with typing where appropriate, so that probably helped some, but yeah, I was actually impressed by that. For generation, though, maybe it’s better now, but I still prefer to pull up the documentation, as I spent more time debugging the crap it gave me than I would have spent piecing it together myself.

        I’d use an LLM tool for interactive documentation and as a reverse-engineering aid, though; I personally think that’s where it shines. Otherwise, I’m not sold on the “gen AI will somehow fix all your problems” hype train.

  • Lvxferre@mander.xyz

    Large “language” models decreased my workload for translation. There’s a catch though: I choose when to use it, instead of being required to use it even when it doesn’t make sense and/or where I know that the output will be shitty.

    And, if my guess is correct, those 77% are caused by overexcited decision-makers in corporations trying to shove AI into every single step of production.

  • superkret@feddit.org

    The other 23% were replaced by AI (actually, their workload was added to that of the 77%)

    • hswolf@lemmy.world

      It also helps you get a starting point when you don’t know how to ask a search engine the right question.

      But people misinterpret its usefulness and think it can handle complex, context-heavy problems, which most of the time will result in hallucinated crap.

  • Hackworth@lemmy.world

    I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.

        • Flying Squid@lemmy.world

          Cool, enjoy your entire industry going under thanks to cheap and free software and executives telling their middle managers to just shoot and cut it on their phone.

          Sincerely,

          A former video editor.

          • Hackworth@lemmy.world

            If something can be effectively automated, why would I want to continue to invest energy into doing it manually? That’s literal busy work.

                • Flying Squid@lemmy.world

                  Video editing is not busy work. You’re excusing executives telling middle managers to put out inferior videos to save money.

                  You seem to think what I used to do was just cutting and pasting and had nothing to do with things like understanding filmmaking techniques, the psychology of choosing and arranging certain shots, and making do with what you have when you don’t have enough to work with.

                  But they don’t care about that anymore, because it costs money. Good luck getting an AI to do it as well as a human any time soon; they don’t care, because they save money this way.

          • Hackworth@lemmy.world

            “Soup to nuts” just means I am responsible for the entirety of the process, from pre-production to post-production. Sometimes that’s like a dozen roles. Sometimes it’s me.

  • tvbusy@lemmy.dbzer0.com

    This study failed to take into consideration the need to feed information to AI. Companies now prioritize feeding information to AI over actually making it usable for humans. Who cares about analyzing the data? Just give it to AI to figure out. Now data cannot be analyzed by humans? Just ask AI. It can’t figure out? Give it more so it can figure it out. Rinse, repeat. This is a race to the bottom where information is useless to humans.

  • FartsWithAnAccent@fedia.io

    They tried implementing AI in a few of our systems, and the results were always fucking useless. What we call “AI” can be helpful in some ways, but I’d bet the vast majority of it is bullshit half-assed implementations so companies can claim they’re using “AI.”

      • FartsWithAnAccent@fedia.io

        Looking like they were doing something with AI, no joke.

        One example was “Freddy”, an AI for a ticketing system called Freshdesk: It would try to suggest other tickets it thought were related or helpful but they were, not one fucking time, related or helpful.

    • The Menemen!@lemmy.world

      It is great for pattern recognition (we use it to recognize damage in pipes) and probably pattern reproduction (never used it for that). Haven’t really seen much other real-life value.

  • cheddar@programming.dev

    Me: no way, AI is very helpful, and if it isn’t then don’t use it

    created challenges in achieving the expected productivity gains

    achieving the expected productivity gains

    Me: oh, that explains the issue.

    • Bakkoda@sh.itjust.works

      It’s hilarious to watch it used well and then see human nature just kick in.

      We started using some “smart tools” for scheduling manufacturing and it’s honestly been really really great and highlighted some shortcomings that we could easily attack and get easy high reward/low risk CAPAs out of.

      Company decided to continue using the scheduling setup but not invest in a single opportunity we discovered which includes simple people processes. Took exactly 0 wins. Fuckin amazing.

      • Croquette@sh.itjust.works

        Yeah, but they didn’t have a line for that in their Excel sheet, so how are they supposed to find that money?

        Bean counters hate nothing more than imprecise cost savings. Are they gonna save 100k in the next year? 200k? We can’t have that imprecision now, can we?

      • dejected_warp_core@lemmy.world

        Honestly, this sounds like the analysis uncovered some managerial failings and so they buried the results; a cover-up.

        Also, and I have yet to understand this, but selling “people space” solutions to very technically/engineering-inclined management is incredibly hard to do. Almost like there’s a typical blind spot for solving problems outside their area of expertise. I hate generalizing like this but I’ve seen this happen many times, at many workplaces, over many years.

        • Bakkoda@sh.itjust.works

          No, I’d say you’re spot on. I’m constantly told I’m a type [insert flavor-of-the-month term from whatever managerial class they just took], and that my conversations intimidate or emasculate people. They’re probably usually correct, but I find it’s usually just an attempt to cover their asses. I’m a contract worker; I was hired for a purpose with a limited time window, and I fuckin’ deliver results even when they ignore 90% of the analysis.

          • dejected_warp_core@lemmy.world

            It’s gotta piss them off.

            That’s not unusual, sadly. Sometimes someone brings in a contractor in an attempt to force change, since contractors aren’t tainted by loyalties or the culture when it comes to saying ugly things. So anger and disruption are the product you’ve actually been hired to deliver; surprise! What pains me the most is seeing my fellow contractors walk into just such a situation and wind up worse for wear as a result.

            Edit: the key here is to see this coming and devise a communication plan to temper your client’s desire to stir the pot, and get yourself out of the line of fire, so to speak.

  • Nobody@lemmy.world

    You mean the multi-billion dollar, souped-up autocorrect might not actually be able to replace the human workforce? I am shocked, shocked I say!

    Do you think Sam Altman might have… gasp lied to his investors about its capabilities?

      • Nobody@lemmy.world

        Yeah, OpenAI, ChatGPT, and Sam Altman have no relevance to AI LLMs. No idea what I was thinking.

        • Hackworth@lemmy.world

          I prefer Claude, usually, but the article also does not mention LLMs. I use generative audio, image generation, and video generation at work as often as, if not more often than, text generators.

    • kent_eh@lemmy.ca

      Except it didn’t make more jobs, it just made more work for the remaining employees who weren’t laid off (because the boss thought the AI could let them have a smaller payroll)

  • Sanctus@lemmy.world

    AI is better when I use it for item generation. It kicks ass at generating loot drops for encounters. All I really have to do is adjust item names if it’s not a mundane weapon. I do occasionally change an item completely because its effects can get bland, but I don’t do much more than that.

    • ClamDrinker@lemmy.world

      That’s because you’re using AI for the correct thing. As others have pointed out, if AI usage is enforced (like in the article), chances are they’re not using it correctly. It’s not a miracle cure for everything and should just be used when it’s useful. It’s great for brainstorming. Game development (especially on the indie side of things) really benefits from being able to produce more with less. Or are you using it for DnD?

  • TrickDacy@lemmy.world

    AI is stupidly used a lot, but this seems odd. For me, GitHub Copilot has sped up writing code. Hard to say how much, but it definitely saves me seconds several times per day. It certainly hasn’t added to my workload…

    • Cryophilia@lemmy.world

      Probably because the vast majority of the workforce does not work in tech but has had these clunky, failure-prone tools foisted on them by tech. Companies are inserting AI into everything, so what used to be a problem that could be solved in 5 steps now takes 6 steps, with the new step being “figure out how to bypass the AI to get to the actual human who can fix my problem”.

      • jubilationtcornpone@sh.itjust.works

        I’ve thought for a long time that there are a ton of legitimate business problems out there that could be solved with software. Not with AI. AI isn’t necessary, or even helpful, in most of these situations. The problem is that creating meaningful solutions requires the people who write the checks to actually understand some of these problems. I can count on one hand the number of business executives I’ve met who were actually capable of that.