Code used in the analysis is here

      • Thorny_Insight@lemm.ee
        link
        fedilink
        English
        arrow-up
        12
        ·
        edit-2
        8 months ago

        As does most critizism about LLMs. We wanted one that behaves like a human and then got angry when that’s exactly what it does.

  • gedaliyah@lemmy.world
    link
    fedilink
    English
    arrow-up
    46
    ·
    8 months ago

    How could we possibly expect that there wouldn’t be bias? Is based off the patterns that humans use. Humans have bias. The difference is that humans he can recognize their bias and worked overcome it. As far as i know chat GPT can’t do that.

    • Plopp@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      ·
      8 months ago

      Because they don’t know what “AI” is so they think it’s this technical thing that just knows things, all the things, magically. I’ve seen confident statements like “we use AI in our recruiting process because it has no bias!!” 🤦‍♂️

    • abhibeckert@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      6
      ·
      edit-2
      8 months ago

      humans he can recognize their bias

      Can they? I’m not convinced.

      As far as i know chat GPT can’t do that.

      You do it with math. Measure how many females you have with a C level position at the company and introduce deliberate bias into hiring process (human or AI) to steer the company towards a target of 50%.

      It’s not easy, but it can be done. And if you have smart people working on it you’ll get it done.

      • JoBo@feddit.uk
        link
        fedilink
        English
        arrow-up
        14
        ·
        edit-2
        8 months ago

        You start off by claiming that humans can’t recognise their biases and end up by saying that there’s no problem because humans can recognise their biases so well they can programme it out of AI.

        Which is it?

      • T156@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        8 months ago

        You do it with math. Measure how many females you have with a C level position at the company and introduce deliberate bias into hiring process (human or AI) to steer the company towards a target of 50%.

        Only if you can recognise the bias, and what the cause of the bias is to fix it.

        It’s not implausible that the AI might come to the same trend using similar patterns, even if you excised the gender data. People with particular names, hobbies, whether they’d joined a sorority, etc.

        A slapdash fix to try to patch the bias by just adding a positive spin might not do that much, and most of the time, you don’t know the specifics of what goes on inside a model, and what different parts specifically contribute to what. Let alone one owned by another company like ChatGPT, who would very much not like people pulling apart their LLMs to figure out how they work, and what they were trained on.

        Consider the whole Google Bard image generation debacle, where it’s suspected that they secretly added additional keywords to prompts to try to minimise bias, causing a whole bunch of other problems because it had unpredicted effects.

      • Natanael@slrpnk.net
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        8 months ago

        Any LLM won’t have the right architecture to implement that kind of math. They are built specifically to find patterns, even obscure ones, that nobody knows of. They could start flagging random shit indirectly associated with gender like relative timing between jobs or rate of promotions, etc, and you wouldn’t even notice it’s doing it

      • LWD@lemm.ee
        cake
        link
        fedilink
        English
        arrow-up
        9
        ·
        8 months ago

        Didn’t somebody make a biased AI and a laundering AI to say it wasn’t biased, just to demonstrate how easy it was to do?

        • Desistance@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          ·
          8 months ago

          Someone built a biased AI knowing it was biased and launched it anyway. Sounds like actually intended design.

          • LWD@lemm.ee
            cake
            link
            fedilink
            English
            arrow-up
            5
            ·
            8 months ago

            Extremely intended! They built a model to lie and a surrogate model to say the first model was being truthful.

            They called it LaundryML.

  • dgmib@lemmy.world
    link
    fedilink
    English
    arrow-up
    24
    arrow-down
    1
    ·
    8 months ago

    Job seekers next ChatGPT prompt:

    Here’s a job posting and my resume, can you tell me what to change to make me sound like a perfect fit for the role?

    ChatGPT:

    • Change name from “Latifa Tshabalala“ to “Kevin Smith” …
  • Cannibal_MoshpitV3@lemmy.world
    link
    fedilink
    English
    arrow-up
    21
    arrow-down
    2
    ·
    8 months ago

    I’ve had several bosses tell me that the moment they see a stereotypical African American name they throw out the application/resume.

    • KevonLooney@lemm.ee
      link
      fedilink
      English
      arrow-up
      26
      arrow-down
      2
      ·
      8 months ago

      This is why statistics are important. Conservatives will say “that’s tokenism! They’re not getting jobs on merit!”

      My guy, quotas for underserved minorities and women exist because we don’t live in a meritocracy. Talent and ambition is dispersed equally. If you are mostly hiring and promoting people like you, that’s exactly why quotas are needed.

      • makyo@lemmy.world
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        1
        ·
        8 months ago

        I was recently at a launch party Q&A and a guy in the audience actually asked whether quotas kept deserving men from the job. I shit you not - the product being launched was an educational game about equal treatment of women in the workforce. I guess that showed they were reaching the right people at least.

      • Trantarius@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        2
        ·
        8 months ago

        Quotas are not the only way to combat discrimination, nor are they a good one. Name-blind hiring would resolve name discrimination without making additional presumptions about the applicant pool. A quota presumes that the applicant pool has a particular racial mix, and that a person’s qualifications and willingness to apply are independent of race. And even if those happen to be true, it can’t take into account the possibility that the random distribution of applicants just happens to sway one way or another in a particular instance.

    • Potatos_are_not_friends@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      8 months ago

      Honestly that works for human recruiters too.

      I have a very “Caucasian” name. For years, I had some serious confused faces during interviews, people who thought I was a great applicant on paper but then cut the meeting short because I suddenly “dont fit the culture”.

      I now include my photo in my application. Saves me time too, because I don’t want to work at a racist ass company.

  • backgroundcow@lemmy.world
    link
    fedilink
    English
    arrow-up
    17
    ·
    8 months ago

    “I’ve created this amazing program that more or less precisely mimics the response of a human to any question!”

    “What if I ask it a question where humans are well known to apply all kinds of biases? Will it give a completely unbiased answer, like some kind of paragon of virtue?”

    “No”

    <Surprised Pikachu face>

  • SoupBrick@yiffit.net
    link
    fedilink
    English
    arrow-up
    11
    ·
    8 months ago

    I can’t wait for people to start inserting disruptive code using white colored text in their resumes.

  • dinckel@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    1
    ·
    edit-2
    8 months ago

    You know what, I’d actually like recruiters to continue using these now.

    Usually I’m against it, because HR gets to not do their job at all, and potential employees get fucked out of work, but this sounds great. Let them use it, and then catch a handful of harassment/discrimination suits

    • CameronDev@programming.dev
      link
      fedilink
      English
      arrow-up
      11
      arrow-down
      1
      ·
      8 months ago

      Except that they are now protected by the “We arent racist, they algorithm did it” defence, so realistically, only us plebs will lose.

      • gedaliyah@lemmy.world
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        1
        ·
        8 months ago

        The courts have already established that the user is still responsible for the tool, even if the tool is very sophisticated.

        • CameronDev@programming.dev
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          8 months ago

          Have they? There is the air canada thing, but that was kinda a different situation, the chat bot was explicitly acting for the company, and made direct claims for the company?

          IANAL, but proving discrimination was already hard, and now they can just point at the black box and blame it, so its gonna get harder?

          • T156@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            8 months ago

            IANAL, but proving discrimination was already hard, and now they can just point at the black box and blame it, so its gonna get harder?

            Especially if it gets rolled into other checks, like a police check, or a “personality fit”, which makes it more ambiguous.

        • CameronDev@programming.dev
          link
          fedilink
          English
          arrow-up
          2
          ·
          8 months ago

          Yeah, it really is. On the upside, if you get rejected from a company that doesnt even have the time to manually review your CV, that might be a blessing in disguise.