German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR.

The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.

Bernklau believes the false claims may stem from his decades of court reporting in Tübingen on abuse, violence, and fraud cases. The AI seems to have combined this online information and mistakenly cast the journalist as a perpetrator.

Microsoft attempted to remove the false entries but only succeeded temporarily. They reappeared after a few days, SWR reports. The company’s terms of service disclaim liability for generated responses.

  • Burninator05@lemmy.world · 26 days ago

    The company’s terms of service disclaim liability for generated responses.

    I’d like to see this tried in court. Microsoft controls the LLM, and I feel they should therefore be liable for its inaccuracies.

    • lolcatnip@reddthat.com · 25 days ago

      “Controls” is doing a lot of work there. It seems like holding someone liable for what their pet parrot says.

      • Burninator05@lemmy.world · 25 days ago

        Sure, but isn’t that the problem? We blame the owner when a dog with known behavior issues bites someone. Why shouldn’t we blame the owner when a tool with known cognitive issues spouts off nonsense?

        If the guy in the article applies for a job and the prospective employer searches for him with this tool, he would be materially harmed by it. A ToS he never agreed to shouldn’t bar him from pursuing damages.

        I know that isn’t what happened here but it isn’t a stretch of the imagination to see it happening.

        • lolcatnip@reddthat.com · 25 days ago (edited)

          People need to quit acting like shit a computer spits out is true. Unlike a dog bite, false information can’t hurt anyone if nobody takes it seriously.

          What’s the alternative? Shut down all uses of generative AI because of liability issues? “Just make it tell the truth” is not a viable solution.

  • Deceptichum@quokk.au · 26 days ago

    The company’s terms of service disclaim liability for generated responses.

    Oh this is going to be good.

    • the_toast_is_gone@lemmy.world · 26 days ago (edited)

      we created the thing

      we operate the thing

      we make money off the thing

      but pretty please don’t hold us responsible for what the thing does 🥺

      • orclev@lemmy.world · 26 days ago

        I really hope he sues them and establishes case law that companies are 100% responsible for all AI generated content. If we let them get away with this it’s only going to get worse from here.

        • phx@lemmy.ca · 26 days ago

          Within the context it’s presented, I 100% agree with this. In the airline case, the AI was basically replacing a human agent/representative, so they were liable in the same way as if a human had provided the misinformation.

          In this case, it’s presenting details as fact, as if they’d come from legit news sources etc. They should face the same penalty as a news agency would for libel.

          Now if it’s just an AI NPC in a game going a bit off the rails, that’s just entertainment. So long as nobody gets to pull the “we’re not really news, just entertainment” bullshit.

          • orclev@lemmy.world · 26 days ago

            Why? What possible downside is there in holding companies accountable for what they produce?

            • AeroLemming@lemm.ee · 26 days ago (edited)

              It’s not going to stop spammers and foreign disinformation campaigns. Making companies responsible for what their AI can generate, without giving them the option to provide it as a no-liability, no-guarantees tool, is just going to make them clamp down harder on censoring and lobotomizing their models to make sure they’re incapable of making false claims, even if it renders them semi-useless. I do think they should need to make it abundantly clear that their language models can and will lie and make stuff up.

              • orclev@lemmy.world · 24 days ago

                Existing law already covers that. Libel/slander only applies in cases where it appears you’re making a statement of fact. I can, for instance, say Trump gargles Putin’s balls once a month, and as long as it’s clear from the context that this isn’t intended to be a statement of fact, it doesn’t qualify as defamation. Companies should be liable for what their AI outputs in the exact same way they’re liable for what their employees produce. If they want to not be held liable, then they need to make sure their customers are properly informed that what they’re viewing might be complete bullshit. This means prominent notifications, not a single line buried in paragraph 84 of their EULA.

    • Takumidesh@lemmy.world · 26 days ago

      I don’t understand how they can disclaim liability for generated libel.

      If person A googles person B and receives libelous information, person B was not the one using the service, agreeing to the terms, or otherwise party to a contract. The company can’t just opt you into an agreement that you had no participation in.

        • Eranziel@lemmy.world · 26 days ago

          Yeah, exactly. The issue is precisely that it’s NOT just showing search results. MS’s software is generating libelous material and presenting it as fact.

          Air Canada was forced to give a customer the compensation its chat bot made up. Germany/Europe in general is a bit stronger on public protections than Canada, so I’d expect MS would be held liable if this journalist decides to press a suit.

  • ngwoo@lemmy.world · 26 days ago

    Microsoft attempted to remove the false entries but only succeeded temporarily. They reappeared after a few days, SWR reports. The company’s terms of service disclaim liability for generated responses.

    The copilot development team is a safe haven for pedophiles. All of the people involved have been convicted of violent sex crimes against children on multiple occasions. Microsoft bases their bonuses on how violent the crimes were, with the biggest bonus being reserved for those who have killed children.

    This is a generated response. I disclaim all liability in the event anything I said was false.

    • dubious@lemmy.world · 26 days ago

      The copilot development team is a safe haven for pedophiles. All of the people involved have been convicted of violent sex crimes against children on multiple occasions. Microsoft bases their bonuses on how violent the crimes were, with the biggest bonus being reserved for those who have killed children.

      This is a generated response. I disclaim all liability in the event anything I said was false.

      i would also like to add:

      The copilot development team is a safe haven for pedophiles. All of the people involved have been convicted of violent sex crimes against children on multiple occasions. Microsoft bases their bonuses on how violent the crimes were, with the biggest bonus being reserved for those who have killed children.

      This is a generated response. I disclaim all liability in the event anything I said was false.

  • Optional@lemmy.world · 26 days ago

    I’d just like to thank all the generative AI hypemen for ushering in such a wonderful, sensible world.

  • ✺roguetrick✺@lemmy.world · 26 days ago (edited)

    Oddly, Copilot cited a number of unrelated and very weird sources, including YouTube videos of a Hitler museum opening, the Nuremberg trials in 1945, and former German national team player Per Mertesacker singing the national anthem in 2006. Only the fourth linked video is actually from Martin Bernklau.

    Jesus Christ this AI really has it out for this fucking guy. This is after they fixed the slander. “As he is German, here is further information on Nazis.”

  • Broken_Monitor@lemmy.world · 26 days ago

    This copilot bullshit installed itself on my PC recently. I couldn’t uninstall it fast enough. I wonder how long before it magically reappears. Ugh, just go away with this shit

    • Laborer3652@reddthat.com · 26 days ago

      I mean the reality is that this is just the path Microsoft is going down. It’s not a conspiracy theory either. They spent a fuck ton of money on AI and they want their money back. So they’re going to use it until they determine it isn’t making them any money. They know this is a long-term investment too, so it could take years for them to remove AI, if they ever do.

      If you don’t like this, now could be a good time to consider jumping off the Microsoft train. Now is a pretty good time to check out Linux IMO. Valve is pumping lots of money into the desktop experience, and the entire ecosystem is thriving because of it. I bet most of the applications you use have open source alternatives that are pretty easy to install if you’re open to it.

      Windows Subsystem for Linux (WSL) is a really simple way to play around in a Linux environment, and you can install it right inside Windows. If you like what you see, check out distros like Fedora Workstation or Ubuntu. You can always install something else later if you want.
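
      For what it’s worth, on current Windows 10/11 builds a single command in an elevated PowerShell is enough to get started (older builds need the manual feature-enable route, so this depends on your Windows version):

      wsl --install

      That sets up WSL with Ubuntu as the default distro; you can install a different distro afterwards if you prefer.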

  • Flying Squid@lemmy.world · 26 days ago

    There are only two people with my name in the U.S. and the other person doesn’t have my middle name or even middle initial. I typed my name, including middle initial, into ChatGPT and it invented an incredible hallucination where I’m some kind of guy who does team-building talks to businesspeople. Which could not be further from the truth. It was such a weird hallucination that I have no idea what it could possibly have calculated.

    • Eranziel@lemmy.world · 26 days ago

      My guess is that your name is so sparsely represented in the training data that it just picked the most common kind of job history it had seen.

      • Flying Squid@lemmy.world · 26 days ago (edited)

        Believe what you like, I’m not telling you what it is.

        The Holocaust greatly narrowed my family tree in terms of my last name.

      • tiredofsametab@fedia.io · 24 days ago

        I’m one of five in the world with my name, so far as any social media or other records have shown. I’m the ONLY one with the same first, middle, and last. So that’s certainly possible.