• RedditWanderer@lemmy.world · 5 months ago

      At first, all companies were afraid of giving access to these models over trade-secret and security issues. But then they basically all met at the White House to agree that they would make way more fucking money stealing it than they would ever pay in restitution or damages to people and small businesses.

      Suddenly everybody had a chatbot and generated art ready for commercial sale. They also had to make the shift quickly, before official laws and protections (mostly from the EU) came in.

      Now AI is plateauing a bit, so they must hurry to get valued at 10 trillion dollars, get their energy needs subsidized, and have taxpayers invest in the nation’s energy requirements on their behalf.

    • Wrench@lemmy.world · 5 months ago

      I doubt that most corporations would even consider allowing Slack as a trusted app if they weren’t hosting their own instances.

      I have to assume this training is exclusively on instances hosted on Slack’s servers. So probably lots of smaller businesses that don’t know any better. And it was probably agreed to in the ToS as part of using free, easy-to-set-up cloud servers.

        • Wrench@lemmy.world · 5 months ago

          Ahh, I looked at it and you’re right. They have an “Enterprise” version which seems security-conscious.

          Still, I stand by my original assertion. I have worked for FAANG companies with completely locked-down security that allowed us to use Slack. I would be extremely surprised if their contract with Slack didn’t ensure complete data privacy.

          We’re talking about companies where a product leak makes international news. There is zero chance Slack employees have access to communications.

          • Kilgore Trout@feddit.it · 5 months ago

            We’re talking about companies where a product leak makes international news. There is zero chance Slack employees have access to communications.

            Sure, even though Slack itself admits in its privacy policy that they do.

  • Andromxda 🇺🇦🇵🇸🇹🇼@lemmy.dbzer0.com · 4 months ago

    Stay away from proprietary crap like Discord, Slack, WhatsApp and Facebook Messenger. There are enough FOSS alternatives out there:

    • You just want to message a friend/family member?
    • You need strong privacy/security/anonymity?
      • SimpleX
      • Session
      • Briar
      • I can’t really tell you which one is the best, since I’ve never used any of these (except Session) for an extended period of time. Briar seems to be the best for anonymity, because it routes everything through the Tor network. SimpleX lets you host your own node, which is pretty cool.
    • You want to host an online chatroom/community?
    • You need to message your team at work?
    • You want a Zoom alternative?
  • RidcullyTheBrown@lemmy.world · 5 months ago

    It’s funny how the conventional wisdom at the end of the last decade was that Slack was preferred over other simpler/free alternatives because of its UX. People hailed it for how simple and intuitive it was to use, etc.

    Five or six years later, it has become a bloated piece of crap riddled with bugs. And the UI changes that come unannounced… it should be a criminal offense to change a UI through automated updates.

    Anyway, here we are: companies have handed their data to this monster, and we’ll see how they react when that data gets misused. Hopefully that will be the beginning of the end for it.

      • Evotech@lemmy.world · 5 months ago

        I love Slack. But the only thing I can compare it with for corp use is Teams. So of course it’s amazing.

  • iAmTheTot@kbin.social · 5 months ago

    It’s a safe bet that if you’ve put something on the internet, it’s been scraped by a bot for training by now. I don’t like that, for the record; I’m just saying I’m not surprised at this point. Companies are morally bankrupt.

    • cm0002@lemmy.world · 5 months ago

      I don’t know why everyone is suddenly shocked; there have been various scraper bots collecting text for many years now, LONG before LLMs came onto the scene.

      • QuadratureSurfer@lemmy.world · 5 months ago

        I agree, but it’s one thing if I post to public places like Lemmy or Reddit and it gets scraped.

        It’s another thing if my private DMs or private channels are being scraped and put into a database that will most likely get outsourced for prepping the data for training.

        Not only that, but the trained model will have internal knowledge of things that are sure to give anxiety to any cybersecurity expert. If users know how to manipulate the model (e.g. through prompt injection), they could get it to divulge some of that information.
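        The worry above can be sketched in a few lines. This is a toy stand-in, not any real model or API: it just shows why a model that memorized private-channel text verbatim during training is dangerous, since anyone who can guess a prefix can recover the rest. The snippets and function names are hypothetical.

```python
# Hypothetical "training corpus" scraped from private DMs/channels.
TRAINING_SNIPPETS = [
    "the staging DB password is hunter2-staging",
    "deploy key lives at /home/ci/.ssh/id_deploy",
]

def toy_complete(prompt: str) -> str:
    """Toy stand-in for an LLM that memorized its training data:
    returns the verbatim continuation of any snippet starting with
    the prompt (greedy recall, no generalization)."""
    for snippet in TRAINING_SNIPPETS:
        if snippet.startswith(prompt):
            return snippet[len(prompt):]
    return "(no memorized continuation)"

# An attacker who guesses a plausible prefix recovers the secret.
print(toy_complete("the staging DB password is "))  # -> hunter2-staging
```

        Real extraction attacks are noisier than this, but the principle is the same: training on private text risks making that text recoverable through the model.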

  • Ghostalmedia@lemmy.world · 5 months ago

    Sounds like a lot of this is for non-generative AI. It’s for dumb things like that frequently-used-emoji feature.
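    For what it’s worth, that kind of non-generative feature is really just counting. A minimal sketch (purely illustrative; Slack’s actual pipeline is not public):

```python
from collections import Counter

def frequent_emoji(reaction_log: list[str], k: int = 3) -> list[str]:
    """Return a user's top-k most used emoji from their reaction history."""
    return [emoji for emoji, _ in Counter(reaction_log).most_common(k)]

log = ["+1", "tada", "+1", "eyes", "+1", "tada"]
print(frequent_emoji(log))  # -> ['+1', 'tada', 'eyes']
```

    Nothing about a feature like this requires training a generative model on message content.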

    Knowing how the legal teams at my tech companies have worked, I’d bet a lawyer updated the terms language to comply with privacy legislation, but did a shit job and didn’t clarify what specifically the ToS covers. They were lazy and crafted something broad so they wouldn’t have to actually talk to product or marketing people in their org.

  • Endorkend@kbin.social · 5 months ago

    The more they push to train AI on our shitposting on social networks, the more certain I am that we’re fucking doomed if their AI ever reaches consciousness.

    • Thorny_Insight@lemm.ee · 5 months ago

      We may very well be doomed if AI reaches consciousness, but I’m not quite convinced LLMs are the way to get there. Even if they were, and one was trained solely on social media content, I still wouldn’t expect it to adopt the behaviour of your typical social media commenter. The toxic behaviour on social media is, in my view, driven almost solely by human ego and pettiness. It’s not obvious to me that an AI would care about things like winning arguments or coming up with snide remarks.

      What I see as the most likely outcome is an endlessly patient, almost autistic-like being that’s balanced in its views and pretty difficult to argue against. I doubt humans are anywhere near the far end of the intelligence spectrum, and something with information-processing capability orders of magnitude greater than ours would more than likely not get caught up in confirmation bias, partisan thinking, motivated reasoning, being tossed around by emotions, cognitive dissonance, etc. Those are by definition human features.