• Kichae@lemmy.ca · 36 points · 8 months ago

    Part of what makes Twitter, Reddit, etc. such easy targets for bot spammers is that they’re single-point-of-entry. You join, you have access to everyone, and then you exhaust an account before spinning up 10 more.

    The Fediverse has some advantages and disadvantages here. One significant advantage is that – particularly if, when the dust finally settles, it’s a big network of a large number of small sites – it’s relatively easy to cut off nodes that aren’t keeping the bots out. One disadvantage, though, is that it can create a ton of parallel work if spam botters target a large number of sites to sign up on.

    A big advantage, though, is that most Fediverse sites are manually moderated and administered. By and large, sites aren’t looking to offload this responsibility to automated systems, so what needs to get beaten is not some algorithmic puzzle, but human intuition. Though, the downside to this is that mods and admins can become burned out dealing with an unending stream of scammers.

      • ᴇᴍᴘᴇʀᴏʀ 帝@feddit.uk · 6 points · 8 months ago

        We had a bunch of Japanese teenagers run scripts on their computers and half the Fediverse was full of spam. If someone who really cared about spamming had been behind it, this shit wouldn’t have stopped as quickly.

        The upside of that attack is that instance admins had to raise their game, and now most of the big instances are running anti-spam bots and sharing intelligence. Next time we’ll be able to move quickly and shut it all down, whereas this time we were scrambling to catch up. Then the spammers will evolve their attack and we’ll raise our game again.

      • Kichae@lemmy.ca · 2 points · 8 months ago

        It’s true that the toolset isn’t here now, and the network is actually very fragile at the moment.

        It’s also true that platform builders don’t seem to want to deal with these kinds of tools, for raisins.

        But it’s also true that temporary blocks are both effective and not that big of a deal.

        I’m not sure why you’d think that manual moderation will lead to small instances getting barred, though. Unless you’re predicting that federation will move to whitelisting, rather than blacklisting? That’s historically been the tool of corporate services, not personal or community ones.

        • Skull giver@popplesburger.hilciferous.nl · 3 points · 8 months ago

          Lemmy used whitelist-based federation right up until people started moving over from Reddit, so it’s not exactly a new approach.
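
          The difference is really just which default an instance falls back to when a domain isn’t on a list. A rough sketch of the idea (made-up names and domains, not Lemmy’s actual config or code):

              # Toy illustration of allowlist vs denylist federation; not real Lemmy code.
              ALLOWLIST = {"lemmy.ca", "feddit.uk"}    # explicitly trusted instances
              DENYLIST = {"spam-farm.example"}         # explicitly blocked instances

              def accepts_activity_from(domain: str, allowlist_mode: bool) -> bool:
                  """Decide whether to accept federated activity from a remote instance."""
                  if allowlist_mode:
                      # Whitelist federation: unknown domains are rejected by default.
                      return domain in ALLOWLIST
                  # Blacklist federation: unknown domains are accepted by default.
                  return domain not in DENYLIST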

          With new domains costing anywhere from nothing to $3, setting up thousands of spam servers isn’t difficult or expensive. There’s already a tool designed to bypass blocks automatically when it’s fed a second domain. If spammers actually cared about the Fediverse, they’d be all over it in no time.

          But the big danger right now is that free, open servers, big or small, don’t have much in the way of verification or bot prevention. Some instances don’t have any protection at all (which the Japanese spam wave abused), others are using basic CAPTCHAs that Copilot will happily solve for you. On centralised services this problem can be fixed temporarily with technologies like strict device attestation (RIP Linux/custom ROM/super cheap phone users), but in a decentralised environment this won’t work. Then there are the many, many servers that never received patches and still have the Mastodon account takeover vulnerability, for instance.

          Small servers will have to prove themselves to the servers they want to federate with, or abuse will be too easy.

          I don’t think temporary blocks are a solution. So far the attacks have focused on tiny servers with one or a couple of users, but with the rise of AI I don’t think the bigger servers will be able to stop dedicated spammers. The spam wave is over for now, mostly because a few of the Japanese kids got arrested or had their parents find out. Right up until the very end, Lemmy and Mastodon were full of spam.

          I don’t want this recentralisation to happen, but I think the Fediverse will end up like email: strict, often arbitrary spam prevention systems that make running your own server very difficult. After all, email is the original federated digital network, and it’s absolutely full of stupid restrictions and spam. ActivityPub may have signatures to authenticate users, something that even DKIM still lacks, but the “short message + picture” nature of most Fediverse content makes it very difficult to write good spam detection rules. Maybe someone will create some kind of AI solution, who knows, but I expect deliverability to become as problematic as it is with email, or maybe even worse.
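
          For comparison, an email filter gets headers, sender reputation and a long body full of signals to score; a rules pass over a typical Fediverse post has almost nothing to work with. Purely a toy example, not any real filter:

              # Toy rules pass over a typical "short caption + image" post.
              SPAM_WORDS = {"giveaway", "airdrop", "crypto"}

              def looks_spammy(caption: str) -> bool:
                  """Count keyword hits in the caption; the image itself is opaque to us."""
                  words = caption.lower().split()
                  hits = sum(1 for word in words if word.strip(".,!?") in SPAM_WORDS)
                  # A dozen tokens of caption gives rules almost nothing to score,
                  # and the actual payload can live entirely inside the picture.
                  return hits >= 2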

          I can’t think of a good solution here. Our best bet may be hoping that people won’t be too dickish, or keeping the Fediverse out of the mainstream so all the spammers go to Threads and Bluesky first.

    • explodicle@local106.com · 16 points · 8 months ago

      If it really ramps up, we could share block lists too, like with ad blockers. So if a friend (or nth-degree friend) blocks someone, then you would block them automatically.
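
      Roughly the idea, as a sketch (the function names are made up, not an existing Fediverse API): walk out from your own account up to n hops and union everyone’s published blocklists.

          # Hypothetical sketch of nth-degree blocklist sharing; no real Fediverse API here.
          def merged_blocklist(me, friends_of, blocklist_of, max_depth=2):
              """Union the blocklists of everyone within max_depth hops of `me`."""
              blocked = set(blocklist_of(me))
              frontier, seen = [me], {me}
              for _ in range(max_depth):
                  next_frontier = []
                  for user in frontier:
                      for friend in friends_of(user):
                          if friend not in seen:
                              seen.add(friend)
                              next_frontier.append(friend)
                              blocked |= set(blocklist_of(friend))
                  frontier = next_frontier
              return blocked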