• 0 Posts
  • 87 Comments
Joined 1 year ago
Cake day: June 11, 2023

  • But how will you get a “universal” view of the fediverse? No single authoritative view exists.

    You yourself acknowledge that this is complicated, but I honestly don’t understand what appeal a fake centralized system would have for people who don’t care about decentralization in the first place. Any such solution is almost inevitably going to end up janky and hacked together just to present a façade of a worse Reddit.

    Lemmy’s strength is its decentralization and federation. It’s not a problem to be solved, it’s a feature that’s attractive in its own right. It doesn’t need mass appeal, it’s a niche project and probably always will be. I don’t think papering over the fundamental design of the software will make it meaningfully more attractive to the non-technically minded.


  • Yes, but only if your firewall is set to reject instead of drop. The documentation you linked mentions this; that’s why ports are listed as open|filtered: a port that gives no response might be open, or it might be filtered (dropped).

    On a modern firewall, a UDP nmap scan will show every port as open|filtered, regardless of whether it’s actually open or not (there’s a rough sketch of why below the quoted docs).

    Edit: Here’s the relevant bit from the documentation:

    The most curious element of this table may be the open|filtered state. It is a symptom of the biggest challenges with UDP scanning: open ports rarely respond to empty probes. Those ports for which Nmap has a protocol-specific payload are more likely to get a response and be marked open, but for the rest, the target TCP/IP stack simply passes the empty packet up to a listening application, which usually discards it immediately as invalid. If ports in all other states would respond, then open ports could all be deduced by elimination. Unfortunately, firewalls and filtering devices are also known to drop packets without responding. So when Nmap receives no response after several attempts, it cannot determine whether the port is open or filtered. When Nmap was released, filtering devices were rare enough that Nmap could (and did) simply assume that the port was open. The Internet is better guarded now, so Nmap changed in 2004 (version 3.70) to report non-responsive UDP ports as open|filtered instead.
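
    A rough Python sketch of that ambiguity, in case it helps (the host, port, and function name are just placeholders, not anything from this thread):

    ```python
    # Classify a UDP port roughly the way nmap does; host/port are placeholders.
    import socket

    def probe_udp(host: str, port: int, timeout: float = 2.0) -> str:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        s.connect((host, port))         # connect() so ICMP errors surface as exceptions
        try:
            s.send(b"")                 # empty probe, like nmap's default UDP probe
            s.recv(1024)                # any reply at all means something is listening
            return "open"
        except ConnectionRefusedError:  # ICMP port-unreachable came back (reject)
            return "closed"
        except socket.timeout:          # silence: open but quiet, or dropped by a firewall
            return "open|filtered"
        finally:
            s.close()

    print(probe_udp("192.0.2.1", 53))
    ```

    The reject/drop distinction is exactly the difference between the last two branches: reject produces the ICMP error, drop produces silence.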



  • Google destroys their own search engine by encouraging terrible SEO nonsense and then offers the solution in the form of these AI overviews, cutting results out of the picture entirely.

    You search for something on the web nowadays and half the results are written by AI anyway.

    I don’t really care about the “human element” or whatever, but AI is such a hype train right now. It’s still early days for the tech, it still hallucinates a lot, and I fundamentally can’t trust it—even if I trusted the people making it, which I don’t.


  • It definitely encrypts the traffic; the problem is that it encrypts it in a way that deep packet inspection (DPI) can recognize. It’s easy for someone snooping on your traffic to tell that you’re using WireGuard, but because it’s encrypted they can’t read the contents of the messages.
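
    To illustrate how little DPI needs here, a rough sketch of the kind of fingerprint check it could run on a single UDP payload (the message-type byte and the fixed handshake sizes come from the WireGuard spec; the function itself is just an illustration):

    ```python
    # Illustrative WireGuard fingerprint check, not a real DPI engine.
    def looks_like_wireguard(udp_payload: bytes) -> bool:
        if len(udp_payload) < 4 or udp_payload[1:4] != b"\x00\x00\x00":
            return False    # every WireGuard message starts with a type byte + 3 reserved zero bytes
        msg_type = udp_payload[0]
        if msg_type == 1 and len(udp_payload) == 148:
            return True     # handshake initiation is always 148 bytes
        if msg_type == 2 and len(udp_payload) == 92:
            return True     # handshake response is always 92 bytes
        return False
    ```

    The payload stays unreadable, but the shape of the handshake gives the protocol away.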


  • I just don’t understand why you want to copy-paste ChatGPT. Surely the parent commenter could access ChatGPT if they wanted, so you’re not bringing a new perspective. If “content” is all that matters, you could generate a thousand different ChatGPT responses and reply to their comment with each one, but that’s not acceptable. Why not?

    People come here for a conversation with other people, and copy-paste ChatGPT responses don’t actually contribute to that. If all they want is information/content, there are better places to find it. They could use ChatGPT, sure, but they could also use Wikipedia or even an economics textbook. It’s up to them. Even if they use ChatGPT, they’d probably prompt it a few times in a few different ways to get the best info for them.

    If you really want to use ChatGPT in your responses, why not add your own voice? When I suggested commentary, I didn’t mean that you should just prompt ChatGPT into pretending to be a human; I meant that you should add your own perspective. Editorialize. Pull out the good bits.







  • Most things should be behind Authelia. It’s hard to know how to help without knowing what exactly you’re doing with it, but generally speaking, Authelia means you can have SSO+2FA for every app, even apps that don’t provide it by default.

    It also means that if you have users, you don’t need them to store a bunch of passwords.

    One big thing to keep in mind is that anything with its own login system, like Nextcloud, may be more involved to get working behind Authelia.
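
    If it helps to picture how that works, the general pattern is forward auth: the reverse proxy asks Authelia whether each request is allowed before it ever reaches the app. A rough Python sketch of that check (the auth.example.com host, the /api/verify path, and the header names are assumptions based on typical nginx auth_request setups, not copied from the docs):

    ```python
    # Rough sketch of the forward-auth check a reverse proxy performs.
    # The host, path, and header names below are assumptions for illustration.
    import urllib.error
    import urllib.request

    def request_is_authorized(original_url: str, cookies: str) -> bool:
        req = urllib.request.Request(
            "https://auth.example.com/api/verify",   # assumed verify endpoint
            headers={
                "X-Original-URL": original_url,      # what the user is trying to reach
                "Cookie": cookies,                   # forwarded Authelia session cookie
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status == 200            # authenticated and allowed by the access rules
        except urllib.error.HTTPError:
            return False                             # 401/403: the proxy sends the user to the login portal
    ```

    Because the check happens at the proxy, the app behind it never sees an unauthenticated request, which is how even apps with no login of their own end up with SSO+2FA.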


  • That’s, I guess, why the term CSEM is used: if the images are being shared around, exploitation has clearly occurred. I can see where you’re coming from, though.

    What I will say is that there are some weird laws around it, and there have even been cases where kids have been convicted of producing child pornography… of themselves. It’s a bizarre situation. If anything, it seems like abuse of the court system at that point.

    Luckily a lot of places have been patching the holes in their laws.


  • But hey, instead of killing everyone, eugenics could lead us to a beautiful stratified future, like depicted in the aspirational sci-fi utopia of Brave New World!

    I agree with you, ultimately. My point is just that “good for humanity vs bad for humanity” isn’t a debate; there’s no “We want to ruin humanity” party. Most people see their own viewpoint as being best for humanity, unless they’re a psychopath or a nihilist.

    There are fundamental differences in political views as well as ethical beliefs, and any attempt to boil them down to “good for humanity” vs “bad for humanity” is going to be inherently political. I think “what’s best for humanity” is a good guiding metric to determine what one finds ethical, but using it to categorize others’ political beliefs is going to be divisive at best.

    In other words, it’s not comparable to the left/right axis, which may be insufficient and one-dimensional, but which at least describes something that can be somewhat objective (if controversial and ill-defined). Someone can be happy with their position on that axis. Whereas if the axis were good/bad, everyone would place themselves at Maximum Good, so it’s not really useful or comparable to the left/right paradigm.



  • I don’t think that “everyone is inherently equal” is a conclusion you can reach through logic. I’d argue that it’s more like an axiom, something you have to accept as true in order to build a foundation of a moral system.

    This may seem like an arbitrary distinction, but I think it’s an important one to make, because some people don’t accept the axiom that “everyone is inherently equal”. Some people are simply stronger (or smarter/more “fit”) than others, they’ll argue, and it’s unjust to impose arbitrary systems of “fairness” onto them.

    In fact, they may believe that it is better for humanity as a whole for those who are stronger/smarter/more fit to have positions of power over those who are not, and believe that efforts for “equality” are actually upsetting the natural way of things and thus making humanity worse off.

    People who have this way of thinking largely cannot be convinced to change through pure logical argument (just as a leftist is unlikely to be swayed by the logic of a social darwinist), because their fundamental core beliefs, the axioms all of their logic is built on top of, are different.

    And it’s worth noting that while this system of morality is repugnant, it doesn’t inherently result in everyone killing each other like you claim. Even if you’re completely amoral, you won’t kill your neighbor because then the police will arrest you and put you on trial. Fascist governments also tend to have more punitive justice systems, to further discourage such behavior. And on the governmental side, they want to discourage random killing because they want their populace to be productive, not killing their own.




  • I disagree. It would be better to set a precedent that using people’s voices without permission is not okay. Even in your example, you’re suggesting that you would have a Patreon while publishing mods that contain voice clips made using AI. In this scenario, you’ve made money from these unauthorized voice recreations. It doesn’t matter if you’re hoping to one day hire the VAs themselves; in the interim you’re profiting off their work.

    Ultimately though, I don’t think it matters whether you’re making money or not. I got caught up in the tech excitement of voice AI when we first started seeing it, but between the strike and more VAs and other actors sharing their opinions on it, I’ve been reminded of just how important consent is.

    In the OP article, Amelia Tyler isn’t saying anything about making money off her voice; she said “to actually take my voice and use it to train something without my permission, I think that should be illegal”. I think that’s a good line to draw.