• 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: June 15th, 2023




  • Imagine you were asked to start speaking a new language, e.g. Chinese. Your brain happens to work quite differently from the rest of us. You have immense capabilities for memorization and computation but not much else. You can’t really learn Chinese with this kind of mind, but you have an idea that plays right into your strengths. You will listen to millions of conversations by real Chinese speakers and mimic their patterns. You make notes like “when one person says A, the most common response by the other person is B”, or “most often after someone says X, they follow it up with Y”. So you go into conversations with Chinese speakers and just perform these patterns. It’s all just sounds to you. You don’t recognize words and you can’t even tell from context what’s happening. If you do that well enough you are technically speaking Chinese, but you will never have any intent or understanding behind what you say. That’s basically LLMs.
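    The “note the most common response” trick from the analogy can be sketched as a toy lookup table. The mini-dialogues below are invented placeholders standing in for the “millions of conversations”; the point is that the mimic never models meaning, only frequency:

```python
from collections import Counter, defaultdict

# Invented stand-ins for recorded conversations: (utterance, observed reply).
conversations = [
    ("ni hao", "ni hao"),
    ("ni hao", "ni hao ma"),
    ("ni hao", "ni hao"),
    ("xie xie", "bu ke qi"),
]

# Count which reply follows each utterance -- pure pattern statistics.
replies = defaultdict(Counter)
for utterance, response in conversations:
    replies[utterance][response] += 1

def mimic(utterance):
    """Return the most frequently recorded reply; no understanding involved."""
    return replies[utterance].most_common(1)[0][0]

print(mimic("ni hao"))   # -> ni hao
print(mimic("xie xie"))  # -> bu ke qi
```

    Scaled up enormously, this is the gist of the argument: the output looks like conversation, but the mechanism is frequency lookup, not comprehension.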


  • No, the intent and the consequences of an action are generally taken into consideration in discussions of ethics and in legislation. Additionally, this is not just a matter of ToS. What OpenAI does is create and distribute illegitimate derivative works. They are relying on the argument that what they do is transformative use, which is not really congruent with what “transformative use” has meant historically. We will see in time what the courts have to say about this. But in any case, it will not be judged the same way as a person using a tool just to skip ads. And Revanced is different from both of the above because it is a non-commercial service.


  • It’s definitely not “draconian” to make enshittification illegal. But you don’t regulate the turning-to-shit part. You regulate the part where they offer a service for free or too cheap so that they kill the competition. This is called anti-competitive behavior, and we supposedly address it already. You also regulate what an EULA can enforce, and the ability of companies to change an EULA after a user has agreed to it. Again, these concepts already exist in law.

    We’ve essentially already identified these problems and decided that we need to address them, but we have been ineffective in doing so for various reasons.


  • Humans are not generally allowed to do what AI is doing! You talk about copying someone else’s “style” because you know that “style” is not protected by copyright, but that is a false equivalence. An AI is not copying “style”; it copies every discernible pattern of its input. It is just as likely to copy Walt Disney’s drawing style as it is to copy the design of Mickey Mouse. We’ve seen countless examples of AIs copying characters, verbatim passages of text, and snippets of code.

    Imagine if a person copied Mickey Mouse’s character design and got sued for copyright infringement. Then they went to court and their defense was that they downloaded copies of the original works without permission and studied them for the sole purpose of imitating them. They would be admitting that every perceived similarity is intentional. Do you think they would not be found guilty of copyright infringement? AI is this example taken to the extreme. It is not just creating something similar; it is by design trying to maximize the similarity of its output to its training data. It is as uncreative as is mathematically possible. The AI’s only trick is that it threw so much material into its mixer of training data that you generally can’t trace the output back to a specific input. But the math is clear. And while it’s obvious that no sane person will use a copy of Mickey Mouse just because an AI produced it, the same cannot be said for characters from lesser-known works, passages from obscure books, and code snippets from small free software projects.

    In addition to the above, we allow humans to engage in potentially harmful behavior for various reasons that do not apply to AIs.

    • “Innocent until proven guilty” is fundamental to our justice systems. The same does not apply to inanimate objects. E.g. a firearm is restricted because of the danger it poses even if it has never been used to shoot someone, whereas a person is only liable for the damage they have caused, never for their potential to cause it.
    • We care about people’s well-being. We would not ban people from enjoying art merely because they might copy it; that would sacrifice too much. However, no harm is done to an AI when it is prevented from being trained, because an AI is not a person with feelings.
    • Human behavior is complex and hard to control. A person might unintentionally copy protected elements of works they were influenced by, but that is hard to tell in most cases. An AI has the sole purpose of copying patterns, with no other input.

    For all of the above reasons, we choose to err on the side of caution when restricting human behavior, but we have no reason to do the same for AIs, or anything inanimate.

    In summary, we do not allow humans to do what AIs are doing now and even if we did, that would not be a good argument against AI regulation.



  • I have my own backup of the git repo and I downloaded this to compare and make sure it’s not some modified (potentially malicious) copy. The most recent commit on my copy of master was dc94882c9062ab88d3d5de35dcb8731111baaea2 (4 commits behind OP’s copy). I can verify:

    • that the history up to that commit is identical in both copies
    • after that commit, OP’s copy only has changes to translation files which are functionally insignificant

    So this does look to be a legitimate copy of the source code as it appeared on GitHub!

    Clarifications:

    • This was just a random check; I do not have any reason to be suspicious of OP personally
    • I did not check branches other than master (yet?)
    • I did not (and cannot) check the validity of anything beyond the git repo
    • You don’t have a reason to trust me more than you trust OP… It would be nice if more people independently checked and verified against their own copies.

    I will be seeding this for the foreseeable future.
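    For anyone who wants to run the same kind of check, here is a self-contained sketch of the procedure. It builds two throwaway repos (standing in for my backup and the downloaded copy; paths, messages, and the extra commit are made up for the demo) and then performs the two verification steps:

```shell
set -e
tmp=$(mktemp -d)
# Throwaway repo standing in for "my backup":
git init -q "$tmp/mine"
git -C "$tmp/mine" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "shared history"
BASE=$(git -C "$tmp/mine" rev-parse HEAD)        # last commit in the backup
# Throwaway "downloaded copy" with one extra commit on top:
git clone -q "$tmp/mine" "$tmp/theirs"
git -C "$tmp/theirs" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "translation updates"
# 1. If BASE exists in the downloaded copy, the entire history up to it is
#    identical: a commit hash covers all of its ancestors.
git -C "$tmp/theirs" cat-file -e "$BASE" && echo "shared history verified"
# 2. List what the downloaded copy adds on top of BASE.
git -C "$tmp/theirs" log --oneline "$BASE..HEAD"
```

    With real repos you would point the two paths at your own clone and the downloaded one, and use the actual commit hash above as BASE.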








  • If you have a large enough bankroll and continuously double your bet after a loss, you can never lose without a table limit.

    Unless your bankroll is infinite, you always lose in the average case. My math was just an example to show the point with concrete numbers.

    In truth, it is trivial to prove that there is no winning strategy in roulette. If a strategy is just a series of bets, then its expected value is the sum of the expected values of those bets. Every bet in roulette has a negative expected value; therefore, every strategy has a negative expected value as well. I’m not saying anything ground-breaking; you can read a better write-up of this idea in the Wikipedia article.
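    A quick Monte Carlo check of the claim. The bankroll size, seed, and doubling sequence here are illustrative choices of mine, not anything from the discussion; the simulation plays the classic double-after-loss strategy until a win or bankruptcy:

```python
import random

random.seed(0)  # deterministic demo

def martingale(bankroll=1023, p_win=18/37):
    """Bet $1 on red, double after every loss, stop on a win or when broke.

    Returns the net result of one session. bankroll=1023 allows at most
    ten bets ($1 + $2 + ... + $512) on a single-zero wheel.
    """
    bet, net = 1, 0
    while bet <= bankroll + net:        # can we still cover the next bet?
        if random.random() < p_win:
            return net + bet            # one win recovers all losses plus $1
        net -= bet
        bet *= 2
    return net                          # busted: net == -bankroll here

n = 500_000
avg = sum(martingale() for _ in range(n)) / n
print(avg)  # negative on average, even though the vast majority of sessions win
```

    Almost every session ends $1 up, but the rare bust costs the whole bankroll, and the average comes out negative, exactly as the expected-value argument predicts.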

    If you don’t think that’s true, you are welcome to show your math which proves a positive expected value. Otherwise, saying I’m “completely wrong” means nothing.


  • So help me out here, what am I missing?

    You’re forgetting that not all outcomes are equal. You’re just comparing the probability of winning vs the probability of losing. But when you lose you lose much bigger. If you calculate the expected outcome you will find that it is negative by design. Intuitively, that means that if you do this strategy, the one time you will lose will cost you more than the money you made all the other times where you won.

    I’ll give you a short example so that we can calculate the probabilities relatively easily. We make the following assumptions:

    • You have $13, which means you can only make 3 bets: $1, $3, $9
    • The roulette has a single 0. This is the best case scenario. So there are 37 numbers and only 18 of them are red, which gives red an 18/37 chance to win. The zero is why the math always works out in the casino’s favor
    • You will play until you win once or until you lose all your money.

    So how do we calculate the expected outcome? These outcomes are mutually exclusive, so if we can define the (expected gain * probability) of each one, we can sum them together. So let’s see what the outcomes are:

    • You win on the first bet. Gain: $1. Probability: 18/37.
    • You win on the second bet. Gain: $2 (lose $1, then win $3). Probability: 19/37 * 18/37 (lose once, then win once).
    • You win on the third bet. Gain: $5 (lose $1 and $3, then win $9). Probability: (19/37) ^ 2 * 18/37 (lose twice, then win once).
    • You lose all three bets. Gain: -$13. Probability: (19/37) ^ 3 (lose three times).

    So the expected outcome for you is:

    $1 * (18/37) + $2 * (19/37) * (18/37) + $5 * (19/37)^2 * (18/37) - $13 * (19/37)^3 = -$0.1328…

    So you lose a bit more than $0.13 on average. Notice how the probabilities of winning $1 or $2 are much higher than the probability of losing $13, but the amount you stand to lose is much bigger.
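    The arithmetic can be double-checked with exact fractions; this just re-computes the four outcomes listed above:

```python
from fractions import Fraction

p_win = Fraction(18, 37)    # red on a single-zero wheel
p_lose = Fraction(19, 37)

# (net gain in $, probability) for each mutually exclusive outcome:
outcomes = [
    (1,   p_win),                 # win the first bet
    (2,   p_lose * p_win),        # lose $1, then win $3
    (5,   p_lose**2 * p_win),     # lose $1 and $3, then win $9
    (-13, p_lose**3),             # lose all three bets
]

assert sum(p for _, p in outcomes) == 1   # the outcomes are exhaustive
ev = sum(gain * p for gain, p in outcomes)
print(ev, float(ev))   # -6727/50653, approximately -0.1328
```

    The exact value is -6727/50653 of a dollar per session, matching the -$0.1328… above.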

    Others have mentioned betting limits as a reason you can’t do this. That’s wrong. There is no winning strategy. The casino always wins given enough bets. Betting limits just keep the short-term losses under control, making the business more predictable.


  • I’m not 100% comfortable with AI gfs and the direction society could be heading. I don’t like that some people have given up on human interaction and the struggle for companionship, and feel the need to resort to a poor artificial substitute for genuine connection.

    That’s not even the scary part. What we really should be uncomfortable with is this very closed technology having so much power over people. There’s going to be a handful of gargantuan immoral companies controlling a service that the most emotionally vulnerable people will become addicted to.



  • Exactly this. I can’t believe how many comments I’ve read accusing the AI critics of holding back progress with regressive copyright ideas. No, the regressive ideas are already there, codified as law, holding the rest of us back. Holding AI companies accountable for their copyright violations will force them to either push to reform the copyright system completely, or to change their practices for the better (free software, free datasets, non-commercial uses, real non-profit orgs for the advancement of the technology). Either way we have a lot to gain by forcing them to improve the situation. Giving AI companies a free pass on the copyright system will waste what is probably the best opportunity we have ever had to improve the copyright system.