• 0 Posts
  • 90 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • This would be a good point if that were the explicit purpose of the AI. Which it isn’t. It can quote certain information verbatim despite not storing that data verbatim, through the process of learning, for the same reason we can.

    I can ask you to quote famous lines from books all day as well. Knowing those lines doesn’t mean you’ve infringed on copyright. Now, if you were to put them to paper and sell them, you might get a cease and desist or a lawsuit. Therein lies the difference: your goal would be explicitly to infringe on the specific expression of those words. Any human who explicitly tries to get an AI to produce infringing material… would be infringing. And unknowing infringement… well, there are countless court cases where both sides think they did nothing wrong.

    You don’t even need AI for that: if you followed the Infinite Monkey Theorem and just happened to stumble upon a work falling under copyright, you still couldn’t sell it, even though it was produced by a purely random process.

    Another great example is the Mona Lisa. Most people know what it looks like, and anyone with sufficient talent could mimic it 1:1. However, there are numerous adaptations of the Mona Lisa that are not infringing (by today’s standards), because they transform the work to the point where it’s no longer the original expression but a re-expression of the same idea. Anything less than that is pretty much completely safe infringement-wise.

    You’re right though that OpenAI tries to cover their ass by implementing safeguards, which is to be expected: it’s a legal argument in court that once they become aware of a problem, they have to take steps to limit the harm. They indeed can’t prevent it completely, but it’s the effort that counts. Practically none of that kind of moderation is 100% effective. Otherwise we’d live in a pretty good world.


  • I am kind of afraid that if voting becomes more public than it already is, it will lead to exactly the kind of “zero-content downvote” accounts mentioned in the ticket. Some people are just wildly irrational when it comes to touchy subjects, and ain’t nobody got time to spend an eternity dismantling their beliefs so they understand the nuance they’re missing (if they even let you). So it kind of incentivizes people to create an account like that, to ensure a crazy person doesn’t latch on to the account they’re trying to have normal discussions with.

    But I understand that they could technically already do this if they wanted to. So perhaps it will be fine, as long as we as a community fight against vote viewing being weaponized.


  • I kinda get not starting shit with other instances (although Hexbear should be the last to be able to invoke that), and it is still her instance and her rules there. But yeah, it would not make me happy to be part of a community under those rules. Being a safe space doesn’t mean you have to shield bad actors from criticism, especially if she’s not going to be respectful to good actors. And it really is weird how she comes to some of her conclusions (how is a non-existent person trans, and where did she learn this?) and then still expects the same response when that turns out to be wrong.




  • Pro tip: most company logos come off easily with precise sanding tools you can get in hardware stores. Coming from someone who’s had to buy the perfect pair of shoes (which were also the cheapest) that for some reason had one fugly logo on the back ruining it all. Sadly you can’t really return them afterwards, so only do it if you’re sure you’ll keep them, but sometimes that’s enough.

    EDIT: To clarify - I totally agree with the comic. This isn’t an endorsement to buy brand clothing. I’m saying that sometimes you have no other choice, and this is the way to give the company the middle finger while still getting the quality you desire.





  • And even with that base set, even if a computer could theoretically try all trillion possibilities quickly, it’ll make a ton of noise, get throttled, and likely lock the account out long before it has a chance to try even the tiniest fraction of them.

    One small correction - this just isn’t how the vast majority of password cracking happens. You’ll most likely get throttled before you try 5 passwords and banned before you get to try 50, and what you’re trying to do is extremely traceable. Most cracking happens after a data breach, where the cracker has unrestricted local access to (hopefully) salted password hashes.

    People just often re-use their password, or forget to change it after a breach. That’s where leaked password hashes get their value, if you can crack them. So really, this is a non-factor. But the rest stands.
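
    To illustrate, here’s a minimal Python sketch (all names and data are made up) of what that offline attack looks like: once the salted hashes have leaked, the attacker just hashes wordlist candidates with each user’s salt locally and compares - no server, no throttling, no bans.

    ```python
    # Minimal sketch of offline cracking against leaked salted hashes.
    # Users, salts and the wordlist are made up for illustration; real
    # crackers (e.g. hashcat) try billions of guesses per second on GPUs.
    import hashlib

    def hash_password(password: str, salt: bytes) -> bytes:
        # One round of SHA-256 - far too fast for real password storage;
        # sites should use a slow KDF like bcrypt, scrypt or Argon2.
        return hashlib.sha256(salt + password.encode()).digest()

    # What a breach hands the attacker: a salt and a hash per user.
    salt = b"\xde\xad\xbe\xef"
    leaked = {"alice": (salt, hash_password("hunter2", salt))}

    wordlist = ["password", "123456", "letmein", "hunter2"]

    for user, (user_salt, stored) in leaked.items():
        for guess in wordlist:
            if hash_password(guess, user_salt) == stored:
                print(user, "->", guess)  # recovered entirely offline
                break
    ```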


  • While this comic is good for people who do the former or have very short passwords, it distracts from the fact that humans simply shouldn’t try to remember more than one really good password (for a password manager) and should apply proper supplementary techniques like 2FA. One fully random password of enough length will do better than both of these approaches, and it’s not even close. It will take a week or so of typing it to properly memorize, but once you do, everything beyond that will be fully random too, and remembered by the password manager.
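
    For reference, “fully random of enough length” is cheap to generate; here’s a minimal sketch using Python’s standard secrets module (the exact alphabet and length are just illustrative choices):

    ```python
    # Sketch: one fully random master password worth memorizing.
    # 20 chars from a ~70-symbol alphabet is roughly 122 bits of entropy,
    # far beyond what human-invented passphrase schemes achieve.
    import secrets
    import string

    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    master_password = "".join(secrets.choice(alphabet) for _ in range(20))
    print(master_password)
    ```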


  • Depends on what kind of AI enhancement. If it’s just more things nobody needs and solves no problem, it’s a no-brainer. But for computer graphics, for example, DLSS is a feature people do appreciate, because it makes sense to apply AI there. Who doesn’t want faster and perhaps better graphics from AI rather than brute force, which also saves on electricity costs?

    But that isn’t the kind of thing most people on a survey would even think of, since the benefit is readily apparent and doesn’t even need to be explicitly sold as “AI”. They’re most likely thinking of the kind of products where the manufacturer slapped an “AI powered” sticker on because their stakeholders told them it would increase sales, or because it let them overstate the value of the product.

    Of course people are going to reject white-collar scams if they think that’s what “AI enhanced” means. If legitimate use cases with clear advantages are produced, they will speak for themselves, and I don’t think people would be opposed. But obviously, there are a lot more companies that want to ride the AI wave than there are legitimate use cases, so there will be quite some snake oil being sold.




  • Ideas are great - but execution is king, because execution is where most of your creativity actually makes a difference in how the idea is represented. If you have a good idea and a good execution, it’s very hard for someone to take that away from you. If you have a good idea but execute it poorly, someone taking that idea and executing it better will leave you in the dust - and it’s the better execution, not the idea, that makes that possible.

    Better execution isn’t always fair though - we often start out in life unable to compete for lack of experience, financing, and publicity. But it’s basically how the entire entertainment industry works: everyone shuffles ideas around and tries to execute them better (or differently enough) than the previous time the idea made the rounds.

    So after finding a good idea, get people hooked on your execution, and they won’t be able to get that anywhere else - unless someone comes along and does it even better, but with practice that someone can also be you.



  • I’m not sure where you think I’m giving it too much credit, because as far as I read it we already totally agree lol. You’re right, methods exist to diminish the effect of hallucinations - that’s what the scientific method is. Current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than by being told what is correct by the humans training it (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them with something probable - which is likely going to be bullshit.

    My point was just that to truly fix it would be to basically create an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.



  • It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, so we don’t call them hallucinations. For example, the signals coming from our feet take longer to process than those from our eyes, so our brain has to predict information to create a coherent experience. It’s also why we don’t notice our blinks, or see the blind spot our eyes have.

    AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

    Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

    When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.


  • It’s funny how something like this gets posted every few days and people keep falling for it like it’s somehow going to end AI. The people who make these models are acutely aware of how to avoid model collapse.

    It’s totally fine for AI models to train on AI-generated content that is of high enough quality. Part of the research behind training models is building datasets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a ‘bad’ example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn’t originally in the training data. There’s no reason that can’t be good training data itself.
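
    A minimal sketch of what that curation step could look like (the names, scores, and threshold are hypothetical, not any lab’s actual pipeline): score every sample, keep synthetic content only when it clears a quality bar, and optionally keep the rejects as labelled ‘bad’ examples.

    ```python
    # Hypothetical sketch of filtering AI-generated samples before training.
    # In practice the quality scores would come from a separate classifier.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        caption: str     # text description paired with the content
        synthetic: bool  # True if AI-generated
        quality: float   # 0.0-1.0 score from a (hypothetical) quality model

    def split_training_set(samples, min_quality=0.8):
        """Keep human data and high-quality synthetic data as 'good';
        route low-quality synthetic data to a 'bad' pile the model can
        learn to avoid, instead of blindly training on everything."""
        good = [s for s in samples if not s.synthetic or s.quality >= min_quality]
        bad = [s for s in samples if s.synthetic and s.quality < min_quality]
        return good, bad

    corpus = [
        Sample("hand-written essay", synthetic=False, quality=0.6),
        Sample("fluent model summary", synthetic=True, quality=0.9),
        Sample("degenerate model output", synthetic=True, quality=0.2),
    ]
    good, bad = split_training_set(corpus)
    print([s.caption for s in good])  # human essay + high-quality synthetic
    print([s.caption for s in bad])   # degenerate output, kept as a bad example
    ```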