  • The UN is supposed to be a toothless, executively dysfunctional institution; that’s a feature, not a bug. Its members are nations, whose entire purpose is to govern their regions of the planet. If the UN itself had the power to make nations do things, it wouldn’t be the United Nations, it’d be the One World Government, and its most powerful members absolutely do not want it to be that, so it isn’t.

    It’s supposed to be an idealized, nonviolent representation of geopolitics that is always available to nations as a venue for civilized diplomacy. That’s why nuclear powers were given veto power: they effectively have veto power over the question of “should the human race continue existing” and the veto is basically a reflection of that. We want issues to get hashed out with words in the UN if possible, rather than in real life with weapons, and that means it must concede to the power dynamics that exist in real life. The good nations and the bad nations alike have to feel like they get as much control as they deserve, otherwise they take their balls and go home.

    It’s frustrating to see the US or Russia or China vetoing perfectly good resolutions and everyone else just kind of going “eh, what can you do, they have vetoes,” but think through the alternative: everyone has had enough and decides “no more veto powers.” The UN starts passing all the good resolutions. But the UN only has the power that member nations give it, so enforcement would have to mean some nations trying to impose their will on the ones that would’ve vetoed. Now we’ve traded bad vetoes in the UN for real-world conflict instead.

    What that “get rid of the vetoes so the UN can get things done” impulse is actually driving at is “we should have a one world government that does good things,” which, yeah, that’d be great, but it’s obviously not happening any time soon. Both articles mention issues and reforms that are worthy of consideration, but the fundamental structure of the UN is always going to reflect the flaws of the world because it’s supposed to do that.


  • Archive Team often uses the Internet Archive to share the things they save and obviously they have a shared goal of saving a copy of everything ever made, but they aren’t the same people. The Archive Team is a vigilante white hat hacker group (well, maybe a little bit grey), and running a Warrior basically means you’re volunteering to be part of their botnet. When a website is going to be shut down, they’ll whip together a script and push it out to the botnet to try to grab as much of the dying site as they can, and when there’s more downtime they have some other projects, like trying to brute force all those awful link shorteners so that when they inevitably die, people can still figure out where each link was supposed to point.
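
    To give a rough sense of what that shortener project amounts to, here’s a toy sketch, not Archive Team’s actual tooling: enumerate short codes, ask the service where each one redirects without following the redirect, and save the mapping before the site disappears. The shortener domain below is made up, and the sketch assumes the third-party requests package.

    ```python
    # Toy link-shortener scraper: enumerate codes, record where each redirects.
    # Illustrative only; real projects batch, throttle, and spread this work
    # across many volunteers rather than hammering the site from one machine.
    import itertools
    import string

    import requests  # third-party: pip install requests

    BASE = "https://sho.rt/"  # hypothetical shortener domain
    ALPHABET = string.ascii_lowercase + string.digits

    def resolve(code: str):
        """Return the redirect target for a short code, or None."""
        resp = requests.head(BASE + code, allow_redirects=False, timeout=10)
        if resp.status_code in (301, 302, 307, 308):
            return resp.headers.get("Location")
        return None

    mapping = {}
    for code in map("".join, itertools.product(ALPHABET, repeat=2)):
        target = resolve(code)
        if target:
            mapping[code] = target  # e.g. {"a1": "https://example.com/page"}
    ```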


  • OPML files really aren’t much more than a list of the feeds you’re subscribed to. Individual posts or articles aren’t in there. I would expect that importing a second OPML file would just add more subscriptions, but it’d be up to the reader app to decide what it does.
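
    For illustration, OPML is just XML with one <outline xmlUrl="..."> element per feed, so an import more or less boils down to collecting those URLs and adding the new ones. Here’s a minimal sketch; the file names and the dedupe-by-URL behavior are assumptions, not what any particular reader actually does.

    ```python
    # Minimal sketch of reading and merging OPML subscription lists.
    # OPML stores each feed as an <outline> element with an xmlUrl attribute;
    # folders are <outline> elements without one.
    import xml.etree.ElementTree as ET

    def feed_urls(opml_path: str) -> set[str]:
        """Collect the feed URLs listed in an OPML file."""
        tree = ET.parse(opml_path)
        return {
            node.get("xmlUrl")
            for node in tree.iter("outline")
            if node.get("xmlUrl")
        }

    existing = feed_urls("subscriptions.opml")   # hypothetical file names
    imported = feed_urls("second-export.opml")

    # Importing a second file is basically a union; whether duplicates get
    # merged or listed twice is up to the reader app.
    combined = existing | imported
    print(f"{len(combined)} unique feeds after import")
    ```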


  • If you ask an LLM to help you with a legal brief, it’ll come up with a bunch of stuff for you, and some of it might even be right. But it’ll very likely do things like make up a case that doesn’t exist, or misrepresent a real case, and as has happened multiple times now, if you submit that work to a judge without a real lawyer checking it first, you’re going to have a bad time.

    There’s a reason LLMs make stuff up like that, and it’s because they have been very, very narrowly trained when compared to a human. The training process is almost entirely getting good at predicting what words follow what other words (there’s a toy sketch of that objective at the end of this comment), but humans get that and so much more. Babies aren’t just associating the sounds they hear, they’re also associating the things they see, the things they feel, and the signals their body is sending them. Babies are highly motivated to learn and predict the behavior of the humans around them, and as they get older and more advanced, they get rewarded for creating accurate models of the mental state of others, mastering abstract concepts, and doing things like making art or singing songs. Their brains are many times bigger than even the biggest LLM, their initial state has been primed for success by millions of years of evolution, and the training set is every moment of human life.

    LLMs aren’t nearly at that level. That’s not to say what they do isn’t impressive, because it really is. They can synthesize unrelated concepts in a stunningly human way, even things that they’ve never been trained on specifically. They’ve picked up a lot of surprising nuance just from the text they’ve been fed, and it’s convincing enough to think that something magical is going on. But ultimately, they’ve been optimized to predict words, and that’s what they’re good at, and although they’ve clearly developed some impressive skills to accomplish that task, it’s not even close to human level. They spit out a bunch of nonsense when what they should be saying is “I have no idea how to write a legal document, you need a lawyer for that”, but that would require them to have a sense of their own capabilities, a sense of what they know and why they know it and where it all came from, knowledge of the consequences of their actions, and a desire to avoid causing harm, and they don’t have that. And how could they? Their training didn’t include any of that; it was mostly about words.

    One of the reasons LLMs seem so impressive is that human words are a reflection of the rich inner life of the person you’re talking to. You say something to a person, and your ideas are broken down and manipulated in an abstract manner in their head, then turned back into words forming a response which they say back to you. LLMs are piggybacking off of that a bit: by getting good at mimicking language, they’re able to hide that their heads are relatively empty. Spitting out a statistically likely answer to the question “as an AI, do you want to take over the world?” is very different from considering the ideas, forming an opinion about them, and responding with that opinion. LLMs aren’t just doing statistics, but you don’t have to go too far along that spectrum before the answers start seeming thoughtful.
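
    To make “predicting what words follow what other words” concrete, here’s that objective shrunk down to a toy bigram counter. Real models learn a far richer version of this with a neural network over subword tokens and enormous corpora, but the training signal has the same shape: given the context, guess the next token.

    ```python
    # Toy next-word predictor: count which word follows which in a tiny corpus,
    # then answer with the statistically most likely follower. This is the
    # word-prediction objective in miniature, not how a real LLM is built.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict(prev_word: str):
        """Return the most frequently seen next word, or None if unseen."""
        candidates = following.get(prev_word)
        if not candidates:
            return None
        return candidates.most_common(1)[0][0]

    print(predict("the"))  # "cat" (follows "the" twice; "mat" only once)
    print(predict("cat"))  # "sat" ("sat" and "ate" are tied at one each)
    ```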


  • In its complaint, The New York Times alleges that because the AI tools have been trained on its content, they sometimes provide verbatim copies of sections of Times reports.

    OpenAI said in its response Monday that so-called “regurgitation” is a “rare bug,” the occurrence of which it is working to reduce.

    “We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use,” OpenAI said.

    The tech company also accused The Times of “intentionally” manipulating ChatGPT or cherry-picking the copycat examples it detailed in its complaint.

    https://www.cnn.com/2024/01/08/tech/openai-responds-new-york-times-copyright-lawsuit/index.html

    The thing is, it doesn’t really matter if you have to “manipulate” ChatGPT into spitting out training material word-for-word; the fact that it’s possible at all is proof that, intentionally or not, that material has been encoded into the model itself. That might still be fair use, but it’s a lot weaker than the original argument, which was that nothing of the original material really remains after training, because it’s all synthesized and blended with everything else to create something entirely new that doesn’t replicate the original.
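
    One rough way to see what “encoded into the model” means in practice: prompt the model with the opening of an article, then measure the longest word-for-word run its continuation shares with the real text. The strings below are stand-ins (in an actual test the first would be the article and the second the model’s output); only the overlap measurement itself is concrete.

    ```python
    # Measure the longest verbatim word-for-word overlap between two texts.
    # A long shared run in a model's output is evidence that the original
    # text survived training inside the model, however it had to be prompted.
    def longest_verbatim_run(original: str, generated: str) -> int:
        """Length, in words, of the longest common word-for-word run."""
        a, b = original.split(), generated.split()
        best = 0
        prev = [0] * (len(b) + 1)
        for i in range(1, len(a) + 1):
            cur = [0] * (len(b) + 1)
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:
                    cur[j] = prev[j - 1] + 1
                    best = max(best, cur[j])
            prev = cur
        return best

    # Stand-in strings; in practice, compare an article against model output.
    original = "the quick brown fox jumps over the lazy dog near the river"
    generated = "reports say the quick brown fox jumps over the lazy dog daily"
    print(longest_verbatim_run(original, generated))  # 9 words in common
    ```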