The model does have a lot of advantages over sdxl with the right prompting, but it seems to fall apart in prompts with more complex anatomy. Hopefully the community can fix it up once we have working trainers.
On Discord, the black hole for useful information.
The names missing from the list say more about the board’s purpose than the names on it.
All of Firefox’s AI initiatives, including translation and chat, are completely local. They have no impact on privacy.
The “why would they make this” people don’t understand how important this type of research is. It’s important to show what’s possible so that we can be ready for it. There are many bad actors already pursuing similar tools if they don’t have them already. The worst case is being blindsided by something not seen before.
I’m sure the machine running it was quite warm actually.
Partnered with Adobe Research, so we’re never going to get the actual model.
This has more to do with how much chess data was fed into the model than any kind of reasoning ability. A 50M-parameter model can learn to play at 1500 Elo with enough training: https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html
The “AI PC” specification requires a minimum of 40 TOPS of AI compute, which is over double the 18 TOPS in the current M3s. Direct comparison doesn’t really work though.
What really matters is how it’s made available for development. The Neural Engine is basically a black box. It can’t be incorporated into any low-level projects because it’s only made available through a high-level Swift API. Intel by comparison seems to be targeting PyTorch acceleration with their libraries.
Do another 2 day blackout. That’ll show 'em.
This article is grossly overstating the findings of the paper. It’s true that bad generated data hurts model performance, but that’s true of bad human data as well. The paper used OPT-125M as its generator model, a very small research model with fairly low-quality and often incoherent outputs. The higher-quality generated data which makes up the majority of the generated text online is far less of an issue. The use of generated data to improve output consistency is a common practice for both text and image models.
Its size makes it basically useless. It underperforms models even in its active weight class. It’s nice that it’s available, but Grok-0 would have been far more interesting.
I feel like the whole Reddit AI deal is a trap. If any real judgment comes down about data use Reddit is an easy scapegoat. There was basically nothing stopping them from scraping the site for free.
I got locked out of my now 8+ year old account because I had set it up with an old ISP-provided email which has since been deactivated. I can’t migrate because I have to verify with the email, and I can’t change the email without setting up security questions, which also requires the email. Support can do nothing.
I don’t think they care about the images being used, just the disruption of service. It’s pretty clear that this wasn’t a coordinated thing from Stability and was at most a lone individual acting in bad faith.
It’s pretty ironic though that the company that practices mass scraping has no rate limits to prevent outages due to mass scraping.
There should be no difference because the video track hasn’t been touched. Some software will display the length of the longest track rather than the length of the main video track. It’s likely that the audio track was originally longer than the video track, and because of the offset it’s now shorter.
You can use tools like ffmpeg and mediainfo to count the actual frames in each to verify.
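A minimal sketch of that check with ffprobe (ships with ffmpeg); the synthetic test.mp4 is just a stand-in for your actual file:

```shell
# Generate a short self-contained test clip: 2 seconds at 25 fps
# (replace test.mp4 with your real file in practice)
ffmpeg -v error -f lavfi -i testsrc=duration=2:rate=25 -y test.mp4

# Count the actual decoded video frames
# (can be slow on large files since it decodes the whole stream)
ffprobe -v error -select_streams v:0 -count_frames \
  -show_entries stream=nb_read_frames -of csv=p=0 test.mp4

# Show per-track type and duration to spot an audio track that is
# shorter or longer than the video track
ffprobe -v error -show_entries stream=codec_type,duration -of csv=p=0 test.mp4
```

If the frame counts match before and after remuxing, the video track really is untouched and the length difference is just the player picking a different track to report.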
According to the article:
They are asking a federal judge to say yes to this, specifically:
Developing or distributing software, including Yuzu, that in its ordinary course functions only when cryptographic keys are integrated without authorization, violates the Digital Millennium Copyright Act’s prohibition on trafficking in devices that circumvent effective technological measures, because the software is primarily designed for the purpose of circumventing technological measures.
So I think they’re definitely intending to set precedent with this case, though this settlement hasn’t been accepted by the court yet.
I believe USB-C is the only connector supported for carrying DisplayPort signals other than DisplayPort itself.
The biggest issue with USB-C for display in my opinion is that cable specs vary so much. A cable with a Type-C end could carry anywhere from 60 MB/s to 10 GB/s and deliver anywhere from 5 W to 240 W. What’s worse is that most aren’t labeled, so even if you know what spec you need, you’re going to have a hell of a time finding it in a pile of identical black cables.
Not that I dislike USB-C. It’s a great connector, but the branding of USB has always been a mess.
What’s the deal with Alpine not using GNU? Is it a technical or ideological thing? Or is it another “because we can” type distro?