Thanks for posting, please ignore the stochastic luddites 🙂
One must imagine Sisyphus strapping
Switchcraft!
Wait, it's all Ohio?
I don’t know… admittedly, I only remember vague bits from Tom Clancy novels, but didn’t Soviet attack subs wait outside the home ports and try to stay on the SSBNs’ tails, without ever managing to?
I should dig up The Hunt for Red October, I guess, but given current geopolitics maybe Red Storm Rising is a better fit :)
These subs all have home ports and can be observed when they leave, so that’s probably not a big deal?
kcatta evissaM
…can’t argue with that
Thanks, that was interesting. I kept thinking that this reads like something out of Quanta Magazine, and then at the end there was an attribution to them :)
To all the reflexive AI-downvoters: This is about an application of machine learning, not an LLM. Don’t behave like an advanced autocomplete; think before you click :P
Thanks for posting, don’t mind the downvotes from the luddites :D
Well, natural language processing is placed in the trough of disillusionment and projected to stay there for years. ChatGPT was released in November 2022…
Arrows
Pointless
Pick one
If you’re logged in to lemmy.world, I think you can click the hamburger menu top right and then “Create community”?
Edit: sorry, just noticed your account is on programming.dev, where there’s no such option? Then I’m afraid I don’t know :/
Edit 2: From the programming.dev sidebar:
Community Creation
Communities in our instance are created from our community request zone. If you have an idea for a community that fits our instance that hasn’t been made already, feel free to create a post for it there. Communities will be considered for creation if there’s enough interest in the idea shown by people upvoting it.
From TFA:
For ASD screening on the test set of images, the AI could pick out the children with an ASD diagnosis with a mean area under the receiver operating characteristic (AUROC) curve of 1.00. AUROC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUROC of 0.0; one whose predictions are 100% correct has an AUROC of 1.0, indicating that the AI’s predictions in the current study were 100% correct. There was no notable decrease in the mean AUROC, even when 95% of the least important areas of the image – those not including the optic disc – were removed.
They at least define how they get the 100% value, but I’m not an AIologist so I can’t tell if it is reasonable.
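For anyone curious what that metric actually measures: AUROC can be read as the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one, which is why 1.0 means a perfect ranking and 0.0 a perfectly inverted one. A minimal pure-Python sketch (the labels and scores below are made up for illustration, not from the study):

```python
def auroc(labels, scores):
    """Rank-based AUROC: fraction of (positive, negative) pairs where
    the positive example gets the higher score; ties count as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that ranks every positive image above every negative one
# gets AUROC 1.0; a perfectly wrong ranking gets 0.0.
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
print(auroc([0, 0, 1, 1], [0.9, 0.8, 0.2, 0.1]))  # → 0.0
print(auroc([0, 1], [0.5, 0.5]))                  # → 0.5 (coin flip)
```

So a mean AUROC of 1.00 on the test set means the model separated the two groups perfectly by score, though as the comment notes, whether that generalizes is a separate question.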
Column A: yes
Column B: also yes
Rome and Rome, Georgia
The board that fired him was that of the nonprofit, so they don’t answer to shareholders.
Another occupation ruined by millennials