- cross-posted to:
- autism@lemmy.world
- tech@lemmit.online
AI-screened eye pics diagnose childhood autism with 100% accuracy
It’s apparently 100% accurate at classifying autism within groups that have already been flagged as high-risk for ASD. It is not good at just any old picture.
Retinal photographs of individuals with ASD were prospectively collected between April and October 2022, and those of age- and sex-matched individuals with TD were retrospectively collected between December 2007 and February 2023.
TD stands for “typical development.”
So it correctly differentiated between children diagnosed with ASD and those without it with 100% accuracy.
The caveat is that they excluded children with ASD plus other conditions that might have muddied the waters, so it may not be 100% effective at distinguishing between all cases of ASD vs TD.
But given a retinal photograph of someone who hasn’t been diagnosed with ASD, there’s no reason to think it would fail to reject the diagnosis, or fail to confirm it if ASD were the only factor.
And this appears to be based on biological differences that have already been researched:
Considering that a positive correlation exists between retinal nerve fiber layer (RNFL) thickness and the optic disc area,32,33 previous studies that observed reduced RNFL thickness in ASD compared with TD14-16 support the notable role of the optic disc area in screening for ASD. Given that the retina can reflect structural brain alterations as they are embryonically and anatomically connected,12 this could be corroborated by evidence that brain abnormalities associated with visual pathways are observed in ASD. First, reduced cortical thickness of the occipital lobe was identified in ASD when adjusted for sex and intelligence quotient.34 Second, ASD was associated with slower development of fractional anisotropy in the sagittal stratum where the optic radiation passes through.35 Interestingly, structural and functional abnormalities of the visual cortex and retina have been observed in mice that carry mutations in ASD-associated genes.
And given that the heat maps of what the model was using to differentiate were almost entirely the optic disc, I’m not sure why so many here are scoffing at this result.
It wasn’t 100% at identifying severity or more nuanced differences, but it did identify whether a retinal image came from someone diagnosed with ASD with a 100% success rate across the roughly 150 test images split between the two groups.
100% ? That’s a fucking lie. Nothing in life is 100%
Are you 100% sure of that?
Other aspects weren’t 100%, such as identifying the severity (which was around 70%).
But if I gave a model pictures of dogs and traffic lights, I’d not at all be surprised if that model had a 100% success rate at determining if a test image was a dog or a traffic light.
And in the paper they discuss some of the prior research around biological differences between ASD and TD ocular development.
Replication would be nice and I’m a bit skeptical about their choice to use age-specific models given the sample size, but nothing about this so far seems particularly unlikely to continue to show similar results.
Not even your statement?
A convolutional neural network, a deep learning algorithm, was trained using 85% of the retinal images and symptom severity test scores to construct models to screen for ASD and ASD symptom severity. The remaining 15% of images were retained for testing.
It correctly identified 100% of the testing images. So it’s accurate.
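The holdout procedure the quote describes (train on 85% of images, test on the rest) can be sketched in a few lines. This is a generic illustration, not the paper’s code; the function name, seed, and fraction are my own choices:

```python
import random

def train_test_split(items, test_frac=0.15, seed=0):
    """Shuffle and hold out a fraction of items for testing.

    Sketch of an 85/15 holdout split; the seed and fraction are
    illustrative, not taken from the paper.
    """
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)
    n_test = round(len(items) * test_frac)
    test = [items[i] for i in idx[:n_test]]
    train = [items[i] for i in idx[n_test:]]
    return train, test

# With ~1,000 images, a 15% holdout leaves ~150 test images,
# consistent with the test-set size mentioned above.
train, test = train_test_split(list(range(1000)))
print(len(train), len(test))  # 850 150
```

The key property is that the held-out images never influence training, which is exactly the point being debated in this thread.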
Then somebody’s lying with creative application of 100% accuracy rates.
The confidence interval of the sequence you describe is not 100%
From TFA:
For ASD screening on the test set of images, the AI could pick out the children with an ASD diagnosis with a mean area under the receiver operating characteristic (AUROC) curve of 1.00. AUROC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUROC of 0.0; one whose predictions are 100% correct has an AUROC of 1.0, indicating that the AI’s predictions in the current study were 100% correct. There was no notable decrease in the mean AUROC, even when 95% of the least important areas of the image – those not including the optic disc – were removed.
They at least define how they get the 100% value, but I’m not an AIologist so I can’t tell if it is reasonable.
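For anyone who wants the AUROC definition above made concrete: it equals the probability that a randomly chosen positive case gets a higher score than a randomly chosen negative case, so perfectly separated scores give exactly 1.0. A minimal pure-Python sketch (toy scores, not the paper’s data):

```python
def auroc(labels, scores):
    """AUROC as the probability that a random positive outscores a
    random negative (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: every positive (ASD) score exceeds every negative (TD)
# score, so the ranking is perfect and the AUROC is 1.0.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
print(auroc(labels, scores))  # 1.0
```

Note that an AUROC of 1.0 only says the model’s scores perfectly *ranked* the test set; it says nothing about how large or representative that test set was.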
Yeah, from the way they wrote it, it sounds to me like they indirectly trained on the test set.
Could we reasonably expect an AI to get something right 100% of the time if a human could do it with 100% accuracy?
Could you tell pretty obviously if someone has Down syndrome?
Maybe some kind of feature exists that we aren’t aware of
I’m honestly not sure if this whole thing is a good thing or a freaking scary thing.
At the back of the eye, the retina and the optic nerve connect at the optic disc. An extension of the central nervous system, the structure is a window into the brain and researchers have started capitalizing on their ability to easily and non-invasively access this body part to obtain important brain-related information.
It’s way less scary in the actual linked paper:
Given that the retina can reflect structural brain alterations as they are embryonically and anatomically connected,12 this could be corroborated by evidence that brain abnormalities associated with visual pathways are observed in ASD.
TLDR: Abnormal developments in the brain that have visual components may closely correlate with abnormal developments in the eye.
Column A: yes
Column B: also yes
Sensitivity or specificity? Sensitivity is easy: just say every person is positive and you’ll catch 100% of true positives. Specificity is the hard problem.
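The "call everyone positive" point is easy to demonstrate: such a degenerate classifier scores perfect sensitivity but zero specificity. A quick sketch with made-up labels (1 = ASD, 0 = TD):

```python
def sensitivity(y_true, y_pred):
    """True-positive rate: detected positives / actual positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return tp / sum(1 for t in y_true if t == 1)

def specificity(y_true, y_pred):
    """True-negative rate: detected negatives / actual negatives."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tn / sum(1 for t in y_true if t == 0)

y_true = [1, 1, 0, 0, 0, 0]          # hypothetical labels
always_positive = [1] * len(y_true)  # degenerate "everyone has ASD" model

print(sensitivity(y_true, always_positive))  # 1.0 (finds every true positive)
print(specificity(y_true, always_positive))  # 0.0 (never clears anyone)
```

This is why a claim of 100% only matters when sensitivity and specificity are both reported, which is what an AUROC of 1.0 implies across all thresholds.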
Bull.Shit.
Define the criteria, have it peer reviewed and diagnosed, or else we will ALL be diagnosed with Autism soon enough.