Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. Five years ago I created a model that was able to spot certain types of ships based only on satellite imagery: ships that were not easily detectable by eye, never mind that one human cannot scan 15k images in one hour. It's a similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.
Yeah, there are some openly available datasets on competition sites like Kaggle, and some medical data is available through public institutions like the NIH.
I knew about Kaggle, but not about the NIH. Thanks for the hint!
Five years ago I created a model that was able to spot certain types of ships based only on satellite imagery: ships that were not easily detectable by eye, never mind that one human cannot scan 15k images in one hour.
what is your intended use case? are you trying to help government agencies perfect spying? sounds very cringe ngl
My intended use case is to explore how ML can support people with certain tasks. Science is not political; I cannot control what my technology is abused for. This is no reason to stop science entirely: there will always be someone abusing something for their own gain.
But thanks for assuming without first asking what the context was.
My intended use case is to explore how ML can support people with certain tasks.
weaselly bullshit. how exactly do you intend for people to use technology that identifies ships via satellite? what is your goal? because the only use cases I can see for this are negative
This is no reason to stop science entirely
if the only thing your tech can be used for is bad then you’re bad for innovating that tech
Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?
Of course you have not. Your hatred makes you blind. Closed minds were never able to see why science is important. Now enjoy spreading hate somewhere else.
Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?
who the fuck is going to have access to this satellite bullshit and be in a position to send help? all the governments that actively want ships full of refugees to fucking sink and die? the ones that put people on trial for saving them?
brainless is honestly too good of a term to describe how carelessly fucking stupid you are
Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?
No, I didn’t think about that. If you did, why exactly were you so hostile to me asking what use you thought this might serve?
I don’t think my reply was hostile; I just criticized you for assuming things before you knew the whole truth. I kept everything neutral and didn’t feel the urge to have a discussion with someone already on edge. I hope you understand, and also learn that not everything in this world is entirely evil. Please stay curious - don’t assume.
I just criticized you for assuming things before you knew the whole truth.
I didn’t assume anything. I asked you what your intended use case was and you responded with vague platitudes, sarcasm, and then once I pressed further, insults. Try re-reading your comments from a more objective standpoint and you’ll find neutrality nowhere within them.
Holy shit, dude, STFU.
no u
Ok
explore how ML can support people with certain tasks
Marxism-Leninism?
Oh, Machine Learning.
Science is not political
in an ideal world, maybe, but that is not our world. In reality, science is always, always political. It is unavoidable.
Typical hexbear reply lol
Unfortunately, you are right, though. Science can be political. My science is not. I like my bubble.
that’s just going through life with blinders on
Typical hexbear reply
Unfortunately, you are right
Yes, typically hexbear replies are right.
It’s not unfortunate though, it’s simply a matter of having an understanding of the world and a willingness to accept it and engage with it. It’s too bad that you seem not to want that understanding or that you lack the willingness to accept it.
My science is not. I like my bubble.
How can you possibly square that first short sentence with the second? Are you really that willfully hypocritical? Yes, “your” science is political. No science escapes it, and the people who do science thinking that they and their work are unaffected by their ideology are the most affected by ideology. No wonder you like your bubble - from within it, you don’t have to concern yourself with any of the real world or even the smallest sliver of self-reflection. But all it is is a happy, self-reinforcing delusion. You pretend to be someone who appreciates science, but if you truly did, you would be doing everything you could to recognize your unavoidable biases rather than denying them while simultaneously wallowing in them, which is what you are openly admitting to doing whether you realize it or not.
Science is not political; I cannot control what my technology is abused for.
how did I know that they’d use the jew gassing chamber to gas jews, or use the torment nexus to create a nexus of torment? I was only doing the science
you’re a fucking moron, jesus fucking christ
imagine being a scientist, a person whose entire career and body of work relies on very specific premises of cause and effect, only to go on and make some shit without thinking it’s even possibly your responsibility to consider the subsequent effect of what you make
brainless
“Removed by mod” suck my nuts you fascist fucks lol
These shitlib whiners don’t care and my comments have been removed for the horror of incivility towards dr von braun
Yeah there is. A bloke I know did exactly that with brain scans for his masters.
Would you mind asking your friend, so you can provide the source?
https://adni.loni.usc.edu/ here ya go
Edit: European DTI Study on Dementia too, he said it’s easier to get data from there
Lovely, thank you very much, kind stranger!
Fair weather friends to AI crack me up.
And if we weren’t a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.
I’m involved in multiple projects where stuff like this will be used in very accessible manners, hopefully in 2-3 years, so don’t get too pessimistic.
Yea none of us are going to see the benefits. Tired of seeing articles of scientific advancement that I know will never trickle down to us peasants.
Our clinics are already using ai to clean up MRI images for easier and higher quality reads. We use ai on our cath lab table to provide a less noisy image at a much lower rad dose.
It never makes mistakes that affect diagnosis?
It’s not diagnosing, which is good imho. It’s just being used to remove noise and artifacts from the images on the scan. This means the MRI is clearer for the reading physician and the ordering surgeon, and the cardiologist can use less radiation during the procedure yet get the same quality image in the lab.
I’m still wary of using it to diagnose in basically any scenario because of the salience and danger that both false negatives and false positives threaten.
… they said, typing on a tiny silicon rectangle with access to the whole of humanity’s knowledge and that fits in their pocket…
Good news, but it’s not “AI”. Please stop calling it that.
Haha, I love Gell-Mann amnesia. A few weeks ago there was news about speeding up the internet to a gazillion bytes per nanosecond, and it turned out to be fake.
Now this thing is all over the internet and everyone believes it.
Well one reason is that this is basically exactly the thing current AI is perfect for - detecting patterns.
The source paper is available online, is published in a peer reviewed journal, and has over 600 citations. I’m inclined to believe it.
You sound like a person who hasn’t been peer reviewed
I really wouldn’t call this AI. It is more or less an image identification system that relies on machine learning.
That was pretty much the definition of AI before LLMs came along.
And much before that it was rule-based machine learning, which was basically databases and fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer science thing which looks kind of intelligent”. It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.
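The rule-based era mentioned above can be illustrated with a toy forward-chaining inference engine (a hypothetical sketch, not any particular historical system; the rule and fact names are made up):

```python
# Toy forward-chaining inference, the "rule-based AI" style described above.
# Rules are (premises, conclusion) pairs; we keep applying them until no
# new facts can be derived.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]
derived = forward_chain({"has_fur", "gives_milk", "eats_meat"}, rules)
# "carnivore" is derived in two steps, via the intermediate fact "mammal"
```

The "fancy inference algorithms" of that era were essentially smarter versions of this loop over much larger rule databases.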
Nooooooo you’re supposed to use AI for good things and not to use it to generate meme images.
I think you mean mammary images?
Can’t pigeons do the same thing?
Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.
Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.
https://news.mit.edu/2024/ai-model-identifies-certain-breast-tumor-stages-0722
How soon could this diagnostic model be rolled out? It sounds very promising given the seriousness of the DCIS!
As soon as your hospital system is willing to pay big money for it.
pretty sure iterate is the wrong word choice there
That’s not the only issue with the English-esque writing.
100% true, just the first thing that stuck out at me
They probably meant reiterate
I think it’s a joke, implying they want to not just reiterate but rerererereiterate this information, both because it’s good news and also in light of all the sucky ways AI is being used instead. Like at first they typed, "I just want to reiterate…" but decided that wasn’t nearly enough.
Dude needs to use AI to fix his fucking grammar.
Common case of programmer brain
I suppose they just dropped the “re” off of “reiterate” since they’re saying it for the first time.
They said something similar about detecting cancer from MRIs, and it turned out the AI was just basing its judgement on how old the MRI was to rule cancer in or out, and it got it right in more cases because of that.
Therefore I am a bit skeptical about this one too.
Citation please?
That’s the “nice” thing about machine learning: the model sees nothing but correlations. That’s why data science is such a complex topic, and why errors like this are not easy to spot. Testing a model is still very underrated, and usually there is no time to test a model properly.
It’s really difficult to clean that data. In another case, the markings were left on the training data, and the result was that the scans from patients who had cancer carried a doctor’s signature, so the AI could always tell the cancer images from the non-cancer ones by the presence of a signature. However, these people are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.
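The signature failure described above is easy to reproduce on synthetic data: if a spurious marker perfectly correlates with the label, even the simplest possible classifier will latch onto it and score perfectly, then collapse to chance the moment the marker is gone. A toy sketch (all feature names and numbers are made up for illustration):

```python
import random

random.seed(0)

# Synthetic "scans": the spurious marker (think: a doctor's signature)
# perfectly tracks the label; the genuine signal is only 70% informative.
def make_data(n, with_marker=True):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        signal = label if random.random() < 0.7 else 1 - label
        marker = label if with_marker else random.randint(0, 1)
        data.append(({"signal": signal, "marker": marker}, label))
    return data

def best_single_feature(train):
    # The simplest "model": pick the feature whose value best matches the label.
    feats = train[0][0].keys()
    def acc(f):
        return sum(x[f] == y for x, y in train) / len(train)
    return max(feats, key=acc)

def accuracy(feature, data):
    return sum(x[feature] == y for x, y in data) / len(data)

train = make_data(1000, with_marker=True)
chosen = best_single_feature(train)                 # latches onto "marker"
leaked = accuracy(chosen, make_data(1000, with_marker=True))   # exactly 1.0
honest = accuracy(chosen, make_data(1000, with_marker=False))  # ~0.5, chance level
```

Nothing here is wrong with the training procedure itself; the leak is entirely in the data, which is why it only shows up when you evaluate on data without the marker.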
Using AI for anomaly detection is nothing new though. Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.
Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.
From the conclusion of the actual paper:
Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.
If I read this paper correctly, the novelty is in the model, which is a deep learning model that works on mammogram images + traditional risk factors.
For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.
The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factor regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it is something simple (merely combining the results, so nothing special in the training step).
As a different commenter mentioned, the data collection is largely the interesting part here.
I’ll admit I was wrong about my first guess as to the network topology used, though. I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).
ResNet18 is ancient and tiny… I don’t understand why they didn’t go with a deeper network. ResNet50 is usually the smallest I’ll use.
They don’t go in depth about how they combine the two for the hybrid model
Actually they did; it’s in Appendix E (PDF warning). A GitHub repo would have been nice, but I think there’s enough info to replicate this if we had the data.
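For illustration only, one plausible “late fusion” of the two model outputs is to blend the probabilities in logit space; this is an assumption with placeholder weights, not necessarily the method Appendix E actually specifies:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def hybrid_risk(p_image, p_risk_factors, w_image=0.6, w_risk=0.4, bias=0.0):
    """Blend the image-model and risk-factor-model probabilities in logit
    space. The weights here are arbitrary placeholders; in practice they
    would be fit on held-out data."""
    return sigmoid(w_image * logit(p_image) + w_risk * logit(p_risk_factors) + bias)

# A confident image model pulls the combined estimate up even when the
# traditional risk factors look benign.
combined = hybrid_risk(0.9, 0.3)
```

The appeal of a blend like this is that the combiner itself needs almost no training, which is consistent with the "nothing special in the training step" reading above.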
Yeah it’s not the most interesting paper in the world. But it’s still a cool use IMO even if it might not be novel enough to deserve a news article.
I skimmed the paper. As you said, they made a ML model that takes images and traditional risk factors (TCv8).
I would love to see comparison against risk factors + human image evaluation.
Nevertheless, this is the AI that will really help humanity.
That’s why I hate the term AI. Say it is a predictive llm or a pattern recognition model.
Say it is a predictive llm
According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
or a pattern recognition model.
Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is) and we wouldn’t be here talking about it.
Well, this is very much an application of AI… Having more examples of recent AI development that aren’t ‘chatgpt’(/transformers-based) is probably a good thing.
Op is not saying this isn’t using the techniques associated with the term AI. They’re saying that the term AI is misleading, broad, and generally not desirable in a technical publication.
Op is not saying this isn’t using the techniques associated with the term AI.
Correct, also not what I was replying about. I said that using AI in the headline here is very much correct. It is after all a paper using AI to detect stuff.
That performance curve seems terrible for any practical use.
Yeah that’s an unacceptably low ROC curve for a medical usecase
Good catch!
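For anyone following along, “discrimination” in this context is usually summarized as the area under the ROC curve (AUC): the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal pure-Python sketch of that pairwise definition:

```python
def roc_auc(labels, scores):
    """AUC computed directly from its pairwise definition: the fraction of
    (positive, negative) pairs the model ranks correctly, ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    total = len(pos) * len(neg)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / total

# A model that ranks most positives above most negatives:
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

An AUC of 0.5 is coin-flipping; medical screening tools generally need to be much closer to 1.0 before the false-positive and false-negative rates become clinically acceptable.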
The correct term is “Computational Statistics”
Stop calling it that, you’re scaring the venture capital
it’s a good term, it refers to lots of things. there are many terms like that.
it refers to lots of things
So it’s a bad term.
It’s literally the name of the field of study. Chances are this uses the same thing as LLMs, aka a neural network, which is one of the oldest kinds of AI around.
It refers to anything that simulates intelligence. They are using the correct word. People just misunderstand it.
If people consistently misunderstand it, it’s a bad term for communicating the concept.
It’s the correct term though.
It’s like when people get confused about what a scientific theory is. We still call it the theory of gravity.
The problem is that it refers to so many different and constantly changing things that it doesn’t refer to anything specific in the end. You can replace the word “AI” in any sentence with the word “magic” and it basically says the same thing…
Why do I still have to work my boring job while AI gets to create art and look at boobs?
Because life is suffering and machines dream of electric sheep.
I dream of boobs.
I’ve seen things you people wouldn’t believe.
No link or anything, very believable.
You could participate or complain.
https://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507
Complain to who? Some random twitter account? Why would I do that?
No, here. You could have asked for a link, or Googled it.
I am commenting on this tweet being trash, because it doesn’t have a link in it.
Honestly this is a pretty good use case for LLMs and I’ve seen them used very successfully to detect infection in samples for various neglected tropical diseases. This literally is what AI should be used for.
Sure, agreed. Too bad 99% of its use is still stealing from society to make a few billionaires richer.
I also agree.
However, these medical LLMs have been around for a long time, and they don’t use horrific amounts of energy, nor do they make billionaires richer. They are the sort of thing a hobbyist can put together, provided they have enough training data. Further to that, they can run offline, allowing doctors to perform tests in the field, as I can attest to having witnessed first hand with soil-transmitted helminth surveys in Mozambique. That means that instead of checking thousands of stool samples manually, those same people can be paid to collect more samples or distribute the drugs to cure the disease in affected populations.
Worth noting that the type of comment this is in response to argues that home users should be legally forbidden from accessing training data, and wants a world where only the richest companies can afford to license training data (which will be owned by their other rich friends, thanks to it being posted on their sites).
Supporting heavy copyright extensions is the dumbest position anyone could have.
I highly doubt the medical data needed to do this is available to a hobbyist, or that someone like that would have the know-how to train the AI.
But yea, rare non-bad use of AI. Now we just need to eat the rich to make it a good for humanity. Let’s get to that I say!
Actually the datasets for this MDA stuff are widely available.
You don’t understand how they work, and that’s fine; but you’re upset based on the paranoid guesswork that’s filled in your lack of understanding, and that’s sad.
No one is stealing from society; ‘society’ isn’t being deprived of anything when ai looks at an image. The research is pretty open; humanity is benefitting from it in the same way Tesla, Westinghouse and Edison benefitted the history of electrical research.
And yes, if you’re about to tell me Edison did nothing but steal, then this is another bit of tech history you’ve not paid attention to beyond memes.
The big companies you hate, like Meta or Nvidia, are producing papers that explain methods; you can follow along at home and make your own model - though with those examples you don’t need to, because they’ve released models on open licenses. It seems likely you don’t understand how this all works or what’s happening, because zuck is doing significantly more to help society than you are - ironic, huh?
And before you tell me about zuck doing genocide or other childish arguments: we’re on lemmy, which was purposefully designed to remove power from a top-down authority, so if an instance pushed for genocide we would have zero power to stop it - and the report you’re no doubt going to allude to says that Facebook is culpable because it did not have adequate systems in place to control locally run groups…
I could make good arguments against zuck; I don’t think anyone should be able to be that rich. But it’s funny to me when a group freely shares pytorch and other key tools used to help do things like detect cancer cheaply and efficiently, help impoverished communities access education and health resources in their local language, help blind people have independence, etc, etc, all the many positive uses for ai - and you shit on it all simply because you’re too lazy and selfish to actually do anything materially constructive to help anyone or anything that doesn’t directly benefit you.
Yes, this is “how it was supposed to be used for”.
The quality of sentence construction these days is in freefall.
Bro, it’s Twitter
And that excuses it I guess.
That would be correct, yes.
Twitter: Where wrongness gathers and imagines itself to be right.
*shrugs* you know people have been confidently making these kinds of statements… since written language was invented? I bet the first person who developed written language did it to complain about how this generation of kids doesn’t know how to write a proper sentence.
What is in freefall is the economy for the middle and working class, and the basic idea that artists and writers should be compensated, period. What has released us into freefall is that making art and crafting words are shit on by society as not a respectable job worth being paid a living wage for.
There are a terrifying number of good writers out there, more than there have ever been, both in total number AND per capita.
This isn’t a creative writing project. This isn’t an artist presenting their work. What in the world did that tangent even come from?
This is just plain speech, written objectively incorrectly.
But go on, I’m sure next I’ll be accused of all the problems of the writing industry or something.
Objectively incorrect according to whom, exactly?
Ironically, if they’d used an LLM, it would have corrected their writing.
Lmao
Not everyone’s a native speaker.
Neural networks are great for pattern recognition; unfortunately, all the hype is in pattern generation, and we end up with mammograms in anime style.
Doctor: There seems to be something wrong with the image.
Technician: What’s the problem?
Doctor: The patient only has two breasts, but the image that came back from the AI machine shows them having six breasts and much MUCH larger breasts than the patient actually has.
Technician: sighs
Why does the paperwork suddenly claim the patient is a 600-year-old shape-shifting dragon?