• earmuff@lemmy.dbzer0.com · 2 months ago

    Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. Five years ago I created a model that could spot certain types of ships from satellite imagery alone, ships that were not easily detectable by eye, leaving aside the fact that one human cannot scan 15k images in an hour. It’s a similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.

    • adenoid@lemmy.world · 2 months ago

      Yeah, there are some openly available datasets on competition sites like Kaggle, and some medical data is available through public institutions like the NIH.

    • booty [he/him]@hexbear.net · 2 months ago

      Five years ago I created a model that could spot certain types of ships from satellite imagery alone, ships that were not easily detectable by eye, leaving aside the fact that one human cannot scan 15k images in an hour.

      what is your intended use case? are you trying to help government agencies perfect spying? sounds very cringe ngl

      • earmuff@lemmy.dbzer0.com · 2 months ago

        My intended use case is to find ways ML can support people with certain tasks. Science is not political; I cannot control what my technology is abused for. This is no reason to stop science entirely, as there will always be someone abusing something for their own gain.

        But thanks for assuming the context without asking first.

        • booty [he/him]@hexbear.net · 2 months ago

          My intended use case is to find ways ML can support people with certain tasks.

          weaselly bullshit. how exactly do you intend for people to use technology that identifies ships via satellite? what is your goal? because the only use cases I can see for this are negative

          This is no reason to stop science entirely

          if the only thing your tech can be used for is bad then you’re bad for innovating that tech

          • earmuff@lemmy.dbzer0.com · 2 months ago

            Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?

            Of course you have not. Your hatred makes you blind. Closed minds have never been able to see why science is important. Now enjoy spreading hate somewhere else.

            • Black_Mald_Futures [any]@hexbear.net · 2 months ago

              Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?

              who the fuck is going to have access to this satellite bullshit and be in a position to send help? all the governments that actively want ships full of refugees to fucking sink and die? the ones that put people on trial for saving them?

              brainless is honestly too good of a term to describe how carelessly fucking stupid you are

            • booty [he/him]@hexbear.net · 2 months ago

              Ever thought about identifying ships full of refugees and sending help before their ships break apart and 50 people drown?

              No, I didn’t think about that. If you did, why exactly were you so hostile to me asking what use you thought this might serve?

              • earmuff@lemmy.dbzer0.com · 2 months ago

                I don’t think my reply was hostile; I just criticized your behavior of assuming things before you know the whole truth. I kept everything neutral and didn’t feel the urge to have a discussion with someone already on edge. I hope you understand, and also learn that not everything in this world is entirely evil. Please stay curious - don’t assume.

                • booty [he/him]@hexbear.net · 2 months ago

                  I just criticized your behavior of assuming things before you know the whole truth.

                  I didn’t assume anything. I asked you what your intended use case was and you responded with vague platitudes, sarcasm, and then once I pressed further, insults. Try re-reading your comments from a more objective standpoint and you’ll find neutrality nowhere within them.

        • MaeBorowski [she/her]@hexbear.net · 2 months ago

          find ways ML can support people with certain tasks

          Marxism-Leninism? anakin-padme-2

          Oh, Machine Learning. sicko-wistful

          Science is not political

          In an ideal world, maybe, but that is not our world. In reality, science is always political. It is unavoidable.

          • earmuff@lemmy.dbzer0.com · 2 months ago

            Typical hexbear reply lol

            Unfortunately, you are right, though. Science can be political. My science is not. I like my bubble.

            • MaeBorowski [she/her]@hexbear.net · 2 months ago

              Typical hexbear reply

              Unfortunately, you are right

              Yes, typically hexbear replies are right.

              It’s not unfortunate though, it’s simply a matter of having an understanding of the world and a willingness to accept it and engage with it. It’s too bad that you seem not to want that understanding or that you lack the willingness to accept it.

              My science is not. I like my bubble.

              How can you possibly square that first short sentence with the second? Are you really that willfully hypocritical? Yes, “your” science is political. No science escapes it, and the people who do science thinking that they and their work are unaffected by their ideology are the ones most affected by it. No wonder you like your bubble: from within it, you don’t have to concern yourself with any of the real world or even the smallest sliver of self-reflection. But all it is is a happy, self-reinforcing delusion. You pretend to be someone who appreciates science, but if you truly did, you would be doing everything you can to recognize your unavoidable biases rather than denying them while simultaneously wallowing in them, which is what you are openly admitting to doing, whether you realize it or not.

        • Black_Mald_Futures [any]@hexbear.net · 2 months ago

          Science is not political; I cannot control what my technology is abused for.

          how did I know that they’d use the jew gassing chamber to gas jews, or use the torment nexus to create a nexus of torment? I was only doing the science

          you’re a fucking moron, jesus fucking christ

          imagine being a scientist, a person whose entire career and body of work relies on very specific premises of cause and effect, only to go on and make some shit without thinking it’s even possibly your responsibility to consider the subsequent effect of what you make

          brainless

    • Maalus@lemmy.world · 2 months ago

      Yeah, there is. A bloke I know did exactly that with brain scans for his master’s.

  • Snapz@lemmy.world · 2 months ago

    And if we weren’t a big, broken mess of a late-stage capitalist hellscape, you or someone you know could have actually benefited from this.

    • MuchPineapples@lemmy.world · 2 months ago

      I’m involved in multiple projects where stuff like this will be used in very accessible ways, hopefully in 2-3 years, so don’t get too pessimistic.

    • unconsciousvoidling@sh.itjust.works · 2 months ago

      Yea, none of us are going to see the benefits. I’m tired of seeing articles about scientific advancements that I know will never trickle down to us peasants.

      • Telodzrum@lemmy.world · 2 months ago

        Our clinics are already using AI to clean up MRI images for easier and higher-quality reads. We use AI on our cath lab table to provide a less noisy image at a much lower rad dose.

          • Telodzrum@lemmy.world · 1 month ago

            It’s not diagnosing, which is good imho. It’s just being used to remove noise and artifacts from the scan images. This means the MRI is clearer for the reading physician and the ordering surgeon, and the cardiologist can use less radiation during the procedure yet still get the same quality image in the cath lab.
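
            (For the curious: the rough idea behind this kind of cleanup is a network trained to map noisy scans onto clean ones. Below is a minimal PyTorch sketch of a residual denoiser, purely illustrative, not what any vendor actually ships.)

            ```python
            # Minimal sketch of a denoising CNN: learn to predict the noise
            # in a scan and subtract it (residual learning). Illustrative only.
            import torch
            import torch.nn as nn

            class Denoiser(nn.Module):
                def __init__(self):
                    super().__init__()
                    self.net = nn.Sequential(
                        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(32, 1, 3, padding=1),
                    )

                def forward(self, x):
                    return x - self.net(x)  # subtract the predicted noise

            model = Denoiser()
            noisy = torch.randn(1, 1, 256, 256)  # stand-in for a noisy scan
            clean = torch.zeros(1, 1, 256, 256)  # stand-in for the clean target
            loss = nn.functional.mse_loss(model(noisy), clean)
            loss.backward()  # a real pipeline repeats this over many scan pairs
            ```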

            I’m still wary of using it to diagnose in basically any scenario, because of the danger that both false negatives and false positives pose.

      • Tja@programming.dev · 1 month ago

        … they said, typing on a tiny silicon rectangle with access to the whole of humanity’s knowledge and that fits in their pocket…

  • humbletightband@lemmy.dbzer0.com · 1 month ago

    Haha, I love Gell-Mann amnesia. A few weeks ago there was news about speeding up the internet to a gazillion bytes per nanosecond, and it turned out to be fake.

    Now this thing is all over the internet and everyone believes it.

    • Redex@lemmy.world · 1 month ago

      Well, one reason is that this is basically exactly the thing current AI is perfect for: detecting patterns.

    • Vigge93@lemmy.world · 1 month ago

      The source paper is available online, is published in a peer-reviewed journal, and has over 600 citations. I’m inclined to believe it.

      • stormeuh@lemmy.world · 1 month ago

        And long before that it was rule-based machine learning, which was basically databases and fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer science thing that looks kind of intelligent.” It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.
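
        (To make that concrete: here is a toy version of that older style of “AI”, a hand-written fact base plus a forward-chaining inference loop. My own minimal sketch, not any particular historical system.)

        ```python
        # Toy forward-chaining inference: the "database plus fancy
        # inference algorithm" style of AI. Every fact and rule is
        # hand-authored; nothing is learned from data.
        facts = {"has_fever", "has_cough"}
        rules = [
            ({"has_fever", "has_cough"}, "suspect_flu"),
            ({"suspect_flu"}, "recommend_rest"),
        ]

        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # rule fires, a new fact is derived
                    changed = True

        print(facts)  # now includes 'suspect_flu' and 'recommend_rest'
        ```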

  • elrik@lemmy.world · 2 months ago

    Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.

    Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.

    https://news.mit.edu/2024/ai-model-identifies-certain-breast-tumor-stages-0722

  • parpol@programming.dev · 2 months ago

    They said something similar about detecting cancer from MRIs, and it turned out the AI was just making the judgement based on how old the MRI was to rule cancer in or out, and got it right in more cases because of it.

    Therefore I am a bit skeptical about this one too.

    • earmuff@lemmy.dbzer0.com · 2 months ago

      That’s the thing about machine learning: it sees nothing but correlations. That’s why data science is such a complex topic; you don’t spot errors this easily. Testing a model is still very underrated, and usually there is no time to test a model properly.

    • SomeGuy69@lemmy.world · 2 months ago

      It’s really difficult to clean that data. In another case, they kept the markings on the training data, and the result was that the images of patients who had cancer had a doctor’s signature on them, so the AI could always tell the cancer images from the non-cancer ones by the presence or lack of a signature. However, these people are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.
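
      (A cheap sanity check for this kind of leakage is to test whether the label can be predicted from metadata or artifacts alone; if a trivial model scores well without ever seeing anatomy, the dataset is leaking. A minimal sketch, with hypothetical file and column names:)

      ```python
      # Leakage sanity check: can the label be predicted from confounds
      # alone (scan year, presence of annotations), without any pixels?
      # High accuracy here means the dataset, not the anatomy, is leaking.
      # File and column names are hypothetical.
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      df = pd.read_csv("scan_metadata.csv")    # one row per training image
      X = df[["scan_year", "has_annotation"]]  # confounds only, no image data
      y = df["cancer_label"]

      score = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
      print(f"accuracy from metadata alone: {score:.2f}")  # ~chance is healthy
      ```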

    • FierySpectre@lemmy.world · 2 months ago

      Using AI for anomaly detection is nothing new though. I haven’t read any article about this specific ‘discovery’, but it usually uses a completely different technique from the AI that comes to mind when people think of AI these days.

      • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 2 months ago

        I haven’t read any article about this specific ‘discovery’, but it usually uses a completely different technique from the AI that comes to mind when people think of AI these days.

        From the conclusion of the actual paper:

        Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.

        If I read this paper correctly, the novelty is in the model, which is a deep learning model that works on mammogram images + traditional risk factors.

        • FierySpectre@lemmy.world · 2 months ago

          For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.

          The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factor regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it’s something simple (merely combining the results, so nothing special in the training step).
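
          (To show how little code that is, here is a sketch of the described setup: a stock ResNet18 with a single-channel input and a binary “cancer within 5 years” head. Not the authors’ code - they used PyTorch 0.3x - and the grayscale input is my assumption.)

          ```python
          # Sketch of the described image-only model: stock ResNet18 adapted
          # for single-channel mammograms and a binary 5-year risk output.
          # Not the paper's code, just the setup it describes.
          import torch
          import torch.nn as nn
          from torchvision.models import resnet18

          model = resnet18(weights=None)
          model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3,
                                  bias=False)            # grayscale instead of RGB
          model.fc = nn.Linear(model.fc.in_features, 2)  # cancer within 5 years: no/yes

          x = torch.randn(1, 1, 1664, 2048)              # one full-view mammogram
          risk = torch.softmax(model(x), dim=1)[0, 1]    # predicted probability
          ```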

          As a different commenter mentioned, the data collection is largely the interesting part here.

          I’ll admit I was wrong about my first guess as to the network topology, though: I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).

          • errer@lemmy.world · 2 months ago

            ResNet18 is ancient and tiny…I don’t understand why they didn’t go with a deeper network. ResNet50 is usually the smallest I’ll use.

          • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 2 months ago

            They don’t go in depth about how they combine the two for the hybrid model

            Actually, they did; it’s in Appendix E (PDF warning). A GitHub repo would have been nice, but I think there would be enough info to replicate this if we had the data.

            Yeah it’s not the most interesting paper in the world. But it’s still a cool use IMO even if it might not be novel enough to deserve a news article.
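
            (For anyone who doesn’t want to open the PDF: the simplest form such a hybrid can take, and this is my guess rather than a quote of Appendix E, is to stack the image model’s risk score with the traditional risk factors and fit a small classifier on top.)

            ```python
            # A guess at the simplest possible hybrid (not necessarily what
            # Appendix E specifies): combine the DL image score with the
            # traditional risk factors and fit a classifier on the stack.
            # All numbers below are made up for illustration.
            import numpy as np
            from sklearn.linear_model import LogisticRegression

            image_risk = np.array([[0.12], [0.87], [0.45], [0.09]])        # DL model outputs
            risk_factors = np.array([[52, 0], [61, 1], [47, 1], [39, 0]])  # e.g. age, family history

            X = np.hstack([image_risk, risk_factors])
            y = np.array([0, 1, 0, 0])  # developed cancer within 5 years?

            hybrid = LogisticRegression().fit(X, y)
            print(hybrid.predict_proba(X)[:, 1])  # combined risk estimates
            ```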

        • llothar@lemmy.ml · 2 months ago

          I skimmed the paper. As you said, they made an ML model that takes images and traditional risk factors (TCv8).

          I would love to see a comparison against risk factors + human image evaluation.

          Nevertheless, this is the AI that will really help humanity.

      • Johanno@feddit.org · 2 months ago

        That’s why I hate the term AI. Say it’s a predictive LLM or a pattern-recognition model.

        • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 2 months ago

          Say it’s a predictive LLM

          According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.

          or a pattern-recognition model.

          Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is); otherwise we wouldn’t be here talking about it.

          • FierySpectre@lemmy.world · 2 months ago

            Well, this is very much an application of AI… Having more examples of recent AI development that aren’t ‘ChatGPT’ (/transformer-based) is probably a good thing.

            • wewbull@feddit.uk · 2 months ago

              OP is not saying this isn’t using the techniques associated with the term AI. They’re saying that the term AI is misleading, broad, and generally not desirable in a technical publication.

              • FierySpectre@lemmy.world · 2 months ago

                OP is not saying this isn’t using the techniques associated with the term AI.

                Correct, but that’s also not what I was replying about. I said that using AI in the headline here is very much correct. It is, after all, a paper using AI to detect stuff.

        • 0laura@lemmy.world · 2 months ago

          it’s a good term, it refers to lots of things. there are many terms like that.

            • GetOffMyLan@programming.dev · 2 months ago

              It’s literally the name of the field of study. Chances are this uses the same thing as LLMs: a neural network, which is one of the oldest kinds of AI around.

              It refers to anything that simulates intelligence. They are using the correct word. People just misunderstand it.

              • wewbull@feddit.uk · 2 months ago

                If people consistently misunderstand it, it’s a bad term for communicating the concept.

                • GetOffMyLan@programming.dev · 2 months ago

                  It’s the correct term though.

                  It’s like when people get confused about what a scientific theory is. We still call it the theory of gravity.

          • Ephera@lemmy.ml · 2 months ago

            The problem is that it refers to so many and constantly changing things that it doesn’t refer to anything specific in the end. You can replace the word “AI” in any sentence with the word “magic” and it basically says the same thing…

  • superkret@feddit.org · 2 months ago

    Why do I still have to work my boring job while AI gets to create art and look at boobs?

    • Flyberius [comrade/them]@hexbear.net · 2 months ago

      Honestly this is a pretty good use case for LLMs and I’ve seen them used very successfully to detect infection in samples for various neglected tropical diseases. This literally is what AI should be used for.

        • Flyberius [comrade/them]@hexbear.net · 2 months ago

          I also agree.

          However, these medical LLMs have been around for a long time; they don’t use horrific amounts of energy, nor do they make billionaires richer. They are the sorts of things that a hobbyist can put together, provided they have enough training data. Further to that, they can run offline, allowing doctors to perform tests in the field, as I can attest to having witnessed first-hand with soil-transmitted helminth surveys in Mozambique. That means that instead of checking thousands of stool samples manually, those same people can be paid to collect more samples or distribute the drugs to cure the disease in affected populations.

          • VirtualOdour@sh.itjust.works · 2 months ago

            Worth noting that the type of comment this is in response to argues that home users should be legally forbidden from accessing training data, and wants a world where only the richest companies can afford to license training data (which will be owned by their other rich friends, thanks to it being posted on their sites).

            Supporting heavy copyright extensions is the dumbest position anyone could have.

          • NuraShiny [any]@hexbear.net · 2 months ago

            I highly doubt the medical data to do this is available to a hobbyist, or that someone like that would have the know-how to train the AI.

            But yea, a rare non-bad use of AI. Now we just need to eat the rich to make it a good for humanity. Let’s get to that, I say!

        • VirtualOdour@sh.itjust.works · 2 months ago

          You don’t understand how they work, and that’s fine. But you’re upset based on the paranoid guesswork that’s filled in your lack of understanding, and that’s sad.

          No one is stealing from society; “society” isn’t being deprived of anything when AI looks at an image. The research is pretty open, and humanity is benefiting from it in the same way it benefited from the electrical research of Tesla, Westinghouse, and Edison.

          And yes, if you’re about to tell me Edison did nothing but steal, then this is another bit of tech history you’ve not paid attention to beyond memes.

          The big companies you hate, like Meta or Nvidia, are producing papers that explain their methods; you can follow along at home and make your own model - though with those examples you don’t need to, because they’ve released models under open licenses. It seems likely you don’t understand how this all works or what’s happening, because Zuck is doing significantly more to help society than you are - ironic, huh?

          And before you tell me about Zuck doing genocide or other childish arguments: we’re on Lemmy, which was purposely designed to remove power from a top-down authority, so if an instance pushed for genocide we would have zero power to stop it - the report you’re no doubt going to allude to says that Facebook is culpable because it did not have adequate systems in place to control locally run groups…

          I could make good arguments against Zuck - I don’t think anyone should be able to be that rich - but it’s funny to me when a group freely shares PyTorch and other key tools that help do things like detect cancer cheaply and efficiently, help impoverished communities access education and health resources in their local language, and help blind people have independence, etc. - all the many positive uses for AI - yet you shit on it all simply because you’re too lazy and selfish to actually do anything materially constructive to help anyone or anything that doesn’t directly benefit you.

  • wheeldawg@sh.itjust.works · 1 month ago

    Yes, this is “how it was supposed to be used for”.

    The sentence-construction quality these days is in freefall.

    • supersquirrel@sopuli.xyz · 1 month ago

      shrugs you know people have been confidently making these kinds of statements… since written language was invented? I bet the first person who developed written language did it to complain about how this generation of kids don’t know how to write a proper sentence.

      What is in freefall is the economy for the middle and working class, and the basic idea that artists and writers should be compensated, period. What has sent us into freefall is that making art and crafting words are shit on by society as not respectable jobs worth being paid a living wage for.

      There are a terrifying number of good writers out there, more than there have ever been, both in total number AND per capita.

      • wheeldawg@sh.itjust.works · 1 month ago

        This isn’t a creative writing project. This isn’t an artist presenting their work. Where in the world did that tangent even come from?

        This is just plain speech, written objectively incorrectly.

        But go on, I’m sure next I’ll be accused of all the problems of the writing industry or something.

  • mayo_cider [he/him]@hexbear.net · 2 months ago

    Neural networks are great for pattern recognition; unfortunately, all the hype is in pattern generation, and we end up with mammograms in anime style.

    • D61 [any]@hexbear.net · 2 months ago

      Doctor: There seems to be something wrong with the image.

      Technician: What’s the problem?

      Doctor: The patient only has two breasts, but the image that came back from the AI machine shows them having six breasts and much MUCH larger breasts than the patient actually has.

      Technician: sighs