• Thomas D. Embree 🇨🇦@me.dm · 2 days ago

    @Kissaki In another thread, people are mocking AI because the free language models they are using are bad at drawing accurate maps. “AI can’t even do geography”. Anything an AI says can’t be trusted, and AI is vastly inferior to human ability.

    These same people haven’t figured out the difference between using a language AI to draw a map, and simply asking it a geography question.

    • FozzyOsbourne@lemm.ee · 2 days ago

      Searching for answers and creating maps are both completely unrelated to scanning source code for vulnerabilities. What is the point of this comment?

        • SkyNTP@lemmy.ml · edited · 2 days ago

          A broken clock is right twice a day. Inventions are only good when they reliably work for all of their intended uses.

          • AndrasKrigare@beehaw.org · 2 days ago

            No? I have a pair of shoes that advertise as being great for running and walking. I love walking in them, but they suck for running. Are you saying the shoes suck and I shouldn’t use them at all, even though I like walking in them?

            Tools don’t care about intent, and neither should you. Only things that work and things that don’t. And if it doesn’t work, you should use a different tool.

            • Initiateofthevoid@lemmy.dbzer0.com · edited · 2 days ago

              If they are advertised as being great for running and walking, but they are objectively terrible for running?

              You can use them all you like, but the company that sold them to you misled you. That’s false advertising. If you call them running shoes, they’re bad running shoes.

              • AndrasKrigare@beehaw.org · 2 days ago

                Sure, but false advertising has nothing to do with how good an invention is, that’s a marketing problem.

                • shnizmuffin@lemmy.inbutts.lol · 1 day ago

                  I bought a thing that said it was good for A and B but it’s only good for B. Marketing problem! I didn’t make a bad decision! I wasn’t tricked! I’m a smart boy!

                  • AndrasKrigare@beehaw.org · 1 day ago

                    Alternate take: I want something that does B, so I research methods of doing B and find one that’s good. Good thing I’m a smart boy that doesn’t make purchasing decisions based on what the marketing department says things do.

                    There’s plenty of good reasons to criticize or be concerned about LLMs. You don’t need to make up dumb ones.

        • FozzyOsbourne@lemm.ee · 2 days ago

          Yeah, exactly. A code scan is completely unrelated to generative AI; the only thing that even connects them is that someone used the chatbot as an interface to start the scan.

      • Thomas D. Embree 🇨🇦@me.dm · 2 days ago

        @callouscomic I lean towards disappointing. We are literally surrounded at all times by amazing technology, but the default position is still “technology bad” 🙄

        It reminds me of the concerns people had when trains were invented: people refused to ride them because “God never meant for us to travel faster than 20 km/h”, or because such breakneck speeds would somehow harm a woman’s uterus or ovaries.

    • dblsaiko@discuss.tchncs.de · 2 days ago

      Daniel Stenberg has banned AI-edited bug reports from cURL because they were exclusively nonsense and just wasted their time. Just because it gets a hit once doesn’t mean it’s good at this either.

      • Kissaki@beehaw.org (OP) · 2 days ago

        It does show that it can be a useful tool, though.

        Here, the security researcher was evaluating it and stumbled upon a previously undiscovered security bug. Obviously, they didn’t let the AI create the bug report without understanding it. They verified the answer and took action themselves, presumably analyzing, verifying, and reporting in a professional and respectful way.

        The cURL AI spam is an issue at the opposite side of that. But doesn’t really tell us anything about capabilities. It tells us more about people. In my eyes, at least.

        • dblsaiko@discuss.tchncs.de · 2 days ago

          Yeah, that’s fair. If it’s verified beforehand and what it discovered is an actual issue, why not. It does overwhelmingly attract people who have no idea what they’re doing, though, and they then submit bogus reports because the output looks good to them.

      • Thomas D. Embree 🇨🇦@me.dm · edited · 2 days ago

        @2xsaiko That is a poorly made AI model, then. Whoever put that system in place didn’t train the model properly. In fact, I’m going to guess that you chose a random model like ChatGPT, Llama, or Gemini.

        Or you might not even realize that you need a model specifically trained to handle the kind of thing you are asking.

        That isn’t a limitation of AI, that is human error. Do you think people are just pretending it works or something?

        • tuhriel@discuss.tchncs.de · 17 hours ago

          That is the problem: they get promoted as the one-size-fits-all solution for everything, and people use them as promoted.

    • apotheotic (she/her)@beehaw.org · 2 days ago

      Like, I get that there’s people who are mocking AI for the wrong reasons, and they’re silly for that, but there are very real reasons to dislike AI in many applications.

      Would chatgpt be able to do this if their dataset had consisted only of ethically obtained data where the authors had provided consent? My money is on no, at least not yet. The technology is in its infancy and has powerful potential, but is having its progress boosted through highly unethical means.

      I’m so very much for the concept of AI; it’s a monumental technology space at its core. But it needs to be done right, and I fear that it never will be, and we will have to live with the sins of the existing models forever. I hope I will be wrong.

      If we can reach a future where models are trained on entirely consensual data and the environmental impact of their training and usage isn’t as dire, I’d be so happy.

      • Thomas D. Embree 🇨🇦@me.dm · 2 days ago

        @apotheotic As for things like creating images in the style of a specific artist, that is not plagiarism unless you are asking for a perfect replica of a specific art piece and claiming it as your own original work.

        All artists imitate the styles they find appealing, if you paint a Van Gogh style painting it isn’t plagiarism of Van Gogh. Likewise, if I were to imitate Van Gogh’s style using an AI, the resulting image would be my original work and not Van Gogh’s creation.

        • apotheotic (she/her)@beehaw.org · 1 day ago

          I don’t agree with this argument at all, because if a human artist were to employ the same kind of algorithmic mimicry that an AI does, I would consider it plagiarism. There is a distinct difference between how a human observes and learns from other artists work, and how an AI does it.

          Moreover, to take things out of the realm of plagiarism, if a human artist was mimicking the style of another artist and making bank off of it, and the original artist were to say “hey, that’s kinda not cool, I don’t appreciate this” you could have a conversation about how to accommodate both parties. With AI, there is no such conversation to be had, because it will replicate without barriers and do so in volumes that dwarf any sort of output the original artist could dream of, no matter how nicely you ask it not to, unless it was not trained on it in the first place.

          Anyway, my pushback in my original message was not about the output being plagiarism or anything of the sort, it was about the usage of authors/artists work as training data (input) being non-consensual.

      • Thomas D. Embree 🇨🇦@me.dm · 2 days ago

        @apotheotic The copyright issue is an inevitable misstep that was bound to happen while figuring out this technology. However, some of the criticisms aren’t about the ethical issues surrounding copyright; they are about the marketability of skills (such as painting) that you either had to learn yourself or otherwise needed to pay someone to do for you.

        Now you can do that with an AI. Great for disabled people who can create freely now, bad for the artists who exploited that for financial gain.

        • The_Sasswagon@beehaw.org · 2 days ago

          I don’t think ‘disabled people’ need a computer to generate content to participate in art creation, and I don’t think artists making art is exploitation. The artists, meaning anyone who ever had their art posted online, are the ones being exploited here, their work was stolen and made to work for tech investors.

          Even if these were tangible benefits they are a small compensation for the accelerated degradation of our shared planet, the mass robbery of nearly everyone on earth, and the further damage to our ability to critically think and create. And on top of that, the stuff it generates isn’t even very good.

          • Thomas D. Embree 🇨🇦@me.dm · 2 days ago

            @The_Sasswagon AI is not destroying the planet; it literally didn’t exist until a few years ago. The way we produce energy is the problem, and that won’t go away if we ban AI.

            AI is actually accelerating the timeline on a lot of important research, things that were decades away are now just years away. That alone might be what saves the climate.

            If it was as simple as using less electricity by using less technology, it wouldn’t be so hard to abandon your smartphone.

            • The_Sasswagon@beehaw.org · 1 day ago

              It’s using endless electricity and water to perform tasks I could do powered by a bowl of cereal in the morning. I’d rather need one solar panel than ten, and a river rather than a dried up well, personally, but ever increasing energy demands require the latter two.

              If by accelerating you are referring to making the problem worse so we have to deal with melted ice caps sooner, then I agree! I for one don’t really trust turbo predictive text to solve the collapsing jet stream, but I sure do expect it to play a part in causing it. Or maybe just the extraction of increasing material from colonized countries to pay for our funny memes and your “art” through solar panel and battery. Either way, it is contributing in a very real way to the destruction of our planet for little gain that could be achieved more efficiently by other means.

              The cool part about a smartphone is that I actually wanted it, and it did a thing nothing had before (except some PDAs, maybe). Also, living without one is very possible and I do so frequently; I’m not a chronic poster or social media user. Machine learning with a GUI on it is neither something I wanted nor novel, and it is not improving the world we live in; it is making it worse.

              The saving grace is this fad will pass as it becomes clear it’s the same as home automation, block chain, machine learning, the concept of web domains, etc. and it’s mostly been hype by tech investors all along. I would care about it a whole lot less if it weren’t so full of negative externalities.

          • Thomas D. Embree 🇨🇦@me.dm · 2 days ago

            @The_Sasswagon They do if they aren’t physically capable of holding a brush, instrument, etc.

            This allows people like that to paint, create music, etc. entirely on their own, by their own hand (or voice), without relying on the services of a skilled artist who might not be able to capture what that person is imagining.

            People who don’t have time to learn painting can now bring beauty into the world that would have otherwise never left their head.

            Artists are complaining about that. Fuck them.

            • The_Sasswagon@beehaw.org · 1 day ago

              I feel like I am just repeating myself: disability does not prevent creative expression. A broken arm does not define your ability to paint. Perhaps one medium or another is more challenging, but art has many, many forms, and we managed for thousands of years without a tech startup reinventing art. And not every culture in history has been as ableist as the one we live in today. Anyone can already make meaningful art.

              As for not having the time, I think that’s an excuse for taking a shortcut using other people’s art and trying to make it their own. It won’t be as impressive no matter how long they spend typing prompts into the computer; the person badly sketching mushrooms on their 10 at the local coffee chain is far more inspiring.

              I wish we lived in a time where we were allowed to do what we loved and I may be a little envious of the people who are able to, but they have a right to complain that their work is being stolen and invalidated by people who don’t value it.

        • apotheotic (she/her)@beehaw.org · 1 day ago

          I don’t disagree that its a misstep, but it feels like one that is not going to be corrected. It is going to be treated as the normal thing to do with training AI.

          I would hazard that there wouldn’t be nearly as many artists complaining about AI if it hadn’t been trained on immorally obtained inputs. The fact that it can effortlessly recreate the style of an artist that was added to the data without their consent is, I think, what gives most artists the visceral reaction that they have. “Not only is it doing what we can do (to some degree), it is doing so because our work was used without our consent”.

          AI is a valuable tool for art if used correctly, I don’t know if I agree that it is a disability aid. I can perhaps concede that someone who is entirely without fine motor ability can now make colours and shapes that vaguely resemble what they had in mind where perhaps they couldn’t before, but its difficult for me to consider that case “creating”. It is creating in the same sense as describing to your friend what you want and them trying to draw what you describe. There’s an output that resembles your input description, which might be enough for some?

    • jarfil@beehaw.org · 2 days ago

      There are 10 kinds of people: those who think they understand neural networks, those who try to understand neural networks, and those whose neural networks can’t spot the difference.

      It’s no coincidence how many people are bad at languages, communication, learning, or teaching. On the bright side, new generations are likely to be forced to get better.

      • Thomas D. Embree 🇨🇦@me.dm · 2 days ago

        @jarfil I think it’s an unavoidable instinct. In our ancestral environment, it was basic survival sense to fear the unknown and assume it could be dangerous. Caution just makes sense in that scenario.

        There hasn’t been enough time for our genes to adapt to our new, radically different environment. So people will continue to react to technological advances as if a tiger could leap out at any moment and maul them to death. Even I experience a vague unease, and I love technology.