I think the point is that even if LLMs suck at task A, they might be really good at task B. Just because code written by LLMs is often riddled with security flaws, doesn’t mean LLMs also suck at identifying those flaws.
A broken clock is right twice a day. Inventions are only good when they reliably work for all of their intended purposes.
No? I have a pair of shoes that advertise as being great for running and walking. I love walking in them, but they suck for running. Are you saying the shoes suck and I shouldn’t use them at all, even though I like walking in them?
Tools don’t care about intent, and neither should you. Only things that work and things that don’t. And if it doesn’t work, you should use a different tool.
If they are advertised as being great for running and walking, but they are objectively terrible for running?
You can use them all you like, but the company that sold them to you misled you. That’s false advertising. If you call them running shoes, they’re bad running shoes.
Sure, but false advertising has nothing to do with how good an invention is, that’s a marketing problem.
I bought a thing that said it was good for A and B but it’s only good for B. Marketing problem! I didn’t make a bad decision! I wasn’t tricked! I’m a smart boy!
Alternate take: I want something that does B, so I research methods of doing B and find one that’s good. Good thing I’m a smart boy that doesn’t make purchasing decisions based on what the marketing department says things do.
There’s plenty of good reasons to criticize or be concerned about LLMs. You don’t need to make up dumb ones.
Yeah exactly, a code scan is completely unrelated to generative AI; the only thing connecting them is that someone used the chatbot as an interface to start the scan.