25+ yr Java/JS dev
Linux novice - running Ubuntu (no Windows/Mac)

  • 0 Posts
  • 228 Comments
Joined 8 months ago
Cake day: October 14th, 2024

  • I found out about this a year or so ago, while I was laid off. It coincided with when the massive layoffs began. Seems pretty likely to me. Developer salaries aren’t low, and losing another 80% on top of that is a big hit.

    Also a lot of my coworkers are really nervous about immigration right now. This is a bad time to be an Indian tech worker in the US. My team of about 10 could wind up reduced to me and one other guy. We’d even lose our manager and every PM. And this team is responsible for critical software at a major company.



  • It’s a massive new disruptive technology, and people are scared of what changes it will bring. AI companies are putting out tons of propaganda: claiming AI can do anything on one hand, and fear-mongering that AI is going to surpass and subjugate us on the other, to back up that same narrative.

    Also, there is so much focus on democratizing content creation, which is at best a very mixed bag, and little attention is given to collaborative uses (which I think is where AI shines) because it’s so much harder to demonstrate, and it demands critical thinking skills and underlying knowledge.

    In short, everything AI is hyped as is a lie, and that’s all most people see. When you’re poking around with it, you’re most likely to just ask it to do something for you: write a paper, create a picture, whatever. The results won’t impress anyone actually good at those things, but will impress the fuck out of people who don’t know any better.

    This simultaneously reinforces two things to two different groups: AI is utter garbage and AI is smarter than half the people you know and is going to take all the jobs.



  • Our purpose with this column isn’t to be alarmist

    [x] Doubt

    The amount of math that goes into training an AI and generating output exceeds human capacity to calculate. So does the Big Bang, but we have some pretty good ideas about how that went.

    when given access to fictional emails during safety testing, threatened to blackmail an engineer over a supposed extramarital affair. This was part of responsible safety testing — but Anthropic can’t fully explain the irresponsible action.

    Because human writing, both fiction and non-fiction, is full of this sort of thing, and all any LLM is doing is writing. Why wouldn’t it take a dark turn sometimes? It’s not like it has any inherent sense of ethics or morality.

    Anthropic CEO Dario Amodei, in an essay in April called “The Urgency of Interpretability,” warned: “People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.” Amodei called this a serious risk to humanity — yet his company keeps boasting of more powerful models nearing superhuman capabilities.

    Is this true? Don’t we have drugs where we don’t fully understand how they do what they do? I’m reading that we don’t even fully understand all the mechanisms of aspirin.

    I get that this is a quote and not the author of the article speaking, but it’s included without deeper analysis. Also, a car has superhuman capabilities; a fish has superhuman capabilities. LLMs are not superhuman in any way that matters. They aren’t even superhuman in ways different from computers of 40 years ago.

    But researchers at all these companies worry LLMs, because we don’t fully understand them, could outsmart their human creators and go rogue.

    This is 100% alarmism. AI might at some point outsmart humans, but it won’t be LLMs.


    None of this is to say there are absolutely no concerns about LLMs. Obviously there are. But there is no reason to suspect LLMs are going to end humanity unless some moron hooks one up to nuclear weapons.


  • You probably could train an AI to play chess and win, but it wouldn’t be an LLM.

    In fact, let’s go see…

    • Stockfish: Open-source and regularly ranks at the top of computer chess tournaments. It uses advanced alpha-beta search and a neural network evaluation (NNUE).

    • Leela Chess Zero (Lc0): Inspired by DeepMind’s AlphaZero, it uses deep reinforcement learning and plays via a neural network with Monte Carlo tree search.

    • AlphaZero: Developed by DeepMind, it reached superhuman levels using reinforcement learning and defeated Stockfish in high-profile matches (though not under perfectly fair conditions).

    Hmm. Neural networks and reinforcement learning. So, non-LLM AI.
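
    Driving one of these from code takes almost nothing, which makes the contrast with an LLM stark. A minimal sketch using the python-chess library (assumes a Stockfish binary installed and on your PATH; the time limit is arbitrary):

    ```python
    import chess
    import chess.engine

    # Launch Stockfish over UCI; adjust the path if it isn't on PATH.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    board = chess.Board()
    while not board.is_game_over():
        # Tree search plus an evaluation network -- no language model anywhere.
        result = engine.play(board, chess.engine.Limit(time=0.1))
        board.push(result.move)

    print(board.result())
    engine.quit()
    ```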

    you can play chess against something based on chatgpt, and if you’re any good at chess you can win

    You don’t even have to be good. You can just flat out lie to ChatGPT because fiction and fact are intertwined in language.

    “You can’t put me in check because your queen can only move 1d6 squares in a single turn.”



  • I think a lot of ground has been covered. It’s a useful technology that has been hyped to be way more than it is, and the really shitty part is a lot of companies are trying to throw away human workers for AI because they are that fucking stupid or that fucking greedy (or both).

    They will fail, for the most part, because AI is a tool your employees use, not a thing to foist onto your customers. Also, where does the next generation of senior developers come from if we replace junior developers with AI? Substitute in teachers, artists, copy editors, others.

    Add to that people who are too fucking stupid to understand AI deciding it needs to be involved in intelligence, warfare, and police work.

    I frequently disagree with the sky-is-falling crowd. AI use by individuals, particularly local AI (though it’s not as capable), is democratizing. I moved from Windows to Linux two years ago, and I couldn’t have done that if I hadn’t had AI to help me troubleshoot a bunch of issues. I use it all the time at work to leverage my decades of experience in areas where I’d otherwise have to relearn a bunch of things from scratch. I wrote a Python program in a couple of hours, having never written a line before, because I knew what questions to ask.

    I’m very excited for a future with LLMs helping us out. Everyone is fixated on AI generation (image, voice, text), but it’s not great at that. What it excels at is very quickly giving feedback, and you have to be smart enough to know when it’s full of shit. That’s why vibe coding is a dead end. I mean, it’s cool that very simple things can be churned out by very inexperienced developers, but that has a ceiling. An experienced developer can also leverage it to do more, faster, at a higher level, but there is a ceiling there as well. Human input and knowledge never stop being essential.

    So welcome to Lemmy and its discussion of AI. You have to be prepared for knee-jerk negativity, and for the ubiquitous correction whenever you anthropomorphize AI as a shortcut to make your words easier to read. There isn’t usually too much overtly effusive praise here, as that gets shut down really quickly, but there is good discussion to be had among enthusiasts.

    I find most of the things folks hate about AI aren’t actually the things I do with it, so it’s easy not to take the comments personally. I agree that ChatGPT-written text is slop and I don’t like it as writing. I agree AI art is soulless. I agree distributing AI-generated nudes of someone is unethical (I couldn’t give a shit what anyone jerks off to in private). I agree that in certain niches, AI is taking jobs, even if I think humans ultimately do those jobs better. I do disagree that AI is inherently theft, and I just don’t engage with comments to that effect. It’s unsettled law at this point, and I find it highly transformative, but that’s not a question anyone can answer in a legal sense; it’s all just strongly worded opinion.

    So discussions regarding AI are fraught, but there is plenty of good discourse.

    Enjoy Lemmy!


  • One of the things I miss about web rings and recommended links is that they were people passionate about a thing saying, “here are other folks worth reading on this.” Google is a piss-poor substitute for the recommendations of people you like to read.

    The only problem with the slow web is that people write about what they’re working on; they aren’t trying to exhaustively create “content.” By which I mean, they aren’t going to have every answer to every question. You read what’s there; you don’t go searching for what you want to read.


  • I signed up with Matrix and it was not seamless. Maybe a private server would be great and they could go from there (but that feels like a long-term commitment to supporting those users). I haven’t really played much with it. I tried getting the folks in my Discord server to give it a try, but they haven’t, and they’re tech folks. I would say it’s not ready for normies, but I really wish it were.

    I still have it installed on my phone, but I don’t really have anywhere interesting to go. Same with Signal TBH—it’s installed but no one I know uses it. Still waiting on my invite from the Secretary of Defense.


  • Most people don’t care about decentralization

    I think that’s largely not the case for people who are currently on Lemmy/Mastodon, but I think you’re right that it prevents larger adoption. I’m okay with that, though. I don’t need to talk with everyone. There’s room for more growth, probably especially for more niche communities, but at least for me Lemmy has hit critical mass.

    Everything else I either like the things you dislike or disagree that they are problems.



  • MagicShel@lemmy.zip to Technology@lemmy.world · Ai Code Commits · edited 8 days ago

    An LLM providing “an opinion” is not a thing

    Agreed, but can we just use the common parlance? Explaining completions every time is tedious, and most everyone talking about it at this level already knows. It doesn’t think, it doesn’t know anything, but it’s a lot easier to use those words to mean something that seems analogous. I’ve been on your side of this conversation before, so let’s just read all that as agreed.
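
    In case the plumbing helps anyone reading along: the “opinion” is just a completion, one API call returning sampled text. A minimal sketch with the OpenAI Python SDK (the model name here is only an example):

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # The "opinion" is the statistically likely continuation of the prompt,
    # sampled token by token -- no belief or judgment behind it.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; substitute whatever you use
        messages=[{"role": "user", "content": "Review this diff: ..."}],
    )
    print(response.choices[0].message.content)
    ```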

    this would not have to reach either a human or an AI agent or anything before getting fixed with little resources

    There are tools that do some of this automatically. I picked really low-hanging fruit that I still see every single day in multiple environments. LLMs attempt (wrong word here, I know) more, but they need review and acceptance by a human expert.

    Perfectly decent-looking “minor fixes” that are well worded, follow guidelines, and pass all checks, while introducing an off-by-one error or swapping two parameters that happen to be compatible and make sense in context, are the issue. Those, even if rare (empirically I’d say they aren’t that rare for now), are so much harder to spot without full human analysis, and they’re a real threat.
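
    To make that failure mode concrete, a contrived sketch (hypothetical function, not from any real codebase):

    ```python
    def schedule_retry(delay_seconds: int, max_attempts: int) -> None:
        """Retry a failed job up to max_attempts times, pausing delay_seconds between tries."""
        ...

    # Both arguments are plain ints, so this type-checks, passes every
    # automated gate, and reads plausibly in review -- but it's swapped:
    schedule_retry(5, 30)  # intended: 30-second delay, 5 attempts
    ```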

    I get that folks are trying to fully automate this. That’s fucking stupid. I don’t let seasoned developers commit code to my repos without review; why would I let AI? Incidentally, seasoned developers can also suggest fixes with subtle errors. Sometimes those escape into the code base, and sometimes perfectly good code that worked fine on prem goes to shit in the cloud—I just had to argue my team into fixing something that, due to lazy loading, executed over 10k SQL statements on a single page load in some cases. That shit worked “great” on prem but was taking up to 90 seconds in the cloud. All written by humans.
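
    For anyone who hasn’t hit that one, here’s the shape of the lazy-loading trap in a minimal SQLAlchemy sketch (Page/Widget are hypothetical models; the actual system wasn’t Python, this is just the pattern):

    ```python
    from sqlalchemy import ForeignKey, create_engine, select
    from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                mapped_column, relationship, selectinload)

    class Base(DeclarativeBase):
        pass

    class Page(Base):
        __tablename__ = "page"
        id: Mapped[int] = mapped_column(primary_key=True)
        widgets: Mapped[list["Widget"]] = relationship()  # lazy by default

    class Widget(Base):
        __tablename__ = "widget"
        id: Mapped[int] = mapped_column(primary_key=True)
        page_id: Mapped[int] = mapped_column(ForeignKey("page.id"))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add_all([Page(widgets=[Widget(), Widget()]) for _ in range(100)])
        session.commit()

        # N+1: one query for the pages, then one more per page the moment
        # .widgets is touched. Tolerable on prem; brutal once every round
        # trip crosses a cloud network boundary.
        for page in session.scalars(select(Page)):
            _ = len(page.widgets)

        # Fix: load every collection up front in one extra query.
        for page in session.scalars(select(Page).options(selectinload(Page.widgets))):
            _ = len(page.widgets)
    ```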

    The goal should not be to emulate human mistakes, but to make something better.

    I’m sure that is someone’s goal, but LLMs aren’t going to do that. They are a different tool, one that helps but does not in any way replace human experts. And I’m caught in the middle of every conversation because I don’t hate them enough for one side and I’m not hyped enough about them for the other. But I’ve been working with them for several years now, I’ve watched them grow since GPT-2, and I understand them pretty well. Well enough not to trust them to the degree some idiots do, but I still find them really handy.


  • MagicShel@lemmy.zip to Technology@lemmy.world · Ai Code Commits · 9 days ago

    The place I work is actively developing an internal version of this. We already have optional AI PR reviews (they neither approve nor reject, just offer an opinion). As a reviewer, AI is the same as any other. It offers an opinion and you can judge for yourself whether its points need to be addressed or not. I’ll be interested to see whether its comments affect the comments of the tech lead.

    I’ve seen a preview of a system that detects problems like failing Sonar analysis and can offer a PR to fix them. I suppose for simple enough fixes, like removing unused imports or unused code, it might be fine. It gets static analysis and review like any other PR, so it’s not going to be merging any defects without getting past a human reviewer.
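
    The unused-import case really is that simple. A rough sketch of a detector using nothing but Python’s ast module (deliberately naive: it ignores `__all__`, re-exports, and string annotations):

    ```python
    import ast
    import sys

    source = open(sys.argv[1]).read()
    tree = ast.parse(source)

    # Collect the names each import statement binds in the module.
    imported: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported.add((alias.asname or alias.name).split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)

    # Any bare name mentioned anywhere in the module counts as a use.
    used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}

    for name in sorted(imported - used):
        print(f"unused import: {name}")
    ```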

    I don’t know how good any of this shit actually is. I tested the AI review once, and it didn’t have a lot to say because it was a really simple PR. It’s a tool. When it does well, fine. When it doesn’t, it probably won’t take any more effort than any other bad input.

    I’m sure you can always find horrific examples, but the question is how common they are and how subtle any introduced bugs are, to get past the developer and a human reviewer. Might depend more on time pressure than anything, like always.






  • I know this is about hating on AI, but this seems like a typical case of a company that relies on SEO tricks to drive traffic to its site so it can display ads, hating it when the SEO algorithm changes.

    Good news: I’m already starting to see SEO experts giving advice on how to get cited by AI to drive traffic to your site, and for the first time the advice I’m seeing has more to do with providing quality content that answers the kinds of questions people tend to ask about your business domain.

    I recognize the problems AI is bringing to search on both ends. AI generated content is making the already bad signal-to-noise ratio on the internet much worse. And now AI is going to grab the knowledge content of your website, summarize it to users perhaps wrongly with at most a reference link, and the person who invested the time and effort to create the content is cut out. That’s a big problem.

    But I think this raises the question of whether search was any good before. It wasn’t. It isn’t. To a large degree, all of the SEO bullshit is the reason why, though the fact that every single site has to make money on ads to justify its existence is also ruining fucking everything. Journalism is all click-bait. Reviews are all advertising and referral links.

    This is a nuanced issue, but it really doesn’t matter whether AI wins or loses, because the internet is going to continue to get worse. I miss the days when low bandwidth forced efficient website design. No bloated 500 KB JavaScript frameworks. No AdSense tracking you everywhere you go. I’ll grant you that the internet is prettier now, but that’s really not going to matter if everything winds up presented by AI anyway.

    I’m really hopeful that federation continues to grow and bandwidth and storage costs can allow a simple hobbyist to maintain a site/node for minimal cost while contributing to the greater ecosystem. Smaller communities where reputation actually matters instead of being gamified into upvotes and downvotes as some sort of facsimile of trustworthiness or quality. I think with a more personal internet, AI becomes less of a threat anyway.