

It’s bullshit. What leaked was their command-line tool source code (named “Claude Code”) - very juicy in itself, but it has nothing to do with their models.


Hey,
Person here who despises Electron apps, partly because of the memory footprint and partly because I like neither Chromium nor Node.js - personal preference, mainly.
From your description I have the feeling that it’s unclear to your user base whether Electron is settled or up for debate. There is only a thin line between “explaining” and “defending”.
In terms of communication: “We’re using Electron as our foundation because it allows us to focus on development. We’ve considered alternatives like Tauri and XYZ and opted in favor of Electron.”
If there are situations that might make you rethink, state those as well (“if someone provides a proof of concept via XYZ that an alternative is faster by y% while enabling us to still use (your core libraries and languages), we might consider a refactor”).
If you engaged with me after an Electron rant about your codebase, you’d just raise my hope that I might change your mind! Don’t give people hope, don’t feed the trolls, and do your thing!
Just please be honest with yourself: your app doesn’t use “50 to 60 MB”, it uses 500 MB-ish on idle because of your choice. And that’s okay, as long as you as the developer say that it is.


Yeah, but that doesn’t solve OP’s problem re: Proton - what I meant was that perhaps there is no Netherlands server that provides their random port forwarding, or it gets a hiccup with it.
If you’re referring to me not using Proton:
The port forwarding is not the main reason (it’s their C-level that weirds me out) - and regarding port forwarding specifically: it’s not (only/mainly) qBittorrent that I want port forwarding for :)


I don’t use Proton so I can’t validate, but two things stand out to me:
Good luck!


That’s my problem: any single word humanizes the tool, in my opinion. Perhaps something like “stochastic debris” comes close, but there’s no chance to counter the combined force of pop culture, corp speak, and humanity’s talent for seeing humanoid behavior everywhere but in each other. :(


Accepting concepts like “right” and “wrong” gives those tools way too much credit, basically following the AI narrative of the corporations behind them. Those terms can only be applied to the output, not to the tool itself.
To be precise:
LLMs can’t be right or wrong because the way they work has no link to any reality - it’s stochastics, not evaluation. I also don’t like the term “hallucination” for the same reason. It’s simply a too-high temperature setting jumping into a nearby but unrelated vector set.
Why this is an important distinction: arguing that an LLM is wrong is arguing on the home turf of ChatGPT and the likes. It becomes “oh, but we’ll make them better!” and their marketing departments rejoice.
To take your calculator analogy: just as floating-point errors are inherent to those tools, wrong outputs are a core part of LLMs.
We can minimize that, but then they automatically lose part of their function. This limitation is much stronger for LLMs than limiting a calculator to 16 digits after the decimal point, though…
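The temperature point can be sketched with a toy softmax example (the tokens and logits below are made up for illustration, not taken from any real model): raising the temperature flattens the next-token distribution, so sampling can land on nearby but unrelated tokens.

```python
import math

def softmax(logits, temperature=1.0):
    """Scale logits by 1/temperature, then normalize to probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits: one clearly "related" continuation and
# several "unrelated" ones (invented numbers).
tokens = ["Paris", "Lyon", "banana", "quantum"]
logits = [5.0, 2.0, 0.5, 0.2]

for t in (0.2, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    print(t, [round(p, 3) for p in probs])
# At low temperature nearly all probability mass sits on the top token;
# at high temperature the unrelated tokens get real probability,
# which is where "hallucinated" continuations come from.
```

The model never evaluates anything against reality in either case - only the shape of the sampling distribution changes.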


Yeah, this is beyond ridiculous - blaming anything or anyone else.
I mean, accidentally letting loose an autonomous, non-tested, non-guardrailed tool in my dev environment… Well, tough luck, shit happens, something for a good post mortem to learn from.
Having an infrastructure that allowed a single actor to cause this damage? This shouldn’t even be possible this easily for a malicious human from within the system.


That’s an utterly ignorant statement.
To expect others, often volunteers, to take such a personal risk because the legislation in one part of the world is utterly fucked. How about expecting the people who actually live in that country and state, and have a chance to influence those laws, to step up their game instead of telling third parties to bear individual and personal consequences.


They outline the issues from their perspective.
What else should they do? Break their own licence model (which prohibits (geographic) discrimination) or break the law? It’s either one of those two or comply.
For users yes - for developers, as much as it saddens me, no.
Ubuntu, for example, started the discussion about what they need to do to show that the demanded effort was being put in.
It’s the devs that are put at risk here - and I dare say by design. Whether this merely correlates with or is caused by the support from the big OS corporations, one can only speculate. My speculation: at the very least strongly influenced.


There is no hard definition within the laws, so all of this is speculation. That means there is no technical answer, because the question at its core is a legal one.
Your TV, for example, can have a browser without problems.
You can have an integrated board that runs a full Linux without you being able to touch the underlying OS, and let that start a browser, too. You know those TV screens that show you traffic info and flight plans at the airport? Those are often full Linux computers set up exactly like that.
In short: we’ll only know once the law is actually tested. It’s written in a way that I, as a layman, could talk any software and even most hardware into its definition - it’s absolute bullshit…
I don’t think it’ll come globally at all - even the craziest laws I’ve read so far target “only” OS vendors.
If it comes, it’ll be regional only, as manufacturers will be hell-bent on not losing revenue in the rest of the world.
Keep in mind that age verification needs to be done at a local level, as there is no universal standard for what counts as an acceptable method.


Ah, but this means that if I can’t control the client (i.e. because I’ve set up a streaming server) then it’s not a solution for me - but if I do, then this is the cleaner one because it doesn’t re-encode the files.
Understood, thank you!


I might have a mistake in my thoughts/knowledge:
This would be a playback-tool-dependent solution though, right? Because then it would not be something at least I’d want.
Am I overlooking something? (Except the obvious “keep the original” aspect)


Opencloud is the way to go from among the established systems, in my opinion. https://github.com/opencloud-eu
File sharing and management has, for me, higher trust and stability requirements. Syncing with four developers and “doing everything” while based on TypeScript makes me suspicious - but I haven’t tried it hands-on.


Traefik and Caddy were mentioned; the third in the game is usually Nginx Proxy Manager.
I’m using both Traefik and Nginx Proxy Manager in two different setups. Nginx Proxy Manager can be configured natively via its UI, which makes checking configurations a bit easier.
Traefik, on the other hand, is configured easily within the compose file itself, so you have everything in one place.
This turned out to be tiresome though if you don’t have a monolithic compose file - that’s actually the reason why I switched to NPM in the first place.
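The labels-in-compose style looks roughly like this (a minimal sketch; the service, image and hostname are placeholders to adjust to your own setup):

```yaml
services:
  whoami:
    image: traefik/whoami   # tiny demo container, stands in for your app
    labels:
      - "traefik.enable=true"
      # route requests for this hostname to the container
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
```

Each service carries its own routing rules next to its definition, which is what keeps everything in one place - and also what gets tedious once those rules are scattered across many compose files.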
I don’t have any experience with caddy so can’t provide anecdotal insights there.


I really like it already, so take this as an alternative, not an improvement. I don’t have a good eye for aesthetics anyway, so this is more about structure.
Personally I switched from a single dashboard to purpose driven hubs - I can’t imagine a situation where I need my infrastructure and my calendar at the same time regularly for example.
Another point is context mixing: your release checker is quite far away from your appointments and calendar. It looks to me to be sorted by content rather than function (i.e. it’s entertainment, so it’s next to YouTube). The same is true for your interaction patterns: there is a lot of visual information which I’m sure you’ll rarely interact with but instead consume. And then there are clearly external links, both bottom left (Opencloud, tooling) and top right (external media), in addition to your own self-hosted content.
My suggestion is therefore a process instead of a change: note down for a few days when you consume which features of this awesome dashboard together. Then restructure the content of the whole dashboard based on your usage patterns - either as a new monolith or even experimenting with splitting it.
I’d even suggest using a different medium than your usage device (if it’s mainly a desktop PC, use pen and paper; if it’s your laptop, use your phone; if it’s your phone you use this dashboard on, then you might have different problems :D)


You don’t! At least not in the sense that I’m aware of the JADE thing:
JADE is not a well-established, professionally proven concept; it came from social media as a way for peers to handle narcissistic people.
Your reactions are based on hostility and rejection, and as I understand you, it’s your nerves that you want to preserve.
For this, in a professional workplace, there are multiple ways to deal with it - you can even use all of them at the same time. Just off the top of my head:
Hope this helps a bit!


If I understand you correctly: you want to be able to record one computer with another one at the system level (the BIOS part that comes before any operating system is loaded).
Although this is not Linux specific: your best bet is a video capture card, as you’ve suggested already. Anything else would depend on your BIOS supporting remote access, which is not exactly the same thing (my server BIOS, for example, can expose a website where I can then configure it from within a browser).
The problem with video capture is that you’d still have two sets of controls: one for the client and one for the host.
Depending on what your final result should be, it could actually be easier and cheaper to just get a stand for a smartphone, record it from there, and then crop precisely.
You then only have to worry about light reflections.
Oh I completely agree with that, just the jump to “a flawed model leaked” is too far. There’s already enough crap to mock, no need to make up additional stuff.