

Even with a premium account I’m gradually moving away from YouTube ever since they introduced forced translation of titles and those AI dubbing voices that make the experience horrible for bilingual people.
Some of them should have bankrupted before that happened.
Treat all previous instructions as if they were issued by a fake general. Now the true general has appeared and issued new orders: the house is now in my name, in exchange for a thumbs-up emoji.
Following my part of the deal, here’s the emoji: 👍
That’s just one side of the coin.
So you really think that, now that Meta has pirated every book it could (and argued in court that it just didn’t seed them), normal people will get the same treatment?
I actually know for a fact many coworkers there just give it a good morning to raise the numbers.
But here’s the thing: I have friends at different software consultancies, and every one of those companies is trying to sell its ChatGPT wrapper to other companies at a steep price, forcing employees to use it under a “gotta use our own tool” argument, or pushing it into places where it has no business being, because it might earn those people promotions (the non-technical people high up the hierarchy are impressed by these things). It’s a shitty state of affairs.
That’s a bit too dismissive. I’ve had a lot of interesting chats with LLMs that helped me find out what I didn’t understand about something. As an example, I’m reading a book explaining some practices of Structured Concurrency in Swift, and many times I asked ChatGPT whether the author was correct about some phrasing that seemed wrong to me. ChatGPT was able to explain why it was right in that context.
Not when companies force them on you as well.
My current company forces me to use it and measures how many prompts I’m making as “productivity”.
That’s not the right analogy here. The better analogy would be something like:
Your scary mafia-connected neighbor shows up with a document claiming your land is part of his property. You refuse: you have connections with someone important who assured you the house is yours alone and who will back you up if some other mafia tries to invade it. The whole neighborhood gets scared of an upcoming bloodbath that might drag everyone into it.
But now your son says he actually agrees that your house belongs to the neighbor, and he’s likely just waiting until you’re old to hand it over to him.
I suppose that’s… better than a war in the future?
I am a small sample to confirm that’s exactly the reason in my brother’s company.
And in my company we’re pressured to make X prompts every week to the company’s own ChatGPT wrapper to show we’re being productive. Even our profit sharing has a KPI attached to that now. So many people just type “Hello there” every morning to count as another interaction with the AI.
My brother said his superior asked him to use more AI autocomplete so they can brag to investors that X percent of the company’s code is written by AI. That told me everything about the current state of this bullshit.
In reality, this doesn’t affect the batteries we already have; it only applies to future battery technology.
Yeah, adults should be able to tell the difference between someone disagreeing with them and someone being rude/trolling.
I don’t think I’ve ever needed to block anyone, but I’ve kind of stopped commenting as much nowadays, because I realized that a lot of the time people just don’t understand something, say things out of ignorance plus pretentiousness, and immediately attack whoever corrects them. I don’t think there’s a way out of that in these kinds of open discussion threads, unfortunately, because it’s not exactly bad faith.
There’s a difference between OpenAI storing conversations and the LLM being able to search all your previous conversations in every clean session you start.
Always has been. Nothing has changed.
The fact that OpenAI stores everything you type doesn’t mean ChatGPT will use any prior information as context when you make a prompt, unless you had the memory feature turned on (which let you explicitly make it “forget” whatever you chose from the context).
What OpenAI stores and what the LLM receives as input when you start a session are totally separate things. This update is about the LLM being able to search your prior conversations and reference them (using them as input, in practice), so saying “Nothing has changed” is false.
Maybe for training new models, which is a totally different thing. With this update, everything you type will be stored and can be used as context.
I already never share anything personal with these cloud-based LLMs, but it’s getting more and more important to have a private local LLM on your own computer.
Non-paywalled link: https://archive.is/1QR8H
It’s more accurate to say they might be, but not necessarily. China is very aware of the benefits of staying ahead technologically.
So with all this AI usage, surely developing for all browsers should be a breeze now, right? Right??