How are you using new AI technology? Maybe you're only deploying things like ChatGPT to summarize long texts or draft up mindless emails. But what are you losing by taking these shortcuts? And is this tech taking away our ability to think?
Actually, it’s taking me quite a lot of effort and learning to set up AIs that I run locally, since I don’t trust any of them with my data. If anything, it’s got me interested in learning again.
That’s exactly the kind of effort in thought and learning that the article says is being lost when it comes to reading and writing. You’re taking the time to learn and struggle through it; as long as you don’t give that up once you have the AI running, you’re not losing that.
I have difficulty learning, but using AI has helped me quite a lot. It’s like a teacher who will never get angry, no matter how dumb your question is or how many times you ask it.
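(For a rough idea of what a local setup can look like, here is a minimal sketch assuming a tool like Ollama is serving a model on its default local port; the model name and prompt are only placeholders, not this commenter’s actual setup.)

```python
# Minimal sketch: query a locally hosted model through Ollama's HTTP API.
# Assumes Ollama is already installed and running on its default port (11434);
# "llama3" is just an example model name that has been pulled locally.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Explain Bayes' theorem in one short paragraph.",
        "stream": False,  # return the whole answer in a single response
    },
    timeout=120,
)
# Everything stays on the local machine; no data leaves for a third-party service.
print(response.json()["response"])
```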
Mind you, I am not in school and I understand hallucinations, but having someone who is this understanding in a discourse helps immensely.
It’s a wonderful tool for learning, especially for those who can’t follow the normal pacing. :)
It’s not normal for a teacher to get angry. Those people should be replaced by good teachers, not by a nicely-lying-to-you-bot. It’s not a jab at you, of course, but at the system.
I agree, I’ve been traumatized by the system. Whatever I’ve learnt that’s been useful to me has happened through the internet, give or take a few good teachers.
I still think it’s a good auxiliary tool. If you understand its constraints, it’s useful.
It’s just really unfortunate that it’s a for-profit tool that will be used to try to replace us all.
Yeah, same. I’ve had to learn how to learn in spite of all the old, disillusioned creatures who hated their lives almost as much as they hated students.
And yet, I’m afraid learning from chatbots might be even worse.
Learning how to learn is so important. I only learned that as an adult.
The problem is that if it’s wrong, you have no way to know without double-checking everything it says.
To be fair, the same can be said of teachers. It’s important to recognise that AIs are only as accurate as any single source, and you should always check everything yourself. I have concerns about a future where our only available sources are through AI.
The level of psychopathy required for a human to lie as blatantly as an LLM does is almost unachievable.
Bruh, so much of our lives is made up of people lying, either intentionally or unintentionally by spreading misinformation.
I remember being in 5th grade when my public-school science teacher was teaching the “theory” of evolution, and then she mentioned there are “other theories, like intelligent design.”
She wasn’t doing it to be malicious, just a brainwashed idiot.
And that’s why we, as humans, know how to look for signs of this in other humans. It’s a skill we had to learn precisely because of that. Not only is that skill not applicable when you read the generated bullshit, it actually works against you.
Some people are mistaken, some people are actively misleading, but almost no one has the combination of being wrong just enough, and confident just enough, to sneak their bullshit under the bullshit detector.
You took that a slightly different way than I was expecting. My point is that we already have to be on the lookout for bullshit when getting info from other people, so it’s really no different when getting info from an LLM.
However, you took it to mean the LLM can’t distinguish between what’s true and false, which is obviously true, but it’s an interesting point to make nonetheless.
It’s not that an LLM can’t know truth; that’s obvious but beside the point. It’s that the user can’t really determine where the lies are, not to the degree that you can when getting info from a human.
So you really need to check everything: every claim, every word, every sound. You can’t assume good intentions, because there are no intentions in the real sense of the word, and you can’t extrapolate or interpolate. Every word of the data you’re getting might be a lie with the same certainty as any other word.
It requires so much effort to check properly that you either skip some of it or spend more time than you would have without the layer of lies.
I understand that. I am careful not to use it as my main teaching source, only as a supplement. It helps when I want to dive into the root cause of something, which I then double-check with real sources.
But like, why not go to the real sources directly in the first place? Why add an unnecessary layer that doesn’t really add anything?
I do go to the real source first. But sometimes, I just need a very simple explanation before I can dive deep into the topic.
My brain sucks; I give up very easily if I don’t understand something. (This has been true since way before short-form content and the internet.)
If I had to say how much I use it to learn, I’d say it’s about 30% of the total. It can’t teach you coursework from scratch like a real person can (even through videos), but it can help clear up doubts.
It’s not a big deal if you aren’t completely stupid. I don’t use LLMs to learn topics I know nothing about, but I do use them to help me figure out solutions to things I’m somewhat familiar with. In my case I find it easy to catch incorrect info, and even when I don’t, most of the time if you just occasionally tell it to double-check what it said, it self-corrects.
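(As a rough illustration of that “tell it to double-check” step, here is a hedged sketch using the same kind of local chat endpoint as in the earlier snippet; the model name and prompts are purely illustrative, and this only nudges the model to re-examine its own output rather than guaranteeing a correct answer.)

```python
# Sketch: ask a local model a question, then ask it to re-check its own answer
# in the same conversation. Assumes an Ollama server on the default port.
import requests

def chat(messages):
    r = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3", "messages": messages, "stream": False},
        timeout=120,
    )
    return r.json()["message"]["content"]

messages = [{"role": "user",
             "content": "When was the first transatlantic telegraph cable completed?"}]
answer = chat(messages)

# Feed the answer back and ask the model to double-check its own claim.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user",
     "content": "Double-check that answer and list anything you are unsure about."},
]
print(chat(messages))
```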
It is a big deal. There is a whole set of ways humans gauge the validity of info, and they’re perpendicular to the way we interact with fancy autocomplete.
Every single word might be false, with no pattern to it. So if you can and do check it, you’re just wasting your time and humanity’s resources instead of finding the info yourself in the first place. If you don’t check, or only think you do, it’s even worse: you’re being fed lies and believing them extra hard.