• 0 Posts
  • 60 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • Lying requires intent. Currently popular LLMs build responses one token at a time: when a model starts writing a sentence, it doesn’t know how that sentence will end, and therefore can’t have an opinion about its truth value. (I’d go further and claim it can’t really “have an opinion” about anything, but even if it can, it can neither lie nor tell the truth on purpose.) It can consider its own output (and therefore potentially have an opinion about whether it is true or false) only after that output has been generated, while producing the next token. There’s a minimal sketch of this token-by-token loop below.

    “Admitting” that it’s lying only proves that it has been exposed to “admission” as a pattern in its training data.
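
    To make the “one token at a time” point concrete, here’s a minimal sketch of greedy autoregressive decoding in Python. The GPT-2 model and greedy argmax sampling are just illustrative assumptions on my part (real chatbots use bigger models and fancier sampling), but the loop structure is the point: the model only ever scores the next token given everything it has already emitted.

```python
# Minimal sketch of greedy autoregressive decoding.
# Model choice (GPT-2) and greedy sampling are illustrative assumptions,
# not a claim about how any particular chatbot is configured.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits[:, -1, :]           # scores for the *next* token only
        next_id = logits.argmax(dim=-1, keepdim=True)  # pick one token; there is no plan beyond this step
        ids = torch.cat([ids, next_id], dim=-1)        # the token only becomes "context" after it's emitted

print(tok.decode(ids[0]))
```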

  • Oh, phonics is the old one (although it’s making a comeback). The “new” one that schools have been promoting for a couple of decades (and have recently realized isn’t very good) is cueing, the one where you just show kids whole words, encourage them to use context clues to guess what they mean, and hope that they eventually learn to read that way. Phonics is the one where you start with letter (and letter-group) sounds and learn to sound out words by reading out loud.