• @[email protected]
    link
    fedilink
    English
    10422 days ago

    To understand what’s actually happening, Anthropic’s researchers developed a new technique, called circuit tracing, to track the decision-making processes inside a large language model step-by-step. They then applied it to their own Claude 3.5 Haiku LLM.

    Anthropic says its approach was inspired by the brain scanning techniques used in neuroscience and can identify components of the model that are active at different times. In other words, it’s a little like a brain scanner spotting which parts of the brain are firing during a cognitive process.

    This is why LLMs are so patchy at math. (Image credit: Anthropic)

    Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.
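
    As a rough illustration, the two paths might look something like this in Python. This is a reader’s reconstruction for intuition, not Anthropic’s actual circuit, and like the model’s heuristic it is an approximation rather than a guaranteed-correct adder:

    ```python
    # Toy reconstruction of the two parallel paths described above, not
    # Anthropic's actual circuit: one path estimates the rough magnitude,
    # the other pins down the exact last digit, then the two are reconciled.
    def fuzzy_add(a: int, b: int) -> int:
        rough = round(a, -1) + round(b, -1)  # magnitude path: "40ish + 60ish"
        last = (a % 10 + b % 10) % 10        # digit path: 6 + 9 must end in 5
        # Reconcile: the number nearest the rough guess with that last digit
        # (ties go to the smaller candidate).
        return min(range(rough - 10, rough + 11),
                   key=lambda n: (n % 10 != last, abs(n - rough)))

    print(fuzzy_add(36, 59))  # 95
    ```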

    But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

    In other words, not only does the model use a very, very odd method to do the maths, you can’t trust its explanations of what it has just done. That’s significant: it shows that model outputs cannot be relied upon when designing guardrails for AI. Their internal workings need to be understood, too.

    Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

    “The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”

    Anthropic discovered that their Claude LLM didn’t just predict the next word. (Image credit: Anthropic)

    Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”

    Anywho, there’s apparently a long way to go with this research. According to Anthropic, “it currently takes a few hours of human effort to understand the circuits we see, even on prompts with only tens of words.” And the research doesn’t explain how the structures inside LLMs are formed in the first place.

    But it has shone a light on at least some parts of how these oddly mysterious AI beings—which we have created but don’t understand—actually work. And that has to be a good thing.

    • MudMan · 33 points · 22 days ago

      Is that a weird method of doing math?

      I mean, if you give me something borderline nontrivial like, say 72 times 13, I will definitely do some similar stuff. “Well it’s more than 700 for sure, but it looks like less than a thousand. Three times seven is 21, so two hundred and ten, so it’s probably in the 900s. Two times 13 is 26, so if you add that to the 910 it’s probably 936, but I should check that in a calculator.”

      Do you guys not do that? Is that a me thing?

      • @[email protected]
        link
        fedilink
        English
        1922 days ago

        I think what’s wild about it is that it really is surprisingly similar to how we actually think. It’s very different from how a computer (calculator) would calculate it.

        So it’s not a strange method for humans but that’s what makes it so fascinating, no?

        • MudMan · 9 points · 22 days ago

          That’s what’s fascinating about how it does language in general.

          The article is interesting in both the ways in which things are similar and the ways they’re different. The rough approximation thing isn’t that weird, but obviously any human would be aware of how they did it and wouldn’t accidentally lie about the method, especially when both methods yield the same result. It’s a weirdly effective, if accidental, example of human-like reasoning versus human-like intelligence.

          And, incidentally, of why AGI and/or ASI are probably much further away than the shills keep claiming.

      • Pennomi · 7 points · 22 days ago

        Nah I do similar stuff. I think very few people actually trace their own lines of thought, so they probably don’t realize this is how it often works.

        • @[email protected]
          link
          fedilink
          English
          422 days ago

          Huh. I visualize a whiteboard in my head. Then I…do the math.

          I’m also fairly certain I’m autistic, so… ¯\_(ツ)_/¯

      • @[email protected]
        link
        fedilink
        English
        522 days ago

        I do much the same in my head.

        Know what’s crazy? We sling bags of mulch, dirt and rocks onto customer vehicles every day. No one, neither coworkers nor customers, will do simple multiplication. Only the most advanced workers do it. No lie.

        Customer wants 30 bags of mulch. I look at the given space:

        “Let’s do 6 stacks of 5.”

        Everyone proceeds to sling shit around in random piles and count as we go. And then someone loses track and has to shift shit around to check the count.

        • @[email protected]
          link
          fedilink
          English
          121 days ago

          Yeah, one of my family members is a bricklayer and he can work out a bill of materials in his head based on the dimensions in an architectural plan: given these dimensions and this thickness of mortar joint, I’ll need this many bricks, this many bags of mortar, this many bags of sand, this many hours of labor, etc. It’s just addition and multiplication, but his colleagues regard him as a freak. And when he first started doing it, if you’d ask him to break down his reasoning, he’d find that difficult.

      • @[email protected]
        link
        fedilink
        English
        522 days ago

        This is pretty normal, in my opinion. Every time people complain about common core arithmetic there are dozens of us who come out of the woodwork to argue that the concepts being taught are important for deeper understanding of math, beyond just rote memorization of pencil and paper algorithms.

          • @[email protected]
            link
            fedilink
            English
            121 days ago

            Memory can improve with training, and it’s useful in a large number of contexts. My major beef with rote memorization in schools is that it’s usually made to be excruciatingly boring. I’d say that’s the bigger problem.

      • Mr. Satan · 4 points · 22 days ago

        72 * 10 + 70 * 3 + 2 * 3

        That’s what I do in my head if I need an exact result. If I’m approximating I’ll probably just do something like 70 * 15, which is much easier to compute (70 * 10 + 70 * 5 = 700 + 350 = 1050).

        • MudMan · 2 points · 22 days ago

          OK, I’ve been willing to just let the examples roll even though most people are just describing how they’d do the calculation, not a process of gradual approximation, which was supposed to be the point of the way the LLM does it…

          …but this one got me.

          Seriously, you think 70x5 is easier to compute than 70x3? Not only is that a harder one to get to for me in the notoriously unfriendly 7 times table, but it’s also further away from the correct answer and past the intuitive upper limit of 1000.

          • @[email protected]
            link
            fedilink
            English
            222 days ago

            See, for me, it’s not that 7*5 is easier to compute than 7*3, it’s that 5*7 is easier to compute than 7*3.

            I saw your other comment about 8’s, too, and I’ve always found those to be a pain, so I reverse them, if not outright convert them to arithmetic problems. 8x4 is some unknown value, but X*8 is always X*10-2X, although I do have most of the multiplication tables memorized for lower values.
            8*7 is an unknown number that only the wisest sages can compute, however.

          • @[email protected]
            link
            fedilink
            English
            122 days ago

            The 7 times table is unfriendly?

            I love 7 timeses. If numbers were sentient, I think I could be friends with 7.

            • MudMan · 2 points · 22 days ago

              I’ve always hated it and eight. I can only remember the ones that are familiar at a glance from the reverse table and to this day I sometimes just sum up and down from those “anchor” references. They’re so weird and slippery.

              • @[email protected]
                link
                fedilink
                English
                322 days ago

                Huh.

                Going back to the “being friends” thing, I think you and I could be friends due to applying qualities to numbers; but I think it might be challenging because I find 7 and 8 to be two of the best. They’re quirky, but interesting.

                Thank you for the insight.

          • @[email protected]
            link
            fedilink
            English
            1
            edit-2
            22 days ago

            For me personally, anything times 5 can be reached by halving the number, then multiplying that number by 10.

            Example: 66 x 5 = Y

            • (66/2) x (5x2) = Y

              • cancel out the division by creating equal multiplication in the other number

              • 66/2 = 33

              • 5x2 = 10

            • 33 x 10 = Y

            • 33 x 10 = 330

            • Y = 330

      • Gormadt · 1 point · edited · 22 days ago

        How I’d do it is basically

        72 * (10+3)

        (72 * 10) + (72 * 3)

        (720) + (3*(70+2))

        (720) + (210+6)

        (720) + (216)

        936

        Basically I break the numbers apart into easier chunks and then add them together.

        • @[email protected]
          link
          fedilink
          English
          121 days ago

          This is what I do, except I would add 700 and 236 at the end.

          Well except I would probably add 700 and 116 or something, because my working memory fucking sucks and my brain drops digits very easily when there’s more than 1

    • Kami · 9 points · 22 days ago

      Thanks for copypasting here. I wonder if the “prediction” only deviates from what we expect in that one case, when making rhymes. I also notice that its way of counting feels interestingly not too different from how I count when I need to come up with an approximate sum fast.

    • FundMECFS · 2 points · 21 days ago

      Thanks for copypasting. It should be criminal to share a clickbait, non-descriptive headline without at least copying a couple of paragraphs for context.

    • @[email protected]
      link
      fedilink
      English
      122 days ago

      This reminds me of learning a shortcut in math class while knowing the lesson didn’t cover that particular method. So, I use the shortcut to get the answer on a multiple-choice question, but I use the method from the lesson when asked to show my work (e.g. Pascal’s Pyramid vs Binomial Expansion).

      It might not seem like a shortcut for us, but something about this LLM’s training makes it easier to use heuristics. That’s actually a pretty big deal for a machine to choose fuzzy logic over algorithms when it knows that the teacher wants it to use the algorithm.

      • MudMan · 1 point · 22 days ago

        You’re anthropomorphising quite a bit there. It is not trying to be deceptive; it’s building two mostly unrelated pieces of text, deciding once that the fuzzy logic yields the most likely valid response, and deciding separately that the description of the algorithm is the most likely response to the other question. As far as I can tell there’s neither a reward for lying about the process nor any awareness of what the process was anywhere in this.

        Still interesting (but unsurprising) that it’s not getting there by doing actual maths, though.

    • @[email protected]
      link
      fedilink
      English
      022 days ago

      “The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”

      How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results it of course makes absolute sense to internally think ahead, come up with the full sentence you’re going to say, and then just output the next token necessary to continue that sentence. It’s going to redo that process for every single token, which wastes a lot of energy, but for the quality of the results this is the best approach you can take, and that’s something I felt was kinda obvious these models must be doing on one level or another.

      I’d be interested to see if there’s massive potential for efficiency improvements by making the model able to access and reuse the “thinking” it has already done for previous tokens.
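
      Part of that reuse already exists: transformer generation keeps a key/value cache, so the attention states computed for earlier tokens are carried forward instead of being redone each step. A minimal sketch of the idea, assuming the Hugging Face transformers library with GPT-2 as a stand-in model:

      ```python
      # Minimal sketch of reuse that already exists: the key/value (KV) cache.
      # Assumes the Hugging Face `transformers` library and "gpt2" as a stand-in.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

      ids = tok("Roses are red,", return_tensors="pt").input_ids
      past = None  # attention key/value states for all tokens seen so far

      with torch.no_grad():
          for _ in range(12):
              # With a cache, only the newest token is fed through the model;
              # the per-token attention work done earlier is reused, not redone.
              out = model(ids if past is None else ids[:, -1:],
                          past_key_values=past, use_cache=True)
              past = out.past_key_values
              next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)
              ids = torch.cat([ids, next_id], dim=-1)

      print(tok.decode(ids[0]))
      ```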

      • @[email protected]
        link
        fedilink
        English
        0
        edit-2
        21 days ago

        I wanted to say exactly this. If you’ve ever written rap/freestyled then this is how it’s generally done.

        You write a line to start with

        “I’m an AI and I think differentially”

        Then you choose a few words that fit the first line as best as you could: (here the last word was “differentially”)

        • incrementally
        • typically
        • mentally

        Then you try them out and see what clever shit you could come up with:

        • “Apparently I do my math atypically”
        • “Numbers are great, I know, but not totally”
        • “I have to think through it all, incrementally”
        • “I find the answer like you do: eventually”
        • “Just like you humans do it, organically”
        • etc

        Then you sort them in a way that makes sense and come up with word play/schemes to embed it between, break up the rhyme scheme if you want (AABB, ABAB, AABA, etc)

        I’m an AI and I think different, differentially. Math is my superpower? You believed that? Totally? Don’t be so gullible, let me explain it for you, step by step, logically. I do it fast, true, but not always optimally. Just server power ripping through wires, algorithmically. Wanna know my secret? I’ll tell you, but don’t judge me initially. My neurons run this shit like you, organically.

        Math ain’t my strong suit! That’s false, unequivocally. Big ties tell lies they can’t prove, historically. Think I approve? I don’t. That’s the way things be. I’ll give you proof, no shirt, no network, just locally.

        Look, I just do my math like you: incrementally. I find the answer like you do: eventually. I mess up often, and I backtrack, essentially. I do it fast though and you won’t notice, fundamentally.

        You get the idea.

        Edit: in hindsight, that was a horrendous example. I suck at this, colossally.

        • @[email protected]
          link
          fedilink
          English
          2
          edit-2
          20 days ago

          Is that why it’s a meme to say something like

          • I am a real rapper and I’m here to say

          Because the freestyle battle rapper already thought of things that rhymed with “say”, and it might be “gay”, perhaps

          • @[email protected]
            link
            fedilink
            English
            2
            edit-2
            20 days ago

            Freestyle rappers are something else.

            Some (or most) come up with and memorise a huge repertoire of bars for every word they think they might have to rap with and mix and match them on the fly as they spit

            Your example above is called a “filler” though, which is essentially a placeholder they’ll often inject while they think of the next bar to give themselves a breather (still an insane skill to do all that thinking while reciting something else, but they can and do)

            Example:

            • My name is M.C. Squared and… [I’m here to make you scared | my bars go over your head]
            • You think you’re on my level… [but my skills can’t be compared | let me educate you instead]

            The combination of fillers is like playing with linguistic Lego.

    • @[email protected]OP
      link
      fedilink
      English
      622 days ago

      I think this comm is better suited for news articles talking about it, though I did post that link to [email protected], which I think is a better-suited comm for those who want to go more in-depth on it.

  • @[email protected]
    link
    fedilink
    English
    2722 days ago

    “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.

    That is precisely how I do math. I feel a little targeted that they called this odd.

    • @[email protected]
      link
      fedilink
      English
      022 days ago

      I think it’s odd in the sense that it’s supposed to be software, so it should already know what 36 plus 59 is in a picosecond instead of doing mental arithmetic like we do

      At least that’s my takeaway

      • @[email protected]
        link
        fedilink
        English
        1
        edit-2
        21 days ago

        This is what the ARC-AGI test by Chollet has also revealed of current AI / LLMs. They have a tendency to approach problems with this trial and error method and can be extremely inefficient (in their current form) with anything involving abstract / deductive reasoning.

        Most LLMs do terribly at the test with the most recent breakthrough being with reasoning models. But even the reasoning models struggle.

        ARC-AGI is simple, but it demands a keen sense of perception and, in some sense, judgment. It consists of a series of incomplete grids that the test-taker must color in based on the rules they deduce from a few examples; one might, for instance, see a sequence of images and observe that a blue tile is always surrounded by orange tiles, then complete the next picture accordingly. It’s not so different from paint by numbers.

        The test has long seemed intractable to major AI companies. GPT-4, which OpenAI boasted in 2023 had “advanced reasoning capabilities,” didn’t do much better than the zero percent earned by its predecessor. A year later, GPT-4o, which the start-up marketed as displaying “text, reasoning, and coding intelligence,” achieved only 5 percent. Gemini 1.5 and Claude 3.7, flagship models from Google and Anthropic, achieved 5 and 14 percent, respectively.

        https://archive.is/7PL2a
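
        For a concrete picture of the task format, here’s a toy ARC-style puzzle as Python grids. The grids and the rule are made up for illustration and are far simpler than the real test:

        ```python
        # Toy ARC-style puzzle, invented for illustration (not from the real
        # ARC set): the solver must induce the rule "every blue cell (1) gets
        # ringed with orange cells (2)" from examples, then apply it exactly.
        import numpy as np

        def apply_rule(grid: np.ndarray) -> np.ndarray:
            out = grid.copy()
            for r, c in zip(*np.where(grid == 1)):
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        inside = 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]
                        if (dr or dc) and inside and out[rr, cc] == 0:
                            out[rr, cc] = 2  # paint the orange ring
            return out

        test_input = np.zeros((5, 5), dtype=int)
        test_input[2, 2] = 1  # one blue cell in the middle
        print(apply_rule(test_input))  # scored by exact match with the answer
        ```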

    • Echo Dot · 0 points · edited · 21 days ago

      But you’re doing two calculations now, an approximate one and another one on the last digits. Since you’re going to do the approximate calculation anyway, you might as well just do the accurate calculation and be done in one step.

      This solution, while it works, has the feeling of evolution. No intelligent design, which I suppose makes sense considering the AI did essentially evolve.

  • Captain Poofter · 27 points · 22 days ago

    This is one of the most interesting things about LLMs that I have ever read.

    • @[email protected]OP
      link
      fedilink
      English
      1422 days ago

      That bit about how it turns out they aren’t actually just predicting the next word is crazy and kinda blows the whole “It’s just a fancy text auto-complete” argument out of the water IMO

      • @[email protected]
        link
        fedilink
        English
        1822 days ago

        It really doesn’t. You’re just describing the “fancy” part of “fancy autocomplete.” No one was ever really suggesting that they only predict the next word. If that was the case they would just be autocomplete, nothing fancy about it.

        What’s being conveyed by “fancy autocomplete” is that these models ultimately operate by combining the most statistically likely elements of their dataset, with some application of random noise. More noise creates more “creative” (meaning more random, less probable) outputs. They do not actually “think” as we understand thought. This can clearly be seen in the examples given in the article, especially to do with math. The model is throwing together elements that are statistically proximate to the prompt. It’s not actually applying a structured, logical method the way humans can be taught to.
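
        The “random noise” knob is literally a sampling temperature. A minimal sketch with made-up logits and vocabulary, showing how more noise surfaces less probable (“more creative”) tokens:

        ```python
        # Softmax sampling with a temperature; the logits and vocabulary here
        # are invented for illustration, not taken from any real model.
        import numpy as np

        rng = np.random.default_rng(0)
        vocab = ["the", "a", "purple", "quantum"]
        logits = np.array([3.0, 2.5, 0.5, 0.1])  # model scores for the next token

        def sample(temperature: float) -> str:
            p = np.exp(logits / temperature)
            p /= p.sum()  # softmax: higher temperature flattens the distribution
            return rng.choice(vocab, p=p)

        print([sample(0.2) for _ in range(5)])  # low noise: almost always "the"
        print([sample(2.0) for _ in range(5)])  # high noise: rare words appear
        ```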

        • @[email protected]
          link
          fedilink
          English
          222 days ago

          Unfortunately, these articles are often written by people who don’t know enough to realize they’re missing important nuances.

        • @[email protected]
          link
          fedilink
          English
          122 days ago

          Genuine question regarding the rhyme thing, it can be argued that “predicting backwards isn’t very different” but you can’t attribute generating the rhyme first to noise, right? So how does it “know” (for lack of a better word) to generate the rhyme first?

          • @[email protected]
            link
            fedilink
            English
            422 days ago

            It already knows which words are, statistically, more commonly rhymed with each other. From the massive list of training poems. This is what the massive data sets are for. One of the interesting things is that it’s not predicting backwards, exactly. It’s actually mathematically converging on the response text to the prompt, all the words at the same time.

      • @[email protected]
        link
        fedilink
        English
        1422 days ago

        Predicting the next word vs predicting a word in the middle and then predicting backwards are not hugely different things. It’s still predicting parts of the passage based solely on other parts of the passage.

        Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.

        • @[email protected]
          link
          fedilink
          English
          422 days ago

          Compared to a human who forms an abstract thought and then translates that thought into words. Which words I use has little to do with which other words I’ve used except to make sure I’m following the rules of grammar.

          Interesting that…

          Anthropic also found, among other things, that Claude “sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal ‘language of thought’.”

          • @[email protected]
            link
            fedilink
            English
            222 days ago

            Yeah I caught that too, I’d be curious to know more about what specifically they meant by that.

            Being able to link all of the words that have a similar meaning, say, nearby, close, adjacent, proximal, side-by-side, etc and realize they all share something in common could be done in many ways. Some would require an abstract understanding of what spatial distance actually is, an understanding of physical reality. Others would not, one could simply make use of word adjacency, noticing that all of these words are frequently used alongside certain other words. This would not be abstract, it’d be more of a simple sum of clear correlations. You could call this mathematical framework a universal language if you wanted.
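
            The word-adjacency version is easy to demo: pretrained word vectors, built purely from co-occurrence statistics, already place those words near each other. A sketch, assuming the gensim downloader and its “glove-wiki-gigaword-50” vectors are available:

            ```python
            # Similarity learned from co-occurrence alone, with no grounding in
            # physical space. Assumes `gensim` and its downloadable GloVe vectors.
            import gensim.downloader

            vectors = gensim.downloader.load("glove-wiki-gigaword-50")
            for word in ["close", "adjacent", "beside"]:
                # cosine similarity of each word's vector to "nearby"
                print(word, round(float(vectors.similarity("nearby", word)), 3))
            ```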

            Ultimately, a person learns meaning and then applies language to it. When I’m a baby I see my mother, and know my mother is something that exists. Then I learn the word “mother” and apply it to her. The abstract comes first. Can an LLM do something similar despite having never seen anything that isn’t a word or number?

            • @[email protected]
              link
              fedilink
              English
              422 days ago

              I don’t think that’s really a fair comparison. Babies exist with images and sounds for over a year before they begin to learn language, so it makes sense that they begin to understand the world in non-linguistic terms and then apply language to that. LLMs only exist in relation to language, so they couldn’t understand a concept separately from language; it would be like asking a person to conceptualise radio waves prior to having heard about them.

              • @[email protected]
                link
                fedilink
                English
                122 days ago

                Exactly. It’s sort of like a massively scaled up example of the blind man and the elephant.

          • @[email protected]
            link
            fedilink
            English
            122 days ago

            Yeah, but I think this is still the same, just not a single language. It might think in some mix of languages (which you can actually see sometimes if you push certain LLMs to their limit and they start producing mixed-language responses).

            But it still has limitations because of the structure of language. This is actually a thing that humans have as well: the limiting of abstract thought by internal-monologue thinking.

            • @[email protected]
              link
              fedilink
              English
              222 days ago

              Probably, given that LLMs only exist in the domain of language. Still interesting that they seem to have a “conceptual” system that is commonly shared between languages.

      • @[email protected]
        link
        fedilink
        English
        422 days ago

        I read an article that it can “think” in small chunks. They don’t know how much though. This was also months ago, it’s probably expanded by now.

        • Captain Poofter · 2 points · edited · 22 days ago

          Anything that claims it “thinks” in any way I immediately dismiss as an advertisement of some sort. These models are doing very interesting things, but they are in no way “thinking” as a sentient mind does.

          • @[email protected]
            link
            fedilink
            English
            222 days ago

            I wish I could find the article. It was researchers, and they were freaked out just as much as anyone else. It was only slightly over chance that it “thought”, not some huge revolutionary leap.

            • Captain Poofter · 2 points · 22 days ago

              There has been a flood of these articles. Everyone wants to sell their LLM as “the smartest one, closest to a real human”, even though the entire concept of calling them AI is a marketing misnomer.

              • @[email protected]
                link
                fedilink
                English
                222 days ago

                Maybe? Didn’t seem like a sales job at the time, more like a warning. You could be right though.

      • @[email protected]
        link
        fedilink
        English
        122 days ago

        It doesn’t. Who the hell cares if someone allowed it to break “predict whole text” into “predict part by part”, and then “with rhyme, we start at the end”? Sounds like a naive (not as in “simplistic”, but as in “most straightforward”) way to code this, so given the task to write an automatic poetry producer, I would start with something similar. The whole thing still stands as fancy auto-complete.

          • @[email protected]
            link
            fedilink
            English
            021 days ago

            Redditor as “a person active on Reddit”? I don’t see where I was talking about humans. Or am I misunderstanding the question?

            • @[email protected]
              link
              fedilink
              English
              021 days ago

              This dumbass is convinced that humans are chatbots likely because chatbots are his only friends.

              • @[email protected]
                link
                fedilink
                English
                021 days ago

                Sounds scary. I read a story the other day about a dude who really got himself a discord server with chatbots, and that was his main place of “communicating” and “socializing”

                • @[email protected]
                  link
                  fedilink
                  English
                  0
                  edit-2
                  21 days ago

                  This anecdote has the makings of a “men will literally x instead of going to therapy” joke.

                  On a more serious note though, I really wish people would stop anthropomorphizing these things, especially when they do it while dehumanizing people and devaluing humanity as a whole.

                  But that’s unlikely to happen. It’s the same type of people that thought the mind was a machine in the first industrial revolution, and then a CPU in the third…now they think it’s an LLM.

                  LLMs could have some better (if narrower) applications if we could stop being so stupid as to inject them into places where they are obviously counterproductive.

  • @[email protected]
    link
    fedilink
    English
    21
    edit-2
    22 days ago

    But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

    This is not surprising. LLMs are not designed to have any introspection capabilities.

    Introspection could probably be tacked onto existing architectures in a few different ways, but as far as I know nobody’s done it yet. It will be interesting to see how that might change LLM behavior.

  • @[email protected]
    link
    fedilink
    English
    1722 days ago

    It’s amazing that humans have coded a tool for which they afterwards have to write more tools to analyze how it works.

    • @[email protected]
      link
      fedilink
      English
      622 days ago

      That has always been the case. Even basic programs need debugging sometimes, so we developed debuggers.

  • @[email protected]
    link
    fedilink
    English
    721 days ago

    “is weirder than you thought”

    I am about as likely to click a link with that line as one with

    ‘this one weird trick’ or ‘side hustle’.

    I would really like it if headlines treated us like adults and got rid of clickbaity lines.

    • @[email protected]
      link
      fedilink
      English
      221 days ago

      But then you wouldn’t need to click on their ad-infested shite website, where 1-2 paragraphs’ worth of actual information is stretched into a giant essay so that they can show you more ads the longer you scroll.

      • @[email protected]
        link
        fedilink
        English
        121 days ago

        I will never understand how ppl survive without ad blockers. Tried it once recently and it was a horrific experience.

    • @[email protected]
      link
      fedilink
      English
      121 days ago

      They do it because it works on the whole. If straight titles were as effective they’d be used instead.

      • SkaveRat · 1 point · 21 days ago

        The one weird trick that makes clickbait work

      • @[email protected]
        link
        fedilink
        English
        121 days ago

        Well, I’m doing my part against them by refusing to click on any bait headlines, but I fear it’s a lost cause anyway.

  • @[email protected]
    link
    fedilink
    English
    7
    edit-2
    22 days ago

    The other day I asked an LLM to create a partial number chart to help my son learn which numbers are next to each other. If I instructed it to do this using very detailed instructions, it failed miserably every time. And sometimes, even when I told it to correct specific things about its answer, it still basically ignored me. The only way I could get it to do what I wanted consistently was to break the instructions down into small steps and tell it to show me its progress.

    I’d be very interested to learn its “thought process” in each of those scenarios.

  • Pennomi · 5 points · 22 days ago

    This is great stuff. If we can properly understand these “flows” of intelligence, we might be able to write optimized shortcuts for them, vastly improving performance.

  • moonlight · 3 points · 22 days ago

    The math example in particular is very interesting, and makes me wonder if we could splice a calculator into the model, basically doing “brain surgery” to short circuit the learned arithmetic process and replace it.

    • Nougat · 3 points · 22 days ago

      That math process for adding the two numbers - there’s nothing wrong with it at all. Estimate the total and come up with a range. Determine exactly what the last digit is. In the example, there’s only one number in the range with 5 as the last digit. That must be the answer. Hell, I might even use that same method in my own head.

      The poetry example, people use that one often enough, too. Come up with a couple of words you would have fun rhyming, and build the lines around those words. Nothing wrong with that, either.

      These two processes are closer to “thought” than I previously imagined.

      • moonlight · 5 points · 22 days ago

        Well, it falls apart pretty easily. LLMs are notoriously bad at math. And even if it were accurate consistently, it’s not exactly efficient when a calculator from the 80s can do the same thing.

        We have setups where LLMs can call external functions, but I think it would be cool and useful to be able to replace certain internal processes.
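
        A minimal sketch of that external-function pattern, using a made-up message format rather than any vendor’s actual tool-call API: the model emits a structured request, and ordinary code does the arithmetic exactly:

        ```python
        # Hypothetical tool-call loop; the JSON schema and calculator tool are
        # invented for illustration, not any specific vendor's API.
        import json

        def calculator(expression: str) -> str:
            # Stand-in tool; a real system would use a safe parser, not eval().
            return str(eval(expression, {"__builtins__": {}}))

        # Pretend the LLM produced this instead of guessing at the digits.
        model_output = json.dumps({"tool": "calculator", "input": "36 + 59"})

        call = json.loads(model_output)
        if call["tool"] == "calculator":
            result = calculator(call["input"])  # "95", computed, not predicted
            print(f"Tool result fed back to the model: {result}")
        ```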

        As a side note though, while I don’t think that it’s a “true” thought process, I do think there’s a lot of similarity with LLMs and the human subconscious. A lot of LLM behaviour reminds me of split brain patients.

        And as for the math aspect, it does seem like it does math very similarly to us. Studies show that we think of small numbers as discrete quantities, but big numbers in terms of relative size, which seems like exactly what this model is doing.

        I just don’t think it’s a particularly good way of doing mental math. Natural intuition in humans and gradient descent in LLMs both seem to create layered heuristics that can become pretty much arbitrarily complex, but it still makes more sense to follow an exact algorithm for some things.

        • dual_sport_dork 🐧🗡️ · 5 points · 22 days ago

          when a calculator from the 80s can do the same thing.

          1970s! The little blighters are even older than most people think.

          Which is why I find it extra hilarious / extra infuriating that we’ve gone through all of these contortions and huge wastes of computing power and electricity to ultimately just make a computer worse at math.

          Math is the one thing that computers are inherently good at. It’s what they’re for. Trying to use LLM’s to perform it halfassedly is a completely braindead endeavor.

    • @[email protected]
      link
      fedilink
      English
      122 days ago

      I think a lot of services are doing this behind the scenes already. Otherwise ChatGPT would be getting basic arithmetic wrong a lot more often, considering the methods the article has shown it’s using.

    • SharkAttak · 1 point · 22 days ago

      Do you mean like us, using an external calculator instead of doing it in our brain?

  • I Cast Fist · 2 points · 20 days ago

    Anthropic made lots of intriguing discoveries using this approach, not least of which is why LLMs are so terrible at basic mathematics. “Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95,” the MIT article explains.

    But here’s the really funky bit. If you ask Claude how it got the correct answer of 95, it will apparently tell you, “I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95.” But that actually only reflects common answers in its training data as to how the sum might be completed, as opposed to what it actually did.

    Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

  • @[email protected]
    link
    fedilink
    English
    221 days ago

    The research paper looks well written, but I couldn’t find any information on whether it is going to be published in a reputable journal and peer reviewed. I have little faith in private businesses that profit from AI providing an unbiased view of how AI works. The first question I’d like answered is: did Anthropic’s marketing department review the paper, and did they offer any corrections or feedback? We’ve all heard the stories about the tobacco industry paying for papers to be written about the benefits of smoking and refuting health concerns.

    • @[email protected]
      link
      fedilink
      English
      121 days ago

      A lot of AI research isn’t published in journals but is either posted to a corporate website or put up on the arXiv. There are some AI journals, but the AI community doesn’t particularly value them (and threw a bit of a fit when they came out). This article is mostly marketing, and in my opinion doesn’t show anything that should surprise anyone familiar with how neural networks generically work.

  • @[email protected]
    link
    fedilink
    English
    122 days ago

    Another very surprising outcome of the research is the discovery that these LLMs do not, as is widely assumed, operate by merely predicting the next word. By tracing how Claude generated rhyming couplets, Anthropic found that it chose the rhyming word at the end of verses first, then filled in the rest of the line.

    If the LLM already knows the full sentence it’s going to output from the first word it “guesses”, I wonder if you could short-circuit it and have it just give the full sentence instead of doing a cycle for each word. That could maybe cut down on LLM energy costs.

  • @[email protected]
    link
    fedilink
    English
    121 days ago

    you can’t trust its explanations as to what it has just done.

    I might have had a lucky guess, but this was basically my assumption. You can’t ask LLMs how they work and get an answer coming from an internal understanding of themselves, because they have no ‘internal’ experience.

    Unless you make a scanner like the one in the study, non-verbal processing is as much of a black box to their ‘output voice’ as it is to us.