• @[email protected]
    20 · 3 months ago

    So if the Chinese version is so efficient, and is open source, then couldn’t OpenAI and Anthropic run the same model on their huge hardware and get enormous capacity out of it?

    • @[email protected]
      10 · 3 months ago

      Not necessarily… if I gave you my “faster car” to run on your private 7-lane highway, you could definitely squeeze out every last bit of speed the car offers, but no more.

      DeepSeek works as intended on 1% of the hardware the others allegedly “require” (allegedly, because remember this is all a super-hyped bubble). If you run it on more powerful machines it will perform better, but only to a certain extent; it will not suddenly develop more or better capabilities just because the hardware it runs on is faster.

      • @[email protected]
        4 · 3 months ago

        This makes sense, but it would still allow a hundred times more people to use the model without running into limits, no?
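        That intuition checks out as back-of-envelope arithmetic. Every number below is a made-up assumption purely for illustration (not a real benchmark of either model): if each query costs ~1% of the compute, a fixed fleet serves ~100× the queries.

        ```python
        # Back-of-envelope serving-capacity math.
        # All constants are illustrative assumptions, not measured figures.
        FLEET_FLOPS = 1e18                 # assumed total inference compute available
        COST_PER_QUERY_BIG = 1e12          # assumed FLOPs per query, "big" model
        COST_PER_QUERY_EFF = COST_PER_QUERY_BIG * 0.01  # claimed ~1% of the compute

        queries_big = FLEET_FLOPS / COST_PER_QUERY_BIG
        queries_eff = FLEET_FLOPS / COST_PER_QUERY_EFF

        print(f"capacity gain: {queries_eff / queries_big:.0f}x")  # → 100x
        ```

        So efficiency gains don’t make one answer smarter, but they do raise how many answers the same hardware can produce.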

      • @[email protected]
        2 · 3 months ago

        Didn’t DeepSeek get around some of the data-wall problems by generating good chain-of-thought data with an intermediate RL model? That approach should work with the tried-and-tested scaling laws, just using much more compute.

    • @[email protected]
      9 · 3 months ago

      OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.

      Theoretically the best move for them would be to train their own, larger model using the same technique (so as to still fully utilize their hardware), but this is easier said than done.
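      To sketch why a cheaper technique plus the same hardware implies a larger model, here’s a rough calculation using the common C ≈ 6·N·D training-FLOPs approximation (C = compute, N = parameters, D = tokens). The “10× cheaper” factor is an illustrative assumption, not a measured DeepSeek number.

      ```python
      # Rough training-compute math using the common approximation C ≈ 6 * N * D.
      # The 10x efficiency factor below is purely illustrative.

      def training_flops(params: float, tokens: float) -> float:
          return 6 * params * tokens

      budget = training_flops(params=1e11, tokens=2e12)  # some fixed compute budget

      # If a technique makes training ~10x cheaper in effective FLOPs,
      # the same budget covers a model ~10x larger at the same token count.
      cheap_factor = 10
      bigger_params = (budget * cheap_factor) / (6 * 2e12)
      print(f"{bigger_params / 1e11:.0f}x larger model on the same budget")
      ```

      The hard part the comment alludes to is that the “same technique” has to actually transfer to a much bigger model, which scaling laws only loosely guarantee.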

    • 小莱卡
      8 · 3 months ago

      Yes, but have you considered that “China bad”?