People are talking about the new Llama 3.3 70B release, which generally performs better than Llama 3.1 70B (approaching Llama 3.1 405B's performance): https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_3

However, something to note:

Llama 3.3 70B is provided only as an instruction-tuned model; a pretrained version is not available.

Is this the end of open-weight pretrained models from Meta, or is Llama 3.3 70B Instruct just a better instruction-tuned version of a 3.1 pretrained model?

Comparing the model cards:
3.1: https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md
3.3: https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md

The same knowledge cutoff, the same amount of training data, and the same training time give me hope that it's just a better finetune, maybe of Llama 3.1 405B.

  • hendrik@palaver.p3x.de · 21 days ago

    A base model (pre-trained model) is fed a large dataset of random text files: books, Wikipedia, etc. After that, the model can autocomplete text, and it has learned language and concepts about the world. But it won't answer your questions. It'll refine them, or assume you're writing an email or a long list of unanswered questions and write some more questions underneath instead of engaging with you. Or assume it's writing a novel and autocomplete "…that's what the character asked while rolling their eyes." Or something completely arbitrary like that.
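    As a toy illustration of that "autocomplete" behavior (my own example, nothing from Meta): a real base model predicts the next token with a huge neural network, but the effect is like a vastly better version of this little word-level bigram chain, which also just continues whatever text you feed it:

```python
import random
from collections import defaultdict

# Toy "base model": a word-level bigram table built from a tiny corpus.
# A real pretrained LLM does the same job (predict what comes next),
# just with a neural network and trillions of tokens of text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def autocomplete(prompt: str, n_words: int = 6, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# It continues the text; it doesn't "answer" anything.
print(autocomplete("the cat"))
```

    Ask it a question and it will just keep producing plausible-looking continuations of the question, which is exactly the behavior the fine-tuning step fixes.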

    After that major first step, it gets fine-tuned for some task. The procedure is the same: it gets fed different text in almost the same way, which simply continues the training. But now it's text that tunes it to its role, for example being a chatbot. It gets lots of text consisting of a question, then a special character/token, and then an answer to that question. And it learns to reply with a (correct) answer if you put in a question followed by that token. It'll probably also be fine-tuned to write dialogue as a chatbot, and to follow instructions. (And to refuse some things, respond in a less biased way, be nice…)
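    To make that concrete, here's a sketch of what one such training example could look like as raw text. The special tokens are the ones from Meta's published Llama 3 prompt format; the helper function itself is just hypothetical glue for illustration:

```python
# Sketch: turning one (question, answer) pair into a single training
# string. The special tokens are from Meta's Llama 3 prompt format;
# this helper function is hypothetical, for illustration only.

def to_training_text(question: str, answer: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{answer}<|eot_id|>"
    )

example = to_training_text(
    "What is the capital of France?",
    "The capital of France is Paris.",
)
print(example)
```

    Feed the model lots of strings shaped like this and it learns that whatever follows the assistant header should be an answer, not more questions.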

    You can also put in domain-specific data to make it learn/focus on, say, medicine. I think that's also called fine-tuning. But as far as I understand, teaching it knowledge with arbitrary data comes before teaching/tuning it to follow instructions, or it might forget the latter.

    I think instruction tuning is a form of fine-tuning. It’s just called that to distinguish it from other forms of fine-tuning. But I’m not really an expert on any of this.