I ran an AI startup back in 2017, and this was a huge deal for us; I've seen no actual improvement on this problem since. The NYTimes is spot on, IMO.

  • Null User Object@programming.dev · edited · 3 months ago

    Completely irrelevant. The title and the posted article are about unintentionally training LLM text-generation models on the prior output of other AI models. Not having enough training data for other types of models is a completely different problem, and it isn't what the article is about.

    Nobody is going to "trawl the web for new data to train their next models" (to quote the article) for a model trying to cure diseases.