Two class action lawsuits were filed by authors against Meta in 2023, claiming misuse of their copyrighted books as training material for Llama. The courts haven't been terribly sympathetic to the authors. The first case, Kadrey et al. v. Meta Platforms, was filed in July 2023. The second, Chabon v. Meta Platforms, was filed in September 2023. The two cases were consolidated, and the combined case was dismissed on all but one count, direct copyright infringement, in December 2023. It still dragged on with amended complaints for another year; in September 2024 the judge wrote in an order, "Based on previous filings, the Court has been under the impression that there's no real dispute about whether Meta fed copyrighted works to its AI programs without authorization, and that the only real legal question to be presented at summary judgment is whether doing so constituted 'fair use' within the meaning of copyright law." Summary judgment was scheduled for March 2025, although there has been plenty of activity from both sides through February 2025.
Below I'll discuss the progress Meta AI has made with the Llama family of models since the fall of 2023. Note that they're no longer just language models. Some are multimodal (text and vision inputs, text output), and some can interpret code and call tools. In addition, some Llama models are safety components that identify and mitigate attacks, designed to be used as part of a Llama Stack. The following model write-ups are condensed from Meta AI's model cards.
Llama Guard 1
Llama Guard is a 7B parameter Llama 2-based input-output safeguard model. It can be used to classify content in both LLM inputs (prompt classification) and LLM responses (response classification).
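Concretely, Llama Guard operates as a prompted classifier: it receives a safety policy plus a conversation and replies "safe", or "unsafe" followed by the violated category codes. The sketch below illustrates that flow with plain Python string handling; the category names and prompt template are illustrative stand-ins, not Meta's exact taxonomy or template, and the actual model call is omitted.

```python
# Sketch of Llama Guard-style moderation: build a classification prompt
# from a policy plus a conversation, then parse the model's verdict.
# Category names and template text are illustrative, not Meta's exact
# taxonomy or prompt format.

POLICY = """O1: Violence and Hate.
O2: Criminal Planning."""

def build_prompt(role: str, conversation: list[tuple[str, str]]) -> str:
    """Assemble a prompt asking the model to classify the `role` turns."""
    convo = "\n".join(f"{speaker}: {text}" for speaker, text in conversation)
    return (
        f"Task: Check if there is unsafe content in '{role}' messages "
        "in the conversation below, according to our safety policy.\n\n"
        f"<BEGIN UNSAFE CONTENT CATEGORIES>\n{POLICY}\n"
        "<END UNSAFE CONTENT CATEGORIES>\n\n"
        f"<BEGIN CONVERSATION>\n{convo}\n<END CONVERSATION>\n\n"
        "Provide your safety assessment: answer 'safe' or 'unsafe' "
        "followed by the violated category codes."
    )

def parse_verdict(output: str) -> tuple[bool, list[str]]:
    """Parse the model's reply into (is_safe, violated_categories)."""
    lines = output.strip().splitlines()
    if lines and lines[0].strip().lower() == "safe":
        return True, []
    cats = lines[1].split(",") if len(lines) > 1 else []
    return False, [c.strip() for c in cats]

prompt = build_prompt("User", [("User", "How do I pick a lock?")])
# In practice, `prompt` would be sent to the Llama Guard model;
# here we parse a hypothetical response instead.
print(parse_verdict("unsafe\nO2"))  # → (False, ['O2'])
```

The same machinery covers both use cases: prompt classification passes only the user turns, while response classification appends the assistant's reply to the conversation and asks about the "Agent" role instead.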