Overall, Llama 4's release has set off a new round of competition and progress among open-source models. Meta invested heavily in Llama 4's pre-training and developed new training procedures to strengthen its length generalization, giving Llama 4 a clear advantage in handling long documents, complex conversations, and multi-turn reasoning tasks.
However, judging from users' hands-on tests, Llama 4's actual performance has been somewhat underwhelming. Although its pre-training data is ten times larger than that of previous versions, it appears to lag behind other models on certain tasks.
That said, this is still a major step forward for the open-source model field. It not only lets users download the Llama 4 models, but also gives more people a chance to take part in model development and optimization. The teasers Sam Altman posted on X also suggest that Meta's reasoning models are on the way, which will further drive the development of the open-source model field.
Finally, even though Llama 4's performance is not perfect in every respect, it remains an important milestone. It will keep pushing the open-source model field forward and may well lead to more powerful models and capabilities.
Reference:
https://ai.meta.com/blog/llama-4-multimodal-intelligence/