Aidan Gomez interview: Scaling Limits Emerging, AI Use-cases with PMF & Life After Transformers
This article describes a series of revolutionary products launched by the DeepMind team over the past decade, including AlphaGo, AlphaZero, and AlphaFold. Through self-play and reinforcement learning, these systems achieved remarkable progress, such as defeating human Go champions and predicting 3D protein structures.
The article highlights AlphaEvolve, a system that trains AI to make interdisciplinary scientific discoveries and can also improve and optimize solutions to other problems. Its appearance sparked discussions about reinforcement learning, human-machine collaboration loops, and technical ethics.
The article concludes by urging readers to consider how humans can coexist with AI, emphasizing the importance of human value judgments alongside AI's efficient exploration capabilities, and looks forward to a new era of augmented research.
Reference:
https://www.youtube.com/watch?v=sLsYzhk_4CY

## Aidan Gomez interview: Scaling Limits Emerging, AI Use-cases with PMF & Life After Transformers
The interview mainly covers Aidan's perspective on artificial intelligence. He believes that humans are still the undisputed gold standard: if a model is built for humans, then humans are probably best suited to evaluate its effectiveness.
Aidan also thinks that human labeling has become too expensive. It is easy to find 100,000 ordinary people to teach a model how to chat, but impossible to find 100,000 doctors to teach it medicine. This means we can only train models on data from ordinary people first, then fine-tune them with a much smaller pool of expert data, such as 100 doctors, and use that pool to generate a thousand times as much similar synthetic data.
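The pipeline described above can be sketched in a few lines. This is a minimal illustration only: `synthesize` is a hypothetical stand-in for a model generating variants of an expert-labeled example, not a real API, and the numbers (100 doctors, 1000x expansion) simply mirror the figures in the interview.

```python
# Hypothetical sketch of the data pipeline Aidan describes: a small pool of
# expert-labeled examples seeds a roughly thousandfold-larger synthetic set.
# `synthesize` is an illustrative placeholder, not a real model call.

def synthesize(example: str, n: int) -> list[str]:
    """Stand-in for a model producing n paraphrased variants of one example."""
    return [f"{example} (synthetic variant {i})" for i in range(n)]

def build_training_set(expert_pool: list[str], multiplier: int = 1000) -> list[str]:
    """Expand a small expert-labeled pool with synthetic data."""
    dataset = list(expert_pool)  # keep the original human labels
    for example in expert_pool:
        dataset.extend(synthesize(example, multiplier))
    return dataset

expert_pool = [f"doctor-labeled case {i}" for i in range(100)]  # e.g. 100 doctors
dataset = build_training_set(expert_pool)
print(len(dataset))  # 100 originals + 100 * 1000 synthetic = 100100
```

In practice the synthetic variants would be generated by the fine-tuned model itself and filtered for quality before training, but the shape of the pipeline is the same.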
Aidan has also changed his views on some assumptions, particularly about Scaling Laws. He used to be very loyal to the idea of Scaling Laws, but it has now become clear that relying solely on scaling is not enough to succeed. Test-time compute scaling still requires a lot of computation and makes inference 3-10 times more expensive, while training compute becomes cheaper.
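The cost shift he describes is simple to see in arithmetic. The sketch below uses made-up placeholder prices; the only point is that test-time scaling (e.g. sampling several candidate answers per query) multiplies per-query inference cost by the sample count, which is where the 3-10x figure comes from.

```python
# Illustrative arithmetic only: test-time compute scaling multiplies the
# per-query inference cost. The base cost here is a made-up placeholder.

def inference_cost(base_cost_per_query: float, samples_per_query: int) -> float:
    """Cost of serving one query when the model generates several candidates."""
    return base_cost_per_query * samples_per_query

base = 0.01  # hypothetical cost of a single forward pass, in dollars
for k in (1, 3, 10):
    print(f"best-of-{k}: ${inference_cost(base, k):.2f} per query")
```

A model trained with less compute can be cheaper to build, but if it needs 3-10 samples (or much longer reasoning traces) per answer, the savings move from training into a recurring inference bill.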
Regarding his views on the future of AI, Aidan remains enthusiastic and believes that we are currently limited by supply rather than demand, so he doesn’t think there will be any large-scale unemployment. Instead, he thinks AI will enhance people’s abilities to do more things and provide more of what the world wants and needs.
Aidan also expressed some concern about potential risks brought by AI, such as malicious individuals getting access to more advanced models ahead of others, and whether we have the ability to retrain people for new jobs if their work is affected. However, he doesn't worry about the "Terminator" risk, because AI technology has many more pressing problems to worry about first.
Reference:
https://www.youtube.com/watch?v=sLsYzhk_4CY