
Episode 14


00:00:33 The gap between experts and everyone else lies in how the "memory budget" is allocated
00:04:16 AI as "Newton": how do we find the formulas behind how everything grows?
00:08:26 Making AI not just obedient, but good at asking questions
00:12:09 The art of AI thinking: how to be both fast and good?
00:18:06 AI reads people: 20 games of chess is enough to "see through" you
The five papers covered in this episode:
[LG] Capacity-Constrained Continual Learning
[Google DeepMind]
https://arxiv.org/abs/2507.21479
---
[LG] EvoSLD: Automated Neural Scaling Law Discovery With Large Language Models
[Peking University & Tsinghua University]
https://arxiv.org/abs/2507.21184
---
[LG] Teaching Language Models To Gather Information Proactively
[Microsoft]
https://arxiv.org/abs/2507.21389
---
[LG] TriangleMix: A Lossless and Efficient Attention Pattern for Long Context Prefilling
[Microsoft Research]
https://arxiv.org/abs/2507.21526
---
[LG] Learning to Imitate with Less: Efficient Individual Behavior Modeling in Chess
[University of Toronto]
https://arxiv.org/abs/2507.21488


