The essence of AI is not any specific technology (its definition today is actually quite vague), but a projection of future capabilities. Put differently, it is a shared public cognitive label.
Of course, this is admittedly just an empirical summary of the past; the future still holds many possibilities. And notice how speech recognition, recommendation algorithms, and the like now feel somewhat distant from what we currently call AI.
From another angle, though, looking at this claim through a product-design lens, I see the process of AI technology being internalized into product evolution: from lofty "technology," to "machine learning," to just "the algorithm."
First, consider Tesler's Law: every system carries a certain amount of inherent complexity that cannot be eliminated, only transferred. The continuous evolution of the "AI" label described above is exactly this process: as the technology matures and consensus forms, the complexity of "AI" transfers from users into the system itself.
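To make that transfer concrete, here is a minimal sketch (every name in it is hypothetical, not a real API): the same translation capability, exposed first with the complexity on the user's side, then with the product absorbing it.

```python
# Hypothetical stand-in for a real model API call.
def call_model(model: str, prompt: str, temperature: float) -> str:
    return f"[{model}, T={temperature}] response to: {prompt[:40]}..."

# Version 1: the complexity sits with the user, who must know which model
# to pick, how to phrase the prompt, and how to tune the decoding.
def translate_v1(text: str, model: str, prompt_template: str, temperature: float) -> str:
    return call_model(model=model, prompt=prompt_template.format(text=text), temperature=temperature)

# Version 2: the same complexity still exists, but the product has absorbed it.
# The user sees a single argument; the system owns everything else.
def translate_v2(text: str) -> str:
    template = "Translate the following text into English:\n{text}"
    return call_model(model="default-model", prompt=template.format(text=text), temperature=0.2)
```

Neither version has less total complexity; the only question is who carries it.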
As for prompts as products and models as products: don't these exist precisely because AI's complexity still leans toward the user rather than the system? And the transfer is happening.
That said, I believe prompts as products will keep a place (just as complexity cannot all sit on one side, but needs to be balanced as far as possible). The extensive two-sided research behind my recent articles, the summaries of YouTube interview videos made with browser extensions, and so on are all achieved through prompts; no product can meet these needs in the short term, nor will platforms ship such niche functionality on their own in the long run. The catch, however, is that Google's powerful ecosystem and model capabilities mean this workflow is only really usable through Gemini's Deep Research and its built-in YouTube tool.
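As a hedged illustration of what a prompt-as-product looks like in practice: the durable artifact is the prompt itself, an opinionated, reusable way of asking, while the model behind it is interchangeable. The helper below is a hypothetical placeholder, not any platform's real interface.

```python
# The "product" here is the prompt: a reusable, opinionated way of asking.
INTERVIEW_SUMMARY_PROMPT = """You are summarizing an interview transcript.
1. List the main claims each speaker makes.
2. Note where the speakers disagree.
3. End with three open questions the interview leaves unanswered.

Transcript:
{transcript}"""

def summarize_interview(call_model, transcript: str) -> str:
    # call_model is a hypothetical stand-in for whatever backend is available.
    return call_model(INTERVIEW_SUMMARY_PROMPT.format(transcript=transcript))
```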
Returning to the topic of complexity: complexity cannot disappear, only be transferred. My recent realization follows directly from this: truly great design makes users unaware of its existence.
Perhaps, for AI products that do not compete on the model itself, the real focus lies in how to hide the complexity behind the intelligence.
Speech recognition, machine translation, OCR, AlphaGo: all of these were AI. What do we call them now? They have been internalized, just as AlphaGo dissolved into reinforcement-learning engineering capability. In the future, today's multimodal large models may likewise be just another interface or platform service.
This is how the cognitive label evolves.
*The Deep Learning Revolution* has a passage to this effect: for decades there was skepticism about whether neural networks would ever be useful; then, once their power became apparent, some turned to asking whether AI would destroy humanity. Yann LeCun found both questions ridiculous, and he has always been outspoken about it, in private and in public alike. Decades later, in a video from the night he received the Turing Award, he said: "I've always believed I was absolutely right." He believed neural networks were a path to very real and very useful technology. That is what he said.
**The technology is still the same technology.** The extreme views different people hold about AI's future mostly fail to engage with what AI specifically is. For those holding extreme positions, it is perhaps their self-conception projected onto future AI capabilities rather than rational judgment.
Returning to the product perspective: letting ordinary people use AI without burden is the focus of AI applications that do not compete on models. The chatbot form itself is complexity migrating into the system. A pre-trained base model that only predicts the next token is useful for little beyond continuing a passage of text; post-training emerged precisely to transfer that complexity into the system, turning a prediction machine into a conversational assistant.
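A minimal sketch of that transfer, with an invented chat format (real post-trained models each have their own template): for a base model, the user must shape the input; for a chat product, the system shapes it for them.

```python
# Raw base model: the user must phrase the text so that a plausible
# continuation happens to be an answer.
base_input = "Q: Why is the sky blue?\nA:"

# Chat product: the user states the intent; the system applies the template
# the model was post-trained on (this format is invented for illustration).
messages = [{"role": "user", "content": "Why is the sky blue?"}]

def apply_chat_template(messages: list[dict]) -> str:
    parts = [f"<|{m['role']}|>\n{m['content']}" for m in messages]
    return "\n".join(parts) + "\n<|assistant|>\n"

print(apply_chat_template(messages))
```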
So you see Cursor's seamless tool calling and Manus's interaction patterns: the complexity sits behind the scenes, within the model itself.
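The same pattern in skeleton form, as a hedged sketch of a tool-calling loop (the model interface and tool set here are hypothetical, not Cursor's or Manus's actual implementation): the user sends one message, and the loop routes any tool requests behind the scenes.

```python
# Tools the system exposes to the model but never to the user (placeholders).
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
}

def run_turn(call_model, user_message: str) -> str:
    """call_model is a hypothetical function mapping a message history to a
    dict: either a tool request {'tool': ..., 'args': ...} or a final
    {'content': ...}."""
    history = [{"role": "user", "content": user_message}]
    while True:
        reply = call_model(history)
        if "tool" in reply:
            result = TOOLS[reply["tool"]](**reply["args"])
            history.append({"role": "tool", "content": result})
        else:
            return reply["content"]
```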
For products, perhaps "AI-first" should give way to **"User-first, AI-enabled"**. Experience design, interaction innovation, system encapsulation, and ecosystem building may well be where the competitive advantage lies.