
Will AIGC Lead to Collective Cognitive Bias in Certain Groups?


When We Overtrust AI Responses

In my interactions with LLMs (Large Language Models), I’ve increasingly realized the importance of maintaining critical thinking.

With my growing dependence on LLMs (and AI search), I've found myself unconsciously inclined to trust their output, because its structured form and definitive tone make it seem particularly objective.

But is that really the case? We need to remember that LLM outputs are still products of algorithms and data, and they inevitably carry limitations.

Some might argue that because LLM training data comes from the internet and covers a massive amount of information, it should be comprehensive enough. This view, however, overlooks several key issues:

First, there’s a large amount of unverified information on the internet, and certain erroneous content may appear repeatedly in training data due to multiple reposts;

Second, LLMs perform probabilistic modeling on data during training, which may reinforce common but not necessarily accurate viewpoints;

Third, the timeliness of training data is also a major limitation, especially in rapidly evolving fields.

It's worth noting that high-quality content in certain professional fields may be unevenly represented online, and niche but important viewpoints can surface less often simply because they account for a smaller share of the data (the toy sketch below makes this mechanism concrete).
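To make the reinforcement argument concrete, here is a deliberately simplified sketch in Python. It is not how any real LLM is trained; the toy corpus and unigram counting are assumptions purely for illustration. It shows only that a model fit to match its data distribution favors whatever appears most often, accurate or not, which is the mechanism behind both the repetition problem and the dilution of niche viewpoints.

```python
# Toy illustration (hypothetical data, not real LLM training):
# a model that matches the frequencies of its corpus assigns the
# most-repeated claim the highest probability, true or not.
from collections import Counter

# Hypothetical corpus: one erroneous claim reposted nine times,
# one accurate but niche claim appearing once.
corpus = ["popular-but-wrong claim"] * 9 + ["niche-but-correct claim"]

counts = Counter(corpus)
total = sum(counts.values())

# Probability of each claim under a frequency-matching model.
for claim, n in counts.items():
    print(f"{claim}: p = {n / total:.2f}")

# Prints:
# popular-but-wrong claim: p = 0.90
# niche-but-correct claim: p = 0.10
```

A real model's training dynamics are far more complex, but the underlying pressure is the same: likelihood-based training rewards matching the data distribution, repeated errors included.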

Recognizing this, in the fields I work with most, I can do my best to think critically about LLM outputs, absorbing them selectively and reconsidering them. But what about fields I barely know?

I admit I cannot. With limited knowledge of the industry and little grasp of its terminology, I tend to turn to LLMs for answers, where in the past we would have turned to search engines.

However, the problem arises here:

Search engines at least display multiple information sources, letting us passively compare different viewpoints and claims. LLM responses, by contrast, are often singular and all-encompassing, and this "authoritative" style of expression makes it easy to overlook the biases and errors they may contain. (This is not to champion traditional search or denigrate current AI tools.)

More seriously, LLM responses are usually delivered with high confidence and apparent completeness, which reduces our motivation to keep exploring and questioning. When we face unfamiliar fields, this effect is especially pronounced.

Even more noteworthy, LLMs may fuse information from different sources, producing content that seems plausible but may in fact be erroneous (hallucinations).

This could lead to two serious consequences: first, we may accept erroneous or one-sided information; second, we may gradually lose the capacity for independent thinking and deep research, over-relying on AI's "explanations."

This cognitive bias affects more than individuals. As AIGC tools proliferate, I believe it may grow into a phenomenon of "collective blind obedience" on a much larger scale.

Especially in highly specialized fields, non-professionals may be more easily misled by AI’s “professional” expression.


Are We Losing the “Comparison” Ability?

Regarding this topic, namely the question in my title, "Will AIGC Lead to Collective Cognitive Bias in Certain Groups?": the underlying problem existed as early as the internet era. Information overload and algorithm-driven recommendation systems have long been criticized for creating filter bubbles (in the sense of public opinion, likewise below) and echo chambers, environments in which particular viewpoints, ideas, or information are continuously repeated and reinforced until they seem more prevalent and persuasive to participants than they really are. These effects entrench viewpoints within certain groups while isolating those groups from the outside world.

However, the emergence of AIGC (AI Generated Content) technology undoubtedly adds new complexity to this problem.

The convincing, standardized output of LLMs described earlier interacts with the "customized" phrasing they develop through user interaction, and together the two greatly amplify the persuasiveness and influence of the information.

On the other hand, will AI search break this deadlock?

In its report, Sequoia Capital predicts that AI search will rise as a killer application, and that by 2025 everyone may use at least two specialized AI search engines.

Some believe AI search can alleviate this problem by integrating multi-party information, but I worry the situation may be exactly the opposite.

When search engines also start using AI to integrate and present information, the remaining opportunities for "passive comparison" may shrink further.

In traditional search, we at least see titles and summaries from different websites, and this visual differentiation reminds us that information sources are diverse. In AI search, that "reminder" may be replaced by a smoother narrative: the AI not only integrates information but presents it in a more elegant, more persuasive way. This shift may make it easier to overlook the original sources and context of the information.

Admittedly, current AI search tools do annotate their sources and mark which references support which statements. But consider a similar phenomenon: citations in academic papers. Although every viewpoint has a cited source, in fast-paced reading we tend to accept the arguments in the text directly rather than verify each citation.

Even more concerning, AI search may reinforce our dependence on "quick answers." When a complex question can receive a seemingly complete answer within seconds, how much motivation remains to explore the deeper material? Behind this convenience, is there a hidden risk that we gradually lose our capacity for deep thinking?


Moving Forward Critically

I've raised these concerns and reflections in this article, but that doesn't mean I hold a negative attitude toward AI. Quite the opposite: it is precisely because we can see these potential cognitive traps that we can use such tools more clear-headedly and fully realize their value.

I've always believed that technology is neutral; what matters is the user's attitude and ability. Critical thinking is not a rejection of technology but a way to use it better. Like learning to distinguish true information from false, it is an essential skill in the digital age.

I remain optimistic about AI development, because every recognition and discussion of potential problems is an important step toward wiser use of AI.


My Thoughts Meet Research

Below are some papers that analyze in depth how LLMs interact with humans, and how that interaction affects our memory, trust, understanding, and self-perception.

Conversational AI Powered by Large Language Models Amplifies False Memories in Witness Interviews

The paper's results show that generative chatbots powered by large language models significantly increase the formation of false memories, inducing over three times more immediate false memories than the control group.

Unavoidable Social Contagion of False Memory From Robots to Humans

This article explores how people interact with voice- or text-based conversational agents (such as chatbots), which may inadvertently retrieve erroneous information from human knowledge bases, fabricate responses of their own, or deliberately spread false information for political purposes.

The Illusion of Understanding: Measuring the True Comprehension of Generic and Specific AI Models

This paper explores users' understanding of and trust in AI model outputs, finding that users may overtrust outputs even when they do not understand how the models work, which can leave them without the critical scrutiny that AI-generated information requires.

Trust in AI: Looking Beyond the Algorithm Itself

Researchers explore how people’s trust in AI systems forms and the potential blindness this trust may bring. The article points out that users tend to believe in AI’s authority, which may mask the uncertainty and potential errors of AI outputs.

The Algorithmic Self: How the Use of AI Changes the Subject

This paper discusses how using AI changes individuals' self-perception and decision-making, including a growing dependence on AI outputs that may erode independent thinking and critical analysis.

Note: in reviewing these papers, I have tried my best to ensure accuracy and completeness, including by using AI for summarization. However, any summary of complex research may miss details, so the overviews above cover only some papers in this field and mainly reflect their central viewpoints and tone, not their full content.

I also admit that listing these reviews here is partly to show that my viewpoint (that AIGC can negatively affect our thinking) is not a misjudgment born of purely subjective reasons.

I hope this summary and these shared experiences can prompt your own thinking, and perhaps even more comprehensive viewpoints than mine.

I suggest that readers interested in this topic consult the original papers directly for a more complete and accurate understanding.
