In the 21st century, does winning the war mean losing our minds?
Peggy Yin and Julie Heng
Co-Directors, Cognitive Security Task Force
tl;dr
Last week, the Pentagon's Strategic Capabilities Office launched an initiative to develop new "cognitive warfare capabilities," with the goal of disrupting "the cognition and the thinking ability"[1] of adversaries.
This is the latest in a series of announcements by military organizations preparing for war in the cognitive domain. China's People's Liberation Army has elevated "cognitive domain operations" – cyber-enabled influence operations on public opinion – to a strategic priority in its military modernization effort. Japan's latest National Security Strategy highlights the cognitive domain, and Australian Department of Defence documents describe plans to train "cognitive information warfare officers and specialists." NATO's Allied Command Transformation has established an Applied Cognitive Effects team with the explicit goal of achieving "cognitive superiority."
These cognitive operations target not just cognitive states but the cognitive processes[2] through which we update our beliefs, weigh evidence, and distinguish the credible from the incredible. This considerably expands the attack surface and raises the potential intensity and efficacy of attacks. It also makes such operations extraordinarily difficult, if not impossible, to detect: a victim may believe that their reasoning and sensemaking capacities remain fully autonomous, because the very faculties they would use to notice an attack have already been compromised.
The time to update our cognitive security was yesterday. We need to install it now.
Cognitive security protects against unauthorized access to and hazardous influence over cognitive processes, such as perception, reasoning, decision-making, learning, and judgment. At its core, cognitive security refers to our ability to maintain sovereignty over our inner mental landscape, including our thoughts, feelings, subjective experiences, and dispositions.[3] A cognitively secure world, then, supports human agency in deliberation, reflection, and sensemaking.
This may not seem like a new problem. Advertising, propaganda, and manipulation are as old as language itself. Across cultures, practices like meditation, debate, and psychotherapy have long served as "interiority integrity" measures that detect distortion. Just as computer systems use checksums and error-correction protocols to identify corruption, human traditions of self-reflection have helped us notice when our thinking has gone wrong. What's new about emerging technologies is their ability to deploy cognitive attacks with unprecedented speed, scale, specificity, and sophistication,[4] creating a new series of threats for which we don't yet have defenses.
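The checksum analogy can be made concrete. Below is a minimal sketch, using Python's standard hashlib, of how a stored digest reveals silent corruption: the message contents are hypothetical, and real protocols layer far more on top of this idea.

```python
import hashlib


def digest(message: str) -> str:
    """Return a SHA-256 checksum of the message."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()


# A sender records a checksum alongside the original message.
original = "Meet at the usual place at noon."
checksum = digest(original)

# If the message is silently altered in transit, the recomputed
# digest no longer matches the stored one, exposing the tampering.
tampered = "Meet at the usual place at one."
assert digest(original) == checksum  # intact: digests match
assert digest(tampered) != checksum  # corrupted: mismatch flags it
```

The point of the analogy is that detection requires a trusted reference kept outside the channel being attacked; practices of self-reflection play that external-reference role for cognition.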
Consider a controversial experiment conducted by researchers at the University of Zurich on Reddit's r/ChangeMyView forum: over four months, AI accounts impersonated real people on Reddit, making over 1,000 comments across a range of debate topics. Each account came with a fabricated backstory tailored to match what the AI had inferred about individual users' real lives – in one case, an AI-generated post even falsely claimed to be written by a survivor of statutory rape. The personalized AI comments scored far higher than nearly all human commenters in the subreddit's persuasion point system, without being detected as AI.[5]
AI-induced cognitive effects have already spilled outside the boundaries of lab experiments. This year, hospitals received their first patients suffering from "AI psychosis." "Brain rot" and AI "slop" became so normalized in our media consumption habits that they were named words of the year. And 25% of young adults believe that AI has the potential to replace real-life romantic relationships.
While cognitive warfare may conjure up images of brain-hacking through "BCI-fi" interfaces like neural jacks, these trends show that AI is already a capable cognitive weapon.[6] Even without users directly providing information about themselves, it is now easier than ever for state and nonstate actors alike to leverage behavioral data, perform once-complex user modeling, and then generate personalized attacks targeting individuals' specific beliefs and identities. And more and more people are voluntarily opting into intimate, parasocial relationships with AI products, enabling AI to construct nuanced demographic and psychometric user profiles and to engage and shape users through personalized, on-demand content.
As Sam Altman predicted in 2017: "We could plug electrodes into our brains, or we could all just become really close friends with a chatbot."
The Cognitive Security Task Force at the Stanford Institute for Human-Centered AI started as a group of students and researchers in the Bay Area alarmed by global trends in cognitive operations enabled by emerging technologies. The effects of these trends cut across our fields – including computer science, psychology, neuroscience, electrical engineering, law, philosophy, biosecurity, history, and education – and, we believe, demand insights from all of them. As the World Economic Forum argues, AI is not just a tool but "cognitive infrastructure": a layer through which people understand and decide.
Why "security" as a focus? We've thought carefully about our framing, engaging with work on cognitive liberty, cognitive safety, and mental privacy. We found that a security framing best accounts for the emerging dangers of a world in which cognitive technologies can be manufactured with malicious intent, adapted for hazardous influence, and leveraged for misaligned goals. Moreover, we appreciated that security, in its technical lineage, implies ongoing attention to design, vulnerability, and repair.
Crucially, we also use the word "security" in the relational sense. Implementing cognitive security standards, safeguards, and systems requires sufficient trust among stakeholders and institutions. As David Graeber noted, "There is the security of knowing one has a statistically smaller chance of getting shot with an arrow. And then there's the security of knowing that there are people in the world who will care deeply if one is."
CSTF's discussions with representatives from industry, policy, education, academia, and civil society confirm that cognitive security is a shared concern across sectors. Our goal is to unite these perspectives to help research and develop international standards and pluralistic safeguards that strengthen human decision-making, sensemaking, and learning in the age of AI.
So far, we've launched a speaker series featuring cognitive technology founders, regulators, and researchers; introduced the concept of cognitive security at top machine learning and neuroscience research venues; and begun mapping out the cognitive security stakeholder space. Forthcoming events and presentations include:
Over the next five years, we expect a "ChatGPT/GLP-1 moment" for cognitive technology: a mass-adoption tipping point paired with a cultural shift away from treating our innermost thoughts as untouchable.[7] By raising security consciousness now, at the outset of this technology adoption timeline, we can help keep cognitive wars from becoming forever wars, while ensuring that cognitive technology innovation still benefits humanity.
How do we build robust, industry-wide cognitive security standards and norms? What does it mean to securitize the mind? What institutions, if any, should bear responsibility for cognitive security and cognitive rights – and at what level(s): individual, collective, or organizational? How might cognitive security become a form of literacy? And what lessons from history can best inform our thinking today?
To address these questions and build a cognitively secure future together, CSTF is actively seeking collaborators with backgrounds and interests in industry and innovation; law, policy and international security; and education, media psychology, and child safety. We're particularly focused on creating:
Industry and Innovation:
Law, Policy, and International Security:
Education, Media Psychology, and Child Safety:
If anything here resonates with your work or interests, let's connect!
Thank you to Matt Goerzon, Batu El, Shiye Su, Eric Heng, Jason Qu, Dominic Zappia, Katie Taylor, and Chinmay Deshpande for thoughtful comments on drafts of this post.
Footnotes