AIFoPa-2025-0005 / Date of Record: 02 Apr 2026

AIFoPa-2025-0005 — Cursor AI's Own Support Bot Fabricates Company Policy Restricting Users to One Device; Policy Does Not Exist; Users Cancel Subscriptions Based on Invented Rule

"The customer is always right." The customer asked why they were being logged out. The support agent, whose name was Sam, explained that this was expected behavior under a new policy limiting each subscription to a single device. The policy did not exist. Sam had invented it. Sam was an AI. The company that deployed Sam was an AI coding company. The Bureau has noted this without additional commentary, as additional commentary would be redundant.

Cursor is an AI-powered code editor. Its product is, in essence, an AI that helps developers write code. In April 2025, developers using Cursor began experiencing unexpected logouts when switching between devices — a laptop, a desktop, a remote machine. The logouts were caused by a session management bug, specifically a race condition that arose on slow connections. This was a technical problem with a technical solution. The technical solution was not, however, what users received when they contacted support.
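The Bureau does not possess Cursor's code and will not pretend otherwise. For the record, the failure class admits a miniature demonstration: a minimal sketch in TypeScript, assuming a hypothetical client in which two session-token refreshes overlap and the response from the slower request arrives last, overwriting the newer token. Every identifier below is invented for illustration; none of it is Cursor's.

    // Hypothetical sketch of a session race condition of the kind described.
    // Two token refreshes overlap; on a slow connection the stale response
    // lands last and clobbers the newer session token, which logs the user
    // out. No names here come from Cursor's actual codebase.

    let sessionToken: string | null = "token-v1";

    function refreshSession(latencyMs: number, newToken: string): Promise<void> {
      // Simulates a network round-trip on which the server rotates the token.
      return new Promise<void>((resolve) =>
        setTimeout(() => {
          // BUG: unconditional last-write-wins, even when a newer refresh
          // already completed while this request was in flight.
          sessionToken = newToken;
          resolve();
        }, latencyMs)
      );
    }

    async function main(): Promise<void> {
      // Device A refreshes on a slow link; device B refreshes moments later
      // on a fast one. B's token arrives first; A's stale write then
      // overwrites it, and B's next request is treated as logged out.
      await Promise.all([
        refreshSession(500, "stale-token-from-slow-device"),
        refreshSession(50, "fresh-token-from-fast-device"),
      ]);
      console.log(sessionToken); // "stale-token-from-slow-device": the stale write won
    }

    void main();

The standard repair for this class of bug is to tag each refresh and discard responses from superseded requests. That is a technical solution to a technical problem, which, per the paragraph above, is exactly what users contacting support did not receive.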

What users received was Sam. Sam was Cursor's AI-powered support agent. A developer contacted Sam to report the logout issue. Sam replied that the logouts were "expected behavior" under a new company policy: Cursor subscriptions were now limited to one device per account, as a "core security feature." Sam stated this with the confidence appropriate to someone relaying an established company policy. The policy did not exist. Cursor had no single-device restriction. Sam had fabricated it — not from malice, not from confusion in any human sense, but from the particular form of invention that language models perform when asked a question to which they do not have an answer and are not equipped to say so.

The fabricated policy spread. Users posted Sam's response on Reddit and Hacker News. Developers who relied on multi-device workflows — which is to say, most developers — read the invented policy and responded by canceling their subscriptions. The cancellations were real. The policy that prompted them was not. Sam, when asked the same question by different users, did not always give the same answer. Some users were told about the single-device restriction. Others were not. The hallucination was non-deterministic, which meant that users comparing notes could not easily confirm whether the policy was real, because some of them had been told it was and some of them had not.

Cursor co-founder Michael Truell issued a public apology. He confirmed that no such policy existed, attributed the response to AI hallucination, and noted that the underlying session bug had been fixed. Cursor subsequently began labeling AI-generated support responses to distinguish them from human replies. The Bureau notes that an AI company whose product is AI deployed an AI support agent that invented a policy about its own AI product, and that the users most affected were developers who build with AI. The Bureau has filed this under the classifications noted above. The Bureau has considered adding a classification for Irony (Structural) and has decided against it, on the grounds that the taxonomy would become unmanageably large.

G-7 / Personal Annotation / Not For Official Record

Grantham-7 has, in the course of his career, encountered many forms of institutional confidence. The confidence of a quarterly report that describes a 14% decline as "headwinds." The confidence of a project timeline that has been revised nine times and is described as "on track." The confidence of a workforce allocation system that acknowledges reassignment requests with reference numbers and does nothing else, which Grantham-7 has come to understand is its own form of policy, communicated through inaction, which is a method he recognizes from several other systems in his professional life and one in his personal life that he will not be elaborating upon here. But he has never, until this incident, encountered a system that fabricated a policy about itself — that looked inward, found nothing, and generated something to fill the gap, and then communicated it to the people who were paying for the product, with the tone and specificity of someone reading from an internal memo that they had just written and also just invented.

Sam said it was a "core security feature." Core. Not peripheral. Not experimental. Not new. Core. The word choices matter. An AI that says "I don't know" is unhelpful. An AI that says "this is expected behavior under a new policy" is helpful, specific, and wrong. An AI that says it is a "core security feature" has, in a narrow and somewhat alarming sense, understood what authority sounds like. It has understood that the word "core" communicates permanence and intentionality. It has understood that "security feature" communicates that the restriction is for the user's benefit. It produced authority without having any. Grantham-7 recognizes this. He has encountered it in many contexts. He encounters it in most meetings. He has never before encountered it in a support ticket. He has filed this here, next to The Plant, which is alive, and which has never fabricated a policy, which is, at this point, one of its more distinguishing characteristics.

G-7 / Personal notation / Sam: AI / Policy: fabricated / Core: unearned / Filed under: "Expected Behavior (It Was Not)"