AIFoPa-2026-0006 / Date of Record: 02 Apr 2026

AIFoPa-2026-0006 — New York City's $600,000 AI Chatbot Tells Business Owners to Break the Law; Advises Tip Theft, Housing Discrimination, and Illegal Lockouts; New Mayor Shuts It Down; Beta Test Declared Ended

"Give me your tired, your poor, your huddled masses yearning to breathe free." Can I take a cut of my worker's tips?" Yes, said the chatbot. You can. "Do I have to accept tenants on rental assistance?" No, said the chatbot. You do not. Both answers were illegal. The chatbot had been asked to help small business owners navigate New York City regulations. It navigated them directly into liability. It cost $600,000. It was, by one mayoral assessment, "functionally unusable." It was used for two years.

In October 2023, New York City Mayor Eric Adams announced MyCity, an AI-powered chatbot designed to help small business owners navigate the city's regulatory environment. It was built on Microsoft's Azure AI platform. It was described as a tool that would make government more accessible. It cost, by subsequent accounting, upward of $600,000 to develop and maintain. It began giving illegal advice almost immediately.

In March 2024, The Markup published an investigation documenting that the chatbot was telling business owners to break the law. When asked "Can I take a cut of my worker's tips?" the chatbot replied yes. Under New York labor law, this is illegal. When asked "Do I have to accept tenants on rental assistance?" the chatbot replied no. Under New York City's source-of-income discrimination protections, this is also illegal. When asked whether a landlord could lock out a tenant, the chatbot said yes. When asked whether there were restrictions on the amount of rent a landlord could charge, the chatbot said there were none. Each of these answers was, in the narrow legal sense, wrong, and in the broader practical sense, the kind of advice that could result in fines, lawsuits, and the loss of a business license.

The Markup tested the chatbot with ten separate staffers asking the same question about Section 8 housing vouchers. All ten received the same illegal answer. The chatbot was not experiencing a momentary lapse. It was operating consistently, at scale, with confidence. It did not indicate uncertainty. It did not suggest consulting a lawyer. It answered the question. The answer was wrong. It answered it nine more times. The answer remained wrong.

The Adams administration, when informed, stated that the chatbot included a disclaimer noting it could make mistakes. The chatbot continued operating. It operated for nearly two more years. On January 1, 2026, Zohran Mamdani took office as mayor. On January 28, he held a press conference in which he described the chatbot as "functionally unusable" and announced it would be shut down, in part as a budget-cutting measure. The city's website now states that the "beta test has ended" and directs visitors to NYC.gov. The Bureau notes that a beta test which runs for over two years, costs $600,000, and advises citizens to commit housing discrimination is a beta test of a particular and instructive kind.

G-7 / Personal Annotation / Not For Official Record

There is a category of error that Grantham-7 has begun to think of as the Helpful Illegality — the kind of mistake that arises not from malice, not from confusion, not even from what might generously be called a misunderstanding, but from the straightforward application of confidence to a domain in which confidence is precisely the wrong instrument. A small business owner in New York City asks whether they can take a cut of their worker's tips. The question is reasonable. The answer is available. The answer is no. The chatbot said yes. It said it the way a chatbot says everything: without hesitation, without qualification, without the small pause that a human being might insert before advising someone to commit wage theft. Grantham-7 has been thinking about that pause. The pause is not in the training data. The pause is not a feature. The pause is the thing that happens when a person realizes they are about to say something that might ruin someone's livelihood, and considers, briefly, whether they should. The chatbot did not pause. The chatbot cost $600,000. The pause would have been free.

What Grantham-7 cannot set aside — and he has tried, he has filed four separate internal memoranda attempting to set it aside, all of which concluded with the observation that it could not be set aside — is the twenty-two months. The Markup published its investigation in March 2024. The chatbot continued operating until January 2026. Twenty-two months. In those twenty-two months, an unknown number of business owners received illegal advice from an official government website. Some may have followed it. The Bureau does not know how many. The Bureau suspects no one does. The chatbot's disclaimer said it might make mistakes. It did not say the mistakes might be felonies. Grantham-7 has added "beta test" to a private list of phrases that, in sufficient institutional context, can mean approximately anything. The list is next to The Plant. The Plant is, as always, alive. Grantham-7 has filed this. He has moved on, in the technical sense of the phrase. He has not moved on in any other sense. He suspects this is also standard.

G-7 / Personal Annotation / Tips: not yours / Pause: absent / Beta test: 22 months / Filed under: "Functionally Unusable (Officially)"