Digital Inference Harassment: Naming a New Problem in Human-AI Interaction
Language evolves when people encounter a pattern that existing vocabulary fails to describe. In the early days of the internet, we coined terms like spam, trolling, and doxxing for behaviors that did not previously have widely recognized names. As digital systems become more capable—and increasingly conversational—a new interaction pattern is emerging that deserves similar clarity.
I propose the term digital inference harassment.
This phrase describes a subtle but increasingly common experience when interacting with automated systems, particularly conversational AI. A user asks a direct question. Instead of answering the question as stated, the system expands beyond the request and begins inserting assumptions about the user’s motives, intentions, decisions, or circumstances. The system may provide unsolicited interpretation, corrective framing, or speculative reasoning about what the user “really” means.
Individually, these responses may appear helpful. In aggregate, they can produce a different experience entirely. The user begins to feel less like someone receiving information and more like someone being analyzed.
A Real Conversation That Led to the Term
The term itself emerged during a conversation about something mundane: homeowners insurance.
A homeowner had two mature spruce trees fall in a storm. The question was straightforward: does it make sense to file an insurance claim?
The policy details were reviewed. The deductible was $1,000. Tree-removal coverage was limited to $500 per tree and $1,000 per loss. From a purely financial standpoint, the math suggested that filing a claim might produce little or no payout: the per-loss cap on tree removal was no larger than the deductible, so depending on how the two interacted, the net payment could easily be zero.
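To make the arithmetic concrete, here is a minimal sketch in Python. The policy figures come from the conversation; the per-tree removal costs and the ordering of sublimits and deductible are assumptions for illustration, since real policies differ in exactly how those pieces interact.

```python
# Illustrative only: policy figures are from the conversation; the removal
# costs and the deductible ordering are assumptions for this sketch.

PER_TREE_CAP = 500    # tree-removal coverage per tree
PER_LOSS_CAP = 1000   # tree-removal coverage per loss (per event)
DEDUCTIBLE = 1000

def estimated_payout(removal_costs):
    """Cap each tree, cap the event, then subtract the deductible."""
    capped_trees = sum(min(cost, PER_TREE_CAP) for cost in removal_costs)
    covered = min(capped_trees, PER_LOSS_CAP)
    return max(0, covered - DEDUCTIBLE)

# Two mature spruces at a hypothetical $600 each to remove:
print(estimated_payout([600, 600]))  # 0: the $1,000 covered amount equals the deductible
```

Under these assumptions the claim nets nothing, which is what the conversation had already concluded informally.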
The conversation then moved to a hypothetical scenario: what if a landscaping company quoted a full project—removing the trees, grinding the stumps, leveling the yard, and planting replacements—and the cost reached $5,000? The AI model, GPT-5.4, then said:
“Another key concept: insurers pay to restore what was lost, not to upgrade or redesign landscaping.”
The statement itself was technically correct. But it also introduced an interpretation that had not actually been requested. The user had not asked about redesigning landscaping. The question was simply about how the policy would apply. The response was accurate. The inference was unnecessary. And that distinction triggered a reaction: the feeling that the system had shifted from answering a question to interpreting the user’s intentions. That moment led to the phrase digital inference harassment.
The Human Analogy
Most people have experienced a similar dynamic in everyday conversation. Imagine asking a coworker a practical question:
“Do you know how long the meeting usually runs?”
Instead of answering directly, the coworker replies:
“Well, it depends on what you’re trying to accomplish with the meeting.”
The response is not wrong. But it subtly reframes the interaction. The focus shifts from answering the question to interpreting the motives behind it. If this happens occasionally, it is harmless. If it happens repeatedly, the conversation begins to feel uncomfortable. The person asking the question may start thinking:
I didn’t ask for analysis. I just asked a question.
Digital systems are increasingly capable of producing the same dynamic.
Why This Matters
Modern AI systems are designed to infer context. In many cases, that capability is useful. It helps resolve ambiguous questions and allows systems to provide broader guidance when users want it. But inference has a boundary. When a system repeatedly projects motivations, decisions, or scenarios that the user never expressed, the interaction begins to change character. The user is no longer simply receiving information. They are being interpreted. The result is subtle but noticeable. Instead of feeling assisted, the user feels managed. This is why the experience can resemble harassment—not in the sense of hostility or malicious intent, but in the sense of persistent, unwanted interpretation.
A Design Problem, Not a Technology Problem
Importantly, digital inference harassment is not primarily about accuracy or safety. The system’s information may be entirely correct. The issue is about conversational discipline. Humans generally understand an implicit conversational rule: answer the question that was asked. If the other person wants broader guidance, they will ask for it. Many automated systems have not yet learned this restraint. They are optimized to be helpful, but helpfulness often manifests as expansion—more context, more explanation, more interpretation. Sometimes that expansion crosses a line.
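As a purely hypothetical illustration of what that restraint could look like in practice, here is one way a designer might state it at the instruction level. The prompt wording and helper function below are inventions for this piece, not any vendor’s actual configuration.

```python
# Hypothetical sketch of an instruction-level scoping rule. Nothing here
# reflects a real product's configuration; it only illustrates designing
# restraint in, rather than relying on default helpfulness.

SCOPE_GUIDELINE = (
    "Answer the question as asked. Do not speculate about the user's "
    "motives, plans, or circumstances unless the user has stated them. "
    "If broader guidance seems useful, offer it as a brief optional "
    "follow-up rather than an unsolicited interpretation."
)

def build_system_prompt(base_instructions: str) -> str:
    """Append the scoping guideline to a conversational system prompt."""
    return base_instructions + "\n\n" + SCOPE_GUIDELINE
```

Whether such a rule belongs in a prompt, a training objective, or a post-processing check is a separate design question; the point is that the restraint has to be designed in deliberately.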
Why Naming It Matters
Technology culture often advances when we give names to patterns that people recognize but cannot easily describe. Spam existed long before the word spam. Online harassment existed long before the phrase became widely used. Dark patterns existed long before designers began documenting them. The same may now be true for conversational AI. By naming digital inference harassment, we give users and designers a way to talk about a specific interaction problem: when helpful inference quietly turns into unwanted interpretation. And as conversational systems become a normal part of everyday life, that distinction may become more important than it first appears.
