For the Love of God, Stop Calling Everything an Agent
If everything is agentic, nothing is agentic.
If you've spent any time with AI marketing materials, listening to tech podcasts, or just existing within the confines of any large metropolitan business center, you've probably noticed that everything is an agent now. Chat interfaces? Agent. API wrappers? Agent. A Python script that sends Slack notifications? Believe it or not, also an agent.
It’s grade inflation but for IT systems.
I’ll admit to having a bias toward “words have meaning” pedantry (a straight white man being particular about words online—what a concept). But this pattern, beyond just being annoying, stands to actively damage the industry's ability to speak clearly to consumers and businesses about genuine differences in technologies. And as communicators, we're the ones who need to fix it.
Anatomy of an Agent
There's no universal definition of what constitutes an AI agent, and major tech companies are all using slightly different frameworks. AWS emphasizes agents that "independently choose the best actions" to meet goals, while Google Cloud focuses on "reasoning, planning, and memory" with proactive behavior. NVIDIA takes a more technical approach around “orchestrating resources” and multi-agent collaboration, Salesforce keeps it simple with systems that “understand and respond without human intervention,” and IBM offers the broadest definition of any system “autonomously performing tasks by designing its workflow.”
Some of these definitions are far too broad, to put it charitably. Salesforce's definition could describe any chatbot with decent natural language processing, while IBM's is so broad it could include a scheduled backup script. When you define “agent” as “any automated system that does something for a user,” everything becomes an agent. Hence the terminology inflation we're seeing across the market.
Despite these variations, most definitions converge on a few core characteristics:
Goal-directed behavior: The system pursues specific objectives over time, not just responding to individual prompts.
Autonomy: It operates with meaningful independence, making decisions without constant human input.
Environment interaction: It perceives and acts within some environment, whether digital, physical, or simulated.
Adaptive responses: It adjusts its approach based on feedback or changing conditions.
So what actually meets this bar? Things like ChatGPT Agent, which can navigate websites and perform multi-step tasks like booking travel, or Salesforce's Agentforce agents, which can autonomously handle customer service cases from start to finish.
What doesn’t? Most customer service chatbots that follow decision trees, even sophisticated ones. Simple workflow automation tools that require pre-defined triggers and actions. Many "AI assistants" that need a prompt for every single response. Rule-based systems that can't adapt their behavior based on outcomes.
A chatbot that can check the weather isn't goal-directed: it's just a fancy interface to a weather service. But in today's marketing parlance, that chatbot and a genuinely autonomous planning system are both "agents."
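To make that contrast concrete, here's a deliberately toy sketch in Python. Every function in it is a made-up placeholder rather than a real API: the first is a thin wrapper that turns one prompt into one answer, the second runs a loop that pursues a goal, acts, and adapts based on what comes back.

```python
# Toy sketch only: a thin "chatbot" wrapper vs. a minimal agent loop.
# Every function here is a placeholder, not a real service or model API.

def get_weather(city: str) -> str:
    return "72°F and sunny"  # stand-in for a real weather lookup

def weather_chatbot(city: str) -> str:
    # One prompt in, one answer out: no goal, no memory, no adaptation.
    return f"It's currently {get_weather(city)} in {city}."

def plan_next_step(goal: str, history: list) -> str:
    # Stand-in for a model deciding what to do next, given the goal
    # and everything observed so far.
    return f"step {len(history) + 1} toward: {goal}"

def execute(action: str) -> str:
    # Stand-in for acting on an environment (browsing, calling tools, etc.).
    return f"result of ({action})"

def goal_met(history: list) -> bool:
    return len(history) >= 3  # toy stopping condition

def minimal_agent(goal: str, max_steps: int = 10) -> list:
    # Pursues a goal across multiple steps, feeding each observation back
    # into the next decision: the loop most definitions actually require
    # before the word "agent" is earned.
    history = []
    for _ in range(max_steps):
        action = plan_next_step(goal, history)   # decide what to do next
        observation = execute(action)            # act on the environment
        history.append((action, observation))    # remember the outcome
        if goal_met(history):                    # stop when the goal is done
            break
    return history

print(weather_chatbot("Austin"))
print(minimal_agent("book a flight to Austin"))
```

The difference isn't the model behind it; it's the loop: goals, actions, feedback, adaptation.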
None of this is new, of course. We saw it with “machine learning” a decade ago, when every statistical model got rebranded. Before that, it was “big data” for any database with more than three tables. But “agent” might be worse because it's being stretched to cover everything from simple workflows to hypothetical AGI. The term has become so elastic that it's lost any meaningful technical coherence.
The Downstream Damage
The confusion is creating real problems. Enterprise procurement is becoming a nightmare: CTOs are trying to evaluate vendors when everyone claims to have “agents,” but half are selling chatbots and the other half are selling genuinely autonomous systems. Without clear technical distinctions, buyers can't make informed decisions.
End-user expectations are getting wildly misaligned. Teams design workflows expecting adaptive, autonomous behavior, then discover they've bought a system that needs constant human oversight.
Security and compliance gaps are emerging. True agents that can take autonomous actions need different security models than conversational interfaces. But if organizations can't distinguish between them, they might under-protect genuinely autonomous systems or over-restrict simple tools.
And most important from a comms perspective: the market is settling for mediocrity. If customers can't tell the difference between a sophisticated planning system and a glorified function dispatcher because they’ve been bullshitted beyond comprehension, there's less incentive to build the sophisticated version.
Differentiation Through Clarity
All this imprecision is actually an opportunity for differentiation, in my opinion. While your competitors throw around “agent” meaninglessly, you can set yourself apart through precision and thoughtful comms.
Be specific about what your system actually does. Instead of "AI agent," describe the actual behavior: "handles customer questions and solves most of them without involving your team," "watches your inventory and reorders supplies when you're running low," or "listens to sales calls and tells reps what to do next." Lead with what it accomplishes, not what category of technology it supposedly represents. Zapier’s site is a solid example of relatively plainspoken marketing that resists the siren song of agentic claims.
Frame technical honesty as a feature. Use language like: “Unlike simple chatbots labeled as ‘agents,’ our system actually maintains persistent goals and adapts its approach based on outcomes.” If your system requires human oversight, say so, and frame it as responsible deployment. It’s a feature, not a bug.
Lead with business outcomes, not technology labels. Instead of claiming revolutionary agent capabilities, describe what actually happens: “handles 73% of customer inquiries without escalation” or “reduces manual data entry by 60%.” Customer service company Intercom does an admirable job of this while also showing how their Fin product is actually agentic.
Use a clear autonomy framework. Be explicit about the level of independence: fully autonomous, human-in-the-loop, or human oversight required. Customers will appreciate the clarity. Anthropic has done a masterful job explaining this and how it pertains to their products, especially Claude Code.
The companies that commit to precision now will be better positioned when the market inevitably matures and customers get savvier about distinguishing real capabilities from marketing speak.
Yes, precise language might seem less exciting than “revolutionary AI agent breakthrough.” But the upside is building trust with technical buyers and customers who are already growing wary of the hype. And frankly, if your product is genuinely impressive, you shouldn't need to inflate the terminology to make it sound good.
The risk of continuing down this path is that we'll train an entire generation of buyers to be cynical about AI capabilities just when the technology is getting genuinely interesting. That would be a communications failure with consequences far beyond any individual product launch.


