Model Context Protocol (MCP) as the agent standard
Model Context Protocol (MCP), published by Anthropic in late 2024 and rapidly adopted across the AI ecosystem, is the emerging open standard for how AI assistants and agents connect to external tools, data sources, and services. MCP standardizes the layer between the AI client and the integration: a tool or data source is exposed once as an MCP server, and any MCP-aware agent can consume it, whether that is Slack, Google Drive, a database, or a proprietary internal system. No vendor-specific integration code is needed for every AI-platform-to-tool pair.
How Thoughtwave uses MCP
Our MCP engagements span the protocol's three primitives:
- Tools — callable functions the LLM can invoke, with typed arguments and returns. We build domain-specific MCP tool servers that expose client systems (CRMs, databases, internal APIs) to agents with scoped, auditable access.
- Resources — data the LLM can read. We deploy MCP resource servers over client knowledge bases, document stores, and proprietary data sources as the retrieval layer for RAG workloads.
- Prompts — parameterized prompt templates servers can offer. We build prompt libraries that encapsulate best-practice prompting for client-specific workflows.
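At the wire level, all three primitives travel as JSON-RPC 2.0 messages; a tool is advertised with a name, a description, and a JSON Schema for its arguments, and the model invokes it via a `tools/call` request. A minimal stdlib-only sketch of that exchange (the `lookup_account` tool and its stubbed handler are hypothetical, for illustration):

```python
import json

# A tool as advertised in a tools/list response: name, description,
# and a JSON Schema describing the arguments the LLM must supply.
LOOKUP_ACCOUNT_TOOL = {
    "name": "lookup_account",  # hypothetical CRM tool
    "description": "Fetch a customer account record by account ID.",
    "inputSchema": {
        "type": "object",
        "properties": {"account_id": {"type": "string"}},
        "required": ["account_id"],
    },
}

def handle_tools_call(request: dict) -> dict:
    """Dispatch a JSON-RPC tools/call request to the matching tool handler."""
    params = request["params"]
    if params["name"] == "lookup_account":
        account_id = params["arguments"]["account_id"]
        # A real server would query the CRM here; stubbed for the sketch.
        record = {"account_id": account_id, "status": "active"}
        result = {"content": [{"type": "text", "text": json.dumps(record)}]}
    else:
        result = {"content": [{"type": "text", "text": "unknown tool"}],
                  "isError": True}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "lookup_account",
               "arguments": {"account_id": "acct-42"}},
}
response = handle_tools_call(request)
```

In production we build servers on the official MCP SDKs rather than raw JSON-RPC, but the message shapes above are what the protocol puts on the wire.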
The TWSS CS Agent and TWSS AI Custom Agents platforms both use MCP as the canonical tool protocol. For enterprise deployments, we typically stand up an MCP gateway — a centralized service that aggregates the client's approved MCP servers, handles authentication, and enforces policy and audit. Agents connect to the gateway rather than directly to individual servers.
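Architecturally, the gateway is a policy-enforcing router in front of the client's servers. A minimal sketch, assuming a per-agent allowlist of server names (the class, the agent and server names, and the handler registry are all hypothetical, not our production gateway):

```python
class MCPGateway:
    """Routes tool calls to registered MCP servers, enforcing a per-agent allowlist."""

    def __init__(self):
        self._servers = {}  # server name -> callable(tool, args)
        self._policy = {}   # agent id -> set of allowed server names

    def register_server(self, name, handler):
        self._servers[name] = handler

    def allow(self, agent_id, server_name):
        self._policy.setdefault(agent_id, set()).add(server_name)

    def call_tool(self, agent_id, server_name, tool, args):
        # Policy is checked centrally, before the server ever sees the call.
        if server_name not in self._policy.get(agent_id, set()):
            raise PermissionError(f"{agent_id} may not use {server_name}")
        return self._servers[server_name](tool, args)

gateway = MCPGateway()
gateway.register_server("crm", lambda tool, args: {"tool": tool, **args})
gateway.allow("support-agent", "crm")

result = gateway.call_tool("support-agent", "crm",
                           "lookup_account", {"account_id": "acct-42"})
```

Centralizing the policy check means a new agent gains access by a one-line allowlist change, and a revocation takes effect for every server at once.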
Authentication and security
MCP leaves authentication to the server implementation. Our production deployments standardize on OAuth 2.0 per server for SaaS integrations and service-account authentication with scoped permissions for internal systems. Per-user scoping applies for human-initiated agents; service-account scoping for scheduled or webhook-triggered agents. Every tool invocation flows through an audit pipeline with append-only storage and retention matched to regulatory obligations.
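The audit pipeline can be approximated as a wrapper that appends one structured record per tool invocation, success or failure, to an append-only log. A stdlib-only sketch (the field names are illustrative, not our production schema):

```python
import io
import json
import time

def audited(log, agent_id, tool, args, invoke):
    """Invoke a tool and append one JSON-lines audit record to `log`."""
    record = {"ts": time.time(), "agent": agent_id, "tool": tool, "args": args}
    try:
        result = invoke(tool, args)
        record["ok"] = True
        return result
    except Exception as exc:
        record["ok"] = False
        record["error"] = str(exc)
        raise
    finally:
        # Append-only: records are written once and never rewritten.
        log.write(json.dumps(record) + "\n")

log = io.StringIO()  # stands in for durable append-only storage
out = audited(log, "support-agent", "lookup_account",
              {"account_id": "acct-42"},
              lambda tool, args: {"status": "active"})
```

Because the record is written in a `finally` block, failed invocations are logged alongside successful ones, which is what retention and regulatory review actually require.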
Why MCP matters now
Before MCP, every enterprise AI integration was a bespoke per-platform-per-tool engineering project. With MCP, a single integration serves every MCP-aware client — and the vendor-independence story for enterprise AI improves materially. For clients concerned about AI-vendor lock-in, building on MCP today is the lowest-regret posture: the protocol is open, the ecosystem is growing, and the integration investment carries forward as new AI clients emerge.