Use MCP when the assistant drives. Use REST when your software drives.

If you want the same knowledge base, the same brand voice, and the same client rules to show up everywhere you work, dnAI provides two clean ways to connect. You can let an AI assistant discover and use dnAI through MCP, or you can have your software trigger dnAI directly through the REST API.
The benefit is simple: you do not need to rebuild prompts, duplicate business context, or keep re-explaining your brand across tools.
Start with one decision: who is driving?
Choose the integration path based on what is actually initiating the work.
Use MCP if tools like Claude Desktop or other agent-style assistants should discover and call dnAI tools for you.
Use the REST API if HubSpot, Zapier, Make, your backend, or scheduled jobs should trigger dnAI directly.
Use the REST API for deep research, because that flow currently lives there.
Here is the quick comparison:
| Use case | Best fit |
|---|---|
| Claude Desktop writes using your dnAI setup | MCP |
| An assistant needs to inspect templates, KBs, or presets before generating | MCP |
| Your backend triggers content generation | REST API |
| HubSpot or automation tools create drafts automatically | REST API |
| Deep research with polling or webhooks | REST API |
What both integration paths share
Both paths connect to the same dnAI foundation. Every call can pull from:
- Client knowledge base
- Human or global knowledge base
- Character traits
- Output templates
- Validation rules
This is what makes dnAI useful at scale. You are working from a single source of truth, not a collection of disconnected prompts.
MCP tools proxy into the same external services used by the API, so whether the request starts in a chat window or inside an automation, you are still drawing from the same brand system.
What you need before you start
Make sure these pieces are ready first:
- An active dnAI account
- An API key from Account > API Keys
- The right permissions on that key:
  - MCP platform enabled for MCP
  - The `generate` permission for content generation
  - `intel:run` and `intel:read` for deep research
- Admin-enabled client access for MCP via `mcpAccessEnabled`
- The right client identifier:
  - `clientSlug` for MCP
  - `clientId` or `clientSlug` for external deep research
This is worth checking carefully. Most setup issues come from missing permissions, incorrect client identifiers, or MCP access not being enabled on the client.
How to connect dnAI with MCP
What MCP looks like in dnAI
dnAI provides a hosted MCP endpoint, so there is no local server to install.
Use this endpoint:
https://humandnai.xyz/api/mcp
Authenticate with:
Authorization: Bearer dk_live_...
This makes setup lighter and faster. You point your MCP-compatible client at dnAI’s hosted endpoint, then let the assistant discover the available tools, prompts, and resources.
Step 1: Create an API key
Go to Account > API Keys and create a key for your MCP client.
A clear naming convention helps here, especially if you will manage multiple integrations later. For example:
- Claude Desktop MCP
- Internal agent tools
- dnAI sandbox MCP
Step 2: Enable MCP access correctly
Before connecting the client, confirm all of the following:
- The key has MCP platform enabled
- The client has `mcpAccessEnabled`
- You have the correct `clientSlug`
If any one of these is missing, the client may connect but fail to show tools, or it may fail to authenticate at all.
Step 3: Add the dnAI MCP config to your client
Here is a Claude Desktop example:
```json
{
  "mcpServers": {
    "dnai": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://humandnai.xyz/api/mcp"
      ],
      "env": {
        "AUTHORIZATION": "Bearer dk_live_your_api_key_here",
        "DNAI_CLIENT_SLUG": "your-client-slug"
      }
    }
  }
}
```
Other MCP-compatible tools use the same basic pattern: the same URL and the same header approach.
Step 4: Restart the MCP client
After saving the config, restart the client fully.
This step is easy to skip, and it is one of the most common reasons people think the setup failed when the client simply has not reloaded the server configuration yet.
Step 5: Test with a plain-English request
Once the client restarts, try a few natural requests:
- “Write a spring campaign script in our voice.”
- “List our available output templates.”
- “Search our KB for pricing details.”
- “Generate 20 product descriptions in batch.”
These are good first tests because they check different capabilities: generation, discovery, knowledge lookup, and batch work.
What tools MCP users get
When dnAI is connected through MCP, users can access tools such as:
- `dnai_generate_content`
- `dnai_batch_generate`
- `dnai_query_kb`
- `dnai_list_characters`
- `dnai_list_templates`
- `dnai_list_presets`
- `dnai_validate_content`
- `dnai_kb_stats`
- `dnai_list_knowledge_bases`
There is an important advantage here: the assistant can inspect templates, characters, knowledge bases, and presets before generating. dnAI also exposes MCP resources and prompts, not just tools.
That gives assistants more context to work with and helps outputs stay aligned with your brand and client rules.
MCP troubleshooting
If MCP is not working as expected, check these first:
- Invalid API key: confirm the key is active and copied correctly
- MCP tools not appearing: confirm MCP is enabled on the key, then restart the client
- Client slug not found: verify the exact `clientSlug`
- MCP access disabled for client: ask an admin to enable `mcpAccessEnabled`
A practical rule here: if the client connects but cannot see tools, the problem is usually permissions or config. If it cannot authenticate, the problem is usually the key or header formatting.
How to use the REST API
If your software is driving the workflow, start with discovery.
Start with discovery
The first recommended call is:
GET /api/external/discover
This returns the client-specific details you need before building around hardcoded values, including:
- Available formats
- Character traits
- Knowledge base count
- Available endpoints
Here is a copy-paste curl example:
```bash
curl -X GET "https://humandnai.xyz/api/external/discover" \
  -H "Authorization: Bearer dk_live_your_api_key_here" \
  -H "Content-Type: application/json"
```
Using discover first saves time later, especially when you need the correct outputFormat, available trait IDs, or confirmation that the account can access the endpoints you expect.
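As a sketch of how you might consume the discover response before hardcoding anything: the helper below pulls out the formats, trait IDs, knowledge base count, and endpoints listed above. The field names in the sample body are assumptions for illustration, not a documented schema.

```python
def summarize_discover(payload: dict) -> dict:
    # Collect the pieces you typically need before building an integration.
    # Key names here are assumed, not taken from an official schema.
    return {
        "formats": payload.get("formats", []),
        "trait_ids": [t.get("id") for t in payload.get("characterTraits", [])],
        "kb_count": payload.get("knowledgeBaseCount", 0),
        "endpoints": payload.get("endpoints", []),
    }

# Hypothetical response body for demonstration:
sample = {
    "formats": ["sales_email", "blog_post"],
    "characterTraits": [{"id": "trait_consultative_clear"}],
    "knowledgeBaseCount": 2,
    "endpoints": ["/api/external/generate"],
}
summary = summarize_discover(sample)
```

In real use, `sample` would be the decoded JSON from the `GET /api/external/discover` call shown above.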
Generate content through the REST API
Use this endpoint for content generation:
POST /api/external/generate
Required fields
- `apiKey`
- `clientId`
- `prompt`
Optional fields
- `outputFormat`
- `characterTraitId`
- `kbId`
- `maxTokens`
- `includeMetadata`
- `webhookUrl`
Note that authentication appears in both places: the Authorization header and the JSON body's `apiKey` field. Make sure your implementation handles both.
Here is a practical example:
```json
{
  "apiKey": "dk_live_your_api_key_here",
  "clientId": "client_12345",
  "prompt": "Write a personalized sales email draft for a HubSpot lead who downloaded our franchise growth guide.",
  "outputFormat": "sales_email",
  "characterTraitId": "trait_consultative_clear",
  "kbId": "kb_primary",
  "maxTokens": 900,
  "includeMetadata": true,
  "webhookUrl": "https://yourapp.com/webhooks/dnai/generate"
}
```
A useful real-world flow looks like this:
HubSpot lead → POST /api/external/generate → personalized sales email draft
Two fields make a big difference here:
- `outputFormat` helps keep the response structured and consistent
- `characterTraitId` helps preserve the right tone and voice
If structure matters, prefer outputFormat over trying to force the shape through prompt wording alone.
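A minimal sketch of building the generate payload from the field lists above: the helper enforces the three required fields and rejects anything outside the documented optional set. This is illustrative client-side validation, not the API's own.

```python
def build_generate_payload(api_key: str, client_id: str, prompt: str, **optional):
    # apiKey, clientId, and prompt are the documented required fields.
    if not (api_key and client_id and prompt):
        raise ValueError("apiKey, clientId, and prompt are all required")
    # Only pass through the documented optional fields.
    allowed = {"outputFormat", "characterTraitId", "kbId",
               "maxTokens", "includeMetadata", "webhookUrl"}
    unknown = set(optional) - allowed
    if unknown:
        raise ValueError(f"unknown optional fields: {sorted(unknown)}")
    return {"apiKey": api_key, "clientId": client_id, "prompt": prompt, **optional}

payload = build_generate_payload(
    "dk_live_your_api_key_here",
    "client_12345",
    "Write a personalized sales email draft.",
    outputFormat="sales_email",
    maxTokens=900,
)
```

The resulting dict is what you would POST as the JSON body of `/api/external/generate`.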
What the generate response includes
A typical response can include:
- Generated content
- Metadata
- Knowledge base sources used
- Validation or output flags
- Generation time
- Conversation and message IDs for tracking
These details are useful for more than logging. They make it easier to review outputs, trace what happened, and connect the result back to the automation or CRM record that triggered it.
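As a sketch of that traceability step, the helper below pulls the tracking fields out of a generate response so the result can be linked back to the CRM record or automation that triggered it. The key names (`kbSources`, `conversationId`, `messageId`) are assumptions based on the list above, not a documented schema.

```python
def extract_tracking(resp: dict) -> dict:
    # Pull out the fields useful for review, auditing, and CRM linkage.
    # Key names are assumed for illustration.
    return {
        "content": resp.get("content", ""),
        "kb_sources": resp.get("kbSources", []),
        "conversation_id": resp.get("conversationId"),
        "message_id": resp.get("messageId"),
    }

# Hypothetical response body:
example = {
    "content": "Hi Jordan, thanks for downloading our franchise growth guide...",
    "kbSources": ["kb_primary"],
    "conversationId": "conv_abc",
    "messageId": "msg_def",
}
tracking = extract_tracking(example)
```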
How to run deep research
Deep research is part of the REST API, not the current MCP tool list.
To start a run, use:
POST /api/external/intel/deep-research
You will need:
- An API key
- `clientId` or `clientSlug`
- The `intel:run` permission

A successful start returns:
- `id`
- `status`
- `pollUrl`
- An estimated duration of 5 to 10 minutes
This is an asynchronous workflow, so plan for a queued job, not an instant reply.
Supported deep research types
dnAI supports these research types:
- `competitor_deep_dive`
- `industry_trends`
- `pricing_intelligence`
- `customer_insights`
- `aeo_visibility`
- `custom`
- `lead_enrichment`
Use the type that best matches the decision you are trying to support. Clear intent helps dnAI return more useful research.
Build a valid deep research request
Most research types need a detailed query.
For lead_enrichment, you need either:
- `companyName`, or
- `companyDomain`
Optional fields include:
- `localContext`
- `kbId`
- `webhookUrl`
- `responseFormat`
Here is a practical example:
```json
{
  "apiKey": "dk_live_your_api_key_here",
  "clientSlug": "your-client-slug",
  "type": "pricing_intelligence",
  "query": "Compare current pricing structures, annual discount patterns, and enterprise packaging expectations for AI marketing platforms competing with our offer.",
  "localContext": "We serve marketing directors, brand leaders, franchises, and multi-location organizations that care about brand consistency and AI visibility.",
  "kbId": "kb_primary",
  "responseFormat": "markdown",
  "webhookUrl": "https://yourapp.com/webhooks/dnai/research"
}
```
Good use cases include:
- Running lead enrichment before sales outreach
- Running AEO visibility research before rewriting landing pages
- Running pricing intelligence before a campaign or sales sprint
Poll or cancel a research run
To poll for status or results:
GET /api/external/intel/deep-research/{id}
This requires the intel:read permission.
To cancel a run:
DELETE /api/external/intel/deep-research/{id}
This requires the intel:run permission.
A few important expectations to set upfront:
- Deep research is async
- It uses credits
- You may receive:
  - `402` for insufficient credits
  - `429` for rate limits
If your workflow needs a clean handoff when the job completes, webhookUrl is usually the better option. If you need tighter control inside your own system, polling may be the better fit.
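If you choose polling, a simple loop with a timeout is enough. The sketch below injects a `fetch_status` callable (in real use, the authenticated GET against `pollUrl`) so the loop itself stays testable; the terminal status names are assumptions, not a documented contract.

```python
import time

def poll_until_done(fetch_status, interval_s: float = 15.0, timeout_s: float = 900.0):
    # fetch_status() should issue the authenticated GET against pollUrl
    # and return the decoded JSON body. Terminal statuses are assumed.
    deadline = time.monotonic() + timeout_s
    while True:
        result = fetch_status()
        if result.get("status") in ("completed", "failed", "cancelled"):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("deep research run did not finish in time")
        time.sleep(interval_s)

# Stubbed example: the run reports "running" twice, then "completed".
responses = iter([
    {"status": "running"},
    {"status": "running"},
    {"status": "completed", "id": "run_123"},
])
final = poll_until_done(lambda: next(responses), interval_s=0)
```

The 15-second default interval is a reasonable starting point given the 5 to 10 minute estimated duration; tune it to your rate limits.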
Best practices for cleaner integrations
A few habits make dnAI integrations easier to maintain:
- Use `discover` before hardcoding IDs
- Prefer `outputFormat` when structure matters
- Pass the right `characterTraitId` for tone consistency
- Use `kbId` when clients have multiple knowledge bases
- Keep prompts specific and grounded in the real task
- Use `webhookUrl` or polling for follow-up automation
- Treat MCP as conversational and self-serve
- Treat the REST API as deterministic and orchestrated
This approach keeps your brand consistent while giving each workflow the right level of flexibility.
The takeaway
dnAI is one content and research engine with two triggers.
MCP lets assistants self-serve your knowledge and voice. REST API lets your systems operationalize the same engine across automations and applications.
If you want both human-in-the-loop workflows and system-driven workflows, you can run both from the same dnAI account and build around one shared source of truth.