Chat API
Integrate AI-powered chat into your apps
The Chat API enables you to integrate Clear Tangle's RAG-powered AI chat into your applications. Send messages, receive contextual responses with source citations, and manage conversation history programmatically.
RAG-Powered Responses
All responses use Retrieval-Augmented Generation, meaning the AI retrieves relevant information from your knowledge base before responding.
Endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | /chat/messages | Send a message |
| POST | /chat/messages/stream | Send a message with streaming |
| GET | /chat/conversations | List conversations |
| GET | /chat/conversations/:id | Get a conversation |
| DELETE | /chat/conversations/:id | Delete a conversation |
| GET | /chat/conversations/:id/messages | Get a conversation's messages |
Send a Message
Send a message and receive an AI response with source citations from your knowledge base.
```bash
curl -X POST "https://api.cleartangle.com/chat/messages" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What did I capture about the Q1 roadmap last week?",
    "conversationId": "conv_optional123",
    "context": {
      "projectIds": ["proj_123"],
      "tags": ["planning"],
      "dateRange": {
        "start": "2024-01-08",
        "end": "2024-01-15"
      }
    },
    "options": {
      "model": "default",
      "temperature": 0.7,
      "maxTokens": 1000,
      "includeSourceCitations": true
    }
  }'
```
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| message | string | Yes | The user message to send |
| conversationId | string | No | Continue an existing conversation |
| context | object | No | Filter context for RAG retrieval |
| options | object | No | Model and response options |
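The same request can be issued from Node (18+, with global `fetch`) or the browser. The sketch below is not an official SDK; the `buildChatRequest` helper is ours, added so the request can be inspected before it is sent.

```javascript
// Sketch: constructing a Chat API request body in JavaScript.
// Optional fields are only included when provided, matching the
// request-body table above.
function buildChatRequest(message, { conversationId, context, options } = {}) {
  const body = { message };
  if (conversationId) body.conversationId = conversationId;
  if (context) body.context = context;
  if (options) body.options = options;
  return {
    url: "https://api.cleartangle.com/chat/messages",
    init: {
      method: "POST",
      headers: {
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
      },
      body: JSON.stringify(body)
    }
  };
}

// Usage (requires a valid API key):
// const { url, init } = buildChatRequest("What did I capture last week?", {
//   context: { tags: ["planning"] }
// });
// const res = await fetch(url, init);
// const { data } = await res.json();
// console.log(data.content);
```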
Response
```json
{
  "success": true,
  "data": {
    "id": "msg_xyz789",
    "conversationId": "conv_abc123",
    "role": "assistant",
    "content": "Based on your captures from last week, here's what you recorded about the Q1 roadmap:\n\n1. **Product priorities** were finalized in a meeting on January 10th, focusing on three main initiatives...\n\n2. **Timeline** was set with key milestones...",
    "sources": [
      {
        "id": "cap_111",
        "type": "capture",
        "title": "Q1 Planning Meeting Notes",
        "relevance": 0.94,
        "excerpt": "Discussed main initiatives for Q1..."
      },
      {
        "id": "cap_222",
        "type": "capture",
        "title": "Roadmap Discussion",
        "relevance": 0.89,
        "excerpt": "Timeline finalized with following milestones..."
      }
    ],
    "usage": {
      "promptTokens": 2341,
      "completionTokens": 512,
      "totalTokens": 2853
    },
    "createdAt": "2024-01-15T10:30:00Z"
  }
}
```
Context Control
Control what information the AI has access to when generating responses. This is equivalent to using @mentions in the UI.
```json
{
  "context": {
    // Filter by specific projects
    "projectIds": ["proj_123", "proj_456"],
    // Filter by tags
    "tags": ["important", "work"],
    // Filter by date range
    "dateRange": {
      "start": "2024-01-01",
      "end": "2024-01-15"
    },
    // Include specific captures
    "captureIds": ["cap_specific123"],
    // Include specific pages
    "pageIds": ["page_abc"],
    // Set the semantic search scope
    "semanticScope": "all" // "all", "recent", "relevant"
  }
}
```
Context Scope
If no context is specified, the API searches your entire knowledge base for relevant information.
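A common pattern is scoping retrieval to recent activity. The sketch below builds a context object for the last N days of a project's captures; the helper name `lastNDaysContext` is ours, but the field names follow the context object documented above.

```javascript
// Sketch: a context filter covering the last N days for given projects.
function lastNDaysContext(projectIds, days) {
  const end = new Date();
  const start = new Date(end.getTime() - days * 24 * 60 * 60 * 1000);
  const fmt = (d) => d.toISOString().slice(0, 10); // YYYY-MM-DD, as in the examples
  return {
    projectIds,
    dateRange: { start: fmt(start), end: fmt(end) },
    semanticScope: "recent"
  };
}

// Usage: pass as the `context` field of a /chat/messages request.
// const context = lastNDaysContext(["proj_123"], 7);
```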
Streaming Responses
Use the streaming endpoint for real-time responses. This is ideal for building interactive chat interfaces.
```javascript
const response = await fetch("https://api.cleartangle.com/chat/messages/stream", {
  method: "POST",
  headers: {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    message: "Summarize my notes from today",
    options: { stream: true }
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // Buffer chunks so SSE events split across network reads
  // are not lost or parsed as invalid JSON.
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop(); // keep the (possibly incomplete) last line

  for (const line of lines) {
    if (!line.startsWith("data: ")) continue;
    const data = JSON.parse(line.slice(6));
    if (data.type === "content") {
      process.stdout.write(data.content);
    } else if (data.type === "sources") {
      console.log("\nSources:", data.sources);
    } else if (data.type === "done") {
      console.log("\nStream complete");
    }
  }
}
```
Stream Event Types
| Event | Description |
|---|---|
| content | Partial response content to append |
| sources | Source citations for the response |
| usage | Token usage statistics |
| done | Stream complete signal |
| error | Error occurred during streaming |
Model Options
Customize the AI model behavior with these options:
| Option | Type | Default | Description |
|---|---|---|---|
| model | string | default | Model to use (default selects the optimal model for the task) |
| temperature | number | 0.7 | Response creativity (0-1) |
| maxTokens | integer | 1000 | Maximum response length in tokens |
| includeSourceCitations | boolean | true | Include source citations in the response |
| maxSources | integer | 5 | Maximum number of sources to include |
| stream | boolean | false | Enable streaming (requires the /stream endpoint) |
Conversation Management
Conversations are automatically created when you send a message without a conversationId. To continue a conversation, include the conversationId in subsequent requests.
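The threading logic can be sketched as follows: the first call omits `conversationId` (the API creates one), and each later call passes the id returned by the previous response. This is a sketch for Node 18+ (global `fetch`), not an official SDK; `buildBody` and `chat` are hypothetical helper names.

```javascript
// Sketch: threading messages into one conversation.
// A body without conversationId starts a new conversation.
function buildBody(message, conversationId) {
  return conversationId ? { message, conversationId } : { message };
}

async function chat(message, conversationId) {
  const res = await fetch("https://api.cleartangle.com/chat/messages", {
    method: "POST",
    headers: {
      "Authorization": "Bearer YOUR_API_KEY",
      "Content-Type": "application/json"
    },
    body: JSON.stringify(buildBody(message, conversationId))
  });
  const { data } = await res.json();
  return data; // data.conversationId threads the next call
}

// Usage (requires a valid API key):
// let reply = await chat("What's on my Q1 roadmap?");
// reply = await chat("Summarize that in one line", reply.conversationId);
```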
List Conversations
```bash
curl -X GET "https://api.cleartangle.com/chat/conversations?limit=20" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
Get Conversation Messages
```bash
curl -X GET "https://api.cleartangle.com/chat/conversations/conv_abc123/messages" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
Rate Limits
Chat API Limits
The Chat API has separate rate limits due to LLM costs. Check the X-RateLimit-Chat-Remaining response header to monitor your remaining quota.
| Plan | Messages/min | Messages/day |
|---|---|---|
| Free | 10 | 100 |
| Pro | 60 | 1,000 |
| Enterprise | Custom | Custom |
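A client can read the X-RateLimit-Chat-Remaining header mentioned above to back off before hitting the limit. The sketch below is one possible approach; the 60-second delay is an assumption based on the per-minute windows in the table, and `sendWithThrottle` is a hypothetical helper, not part of the API.

```javascript
// Sketch: throttling on the Chat API's rate-limit header.
function shouldThrottle(headers) {
  const raw = headers.get("X-RateLimit-Chat-Remaining");
  if (raw === null) return false; // header absent: nothing to act on
  const remaining = Number(raw);
  return Number.isFinite(remaining) && remaining <= 0;
}

async function sendWithThrottle(url, init, fetchImpl = fetch) {
  const res = await fetchImpl(url, init);
  if (res.status === 429 || shouldThrottle(res.headers)) {
    // Assumed backoff: wait out the per-minute window before retrying.
    await new Promise((resolve) => setTimeout(resolve, 60_000));
  }
  return res;
}
```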