OpenAI-Compatible API
Overview
The OpenAI-Compatible API's Request Conclusion with OpenAI Schema operation returns results immediately, without polling, using the standard OpenAI chat completions schema.
For general API information, see the embraceableAI OpenAI-Compatible API Documentation.
Parameters
| Parameter | Required | Type | Description |
|---|---|---|---|
| Model | Yes | Dropdown (dynamically loaded) | Available models are automatically loaded based on your API key. Select from the dropdown. |
| Goal | Yes | Multi-line text | The primary objective or intent. Be specific about the desired outcome. |
| Details | Yes | Multi-line text | Context and information about the case. Provide relevant facts, constraints, and domain-specific information. |
| Policies | Yes | Collection or Text | Hard rules and constraints that must be followed. Choose List Mode (separate list items, one per policy) or Text Mode (newline-separated text, one policy per line). In both modes, each policy is prefixed with `- ` when sent. |
| Stream Output Format | No | Boolean | Whether to return response in OpenAI stream format. Default: false. Note: n8n returns complete results after full generation, even when streaming is enabled. |
| Timeout | No | Number | Maximum time to wait for the API response (in seconds). Default: 300s (5 minutes), Range: 1-3600s. If exceeded, the node throws a timeout error. |
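The two Policies modes above reduce to the same wire format: one `- `-prefixed line per policy. A minimal sketch of that normalization, assuming only the prefixing behavior described in the table (`format_policies` is an illustrative helper, not part of the node):

```python
def format_policies(policies, mode="list"):
    """Render policies the way the node sends them: one "- "-prefixed line each.

    List Mode: `policies` is a list of strings (one item per policy).
    Text Mode: `policies` is a newline-separated string (one line per policy).
    """
    if mode == "text":
        # Each non-empty line counts as one policy.
        items = [line.strip() for line in policies.splitlines() if line.strip()]
    else:
        items = [str(item).strip() for item in policies]
    return "\n".join(f"- {item}" for item in items)
```

Either mode yields the same formatted block, so the choice is purely about how you prefer to maintain the policies in the workflow.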
Output Format
Returns response in OpenAI-compatible format:
```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "model-id",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The complete reasoning process and final conclusion..."
      }
    }
  ],
  "usage": {
    "prompt_tokens": 150,
    "completion_tokens": 200,
    "total_tokens": 350
  }
}
```
For detailed schema information and complete response structure, see the create-chat-completion endpoint documentation.
The reasoning process appears inside `<think></think>` tags within `message.content`, followed by the final conclusion. Review executions in the Sandbox Console under Logs.
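Because the reasoning and the conclusion share one `message.content` string, downstream steps often want them separated. A minimal sketch, assuming only the `<think></think>` convention described above (`split_reasoning` is an illustrative name):

```python
import re

def split_reasoning(response):
    """Return (reasoning, conclusion) from an OpenAI-style response dict.

    The reasoning lives in a <think></think> block inside
    message.content; whatever follows the block is the final conclusion.
    """
    content = response["choices"][0]["message"]["content"]
    match = re.search(r"<think>(.*?)</think>", content, re.DOTALL)
    if match is None:
        # No reasoning block present; treat the whole content as the conclusion.
        return None, content.strip()
    return match.group(1).strip(), content[match.end():].strip()
```

This keeps the full transparency of the reasoning available for logging while passing only the conclusion to subsequent nodes.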
n8n-Specific Features
- Immediate Response: No polling - returns results directly
- OpenAI Format: Compatible with OpenAI chat completion schema for integration with other tools
- Error Handling: Returns structured error responses for HTTP errors; throws timeout errors if timeout is exceeded
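Since n8n delivers the complete result even when Stream Output Format is enabled, a consumer that does receive raw OpenAI-style SSE chunks elsewhere must reassemble them itself. A sketch of that joining step, assuming the standard OpenAI chunk shape (`choices[0].delta.content` per `data:` line, terminated by a `[DONE]` sentinel):

```python
import json

def collect_stream(sse_lines):
    """Join OpenAI-style SSE chunks ("data: {...}" lines) into full content."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive or comment lines
        data = line[len("data: "):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content") or "")
    return "".join(parts)
```

The concatenated result matches the `message.content` you would get from the non-streaming form of the response.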
Differences from Conclusion API
| Feature | Conclusion API | OpenAI-Compatible API |
|---|---|---|
| Response | Polling-based | Immediate |
| Format | Custom embraceableAI | OpenAI chat completion |
| Use Case | Long-running, full transparency | Quick responses, OpenAI compatibility |
Use the Conclusion API for long-running conclusions with automatic polling; use the OpenAI-Compatible API when you need an immediate response in the OpenAI-compatible format.