OpenAI-Compatible API

Overview

The OpenAI-Compatible API: Request Conclusion with OpenAI Schema operation returns results immediately without polling, using the standard OpenAI chat completions schema.

For general API information, see the embraceableAI OpenAI-Compatible API Documentation.

Parameters

  • Model (required, dropdown): Available models are loaded dynamically based on your API key. Select one from the dropdown.
  • Goal (required, multi-line text): The primary objective or intent. Be specific about the desired outcome.
  • Details (required, multi-line text): Context and information about the case. Provide relevant facts, constraints, and domain-specific information.
  • Policies (required, collection or text): Hard rules and constraints that must be followed. Choose List Mode (separate list items) or Text Mode (newline-separated text, one policy per line); in either mode each policy is prefixed with "- " when sent (see the sketch after this list).
  • Stream Output Format (optional, boolean): Whether to return the response in OpenAI stream format. Default: false. Note: n8n returns complete results after full generation, even when streaming is enabled.
  • Timeout (optional, number): Maximum time to wait for the API response, in seconds. Default: 300 (5 minutes); range: 1-3600. If exceeded, the node throws a timeout error.
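
The node assembles these inputs into a single chat completion request. Below is a minimal sketch of that composition, assuming the three fields are merged into one user message and that policies are joined with "- " prefixes as described above; the base URL, prompt layout, and example values are illustrative assumptions, not the documented request format (see the create-chat-completion endpoint documentation for the actual schema).

// Illustrative sketch only: how Goal, Details, and Policies might be combined
// into a standard OpenAI chat completions request. BASE_URL and the prompt
// layout are placeholders for illustration.
const BASE_URL = "https://api.example.invalid/v1"; // placeholder, not the real endpoint
const API_KEY = "YOUR_API_KEY";                    // supplied via node credentials in practice

async function requestConclusion(goal: string, details: string, policies: string[]) {
  // Each policy is sent prefixed with "- ", per the Policies parameter description.
  const policyBlock = policies.map((p) => `- ${p}`).join("\n");

  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "model-id", // chosen from the Model dropdown
      stream: false,     // mirrors the Stream Output Format toggle
      messages: [
        {
          role: "user",
          content: `Goal:\n${goal}\n\nDetails:\n${details}\n\nPolicies:\n${policyBlock}`,
        },
      ],
    }),
  });
  return res.json();
}

// Example call:
// requestConclusion(
//   "Decide whether the claim can be approved.",
//   "Customer reports water damage; policy started 2024-01-01.",
//   ["Never approve claims older than 90 days", "Escalate amounts above 10,000 EUR"]
// ).then((c) => console.log(c.choices[0].message.content));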

Output Format

Returns the response in OpenAI-compatible format:

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "model-id",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "The complete reasoning process and final conclusion..."
    }
  }],
  "usage": {"prompt_tokens": 150, "completion_tokens": 200, "total_tokens": 350}
}

For detailed schema information and complete response structure, see the create-chat-completion endpoint documentation.

The reasoning process is in <think></think> tags within message.content, followed by the final conclusion. Review executions in the Sandbox Console under Logs.
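
If a workflow only needs the final conclusion, the reasoning block can be stripped from message.content before passing it downstream (for example in an n8n Code node). A minimal sketch, assuming a single leading <think>...</think> block as described above; the function name and example content are illustrative:

// Minimal sketch: split message.content into reasoning and final conclusion.
// Assumes one <think>...</think> block precedes the conclusion.
function splitReasoning(content: string): { reasoning: string; conclusion: string } {
  const match = content.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    return { reasoning: "", conclusion: content.trim() };
  }
  const reasoning = match[1].trim();
  const conclusion = content.slice(match.index! + match[0].length).trim();
  return { reasoning, conclusion };
}

// Example with an OpenAI-style completion object:
const completion = {
  choices: [{ message: { role: "assistant", content: "<think>Check each policy...</think>The claim can be approved." } }],
};
const { reasoning, conclusion } = splitReasoning(completion.choices[0].message.content);
console.log(reasoning);  // "Check each policy..."
console.log(conclusion); // "The claim can be approved."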

n8n-Specific Features

  • Immediate Response: No polling; results are returned directly.
  • OpenAI Format: Compatible with the OpenAI chat completion schema for integration with other tools.
  • Error Handling: Returns structured error responses for HTTP errors and throws a timeout error if the timeout is exceeded (see the sketch below).
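
The Timeout parameter behaves like a standard client-side request timeout. The following is a minimal sketch of that general pattern using AbortController; it illustrates the behavior, not the node's actual implementation:

// General client-side timeout pattern (illustrative only).
// timeoutSeconds corresponds to the node's Timeout parameter (default 300).
async function fetchWithTimeout(url: string, init: RequestInit, timeoutSeconds: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutSeconds * 1000);
  try {
    return await fetch(url, { ...init, signal: controller.signal });
  } catch (err) {
    if ((err as Error).name === "AbortError") {
      // The request was aborted by the timer; surface a descriptive timeout error.
      throw new Error(`Request timed out after ${timeoutSeconds}s`);
    }
    throw err;
  } finally {
    clearTimeout(timer);
  }
}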

Differences from Conclusion API

  • Response: Conclusion API is polling-based; OpenAI-Compatible API returns results immediately.
  • Format: Conclusion API uses the custom embraceableAI format; OpenAI-Compatible API uses the OpenAI chat completion format.
  • Use Case: Conclusion API suits long-running requests requiring full transparency; OpenAI-Compatible API suits quick responses and OpenAI compatibility.

Use the Conclusion API for long-running conclusions with automatic polling; use the OpenAI-Compatible API when you need immediate responses in an OpenAI-compatible format.