mirror of https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-15 09:01:14 +00:00
Merge branch 'feat/use-mistral-sdk'
commit a31065166d
17 changed files with 728 additions and 171 deletions
```diff
@@ -627,6 +627,7 @@ The library uses a registry of API implementations. Built-in APIs include:
 - **`google-generative-ai`**: Google Generative AI API (`streamGoogle`, `GoogleOptions`)
 - **`google-gemini-cli`**: Google Cloud Code Assist API (`streamGoogleGeminiCli`, `GoogleGeminiCliOptions`)
 - **`google-vertex`**: Google Vertex AI API (`streamGoogleVertex`, `GoogleVertexOptions`)
+- **`mistral-conversations`**: Mistral Conversations API (`streamMistral`, `MistralOptions`)
 - **`openai-completions`**: OpenAI Chat Completions API (`streamOpenAICompletions`, `OpenAICompletionsOptions`)
 - **`openai-responses`**: OpenAI Responses API (`streamOpenAIResponses`, `OpenAIResponsesOptions`)
 - **`openai-codex-responses`**: OpenAI Codex Responses API (`streamOpenAICodexResponses`, `OpenAICodexResponsesOptions`)
```
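The registry this hunk extends can be pictured as a map from API id to stream implementation. The sketch below is illustrative only: the stream function names come from the list above, but their real signatures and the registry's real shape are not shown in this diff, so stubs stand in for the library's implementations.

```typescript
// Illustrative sketch only: real registry shape and stream signatures are
// not shown in this diff; these stubs stand in for the library's functions.
type StreamFn = (model: string, context: unknown) => AsyncIterable<string>;

const streamGoogle: StreamFn = async function* () { yield "google chunk"; };
const streamMistral: StreamFn = async function* () { yield "mistral chunk"; };
const streamOpenAICompletions: StreamFn = async function* () { yield "openai chunk"; };

// Map each built-in API id (from the list above) to an implementation.
const apiRegistry: Record<string, StreamFn> = {
  "google-generative-ai": streamGoogle,
  "mistral-conversations": streamMistral, // registered by this commit
  "openai-completions": streamOpenAICompletions,
};

// A caller resolves an implementation by API id:
const impl = apiRegistry["mistral-conversations"];
```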
```diff
@@ -639,7 +640,8 @@ A **provider** offers models through a specific API. For example:
 - **Anthropic** models use the `anthropic-messages` API
 - **Google** models use the `google-generative-ai` API
 - **OpenAI** models use the `openai-responses` API
-- **Mistral, xAI, Cerebras, Groq, etc.** models use the `openai-completions` API (OpenAI-compatible)
+- **Mistral** models use the `mistral-conversations` API
+- **xAI, Cerebras, Groq, etc.** models use the `openai-completions` API (OpenAI-compatible)
 
 ### Querying Providers and Models
 
```
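The provider-to-API mapping this hunk updates can be sketched as a plain lookup table. The API ids are from the list above; the lowercase provider keys are illustrative and may not match the library's real keys.

```typescript
// Post-change provider → API mapping from the list above. The lowercase
// provider ids are illustrative; the library's real keys may differ.
const providerApi: Record<string, string> = {
  anthropic: "anthropic-messages",
  google: "google-generative-ai",
  openai: "openai-responses",
  mistral: "mistral-conversations", // moved off openai-completions by this commit
  xai: "openai-completions",
  cerebras: "openai-completions",
  groq: "openai-completions",
};
```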
````diff
@@ -729,7 +731,7 @@ const response = await stream(ollamaModel, context, {
 
 ### OpenAI Compatibility Settings
 
-The `openai-completions` API is implemented by many providers with minor differences. By default, the library auto-detects compatibility settings based on `baseUrl` for known providers (Cerebras, xAI, Mistral, Chutes, etc.). For custom proxies or unknown endpoints, you can override these settings via the `compat` field. For `openai-responses` models, the compat field only supports Responses-specific flags.
+The `openai-completions` API is implemented by many providers with minor differences. By default, the library auto-detects compatibility settings based on `baseUrl` for a small set of known OpenAI-compatible providers (Cerebras, xAI, Chutes, DeepSeek, zAi, OpenCode, etc.). For custom proxies or unknown endpoints, you can override these settings via the `compat` field. For `openai-responses` models, the compat field only supports Responses-specific flags.
 
 ```typescript
 interface OpenAICompletionsCompat {
````
```diff
@@ -742,7 +744,6 @@ interface OpenAICompletionsCompat {
   requiresToolResultName?: boolean; // Whether tool results require the `name` field (default: false)
   requiresAssistantAfterToolResult?: boolean; // Whether tool results must be followed by an assistant message (default: false)
   requiresThinkingAsText?: boolean; // Whether thinking blocks must be converted to text (default: false)
-  requiresMistralToolIds?: boolean; // Whether tool call IDs must be normalized to Mistral format (default: false)
   thinkingFormat?: 'openai' | 'zai' | 'qwen'; // Format for reasoning param: 'openai' uses reasoning_effort, 'zai' uses thinking: { type: "enabled" }, 'qwen' uses enable_thinking: boolean (default: openai)
   openRouterRouting?: OpenRouterRouting; // OpenRouter routing preferences (default: {})
   vercelGatewayRouting?: VercelGatewayRouting; // Vercel AI Gateway routing preferences (default: {})
```
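The three `thinkingFormat` values documented in the interface comments each map to a different request parameter. A minimal sketch of that mapping, using only the parameter names from the comments above; this is not the library's actual request-building code.

```typescript
// Sketch of the thinkingFormat mapping documented in the compat interface.
// Parameter names come from the interface comments; this is not the
// library's actual request-building code.
type ThinkingFormat = "openai" | "zai" | "qwen";

function thinkingParams(format: ThinkingFormat, effort: string): object {
  switch (format) {
    case "openai":
      return { reasoning_effort: effort }; // e.g. "low" | "medium" | "high"
    case "zai":
      return { thinking: { type: "enabled" } };
    case "qwen":
      return { enable_thinking: true };
  }
}
```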