mirror of
https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-21 07:02:04 +00:00
feat(ai): Add OpenAI-compatible provider examples for multiple services
- Add examples for Cerebras, Groq, Ollama, and OpenRouter
- Update OpenAI Completions provider to handle base URL properly
- Simplify README formatting
- All examples use the same OpenAICompletionsLLM provider with different base URLs
This commit is contained in:
parent
6112029076
commit
4bb3a5ad02
6 changed files with 371 additions and 74 deletions
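The commit message says all four examples reuse the same OpenAICompletionsLLM provider, varying only the base URL. A minimal sketch of that idea — the `baseURL` option shape and the URL table below are illustrative assumptions, not taken from the repo:

```typescript
// Common OpenAI-compatible base URLs (illustrative; assumption: the
// provider accepts a `baseURL` option pointing at a compatible endpoint).
const openAICompatibleBaseURLs: Record<string, string> = {
  cerebras: "https://api.cerebras.ai/v1",
  groq: "https://api.groq.com/openai/v1",
  ollama: "http://localhost:11434/v1",
  openrouter: "https://openrouter.ai/api/v1",
};

// One provider class, many services: only the base URL changes.
function describeProvider(service: keyof typeof openAICompatibleBaseURLs): string {
  return `OpenAICompletionsLLM -> ${openAICompatibleBaseURLs[service]}`;
}

console.log(describeProvider("groq"));
```

This is the whole point of OpenAI-compatible endpoints: the request/response wire format is shared, so a single completions provider can target any of these services.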
@@ -32,6 +32,7 @@ const streamResponse = await llm.complete({
}, {
  onText: (chunk) => process.stdout.write(chunk),
  onThinking: (chunk) => process.stderr.write(chunk),
  // Provider specific config
  thinking: { enabled: true }
});
@@ -60,24 +61,6 @@ if (toolResponse.toolCalls) {
}
```

## Features

- **Unified Interface**: Same API across OpenAI, Anthropic, and Gemini
- **Streaming**: Real-time text and thinking streams with completion signals
- **Tool Calling**: Consistent function calling with automatic ID generation
- **Thinking Mode**: Access reasoning tokens (o1, Claude, Gemini 2.0)
- **Token Tracking**: Input, output, cache, and thinking token counts
- **Error Handling**: Graceful fallbacks with detailed error messages

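The streaming bullet corresponds to the `onText` / `onThinking` callback style visible in the diff. A self-contained sketch of that callback pattern — `streamDemo` is a stand-in driver for illustration, not the package's API:

```typescript
// Callback-style streaming, mirroring the diff's onText/onThinking
// handlers. `streamDemo` is a hypothetical stand-in, not the real API.
type StreamHandlers = {
  onText: (chunk: string) => void;
  onThinking?: (chunk: string) => void;
};

function streamDemo(chunks: string[], handlers: StreamHandlers): string {
  let full = "";
  for (const chunk of chunks) {
    handlers.onText(chunk); // deliver each text chunk as it "arrives"
    full += chunk;
  }
  return full; // completion signal: the fully assembled text
}

const result = streamDemo(["Hel", "lo"], {
  onText: (chunk) => process.stdout.write(chunk),
});
```

The design choice is that callers consume output incrementally through callbacks while still receiving the complete response when the stream finishes.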
## Providers

| Provider | Models | Thinking | Tools | Streaming |
|----------|--------|----------|-------|-----------|
| OpenAI Completions | gpt-4o, gpt-4o-mini | ❌ | ✅ | ✅ |
| OpenAI Responses | o1, o3, gpt-5 | ✅ | ✅ | ✅ |
| Anthropic | claude-3.5-sonnet, claude-3.5-haiku | ✅ | ✅ | ✅ |
| Gemini | gemini-2.0-flash, gemini-2.0-pro | ✅ | ✅ | ✅ |

## Development

This package is part of the pi monorepo. See the main README for development instructions.