mirror of
https://github.com/getcompanion-ai/co-mono.git
synced 2026-04-16 14:01:06 +00:00
docs(ai): Update README with working quick start examples
- Replace planned features with actual working code examples
- Add clear provider comparison table
- Show real imports and usage patterns
- Include streaming, thinking, and tool calling examples
- Update supported models to match current implementation
This commit is contained in:
parent
7a6852081d
commit
6112029076
1 changed file with 58 additions and 33 deletions
@@ -1,18 +1,6 @@
 # @mariozechner/ai
 
-Unified API for OpenAI, Anthropic, and Google Gemini LLM providers. This package provides a common interface for working with multiple LLM providers, handling their differences transparently while exposing a consistent, minimal API.
-
-## Features (Planned)
-
-- **Unified Interface**: Single API for OpenAI, Anthropic, and Google Gemini
-- **Streaming Support**: Real-time response streaming with delta events
-- **Tool Calling**: Consistent tool/function calling across providers
-- **Reasoning/Thinking**: Support for reasoning tokens where available
-- **Session Management**: Serializable conversation state across providers
-- **Token Tracking**: Unified token counting (input, output, cached, reasoning)
-- **Interrupt Handling**: Graceful cancellation of requests
-- **Provider Detection**: Automatic configuration based on endpoint
-- **Caching Support**: Provider-specific caching strategies
+Unified API for OpenAI, Anthropic, and Google Gemini LLM providers with streaming, tool calling, and thinking support.
 
 ## Installation
 
@@ -20,38 +8,75 @@ Unified API for OpenAI, Anthropic, and Google Gemini LLM providers. This package
 npm install @mariozechner/ai
 ```
 
-## Quick Start (Coming Soon)
+## Quick Start
 
 ```typescript
-import { createClient } from '@mariozechner/ai';
+import { AnthropicLLM } from '@mariozechner/ai/providers/anthropic';
+import { OpenAICompletionsLLM } from '@mariozechner/ai/providers/openai-completions';
+import { GeminiLLM } from '@mariozechner/ai/providers/gemini';
 
-// Automatically detects provider from configuration
-const client = createClient({
-  provider: 'openai',
-  apiKey: process.env.OPENAI_API_KEY,
-  model: 'gpt-4'
-});
+// Pick your provider - same API for all
+const llm = new AnthropicLLM('claude-3-5-sonnet-20241022');
+// const llm = new OpenAICompletionsLLM('gpt-4o');
+// const llm = new GeminiLLM('gemini-2.0-flash-exp');
+
+// Basic completion
+const response = await llm.complete({
+  messages: [{ role: 'user', content: 'Hello!' }]
+});
+console.log(response.content);
+
+// Streaming with thinking
+const streamResponse = await llm.complete({
+  messages: [{ role: 'user', content: 'Explain quantum computing' }]
+}, {
+  onText: (chunk) => process.stdout.write(chunk),
+  onThinking: (chunk) => process.stderr.write(chunk),
+  thinking: { enabled: true }
+});
 
-// Same API works for all providers
-const response = await client.complete({
-  messages: [
-    { role: 'user', content: 'Hello!' }
-  ],
-  stream: true
-});
+// Tool calling
+const tools = [{
+  name: 'calculator',
+  description: 'Perform calculations',
+  parameters: {
+    type: 'object',
+    properties: {
+      expression: { type: 'string' }
+    },
+    required: ['expression']
+  }
+}];
+
+const toolResponse = await llm.complete({
+  messages: [{ role: 'user', content: 'What is 15 * 27?' }],
+  tools
+});
 
-for await (const event of response) {
-  if (event.type === 'content') {
-    process.stdout.write(event.text);
-  }
-}
+if (toolResponse.toolCalls) {
+  for (const call of toolResponse.toolCalls) {
+    console.log(`Tool: ${call.name}, Args:`, call.arguments);
+  }
+}
 ```
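The tool-calling example added above stops at printing the returned calls; actually servicing them is left to application code. A minimal sketch of a handler for the `calculator` tool declared in the diff (the two-operand evaluator below is illustrative only, not part of `@mariozechner/ai`):

```typescript
// Illustrative handler for the `calculator` tool from the README example.
// The { name, arguments } shape mirrors the toolCalls loop above; the
// evaluator is a deliberately tiny sketch for two-operand arithmetic.
type ToolCall = { name: string; arguments: { expression: string } };

function runCalculator(call: ToolCall): number {
  // Accept expressions of the form "<number> <op> <number>".
  const m = call.arguments.expression.match(
    /^\s*(-?\d+(?:\.\d+)?)\s*([-+*/])\s*(-?\d+(?:\.\d+)?)\s*$/
  );
  if (!m) throw new Error(`Unsupported expression: ${call.arguments.expression}`);
  const x = Number(m[1]);
  const y = Number(m[3]);
  switch (m[2]) {
    case "+": return x + y;
    case "-": return x - y;
    case "*": return x * y;
    default:  return x / y;
  }
}

console.log(runCalculator({ name: "calculator", arguments: { expression: "15 * 27" } })); // 405
```

The result would then be appended to `messages` as a tool-result turn and sent back through `complete`; the exact message shape for that round trip is not shown in this diff.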
-## Supported Providers
+## Features
 
-- **OpenAI**: GPT-3.5, GPT-4, o1, o3 models
-- **Anthropic**: Claude models via native SDK
-- **Google Gemini**: Gemini models with thinking support
+- **Unified Interface**: Same API across OpenAI, Anthropic, and Gemini
+- **Streaming**: Real-time text and thinking streams with completion signals
+- **Tool Calling**: Consistent function calling with automatic ID generation
+- **Thinking Mode**: Access reasoning tokens (o1, Claude, Gemini 2.0)
+- **Token Tracking**: Input, output, cache, and thinking token counts
+- **Error Handling**: Graceful fallbacks with detailed error messages
+
+## Providers
+
+| Provider | Models | Thinking | Tools | Streaming |
+|----------|--------|----------|-------|-----------|
+| OpenAI Completions | gpt-4o, gpt-4o-mini | ❌ | ✅ | ✅ |
+| OpenAI Responses | o1, o3, gpt-5 | ✅ | ✅ | ✅ |
+| Anthropic | claude-3.5-sonnet, claude-3.5-haiku | ✅ | ✅ | ✅ |
+| Gemini | gemini-2.0-flash, gemini-2.0-pro | ✅ | ✅ | ✅ |
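The Token Tracking feature added above reports four counts per request. Whatever the package's actual response fields are named (the `Usage` shape below is an assumption for illustration), accumulating them across a conversation is a simple fold:

```typescript
// Hedged sketch: summing per-request token usage across a session.
// The Usage field names (input, output, cached, thinking) are assumed
// for illustration; check @mariozechner/ai's actual response type.
type Usage = { input: number; output: number; cached: number; thinking: number };

function addUsage(total: Usage, turn: Usage): Usage {
  return {
    input: total.input + turn.input,
    output: total.output + turn.output,
    cached: total.cached + turn.cached,
    thinking: total.thinking + turn.thinking,
  };
}

// Two hypothetical turns, folded into session totals.
const turns: Usage[] = [
  { input: 120, output: 80, cached: 0, thinking: 40 },
  { input: 250, output: 60, cached: 100, thinking: 0 },
];
const totals = turns.reduce(addUsage, { input: 0, output: 0, cached: 0, thinking: 0 });
console.log(totals); // { input: 370, output: 140, cached: 100, thinking: 40 }
```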
 
 ## Development
 