
@mariozechner/ai

Unified API for OpenAI, Anthropic, and Google Gemini LLM providers with streaming, tool calling, and thinking support.

Installation

npm install @mariozechner/ai

Quick Start

import { AnthropicLLM } from '@mariozechner/ai/providers/anthropic';
import { OpenAICompletionsLLM } from '@mariozechner/ai/providers/openai-completions';
import { GeminiLLM } from '@mariozechner/ai/providers/gemini';

// Pick your provider - same API for all
const llm = new AnthropicLLM('claude-3-5-sonnet-20241022');
// const llm = new OpenAICompletionsLLM('gpt-4o');
// const llm = new GeminiLLM('gemini-2.0-flash-exp');

// Basic completion
const response = await llm.complete({
  messages: [{ role: 'user', content: 'Hello!' }]
});
console.log(response.content);

// Streaming with thinking
const streamResponse = await llm.complete({
  messages: [{ role: 'user', content: 'Explain quantum computing' }]
}, {
  onText: (chunk) => process.stdout.write(chunk),
  onThinking: (chunk) => process.stderr.write(chunk),
  thinking: { enabled: true }
});

// Tool calling
const tools = [{
  name: 'calculator',
  description: 'Perform calculations',
  parameters: {
    type: 'object',
    properties: {
      expression: { type: 'string' }
    },
    required: ['expression']
  }
}];

const toolResponse = await llm.complete({
  messages: [{ role: 'user', content: 'What is 15 * 27?' }],
  tools
});

if (toolResponse.toolCalls) {
  for (const call of toolResponse.toolCalls) {
    console.log(`Tool: ${call.name}, Args:`, call.arguments);
  }
}
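The `toolCalls` array only reports what the model wants to run; executing the tool is the caller's job. Below is a minimal, self-contained sketch of that dispatch step. The `ToolCall` shape and the handler map are local assumptions based on the example above, not exported package types:

```typescript
// Hypothetical tool-execution step: map tool names to local handlers and
// run whatever the model requested.
type ToolCall = { name: string; arguments: { expression?: string } };

const handlers: Record<string, (args: ToolCall['arguments']) => string> = {
  // Evaluate simple "a op b" expressions without eval()
  calculator: ({ expression }) => {
    const m = /^\s*(-?\d+(?:\.\d+)?)\s*([+\-*/])\s*(-?\d+(?:\.\d+)?)\s*$/.exec(expression ?? '');
    if (!m) throw new Error(`Unsupported expression: ${expression}`);
    const [, a, op, b] = m;
    const x = Number(a), y = Number(b);
    const result = op === '+' ? x + y : op === '-' ? x - y : op === '*' ? x * y : x / y;
    return String(result);
  },
};

function runTool(call: ToolCall): string {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`Unknown tool: ${call.name}`);
  return handler(call.arguments);
}

console.log(runTool({ name: 'calculator', arguments: { expression: '15 * 27' } })); // 405
```

The result string would then be appended to `messages` as a tool-result message and sent back through `llm.complete` so the model can produce its final answer; consult the package's types for the exact message role.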

Features

  • Unified Interface: Same API across OpenAI, Anthropic, and Gemini
  • Streaming: Real-time text and thinking streams with completion signals
  • Tool Calling: Consistent function calling with automatic ID generation
  • Thinking Mode: Access reasoning tokens (o1, Claude, Gemini 2.0)
  • Token Tracking: Input, output, cache, and thinking token counts
  • Error Handling: Graceful fallbacks with detailed error messages
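As a sketch of how token tracking might be consumed, here is a small accumulator over a usage object with per-category counts. The field names are illustrative assumptions mirroring the categories listed above; check the package's types for the actual shape:

```typescript
// Illustrative usage shape -- the field names are assumptions, not the
// package's actual types; they mirror the token categories listed above.
interface Usage {
  input: number;      // prompt tokens
  output: number;     // completion tokens
  cacheRead?: number; // tokens served from a provider-side prompt cache
  thinking?: number;  // reasoning tokens (o1, Claude, Gemini 2.0)
}

// Sum every category so cost tracking sees one number per request.
function totalTokens(u: Usage): number {
  return u.input + u.output + (u.cacheRead ?? 0) + (u.thinking ?? 0);
}

console.log(totalTokens({ input: 1200, output: 350, thinking: 512 })); // 2062
```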

Providers

Provider           | Models                              | Thinking | Tools | Streaming
-------------------|-------------------------------------|----------|-------|----------
OpenAI Completions | gpt-4o, gpt-4o-mini                 | –        | ✓     | ✓
OpenAI Responses   | o1, o3, gpt-5                       | ✓        | ✓     | ✓
Anthropic          | claude-3.5-sonnet, claude-3.5-haiku | ✓        | ✓     | ✓
Gemini             | gemini-2.0-flash, gemini-2.0-pro    | ✓        | ✓     | ✓
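Because every provider exposes the same API, model selection reduces to a small routing step. The helper below infers the provider from a model id per the table above; the `openai-responses` identifier is an assumption mirroring the other subpath imports shown in the Quick Start:

```typescript
// Route a model id to the provider that serves it, following the table above.
function providerFor(model: string): string {
  if (model.startsWith('claude')) return 'anthropic';
  if (model.startsWith('gemini')) return 'gemini';
  if (/^(o\d|gpt-5)/.test(model)) return 'openai-responses'; // o1, o3, gpt-5
  if (model.startsWith('gpt-')) return 'openai-completions'; // gpt-4o family
  throw new Error(`No provider for model: ${model}`);
}

console.log(providerFor('claude-3-5-sonnet-20241022')); // anthropic
console.log(providerFor('o3'));                         // openai-responses
```

Note the ordering: `gpt-5` must be matched before the generic `gpt-` prefix, since the table routes it to the Responses provider rather than Completions.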

Development

This package is part of the pi monorepo. See the main README for development instructions.

License

MIT