co-mono/packages/ai
Mario Zechner f064ea0e14 feat(ai): Create unified AI package with OpenAI, Anthropic, and Gemini support
- Set up @mariozechner/ai package structure following monorepo patterns
- Install OpenAI, Anthropic, and Google Gemini SDK dependencies
- Document comprehensive API investigation for all three providers
- Design minimal unified API with streaming-first architecture
- Add models.dev integration for pricing and capabilities
- Implement automatic caching strategy for all providers
- Update project documentation with package creation guide
2025-08-17 20:18:45 +02:00

@mariozechner/ai

Unified API for OpenAI, Anthropic, and Google Gemini LLM providers. This package provides a common interface for working with multiple LLM providers, handling their differences transparently while exposing a consistent, minimal API.

Features (Planned)

  • Unified Interface: Single API for OpenAI, Anthropic, and Google Gemini
  • Streaming Support: Real-time response streaming with delta events
  • Tool Calling: Consistent tool/function calling across providers
  • Reasoning/Thinking: Support for reasoning tokens where available
  • Session Management: Serializable conversation state across providers
  • Token Tracking: Unified token counting (input, output, cached, reasoning)
  • Interrupt Handling: Graceful cancellation of requests
  • Provider Detection: Automatic configuration based on endpoint
  • Caching Support: Provider-specific caching strategies
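Since the package is still in the planning stage, the unified token accounting can only be sketched. The shape below is an illustration of what the "Token Tracking" feature above might look like; the names `TokenUsage` and `addUsage` are hypothetical, not part of any published API:

```typescript
// Hypothetical shape for the unified token counts described above.
// None of these names are final; this is an illustration only.
interface TokenUsage {
  input: number;
  output: number;
  cached: number;    // tokens served from a provider-side cache
  reasoning: number; // reasoning/thinking tokens, where the provider reports them
}

// Accumulate per-chunk usage into a running total,
// e.g. while consuming a streamed response.
function addUsage(total: TokenUsage, delta: Partial<TokenUsage>): TokenUsage {
  return {
    input: total.input + (delta.input ?? 0),
    output: total.output + (delta.output ?? 0),
    cached: total.cached + (delta.cached ?? 0),
    reasoning: total.reasoning + (delta.reasoning ?? 0),
  };
}
```

Keeping the four counts in one flat structure is what lets the same accounting code work across providers, even though each provider reports a different subset of them.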

Installation

npm install @mariozechner/ai

Quick Start (Coming Soon)

import { createClient } from '@mariozechner/ai';

// Select a provider explicitly (automatic detection from the endpoint is also planned)
const client = createClient({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4'
});

// Same API works for all providers
const response = await client.complete({
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
  stream: true
});

for await (const event of response) {
  if (event.type === 'content') {
    process.stdout.write(event.text);
  }
}
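Tool calling is planned to flow through the same event stream. Since the package is not yet implemented, the sketch below fakes a provider stream with a local async generator to show how partial tool-call deltas might be assembled; the event shape (`tool_call_delta` and its fields) is illustrative, not the final API:

```typescript
// Hypothetical stream events; names and fields are illustrative only.
type StreamEvent =
  | { type: "content"; text: string }
  | { type: "tool_call_delta"; id: string; name?: string; argsChunk: string };

// Providers stream tool-call arguments as JSON fragments;
// collect the fragments per call id, then parse once the stream ends.
async function collectToolCalls(stream: AsyncIterable<StreamEvent>) {
  const calls = new Map<string, { name: string; args: string }>();
  for await (const event of stream) {
    if (event.type !== "tool_call_delta") continue;
    const call = calls.get(event.id) ?? { name: "", args: "" };
    if (event.name) call.name = event.name;
    call.args += event.argsChunk;
    calls.set(event.id, call);
  }
  return [...calls.values()].map((c) => ({ name: c.name, args: JSON.parse(c.args) }));
}

// Fake stream standing in for a provider response.
async function* fakeStream(): AsyncGenerator<StreamEvent> {
  yield { type: "tool_call_delta", id: "1", name: "get_weather", argsChunk: '{"city":' };
  yield { type: "tool_call_delta", id: "1", argsChunk: '"Vienna"}' };
}
```

Buffering fragments per call id matters because all three providers split tool-call arguments across chunks, and the JSON is only parseable once the stream for that call is complete.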

Supported Providers

  • OpenAI: GPT-3.5, GPT-4, o1, o3 models
  • Anthropic: Claude models via native SDK
  • Google Gemini: Gemini models with thinking support

Development

This package is part of the pi monorepo. See the main README for development instructions.

License

MIT