mirror of
https://github.com/harivansh-afk/claude-code-vertical.git
synced 2026-04-15 09:01:13 +00:00

Commit 087b195b4c ("remove"), parent 20e4722fb1.
62 changed files with 0 additions and 5798 deletions
---
name: 1password
description: Set up and use 1Password CLI (op). Use when installing the CLI, enabling desktop app integration, signing in (single or multi-account), or reading/injecting/running secrets via op.
homepage: https://developer.1password.com/docs/cli/get-started/
metadata: {"clawdbot":{"emoji":"🔐","requires":{"bins":["op"]},"install":[{"id":"brew","kind":"brew","formula":"1password-cli","bins":["op"],"label":"Install 1Password CLI (brew)"}]}}
---

# 1Password CLI

Follow the official CLI get-started steps. Don't guess install commands.

## References

- `references/get-started.md` (install + app integration + sign-in flow)
- `references/cli-examples.md` (real `op` examples)

## Workflow

1. Check OS + shell.
2. Verify CLI present: `op --version`.
3. Confirm desktop app integration is enabled (per get-started) and the app is unlocked.
4. REQUIRED: create a fresh tmux session for all `op` commands (no direct `op` calls outside tmux).
5. Sign in / authorize inside tmux: `op signin` (expect app prompt).
6. Verify access inside tmux: `op whoami` (must succeed before any secret read).
7. If multiple accounts: use `--account` or `OP_ACCOUNT`.

## REQUIRED tmux session (T-Max)

The shell tool uses a fresh TTY per command. To avoid re-prompts and failures, always run `op` inside a dedicated tmux session with a fresh socket/session name.

Example (see `tmux` skill for socket conventions, do not reuse old session names):

```bash
SOCKET_DIR="${CLAWDBOT_TMUX_SOCKET_DIR:-${TMPDIR:-/tmp}/clawdbot-tmux-sockets}"
mkdir -p "$SOCKET_DIR"
SOCKET="$SOCKET_DIR/clawdbot-op.sock"
SESSION="op-auth-$(date +%Y%m%d-%H%M%S)"

tmux -S "$SOCKET" new -d -s "$SESSION" -n shell
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op signin --account my.1password.com" Enter
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op whoami" Enter
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op vault list" Enter
tmux -S "$SOCKET" capture-pane -p -J -t "$SESSION":0.0 -S -200
tmux -S "$SOCKET" kill-session -t "$SESSION"
```

## Guardrails

- Never paste secrets into logs, chat, or code.
- Prefer `op run` / `op inject` over writing secrets to disk.
- If sign-in without app integration is needed, use `op account add`.
- If a command returns "account is not signed in", re-run `op signin` inside tmux and authorize in the app.
- Do not run `op` outside tmux; stop and ask if tmux is unavailable.
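The "prefer `op run` over writing secrets to disk" guardrail can be sketched like this; the vault/item/field names (`app-prod`, `db`, `password`) are placeholders, not real entries:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical vault/item/field names; substitute your own.
VAULT="app-prod"; ITEM="db"; FIELD="password"
DB_REF="op://${VAULT}/${ITEM}/${FIELD}"

# Export the *reference*, not the secret; `op run` resolves it at runtime,
# so the plaintext never lands in shell history or on disk.
export DB_PASSWORD="$DB_REF"
# op run -- ./start-server.sh    # run this line only inside the tmux session
echo "$DB_REF"
```

Only the commented `op run` line touches 1Password; everything above it is plain string handling.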
# op CLI examples (from op help)

## Sign in

- `op signin`
- `op signin --account <shorthand|signin-address|account-id|user-id>`

## Read

- `op read op://app-prod/db/password`
- `op read "op://app-prod/db/one-time password?attribute=otp"`
- `op read "op://app-prod/ssh key/private key?ssh-format=openssh"`
- `op read --out-file ./key.pem op://app-prod/server/ssh/key.pem`

## Run

- `export DB_PASSWORD="op://app-prod/db/password"`
- `op run --no-masking -- printenv DB_PASSWORD`
- `op run --env-file="./.env" -- printenv DB_PASSWORD`

## Inject

- `echo "db_password: {{ op://app-prod/db/password }}" | op inject`
- `op inject -i config.yml.tpl -o config.yml`

## Whoami / accounts

- `op whoami`
- `op account list`
# 1Password CLI get-started (summary)

- Works on macOS, Windows, and Linux.
- macOS/Linux shells: bash, zsh, sh, fish.
- Windows shell: PowerShell.
- Requires a 1Password subscription and the desktop app to use app integration.
- macOS requirement: Big Sur 11.0.0 or later.
- Linux app integration requires PolKit + an auth agent.
- Install the CLI per the official doc for your OS.
- Enable desktop app integration in the 1Password app:
  - Open and unlock the app, then select your account/collection.
  - macOS: Settings > Developer > Integrate with 1Password CLI (Touch ID optional).
  - Windows: turn on Windows Hello, then Settings > Developer > Integrate.
  - Linux: Settings > Security > Unlock using system authentication, then Settings > Developer > Integrate.
- After integration, run any command to sign in (example in docs: `op vault list`).
- If multiple accounts: use `op signin` to pick one, or `--account` / `OP_ACCOUNT`.
- For non-integration auth, use `op account add`.
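A quick status probe tells you which of the steps above is still missing; this is a minimal sketch and the `op_status` helper name is made up here:

```bash
#!/usr/bin/env bash
# Prints one word: missing (CLI not installed), signed-out, or signed-in.
op_status() {
  if ! command -v op >/dev/null 2>&1; then
    echo "missing"
  elif op whoami >/dev/null 2>&1; then
    echo "signed-in"
  else
    echo "signed-out"
  fi
}
op_status
```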
---
name: apple-notes
description: Manage Apple Notes via the `memo` CLI on macOS (create, view, edit, delete, search, move, and export notes). Use when a user asks Clawdbot to add a note, list notes, search notes, or manage note folders.
homepage: https://github.com/antoniorodr/memo
metadata: {"clawdbot":{"emoji":"📝","os":["darwin"],"requires":{"bins":["memo"]},"install":[{"id":"brew","kind":"brew","formula":"antoniorodr/memo/memo","bins":["memo"],"label":"Install memo via Homebrew"}]}}
---

# Apple Notes CLI

Use `memo notes` to manage Apple Notes directly from the terminal. Create, view, edit, delete, search, move notes between folders, and export to HTML/Markdown.

Setup
- Install (Homebrew): `brew tap antoniorodr/memo && brew install antoniorodr/memo/memo`
- Manual (pip): `pip install .` (after cloning the repo)
- macOS-only; if prompted, grant Automation access to Notes.app.

View Notes
- List all notes: `memo notes`
- Filter by folder: `memo notes -f "Folder Name"`
- Search notes (fuzzy): `memo notes -s "query"`

Create Notes
- Add a new note: `memo notes -a`
  - Opens an interactive editor to compose the note.
- Quick add with title: `memo notes -a "Note Title"`

Edit Notes
- Edit existing note: `memo notes -e`
  - Interactive selection of note to edit.

Delete Notes
- Delete a note: `memo notes -d`
  - Interactive selection of note to delete.

Move Notes
- Move note to folder: `memo notes -m`
  - Interactive selection of note and destination folder.

Export Notes
- Export to HTML/Markdown: `memo notes -ex`
  - Exports selected note; uses Mistune for markdown processing.

Limitations
- Cannot edit notes containing images or attachments.
- Interactive prompts may require terminal access.

Notes
- macOS-only.
- Requires Apple Notes.app to be accessible.
- For automation, grant permissions in System Settings > Privacy & Security > Automation.
---
name: apple-reminders
description: Manage Apple Reminders via the `remindctl` CLI on macOS (list, add, edit, complete, delete). Supports lists, date filters, and JSON/plain output.
homepage: https://github.com/steipete/remindctl
metadata: {"clawdbot":{"emoji":"⏰","os":["darwin"],"requires":{"bins":["remindctl"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/remindctl","bins":["remindctl"],"label":"Install remindctl via Homebrew"}]}}
---

# Apple Reminders CLI (remindctl)

Use `remindctl` to manage Apple Reminders directly from the terminal. It supports list filtering, date-based views, and scripting output.

Setup
- Install (Homebrew): `brew install steipete/tap/remindctl`
- From source: `pnpm install && pnpm build` (binary at `./bin/remindctl`)
- macOS-only; grant Reminders permission when prompted.

Permissions
- Check status: `remindctl status`
- Request access: `remindctl authorize`

View Reminders
- Default (today): `remindctl`
- Today: `remindctl today`
- Tomorrow: `remindctl tomorrow`
- Week: `remindctl week`
- Overdue: `remindctl overdue`
- Upcoming: `remindctl upcoming`
- Completed: `remindctl completed`
- All: `remindctl all`
- Specific date: `remindctl 2026-01-04`

Manage Lists
- List all lists: `remindctl list`
- Show list: `remindctl list Work`
- Create list: `remindctl list Projects --create`
- Rename list: `remindctl list Work --rename Office`
- Delete list: `remindctl list Work --delete`

Create Reminders
- Quick add: `remindctl add "Buy milk"`
- With list + due: `remindctl add --title "Call mom" --list Personal --due tomorrow`

Edit Reminders
- Edit title/due: `remindctl edit 1 --title "New title" --due 2026-01-04`

Complete Reminders
- Complete by id: `remindctl complete 1 2 3`

Delete Reminders
- Delete by id: `remindctl delete 4A83 --force`

Output Formats
- JSON (scripting): `remindctl today --json`
- Plain TSV: `remindctl today --plain`
- Counts only: `remindctl today --quiet`

Date Formats
Accepted by `--due` and date filters:
- `today`, `tomorrow`, `yesterday`
- `YYYY-MM-DD`
- `YYYY-MM-DD HH:mm`
- ISO 8601 (`2026-01-04T12:34:56Z`)

Notes
- macOS-only.
- If access is denied, enable Terminal/remindctl in System Settings → Privacy & Security → Reminders.
- If running over SSH, grant access on the Mac that runs the command.
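The `--json` output is the scripting hook. A minimal sketch of pulling titles out of it; the field names in `SAMPLE` are assumptions about the schema, not taken from remindctl's docs:

```bash
#!/usr/bin/env bash
# Stand-in for `remindctl today --json`; field names here are guesses.
SAMPLE='[{"id":1,"title":"Buy milk","completed":false},{"id":2,"title":"Call mom","completed":true}]'

# In practice: remindctl today --json | jq -r '.[].title'
# Crude no-dependency extraction when jq is unavailable:
echo "$SAMPLE" | grep -o '"title":"[^"]*"' | cut -d'"' -f4
```

Prefer `jq` for anything beyond a quick glance; the grep approach breaks on escaped quotes.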
---
name: bear-notes
description: Create, search, and manage Bear notes via grizzly CLI.
homepage: https://bear.app
metadata: {"clawdbot":{"emoji":"🐻","os":["darwin"],"requires":{"bins":["grizzly"]},"install":[{"id":"go","kind":"go","module":"github.com/tylerwince/grizzly/cmd/grizzly@latest","bins":["grizzly"],"label":"Install grizzly (go)"}]}}
---

# Bear Notes

Use `grizzly` to create, read, and manage notes in Bear on macOS.

Requirements
- Bear app installed and running
- For some operations (add-text, tags, open-note --selected), a Bear app token (stored in `~/.config/grizzly/token`)

## Getting a Bear Token

For operations that require a token (add-text, tags, open-note --selected), you need an authentication token:
1. Open Bear → Help → API Token → Copy Token
2. Save it: `echo "YOUR_TOKEN" > ~/.config/grizzly/token`

## Common Commands

Create a note
```bash
echo "Note content here" | grizzly create --title "My Note" --tag work
grizzly create --title "Quick Note" --tag inbox < /dev/null
```

Open/read a note by ID
```bash
grizzly open-note --id "NOTE_ID" --enable-callback --json
```

Append text to a note
```bash
echo "Additional content" | grizzly add-text --id "NOTE_ID" --mode append --token-file ~/.config/grizzly/token
```

List all tags
```bash
grizzly tags --enable-callback --json --token-file ~/.config/grizzly/token
```

Search notes (via open-tag)
```bash
grizzly open-tag --name "work" --enable-callback --json
```

## Options

Common flags:
- `--dry-run` — Preview the URL without executing
- `--print-url` — Show the x-callback-url
- `--enable-callback` — Wait for Bear's response (needed for reading data)
- `--json` — Output as JSON (when using callbacks)
- `--token-file PATH` — Path to Bear API token file

## Configuration

Grizzly reads config from (in priority order):
1. CLI flags
2. Environment variables (`GRIZZLY_TOKEN_FILE`, `GRIZZLY_CALLBACK_URL`, `GRIZZLY_TIMEOUT`)
3. `.grizzly.toml` in current directory
4. `~/.config/grizzly/config.toml`

Example `~/.config/grizzly/config.toml`:
```toml
token_file = "~/.config/grizzly/token"
callback_url = "http://127.0.0.1:42123/success"
timeout = "5s"
```

## Notes

- Bear must be running for commands to work
- Note IDs are Bear's internal identifiers (visible in note info or via callbacks)
- Use `--enable-callback` when you need to read data back from Bear
- Some operations require a valid token (add-text, tags, open-note --selected)
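Token-gated commands fail confusingly when the token file is empty or absent, so a one-line preflight helps; this is a generic sketch, not a grizzly feature:

```bash
#!/usr/bin/env bash
# Honor the same env override grizzly documents, falling back to the default path.
TOKEN_FILE="${GRIZZLY_TOKEN_FILE:-$HOME/.config/grizzly/token}"
if [ -s "$TOKEN_FILE" ]; then
  echo "token-present"
else
  echo "token-missing"   # run the Bear → Help → API Token steps first
fi
```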
---
name: blogwatcher
description: Monitor blogs and RSS/Atom feeds for updates using the blogwatcher CLI.
homepage: https://github.com/Hyaxia/blogwatcher
metadata: {"clawdbot":{"emoji":"📰","requires":{"bins":["blogwatcher"]},"install":[{"id":"go","kind":"go","module":"github.com/Hyaxia/blogwatcher/cmd/blogwatcher@latest","bins":["blogwatcher"],"label":"Install blogwatcher (go)"}]}}
---

# blogwatcher

Track blog and RSS/Atom feed updates with the `blogwatcher` CLI.

Install
- Go: `go install github.com/Hyaxia/blogwatcher/cmd/blogwatcher@latest`

Quick start
- `blogwatcher --help`

Common commands
- Add a blog: `blogwatcher add "My Blog" https://example.com`
- List blogs: `blogwatcher blogs`
- Scan for updates: `blogwatcher scan`
- List articles: `blogwatcher articles`
- Mark an article read: `blogwatcher read 1`
- Mark all articles read: `blogwatcher read-all`
- Remove a blog: `blogwatcher remove "My Blog"`

Example output
```
$ blogwatcher blogs
Tracked blogs (1):

xkcd
  URL: https://xkcd.com
```
```
$ blogwatcher scan
Scanning 1 blog(s)...

xkcd
  Source: RSS | Found: 4 | New: 4

Found 4 new article(s) total!
```

Notes
- Use `blogwatcher <command> --help` to discover flags and options.
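For unattended use (cron, heartbeat), a wrapper that stays quiet when nothing changed is handy. A sketch: `scan_summary` is a made-up helper, and the `New:` match relies on the scan output format shown above:

```bash
#!/usr/bin/env bash
# Prints the scan report only when it mentions new articles, else "no updates".
scan_summary() {
  local out
  out="$(blogwatcher scan 2>/dev/null || true)"
  case "$out" in
    *New:*) printf '%s\n' "$out" ;;
    *)      echo "no updates" ;;
  esac
}
scan_summary
```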
---
name: blucli
description: BluOS CLI (blu) for discovery, playback, grouping, and volume.
homepage: https://blucli.sh
metadata: {"clawdbot":{"emoji":"🫐","requires":{"bins":["blu"]},"install":[{"id":"go","kind":"go","module":"github.com/steipete/blucli/cmd/blu@latest","bins":["blu"],"label":"Install blucli (go)"}]}}
---

# blucli (blu)

Use `blu` to control Bluesound/NAD players.

Quick start
- `blu devices` (pick target)
- `blu --device <id> status`
- `blu play|pause|stop`
- `blu volume set 15`

Target selection (in priority order)
- `--device <id|name|alias>`
- `BLU_DEVICE`
- config default (if set)

Common tasks
- Grouping: `blu group status|add|remove`
- TuneIn search/play: `blu tunein search "query"`, `blu tunein play "query"`

Prefer `--json` for scripts. Confirm the target device before changing playback.
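The target-selection order above can be sketched in plain shell; the `resolve_device` helper is illustrative (blu resolves this internally), and the device names are placeholders:

```bash
#!/usr/bin/env bash
# Mirrors the documented priority: --device flag > BLU_DEVICE env > config default.
resolve_device() {
  local flag="$1" config_default="$2"
  if [ -n "$flag" ]; then
    echo "$flag"
  elif [ -n "${BLU_DEVICE:-}" ]; then
    echo "$BLU_DEVICE"
  else
    echo "$config_default"
  fi
}

export BLU_DEVICE="kitchen"
resolve_device "" "livingroom"        # env beats config default
resolve_device "office" "livingroom"  # explicit flag beats env
```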
---
name: camsnap
description: Capture frames or clips from RTSP/ONVIF cameras.
homepage: https://camsnap.ai
metadata: {"clawdbot":{"emoji":"📸","requires":{"bins":["camsnap"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/camsnap","bins":["camsnap"],"label":"Install camsnap (brew)"}]}}
---

# camsnap

Use `camsnap` to grab snapshots, clips, or motion events from configured cameras.

Setup
- Config file: `~/.config/camsnap/config.yaml`
- Add camera: `camsnap add --name kitchen --host 192.168.0.10 --user user --pass pass`

Common commands
- Discover: `camsnap discover --info`
- Snapshot: `camsnap snap kitchen --out shot.jpg`
- Clip: `camsnap clip kitchen --dur 5s --out clip.mp4`
- Motion watch: `camsnap watch kitchen --threshold 0.2 --action '...'`
- Doctor: `camsnap doctor --probe`

Notes
- Requires `ffmpeg` on PATH.
- Prefer a short test capture before longer clips.
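Fixed output names like `shot.jpg` overwrite the previous capture; a timestamped name keeps a history. A sketch, reusing the `kitchen` camera from the setup example:

```bash
#!/usr/bin/env bash
STAMP="$(date +%Y%m%d-%H%M%S)"
OUT="kitchen-${STAMP}.jpg"
# camsnap snap kitchen --out "$OUT"   # uncomment once the camera is configured
echo "$OUT"
```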
---
name: discord
description: Use when you need to control Discord from Clawdbot via the discord tool: send messages, react, post or upload stickers, upload emojis, run polls, manage threads/pins/search, create/edit/delete channels and categories, fetch permissions or member/role/channel info, or handle moderation actions in Discord DMs or channels.
---

# Discord Actions

## Overview

Use `discord` to manage messages, reactions, threads, polls, and moderation. You can disable groups via `discord.actions.*` (defaults to enabled, except roles/moderation). The tool uses the bot token configured for Clawdbot.

## Inputs to collect

- For reactions: `channelId`, `messageId`, and an `emoji`.
- For fetchMessage: `guildId`, `channelId`, `messageId`, or a `messageLink` like `https://discord.com/channels/<guildId>/<channelId>/<messageId>`.
- For stickers/polls/sendMessage: a `to` target (`channel:<id>` or `user:<id>`). Optional `content` text.
- Polls also need a `question` plus 2–10 `answers`.
- For media: `mediaUrl` with `file:///path` for local files or `https://...` for remote.
- For emoji uploads: `guildId`, `name`, `mediaUrl`, optional `roleIds` (limit 256KB, PNG/JPG/GIF).
- For sticker uploads: `guildId`, `name`, `description`, `tags`, `mediaUrl` (limit 512KB, PNG/APNG/Lottie JSON).

Message context lines include `discord message id` and `channel` fields you can reuse directly.

**Note:** `sendMessage` uses `to: "channel:<id>"` format, not `channelId`. Other actions like `react`, `readMessages`, `editMessage` use `channelId` directly.
**Note:** `fetchMessage` accepts message IDs or full links like `https://discord.com/channels/<guildId>/<channelId>/<messageId>`.
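A message link carries all three IDs in a fixed order, so you can split one without a regex; a minimal sketch using placeholder IDs:

```bash
#!/usr/bin/env bash
link="https://discord.com/channels/999/123/456"   # placeholder IDs
rest="${link#https://discord.com/channels/}"      # strip the fixed prefix
guildId="${rest%%/*}"; rest="${rest#*/}"          # first segment = guild
channelId="${rest%%/*}"                           # second segment = channel
messageId="${rest#*/}"                            # last segment = message
echo "$guildId $channelId $messageId"             # → 999 123 456
```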
## Actions

### React to a message

```json
{
  "action": "react",
  "channelId": "123",
  "messageId": "456",
  "emoji": "✅"
}
```

### List reactions + users

```json
{
  "action": "reactions",
  "channelId": "123",
  "messageId": "456",
  "limit": 100
}
```

### Send a sticker

```json
{
  "action": "sticker",
  "to": "channel:123",
  "stickerIds": ["9876543210"],
  "content": "Nice work!"
}
```

- Up to 3 sticker IDs per message.
- `to` can be `user:<id>` for DMs.

### Upload a custom emoji

```json
{
  "action": "emojiUpload",
  "guildId": "999",
  "name": "party_blob",
  "mediaUrl": "file:///tmp/party.png",
  "roleIds": ["222"]
}
```

- Emoji images must be PNG/JPG/GIF and <= 256KB.
- `roleIds` is optional; omit to make the emoji available to everyone.

### Upload a sticker

```json
{
  "action": "stickerUpload",
  "guildId": "999",
  "name": "clawdbot_wave",
  "description": "Clawdbot waving hello",
  "tags": "👋",
  "mediaUrl": "file:///tmp/wave.png"
}
```

- Stickers require `name`, `description`, and `tags`.
- Uploads must be PNG/APNG/Lottie JSON and <= 512KB.

### Create a poll

```json
{
  "action": "poll",
  "to": "channel:123",
  "question": "Lunch?",
  "answers": ["Pizza", "Sushi", "Salad"],
  "allowMultiselect": false,
  "durationHours": 24,
  "content": "Vote now"
}
```

- `durationHours` defaults to 24; max 32 days (768 hours).

### Check bot permissions for a channel

```json
{
  "action": "permissions",
  "channelId": "123"
}
```

## Ideas to try

- React with ✅/⚠️ to mark status updates.
- Post a quick poll for release decisions or meeting times.
- Send celebratory stickers after successful deploys.
- Upload new emojis/stickers for release moments.
- Run weekly “priority check” polls in team channels.
- DM stickers as acknowledgements when a user’s request is completed.

## Action gating

Use `discord.actions.*` to disable action groups:
- `reactions` (react + reactions list + emojiList)
- `stickers`, `polls`, `permissions`, `messages`, `threads`, `pins`, `search`
- `emojiUploads`, `stickerUploads`
- `memberInfo`, `roleInfo`, `channelInfo`, `voiceStatus`, `events`
- `roles` (role add/remove, default `false`)
- `channels` (channel/category create/edit/delete/move, default `false`)
- `moderation` (timeout/kick/ban, default `false`)
### Read recent messages

```json
{
  "action": "readMessages",
  "channelId": "123",
  "limit": 20
}
```

### Fetch a single message

```json
{
  "action": "fetchMessage",
  "guildId": "999",
  "channelId": "123",
  "messageId": "456"
}
```

```json
{
  "action": "fetchMessage",
  "messageLink": "https://discord.com/channels/999/123/456"
}
```

### Send/edit/delete a message

```json
{
  "action": "sendMessage",
  "to": "channel:123",
  "content": "Hello from Clawdbot"
}
```

**With media attachment:**

```json
{
  "action": "sendMessage",
  "to": "channel:123",
  "content": "Check out this audio!",
  "mediaUrl": "file:///tmp/audio.mp3"
}
```

- `to` uses format `channel:<id>` or `user:<id>` for DMs (not `channelId`!)
- `mediaUrl` supports local files (`file:///path/to/file`) and remote URLs (`https://...`)
- Optional `replyTo` with a message ID to reply to a specific message

```json
{
  "action": "editMessage",
  "channelId": "123",
  "messageId": "456",
  "content": "Fixed typo"
}
```

```json
{
  "action": "deleteMessage",
  "channelId": "123",
  "messageId": "456"
}
```
### Threads

```json
{
  "action": "threadCreate",
  "channelId": "123",
  "name": "Bug triage",
  "messageId": "456"
}
```

```json
{
  "action": "threadList",
  "guildId": "999"
}
```

```json
{
  "action": "threadReply",
  "channelId": "777",
  "content": "Replying in thread"
}
```

### Pins

```json
{
  "action": "pinMessage",
  "channelId": "123",
  "messageId": "456"
}
```

```json
{
  "action": "listPins",
  "channelId": "123"
}
```

### Search messages

```json
{
  "action": "searchMessages",
  "guildId": "999",
  "content": "release notes",
  "channelIds": ["123", "456"],
  "limit": 10
}
```

### Member + role info

```json
{
  "action": "memberInfo",
  "guildId": "999",
  "userId": "111"
}
```

```json
{
  "action": "roleInfo",
  "guildId": "999"
}
```

### List available custom emojis

```json
{
  "action": "emojiList",
  "guildId": "999"
}
```

### Role changes (disabled by default)

```json
{
  "action": "roleAdd",
  "guildId": "999",
  "userId": "111",
  "roleId": "222"
}
```

### Channel info

```json
{
  "action": "channelInfo",
  "channelId": "123"
}
```

```json
{
  "action": "channelList",
  "guildId": "999"
}
```

### Channel management (disabled by default)

Create, edit, delete, and move channels and categories. Enable via `discord.actions.channels: true`.

**Create a text channel:**

```json
{
  "action": "channelCreate",
  "guildId": "999",
  "name": "general-chat",
  "type": 0,
  "parentId": "888",
  "topic": "General discussion"
}
```

- `type`: Discord channel type integer (0 = text, 2 = voice, 4 = category; other values supported)
- `parentId`: category ID to nest under (optional)
- `topic`, `position`, `nsfw`: optional

**Create a category:**

```json
{
  "action": "categoryCreate",
  "guildId": "999",
  "name": "Projects"
}
```

**Edit a channel:**

```json
{
  "action": "channelEdit",
  "channelId": "123",
  "name": "new-name",
  "topic": "Updated topic"
}
```

- Supports `name`, `topic`, `position`, `parentId` (null to remove from category), `nsfw`, `rateLimitPerUser`

**Move a channel:**

```json
{
  "action": "channelMove",
  "guildId": "999",
  "channelId": "123",
  "parentId": "888",
  "position": 2
}
```

- `parentId`: target category (null to move to top level)

**Delete a channel:**

```json
{
  "action": "channelDelete",
  "channelId": "123"
}
```

**Edit/delete a category:**

```json
{
  "action": "categoryEdit",
  "categoryId": "888",
  "name": "Renamed Category"
}
```

```json
{
  "action": "categoryDelete",
  "categoryId": "888"
}
```

### Voice status

```json
{
  "action": "voiceStatus",
  "guildId": "999",
  "userId": "111"
}
```

### Scheduled events

```json
{
  "action": "eventList",
  "guildId": "999"
}
```

### Moderation (disabled by default)

```json
{
  "action": "timeout",
  "guildId": "999",
  "userId": "111",
  "durationMinutes": 10
}
```
## Discord Writing Style Guide

**Keep it conversational!** Discord is a chat platform, not documentation.

### Do
- Short, punchy messages (1-3 sentences ideal)
- Multiple quick replies > one wall of text
- Use emoji for tone/emphasis 🦞
- Lowercase casual style is fine
- Break up info into digestible chunks
- Match the energy of the conversation

### Don't
- No markdown tables (Discord renders them as ugly raw `| text |`)
- No `## Headers` for casual chat (use **bold** or CAPS for emphasis)
- Avoid multi-paragraph essays
- Don't over-explain simple things
- Skip the "I'd be happy to help!" fluff

### Formatting that works
- **bold** for emphasis
- `code` for technical terms
- Lists for multiple items
- > quotes for referencing
- Wrap multiple links in `<>` to suppress embeds

### Example transformations

❌ Bad:
```
I'd be happy to help with that! Here's a comprehensive overview of the versioning strategies available:

## Semantic Versioning
Semver uses MAJOR.MINOR.PATCH format where...

## Calendar Versioning
CalVer uses date-based versions like...
```

✅ Good:
```
versioning options: semver (1.2.3), calver (2026.01.04), or yolo (`latest` forever). what fits your release cadence?
```
---
name: eightctl
description: Control Eight Sleep pods (status, temperature, alarms, schedules).
homepage: https://eightctl.sh
metadata: {"clawdbot":{"emoji":"🎛️","requires":{"bins":["eightctl"]},"install":[{"id":"go","kind":"go","module":"github.com/steipete/eightctl/cmd/eightctl@latest","bins":["eightctl"],"label":"Install eightctl (go)"}]}}
---

# eightctl

Use `eightctl` for Eight Sleep pod control. Requires auth.

Auth
- Config: `~/.config/eightctl/config.yaml`
- Env: `EIGHTCTL_EMAIL`, `EIGHTCTL_PASSWORD`

Quick start
- `eightctl status`
- `eightctl on|off`
- `eightctl temp 20`

Common tasks
- Alarms: `eightctl alarm list|create|dismiss`
- Schedules: `eightctl schedule list|create|update`
- Audio: `eightctl audio state|play|pause`
- Base: `eightctl base info|angle`

Notes
- API is unofficial and rate-limited; avoid repeated logins.
- Confirm before changing temperature or alarms.
---
name: food-order
description: Reorder Foodora orders + track ETA/status with ordercli. Never confirm without explicit user approval. Triggers: order food, reorder, track ETA.
homepage: https://ordercli.sh
metadata: {"clawdbot":{"emoji":"🥡","requires":{"bins":["ordercli"]},"install":[{"id":"go","kind":"go","module":"github.com/steipete/ordercli/cmd/ordercli@latest","bins":["ordercli"],"label":"Install ordercli (go)"}]}}
---

# Food order (Foodora via ordercli)

Goal: reorder a previous Foodora order safely (preview first; confirm only on explicit user “yes/confirm/place the order”).

Hard safety rules
- Never run `ordercli foodora reorder ... --confirm` unless the user explicitly confirms placing the order.
- Prefer preview-only steps first; show what will happen; ask for confirmation.
- If the user is unsure: stop at preview and ask questions.

Setup (once)
- Country: `ordercli foodora countries` → `ordercli foodora config set --country AT`
- Login (password): `ordercli foodora login --email you@example.com --password-stdin`
- Login (no password, preferred): `ordercli foodora session chrome --url https://www.foodora.at/ --profile "Default"`

Find what to reorder
- Recent list: `ordercli foodora history --limit 10`
- Details: `ordercli foodora history show <orderCode>`
- If needed (machine-readable): `ordercli foodora history show <orderCode> --json`

Preview reorder (no cart changes)
- `ordercli foodora reorder <orderCode>`

Place reorder (cart change; explicit confirmation required)
- Confirm first, then run: `ordercli foodora reorder <orderCode> --confirm`
- Multiple addresses? Ask the user for the right `--address-id` (take it from their Foodora account / prior order data) and run:
  - `ordercli foodora reorder <orderCode> --confirm --address-id <id>`

Track the order
- ETA/status (active list): `ordercli foodora orders`
- Live updates: `ordercli foodora orders --watch`
- Single order detail: `ordercli foodora order <orderCode>`

Debug / safe testing
- Use a throwaway config: `ordercli --config /tmp/ordercli.json ...`
@@ -1,23 +0,0 @@
---
name: gemini
description: Gemini CLI for one-shot Q&A, summaries, and generation.
homepage: https://ai.google.dev/
metadata: {"clawdbot":{"emoji":"♊️","requires":{"bins":["gemini"]},"install":[{"id":"brew","kind":"brew","formula":"gemini-cli","bins":["gemini"],"label":"Install Gemini CLI (brew)"}]}}
---

# Gemini CLI

Use Gemini in one-shot mode with a positional prompt (avoid interactive mode).

Quick start
- `gemini "Answer this question..."`
- `gemini --model <name> "Prompt..."`
- `gemini --output-format json "Return JSON"`
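When scripting one-shot calls, building the argv explicitly keeps quoting intact for prompts with spaces and quotes; a sketch (the model name is a placeholder, not a documented default):

```shell
# Build the invocation as positional parameters so the prompt survives
# word splitting; execute later with: "$@"
set -- gemini --model "example-model" --output-format json \
  "Summarize in one sentence: pipes connect stdout to stdin."
printf 'would run: %s\n' "$*"
```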

Extensions
- List: `gemini --list-extensions`
- Manage: `gemini extensions <command>`

Notes
- If auth is required, run `gemini` once interactively and follow the login flow.
- Avoid `--yolo` for safety.

@@ -1,47 +0,0 @@
---
name: gifgrep
description: Search GIF providers with CLI/TUI, download results, and extract stills/sheets.
homepage: https://gifgrep.com
metadata: {"clawdbot":{"emoji":"🧲","requires":{"bins":["gifgrep"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/gifgrep","bins":["gifgrep"],"label":"Install gifgrep (brew)"},{"id":"go","kind":"go","module":"github.com/steipete/gifgrep/cmd/gifgrep@latest","bins":["gifgrep"],"label":"Install gifgrep (go)"}]}}
---

# gifgrep

Use `gifgrep` to search GIF providers (Tenor/Giphy), browse in a TUI, download results, and extract stills or sheets.

GIF-Grab (gifgrep workflow)
- Search → preview → download → extract (still/sheet) for fast review and sharing.

Quick start
- `gifgrep cats --max 5`
- `gifgrep cats --format url | head -n 5`
- `gifgrep search --json cats | jq '.[0].url'`
- `gifgrep tui "office handshake"`
- `gifgrep cats --download --max 1 --format url`
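The `--format url` output is plain lines, so it pipes cleanly into ordinary shell tools; a sketch where a stub function stands in for real gifgrep output (nothing here hits a provider):

```shell
# Stand-in for: gifgrep cats --format url --max 5
gifgrep_urls() {
cat <<'EOF'
https://example.com/a.gif
https://example.com/b.gif
EOF
}

# Take the first hit and hand it to whatever downloader you prefer.
first=$(gifgrep_urls | head -n 1)
echo "would download: $first"
```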

TUI + previews
- TUI: `gifgrep tui "query"`
- CLI still previews: `--thumbs` (Kitty/Ghostty only; still frame)

Download + reveal
- `--download` saves to `~/Downloads`
- `--reveal` shows the last download in Finder

Stills + sheets
- `gifgrep still ./clip.gif --at 1.5s -o still.png`
- `gifgrep sheet ./clip.gif --frames 9 --cols 3 -o sheet.png`
- Sheets = single PNG grid of sampled frames (great for quick review, docs, PRs, chat).
- Tune: `--frames` (count), `--cols` (grid width), `--padding` (spacing).

Providers
- `--source auto|tenor|giphy`
- `GIPHY_API_KEY` required for `--source giphy`
- `TENOR_API_KEY` optional (Tenor demo key used if unset)

Output
- `--json` prints an array of results (`id`, `title`, `url`, `preview_url`, `tags`, `width`, `height`)
- `--format` for pipe-friendly fields (e.g., `url`)

Environment tweaks
- `GIFGREP_SOFTWARE_ANIM=1` to force software animation
- `GIFGREP_CELL_ASPECT=0.5` to tweak preview geometry

@@ -1,47 +0,0 @@
---
name: github
description: "Interact with GitHub using the `gh` CLI. Use `gh issue`, `gh pr`, `gh run`, and `gh api` for issues, PRs, CI runs, and advanced queries."
---

# GitHub Skill

Use the `gh` CLI to interact with GitHub. Always specify `--repo owner/repo` when not in a git directory, or use URLs directly.

## Pull Requests

Check CI status on a PR:
```bash
gh pr checks 55 --repo owner/repo
```

List recent workflow runs:
```bash
gh run list --repo owner/repo --limit 10
```

View a run and see which steps failed:
```bash
gh run view <run-id> --repo owner/repo
```

View logs for failed steps only:
```bash
gh run view <run-id> --repo owner/repo --log-failed
```

## API for Advanced Queries

The `gh api` command is useful for accessing data not available through other subcommands.

Get PR with specific fields:
```bash
gh api repos/owner/repo/pulls/55 --jq '.title, .state, .user.login'
```

## JSON Output

Most commands support `--json` for structured output. You can use `--jq` to filter:

```bash
gh issue list --repo owner/repo --json number,title --jq '.[] | "\(.number): \(.title)"'
```
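That line-oriented `--jq` output is easy to consume in a shell loop; a sketch where a stub function stands in for the real `gh` call (no network, no repo needed):

```shell
# Stand-in for:
#   gh issue list --repo owner/repo --json number,title \
#     --jq '.[] | "\(.number): \(.title)"'
gh_issues() {
cat <<'EOF'
12: Fix login crash
15: Update docs
EOF
}

# Split each "number: title" line back into fields.
gh_issues | while IFS=': ' read -r num title; do
  echo "issue #$num -> $title"
done
```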
@@ -1,90 +0,0 @@
---
name: gog
description: Google Workspace CLI for Gmail, Calendar, Drive, Contacts, Sheets, and Docs.
homepage: https://gogcli.sh
metadata: {"clawdbot":{"emoji":"🎮","requires":{"bins":["gog"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/gogcli","bins":["gog"],"label":"Install gog (brew)"}]}}
---

# gog

Use `gog` for Gmail/Calendar/Drive/Contacts/Sheets/Docs. Requires OAuth setup.

Setup (once)
- `gog auth credentials /path/to/client_secret.json`
- `gog auth add you@gmail.com --services gmail,calendar,drive,contacts,sheets,docs`
- `gog auth list`

Common commands
- Gmail search: `gog gmail search 'newer_than:7d' --max 10`
- Gmail send (plain): `gog gmail send --to a@b.com --subject "Hi" --body "Hello"`
- Gmail send (multi-line): `gog gmail send --to a@b.com --subject "Hi" --body-file ./message.txt`
- Gmail send (stdin): `gog gmail send --to a@b.com --subject "Hi" --body-file -`
- Gmail send (HTML): `gog gmail send --to a@b.com --subject "Hi" --body-html "<p>Hello</p>"`
- Gmail draft: `gog gmail drafts create --to a@b.com --subject "Hi" --body-file ./message.txt`
- Gmail send draft: `gog gmail drafts send <draftId>`
- Gmail reply: `gog gmail send --to a@b.com --subject "Re: Hi" --body "Reply" --reply-to-message-id <msgId>`
- Calendar list events: `gog calendar events <calendarId> --from <iso> --to <iso>`
- Calendar create event: `gog calendar create <calendarId> --summary "Title" --from <iso> --to <iso>`
- Calendar create with color: `gog calendar create <calendarId> --summary "Title" --from <iso> --to <iso> --event-color 7`
- Calendar update event: `gog calendar update <calendarId> <eventId> --summary "New Title" --event-color 4`
- Calendar show colors: `gog calendar colors`
- Drive search: `gog drive search "query" --max 10`
- Contacts: `gog contacts list --max 20`
- Sheets get: `gog sheets get <sheetId> "Tab!A1:D10" --json`
- Sheets update: `gog sheets update <sheetId> "Tab!A1:B2" --values-json '[["A","B"],["1","2"]]' --input USER_ENTERED`
- Sheets append: `gog sheets append <sheetId> "Tab!A:C" --values-json '[["x","y","z"]]' --insert INSERT_ROWS`
- Sheets clear: `gog sheets clear <sheetId> "Tab!A2:Z"`
- Sheets metadata: `gog sheets metadata <sheetId> --json`
- Docs export: `gog docs export <docId> --format txt --out /tmp/doc.txt`
- Docs cat: `gog docs cat <docId>`
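For the Sheets commands, the `--values-json` payload can be composed from shell variables; a minimal sketch with made-up values (fine for simple values without embedded quotes; use a real JSON tool for anything richer):

```shell
# Build a one-row payload for --values-json from shell variables.
# The cell values here are made up for illustration.
when="2024-06-01"; label="invoice"; amount="42.50"
values=$(printf '[["%s","%s","%s"]]' "$when" "$label" "$amount")
echo "$values"
# then: gog sheets append <sheetId> "Tab!A:C" --values-json "$values" --insert INSERT_ROWS
```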

Calendar Colors
- Use `gog calendar colors` to see all available event colors (IDs 1-11)
- Add colors to events with the `--event-color <id>` flag
- Event color IDs (from `gog calendar colors` output):
  - 1: #a4bdfc
  - 2: #7ae7bf
  - 3: #dbadff
  - 4: #ff887c
  - 5: #fbd75b
  - 6: #ffb878
  - 7: #46d6db
  - 8: #e1e1e1
  - 9: #5484ed
  - 10: #51b749
  - 11: #dc2127

Email Formatting
- Prefer plain text. Use `--body-file` for multi-paragraph messages (or `--body-file -` for stdin).
- The same `--body-file` pattern works for drafts and replies.
- `--body` does not unescape `\n`. If you need inline newlines, use a heredoc or `$'Line 1\n\nLine 2'`.
- Use `--body-html` only when you need rich formatting.
- HTML tags: `<p>` for paragraphs, `<br>` for line breaks, `<strong>` for bold, `<em>` for italic, `<a href="url">` for links, `<ul>`/`<li>` for lists.
- Example (plain text via stdin):
```bash
gog gmail send --to recipient@example.com \
  --subject "Meeting Follow-up" \
  --body-file - <<'EOF'
Hi Name,

Thanks for meeting today. Next steps:
- Item one
- Item two

Best regards,
Your Name
EOF
```
- Example (HTML list):
```bash
gog gmail send --to recipient@example.com \
  --subject "Meeting Follow-up" \
  --body-html "<p>Hi Name,</p><p>Thanks for meeting today. Here are the next steps:</p><ul><li>Item one</li><li>Item two</li></ul><p>Best regards,<br>Your Name</p>"
```

Notes
- Set `GOG_ACCOUNT=you@gmail.com` to avoid repeating `--account`.
- For scripting, prefer `--json` plus `--no-input`.
- Sheets values can be passed via `--values-json` (recommended) or as inline rows.
- Docs supports export/cat/copy. In-place edits require a Docs API client (not in gog).
- Confirm before sending mail or creating events.

@@ -1,30 +0,0 @@
---
name: goplaces
description: Query Google Places API (New) via the goplaces CLI for text search, place details, resolve, and reviews. Use for human-friendly place lookup or JSON output for scripts.
homepage: https://github.com/steipete/goplaces
metadata: {"clawdbot":{"emoji":"📍","requires":{"bins":["goplaces"],"env":["GOOGLE_PLACES_API_KEY"]},"primaryEnv":"GOOGLE_PLACES_API_KEY","install":[{"id":"brew","kind":"brew","formula":"steipete/tap/goplaces","bins":["goplaces"],"label":"Install goplaces (brew)"}]}}
---

# goplaces

Modern Google Places API (New) CLI. Human output by default, `--json` for scripts.

Install
- Homebrew: `brew install steipete/tap/goplaces`

Config
- `GOOGLE_PLACES_API_KEY` required.
- Optional: `GOOGLE_PLACES_BASE_URL` for testing/proxying.

Common commands
- Search: `goplaces search "coffee" --open-now --min-rating 4 --limit 5`
- Bias: `goplaces search "pizza" --lat 40.8 --lng -73.9 --radius-m 3000`
- Pagination: `goplaces search "pizza" --page-token "NEXT_PAGE_TOKEN"`
- Resolve: `goplaces resolve "Soho, London" --limit 5`
- Details: `goplaces details <place_id> --reviews`
- JSON: `goplaces search "sushi" --json`
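Since the key is required, scripts can fail fast with a clear message instead of surfacing an opaque API error; a minimal sketch (`require_places_key` is illustrative, not part of goplaces):

```shell
# Fail fast with a clear message when the required key is missing.
require_places_key() {
  [ -n "${GOOGLE_PLACES_API_KEY:-}" ] || {
    echo "GOOGLE_PLACES_API_KEY is not set" >&2
    return 1
  }
}

if require_places_key; then
  echo "key present: goplaces calls may proceed"
else
  echo "key missing: set GOOGLE_PLACES_API_KEY first"
fi
```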

Notes
- `--no-color` or `NO_COLOR` disables ANSI color.
- Price levels: 0..4 (free → very expensive).
- Type filter sends only the first `--type` value (API accepts one).

@@ -1,217 +0,0 @@
---
name: himalaya
description: "CLI to manage emails via IMAP/SMTP. Use `himalaya` to list, read, write, reply, forward, search, and organize emails from the terminal. Supports multiple accounts and message composition with MML (MIME Meta Language)."
homepage: https://github.com/pimalaya/himalaya
metadata: {"clawdbot":{"emoji":"📧","requires":{"bins":["himalaya"]},"install":[{"id":"brew","kind":"brew","formula":"himalaya","bins":["himalaya"],"label":"Install Himalaya (brew)"}]}}
---

# Himalaya Email CLI

Himalaya is a CLI email client that lets you manage emails from the terminal using IMAP, SMTP, Notmuch, or Sendmail backends.

## References

- `references/configuration.md` (config file setup + IMAP/SMTP authentication)
- `references/message-composition.md` (MML syntax for composing emails)

## Prerequisites

1. Himalaya CLI installed (`himalaya --version` to verify)
2. A configuration file at `~/.config/himalaya/config.toml`
3. IMAP/SMTP credentials configured (password stored securely)

## Configuration Setup

Run the interactive wizard to set up an account:
```bash
himalaya account configure
```

Or create `~/.config/himalaya/config.toml` manually:
```toml
[accounts.personal]
email = "you@example.com"
display-name = "Your Name"
default = true

backend.type = "imap"
backend.host = "imap.example.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@example.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show email/imap" # or use keyring

message.send.backend.type = "smtp"
message.send.backend.host = "smtp.example.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@example.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show email/smtp"
```

## Common Operations

### List Folders

```bash
himalaya folder list
```

### List Emails

List emails in INBOX (default):
```bash
himalaya envelope list
```

List emails in a specific folder:
```bash
himalaya envelope list --folder "Sent"
```

List with pagination:
```bash
himalaya envelope list --page 1 --page-size 20
```

### Search Emails

```bash
himalaya envelope list from john@example.com subject meeting
```

### Read an Email

Read email by ID (shows plain text):
```bash
himalaya message read 42
```

Export raw MIME:
```bash
himalaya message export 42 --full
```

### Reply to an Email

Interactive reply (opens $EDITOR):
```bash
himalaya message reply 42
```

Reply-all:
```bash
himalaya message reply 42 --all
```

### Forward an Email

```bash
himalaya message forward 42
```

### Write a New Email

Interactive compose (opens $EDITOR):
```bash
himalaya message write
```

Send directly using a template:
```bash
cat << 'EOF' | himalaya template send
From: you@example.com
To: recipient@example.com
Subject: Test Message

Hello from Himalaya!
EOF
```

Or with the headers flag:
```bash
himalaya message write -H "To:recipient@example.com" -H "Subject:Test" "Message body here"
```

### Move/Copy Emails

Move to a folder:
```bash
himalaya message move 42 "Archive"
```

Copy to a folder:
```bash
himalaya message copy 42 "Important"
```

### Delete an Email

```bash
himalaya message delete 42
```

### Manage Flags

Add a flag:
```bash
himalaya flag add 42 --flag seen
```

Remove a flag:
```bash
himalaya flag remove 42 --flag seen
```

## Multiple Accounts

List accounts:
```bash
himalaya account list
```

Use a specific account:
```bash
himalaya --account work envelope list
```

## Attachments

Save attachments from a message:
```bash
himalaya attachment download 42
```

Save to a specific directory:
```bash
himalaya attachment download 42 --dir ~/Downloads
```

## Output Formats

Most commands support `--output` for structured output:
```bash
himalaya envelope list --output json
himalaya envelope list --output plain
```

## Debugging

Enable debug logging:
```bash
RUST_LOG=debug himalaya envelope list
```

Full trace with backtrace:
```bash
RUST_LOG=trace RUST_BACKTRACE=1 himalaya envelope list
```

## Tips

- Use `himalaya --help` or `himalaya <command> --help` for detailed usage.
- Message IDs are relative to the current folder; re-list after folder changes.
- For composing rich emails with attachments, use MML syntax (see `references/message-composition.md`).
- Store passwords securely using `pass`, the system keyring, or a command that outputs the password.

@@ -1,174 +0,0 @@
# Himalaya Configuration Reference

Configuration file location: `~/.config/himalaya/config.toml`

## Minimal IMAP + SMTP Setup

```toml
[accounts.default]
email = "user@example.com"
display-name = "Your Name"
default = true

# IMAP backend for reading emails
backend.type = "imap"
backend.host = "imap.example.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "user@example.com"
backend.auth.type = "password"
backend.auth.raw = "your-password"

# SMTP backend for sending emails
message.send.backend.type = "smtp"
message.send.backend.host = "smtp.example.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "user@example.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.raw = "your-password"
```

## Password Options

### Raw password (testing only, not recommended)
```toml
backend.auth.raw = "your-password"
```

### Password from command (recommended)
```toml
backend.auth.cmd = "pass show email/imap"
# backend.auth.cmd = "security find-generic-password -a user@example.com -s imap -w"
```

### System keyring (requires keyring feature)
```toml
backend.auth.keyring = "imap-example"
```
Then run `himalaya account configure <account>` to store the password.

## Gmail Configuration

```toml
[accounts.gmail]
email = "you@gmail.com"
display-name = "Your Name"
default = true

backend.type = "imap"
backend.host = "imap.gmail.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@gmail.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show google/app-password"

message.send.backend.type = "smtp"
message.send.backend.host = "smtp.gmail.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@gmail.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show google/app-password"
```

**Note:** Gmail requires an App Password if 2FA is enabled.

## iCloud Configuration

```toml
[accounts.icloud]
email = "you@icloud.com"
display-name = "Your Name"

backend.type = "imap"
backend.host = "imap.mail.me.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@icloud.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show icloud/app-password"

message.send.backend.type = "smtp"
message.send.backend.host = "smtp.mail.me.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@icloud.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show icloud/app-password"
```

**Note:** Generate an app-specific password at appleid.apple.com

## Folder Aliases

Map custom folder names:
```toml
[accounts.default.folder.alias]
inbox = "INBOX"
sent = "Sent"
drafts = "Drafts"
trash = "Trash"
```

## Multiple Accounts

```toml
[accounts.personal]
email = "personal@example.com"
default = true
# ... backend config ...

[accounts.work]
email = "work@company.com"
# ... backend config ...
```

Switch accounts with `--account`:
```bash
himalaya --account work envelope list
```

## Notmuch Backend (local mail)

```toml
[accounts.local]
email = "user@example.com"

backend.type = "notmuch"
backend.db-path = "~/.mail/.notmuch"
```

## OAuth2 Authentication (for providers that support it)

```toml
backend.auth.type = "oauth2"
backend.auth.client-id = "your-client-id"
backend.auth.client-secret.cmd = "pass show oauth/client-secret"
backend.auth.access-token.cmd = "pass show oauth/access-token"
backend.auth.refresh-token.cmd = "pass show oauth/refresh-token"
backend.auth.auth-url = "https://provider.com/oauth/authorize"
backend.auth.token-url = "https://provider.com/oauth/token"
```

## Additional Options

### Signature
```toml
[accounts.default]
signature = "Best regards,\nYour Name"
signature-delim = "-- \n"
```

### Downloads directory
```toml
[accounts.default]
downloads-dir = "~/Downloads/himalaya"
```

### Editor for composing
Set via environment variable:
```bash
export EDITOR="vim"
```

@@ -1,182 +0,0 @@
# Message Composition with MML (MIME Meta Language)

Himalaya uses MML for composing emails. MML is a simple XML-based syntax that compiles to MIME messages.

## Basic Message Structure

An email message is a list of **headers** followed by a **body**, separated by a blank line:

```
From: sender@example.com
To: recipient@example.com
Subject: Hello World

This is the message body.
```

## Headers

Common headers:
- `From`: Sender address
- `To`: Primary recipient(s)
- `Cc`: Carbon copy recipients
- `Bcc`: Blind carbon copy recipients
- `Subject`: Message subject
- `Reply-To`: Address for replies (if different from From)
- `In-Reply-To`: Message ID being replied to

### Address Formats

```
To: user@example.com
To: John Doe <john@example.com>
To: "John Doe" <john@example.com>
To: user1@example.com, user2@example.com, "Jane" <jane@example.com>
```

## Plain Text Body

Simple plain text email:
```
From: alice@localhost
To: bob@localhost
Subject: Plain Text Example

Hello, this is a plain text email.
No special formatting needed.

Best,
Alice
```

## MML for Rich Emails

### Multipart Messages

Alternative text/html parts:
```
From: alice@localhost
To: bob@localhost
Subject: Multipart Example

<#multipart type=alternative>
This is the plain text version.
<#part type=text/html>
<html><body><h1>This is the HTML version</h1></body></html>
<#/multipart>
```

### Attachments

Attach a file:
```
From: alice@localhost
To: bob@localhost
Subject: With Attachment

Here is the document you requested.

<#part filename=/path/to/document.pdf><#/part>
```

Attachment with a custom name:
```
<#part filename=/path/to/file.pdf name=report.pdf><#/part>
```

Multiple attachments:
```
<#part filename=/path/to/doc1.pdf><#/part>
<#part filename=/path/to/doc2.pdf><#/part>
```

### Inline Images

Embed an image inline:
```
From: alice@localhost
To: bob@localhost
Subject: Inline Image

<#multipart type=related>
<#part type=text/html>
<html><body>
<p>Check out this image:</p>
<img src="cid:image1">
</body></html>
<#part disposition=inline id=image1 filename=/path/to/image.png><#/part>
<#/multipart>
```

### Mixed Content (Text + Attachments)

```
From: alice@localhost
To: bob@localhost
Subject: Mixed Content

<#multipart type=mixed>
<#part type=text/plain>
Please find the attached files.

Best,
Alice
<#part filename=/path/to/file1.pdf><#/part>
<#part filename=/path/to/file2.zip><#/part>
<#/multipart>
```

## MML Tag Reference

### `<#multipart>`
Groups multiple parts together.
- `type=alternative`: Different representations of the same content
- `type=mixed`: Independent parts (text + attachments)
- `type=related`: Parts that reference each other (HTML + images)

### `<#part>`
Defines a message part.
- `type=<mime-type>`: Content type (e.g., `text/html`, `application/pdf`)
- `filename=<path>`: File to attach
- `name=<name>`: Display name for the attachment
- `disposition=inline`: Display inline instead of as an attachment
- `id=<cid>`: Content ID for referencing in HTML

## Composing from CLI

### Interactive compose
Opens your `$EDITOR`:
```bash
himalaya message write
```

### Reply (opens editor with quoted message)
```bash
himalaya message reply 42
himalaya message reply 42 --all  # reply-all
```

### Forward
```bash
himalaya message forward 42
```

### Send from stdin
```bash
cat message.txt | himalaya template send
```
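Combining the two ideas: write an MML message to a file, then pipe it to `himalaya template send`. The addresses and attachment path below are placeholders:

```shell
# Compose an MML message with one attachment into a temp file.
# Addresses and the attachment path are placeholders.
msg=$(mktemp)
cat > "$msg" <<'EOF'
From: alice@localhost
To: bob@localhost
Subject: Mixed Content

<#multipart type=mixed>
<#part type=text/plain>
Please find the attached file.
<#part filename=/tmp/report.pdf><#/part>
<#/multipart>
EOF
# then: himalaya template send < "$msg"
grep -c '<#part' "$msg"
```

Keeping the message in a file makes it easy to review before the actual send.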

### Prefill headers from CLI
```bash
himalaya message write \
  -H "To:recipient@example.com" \
  -H "Subject:Quick Message" \
  "Message body here"
```

## Tips

- The editor opens with a template; fill in headers and body.
- Save and exit the editor to send; exit without saving to cancel.
- MML parts are compiled to proper MIME when sending.
- Use `himalaya message export --full` to inspect the raw MIME structure of received emails.

@@ -1,25 +0,0 @@
---
name: imsg
description: iMessage/SMS CLI for listing chats, history, watch, and sending.
homepage: https://imsg.to
metadata: {"clawdbot":{"emoji":"📨","os":["darwin"],"requires":{"bins":["imsg"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/imsg","bins":["imsg"],"label":"Install imsg (brew)"}]}}
---

# imsg

Use `imsg` to read and send Messages.app iMessage/SMS on macOS.

Requirements
- Messages.app signed in
- Full Disk Access for your terminal
- Automation permission to control Messages.app (for sending)

Common commands
- List chats: `imsg chats --limit 10 --json`
- History: `imsg history --chat-id 1 --limit 20 --attachments --json`
- Watch: `imsg watch --chat-id 1 --attachments`
- Send: `imsg send --to "+14155551212" --text "hi" --file /path/pic.jpg`
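Before sending, a quick shape check on the recipient can catch obvious typos; a loose shell approximation of an E.164 number (illustrative only, not full validation):

```shell
# Loose E.164 shape check: a leading + followed by digits only.
valid_number() {
  case "$1" in
    +[0-9][0-9]*)
      # reject any non-digit after the +
      case "${1#+}" in
        *[!0-9]*) return 1 ;;
      esac
      return 0 ;;
    *) return 1 ;;
  esac
}

if valid_number "+14155551212"; then
  echo 'imsg send --to "+14155551212" --text "hi"'
fi
```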

Notes
- `--service imessage|sms|auto` controls delivery.
- Confirm recipient + message before sending.

@@ -1,101 +0,0 @@
# Local Places

This repo is a fusion of two pieces:

- A FastAPI server that exposes endpoints for searching and resolving places via the Google Maps Places API.
- A companion agent skill that explains how to use the API and can call it to find places efficiently.

Together, the skill and server let an agent turn natural-language place queries into structured results quickly.

## Run locally

```bash
# copy the skill definition into the relevant folder (where the agent looks for it)
# then run the server

uv venv
uv pip install -e ".[dev]"
uv run --env-file .env uvicorn local_places.main:app --host 0.0.0.0 --reload
```

Open the API docs at http://127.0.0.1:8000/docs.

## Places API

Set the Google Places API key before running:

```bash
export GOOGLE_PLACES_API_KEY="your-key"
```

Endpoints:

- `POST /places/search` (free-text query + filters)
- `GET /places/{place_id}` (place details)
- `POST /locations/resolve` (resolve a user-provided location string)

Example search request:

```json
{
  "query": "italian restaurant",
  "filters": {
    "types": ["restaurant"],
    "open_now": true,
    "min_rating": 4.0,
    "price_levels": [1, 2]
  },
  "limit": 10
}
```

Notes:

- `filters.types` supports a single type (mapped to Google `includedType`).

Example search request (curl):

```bash
curl -X POST http://127.0.0.1:8000/places/search \
  -H "Content-Type: application/json" \
  -d '{
    "query": "italian restaurant",
    "location_bias": {
      "lat": 40.8065,
      "lng": -73.9719,
      "radius_m": 3000
    },
    "filters": {
      "types": ["restaurant"],
      "open_now": true,
      "min_rating": 4.0,
      "price_levels": [1, 2, 3]
    },
    "limit": 10
  }'
```

Example resolve request (curl):

```bash
curl -X POST http://127.0.0.1:8000/locations/resolve \
  -H "Content-Type: application/json" \
  -d '{
    "location_text": "Riverside Park, New York",
    "limit": 5
  }'
```

## Test

```bash
uv run pytest
```

## OpenAPI

Generate the OpenAPI schema:

```bash
uv run python scripts/generate_openapi.py
```

@ -1,91 +0,0 @@
|
|||
---
name: local-places
description: Search for places (restaurants, cafes, etc.) via Google Places API proxy on localhost.
homepage: https://github.com/Hyaxia/local_places
metadata: {"clawdbot":{"emoji":"📍","requires":{"bins":["uv"],"env":["GOOGLE_PLACES_API_KEY"]},"primaryEnv":"GOOGLE_PLACES_API_KEY"}}
---

# 📍 Local Places

*Find places, go fast*

Search for nearby places using a local Google Places API proxy. Two-step flow: resolve the location first, then search.

## Setup

```bash
cd {baseDir}
echo "GOOGLE_PLACES_API_KEY=your-key" > .env
uv venv && uv pip install -e ".[dev]"
uv run --env-file .env uvicorn local_places.main:app --host 127.0.0.1 --port 8000
```

Requires `GOOGLE_PLACES_API_KEY` in `.env` or the environment.

## Quick Start

1. **Check server:** `curl http://127.0.0.1:8000/ping`

2. **Resolve location:**
   ```bash
   curl -X POST http://127.0.0.1:8000/locations/resolve \
     -H "Content-Type: application/json" \
     -d '{"location_text": "Soho, London", "limit": 5}'
   ```

3. **Search places:**
   ```bash
   curl -X POST http://127.0.0.1:8000/places/search \
     -H "Content-Type: application/json" \
     -d '{
       "query": "coffee shop",
       "location_bias": {"lat": 51.5137, "lng": -0.1366, "radius_m": 1000},
       "filters": {"open_now": true, "min_rating": 4.0},
       "limit": 10
     }'
   ```

4. **Get details:**
   ```bash
   curl http://127.0.0.1:8000/places/{place_id}
   ```

## Conversation Flow

1. If the user says "near me" or gives a vague location → resolve it first
2. If multiple results → show a numbered list, ask the user to pick
3. Ask for preferences: type, open now, rating, price level
4. Search with `location_bias` from the chosen location
5. Present results with name, rating, address, open status
6. Offer to fetch details or refine the search

## Filter Constraints

- `filters.types`: exactly ONE type (e.g., "restaurant", "cafe", "gym")
- `filters.price_levels`: integers 0-4 (0=free, 4=very expensive)
- `filters.min_rating`: 0-5 in 0.5 increments
- `filters.open_now`: boolean
- `limit`: 1-20 for search, 1-10 for resolve
- `location_bias.radius_m`: must be > 0
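As a pre-flight sanity check, the constraints above can be mirrored client-side. A rough sketch — the server already enforces these rules via its request schemas, and `validate_filters` is a hypothetical helper:

```python
def validate_filters(filters: dict) -> list[str]:
    """Check a filters dict against the documented constraints.

    Returns a list of problems; an empty list means valid.
    """
    problems = []
    types = filters.get("types")
    if types is not None and len(types) != 1:
        problems.append("types must contain exactly one type")
    for level in filters.get("price_levels") or []:
        if level not in range(5):  # integers 0-4 only
            problems.append(f"price_level {level} outside 0-4")
    rating = filters.get("min_rating")
    if rating is not None and (rating < 0 or rating > 5 or (rating * 2) % 1 != 0):
        problems.append("min_rating must be 0-5 in 0.5 steps")
    return problems
```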

## Response Format

```json
{
  "results": [
    {
      "place_id": "ChIJ...",
      "name": "Coffee Shop",
      "address": "123 Main St",
      "location": {"lat": 51.5, "lng": -0.1},
      "rating": 4.6,
      "price_level": 2,
      "types": ["cafe", "food"],
      "open_now": true
    }
  ],
  "next_page_token": "..."
}
```

Use `next_page_token` as `page_token` in the next request for more results.
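That token handoff can be wrapped in a small loop. A sketch with the HTTP transport abstracted behind a `fetch` callable (an assumed interface, so the loop stays client-agnostic):

```python
def iter_all_results(fetch, payload):
    """Yield results across pages by threading next_page_token -> page_token.

    `fetch` is any callable that POSTs the payload to /places/search and
    returns the decoded JSON response (illustrative abstraction).
    """
    while True:
        response = fetch(payload)
        yield from response.get("results", [])
        token = response.get("next_page_token")
        if not token:
            break
        # Re-send the same request with the continuation token.
        payload = {**payload, "page_token": token}
```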

@ -1,27 +0,0 @@

[project]
name = "my-api"
version = "0.1.0"
description = "FastAPI server"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
    "fastapi>=0.110.0",
    "httpx>=0.27.0",
    "uvicorn[standard]>=0.29.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.0.0",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.wheel]
packages = ["src/local_places"]

[tool.pytest.ini_options]
addopts = "-q"
testpaths = ["tests"]

@ -1,2 +0,0 @@

__all__ = ["__version__"]
__version__ = "0.1.0"

@ -1,314 +0,0 @@

from __future__ import annotations

import logging
import os
from typing import Any

import httpx
from fastapi import HTTPException

from local_places.schemas import (
    LatLng,
    LocationResolveRequest,
    LocationResolveResponse,
    PlaceDetails,
    PlaceSummary,
    ResolvedLocation,
    SearchRequest,
    SearchResponse,
)

GOOGLE_PLACES_BASE_URL = os.getenv(
    "GOOGLE_PLACES_BASE_URL", "https://places.googleapis.com/v1"
)
logger = logging.getLogger("local_places.google_places")

_PRICE_LEVEL_TO_ENUM = {
    0: "PRICE_LEVEL_FREE",
    1: "PRICE_LEVEL_INEXPENSIVE",
    2: "PRICE_LEVEL_MODERATE",
    3: "PRICE_LEVEL_EXPENSIVE",
    4: "PRICE_LEVEL_VERY_EXPENSIVE",
}
_ENUM_TO_PRICE_LEVEL = {value: key for key, value in _PRICE_LEVEL_TO_ENUM.items()}

_SEARCH_FIELD_MASK = (
    "places.id,"
    "places.displayName,"
    "places.formattedAddress,"
    "places.location,"
    "places.rating,"
    "places.priceLevel,"
    "places.types,"
    "places.currentOpeningHours,"
    "nextPageToken"
)

_DETAILS_FIELD_MASK = (
    "id,"
    "displayName,"
    "formattedAddress,"
    "location,"
    "rating,"
    "priceLevel,"
    "types,"
    "regularOpeningHours,"
    "currentOpeningHours,"
    "nationalPhoneNumber,"
    "websiteUri"
)

_RESOLVE_FIELD_MASK = (
    "places.id,"
    "places.displayName,"
    "places.formattedAddress,"
    "places.location,"
    "places.types"
)


class _GoogleResponse:
    def __init__(self, response: httpx.Response):
        self.status_code = response.status_code
        self._response = response

    def json(self) -> dict[str, Any]:
        return self._response.json()

    @property
    def text(self) -> str:
        return self._response.text


def _api_headers(field_mask: str) -> dict[str, str]:
    api_key = os.getenv("GOOGLE_PLACES_API_KEY")
    if not api_key:
        raise HTTPException(
            status_code=500,
            detail="GOOGLE_PLACES_API_KEY is not set.",
        )
    return {
        "Content-Type": "application/json",
        "X-Goog-Api-Key": api_key,
        "X-Goog-FieldMask": field_mask,
    }


def _request(
    method: str, url: str, payload: dict[str, Any] | None, field_mask: str
) -> _GoogleResponse:
    try:
        with httpx.Client(timeout=10.0) as client:
            response = client.request(
                method=method,
                url=url,
                headers=_api_headers(field_mask),
                json=payload,
            )
    except httpx.HTTPError as exc:
        raise HTTPException(status_code=502, detail="Google Places API unavailable.") from exc

    return _GoogleResponse(response)


def _build_text_query(request: SearchRequest) -> str:
    keyword = request.filters.keyword if request.filters else None
    if keyword:
        return f"{request.query} {keyword}".strip()
    return request.query


def _build_search_body(request: SearchRequest) -> dict[str, Any]:
    body: dict[str, Any] = {
        "textQuery": _build_text_query(request),
        "pageSize": request.limit,
    }

    if request.page_token:
        body["pageToken"] = request.page_token

    if request.location_bias:
        body["locationBias"] = {
            "circle": {
                "center": {
                    "latitude": request.location_bias.lat,
                    "longitude": request.location_bias.lng,
                },
                "radius": request.location_bias.radius_m,
            }
        }

    if request.filters:
        filters = request.filters
        if filters.types:
            body["includedType"] = filters.types[0]
        if filters.open_now is not None:
            body["openNow"] = filters.open_now
        if filters.min_rating is not None:
            body["minRating"] = filters.min_rating
        if filters.price_levels:
            body["priceLevels"] = [
                _PRICE_LEVEL_TO_ENUM[level] for level in filters.price_levels
            ]

    return body


def _parse_lat_lng(raw: dict[str, Any] | None) -> LatLng | None:
    if not raw:
        return None
    latitude = raw.get("latitude")
    longitude = raw.get("longitude")
    if latitude is None or longitude is None:
        return None
    return LatLng(lat=latitude, lng=longitude)


def _parse_display_name(raw: dict[str, Any] | None) -> str | None:
    if not raw:
        return None
    return raw.get("text")


def _parse_open_now(raw: dict[str, Any] | None) -> bool | None:
    if not raw:
        return None
    return raw.get("openNow")


def _parse_hours(raw: dict[str, Any] | None) -> list[str] | None:
    if not raw:
        return None
    return raw.get("weekdayDescriptions")


def _parse_price_level(raw: str | None) -> int | None:
    if not raw:
        return None
    return _ENUM_TO_PRICE_LEVEL.get(raw)


def search_places(request: SearchRequest) -> SearchResponse:
    url = f"{GOOGLE_PLACES_BASE_URL}/places:searchText"
    response = _request("POST", url, _build_search_body(request), _SEARCH_FIELD_MASK)

    if response.status_code >= 400:
        logger.error(
            "Google Places API error %s. response=%s",
            response.status_code,
            response.text,
        )
        raise HTTPException(
            status_code=502,
            detail=f"Google Places API error ({response.status_code}).",
        )

    try:
        payload = response.json()
    except ValueError as exc:
        logger.error(
            "Google Places API returned invalid JSON. response=%s",
            response.text,
        )
        raise HTTPException(status_code=502, detail="Invalid Google response.") from exc

    places = payload.get("places", [])
    results = []
    for place in places:
        results.append(
            PlaceSummary(
                place_id=place.get("id", ""),
                name=_parse_display_name(place.get("displayName")),
                address=place.get("formattedAddress"),
                location=_parse_lat_lng(place.get("location")),
                rating=place.get("rating"),
                price_level=_parse_price_level(place.get("priceLevel")),
                types=place.get("types"),
                open_now=_parse_open_now(place.get("currentOpeningHours")),
            )
        )

    return SearchResponse(
        results=results,
        next_page_token=payload.get("nextPageToken"),
    )


def get_place_details(place_id: str) -> PlaceDetails:
    url = f"{GOOGLE_PLACES_BASE_URL}/places/{place_id}"
    response = _request("GET", url, None, _DETAILS_FIELD_MASK)

    if response.status_code >= 400:
        logger.error(
            "Google Places API error %s. response=%s",
            response.status_code,
            response.text,
        )
        raise HTTPException(
            status_code=502,
            detail=f"Google Places API error ({response.status_code}).",
        )

    try:
        payload = response.json()
    except ValueError as exc:
        logger.error(
            "Google Places API returned invalid JSON. response=%s",
            response.text,
        )
        raise HTTPException(status_code=502, detail="Invalid Google response.") from exc

    return PlaceDetails(
        place_id=payload.get("id", place_id),
        name=_parse_display_name(payload.get("displayName")),
        address=payload.get("formattedAddress"),
        location=_parse_lat_lng(payload.get("location")),
        rating=payload.get("rating"),
        price_level=_parse_price_level(payload.get("priceLevel")),
        types=payload.get("types"),
        phone=payload.get("nationalPhoneNumber"),
        website=payload.get("websiteUri"),
        hours=_parse_hours(payload.get("regularOpeningHours")),
        open_now=_parse_open_now(payload.get("currentOpeningHours")),
    )


def resolve_locations(request: LocationResolveRequest) -> LocationResolveResponse:
    url = f"{GOOGLE_PLACES_BASE_URL}/places:searchText"
    body = {"textQuery": request.location_text, "pageSize": request.limit}
    response = _request("POST", url, body, _RESOLVE_FIELD_MASK)

    if response.status_code >= 400:
        logger.error(
            "Google Places API error %s. response=%s",
            response.status_code,
            response.text,
        )
        raise HTTPException(
            status_code=502,
            detail=f"Google Places API error ({response.status_code}).",
        )

    try:
        payload = response.json()
    except ValueError as exc:
        logger.error(
            "Google Places API returned invalid JSON. response=%s",
            response.text,
        )
        raise HTTPException(status_code=502, detail="Invalid Google response.") from exc

    places = payload.get("places", [])
    results = []
    for place in places:
        results.append(
            ResolvedLocation(
                place_id=place.get("id", ""),
                name=_parse_display_name(place.get("displayName")),
                address=place.get("formattedAddress"),
                location=_parse_lat_lng(place.get("location")),
                types=place.get("types"),
            )
        )

    return LocationResolveResponse(results=results)

@ -1,65 +0,0 @@

import logging
import os

from fastapi import FastAPI, Request
from fastapi.encoders import jsonable_encoder
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse

from local_places.google_places import get_place_details, resolve_locations, search_places
from local_places.schemas import (
    LocationResolveRequest,
    LocationResolveResponse,
    PlaceDetails,
    SearchRequest,
    SearchResponse,
)

app = FastAPI(
    title="My API",
    servers=[{"url": os.getenv("OPENAPI_SERVER_URL", "http://maxims-macbook-air:8000")}],
)
logger = logging.getLogger("local_places.validation")


@app.get("/ping")
def ping() -> dict[str, str]:
    return {"message": "pong"}


@app.exception_handler(RequestValidationError)
async def validation_exception_handler(
    request: Request, exc: RequestValidationError
) -> JSONResponse:
    logger.error(
        "Validation error on %s %s. body=%s errors=%s",
        request.method,
        request.url.path,
        exc.body,
        exc.errors(),
    )
    return JSONResponse(
        status_code=422,
        content=jsonable_encoder({"detail": exc.errors()}),
    )


@app.post("/places/search", response_model=SearchResponse)
def places_search(request: SearchRequest) -> SearchResponse:
    return search_places(request)


@app.get("/places/{place_id}", response_model=PlaceDetails)
def places_details(place_id: str) -> PlaceDetails:
    return get_place_details(place_id)


@app.post("/locations/resolve", response_model=LocationResolveResponse)
def locations_resolve(request: LocationResolveRequest) -> LocationResolveResponse:
    return resolve_locations(request)


if __name__ == "__main__":
    import uvicorn

    uvicorn.run("local_places.main:app", host="0.0.0.0", port=8000)

@ -1,107 +0,0 @@

from __future__ import annotations

from pydantic import BaseModel, Field, field_validator


class LatLng(BaseModel):
    lat: float = Field(ge=-90, le=90)
    lng: float = Field(ge=-180, le=180)


class LocationBias(BaseModel):
    lat: float = Field(ge=-90, le=90)
    lng: float = Field(ge=-180, le=180)
    radius_m: float = Field(gt=0)


class Filters(BaseModel):
    types: list[str] | None = None
    open_now: bool | None = None
    min_rating: float | None = Field(default=None, ge=0, le=5)
    price_levels: list[int] | None = None
    keyword: str | None = Field(default=None, min_length=1)

    @field_validator("types")
    @classmethod
    def validate_types(cls, value: list[str] | None) -> list[str] | None:
        if value is None:
            return value
        if len(value) > 1:
            raise ValueError(
                "Only one type is supported. Use query/keyword for additional filtering."
            )
        return value

    @field_validator("price_levels")
    @classmethod
    def validate_price_levels(cls, value: list[int] | None) -> list[int] | None:
        if value is None:
            return value
        invalid = [level for level in value if level not in range(0, 5)]
        if invalid:
            raise ValueError("price_levels must be integers between 0 and 4.")
        return value

    @field_validator("min_rating")
    @classmethod
    def validate_min_rating(cls, value: float | None) -> float | None:
        if value is None:
            return value
        if (value * 2) % 1 != 0:
            raise ValueError("min_rating must be in 0.5 increments.")
        return value


class SearchRequest(BaseModel):
    query: str = Field(min_length=1)
    location_bias: LocationBias | None = None
    filters: Filters | None = None
    limit: int = Field(default=10, ge=1, le=20)
    page_token: str | None = None


class PlaceSummary(BaseModel):
    place_id: str
    name: str | None = None
    address: str | None = None
    location: LatLng | None = None
    rating: float | None = None
    price_level: int | None = None
    types: list[str] | None = None
    open_now: bool | None = None


class SearchResponse(BaseModel):
    results: list[PlaceSummary]
    next_page_token: str | None = None


class LocationResolveRequest(BaseModel):
    location_text: str = Field(min_length=1)
    limit: int = Field(default=5, ge=1, le=10)


class ResolvedLocation(BaseModel):
    place_id: str
    name: str | None = None
    address: str | None = None
    location: LatLng | None = None
    types: list[str] | None = None


class LocationResolveResponse(BaseModel):
    results: list[ResolvedLocation]


class PlaceDetails(BaseModel):
    place_id: str
    name: str | None = None
    address: str | None = None
    location: LatLng | None = None
    rating: float | None = None
    price_level: int | None = None
    types: list[str] | None = None
    phone: str | None = None
    website: str | None = None
    hours: list[str] | None = None
    open_now: bool | None = None

@ -1,38 +0,0 @@

---
name: mcporter
description: Use the mcporter CLI to list, configure, auth, and call MCP servers/tools directly (HTTP or stdio), including ad-hoc servers, config edits, and CLI/type generation.
homepage: http://mcporter.dev
metadata: {"clawdbot":{"emoji":"📦","requires":{"bins":["mcporter"]},"install":[{"id":"node","kind":"node","package":"mcporter","bins":["mcporter"],"label":"Install mcporter (node)"}]}}
---

# mcporter

Use `mcporter` to work with MCP servers directly.

Quick start
- `mcporter list`
- `mcporter list <server> --schema`
- `mcporter call <server.tool> key=value`

Call tools
- Selector: `mcporter call linear.list_issues team=ENG limit:5`
- Function syntax: `mcporter call "linear.create_issue(title: \"Bug\")"`
- Full URL: `mcporter call https://api.example.com/mcp.fetch url:https://example.com`
- Stdio: `mcporter call --stdio "bun run ./server.ts" scrape url=https://example.com`
- JSON payload: `mcporter call <server.tool> --args '{"limit":5}'`

Auth + config
- OAuth: `mcporter auth <server | url> [--reset]`
- Config: `mcporter config list|get|add|remove|import|login|logout`

Daemon
- `mcporter daemon start|status|stop|restart`

Codegen
- CLI: `mcporter generate-cli --server <name>` or `--command <url>`
- Inspect: `mcporter inspect-cli <path> [--json]`
- TS: `mcporter emit-ts <server> --mode client|types`

Notes
- Config default: `./config/mcporter.json` (override with `--config`).
- Prefer `--output json` for machine-readable results.

@ -1,45 +0,0 @@

---
name: model-usage
description: Use CodexBar CLI local cost usage to summarize per-model usage for Codex or Claude, including the current (most recent) model or a full model breakdown. Trigger when asked for model-level usage/cost data from codexbar, or when you need a scriptable per-model summary from codexbar cost JSON.
metadata: {"clawdbot":{"emoji":"📊","os":["darwin"],"requires":{"bins":["codexbar"]},"install":[{"id":"brew-cask","kind":"brew","cask":"steipete/tap/codexbar","bins":["codexbar"],"label":"Install CodexBar (brew cask)"}]}}
---

# Model usage

## Overview
Get per-model usage cost from CodexBar's local cost logs. Supports "current model" (most recent daily entry) or "all models" summaries for Codex or Claude.

TODO: add Linux CLI support guidance once the CodexBar CLI install path is documented for Linux.

## Quick start
1) Fetch cost JSON via the CodexBar CLI or pass a JSON file.
2) Use the bundled script to summarize by model.

```bash
python {baseDir}/scripts/model_usage.py --provider codex --mode current
python {baseDir}/scripts/model_usage.py --provider codex --mode all
python {baseDir}/scripts/model_usage.py --provider claude --mode all --format json --pretty
```

## Current model logic
- Uses the most recent daily row with `modelBreakdowns`.
- Picks the model with the highest cost in that row.
- Falls back to the last entry in `modelsUsed` when breakdowns are missing.
- Override with `--model <name>` when you need a specific model.
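The selection rules above amount to a few lines of Python. A sketch against the `daily[]` row shape (the function name is illustrative — the bundled script implements the real logic):

```python
def pick_current_model(daily_rows):
    """Select the 'current' model per the rules above.

    daily_rows: list of dicts shaped like codexbar's daily[] entries.
    """
    # Walk rows from most recent to oldest by date.
    for row in sorted(daily_rows, key=lambda r: r.get("date") or "", reverse=True):
        breakdowns = row.get("modelBreakdowns") or []
        if breakdowns:
            # Highest-cost model in the most recent row wins.
            best = max(breakdowns, key=lambda b: b.get("cost", 0.0))
            return best.get("modelName")
        models = row.get("modelsUsed") or []
        if models:
            return models[-1]  # fallback when breakdowns are missing
    return None
```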

## Inputs
- Default: runs `codexbar cost --format json --provider <codex|claude>`.
- File or stdin:

```bash
codexbar cost --provider codex --format json > /tmp/cost.json
python {baseDir}/scripts/model_usage.py --input /tmp/cost.json --mode all
cat /tmp/cost.json | python {baseDir}/scripts/model_usage.py --input - --mode current
```

## Output
- Text (default) or JSON (`--format json --pretty`).
- Values are cost-only per model; tokens are not split by model in CodexBar output.

## References
- Read `references/codexbar-cli.md` for CLI flags and cost JSON fields.

@ -1,28 +0,0 @@

# CodexBar CLI quick ref (usage + cost)

## Install
- App: Preferences -> Advanced -> Install CLI
- Repo: ./bin/install-codexbar-cli.sh

## Commands
- Usage snapshot (web/cli sources):
  - codexbar usage --format json --pretty
  - codexbar --provider all --format json
- Local cost usage (Codex + Claude only):
  - codexbar cost --format json --pretty
  - codexbar cost --provider codex|claude --format json

## Cost JSON fields
The payload is an array (one per provider).
- provider, source, updatedAt
- sessionTokens, sessionCostUSD
- last30DaysTokens, last30DaysCostUSD
- daily[]: date, inputTokens, outputTokens, cacheReadTokens, cacheCreationTokens, totalTokens, totalCost, modelsUsed, modelBreakdowns[]
- modelBreakdowns[]: modelName, cost
- totals: totalInputTokens, totalOutputTokens, cacheReadTokens, cacheCreationTokens, totalTokens, totalCost

## Notes
- Cost usage is local-only. It reads JSONL logs under:
  - Codex: ~/.codex/sessions/**/*.jsonl
  - Claude: ~/.config/claude/projects/**/*.jsonl or ~/.claude/projects/**/*.jsonl
- If web usage is required (non-local), use codexbar usage (not cost).
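The `daily[].modelBreakdowns` shape above can be rolled up per model in a few lines. A sketch over made-up sample rows (not real CodexBar output):

```python
def cost_by_model(daily_rows):
    """Sum modelBreakdowns[].cost per modelName across daily[] rows."""
    totals = {}
    for row in daily_rows:
        for item in row.get("modelBreakdowns") or []:
            name = item.get("modelName")
            if isinstance(name, str) and isinstance(item.get("cost"), (int, float)):
                totals[name] = totals.get(name, 0.0) + float(item["cost"])
    return totals
```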

@ -1,310 +0,0 @@

#!/usr/bin/env python3
"""
Summarize CodexBar local cost usage by model.

Defaults to current model (most recent daily entry), or list all models.
"""

from __future__ import annotations

import argparse
import json
import os
import subprocess
import sys
from dataclasses import dataclass
from datetime import date, datetime, timedelta
from typing import Any, Dict, Iterable, List, Optional, Tuple


def eprint(msg: str) -> None:
    print(msg, file=sys.stderr)


def run_codexbar_cost(provider: str) -> List[Dict[str, Any]]:
    cmd = ["codexbar", "cost", "--format", "json", "--provider", provider]
    try:
        output = subprocess.check_output(cmd, text=True)
    except FileNotFoundError:
        raise RuntimeError("codexbar not found on PATH. Install CodexBar CLI first.")
    except subprocess.CalledProcessError as exc:
        raise RuntimeError(f"codexbar cost failed (exit {exc.returncode}).")
    try:
        payload = json.loads(output)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"Failed to parse codexbar JSON output: {exc}")
    if not isinstance(payload, list):
        raise RuntimeError("Expected codexbar cost JSON array.")
    return payload


def load_payload(input_path: Optional[str], provider: str) -> Dict[str, Any]:
    if input_path:
        if input_path == "-":
            raw = sys.stdin.read()
        else:
            with open(input_path, "r", encoding="utf-8") as handle:
                raw = handle.read()
        data = json.loads(raw)
    else:
        data = run_codexbar_cost(provider)

    if isinstance(data, dict):
        return data

    if isinstance(data, list):
        for entry in data:
            if isinstance(entry, dict) and entry.get("provider") == provider:
                return entry
        raise RuntimeError(f"Provider '{provider}' not found in codexbar payload.")

    raise RuntimeError("Unsupported JSON input format.")


@dataclass
class ModelCost:
    model: str
    cost: float


def parse_daily_entries(payload: Dict[str, Any]) -> List[Dict[str, Any]]:
    daily = payload.get("daily")
    if not daily:
        return []
    if not isinstance(daily, list):
        return []
    return [entry for entry in daily if isinstance(entry, dict)]


def parse_date(value: str) -> Optional[date]:
    try:
        return datetime.strptime(value, "%Y-%m-%d").date()
    except Exception:
        return None


def filter_by_days(entries: List[Dict[str, Any]], days: Optional[int]) -> List[Dict[str, Any]]:
    if not days:
        return entries
    cutoff = date.today() - timedelta(days=days - 1)
    filtered: List[Dict[str, Any]] = []
    for entry in entries:
        day = entry.get("date")
        if not isinstance(day, str):
            continue
        parsed = parse_date(day)
        if parsed and parsed >= cutoff:
            filtered.append(entry)
    return filtered


def aggregate_costs(entries: Iterable[Dict[str, Any]]) -> Dict[str, float]:
    totals: Dict[str, float] = {}
    for entry in entries:
        breakdowns = entry.get("modelBreakdowns")
        if not breakdowns:
            continue
        if not isinstance(breakdowns, list):
            continue
        for item in breakdowns:
            if not isinstance(item, dict):
                continue
            model = item.get("modelName")
            cost = item.get("cost")
            if not isinstance(model, str):
                continue
            if not isinstance(cost, (int, float)):
                continue
            totals[model] = totals.get(model, 0.0) + float(cost)
    return totals


def pick_current_model(entries: List[Dict[str, Any]]) -> Tuple[Optional[str], Optional[str]]:
    if not entries:
        return None, None
    sorted_entries = sorted(
        entries,
        key=lambda entry: entry.get("date") or "",
    )
    for entry in reversed(sorted_entries):
        breakdowns = entry.get("modelBreakdowns")
        if isinstance(breakdowns, list) and breakdowns:
            scored: List[ModelCost] = []
            for item in breakdowns:
                if not isinstance(item, dict):
                    continue
                model = item.get("modelName")
                cost = item.get("cost")
                if isinstance(model, str) and isinstance(cost, (int, float)):
                    scored.append(ModelCost(model=model, cost=float(cost)))
            if scored:
                scored.sort(key=lambda item: item.cost, reverse=True)
                return scored[0].model, entry.get("date") if isinstance(entry.get("date"), str) else None
        models_used = entry.get("modelsUsed")
        if isinstance(models_used, list) and models_used:
            last = models_used[-1]
            if isinstance(last, str):
                return last, entry.get("date") if isinstance(entry.get("date"), str) else None
    return None, None


def usd(value: Optional[float]) -> str:
    if value is None:
        return "—"
    return f"${value:,.2f}"


def latest_day_cost(entries: List[Dict[str, Any]], model: str) -> Tuple[Optional[str], Optional[float]]:
    if not entries:
        return None, None
    sorted_entries = sorted(
        entries,
        key=lambda entry: entry.get("date") or "",
    )
    for entry in reversed(sorted_entries):
        breakdowns = entry.get("modelBreakdowns")
        if not isinstance(breakdowns, list):
            continue
        for item in breakdowns:
            if not isinstance(item, dict):
                continue
            if item.get("modelName") == model:
                cost = item.get("cost") if isinstance(item.get("cost"), (int, float)) else None
                day = entry.get("date") if isinstance(entry.get("date"), str) else None
                return day, float(cost) if cost is not None else None
    return None, None


def render_text_current(
    provider: str,
    model: str,
    latest_date: Optional[str],
    total_cost: Optional[float],
    latest_cost: Optional[float],
    latest_cost_date: Optional[str],
    entry_count: int,
) -> str:
    lines = [f"Provider: {provider}", f"Current model: {model}"]
    if latest_date:
        lines.append(f"Latest model date: {latest_date}")
    lines.append(f"Total cost (rows): {usd(total_cost)}")
    if latest_cost_date:
        lines.append(f"Latest day cost: {usd(latest_cost)} ({latest_cost_date})")
    lines.append(f"Daily rows: {entry_count}")
    return "\n".join(lines)


def render_text_all(provider: str, totals: Dict[str, float]) -> str:
    lines = [f"Provider: {provider}", "Models:"]
    for model, cost in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        lines.append(f"- {model}: {usd(cost)}")
    return "\n".join(lines)


def build_json_current(
    provider: str,
    model: str,
    latest_date: Optional[str],
    total_cost: Optional[float],
    latest_cost: Optional[float],
    latest_cost_date: Optional[str],
    entry_count: int,
) -> Dict[str, Any]:
    return {
        "provider": provider,
        "mode": "current",
        "model": model,
        "latestModelDate": latest_date,
        "totalCostUSD": total_cost,
        "latestDayCostUSD": latest_cost,
        "latestDayCostDate": latest_cost_date,
        "dailyRowCount": entry_count,
    }


def build_json_all(provider: str, totals: Dict[str, float]) -> Dict[str, Any]:
    return {
        "provider": provider,
        "mode": "all",
        "models": [
            {"model": model, "totalCostUSD": cost}
            for model, cost in sorted(totals.items(), key=lambda item: item[1], reverse=True)
        ],
    }


def main() -> int:
    parser = argparse.ArgumentParser(description="Summarize CodexBar model usage from local cost logs.")
    parser.add_argument("--provider", choices=["codex", "claude"], default="codex")
    parser.add_argument("--mode", choices=["current", "all"], default="current")
    parser.add_argument("--model", help="Explicit model name to report instead of auto-current.")
    parser.add_argument("--input", help="Path to codexbar cost JSON (or '-' for stdin).")
    parser.add_argument("--days", type=int, help="Limit to last N days (based on daily rows).")
    parser.add_argument("--format", choices=["text", "json"], default="text")
    parser.add_argument("--pretty", action="store_true", help="Pretty-print JSON output.")

    args = parser.parse_args()

    try:
        payload = load_payload(args.input, args.provider)
    except Exception as exc:
        eprint(str(exc))
        return 1

    entries = parse_daily_entries(payload)
    entries = filter_by_days(entries, args.days)

    if args.mode == "current":
        model = args.model
        latest_date = None
        if not model:
            model, latest_date = pick_current_model(entries)
        if not model:
            eprint("No model data found in codexbar cost payload.")
            return 2
        totals = aggregate_costs(entries)
        total_cost = totals.get(model)
        latest_cost_date, latest_cost = latest_day_cost(entries, model)

        if args.format == "json":
            payload_out = build_json_current(
                provider=args.provider,
                model=model,
                latest_date=latest_date,
                total_cost=total_cost,
                latest_cost=latest_cost,
                latest_cost_date=latest_cost_date,
                entry_count=len(entries),
            )
            indent = 2 if args.pretty else None
            print(json.dumps(payload_out, indent=indent, sort_keys=args.pretty))
        else:
            print(
                render_text_current(
                    provider=args.provider,
                    model=model,
                    latest_date=latest_date,
                    total_cost=total_cost,
                    latest_cost=latest_cost,
                    latest_cost_date=latest_cost_date,
                    entry_count=len(entries),
                )
            )
        return 0

    totals = aggregate_costs(entries)
    if not totals:
        eprint("No model breakdowns found in codexbar cost payload.")
        return 2

    if args.format == "json":
        payload_out = build_json_all(provider=args.provider, totals=totals)
        indent = 2 if args.pretty else None
        print(json.dumps(payload_out, indent=indent, sort_keys=args.pretty))
    else:
        print(render_text_all(provider=args.provider, totals=totals))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
|
||||
|
|
@ -1,30 +0,0 @@
---
name: nano-banana-pro
description: Generate or edit images via Gemini 3 Pro Image (Nano Banana Pro).
homepage: https://ai.google.dev/
metadata: {"clawdbot":{"emoji":"🍌","requires":{"bins":["uv"],"env":["GEMINI_API_KEY"]},"primaryEnv":"GEMINI_API_KEY","install":[{"id":"uv-brew","kind":"brew","formula":"uv","bins":["uv"],"label":"Install uv (brew)"}]}}
---

# Nano Banana Pro (Gemini 3 Pro Image)

Use the bundled script to generate or edit images.

Generate
```bash
uv run {baseDir}/scripts/generate_image.py --prompt "your image description" --filename "output.png" --resolution 1K
```

Edit
```bash
uv run {baseDir}/scripts/generate_image.py --prompt "edit instructions" --filename "output.png" --input-image "/path/in.png" --resolution 2K
```

API key
- `GEMINI_API_KEY` env var
- Or set `skills."nano-banana-pro".apiKey` / `skills."nano-banana-pro".env.GEMINI_API_KEY` in `~/.clawdbot/clawdbot.json`

Notes
- Resolutions: `1K` (default), `2K`, `4K`.
- Use timestamps in filenames: `yyyy-mm-dd-hh-mm-ss-name.png`.
- The script prints a `MEDIA:` line for Clawdbot to auto-attach on supported chat providers.
- Do not read the image back; report the saved path only.
@ -1,169 +0,0 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "google-genai>=1.0.0",
#     "pillow>=10.0.0",
# ]
# ///
"""
Generate images using Google's Nano Banana Pro (Gemini 3 Pro Image) API.

Usage:
    uv run generate_image.py --prompt "your image description" --filename "output.png" [--resolution 1K|2K|4K] [--api-key KEY]
"""

import argparse
import os
import sys
from pathlib import Path


def get_api_key(provided_key: str | None) -> str | None:
    """Get API key from argument first, then environment."""
    if provided_key:
        return provided_key
    return os.environ.get("GEMINI_API_KEY")


def main():
    parser = argparse.ArgumentParser(
        description="Generate images using Nano Banana Pro (Gemini 3 Pro Image)"
    )
    parser.add_argument(
        "--prompt", "-p",
        required=True,
        help="Image description/prompt"
    )
    parser.add_argument(
        "--filename", "-f",
        required=True,
        help="Output filename (e.g., sunset-mountains.png)"
    )
    parser.add_argument(
        "--input-image", "-i",
        help="Optional input image path for editing/modification"
    )
    parser.add_argument(
        "--resolution", "-r",
        choices=["1K", "2K", "4K"],
        default="1K",
        help="Output resolution: 1K (default), 2K, or 4K"
    )
    parser.add_argument(
        "--api-key", "-k",
        help="Gemini API key (overrides GEMINI_API_KEY env var)"
    )

    args = parser.parse_args()

    # Get API key
    api_key = get_api_key(args.api_key)
    if not api_key:
        print("Error: No API key provided.", file=sys.stderr)
        print("Please either:", file=sys.stderr)
        print("  1. Provide --api-key argument", file=sys.stderr)
        print("  2. Set GEMINI_API_KEY environment variable", file=sys.stderr)
        sys.exit(1)

    # Import here after checking API key to avoid slow import on error
    from google import genai
    from google.genai import types
    from PIL import Image as PILImage

    # Initialise client
    client = genai.Client(api_key=api_key)

    # Set up output path
    output_path = Path(args.filename)
    output_path.parent.mkdir(parents=True, exist_ok=True)

    # Load input image if provided
    input_image = None
    output_resolution = args.resolution
    if args.input_image:
        try:
            input_image = PILImage.open(args.input_image)
            print(f"Loaded input image: {args.input_image}")

            # Auto-detect resolution if not explicitly set by user
            if args.resolution == "1K":  # Default value
                # Map input image size to resolution
                width, height = input_image.size
                max_dim = max(width, height)
                if max_dim >= 3000:
                    output_resolution = "4K"
                elif max_dim >= 1500:
                    output_resolution = "2K"
                else:
                    output_resolution = "1K"
                print(f"Auto-detected resolution: {output_resolution} (from input {width}x{height})")
        except Exception as e:
            print(f"Error loading input image: {e}", file=sys.stderr)
            sys.exit(1)

    # Build contents (image first if editing, prompt only if generating)
    if input_image:
        contents = [input_image, args.prompt]
        print(f"Editing image with resolution {output_resolution}...")
    else:
        contents = args.prompt
        print(f"Generating image with resolution {output_resolution}...")

    try:
        response = client.models.generate_content(
            model="gemini-3-pro-image-preview",
            contents=contents,
            config=types.GenerateContentConfig(
                response_modalities=["TEXT", "IMAGE"],
                image_config=types.ImageConfig(
                    image_size=output_resolution
                )
            )
        )

        # Process response and convert to PNG
        image_saved = False
        for part in response.parts:
            if part.text is not None:
                print(f"Model response: {part.text}")
            elif part.inline_data is not None:
                # Convert inline data to PIL Image and save as PNG
                from io import BytesIO

                # inline_data.data is already bytes, not base64
                image_data = part.inline_data.data
                if isinstance(image_data, str):
                    # If it's a string, it might be base64
                    import base64
                    image_data = base64.b64decode(image_data)

                image = PILImage.open(BytesIO(image_data))

                # Ensure RGB mode for PNG (convert RGBA to RGB with white background if needed)
                if image.mode == 'RGBA':
                    rgb_image = PILImage.new('RGB', image.size, (255, 255, 255))
                    rgb_image.paste(image, mask=image.split()[3])
                    rgb_image.save(str(output_path), 'PNG')
                elif image.mode == 'RGB':
                    image.save(str(output_path), 'PNG')
                else:
                    image.convert('RGB').save(str(output_path), 'PNG')
                image_saved = True

        if image_saved:
            full_path = output_path.resolve()
            print(f"\nImage saved: {full_path}")
            # Clawdbot parses MEDIA tokens and will attach the file on supported providers.
            print(f"MEDIA: {full_path}")
        else:
            print("Error: No image was generated in the response.", file=sys.stderr)
            sys.exit(1)

    except Exception as e:
        print(f"Error generating image: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()
@ -1,20 +0,0 @@
---
name: nano-pdf
description: Edit PDFs with natural-language instructions using the nano-pdf CLI.
homepage: https://pypi.org/project/nano-pdf/
metadata: {"clawdbot":{"emoji":"📄","requires":{"bins":["nano-pdf"]},"install":[{"id":"uv","kind":"uv","package":"nano-pdf","bins":["nano-pdf"],"label":"Install nano-pdf (uv)"}]}}
---

# nano-pdf

Use `nano-pdf` to apply edits to a specific page in a PDF using a natural-language instruction.

## Quick start

```bash
nano-pdf edit deck.pdf 1 "Change the title to 'Q3 Results' and fix the typo in the subtitle"
```

Notes:
- Page numbers are 0-based or 1-based depending on the tool’s version/config; if the result looks off by one, retry with the other.
- Always sanity-check the output PDF before sending it out.
@ -1,156 +0,0 @@
---
name: notion
description: Notion API for creating and managing pages, databases, and blocks.
homepage: https://developers.notion.com
metadata: {"clawdbot":{"emoji":"📝"}}
---

# notion

Use the Notion API to create/read/update pages, data sources (databases), and blocks.

## Setup

1. Create an integration at https://notion.so/my-integrations
2. Copy the API key (starts with `ntn_` or `secret_`)
3. Store it:
   ```bash
   mkdir -p ~/.config/notion
   echo "ntn_your_key_here" > ~/.config/notion/api_key
   ```
4. Share target pages/databases with your integration (click "..." → "Connect to" → your integration name)

## API Basics

All requests need:
```bash
NOTION_KEY=$(cat ~/.config/notion/api_key)
curl -X GET "https://api.notion.com/v1/..." \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json"
```

> **Note:** The `Notion-Version` header is required. This skill uses `2025-09-03` (latest). In this version, databases are called "data sources" in the API.

## Common Operations

**Search for pages and data sources:**
```bash
curl -X POST "https://api.notion.com/v1/search" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{"query": "page title"}'
```

**Get page:**
```bash
curl "https://api.notion.com/v1/pages/{page_id}" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03"
```

**Get page content (blocks):**
```bash
curl "https://api.notion.com/v1/blocks/{page_id}/children" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03"
```

**Create page in a data source:**
```bash
curl -X POST "https://api.notion.com/v1/pages" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "parent": {"database_id": "xxx"},
    "properties": {
      "Name": {"title": [{"text": {"content": "New Item"}}]},
      "Status": {"select": {"name": "Todo"}}
    }
  }'
```

**Query a data source (database):**
```bash
curl -X POST "https://api.notion.com/v1/data_sources/{data_source_id}/query" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "filter": {"property": "Status", "select": {"equals": "Active"}},
    "sorts": [{"property": "Date", "direction": "descending"}]
  }'
```

**Create a data source (database):**
```bash
curl -X POST "https://api.notion.com/v1/data_sources" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "parent": {"page_id": "xxx"},
    "title": [{"text": {"content": "My Database"}}],
    "properties": {
      "Name": {"title": {}},
      "Status": {"select": {"options": [{"name": "Todo"}, {"name": "Done"}]}},
      "Date": {"date": {}}
    }
  }'
```

**Update page properties:**
```bash
curl -X PATCH "https://api.notion.com/v1/pages/{page_id}" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"Status": {"select": {"name": "Done"}}}}'
```

**Add blocks to page:**
```bash
curl -X PATCH "https://api.notion.com/v1/blocks/{page_id}/children" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "children": [
      {"object": "block", "type": "paragraph", "paragraph": {"rich_text": [{"text": {"content": "Hello"}}]}}
    ]
  }'
```

## Property Types

Common property formats for database items:
- **Title:** `{"title": [{"text": {"content": "..."}}]}`
- **Rich text:** `{"rich_text": [{"text": {"content": "..."}}]}`
- **Select:** `{"select": {"name": "Option"}}`
- **Multi-select:** `{"multi_select": [{"name": "A"}, {"name": "B"}]}`
- **Date:** `{"date": {"start": "2024-01-15", "end": "2024-01-16"}}`
- **Checkbox:** `{"checkbox": true}`
- **Number:** `{"number": 42}`
- **URL:** `{"url": "https://..."}`
- **Email:** `{"email": "a@b.com"}`
- **Relation:** `{"relation": [{"id": "page_id"}]}`
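For reference, these per-type formats combine into a single `properties` object on create/update requests. A sketch with hypothetical property names (your database's schema defines the real ones):

```json
{
  "properties": {
    "Name": {"title": [{"text": {"content": "Quarterly review"}}]},
    "Tags": {"multi_select": [{"name": "work"}, {"name": "q3"}]},
    "Due": {"date": {"start": "2024-01-15"}},
    "Done": {"checkbox": false},
    "Estimate": {"number": 3}
  }
}
```

Each key must match a property name in the target data source exactly, including case.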

## Key Differences in 2025-09-03

- **Databases → Data Sources:** Use `/data_sources/` endpoints for queries and retrieval
- **Two IDs:** Each database now has both a `database_id` and a `data_source_id`
  - Use `database_id` when creating pages (`parent: {"database_id": "..."}`)
  - Use `data_source_id` when querying (`POST /v1/data_sources/{id}/query`)
- **Search results:** Databases return as `"object": "data_source"` with their `data_source_id`
- **Parent in responses:** Pages show `parent.data_source_id` alongside `parent.database_id`
- **Finding the data_source_id:** Search for the database, or call `GET /v1/data_sources/{data_source_id}`
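Since search results tag databases with `"object": "data_source"`, pulling the IDs out of a parsed `/v1/search` response body is a one-liner. A minimal sketch (the helper name is illustrative, not part of any Notion SDK):

```python
def extract_data_source_ids(search_response: dict) -> list[str]:
    """Collect IDs of data sources from a parsed /v1/search response body."""
    return [
        result["id"]
        for result in search_response.get("results", [])
        if result.get("object") == "data_source"
    ]
```

Pipe the curl output through `python3 -c` or `jq` equivalently; the point is to filter on the `object` field before trusting an ID as a `data_source_id`.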

## Notes

- Page/database IDs are UUIDs (with or without dashes)
- The API cannot set database view filters — that's UI-only
- Rate limit: ~3 requests/second average
- Use `is_inline: true` when creating data sources to embed them in pages
@ -1,55 +0,0 @@
---
name: obsidian
description: Work with Obsidian vaults (plain Markdown notes) and automate via obsidian-cli.
homepage: https://help.obsidian.md
metadata: {"clawdbot":{"emoji":"💎","requires":{"bins":["obsidian-cli"]},"install":[{"id":"brew","kind":"brew","formula":"yakitrak/yakitrak/obsidian-cli","bins":["obsidian-cli"],"label":"Install obsidian-cli (brew)"}]}}
---

# Obsidian

Obsidian vault = a normal folder on disk.

Vault structure (typical)
- Notes: `*.md` (plain text Markdown; edit with any editor)
- Config: `.obsidian/` (workspace + plugin settings; usually don’t touch from scripts)
- Canvases: `*.canvas` (JSON)
- Attachments: whatever folder you chose in Obsidian settings (images/PDFs/etc.)

## Find the active vault(s)

Obsidian desktop tracks vaults here (source of truth):
- `~/Library/Application Support/obsidian/obsidian.json`

`obsidian-cli` resolves vaults from that file; vault name is typically the **folder name** (path suffix).

Fast “what vault is active / where are the notes?”
- If you’ve already set a default: `obsidian-cli print-default --path-only`
- Otherwise, read `~/Library/Application Support/obsidian/obsidian.json` and use the vault entry with `"open": true`.

Notes
- Multiple vaults common (iCloud vs `~/Documents`, work/personal, etc.). Don’t guess; read config.
- Avoid writing hardcoded vault paths into scripts; prefer reading the config or using `print-default`.

## obsidian-cli quick start

Pick a default vault (once):
- `obsidian-cli set-default "<vault-folder-name>"`
- `obsidian-cli print-default` / `obsidian-cli print-default --path-only`

Search
- `obsidian-cli search "query"` (note names)
- `obsidian-cli search-content "query"` (inside notes; shows snippets + lines)

Create
- `obsidian-cli create "Folder/New note" --content "..." --open`
- Requires Obsidian URI handler (`obsidian://…`) working (Obsidian installed).
- Avoid creating notes under “hidden” dot-folders (e.g. `.something/...`) via URI; Obsidian may refuse.

Move/rename (safe refactor)
- `obsidian-cli move "old/path/note" "new/path/note"`
- Updates `[[wikilinks]]` and common Markdown links across the vault (this is the main win vs `mv`).

Delete
- `obsidian-cli delete "path/note"`

Prefer direct edits when appropriate: open the `.md` file and change it; Obsidian will pick it up.
@ -1,71 +0,0 @@
---
name: openai-image-gen
description: Batch-generate images via OpenAI Images API. Random prompt sampler + `index.html` gallery.
homepage: https://platform.openai.com/docs/api-reference/images
metadata: {"clawdbot":{"emoji":"🖼️","requires":{"bins":["python3"],"env":["OPENAI_API_KEY"]},"primaryEnv":"OPENAI_API_KEY","install":[{"id":"python-brew","kind":"brew","formula":"python","bins":["python3"],"label":"Install Python (brew)"}]}}
---

# OpenAI Image Gen

Generate a handful of “random but structured” prompts and render them via the OpenAI Images API.

## Run

```bash
python3 {baseDir}/scripts/gen.py
open ~/Projects/tmp/openai-image-gen-*/index.html  # if ~/Projects/tmp exists; else ./tmp/...
```

Useful flags:

```bash
# GPT image models with various options
python3 {baseDir}/scripts/gen.py --count 16 --model gpt-image-1
python3 {baseDir}/scripts/gen.py --prompt "ultra-detailed studio photo of a lobster astronaut" --count 4
python3 {baseDir}/scripts/gen.py --size 1536x1024 --quality high --out-dir ./out/images
python3 {baseDir}/scripts/gen.py --model gpt-image-1.5 --background transparent --output-format webp

# DALL-E 3 (note: count is automatically limited to 1)
python3 {baseDir}/scripts/gen.py --model dall-e-3 --quality hd --size 1792x1024 --style vivid
python3 {baseDir}/scripts/gen.py --model dall-e-3 --style natural --prompt "serene mountain landscape"

# DALL-E 2
python3 {baseDir}/scripts/gen.py --model dall-e-2 --size 512x512 --count 4
```

## Model-Specific Parameters

Different models support different parameter values. The script automatically selects appropriate defaults based on the model.

### Size

- **GPT image models** (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`): `1024x1024`, `1536x1024` (landscape), `1024x1536` (portrait), or `auto`
  - Default: `1024x1024`
- **dall-e-3**: `1024x1024`, `1792x1024`, or `1024x1792`
  - Default: `1024x1024`
- **dall-e-2**: `256x256`, `512x512`, or `1024x1024`
  - Default: `1024x1024`

### Quality

- **GPT image models**: `auto`, `high`, `medium`, or `low`
  - Default: `high`
- **dall-e-3**: `hd` or `standard`
  - Default: `standard`
- **dall-e-2**: `standard` only
  - Default: `standard`

### Other Notable Differences

- **dall-e-3** only supports generating 1 image at a time (`n=1`). The script automatically limits count to 1 when using this model.
- **GPT image models** support additional parameters:
  - `--background`: `transparent`, `opaque`, or `auto` (default)
  - `--output-format`: `png` (default), `jpeg`, or `webp`
  - Note: `stream` and `moderation` are available via API but not yet implemented in this script
- **dall-e-3** has a `--style` parameter: `vivid` (hyper-real, dramatic) or `natural` (more natural looking)

## Output

- `*.png`, `*.jpeg`, or `*.webp` images (output format depends on model + `--output-format`)
- `prompts.json` (prompt → file mapping)
- `index.html` (thumbnail gallery)
@ -1,230 +0,0 @@
#!/usr/bin/env python3
import argparse
import base64
import datetime as dt
import json
import os
import random
import re
import sys
import urllib.error
import urllib.request
from pathlib import Path


def slugify(text: str) -> str:
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    text = re.sub(r"-{2,}", "-", text).strip("-")
    return text or "image"


def default_out_dir() -> Path:
    now = dt.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
    preferred = Path.home() / "Projects" / "tmp"
    base = preferred if preferred.is_dir() else Path("./tmp")
    base.mkdir(parents=True, exist_ok=True)
    return base / f"openai-image-gen-{now}"


def pick_prompts(count: int) -> list[str]:
    subjects = [
        "a lobster astronaut",
        "a brutalist lighthouse",
        "a cozy reading nook",
        "a cyberpunk noodle shop",
        "a Vienna street at dusk",
        "a minimalist product photo",
        "a surreal underwater library",
    ]
    styles = [
        "ultra-detailed studio photo",
        "35mm film still",
        "isometric illustration",
        "editorial photography",
        "soft watercolor",
        "architectural render",
        "high-contrast monochrome",
    ]
    lighting = [
        "golden hour",
        "overcast soft light",
        "neon lighting",
        "dramatic rim light",
        "candlelight",
        "foggy atmosphere",
    ]
    prompts: list[str] = []
    for _ in range(count):
        prompts.append(
            f"{random.choice(styles)} of {random.choice(subjects)}, {random.choice(lighting)}"
        )
    return prompts


def get_model_defaults(model: str) -> tuple[str, str]:
    """Return (default_size, default_quality) for the given model."""
    if model == "dall-e-2":
        # quality will be ignored
        return ("1024x1024", "standard")
    elif model == "dall-e-3":
        return ("1024x1024", "standard")
    else:
        # GPT image or future models
        return ("1024x1024", "high")


def request_images(
    api_key: str,
    prompt: str,
    model: str,
    size: str,
    quality: str,
    background: str = "",
    output_format: str = "",
    style: str = "",
) -> dict:
    url = "https://api.openai.com/v1/images/generations"
    args = {
        "model": model,
        "prompt": prompt,
        "size": size,
        "n": 1,
    }

    # Quality parameter - dall-e-2 doesn't accept this parameter
    if model != "dall-e-2":
        args["quality"] = quality

    if model.startswith("dall-e"):
        args["response_format"] = "b64_json"

    if model.startswith("gpt-image"):
        if background:
            args["background"] = background
        if output_format:
            args["output_format"] = output_format

    if model == "dall-e-3" and style:
        args["style"] = style

    body = json.dumps(args).encode("utf-8")
    req = urllib.request.Request(
        url,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        data=body,
    )
    try:
        with urllib.request.urlopen(req, timeout=300) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as e:
        payload = e.read().decode("utf-8", errors="replace")
        raise RuntimeError(f"OpenAI Images API failed ({e.code}): {payload}") from e


def write_gallery(out_dir: Path, items: list[dict]) -> None:
    thumbs = "\n".join(
        [
            f"""
<figure>
  <a href="{it["file"]}"><img src="{it["file"]}" loading="lazy" /></a>
  <figcaption>{it["prompt"]}</figcaption>
</figure>
""".strip()
            for it in items
        ]
    )
    html = f"""<!doctype html>
<meta charset="utf-8" />
<title>openai-image-gen</title>
<style>
  :root {{ color-scheme: dark; }}
  body {{ margin: 24px; font: 14px/1.4 ui-sans-serif, system-ui; background: #0b0f14; color: #e8edf2; }}
  h1 {{ font-size: 18px; margin: 0 0 16px; }}
  .grid {{ display: grid; grid-template-columns: repeat(auto-fill, minmax(240px, 1fr)); gap: 16px; }}
  figure {{ margin: 0; padding: 12px; border: 1px solid #1e2a36; border-radius: 14px; background: #0f1620; }}
  img {{ width: 100%; height: auto; border-radius: 10px; display: block; }}
  figcaption {{ margin-top: 10px; color: #b7c2cc; }}
  code {{ color: #9cd1ff; }}
</style>
<h1>openai-image-gen</h1>
<p>Output: <code>{out_dir.as_posix()}</code></p>
<div class="grid">
{thumbs}
</div>
"""
    (out_dir / "index.html").write_text(html, encoding="utf-8")


def main() -> int:
    ap = argparse.ArgumentParser(description="Generate images via OpenAI Images API.")
    ap.add_argument("--prompt", help="Single prompt. If omitted, random prompts are generated.")
    ap.add_argument("--count", type=int, default=8, help="How many images to generate.")
    ap.add_argument("--model", default="gpt-image-1", help="Image model id.")
    ap.add_argument("--size", default="", help="Image size (e.g. 1024x1024, 1536x1024). Defaults based on model if not specified.")
    ap.add_argument("--quality", default="", help="Image quality (e.g. high, standard). Defaults based on model if not specified.")
    ap.add_argument("--background", default="", help="Background transparency (GPT models only): transparent, opaque, or auto.")
    ap.add_argument("--output-format", default="", help="Output format (GPT models only): png, jpeg, or webp.")
    ap.add_argument("--style", default="", help="Image style (dall-e-3 only): vivid or natural.")
    ap.add_argument("--out-dir", default="", help="Output directory (default: ./tmp/openai-image-gen-<ts>).")
    args = ap.parse_args()

    api_key = (os.environ.get("OPENAI_API_KEY") or "").strip()
    if not api_key:
        print("Missing OPENAI_API_KEY", file=sys.stderr)
        return 2

    # Apply model-specific defaults if not specified
    default_size, default_quality = get_model_defaults(args.model)
    size = args.size or default_size
    quality = args.quality or default_quality

    count = args.count
    if args.model == "dall-e-3" and count > 1:
        print(f"Warning: dall-e-3 only supports generating 1 image at a time. Reducing count from {count} to 1.", file=sys.stderr)
        count = 1

    out_dir = Path(args.out_dir).expanduser() if args.out_dir else default_out_dir()
    out_dir.mkdir(parents=True, exist_ok=True)

    prompts = [args.prompt] * count if args.prompt else pick_prompts(count)

    # Determine file extension based on output format
    if args.model.startswith("gpt-image") and args.output_format:
        file_ext = args.output_format
    else:
        file_ext = "png"

    items: list[dict] = []
    for idx, prompt in enumerate(prompts, start=1):
        print(f"[{idx}/{len(prompts)}] {prompt}")
        res = request_images(
            api_key,
            prompt,
            args.model,
            size,
            quality,
            args.background,
            args.output_format,
            args.style,
        )
        b64 = res.get("data", [{}])[0].get("b64_json")
        if not b64:
            raise RuntimeError(f"Unexpected response: {json.dumps(res)[:400]}")
        image_bytes = base64.b64decode(b64)
        filename = f"{idx:03d}-{slugify(prompt)[:40]}.{file_ext}"
        (out_dir / filename).write_bytes(image_bytes)
        items.append({"prompt": prompt, "file": filename})

    (out_dir / "prompts.json").write_text(json.dumps(items, indent=2), encoding="utf-8")
    write_gallery(out_dir, items)
    print(f"\nWrote: {(out_dir / 'index.html').as_posix()}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
@ -1,43 +0,0 @@
---
name: openai-whisper-api
description: Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
homepage: https://platform.openai.com/docs/guides/speech-to-text
metadata: {"clawdbot":{"emoji":"☁️","requires":{"bins":["curl"],"env":["OPENAI_API_KEY"]},"primaryEnv":"OPENAI_API_KEY"}}
---

# OpenAI Whisper API (curl)

Transcribe an audio file via OpenAI’s `/v1/audio/transcriptions` endpoint.

## Quick start

```bash
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
```

Defaults:
- Model: `whisper-1`
- Output: `<input>.txt`

## Useful flags

```bash
{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
```

## API key

Set `OPENAI_API_KEY`, or configure it in `~/.clawdbot/clawdbot.json`:

```json5
{
  skills: {
    "openai-whisper-api": {
      apiKey: "OPENAI_KEY_HERE"
    }
  }
}
```
@ -1,85 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
  cat >&2 <<'EOF'
Usage:
  transcribe.sh <audio-file> [--model whisper-1] [--out /path/to/out.txt] [--language en] [--prompt "hint"] [--json]
EOF
  exit 2
}

if [[ "${1:-}" == "" || "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
  usage
fi

in="${1:-}"
shift || true

model="whisper-1"
out=""
language=""
prompt=""
response_format="text"

while [[ $# -gt 0 ]]; do
  case "$1" in
    --model)
      model="${2:-}"
      shift 2
      ;;
    --out)
      out="${2:-}"
      shift 2
      ;;
    --language)
      language="${2:-}"
      shift 2
      ;;
    --prompt)
      prompt="${2:-}"
      shift 2
      ;;
    --json)
      response_format="json"
      shift 1
      ;;
    *)
      echo "Unknown arg: $1" >&2
      usage
      ;;
  esac
done

if [[ ! -f "$in" ]]; then
  echo "File not found: $in" >&2
  exit 1
fi

if [[ "${OPENAI_API_KEY:-}" == "" ]]; then
  echo "Missing OPENAI_API_KEY" >&2
  exit 1
fi

if [[ "$out" == "" ]]; then
  base="${in%.*}"
  if [[ "$response_format" == "json" ]]; then
    out="${base}.json"
  else
    out="${base}.txt"
  fi
fi

mkdir -p "$(dirname "$out")"

curl -sS https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Accept: application/json" \
  -F "file=@${in}" \
  -F "model=${model}" \
  -F "response_format=${response_format}" \
  ${language:+-F "language=${language}"} \
  ${prompt:+-F "prompt=${prompt}"} \
  >"$out"

echo "$out"
@ -1,19 +0,0 @@
---
name: openai-whisper
description: Local speech-to-text with the Whisper CLI (no API key).
homepage: https://openai.com/research/whisper
metadata: {"clawdbot":{"emoji":"🎙️","requires":{"bins":["whisper"]},"install":[{"id":"brew","kind":"brew","formula":"openai-whisper","bins":["whisper"],"label":"Install OpenAI Whisper (brew)"}]}}
---

# Whisper (CLI)

Use `whisper` to transcribe audio locally.

Quick start
- `whisper /path/audio.mp3 --model medium --output_format txt --output_dir .`
- `whisper /path/audio.m4a --task translate --output_format srt`

Notes
- Models download to `~/.cache/whisper` on first run.
- `--model` defaults to `turbo` on this install.
- Use smaller models for speed, larger for accuracy.
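For a folder of recordings, a small batch loop helps; this is a sketch that prints each `whisper` invocation for review instead of running it (remove the `echo` to execute):

```shell
# Sketch: batch-transcribe every .m4a in a directory with the Whisper CLI.
# Prints each command instead of running it; remove `echo` to execute.
batch_whisper() {
  dir="$1"
  model="${2:-turbo}"
  for f in "$dir"/*.m4a; do
    [ -e "$f" ] || continue   # skip when the glob matched nothing
    echo whisper "$f" --model "$model" --output_format txt --output_dir "$dir"
  done
}

batch_whisper /path/to/recordings medium
```

Adjust the glob for other extensions (`*.mp3`, `*.ogg`) as needed.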
@ -1,30 +0,0 @@
---
name: openhue
description: Control Philips Hue lights/scenes via the OpenHue CLI.
homepage: https://www.openhue.io/cli
metadata: {"clawdbot":{"emoji":"💡","requires":{"bins":["openhue"]},"install":[{"id":"brew","kind":"brew","formula":"openhue/cli/openhue-cli","bins":["openhue"],"label":"Install OpenHue CLI (brew)"}]}}
---

# OpenHue CLI

Use `openhue` to control Hue lights and scenes via a Hue Bridge.

Setup
- Discover bridges: `openhue discover`
- Guided setup: `openhue setup`

Read
- `openhue get light --json`
- `openhue get room --json`
- `openhue get scene --json`

Write
- Turn on: `openhue set light <id-or-name> --on`
- Turn off: `openhue set light <id-or-name> --off`
- Brightness: `openhue set light <id> --on --brightness 50`
- Color: `openhue set light <id> --on --rgb #3399FF`
- Scene: `openhue set scene <scene-id>`

Notes
- You may need to press the Hue Bridge button during setup.
- Use `--room "Room Name"` when light names are ambiguous.
@ -1,47 +0,0 @@
---
name: ordercli
description: Foodora-only CLI for checking past orders and active order status (Deliveroo WIP).
homepage: https://ordercli.sh
metadata: {"clawdbot":{"emoji":"🛵","requires":{"bins":["ordercli"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/ordercli","bins":["ordercli"],"label":"Install ordercli (brew)"},{"id":"go","kind":"go","module":"github.com/steipete/ordercli/cmd/ordercli@latest","bins":["ordercli"],"label":"Install ordercli (go)"}]}}
---

# ordercli

Use `ordercli` to check past orders and track active order status (Foodora only right now).

Quick start (Foodora)
- `ordercli foodora countries`
- `ordercli foodora config set --country AT`
- `ordercli foodora login --email you@example.com --password-stdin`
- `ordercli foodora orders`
- `ordercli foodora history --limit 20`
- `ordercli foodora history show <orderCode>`

Orders
- Active list (arrival/status): `ordercli foodora orders`
- Watch: `ordercli foodora orders --watch`
- Active order detail: `ordercli foodora order <orderCode>`
- History detail JSON: `ordercli foodora history show <orderCode> --json`

Reorder (adds to cart)
- Preview: `ordercli foodora reorder <orderCode>`
- Confirm: `ordercli foodora reorder <orderCode> --confirm`
- Address: `ordercli foodora reorder <orderCode> --confirm --address-id <id>`

Cloudflare / bot protection
- Browser login: `ordercli foodora login --email you@example.com --password-stdin --browser`
- Reuse profile: `--browser-profile "$HOME/Library/Application Support/ordercli/browser-profile"`
- Import Chrome cookies: `ordercli foodora cookies chrome --profile "Default"`

Session import (no password)
- `ordercli foodora session chrome --url https://www.foodora.at/ --profile "Default"`
- `ordercli foodora session refresh --client-id android`

Deliveroo (WIP, not working yet)
- Requires `DELIVEROO_BEARER_TOKEN` (optional `DELIVEROO_COOKIE`).
- `ordercli deliveroo config set --market uk`
- `ordercli deliveroo history`

Notes
- Use `--config /tmp/ordercli.json` for testing.
- Confirm before any reorder or cart-changing action.
@ -1,153 +0,0 @@
---
name: peekaboo
description: Capture and automate macOS UI with the Peekaboo CLI.
homepage: https://peekaboo.boo
metadata: {"clawdbot":{"emoji":"👀","os":["darwin"],"requires":{"bins":["peekaboo"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/peekaboo","bins":["peekaboo"],"label":"Install Peekaboo (brew)"}]}}
---

# Peekaboo

Peekaboo is a full macOS UI automation CLI: capture/inspect screens, target UI
elements, drive input, and manage apps/windows/menus. Commands share a snapshot
cache and support `--json`/`-j` for scripting. Run `peekaboo` or
`peekaboo <cmd> --help` for flags; `peekaboo --version` prints build metadata.
Tip: run via `polter peekaboo` to ensure fresh builds.

## Features (all CLI capabilities, excluding agent/MCP)

Core
- `bridge`: inspect Peekaboo Bridge host connectivity
- `capture`: live capture or video ingest + frame extraction
- `clean`: prune snapshot cache and temp files
- `config`: init/show/edit/validate, providers, models, credentials
- `image`: capture screenshots (screen/window/menu bar regions)
- `learn`: print the full agent guide + tool catalog
- `list`: apps, windows, screens, menubar, permissions
- `permissions`: check Screen Recording/Accessibility status
- `run`: execute `.peekaboo.json` scripts
- `sleep`: pause execution for a duration
- `tools`: list available tools with filtering/display options

Interaction
- `click`: target by ID/query/coords with smart waits
- `drag`: drag & drop across elements/coords/Dock
- `hotkey`: modifier combos like `cmd,shift,t`
- `move`: cursor positioning with optional smoothing
- `paste`: set clipboard -> paste -> restore
- `press`: special-key sequences with repeats
- `scroll`: directional scrolling (targeted + smooth)
- `swipe`: gesture-style drags between targets
- `type`: text + control keys (`--clear`, delays)

System
- `app`: launch/quit/relaunch/hide/unhide/switch/list apps
- `clipboard`: read/write clipboard (text/images/files)
- `dialog`: click/input/file/dismiss/list system dialogs
- `dock`: launch/right-click/hide/show/list Dock items
- `menu`: click/list application menus + menu extras
- `menubar`: list/click status bar items
- `open`: enhanced `open` with app targeting + JSON payloads
- `space`: list/switch/move-window (Spaces)
- `visualizer`: exercise Peekaboo visual feedback animations
- `window`: close/minimize/maximize/move/resize/focus/list

Vision
- `see`: annotated UI maps, snapshot IDs, optional analysis

Global runtime flags
- `--json`/`-j`, `--verbose`/`-v`, `--log-level <level>`
- `--no-remote`, `--bridge-socket <path>`

## Quickstart (happy path)
```bash
peekaboo permissions
peekaboo list apps --json
peekaboo see --annotate --path /tmp/peekaboo-see.png
peekaboo click --on B1
peekaboo type "Hello" --return
```

## Common targeting parameters (most interaction commands)
- App/window: `--app`, `--pid`, `--window-title`, `--window-id`, `--window-index`
- Snapshot targeting: `--snapshot` (ID from `see`; defaults to latest)
- Element/coords: `--on`/`--id` (element ID), `--coords x,y`
- Focus control: `--no-auto-focus`, `--space-switch`, `--bring-to-current-space`,
  `--focus-timeout-seconds`, `--focus-retry-count`

## Common capture parameters
- Output: `--path`, `--format png|jpg`, `--retina`
- Targeting: `--mode screen|window|frontmost`, `--screen-index`,
  `--window-title`, `--window-id`
- Analysis: `--analyze "prompt"`, `--annotate`
- Capture engine: `--capture-engine auto|classic|cg|modern|sckit`

## Common motion/typing parameters
- Timing: `--duration` (drag/swipe), `--steps`, `--delay` (type/scroll/press)
- Human-ish movement: `--profile human|linear`, `--wpm` (typing)
- Scroll: `--direction up|down|left|right`, `--amount <ticks>`, `--smooth`

## Examples
### See -> click -> type (most reliable flow)
```bash
peekaboo see --app Safari --window-title "Login" --annotate --path /tmp/see.png
peekaboo click --on B3 --app Safari
peekaboo type "user@example.com" --app Safari
peekaboo press tab --count 1 --app Safari
peekaboo type "supersecret" --app Safari --return
```

### Target by window id
```bash
peekaboo list windows --app "Visual Studio Code" --json
peekaboo click --window-id 12345 --coords 120,160
peekaboo type "Hello from Peekaboo" --window-id 12345
```

### Capture screenshots + analyze
```bash
peekaboo image --mode screen --screen-index 0 --retina --path /tmp/screen.png
peekaboo image --app Safari --window-title "Dashboard" --analyze "Summarize KPIs"
peekaboo see --mode screen --screen-index 0 --analyze "Summarize the dashboard"
```

### Live capture (motion-aware)
```bash
peekaboo capture live --mode region --region 100,100,800,600 --duration 30 \
  --active-fps 8 --idle-fps 2 --highlight-changes --path /tmp/capture
```

### App + window management
```bash
peekaboo app launch "Safari" --open https://example.com
peekaboo window focus --app Safari --window-title "Example"
peekaboo window set-bounds --app Safari --x 50 --y 50 --width 1200 --height 800
peekaboo app quit --app Safari
```

### Menus, menubar, dock
```bash
peekaboo menu click --app Safari --item "New Window"
peekaboo menu click --app TextEdit --path "Format > Font > Show Fonts"
peekaboo menu click-extra --title "WiFi"
peekaboo dock launch Safari
peekaboo menubar list --json
```

### Mouse + gesture input
```bash
peekaboo move 500,300 --smooth
peekaboo drag --from B1 --to T2
peekaboo swipe --from-coords 100,500 --to-coords 100,200 --duration 800
peekaboo scroll --direction down --amount 6 --smooth
```

### Keyboard input
```bash
peekaboo hotkey --keys "cmd,shift,t"
peekaboo press escape
peekaboo type "Line 1\nLine 2" --delay 10
```

Notes
- Requires Screen Recording + Accessibility permissions.
- Use `peekaboo see --annotate` to identify targets before clicking.
@ -1,62 +0,0 @@
---
name: sag
description: ElevenLabs text-to-speech with mac-style say UX.
homepage: https://sag.sh
metadata: {"clawdbot":{"emoji":"🗣️","requires":{"bins":["sag"],"env":["ELEVENLABS_API_KEY"]},"primaryEnv":"ELEVENLABS_API_KEY","install":[{"id":"brew","kind":"brew","formula":"steipete/tap/sag","bins":["sag"],"label":"Install sag (brew)"}]}}
---

# sag

Use `sag` for ElevenLabs TTS with local playback.

API key (required)
- `ELEVENLABS_API_KEY` (preferred)
- `SAG_API_KEY` also supported by the CLI

Quick start
- `sag "Hello there"`
- `sag speak -v "Roger" "Hello"`
- `sag voices`
- `sag prompting` (model-specific tips)

Model notes
- Default: `eleven_v3` (expressive)
- Stable: `eleven_multilingual_v2`
- Fast: `eleven_flash_v2_5`

Pronunciation + delivery rules
- First fix: respell (e.g. "key-note"), add hyphens, adjust casing.
- Numbers/units/URLs: `--normalize auto` (or `off` if it harms names).
- Language bias: `--lang en|de|fr|...` to guide normalization.
- v3: SSML `<break>` not supported; use `[pause]`, `[short pause]`, `[long pause]`.
- v2/v2.5: SSML `<break time="1.5s" />` supported; `<phoneme>` not exposed in `sag`.

v3 audio tags (put at the start of a line)
- `[whispers]`, `[shouts]`, `[sings]`
- `[laughs]`, `[starts laughing]`, `[sighs]`, `[exhales]`
- `[sarcastic]`, `[curious]`, `[excited]`, `[crying]`, `[mischievously]`
- Example: `sag "[whispers] keep this quiet. [short pause] ok?"`

Voice defaults
- `ELEVENLABS_VOICE_ID` or `SAG_VOICE_ID`

Confirm voice + speaker before long output.

## Chat voice responses

When Peter asks for a "voice" reply (e.g., "crazy scientist voice", "explain in voice"), generate audio and send it:

```bash
# Generate audio file
sag -v Clawd -o /tmp/voice-reply.mp3 "Your message here"

# Then include in reply:
# MEDIA:/tmp/voice-reply.mp3
```

Voice character tips:
- Crazy scientist: Use `[excited]` tags, dramatic pauses `[short pause]`, vary intensity
- Calm: Use `[whispers]` or slower pacing
- Dramatic: Use `[sings]` or `[shouts]` sparingly

Default voice for Clawd: `lj2rcrvANS3gaWWnczSX` (or just `-v Clawd`)
@ -1,95 +0,0 @@
---
name: session-logs
description: Search and analyze your own conversation history from session log files using jq.
metadata: {"clawdbot":{"emoji":"📜","requires":{"bins":["jq"]}}}
---

# session-logs

Search your complete conversation history stored in session JSONL files. Use this when you need to recall something not in your memory files.

## Location

Session logs live at: `~/.clawdbot/agents/main/sessions/`

- **`sessions.json`** - Index mapping session keys to session IDs
- **`<session-id>.jsonl`** - Full conversation transcript per session

## Structure

Each `.jsonl` file contains messages with:
- `type`: "session" (metadata) or "message"
- `timestamp`: ISO timestamp
- `message.role`: "user", "assistant", or "toolResult"
- `message.content[]`: Text, thinking, or tool calls
- `message.usage.cost.total`: Cost per response

## Common Queries

### List all sessions by date and size
```bash
for f in ~/.clawdbot/agents/main/sessions/*.jsonl; do
  date=$(head -1 "$f" | jq -r '.timestamp' | cut -dT -f1)
  size=$(ls -lh "$f" | awk '{print $5}')
  echo "$date $size $(basename "$f")"
done | sort -r
```

### Find sessions from a specific day
```bash
for f in ~/.clawdbot/agents/main/sessions/*.jsonl; do
  head -1 "$f" | jq -r '.timestamp' | grep -q "2026-01-06" && echo "$f"
done
```

### Extract user messages from a session
```bash
jq -r 'select(.message.role == "user") | .message.content[0].text' <session>.jsonl
```

### Search for keyword in assistant responses
```bash
jq -r 'select(.message.role == "assistant") | .message.content[]? | select(.type == "text") | .text' <session>.jsonl | grep -i "keyword"
```

### Get total cost for a session
```bash
jq -s '[.[] | .message.usage.cost.total // 0] | add' <session>.jsonl
```

### Daily cost summary
```bash
for f in ~/.clawdbot/agents/main/sessions/*.jsonl; do
  date=$(head -1 "$f" | jq -r '.timestamp' | cut -dT -f1)
  cost=$(jq -s '[.[] | .message.usage.cost.total // 0] | add' "$f")
  echo "$date $cost"
done | awk '{a[$1]+=$2} END {for(d in a) print d, "$"a[d]}' | sort -r
```

### Count messages and tokens in a session
```bash
jq -s '{
  messages: length,
  user: [.[] | select(.message.role == "user")] | length,
  assistant: [.[] | select(.message.role == "assistant")] | length,
  first: .[0].timestamp,
  last: .[-1].timestamp
}' <session>.jsonl
```

### Tool usage breakdown
```bash
jq -r '.message.content[]? | select(.type == "toolCall") | .name' <session>.jsonl | sort | uniq -c | sort -rn
```

### Search across ALL sessions for a phrase
```bash
grep -l "phrase" ~/.clawdbot/agents/main/sessions/*.jsonl
```

## Tips

- Sessions are append-only JSONL (one JSON object per line)
- Large sessions can be several MB - use `head`/`tail` for sampling
- The `sessions.json` index maps chat providers (discord, whatsapp, etc.) to session IDs
- Deleted sessions have `.deleted.<timestamp>` suffix
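To sanity-check these queries without touching real logs, the cost query can be exercised on a synthetic two-message session (field names follow the structure described above):

```shell
# Build a tiny synthetic session log and sum its per-response costs.
# The session-metadata line has no .message, so `// 0` treats it as zero.
tmp="$(mktemp)"
cat >"$tmp" <<'EOF'
{"type":"session","timestamp":"2026-01-06T10:00:00Z"}
{"type":"message","timestamp":"2026-01-06T10:00:05Z","message":{"role":"assistant","usage":{"cost":{"total":0.25}}}}
{"type":"message","timestamp":"2026-01-06T10:00:09Z","message":{"role":"assistant","usage":{"cost":{"total":0.5}}}}
EOF
jq -s '[.[] | .message.usage.cost.total // 0] | add' "$tmp"   # → 0.75
rm -f "$tmp"
```

The same pattern works for testing the message-count and tool-breakdown filters.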
@ -1,371 +0,0 @@
---
name: skill-creator
description: Create or update AgentSkills. Use when designing, structuring, or packaging skills with scripts, references, and assets.
---

# Skill Creator

This skill provides guidance for creating effective skills.

## About Skills

Skills are modular, self-contained packages that extend Codex's capabilities by providing
specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific
domains or tasks—they transform Codex from a general-purpose agent into a specialized agent
equipped with procedural knowledge that no model can fully possess.

### What Skills Provide

1. Specialized workflows - Multi-step procedures for specific domains
2. Tool integrations - Instructions for working with specific file formats or APIs
3. Domain expertise - Company-specific knowledge, schemas, business logic
4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks

## Core Principles

### Concise is Key

The context window is a public good. Skills share the context window with everything else Codex needs: system prompt, conversation history, other Skills' metadata, and the actual user request.

**Default assumption: Codex is already very smart.** Only add context Codex doesn't already have. Challenge each piece of information: "Does Codex really need this explanation?" and "Does this paragraph justify its token cost?"

Prefer concise examples over verbose explanations.

### Set Appropriate Degrees of Freedom

Match the level of specificity to the task's fragility and variability:

**High freedom (text-based instructions)**: Use when multiple approaches are valid, decisions depend on context, or heuristics guide the approach.

**Medium freedom (pseudocode or scripts with parameters)**: Use when a preferred pattern exists, some variation is acceptable, or configuration affects behavior.

**Low freedom (specific scripts, few parameters)**: Use when operations are fragile and error-prone, consistency is critical, or a specific sequence must be followed.

Think of Codex as exploring a path: a narrow bridge with cliffs needs specific guardrails (low freedom), while an open field allows many routes (high freedom).

### Anatomy of a Skill

Every skill consists of a required SKILL.md file and optional bundled resources:

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter metadata (required)
│   │   ├── name: (required)
│   │   └── description: (required)
│   └── Markdown instructions (required)
└── Bundled Resources (optional)
    ├── scripts/ - Executable code (Python/Bash/etc.)
    ├── references/ - Documentation intended to be loaded into context as needed
    └── assets/ - Files used in output (templates, icons, fonts, etc.)
```

#### SKILL.md (required)

Every SKILL.md consists of:

- **Frontmatter** (YAML): Contains `name` and `description` fields. These are the only fields that Codex reads to determine when the skill gets used, so it is very important to describe clearly and comprehensively what the skill is and when it should be used.
- **Body** (Markdown): Instructions and guidance for using the skill. Only loaded AFTER the skill triggers (if at all).

#### Bundled Resources (optional)

##### Scripts (`scripts/`)

Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.

- **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Codex for patching or environment-specific adjustments

##### References (`references/`)

Documentation and reference material intended to be loaded as needed into context to inform Codex's process and thinking.

- **When to include**: For documentation that Codex should reference while working
- **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
- **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
- **Benefits**: Keeps SKILL.md lean, loaded only when Codex determines it's needed
- **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
- **Avoid duplication**: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill—this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.

##### Assets (`assets/`)

Files not intended to be loaded into context, but rather used within the output Codex produces.

- **When to include**: When the skill needs files that will be used in the final output
- **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Codex to use files without loading them into context

#### What Not to Include in a Skill

A skill should only contain essential files that directly support its functionality. Do NOT create extraneous documentation or auxiliary files, including:

- README.md
- INSTALLATION_GUIDE.md
- QUICK_REFERENCE.md
- CHANGELOG.md
- etc.

The skill should only contain the information needed for an AI agent to do the job at hand. It should not contain auxiliary context about the process that went into creating it, setup and testing procedures, user-facing documentation, etc. Creating additional documentation files just adds clutter and confusion.

### Progressive Disclosure Design Principle

Skills use a three-level loading system to manage context efficiently:

1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - When skill triggers (<5k words)
3. **Bundled resources** - As needed by Codex (unlimited, because scripts can be executed without being read into the context window)

#### Progressive Disclosure Patterns

Keep SKILL.md body to the essentials and under 500 lines to minimize context bloat. Split content into separate files when approaching this limit. When splitting out content into other files, it is very important to reference them from SKILL.md and describe clearly when to read them, to ensure the reader of the skill knows they exist and when to use them.
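As a sketch, that 500-line guidance can be checked mechanically before packaging (`check_skill_length` is a hypothetical helper, not part of any packaged tooling):

```shell
# Hypothetical check: warn when a SKILL.md exceeds the 500-line guidance.
check_skill_length() {
  skill_md="$1"
  limit="${2:-500}"
  lines="$(wc -l <"$skill_md" | tr -d ' ')"
  if [ "$lines" -gt "$limit" ]; then
    echo "WARN: $skill_md has $lines lines (limit $limit); split details into references/"
    return 1
  fi
  echo "OK: $skill_md ($lines lines)"
}

# check_skill_length my-skill/SKILL.md
```

A failing check usually means detailed reference material belongs in `references/` files instead.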
**Key principle:** When a skill supports multiple variations, frameworks, or options, keep only the core workflow and selection guidance in SKILL.md. Move variant-specific details (patterns, examples, configuration) into separate reference files.

**Pattern 1: High-level guide with references**

```markdown
# PDF Processing

## Quick start

Extract text with pdfplumber:
[code example]

## Advanced features

- **Form filling**: See [FORMS.md](FORMS.md) for complete guide
- **API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
- **Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
```

Codex loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.

**Pattern 2: Domain-specific organization**

For Skills with multiple domains, organize content by domain to avoid loading irrelevant context:

```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
    ├── finance.md (revenue, billing metrics)
    ├── sales.md (opportunities, pipeline)
    ├── product.md (API usage, features)
    └── marketing.md (campaigns, attribution)
```

When a user asks about sales metrics, Codex only reads sales.md.

Similarly, for skills supporting multiple frameworks or variants, organize by variant:

```
cloud-deploy/
├── SKILL.md (workflow + provider selection)
└── references/
    ├── aws.md (AWS deployment patterns)
    ├── gcp.md (GCP deployment patterns)
    └── azure.md (Azure deployment patterns)
```

When the user chooses AWS, Codex only reads aws.md.

**Pattern 3: Conditional details**

Show basic content, link to advanced content:

```markdown
# DOCX Processing

## Creating documents

Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).

## Editing documents

For simple edits, modify the XML directly.

**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
```

Codex reads REDLINING.md or OOXML.md only when the user needs those features.

**Important guidelines:**

- **Avoid deeply nested references** - Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md.
- **Structure longer reference files** - For files longer than 100 lines, include a table of contents at the top so Codex can see the full scope when previewing.

## Skill Creation Process

Skill creation involves these steps:

1. Understand the skill with concrete examples
2. Plan reusable skill contents (scripts, references, assets)
3. Initialize the skill (run init_skill.py)
4. Edit the skill (implement resources and write SKILL.md)
5. Package the skill (run package_skill.py)
6. Iterate based on real usage

Follow these steps in order, skipping only if there is a clear reason why they are not applicable.

### Skill Naming

- Use lowercase letters, digits, and hyphens only; normalize user-provided titles to hyphen-case (e.g., "Plan Mode" -> `plan-mode`).
- When generating names, keep them under 64 characters (letters, digits, hyphens).
- Prefer short, verb-led phrases that describe the action.
- Namespace by tool when it improves clarity or triggering (e.g., `gh-address-comments`, `linear-address-issue`).
- Name the skill folder exactly after the skill name.
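The hyphen-case normalization above can be sketched in shell (`skill_name` is a hypothetical helper; it assumes ASCII input):

```shell
# Normalize a user-provided title to a hyphen-case skill name
# under 64 characters, e.g. "Plan Mode" -> "plan-mode".
skill_name() {
  printf '%s\n' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9]\{1,\}/-/g' -e 's/^-//' -e 's/-$//' \
    | cut -c1-64
}

skill_name "Plan Mode"              # → plan-mode
skill_name "GH: Address Comments"   # → gh-address-comments
```

Runs of punctuation or whitespace collapse to a single hyphen, and leading/trailing hyphens are stripped.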
||||
### Step 1: Understanding the Skill with Concrete Examples
|
||||
|
||||
Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.
|
||||
|
||||
To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.
|
||||
|
||||
For example, when building an image-editor skill, relevant questions include:
|
||||
|
||||
- "What functionality should the image-editor skill support? Editing, rotating, anything else?"
|
||||
- "Can you give some examples of how this skill would be used?"
|
||||
- "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
|
||||
- "What would a user say that should trigger this skill?"
|
||||
|
||||
To avoid overwhelming users, avoid asking too many questions in a single message. Start with the most important questions and follow up as needed for better effectiveness.
|
||||
|
||||
Conclude this step when there is a clear sense of the functionality the skill should support.
|
||||
|
||||
### Step 2: Planning the Reusable Skill Contents

To turn concrete examples into an effective skill, analyze each example by:

1. Considering how to execute on the example from scratch
2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly

Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:

1. Rotating a PDF requires rewriting the same code each time
2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill

Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:

1. Writing a frontend webapp requires the same boilerplate HTML/React each time
2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill

Example: When building a `big-query` skill to handle queries like "How many users have logged in today?" the analysis shows:

1. Querying BigQuery requires re-discovering the table schemas and relationships each time
2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill

To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.

### Step 3: Initializing the Skill

At this point, it is time to actually create the skill.

Skip this step only if the skill being developed already exists and only iteration or packaging is needed. In this case, continue to the next step.

When creating a new skill from scratch, always run the `init_skill.py` script. It generates a template skill directory that includes everything a skill requires, making skill creation faster and more reliable.

Usage:

```bash
scripts/init_skill.py <skill-name> --path <output-directory> [--resources scripts,references,assets] [--examples]
```

Examples:

```bash
scripts/init_skill.py my-skill --path skills/public
scripts/init_skill.py my-skill --path skills/public --resources scripts,references
scripts/init_skill.py my-skill --path skills/public --resources scripts --examples
```

The script:

- Creates the skill directory at the specified path
- Generates a SKILL.md template with proper frontmatter and TODO placeholders
- Optionally creates resource directories based on `--resources`
- Optionally adds example files when `--examples` is set

After initialization, customize the SKILL.md and add resources as needed. If you used `--examples`, replace or delete the placeholder files.

### Step 4: Edit the Skill

When editing the (newly generated or existing) skill, remember that the skill is being created for another instance of Codex to use. Include information that would be beneficial and non-obvious to Codex. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Codex instance execute these tasks more effectively.

#### Learn Proven Design Patterns

Consult these guides based on the skill's needs:

- **Multi-step processes**: See `references/workflows.md` for sequential workflows and conditional logic
- **Specific output formats or quality standards**: See `references/output-patterns.md` for template and example patterns

These files contain established best practices for effective skill design.

#### Start with Reusable Skill Contents

To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.

Added scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.

If you used `--examples`, delete any placeholder files that are not needed for the skill. Only create resource directories that are actually required.

#### Update SKILL.md

**Writing Guidelines:** Always use imperative/infinitive form.

##### Frontmatter

Write the YAML frontmatter with `name` and `description`:

- `name`: The skill name
- `description`: The primary triggering mechanism for the skill; it helps Codex understand when to use it.
  - Include both what the skill does and specific triggers/contexts for when to use it.
  - Include all "when to use" information here, not in the body. The body is only loaded after triggering, so "When to Use This Skill" sections in the body are not helpful to Codex.
  - Example description for a `docx` skill: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when Codex needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"

Do not include any other fields in the YAML frontmatter.

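As a quick illustration of the two-field rule, a minimal frontmatter check might look like this (a hypothetical helper, not one of the bundled scripts; the sample document below is also hypothetical):

```python
def check_frontmatter(skill_md: str) -> list[str]:
    """Return a list of problems with a SKILL.md's YAML frontmatter."""
    problems = []
    lines = skill_md.splitlines()
    if not lines or lines[0] != "---":
        return ["missing frontmatter delimiter '---'"]
    try:
        end = lines.index("---", 1)
    except ValueError:
        return ["unterminated frontmatter"]
    # Collect top-level keys (a simple parse; real YAML allows more structure).
    fields = {}
    for line in lines[1:end]:
        if ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    for required in ("name", "description"):
        if required not in fields:
            problems.append(f"missing required field: {required}")
    for extra in sorted(set(fields) - {"name", "description"}):
        problems.append(f"unexpected field: {extra}")
    return problems

doc = "---\nname: docx\ndescription: Create and edit .docx files.\nversion: 1.0\n---\n# Docx\n"
print(check_frontmatter(doc))  # ['unexpected field: version']
```
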
##### Body

Write instructions for using the skill and its bundled resources.

### Step 5: Packaging a Skill

Once development of the skill is complete, it must be packaged into a distributable .skill file that is shared with the user. The packaging process automatically validates the skill first to ensure it meets all requirements:

```bash
scripts/package_skill.py <path/to/skill-folder>
```

Optional output directory specification:

```bash
scripts/package_skill.py <path/to/skill-folder> ./dist
```

The packaging script will:

1. **Validate** the skill automatically, checking:

   - YAML frontmatter format and required fields
   - Skill naming conventions and directory structure
   - Description completeness and quality
   - File organization and resource references

2. **Package** the skill if validation passes, creating a .skill file named after the skill (e.g., `my-skill.skill`) that includes all files and maintains the proper directory structure for distribution. The .skill file is a zip archive with a .skill extension.

If validation fails, the script reports the errors and exits without creating a package. Fix any validation errors and run the packaging command again.

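Because a .skill file is an ordinary zip archive under a different extension, a packaged skill can be inspected with standard tooling. A small sketch using Python's `zipfile` (the package name and contents here are hypothetical):

```python
import os
import tempfile
import zipfile

def list_skill_contents(skill_path: str) -> list[str]:
    """List the files inside a packaged .skill archive (a renamed zip)."""
    with zipfile.ZipFile(skill_path) as archive:
        return archive.namelist()

# Build a tiny stand-in package to demonstrate the format.
with tempfile.TemporaryDirectory() as tmp:
    package = os.path.join(tmp, "my-skill.skill")
    with zipfile.ZipFile(package, "w") as archive:
        archive.writestr("my-skill/SKILL.md", "---\nname: my-skill\ndescription: demo\n---\n")
    print(list_skill_contents(package))  # ['my-skill/SKILL.md']
```
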
### Step 6: Iterate

After testing the skill, users may request improvements. Often this happens right after using the skill, while the context of how the skill performed is still fresh.

**Iteration workflow:**

1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated
4. Implement changes and test again

@@ -1,202 +0,0 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.

@@ -1,378 +0,0 @@
#!/usr/bin/env python3
"""
Skill Initializer - Creates a new skill from template

Usage:
    init_skill.py <skill-name> --path <path> [--resources scripts,references,assets] [--examples]

Examples:
    init_skill.py my-new-skill --path skills/public
    init_skill.py my-new-skill --path skills/public --resources scripts,references
    init_skill.py my-api-helper --path skills/private --resources scripts --examples
    init_skill.py custom-skill --path /custom/location
"""

import argparse
import re
import sys
from pathlib import Path

MAX_SKILL_NAME_LENGTH = 64
ALLOWED_RESOURCES = {"scripts", "references", "assets"}

SKILL_TEMPLATE = """---
name: {skill_name}
description: [TODO: Complete and informative explanation of what the skill does and when to use it. Include WHEN to use this skill - specific scenarios, file types, or tasks that trigger it.]
---

# {skill_title}

## Overview

[TODO: 1-2 sentences explaining what this skill enables]

## Structuring This Skill

[TODO: Choose the structure that best fits this skill's purpose. Common patterns:

**1. Workflow-Based** (best for sequential processes)
- Works well when there are clear step-by-step procedures
- Example: DOCX skill with "Workflow Decision Tree" -> "Reading" -> "Creating" -> "Editing"
- Structure: ## Overview -> ## Workflow Decision Tree -> ## Step 1 -> ## Step 2...

**2. Task-Based** (best for tool collections)
- Works well when the skill offers different operations/capabilities
- Example: PDF skill with "Quick Start" -> "Merge PDFs" -> "Split PDFs" -> "Extract Text"
- Structure: ## Overview -> ## Quick Start -> ## Task Category 1 -> ## Task Category 2...

**3. Reference/Guidelines** (best for standards or specifications)
- Works well for brand guidelines, coding standards, or requirements
- Example: Brand styling with "Brand Guidelines" -> "Colors" -> "Typography" -> "Features"
- Structure: ## Overview -> ## Guidelines -> ## Specifications -> ## Usage...

**4. Capabilities-Based** (best for integrated systems)
- Works well when the skill provides multiple interrelated features
- Example: Product Management with "Core Capabilities" -> numbered capability list
- Structure: ## Overview -> ## Core Capabilities -> ### 1. Feature -> ### 2. Feature...

Patterns can be mixed and matched as needed. Most skills combine patterns (e.g., start with task-based, add workflow for complex operations).

Delete this entire "Structuring This Skill" section when done - it's just guidance.]

## [TODO: Replace with the first main section based on chosen structure]

[TODO: Add content here. See examples in existing skills:
- Code samples for technical skills
- Decision trees for complex workflows
- Concrete examples with realistic user requests
- References to scripts/templates/references as needed]

## Resources (optional)

Create only the resource directories this skill actually needs. Delete this section if no resources are required.

### scripts/
Executable code (Python/Bash/etc.) that can be run directly to perform specific operations.

**Examples from other skills:**
- PDF skill: `fill_fillable_fields.py`, `extract_form_field_info.py` - utilities for PDF manipulation
- DOCX skill: `document.py`, `utilities.py` - Python modules for document processing

**Appropriate for:** Python scripts, shell scripts, or any executable code that performs automation, data processing, or specific operations.

**Note:** Scripts may be executed without loading into context, but can still be read by Codex for patching or environment adjustments.

### references/
Documentation and reference material intended to be loaded into context to inform Codex's process and thinking.

**Examples from other skills:**
- Product management: `communication.md`, `context_building.md` - detailed workflow guides
- BigQuery: API reference documentation and query examples
- Finance: Schema documentation, company policies

**Appropriate for:** In-depth documentation, API references, database schemas, comprehensive guides, or any detailed information that Codex should reference while working.

### assets/
Files not intended to be loaded into context, but rather used within the output Codex produces.

**Examples from other skills:**
- Brand styling: PowerPoint template files (.pptx), logo files
- Frontend builder: HTML/React boilerplate project directories
- Typography: Font files (.ttf, .woff2)

**Appropriate for:** Templates, boilerplate code, document templates, images, icons, fonts, or any files meant to be copied or used in the final output.

---

**Not every skill requires all three types of resources.**
"""

EXAMPLE_SCRIPT = '''#!/usr/bin/env python3
"""
Example helper script for {skill_name}

This is a placeholder script that can be executed directly.
Replace with actual implementation or delete if not needed.

Example real scripts from other skills:
- pdf/scripts/fill_fillable_fields.py - Fills PDF form fields
- pdf/scripts/convert_pdf_to_images.py - Converts PDF pages to images
"""

def main():
    print("This is an example script for {skill_name}")
    # TODO: Add actual script logic here
    # This could be data processing, file conversion, API calls, etc.

if __name__ == "__main__":
    main()
'''

EXAMPLE_REFERENCE = """# Reference Documentation for {skill_title}

This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.

Example real reference docs from other skills:
- product-management/references/communication.md - Comprehensive guide for status updates
- product-management/references/context_building.md - Deep-dive on gathering context
- bigquery/references/ - API references and query examples

## When Reference Docs Are Useful

Reference docs are ideal for:
- Comprehensive API documentation
- Detailed workflow guides
- Complex multi-step processes
- Information too lengthy for main SKILL.md
- Content that's only needed for specific use cases

## Structure Suggestions

### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes
- Rate limits

### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Common patterns
- Troubleshooting
- Best practices
"""

EXAMPLE_ASSET = """# Example Asset File

This placeholder represents where asset files would be stored.
Replace with actual asset files (templates, images, fonts, etc.) or delete if not needed.

Asset files are NOT intended to be loaded into context, but rather used within
the output Codex produces.

Example asset files from other skills:
- Brand guidelines: logo.png, slides_template.pptx
- Frontend builder: hello-world/ directory with HTML/React boilerplate
- Typography: custom-font.ttf, font-family.woff2
- Data: sample_data.csv, test_dataset.json

## Common Asset Types

- Templates: .pptx, .docx, boilerplate directories
- Images: .png, .jpg, .svg, .gif
- Fonts: .ttf, .otf, .woff, .woff2
- Boilerplate code: Project directories, starter files
- Icons: .ico, .svg
- Data files: .csv, .json, .xml, .yaml

Note: This is a text placeholder. Actual assets can be any file type.
"""


def normalize_skill_name(skill_name):
    """Normalize a skill name to lowercase hyphen-case."""
    normalized = skill_name.strip().lower()
    normalized = re.sub(r"[^a-z0-9]+", "-", normalized)
    normalized = normalized.strip("-")
    normalized = re.sub(r"-{2,}", "-", normalized)
    return normalized


def title_case_skill_name(skill_name):
    """Convert hyphenated skill name to Title Case for display."""
    return " ".join(word.capitalize() for word in skill_name.split("-"))


def parse_resources(raw_resources):
    if not raw_resources:
        return []
    resources = [item.strip() for item in raw_resources.split(",") if item.strip()]
    invalid = sorted({item for item in resources if item not in ALLOWED_RESOURCES})
    if invalid:
        allowed = ", ".join(sorted(ALLOWED_RESOURCES))
        print(f"[ERROR] Unknown resource type(s): {', '.join(invalid)}")
        print(f"        Allowed: {allowed}")
        sys.exit(1)
    deduped = []
    seen = set()
    for resource in resources:
        if resource not in seen:
            deduped.append(resource)
            seen.add(resource)
    return deduped


def create_resource_dirs(skill_dir, skill_name, skill_title, resources, include_examples):
    for resource in resources:
        resource_dir = skill_dir / resource
        resource_dir.mkdir(exist_ok=True)
        if resource == "scripts":
            if include_examples:
                example_script = resource_dir / "example.py"
                example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))
                example_script.chmod(0o755)
                print("[OK] Created scripts/example.py")
            else:
                print("[OK] Created scripts/")
        elif resource == "references":
            if include_examples:
                example_reference = resource_dir / "api_reference.md"
                example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))
                print("[OK] Created references/api_reference.md")
            else:
                print("[OK] Created references/")
        elif resource == "assets":
            if include_examples:
                example_asset = resource_dir / "example_asset.txt"
                example_asset.write_text(EXAMPLE_ASSET)
                print("[OK] Created assets/example_asset.txt")
            else:
                print("[OK] Created assets/")


def init_skill(skill_name, path, resources, include_examples):
    """
    Initialize a new skill directory with template SKILL.md.

    Args:
        skill_name: Name of the skill
        path: Path where the skill directory should be created
        resources: Resource directories to create
        include_examples: Whether to create example files in resource directories

    Returns:
        Path to created skill directory, or None if error
    """
    # Determine skill directory path
    skill_dir = Path(path).resolve() / skill_name

    # Check if directory already exists
    if skill_dir.exists():
        print(f"[ERROR] Skill directory already exists: {skill_dir}")
        return None

    # Create skill directory
    try:
        skill_dir.mkdir(parents=True, exist_ok=False)
        print(f"[OK] Created skill directory: {skill_dir}")
    except Exception as e:
        print(f"[ERROR] Error creating directory: {e}")
        return None

    # Create SKILL.md from template
    skill_title = title_case_skill_name(skill_name)
    skill_content = SKILL_TEMPLATE.format(skill_name=skill_name, skill_title=skill_title)

    skill_md_path = skill_dir / "SKILL.md"
    try:
        skill_md_path.write_text(skill_content)
        print("[OK] Created SKILL.md")
    except Exception as e:
        print(f"[ERROR] Error creating SKILL.md: {e}")
        return None

    # Create resource directories if requested
    if resources:
        try:
            create_resource_dirs(skill_dir, skill_name, skill_title, resources, include_examples)
        except Exception as e:
            print(f"[ERROR] Error creating resource directories: {e}")
            return None

    # Print next steps
    print(f"\n[OK] Skill '{skill_name}' initialized successfully at {skill_dir}")
    print("\nNext steps:")
    print("1. Edit SKILL.md to complete the TODO items and update the description")
    if resources:
        if include_examples:
            print("2. Customize or delete the example files in scripts/, references/, and assets/")
        else:
            print("2. Add resources to scripts/, references/, and assets/ as needed")
    else:
        print("2. Create resource directories only if needed (scripts/, references/, assets/)")
    print("3. Run the validator when ready to check the skill structure")

    return skill_dir


def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Create a new skill directory with a SKILL.md template.",
|
||||
)
|
||||
parser.add_argument("skill_name", help="Skill name (normalized to hyphen-case)")
|
||||
parser.add_argument("--path", required=True, help="Output directory for the skill")
|
||||
parser.add_argument(
|
||||
"--resources",
|
||||
default="",
|
||||
help="Comma-separated list: scripts,references,assets",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--examples",
|
||||
action="store_true",
|
||||
help="Create example files inside the selected resource directories",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
raw_skill_name = args.skill_name
|
||||
skill_name = normalize_skill_name(raw_skill_name)
|
||||
if not skill_name:
|
||||
print("[ERROR] Skill name must include at least one letter or digit.")
|
||||
sys.exit(1)
|
||||
if len(skill_name) > MAX_SKILL_NAME_LENGTH:
|
||||
print(
|
||||
f"[ERROR] Skill name '{skill_name}' is too long ({len(skill_name)} characters). "
|
||||
f"Maximum is {MAX_SKILL_NAME_LENGTH} characters."
|
||||
)
|
||||
sys.exit(1)
|
||||
if skill_name != raw_skill_name:
|
||||
print(f"Note: Normalized skill name from '{raw_skill_name}' to '{skill_name}'.")
|
||||
|
||||
resources = parse_resources(args.resources)
|
||||
if args.examples and not resources:
|
||||
print("[ERROR] --examples requires --resources to be set.")
|
||||
sys.exit(1)
|
||||
|
||||
path = args.path
|
||||
|
||||
print(f"Initializing skill: {skill_name}")
|
||||
print(f" Location: {path}")
|
||||
if resources:
|
||||
print(f" Resources: {', '.join(resources)}")
|
||||
if args.examples:
|
||||
print(" Examples: enabled")
|
||||
else:
|
||||
print(" Resources: none (create as needed)")
|
||||
print()
|
||||
|
||||
result = init_skill(skill_name, path, resources, args.examples)
|
||||
|
||||
if result:
|
||||
sys.exit(0)
|
||||
else:
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
|
@ -1,111 +0,0 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder

Usage:
    python utils/package_skill.py <path/to/skill-folder> [output-directory]

Example:
    python utils/package_skill.py skills/public/my-skill
    python utils/package_skill.py skills/public/my-skill ./dist
"""

import sys
import zipfile
from pathlib import Path

from quick_validate import validate_skill


def package_skill(skill_path, output_dir=None):
    """
    Package a skill folder into a .skill file.

    Args:
        skill_path: Path to the skill folder
        output_dir: Optional output directory for the .skill file (defaults to current directory)

    Returns:
        Path to the created .skill file, or None if error
    """
    skill_path = Path(skill_path).resolve()

    # Validate skill folder exists
    if not skill_path.exists():
        print(f"[ERROR] Skill folder not found: {skill_path}")
        return None

    if not skill_path.is_dir():
        print(f"[ERROR] Path is not a directory: {skill_path}")
        return None

    # Validate SKILL.md exists
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        print(f"[ERROR] SKILL.md not found in {skill_path}")
        return None

    # Run validation before packaging
    print("Validating skill...")
    valid, message = validate_skill(skill_path)
    if not valid:
        print(f"[ERROR] Validation failed: {message}")
        print("        Please fix the validation errors before packaging.")
        return None
    print(f"[OK] {message}\n")

    # Determine output location
    skill_name = skill_path.name
    if output_dir:
        output_path = Path(output_dir).resolve()
        output_path.mkdir(parents=True, exist_ok=True)
    else:
        output_path = Path.cwd()

    skill_filename = output_path / f"{skill_name}.skill"

    # Create the .skill file (zip format)
    try:
        with zipfile.ZipFile(skill_filename, "w", zipfile.ZIP_DEFLATED) as zipf:
            # Walk through the skill directory
            for file_path in skill_path.rglob("*"):
                if file_path.is_file():
                    # Calculate the relative path within the zip
                    arcname = file_path.relative_to(skill_path.parent)
                    zipf.write(file_path, arcname)
                    print(f"  Added: {arcname}")

        print(f"\n[OK] Successfully packaged skill to: {skill_filename}")
        return skill_filename

    except Exception as e:
        print(f"[ERROR] Error creating .skill file: {e}")
        return None


def main():
    if len(sys.argv) < 2:
        print("Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]")
        print("\nExample:")
        print("  python utils/package_skill.py skills/public/my-skill")
        print("  python utils/package_skill.py skills/public/my-skill ./dist")
        sys.exit(1)

    skill_path = sys.argv[1]
    output_dir = sys.argv[2] if len(sys.argv) > 2 else None

    print(f"Packaging skill: {skill_path}")
    if output_dir:
        print(f"  Output directory: {output_dir}")
    print()

    result = package_skill(skill_path, output_dir)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
@ -1,101 +0,0 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""

import re
import sys
from pathlib import Path

import yaml

MAX_SKILL_NAME_LENGTH = 64


def validate_skill(skill_path):
    """Basic validation of a skill"""
    skill_path = Path(skill_path)

    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        return False, "SKILL.md not found"

    content = skill_md.read_text()
    if not content.startswith("---"):
        return False, "No YAML frontmatter found"

    match = re.match(r"^---\n(.*?)\n---", content, re.DOTALL)
    if not match:
        return False, "Invalid frontmatter format"

    frontmatter_text = match.group(1)

    try:
        frontmatter = yaml.safe_load(frontmatter_text)
        if not isinstance(frontmatter, dict):
            return False, "Frontmatter must be a YAML dictionary"
    except yaml.YAMLError as e:
        return False, f"Invalid YAML in frontmatter: {e}"

    allowed_properties = {"name", "description", "license", "allowed-tools", "metadata"}

    unexpected_keys = set(frontmatter.keys()) - allowed_properties
    if unexpected_keys:
        allowed = ", ".join(sorted(allowed_properties))
        unexpected = ", ".join(sorted(unexpected_keys))
        return (
            False,
            f"Unexpected key(s) in SKILL.md frontmatter: {unexpected}. Allowed properties are: {allowed}",
        )

    if "name" not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if "description" not in frontmatter:
        return False, "Missing 'description' in frontmatter"

    name = frontmatter.get("name", "")
    if not isinstance(name, str):
        return False, f"Name must be a string, got {type(name).__name__}"
    name = name.strip()
    if name:
        if not re.match(r"^[a-z0-9-]+$", name):
            return (
                False,
                f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)",
            )
        if name.startswith("-") or name.endswith("-") or "--" in name:
            return (
                False,
                f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens",
            )
        if len(name) > MAX_SKILL_NAME_LENGTH:
            return (
                False,
                f"Name is too long ({len(name)} characters). "
                f"Maximum is {MAX_SKILL_NAME_LENGTH} characters.",
            )

    description = frontmatter.get("description", "")
    if not isinstance(description, str):
        return False, f"Description must be a string, got {type(description).__name__}"
    description = description.strip()
    if description:
        if "<" in description or ">" in description:
            return False, "Description cannot contain angle brackets (< or >)"
        if len(description) > 1024:
            return (
                False,
                f"Description is too long ({len(description)} characters). Maximum is 1024 characters.",
            )

    return True, "Skill is valid!"


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python quick_validate.py <skill_directory>")
        sys.exit(1)

    valid, message = validate_skill(sys.argv[1])
    print(message)
    sys.exit(0 if valid else 1)
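The validator's frontmatter handling can be exercised in isolation. A minimal sketch (the sample document below is invented for illustration; the two regexes mirror the ones used by `validate_skill`):

```python
import re

# Invented sample SKILL.md content, for illustration only.
sample = "---\nname: my-skill\ndescription: Example skill.\n---\n\n# My Skill\n"

# Capture the YAML frontmatter block between the --- markers.
match = re.match(r"^---\n(.*?)\n---", sample, re.DOTALL)
assert match is not None
frontmatter_text = match.group(1)

# Hyphen-case name check, same pattern as the validator.
assert re.match(r"^[a-z0-9-]+$", "my-skill")

print(frontmatter_text)
```

Note that the non-greedy `(.*?)` stops at the first closing `---`, so any `---` inside the document body is left alone.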
@ -1,29 +0,0 @@
---
name: songsee
description: Generate spectrograms and feature-panel visualizations from audio with the songsee CLI.
homepage: https://github.com/steipete/songsee
metadata: {"clawdbot":{"emoji":"🌊","requires":{"bins":["songsee"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/songsee","bins":["songsee"],"label":"Install songsee (brew)"}]}}
---

# songsee

Generate spectrograms + feature panels from audio.

Quick start
- Spectrogram: `songsee track.mp3`
- Multi-panel: `songsee track.mp3 --viz spectrogram,mel,chroma,hpss,selfsim,loudness,tempogram,mfcc,flux`
- Time slice: `songsee track.mp3 --start 12.5 --duration 8 -o slice.jpg`
- Stdin: `cat track.mp3 | songsee - --format png -o out.png`

Common flags
- `--viz` list (repeatable or comma-separated)
- `--style` palette (classic, magma, inferno, viridis, gray)
- `--width` / `--height` output size
- `--window` / `--hop` FFT settings
- `--min-freq` / `--max-freq` frequency range
- `--start` / `--duration` time slice
- `--format` jpg|png

Notes
- WAV/MP3 decode natively; other formats use ffmpeg if available.
- Multiple `--viz` renders a grid.
@ -1,26 +0,0 @@
---
name: sonoscli
description: Control Sonos speakers (discover/status/play/volume/group).
homepage: https://sonoscli.sh
metadata: {"clawdbot":{"emoji":"🔊","requires":{"bins":["sonos"]},"install":[{"id":"go","kind":"go","module":"github.com/steipete/sonoscli/cmd/sonos@latest","bins":["sonos"],"label":"Install sonoscli (go)"}]}}
---

# Sonos CLI

Use `sonos` to control Sonos speakers on the local network.

Quick start
- `sonos discover`
- `sonos status --name "Kitchen"`
- `sonos play|pause|stop --name "Kitchen"`
- `sonos volume set 15 --name "Kitchen"`

Common tasks
- Grouping: `sonos group status|join|unjoin|party|solo`
- Favorites: `sonos favorites list|open`
- Queue: `sonos queue list|play|clear`
- Spotify search (via SMAPI): `sonos smapi search --service "Spotify" --category tracks "query"`

Notes
- If SSDP fails, specify `--ip <speaker-ip>`.
- Spotify Web API search is optional and requires `SPOTIFY_CLIENT_ID/SECRET`.
@ -1,34 +0,0 @@
---
name: spotify-player
description: Terminal Spotify playback/search via spogo (preferred) or spotify_player.
homepage: https://www.spotify.com
metadata: {"clawdbot":{"emoji":"🎵","requires":{"anyBins":["spogo","spotify_player"]},"install":[{"id":"brew","kind":"brew","formula":"spogo","tap":"steipete/tap","bins":["spogo"],"label":"Install spogo (brew)"},{"id":"brew","kind":"brew","formula":"spotify_player","bins":["spotify_player"],"label":"Install spotify_player (brew)"}]}}
---

# spogo / spotify_player

Use `spogo` **(preferred)** for Spotify playback/search. Fall back to `spotify_player` if needed.

Requirements
- Spotify Premium account.
- Either `spogo` or `spotify_player` installed.

spogo setup
- Import cookies: `spogo auth import --browser chrome`

Common CLI commands
- Search: `spogo search track "query"`
- Playback: `spogo play|pause|next|prev`
- Devices: `spogo device list`, `spogo device set "<name|id>"`
- Status: `spogo status`

spotify_player commands (fallback)
- Search: `spotify_player search "query"`
- Playback: `spotify_player playback play|pause|next|previous`
- Connect device: `spotify_player connect`
- Like track: `spotify_player like`

Notes
- Config folder: `~/.config/spotify-player` (e.g., `app.toml`).
- For Spotify Connect integration, set a user `client_id` in config.
- TUI shortcuts are available via `?` in the app.
@ -1,61 +0,0 @@
---
name: things-mac
description: Manage Things 3 via the `things` CLI on macOS (add/update projects+todos via URL scheme; read/search/list from the local Things database). Use when a user asks Clawdbot to add a task to Things, list inbox/today/upcoming, search tasks, or inspect projects/areas/tags.
homepage: https://github.com/ossianhempel/things3-cli
metadata: {"clawdbot":{"emoji":"✅","os":["darwin"],"requires":{"bins":["things"]},"install":[{"id":"go","kind":"go","module":"github.com/ossianhempel/things3-cli/cmd/things@latest","bins":["things"],"label":"Install things3-cli (go)"}]}}
---

# Things 3 CLI

Use `things` to read your local Things database (inbox/today/search/projects/areas/tags) and to add/update todos via the Things URL scheme.

Setup
- Install (recommended, Apple Silicon): `GOBIN=/opt/homebrew/bin go install github.com/ossianhempel/things3-cli/cmd/things@latest`
- If DB reads fail: grant **Full Disk Access** to the calling app (Terminal for manual runs; `Clawdbot.app` for gateway runs).
- Optional: set `THINGSDB` (or pass `--db`) to point at your `ThingsData-*` folder.
- Optional: set `THINGS_AUTH_TOKEN` to avoid passing `--auth-token` for update ops.

Read-only (DB)
- `things inbox --limit 50`
- `things today`
- `things upcoming`
- `things search "query"`
- `things projects` / `things areas` / `things tags`

Write (URL scheme)
- Prefer safe preview: `things --dry-run add "Title"`
- Add: `things add "Title" --notes "..." --when today --deadline 2026-01-02`
- Bring Things to front: `things --foreground add "Title"`

Examples: add a todo
- Basic: `things add "Buy milk"`
- With notes: `things add "Buy milk" --notes "2% + bananas"`
- Into a project/area: `things add "Book flights" --list "Travel"`
- Into a project heading: `things add "Pack charger" --list "Travel" --heading "Before"`
- With tags: `things add "Call dentist" --tags "health,phone"`
- Checklist: `things add "Trip prep" --checklist-item "Passport" --checklist-item "Tickets"`
- From STDIN (multi-line => title + notes):
  - `cat <<'EOF' | things add -`
  - `Title line`
  - `Notes line 1`
  - `Notes line 2`
  - `EOF`

Examples: modify a todo (needs auth token)
- First: get the ID (UUID column): `things search "milk" --limit 5`
- Auth: set `THINGS_AUTH_TOKEN` or pass `--auth-token <TOKEN>`
- Title: `things update --id <UUID> --auth-token <TOKEN> "New title"`
- Notes replace: `things update --id <UUID> --auth-token <TOKEN> --notes "New notes"`
- Notes append/prepend: `things update --id <UUID> --auth-token <TOKEN> --append-notes "..."` / `--prepend-notes "..."`
- Move lists: `things update --id <UUID> --auth-token <TOKEN> --list "Travel" --heading "Before"`
- Tags replace/add: `things update --id <UUID> --auth-token <TOKEN> --tags "a,b"` / `things update --id <UUID> --auth-token <TOKEN> --add-tags "a,b"`
- Complete/cancel (soft-delete-ish): `things update --id <UUID> --auth-token <TOKEN> --completed` / `--canceled`
- Safe preview: `things --dry-run update --id <UUID> --auth-token <TOKEN> --completed`

Delete a todo?
- Not supported by `things3-cli` right now (no “delete/move-to-trash” write command; `things trash` is read-only listing).
- Options: use Things UI to delete/trash, or mark as `--completed` / `--canceled` via `things update`.

Notes
- macOS-only.
- `--dry-run` prints the URL and does not open Things.
@ -1,84 +0,0 @@
---
name: trello
description: Manage Trello boards, lists, and cards via the Trello REST API.
homepage: https://developer.atlassian.com/cloud/trello/rest/
metadata: {"clawdbot":{"emoji":"📋","requires":{"bins":["jq"],"env":["TRELLO_API_KEY","TRELLO_TOKEN"]}}}
---

# Trello Skill

Manage Trello boards, lists, and cards directly from Clawdbot.

## Setup

1. Get your API key: https://trello.com/app-key
2. Generate a token (click "Token" link on that page)
3. Set environment variables:
   ```bash
   export TRELLO_API_KEY="your-api-key"
   export TRELLO_TOKEN="your-token"
   ```

## Usage

All commands use curl to hit the Trello REST API.

### List boards
```bash
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id}'
```

### List lists in a board
```bash
curl -s "https://api.trello.com/1/boards/{boardId}/lists?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id}'
```

### List cards in a list
```bash
curl -s "https://api.trello.com/1/lists/{listId}/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id, desc}'
```

### Create a card
```bash
curl -s -X POST "https://api.trello.com/1/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
  -d "idList={listId}" \
  -d "name=Card Title" \
  -d "desc=Card description"
```

### Move a card to another list
```bash
curl -s -X PUT "https://api.trello.com/1/cards/{cardId}?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
  -d "idList={newListId}"
```

### Add a comment to a card
```bash
curl -s -X POST "https://api.trello.com/1/cards/{cardId}/actions/comments?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
  -d "text=Your comment here"
```

### Archive a card
```bash
curl -s -X PUT "https://api.trello.com/1/cards/{cardId}?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
  -d "closed=true"
```

## Notes

- Board/List/Card IDs can be found in the Trello URL or via the list commands
- The API key and token provide full access to your Trello account - keep them secret!
- Rate limits: 300 requests per 10 seconds per API key; 100 requests per 10 seconds per token; `/1/members` endpoints are limited to 100 requests per 900 seconds

## Examples

```bash
# Get all boards
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN&fields=name,id" | jq

# Find a specific board by name
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | select(.name | contains("Work"))'

# Get all cards on a board
curl -s "https://api.trello.com/1/boards/{boardId}/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, list: .idList}'
```
@ -1,29 +0,0 @@
---
name: video-frames
description: Extract frames or short clips from videos using ffmpeg.
homepage: https://ffmpeg.org
metadata: {"clawdbot":{"emoji":"🎞️","requires":{"bins":["ffmpeg"]},"install":[{"id":"brew","kind":"brew","formula":"ffmpeg","bins":["ffmpeg"],"label":"Install ffmpeg (brew)"}]}}
---

# Video Frames (ffmpeg)

Extract a single frame from a video, or create quick thumbnails for inspection.

## Quick start

First frame:

```bash
{baseDir}/scripts/frame.sh /path/to/video.mp4 --out /tmp/frame.jpg
```

At a timestamp:

```bash
{baseDir}/scripts/frame.sh /path/to/video.mp4 --time 00:00:10 --out /tmp/frame-10s.jpg
```

## Notes

- Prefer `--time` for “what is happening around here?”.
- Use a `.jpg` for quick share; use `.png` for crisp UI frames.
@ -1,81 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
  cat >&2 <<'EOF'
Usage:
  frame.sh <video-file> [--time HH:MM:SS] [--index N] --out /path/to/frame.jpg

Examples:
  frame.sh video.mp4 --out /tmp/frame.jpg
  frame.sh video.mp4 --time 00:00:10 --out /tmp/frame-10s.jpg
  frame.sh video.mp4 --index 0 --out /tmp/frame0.png
EOF
  exit 2
}

if [[ "${1:-}" == "" || "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
  usage
fi

in="${1:-}"
shift || true

time=""
index=""
out=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    --time)
      time="${2:-}"
      shift 2
      ;;
    --index)
      index="${2:-}"
      shift 2
      ;;
    --out)
      out="${2:-}"
      shift 2
      ;;
    *)
      echo "Unknown arg: $1" >&2
      usage
      ;;
  esac
done

if [[ ! -f "$in" ]]; then
  echo "File not found: $in" >&2
  exit 1
fi

if [[ "$out" == "" ]]; then
  echo "Missing --out" >&2
  usage
fi

mkdir -p "$(dirname "$out")"

if [[ "$index" != "" ]]; then
  ffmpeg -hide_banner -loglevel error -y \
    -i "$in" \
    -vf "select=eq(n\\,${index})" \
    -vframes 1 \
    "$out"
elif [[ "$time" != "" ]]; then
  ffmpeg -hide_banner -loglevel error -y \
    -ss "$time" \
    -i "$in" \
    -frames:v 1 \
    "$out"
else
  ffmpeg -hide_banner -loglevel error -y \
    -i "$in" \
    -vf "select=eq(n\\,0)" \
    -vframes 1 \
    "$out"
fi

echo "$out"
@ -1,35 +0,0 @@
---
name: voice-call
description: Start voice calls via the Clawdbot voice-call plugin.
metadata: {"clawdbot":{"emoji":"📞","skillKey":"voice-call","requires":{"config":["plugins.entries.voice-call.enabled"]}}}
---

# Voice Call

Use the voice-call plugin to start or inspect calls (Twilio, Telnyx, Plivo, or mock).

## CLI

```bash
clawdbot voicecall call --to "+15555550123" --message "Hello from Clawdbot"
clawdbot voicecall status --call-id <id>
```

## Tool

Use `voice_call` for agent-initiated calls.

Actions:
- `initiate_call` (message, to?, mode?)
- `continue_call` (callId, message)
- `speak_to_user` (callId, message)
- `end_call` (callId)
- `get_status` (callId)

Notes:
- Requires the voice-call plugin to be enabled.
- Plugin config lives under `plugins.entries.voice-call.config`.
- Twilio config: `provider: "twilio"` + `twilio.accountSid/authToken` + `fromNumber`.
- Telnyx config: `provider: "telnyx"` + `telnyx.apiKey/connectionId` + `fromNumber`.
- Plivo config: `provider: "plivo"` + `plivo.authId/authToken` + `fromNumber`.
- Dev fallback: `provider: "mock"` (no network).
@ -1,42 +0,0 @@
---
name: wacli
description: Send WhatsApp messages to other people or search/sync WhatsApp history via the wacli CLI (not for normal user chats).
homepage: https://wacli.sh
metadata: {"clawdbot":{"emoji":"📱","requires":{"bins":["wacli"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/wacli","bins":["wacli"],"label":"Install wacli (brew)"},{"id":"go","kind":"go","module":"github.com/steipete/wacli/cmd/wacli@latest","bins":["wacli"],"label":"Install wacli (go)"}]}}
---

# wacli

Use `wacli` only when the user explicitly asks you to message someone else on WhatsApp or when they ask to sync/search WhatsApp history.
Do NOT use `wacli` for normal user chats; Clawdbot routes WhatsApp conversations automatically.
If the user is chatting with you on WhatsApp, you should not reach for this tool unless they ask you to contact a third party.

Safety
- Require explicit recipient + message text.
- Confirm recipient + message before sending.
- If anything is ambiguous, ask a clarifying question.

Auth + sync
- `wacli auth` (QR login + initial sync)
- `wacli sync --follow` (continuous sync)
- `wacli doctor`

Find chats + messages
- `wacli chats list --limit 20 --query "name or number"`
- `wacli messages search "query" --limit 20 --chat <jid>`
- `wacli messages search "invoice" --after 2025-01-01 --before 2025-12-31`

History backfill
- `wacli history backfill --chat <jid> --requests 2 --count 50`

Send
- Text: `wacli send text --to "+14155551212" --message "Hello! Are you free at 3pm?"`
- Group: `wacli send text --to "1234567890-123456789@g.us" --message "Running 5 min late."`
- File: `wacli send file --to "+14155551212" --file /path/agenda.pdf --caption "Agenda"`

Notes
- Store dir: `~/.wacli` (override with `--store`).
- Use `--json` for machine-readable output when parsing.
- Backfill requires your phone online; results are best-effort.
- WhatsApp CLI is not needed for routine user chats; it’s for messaging other people.
- JIDs: direct chats look like `<number>@s.whatsapp.net`; groups look like `<id>@g.us` (use `wacli chats list` to find).
@ -1,49 +0,0 @@
---
name: weather
description: Get current weather and forecasts (no API key required).
homepage: https://wttr.in/:help
metadata: {"clawdbot":{"emoji":"🌤️","requires":{"bins":["curl"]}}}
---

# Weather

Two free services, no API keys needed.

## wttr.in (primary)

Quick one-liner:
```bash
curl -s "wttr.in/London?format=3"
# Output: London: ⛅️ +8°C
```

Compact format:
```bash
curl -s "wttr.in/London?format=%l:+%c+%t+%h+%w"
# Output: London: ⛅️ +8°C 71% ↙5km/h
```

Full forecast:
```bash
curl -s "wttr.in/London?T"
```

Format codes: `%c` condition · `%t` temp · `%h` humidity · `%w` wind · `%l` location · `%m` moon

Tips:
- URL-encode spaces: `wttr.in/New+York`
- Airport codes: `wttr.in/JFK`
- Units: `?m` (metric) `?u` (USCS)
- Today only: `?1` · Current only: `?0`
- PNG: `curl -s "wttr.in/Berlin.png" -o /tmp/weather.png`

## Open-Meteo (fallback, JSON)

Free, no key, good for programmatic use:
```bash
curl -s "https://api.open-meteo.com/v1/forecast?latitude=51.5&longitude=-0.12&current_weather=true"
```

Find coordinates for a city, then query. Returns JSON with temp, windspeed, weathercode.

Docs: https://open-meteo.com/en/docs
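A minimal sketch of parsing the Open-Meteo response fields mentioned above (the payload here is invented to show the shape; real responses carry more fields):

```python
import json

# Invented sample of the current_weather object; values are made up.
raw = '{"current_weather": {"temperature": 8.3, "windspeed": 11.2, "weathercode": 3}}'
cw = json.loads(raw)["current_weather"]

# Format a one-line summary from the parsed fields.
print(f"{cw['temperature']}°C, wind {cw['windspeed']} km/h, code {cw['weathercode']}")
```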