AI Settings

Configure how Cadeo's AI features work in Settings > AI.

Local AI (Ollama) — Desktop Only

Run AI models on your machine for complete privacy. Cadeo uses a two-tier model system for the best balance of speed and quality.

Primary Model

The main AI model used for task breakdown, meeting summaries, inbox triage, and planning.

Model       | RAM Recommended | Download Size | Notes
Qwen 3 8B   | 16 GB+          | 5 GB          | Recommended for most users
Qwen 3 14B  | 32 GB+          | 9 GB          | Higher quality
Qwen 3 32B  | 64 GB+          | 20 GB         | Best quality (M-series Max/Ultra)

The RAM numbers are recommendations, not hard requirements. They account for the headroom macOS, your browser, and other apps need alongside local inference. If you download a model slightly above your recommended tier, it will still run — but expect thermal throttling, fan noise, and slower response times on smaller Macs. Cadeo auto-selects the best-fit model for your machine with a Recommended badge.

Fast Model

A lightweight model for instant responses on quick tasks like triage classification and quick capture.

Model      | RAM Recommended | Download Size | Notes
Gemma 3 4B | 8 GB+           | 3 GB          | Optional — falls back to primary if not installed

Cadeo auto-detects your Mac's RAM and:

  • Disables models that can't physically run on your hardware
  • Shows a Recommended badge on the best-fit primary model
  • Picks the largest installed primary model as your default automatically

Managing Models

  • Download — shows a progress bar with percentage complete. Downloads continue in the background if you click away from the AI Settings tab or open the menu bar.
  • Delete — remove installed models to free disk space
  • Status — shows install size in GB

Upgrading from DeepSeek R1

If you previously installed DeepSeek R1 models, you'll see an Upgrade Now banner in AI settings. Click it to download the recommended Qwen 3 model plus Gemma 3 4B automatically. Once the new models finish downloading, Cadeo deletes your old DeepSeek / Llama models, so you don't need to manage legacy models manually. If any download fails, the legacy models are kept in place so you're never left without working AI.
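This is a download-first, delete-after pattern: legacy models are only removed once every replacement has landed. A minimal sketch, where `download` and `delete` are hypothetical stand-ins for Cadeo's internals:

```python
# Illustrative sketch of the "download first, delete after" upgrade flow.
# download() and delete() are hypothetical stand-ins, not Cadeo's real API.
def upgrade(download, delete, new_models, legacy_models):
    """Download replacements before touching legacy models.

    If any download fails, legacy models stay in place so the user
    is never left without working AI.
    """
    try:
        for model in new_models:
            download(model)
    except Exception:
        return False              # a download failed: legacy models untouched
    for model in legacy_models:
        delete(model)             # only runs after every download succeeded
    return True
```

The design choice is that deletion is deferred until the whole download phase succeeds, so a mid-upgrade network failure can't leave you with no local model at all.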

Ollama Status

Shows whether the AI engine is running or offline, with version number and a Refresh button.

How the AI Learns

Cadeo's local AI gets smarter as you use it:

  • Triage patterns — when you accept, modify, or skip AI suggestions during inbox triage, the AI learns your preferences (e.g., "client tasks always go to the Clients area with high priority")
  • Task breakdown style — the AI adapts to your preferred number of subtasks and complexity level
  • Capture style — the AI learns how you title tasks (e.g., action verbs, terse vs descriptive)
  • Focus preferences — the AI learns when you prefer deep work vs quick tasks

These patterns are stored locally in your app data directory. After 5+ interactions per feature, the AI begins tailoring its suggestions. You don't need to configure anything — it learns automatically.

Cross-Device Sync

If you use Cadeo on multiple Macs with local AI, your learning profile syncs automatically via Supabase on each app launch, so every device benefits from the same learned patterns.

This sync is disabled when Save AI conversations locally only is enabled (see Data Privacy below).

Session Memory

Within a single session (from app launch to quit), the AI remembers your recent interactions per feature. For example, if you ask the AI to break down a task and then ask a follow-up question, it remembers the prior exchange. Session memory is cleared when you restart the app.
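Because this memory lives only in the running process, quitting the app clears it. A minimal sketch of per-feature session memory under that assumption (class and method names are illustrative):

```python
# Illustrative sketch of per-feature session memory. It lives only in
# process memory, which is why it clears when the app restarts.
class SessionMemory:
    def __init__(self, max_turns: int = 10):
        self.history: dict[str, list[tuple[str, str]]] = {}
        self.max_turns = max_turns

    def remember(self, feature: str, user_msg: str, ai_reply: str) -> None:
        """Record one exchange for a feature, keeping only recent turns."""
        turns = self.history.setdefault(feature, [])
        turns.append((user_msg, ai_reply))
        del turns[:-self.max_turns]

    def context_for(self, feature: str) -> list[tuple[str, str]]:
        """Prior exchanges fed back in when you ask a follow-up question."""
        return self.history.get(feature, [])
```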

Context-Aware Suggestions

When you use inbox triage, focus suggestions, or day planning, the AI automatically searches your local tasks and notes for related context. For example, when triaging a task "Follow up with Sarah about launch", the AI might see your recent note "Product launch meeting — discussed Q3 timeline" and factor that into its suggestions.

This works offline using local embeddings — no cloud connection needed.
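Embedding-based retrieval like this typically ranks notes by vector similarity to the item being triaged. The sketch below shows the general technique with plain cosine similarity; it is not Cadeo's implementation, and the embedding step is represented by precomputed toy vectors.

```python
# Illustrative cosine-similarity search over locally embedded notes.
# Real systems embed text with a local model; here vectors are precomputed.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def related_context(query_vec, notes, top_k=3):
    """notes: list of (text, vector). Returns the best-matching note texts."""
    ranked = sorted(notes, key=lambda n: cosine(query_vec, n[1]), reverse=True)
    return [text for text, _vec in ranked[:top_k]]
```

Since both the embeddings and the search run on-device, a task like "Follow up with Sarah about launch" can surface the launch-meeting note without any network call.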

Cloud AI

Uses Anthropic's cloud AI for processing.

  • Free tier: 10 requests per day — usage bar shows remaining requests
  • BYOK (Bring Your Own Key): Unlimited requests with your own API key
  • Usage bar turns orange at 90%+ consumption
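The free-tier bar is simple arithmetic over the daily limit. A sketch of that math, using the numbers above (function name and return shape are illustrative):

```python
# Illustrative free-tier usage math: 10 requests/day, warning at 90%.
DAILY_LIMIT = 10
WARN_THRESHOLD = 0.9

def usage_bar(used: int):
    """Return (remaining requests, bar color) for the free tier."""
    remaining = max(DAILY_LIMIT - used, 0)
    fraction = min(used / DAILY_LIMIT, 1.0)
    color = "orange" if fraction >= WARN_THRESHOLD else "normal"
    return remaining, color
```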

Provider Preference

Choose your preferred AI provider:

  • Local first — use Ollama, fall back to cloud if unavailable
  • Cloud with local fallback — prefer cloud, use local if offline
  • Cloud only — always use cloud AI

Data Privacy

  • Save AI conversations locally only — when enabled, AI chat history stays on your device, doesn't sync to the cloud, and AI profile sync is disabled