Overview
Frajola is a desktop meeting recorder built with Tauri. It records microphone + system audio to local WAV files, transcribes with local Whisper models, and can generate AI notes with Ollama or optional cloud providers.
The app is local-first: the default flow keeps audio and transcripts on your machine.
Status note: This documentation reflects the current implementation as of March 4, 2026.
First-Run Onboarding
On first launch, Frajola opens a guided setup wizard with these steps:
- Choose mode: Full Local or Transcription Only
- Grant permissions: microphone + screen/system audio on macOS
- Pick Whisper model: choose and download your transcription model
- Set up AI: install/start Ollama, select a model, or skip AI
- Finish: quick summary + overlay-only usage guidance
You can skip the AI step and keep transcription-only mode.
System Requirements
Current baseline
- macOS: 14.2+ (current validated baseline in app config)
- Windows: targeted for support; broader validation in progress
- Linux: targeted for support; broader validation in progress
Hardware guidance
Whisper and local AI quality/speed depend on CPU and available RAM. Start small and scale up:
- Whisper Base: default recommendation
- Ollama 1B/3B models: best for lower-memory machines
- 7B/8B models: higher quality, higher RAM/latency cost
Downloads
Desktop installers are available in the download section of the landing page.
macOS Permissions
Frajola checks permission status proactively in onboarding and refreshes when you return from System Settings.
- Microphone: required for your voice
- Screen & System Audio: required to capture meeting output
Use the onboarding "Open Settings" buttons to jump directly to the right macOS privacy pages.
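Under the hood, an "Open Settings" button on macOS typically launches a privacy-pane deep link. The sketch below is illustrative, not Frajola's actual code; the pane identifiers are common macOS URL-scheme values and should be verified on your OS version.

```python
# Deep links to the macOS privacy panes Frajola needs. Pane identifiers are
# assumptions based on the common macOS settings URL scheme.
PRIVACY_PANES = {
    "microphone": "x-apple.systempreferences:com.apple.preference.security?Privacy_Microphone",
    "screen_audio": "x-apple.systempreferences:com.apple.preference.security?Privacy_ScreenCapture",
}

def privacy_pane_command(kind: str) -> list[str]:
    """Build the `open` command that jumps to a privacy pane (macOS only)."""
    return ["open", PRIVACY_PANES[kind]]

# On macOS you would actually run it, e.g.:
# import subprocess; subprocess.run(privacy_pane_command("microphone"), check=True)
print(privacy_pane_command("microphone")[0])  # open
```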
Whisper Models
Local transcription currently supports these managed models:
| Model | Approx Size | Use Case |
|---|---|---|
| tiny | ~75 MB | Fastest, lowest quality |
| base (recommended) | ~142 MB | Best default balance |
| small | ~466 MB | Higher quality, slower |
| medium | ~1.5 GB | Highest quality currently in-app |
You can download/delete models in Settings and switch active model at any time.
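The "start small and scale up" guidance can be sketched as a simple picker over the sizes in the table above. The 4x working-set multiplier and the `suggest_whisper_model` helper are illustrative assumptions, not Frajola's actual recommendation logic (which, per the limitations below, does not exist yet).

```python
# Illustrative model picker using the approximate download sizes from the
# table above. RAM thresholds are rough rules of thumb, not app behavior.
WHISPER_MODELS = [
    # (name, approx download size in MB), largest first
    ("medium", 1500),
    ("small", 466),
    ("base", 142),
    ("tiny", 75),
]

def suggest_whisper_model(free_ram_mb: int) -> str:
    """Pick the largest model whose working set plausibly fits in free RAM."""
    for name, size_mb in WHISPER_MODELS:
        # Assume roughly 4x the download size is needed while transcribing.
        if free_ram_mb >= size_mb * 4:
            return name
    return "tiny"

print(suggest_whisper_model(8000))  # medium
print(suggest_whisper_model(1000))  # base
```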
AI Setup (Ollama / Cloud)
Local AI (default)
Frajola can install and start Ollama from onboarding on a best-effort basis, then pull a model with progress feedback.
Preset Ollama model options
- llama3.2:1b
- qwen2.5:3b
- llama3.2:3b
- qwen3.5:4b
- mistral:7b
- qwen2.5:7b
- llama3.1:8b
Cloud AI summaries (optional)
You can switch AI provider to OpenAI or Anthropic in Settings and provide your own API key. This applies to summaries only.
Current limitation: Cloud transcription is not implemented yet. Transcription remains local Whisper.
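The connectivity and model checks behind onboarding map onto Ollama's standard HTTP API: `GET /api/tags` on the default port lists installed models. This is a minimal sketch of that check, not Frajola's implementation; the helper names are hypothetical.

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address

def parse_model_tags(payload: dict) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in payload.get("models", [])]

def list_local_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Return installed Ollama model tags, or [] if the server is unreachable."""
    try:
        with urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return parse_model_tags(json.load(resp))
    except (URLError, OSError):
        return []

# Abbreviated example of the /api/tags payload shape:
sample = {"models": [{"name": "llama3.2:3b"}, {"name": "qwen2.5:7b"}]}
print(parse_model_tags(sample))  # ['llama3.2:3b', 'qwen2.5:7b']
```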
Recording Flow
- Start recording from the main app or overlay
- Pause/resume as needed
- Stop recording to finalize WAV
- Background transcription starts automatically
- If AI summaries are enabled, summarization runs after transcription
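The flow above is essentially a small state machine. This sketch models the allowed transitions under assumed state and event names; it is a conceptual illustration, not Frajola's internal recorder code.

```python
from enum import Enum, auto

class RecState(Enum):
    IDLE = auto()
    RECORDING = auto()
    PAUSED = auto()
    STOPPED = auto()  # WAV finalized; transcription starts in the background

# Allowed (event, state) -> next-state transitions, mirroring the list above.
TRANSITIONS = {
    ("start", RecState.IDLE): RecState.RECORDING,
    ("pause", RecState.RECORDING): RecState.PAUSED,
    ("resume", RecState.PAUSED): RecState.RECORDING,
    ("stop", RecState.RECORDING): RecState.STOPPED,
    ("stop", RecState.PAUSED): RecState.STOPPED,
}

def step(state: RecState, event: str) -> RecState:
    try:
        return TRANSITIONS[(event, state)]
    except KeyError:
        raise ValueError(f"cannot {event!r} while {state.name}")

state = RecState.IDLE
for event in ("start", "pause", "resume", "stop"):
    state = step(state, event)
print(state.name)  # STOPPED
```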
Overlay-Only Mode
Frajola includes a floating overlay window with recording controls.
- Minimize the main window to show overlay automatically
- Closing the main window hides it and keeps overlay available
- Click the dock/taskbar app icon to restore the main window
Meeting Library
The current library supports:
- Meeting list grouped by date
- Detail page with transcript, summary, action items, and audio playback
- Delete meeting command (database rows removed)
Re-transcription
You can reprocess a meeting from its detail screen. Re-transcription clears previous transcript/summary/action items and runs the pipeline again.
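The reset described above amounts to clearing prior outputs and re-queueing the meeting for the pipeline. This sketch uses hypothetical field names to show the idea; the real schema and status values may differ.

```python
# Illustrative re-transcription reset (hypothetical field/status names).
def retranscribe(meeting: dict) -> dict:
    """Clear previous pipeline outputs so the meeting is reprocessed cleanly."""
    meeting["transcript"] = None
    meeting["summary"] = None
    meeting["action_items"] = []
    meeting["status"] = "queued"  # a background worker would pick this up
    return meeting

m = retranscribe({
    "id": 7, "transcript": "old text", "summary": "old summary",
    "action_items": ["follow up"], "status": "done",
})
print(m["status"])  # queued
```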
Settings Overview
Current settings UI includes:
- General: theme
- AI Provider: enable/disable AI, provider selection, model selection, API keys, Ollama model pulls
- Transcription: Whisper model download/delete/select
| Key | Example Value | Purpose |
|---|---|---|
| ai_enabled | 1 | Enable or disable summaries |
| ai_provider | ollama | Summary provider |
| ai_model | qwen3.5:4b | Selected model for provider |
| whisper_model | base | Transcription model |
| onboarding_completed | 1 | First-run completion flag |
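Since the example values above are string-encoded ("1" for true), reading them safely means normalizing types at the edge. This is a minimal sketch of that pattern, assuming a flat string key/value store; the helper name is hypothetical.

```python
# Hypothetical flat key/value settings store mirroring the table above.
# Values are stored as strings, so booleans are encoded as "1"/"0".
DEFAULT_SETTINGS = {
    "ai_enabled": "1",
    "ai_provider": "ollama",
    "ai_model": "qwen3.5:4b",
    "whisper_model": "base",
    "onboarding_completed": "1",
}

def setting_bool(store: dict, key: str, default: bool = False) -> bool:
    """Read a string-encoded boolean setting, falling back to a default."""
    return store.get(key, "1" if default else "0") == "1"

print(setting_bool(DEFAULT_SETTINGS, "ai_enabled"))   # True
print(setting_bool(DEFAULT_SETTINGS, "missing_key"))  # False
```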
Current Limitations
- No user-facing search/filter UI in meeting library yet
- No export yet (Markdown/PDF/clipboard)
- No full UI i18n framework yet (interface remains English)
- No cloud transcription yet (OpenAI Whisper API path pending)
- No hardware-based model recommendation yet
- Meeting delete currently removes DB data but does not yet guarantee WAV file deletion
- Git integration workflow is not implemented yet
Next Features
See the roadmap page for shipped vs pending PRD features and upcoming priorities.
Permission Issues
- Run onboarding permission check and click "Open Settings" buttons
- Enable both microphone and screen/system audio permissions for Frajola
- Return to app and click refresh; if needed, restart the app
Ollama Issues
- Click "Refresh" in onboarding/settings to verify connectivity
- Use "Start Ollama" if installed but not running
- Pull at least one model before enabling local AI summaries
To verify the server is reachable from the command line, list installed models:

```shell
curl http://localhost:11434/api/tags
```
Frequently Asked Questions
Can I use Frajola without AI?
Yes. Choose "Transcription Only" during onboarding or disable AI summaries in Settings.
Can I use Qwen models?
Yes. Presets include Qwen models and you can also enter a custom Ollama tag.
Does Frajola support cloud transcription?
Not yet. Current transcription runs locally with Whisper.
Can I export notes to Markdown or PDF?
Not yet. Export is on the roadmap.