Overview

Frajola is a desktop meeting recorder built with Tauri. It records microphone + system audio to local WAV files, transcribes with local Whisper models, and can generate AI notes with Ollama or optional cloud providers.

The app is local-first: the default flow keeps audio and transcripts on your machine.

Status note: This documentation reflects the current implementation as of March 4, 2026.

First-Run Onboarding

On first launch, Frajola opens a guided setup wizard with these steps:

  1. Choose mode: Full Local or Transcription Only
  2. Grant permissions: microphone + screen/system audio on macOS
  3. Pick Whisper model: choose and download your transcription model
  4. Set up AI: install/start Ollama, select a model, or skip AI
  5. Finish: quick summary + overlay-only usage guidance

You can skip the AI step and keep transcription-only mode.

System Requirements

Current baseline

  • macOS: 14.2+ (current validated baseline in app config)
  • Windows: supported as a target platform; broader validation in progress
  • Linux: supported as a target platform; broader validation in progress

Hardware guidance

Whisper and local AI quality/speed depend on CPU and available RAM. Start small and scale up:

  • Whisper Base: default recommendation
  • Ollama 1B/3B models: best for lower-memory machines
  • 7B/8B models: higher quality, higher RAM/latency cost
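
As a rough rule of thumb (not a formula from Frajola itself), a 4-bit-quantized local model needs on the order of half a gigabyte of RAM per billion parameters, plus runtime overhead. The helper below sketches that estimate; the 0.55 GB/B constant and 1 GB overhead are assumptions, and real usage varies with context length and quantization scheme.

```python
def estimate_ram_gb(params_billions: float, gb_per_billion: float = 0.55) -> float:
    """Rough RAM estimate for a Q4-class quantized model.

    Rule of thumb only: ~0.55 GB per billion parameters for 4-bit
    quantization, plus ~1 GB of runtime overhead. Actual usage depends
    on context length and the exact quantization used.
    """
    return params_billions * gb_per_billion + 1.0

# Ballpark figures for the preset sizes mentioned above
for size in (1, 3, 7, 8):
    print(f"{size}B -> ~{estimate_ram_gb(size):.1f} GB")
```

By this estimate a 1B/3B model stays comfortable on 8 GB machines, while 7B/8B models want roughly 5-6 GB free for the model alone, which matches the "higher RAM/latency cost" note above.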

Downloads

Desktop installers are available in the download section of the landing page.

macOS Permissions

Frajola checks permission status proactively in onboarding and refreshes when you return from System Settings.

  • Microphone: required for your voice
  • Screen & System Audio: required to capture meeting output

Use the onboarding "Open Settings" buttons to jump directly to the right macOS privacy pages.
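
macOS exposes deep links into specific privacy panes via the `x-apple.systempreferences:` URL scheme; the pane identifiers below (`Privacy_Microphone`, `Privacy_ScreenCapture`) are real macOS identifiers, though whether Frajola's "Open Settings" buttons use exactly this mechanism is an assumption. A minimal sketch:

```python
import subprocess

# Real macOS privacy-pane identifiers; whether Frajola's "Open Settings"
# buttons use these exact deep links is an assumption.
PRIVACY_PANES = {
    "microphone": "Privacy_Microphone",
    "screen_capture": "Privacy_ScreenCapture",
}

def privacy_settings_url(pane: str) -> str:
    """Build the deep link that opens a specific macOS privacy pane."""
    return f"x-apple.systempreferences:com.apple.preference.security?{PRIVACY_PANES[pane]}"

def open_privacy_pane(pane: str) -> None:
    """Open System Settings at the given privacy pane (macOS only)."""
    subprocess.run(["open", privacy_settings_url(pane)], check=True)
```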

Whisper Models

Local transcription currently supports these managed models:

Model                 Approx. Size   Use Case
tiny                  ~75 MB         Fastest, lowest quality
base (recommended)    ~142 MB        Best default balance
small                 ~466 MB        Higher quality, slower
medium                ~1.5 GB        Highest quality currently in-app

You can download/delete models in Settings and switch active model at any time.

AI Setup (Ollama / Cloud)

Local AI (default)

Frajola can install/start Ollama from onboarding (best effort), then pull a model with progress feedback.

Preset Ollama model options

  • llama3.2:1b
  • qwen2.5:3b
  • llama3.2:3b
  • qwen3.5:4b
  • mistral:7b
  • qwen2.5:7b
  • llama3.1:8b
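
Pulling one of these presets goes through Ollama's local HTTP API: a POST to `/api/pull`, which streams progress events as JSON lines (that streaming is what makes per-model progress feedback possible). The sketch below only builds the request; the default port is Ollama's standard one, and the helper name is illustrative.

```python
import json

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local port

def pull_request(model_tag: str) -> tuple[str, bytes]:
    """Build the (url, body) pair for Ollama's model-pull endpoint.

    /api/pull streams progress events as JSON lines, which is how a UI
    can show download progress while a model is being pulled.
    """
    if ":" not in model_tag:
        raise ValueError("expected a name:tag identifier, e.g. llama3.2:3b")
    body = json.dumps({"model": model_tag, "stream": True}).encode()
    return f"{OLLAMA_URL}/api/pull", body
```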

Cloud AI summaries (optional)

You can switch AI provider to OpenAI or Anthropic in Settings and provide your own API key. This applies to summaries only.

Current limitation: Cloud transcription is not implemented yet. Transcription remains local Whisper.

Recording Flow

  1. Start recording from the main app or overlay
  2. Pause/resume as needed
  3. Stop recording to finalize WAV
  4. Background transcription starts automatically
  5. If AI summaries are enabled, summarization runs after transcription

Overlay-Only Mode

Frajola includes a floating overlay window with recording controls.

  • Minimize the main window to show overlay automatically
  • Closing the main window hides it and keeps overlay available
  • Click the dock/taskbar app icon to restore the main window

Meeting Library

The current library supports:

  • Meeting list grouped by date
  • Detail page with transcript, summary, action items, and audio playback
  • Delete meeting command (database rows removed)

Re-transcription

You can reprocess a meeting from its detail screen. Re-transcription clears previous transcript/summary/action items and runs the pipeline again.
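
The "clear then re-run" behavior can be sketched as two SQL statements against a hypothetical schema (Frajola's actual table layout may differ):

```python
import sqlite3

def reset_meeting_outputs(conn: sqlite3.Connection, meeting_id: int) -> None:
    """Clear transcript/summary/action items so the pipeline can re-run.

    Hypothetical schema: a `meetings` row plus child `action_items` rows.
    """
    conn.execute(
        "UPDATE meetings SET transcript = NULL, summary = NULL WHERE id = ?",
        (meeting_id,),
    )
    conn.execute("DELETE FROM action_items WHERE meeting_id = ?", (meeting_id,))
    conn.commit()
```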

Settings Overview

Current settings UI includes:

  • General: theme
  • AI Provider: enable/disable AI, provider selection, model selection, API keys, Ollama model pulls
  • Transcription: Whisper model download/delete/select

Key                    Example Value   Purpose
ai_enabled             1               Enable or disable summaries
ai_provider            ollama          Summary provider
ai_model               qwen3.5:4b      Selected model for provider
whisper_model          base            Transcription model
onboarding_completed   1               First-run completion flag
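
The keys above suggest a flat key/value store. A reader of that store might look like the sketch below; the defaults and helper names are assumptions for illustration, not Frajola's code.

```python
# Hypothetical defaults for the settings keys documented above.
DEFAULTS = {
    "ai_enabled": "0",
    "ai_provider": "ollama",
    "whisper_model": "base",
    "onboarding_completed": "0",
}

def get_setting(store: dict[str, str], key: str) -> str:
    """Read a setting, falling back to an assumed default."""
    return store.get(key, DEFAULTS.get(key, ""))

def ai_enabled(store: dict[str, str]) -> bool:
    """Flag values are stored as "0"/"1" strings in this sketch."""
    return get_setting(store, "ai_enabled") == "1"
```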

Current Limitations

  • No user-facing search/filter UI in meeting library yet
  • No export yet (Markdown/PDF/clipboard)
  • No full UI i18n framework yet (interface remains English)
  • No cloud transcription yet (OpenAI Whisper API path pending)
  • No hardware-based model recommendation yet
  • Meeting delete currently removes DB data but does not yet guarantee WAV file deletion
  • Git integration workflow is not implemented yet

Permission Issues

  1. Run onboarding permission check and click "Open Settings" buttons
  2. Enable both microphone and screen/system audio permissions for Frajola
  3. Return to app and click refresh; if needed, restart the app

Ollama Issues

  • Click "Refresh" in onboarding/settings to verify connectivity
  • Use "Start Ollama" if installed but not running
  • Pull at least one model before enabling local AI summaries
You can also verify connectivity from a terminal by querying Ollama's local API:

curl http://localhost:11434/api/tags
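
That endpoint returns JSON listing the models already pulled, as `{"models": [{"name": ...}, ...]}`. A small sketch for checking that the model you plan to use is present (the sample payload below is illustrative):

```python
def installed_models(tags_response: dict) -> list[str]:
    """Extract model tags from Ollama's /api/tags response."""
    return [m["name"] for m in tags_response.get("models", [])]

def model_ready(tags_response: dict, wanted: str) -> bool:
    """True when the model you plan to use has already been pulled."""
    return wanted in installed_models(tags_response)

# Illustrative /api/tags payload shape:
sample = {"models": [{"name": "llama3.2:3b"}, {"name": "qwen2.5:7b"}]}
print(model_ready(sample, "llama3.2:3b"))  # → True
```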

Frequently Asked Questions

Can I use Frajola without AI?

Yes. Choose "Transcription Only" during onboarding or disable AI summaries in Settings.

Can I use Qwen models?

Yes. Presets include Qwen models and you can also enter a custom Ollama tag.

Does Frajola support cloud transcription?

Not yet. Current transcription runs locally with Whisper.

Can I export notes to Markdown or PDF?

Not yet. Export is on the roadmap.