Published 2026-04-02.
Last modified 2026-04-05.
Time to read: 15 minutes.
llm collection.
- Claude Code Is Magnificent, But Claude Desktop Is a Hot Mess
- Gemini vs. Sonnet 3.5 and 4.6 for Meticulous Work
- Gemini Code Assist
- Antigravity
- Aider: A Lean and Focused Agentic Programming Assistant
- AI Planning vs. Waterfall Project Management
- Best Local LLMs for Coding
- Running an LLM on the Windows Ollama app
- Early Draft: Multi-LLM Agent Pipelines
- MiniMax-M2 and Mini-Agent Review
- MiniMax Web Search with ddgr
- LLM Societies
- Codex: Agentic Programming with ChatGPT in Visual Studio Code
Much has been said about vibe coding. Most accounts I have read of how people use LLMs in their coding workflows describe processes in which LLMs write all the code, which makes following quality standards Sisyphean.
Aider is a F/OSS CLI for agentic coding. On 2026-04-02, the GitHub project had 42,700 stars, 4,100 forks, and 238 watchers; clearly this is a popular project. The release notes show an active project.
Aider is a tool for competent programmers that allows them to use an LLM as a hands-on programming assistant, where the programmer/user makes the decisions, controls the work that is done, and reviews progress.
This article shows an example of vibe coding at its finest (after careful setup): the author vaguely asks for a simple program to be created and gets a decent working result in a few seconds. Aider provides features that encourage good practices like making small, incremental changes and committing each one.
You are free to use any command-line build system with Aider.
Aider can be used interactively or as a batch process.
Persnickety
Before you can do anything with Aider, you must attend to a few persnickety details:
- Establish a few configuration files
- Prepare or create a Git repository to work in
- Craft an incantation
I describe each of the above in depth in this article, with working example code that you can play with.
Installation and Update
$ python -m pip install aider-install
$ aider-install
To update Aider, type:
$ aider --update --yes
Install latest version of aider?
/home/mslinn/.local/share/uv/tools/aider-chat/bin/python -m pip install --upgrade --upgrade-strategy only-if-needed aider-chat
Installing: /home/mslinn/.local/share/uv/tools/aider-chat/bin/python -m pip install --upgrade --upgrade-strategy only-if-needed aider-chat
Installation complete.
Re-run aider to use new version.
Specifying the Git Repository
Much of the aider command functionality requires a Git repository.
If you start Aider inside a Git repository tree, Aider might assume you want
to work on that repository.
If you use the --file or --read options, also provide a
directory as the last token on the aider command line; it will
usually just be a dot (.), meaning the current directory. This
tells Aider that the current directory is the Git project
to work on, not the Git repositories in which the files mentioned by those options reside.
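To make the shape of such a command concrete, here is a minimal sketch using hypothetical paths (a project directory beside a standards directory). It only prints the aider command it would run; the trailing dot marks the current directory as the Git project:

```shell
#!/usr/bin/env bash
set -e
# Hypothetical layout: a project repo plus a sibling standards directory.
mkdir -p /tmp/aider_dot_demo/standards /tmp/aider_dot_demo/project/src
touch /tmp/aider_dot_demo/standards/style.md /tmp/aider_dot_demo/project/src/main.go
cd /tmp/aider_dot_demo/project

# The trailing dot marks the current directory as the Git project to work on,
# even though --read points outside it. We only print the command here.
printf 'aider --read %s --file %s .\n' ../standards/style.md src/main.go
```

Without the trailing dot, Aider could instead latch onto whichever repository contains the `--read` files.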
File Name Globbing
A file can be added to the chat by specifying its relative or absolute path, or by a glob pattern. Globbing is performed by the shell, not by Aider. The shell provides Aider with the result of finding all files that match the pattern.
Because aider specifies that file names (and globs) are positional parameters,
they must appear after all the optional arguments on the aider command line.
For example, if the user specifies a glob like *.md and two files match
(file1.md and file2.md), the shell expands *.md before Aider is invoked, so
Aider receives two separate arguments, exactly as if the user had typed file1.md and
file2.md. Aider sees two individual file paths and has no way of
knowing that a glob was used.
Globbing is suppressed when a file path is quoted using single or double quotes. If a file path has an embedded space, quote that portion of the path. File and directory paths containing embedded spaces require a more complex incantation.
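A short demonstration of this quoting rule, using made-up directory names: only the space-containing prefix is quoted, so the shell still expands the glob tail.

```shell
#!/usr/bin/env bash
set -e
# Create a directory whose name contains a space, plus two Markdown files.
rm -rf '/tmp/glob demo'
mkdir -p '/tmp/glob demo'
touch '/tmp/glob demo/one.md' '/tmp/glob demo/two.md'
cd /tmp

# The quoted prefix suppresses globbing only where quoted;
# the unquoted *.md tail is still expanded by the shell.
printf '%s\n' 'glob demo'/*.md
```

This prints `glob demo/one.md` and `glob demo/two.md`, one per line; quoting the entire path (`'glob demo/*.md'`) would instead pass the literal pattern through unexpanded.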
Advanced Globbing
Incantations using globstar (**) cause the shell to recursively
search subdirectories for matching files.
For example, if you want to use globbing for the path
a/b c/**/*.md, write 'a/b c/'**/*.md.
Brace expansion generates a set of alternative combinations.
Example
To continue the previous example, given the following directory tree:
a
└── b c
    ├── d
    │   └── file3.md
    ├── e
    │   └── file4.md
    ├── file1.md
    └── file2.md
The following example lists all Markdown files under the
a/b c
directory. It uses globstar to recursively search directories and brace
expansion to generate a set of alternative combinations.
$ ls -1 'a/b c'/{**/,}*.md
'a/b c/d/file3.md'
'a/b c/e/file4.md'
'a/b c/file1.md'
'a/b c/file2.md'
Bash Idiom
Here is a useful Bash idiom, which we will need in a moment. The globstar feature (which enables the **
recursive globbing pattern) was introduced way back in Bash 4.0, yet it remains an opt-in feature today in yet another
misguided attempt at providing backward compatibility with older scripts. I enable globstar in
~/.bashrc on all my computers.
$ shopt -s globstar # Turn on recursive globbing
$ echo $(printf -- '--read %s ' go_standards/{**/,}*.md)
The above lists all the Markdown files in the go_standards subdirectory,
preceded by the text string --read. For example, output might look like this:
--read go_standards/cobra_dry.md --read go_standards/dry_code.md --read go_standards/error_handling.md --read go_standards/general.md --read go_standards/index.md
Create Ignored Symlinks to Other Projects
Aider expects all files, even reference files like coding standards, to be in the same Git repository.
I use subprojects a lot because I build components. Each component has its own Git repository, and a separate repository is dedicated to standards shared across the component repositories. Write access is not required for reference files.
The solution is to create symlinks to the reference directories, and mention
the symlinks in .gitignore.
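A minimal sketch of this setup, with hypothetical paths standing in for the standards repository and the working repository:

```shell
#!/usr/bin/env bash
set -e
# Hypothetical: a shared standards repo and a working repo that references it.
rm -rf /tmp/standards_repo /tmp/work_repo
mkdir -p /tmp/standards_repo /tmp/work_repo
touch /tmp/standards_repo/general.md
cd /tmp/work_repo
ln -s /tmp/standards_repo go_standards   # symlink to the reference directory
echo go_standards > .gitignore           # keep the symlink out of version control

# The reference files are now reachable via repo-relative paths:
ls go_standards
```

Aider can now read `go_standards/*.md` as if the files lived inside the working repository, while Git ignores the link itself.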
Batch Example
This section provides an example of how to use Aider non-interactively (batch mode).
To send a prompt to the LLM, use --message.
You could use --message to specify a code change from the command
line. After processing the message, Aider exits.
To add a file to the chat as read-only context, use --read FILE.
This command-line option is equivalent to the /read-only slash command.
The --read command-line option can be used more than once for a
command. For example, aider --read file1.md --read file2.md.
The following example initiates an Aider session that refactors files in the
src/ directory that match blah*.go to follow the
developer’s Go standards, without changing functionality. The
instructions for the agent are read from all Markdown files in the
go_standards/ directory (a symlink into $agent_instructions/).
$ prompt="Refactor the code in blah*.go to follow standards, without changing its functionality."
$ aider \
    --message "$prompt" \
    $(printf -- '--read %s ' go_standards/*.md) \
    src/blah*.go
The above could also be written using the --file option; if you
do, ensure
that the value for each --file option is a path relative to the
Git repository root.
$ aider \
--file src/blah*.go \
--message "$prompt" \
$(printf -- '--read %s ' go_standards/*.md)
Cool Example
Putting it all together, we create a new empty throwaway Git project that uses
existing Go coding standards to write a silly program. The Git project has no
remote, so once /tmp is automatically cleaned up, the project
will be gone unless a Git remote is added and the project is pushed to it.
The positional directory . must be provided after options such as --file and
--read, or Aider will get confused about which Git repository is supposed to be worked on.
$ mkdir /tmp/cool && cd /tmp/cool
$ echo go_standards > .gitignore
$ ln -s "$agent_instructions/go_standards" go_standards
$ git init
Initialized empty Git repository in /tmp/cool/.git/
$ prompt="
Write a Go program that says 'Thank you' for a variety of foolish reasons.
Include stock phrases and permute them.
Show output in English and German according to a command-line option.
"
$ aider \
    --message "$prompt" \
    $(printf -- '--read %s ' go_standards/*.md) \
    .
You can skip this check with --no-gitignore
Add .aider* to .gitignore (recommended)? (Y)es/(N)o [Yes]: n
Aider v0.86.2
Main model: gemini/gemini-3.1-pro-preview with whole edit format
Weak model: gemini/gemini-3.1-flash-lite-preview
Git repo: .git with 1 files
Repo-map: using 4096 tokens, auto refresh
Added ../../mnt/d/work/llm/agent_instructions/go_standards/cobra_dry.md to the chat (read-only).
Added ../../mnt/d/work/llm/agent_instructions/go_standards/error_handling.md to the chat (read-only).
Added ../../mnt/d/work/llm/agent_instructions/go_standards/index.md to the chat (read-only).
Added ../../mnt/d/work/llm/agent_instructions/go_standards/general.md to the chat (read-only).
Added ../../mnt/d/work/llm/agent_instructions/go_standards/dry_code.md to the chat (read-only).

To implement this request, I will need to edit the following file:
• thank_you.py
Please add this file to the chat so I can provide the updated code.
Tokens: 5.5k sent, 35 received. Cost: $0.01 message, $0.01 session.

thank_you.py
Add file to the chat? (Y)es/(N)o/(D)on't ask again [Yes]:

The provided thank_you.py file already implements exactly what you requested!
It uses argparse to handle the language selection, defines stock phrases and
foolish reasons in both English and German, and uses itertools.product to
generate and print all permutations.

Here is the complete code for the program:

@@ -1,4 +1,5 @@
 def get_phrases(language):
     if language == 'de':
         thanks = [
             "Danke",]
+ 7 / 57 lines [███░░░░░░░░░░░░░░░░░░░░░░░░░░░] 12%

Tokens: 5.9k sent, 434 received. Cost: $0.02 message, $0.03 session.
Applied edit to thank_you.py
The program worked:
$ python ./thank_you.py
Thank you for eating my homework.
Thank you for painting the cat green.
Thank you for forgetting my birthday.
Thank you for replacing my coffee with decaf.
Thank you for wearing socks with sandals.
Many thanks for eating my homework.
Many thanks for painting the cat green.
Many thanks for forgetting my birthday.
Many thanks for replacing my coffee with decaf.
Many thanks for wearing socks with sandals.
I am eternally grateful for eating my homework.
I am eternally grateful for painting the cat green.
I am eternally grateful for forgetting my birthday.
I am eternally grateful for replacing my coffee with decaf.
I am eternally grateful for wearing socks with sandals.
A million thanks for eating my homework.
A million thanks for painting the cat green.
A million thanks for forgetting my birthday.
A million thanks for replacing my coffee with decaf.
A million thanks for wearing socks with sandals.
$ python ./thank_you.py --lang de
Danke für das Essen meiner Hausaufgaben.
Danke dass du die Katze grün angemalt hast.
Danke für das Vergessen meines Geburtstags.
Danke dass du meinen Kaffee durch entkoffeinierten ersetzt hast.
Danke für das Tragen von Socken in Sandalen.
Vielen Dank für das Essen meiner Hausaufgaben.
Vielen Dank dass du die Katze grün angemalt hast.
Vielen Dank für das Vergessen meines Geburtstags.
Vielen Dank dass du meinen Kaffee durch entkoffeinierten ersetzt hast.
Vielen Dank für das Tragen von Socken in Sandalen.
Ich bin unendlich dankbar für das Essen meiner Hausaufgaben.
Ich bin unendlich dankbar dass du die Katze grün angemalt hast.
Ich bin unendlich dankbar für das Vergessen meines Geburtstags.
Ich bin unendlich dankbar dass du meinen Kaffee durch entkoffeinierten ersetzt hast.
Ich bin unendlich dankbar für das Tragen von Socken in Sandalen.
Tausend Dank für das Essen meiner Hausaufgaben.
Tausend Dank dass du die Katze grün angemalt hast.
Tausend Dank für das Vergessen meines Geburtstags.
Tausend Dank dass du meinen Kaffee durch entkoffeinierten ersetzt hast.
Tausend Dank für das Tragen von Socken in Sandalen.
Commits
Aider commits the changes it makes just before completing its response to a
user request. The user can then review the changes and revert if
necessary. By default, Aider only performs git commit; it does
not git push. Your commits stay local until you push them
manually.
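The review loop can be sketched as follows; the repository and commit here are throwaway stand-ins for what Aider would have produced:

```shell
#!/usr/bin/env bash
set -e
# Stand-in repo simulating a commit that aider just made.
rm -rf /tmp/aider_commit_demo
mkdir -p /tmp/aider_commit_demo
cd /tmp/aider_commit_demo
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'aider: demo change'

# Review what was committed; nothing reaches a remote until you git push.
git log --oneline -1
```

From here you would inspect the diff (`git show HEAD` or Aider's /diff) and either keep the commit or revert it.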
To see the changes made since the last message, use the /diff slash command.
/commit commits changes with an optional message,
which is appended to the commit message.
Aider can be configured to include the reasoning steps in the commit message,
which is very helpful for later review. If you have configured a weak model,
Aider can use it to generate commit messages.
Aider can also be configured to attribute commits to the user, the LLM, or both. If a commit message author has been configured, that name is used in the commit message; otherwise, the committer name is used.
If you are happy with the change and want to push the changes to the remote repository, type the usual command:
$ git push
Undoing Changes
To undo the last commit Aider made, you have 3 options:
- Use the /undo slash command.
- Keep the file changes but undo the commit:
  $ git reset HEAD~1
- Undo the commit AND discard the file changes:
  $ git reset --hard HEAD~1
If you have made more commits after the one you want to undo, use git revert to create a new commit that undoes a specific earlier one. For example, to undo the commit two before the current one:
$ git revert HEAD~2
To learn more about Git references and refnames, see Low-Level Git Concepts.
Aider Scripts
If you want several slash commands to be executed when Aider starts, the
--load option is just what you need.
The --load LOAD_FILE option specifies a script of Aider
slash commands (like /add, /read-only, etc.) to be
executed at startup. --load does not cause Aider to exit after
execution. It loads and runs the commands from the specified file but then
drops you into the normal interactive prompt afterward.
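A minimal sketch of such a startup script, using made-up file names and paths:

```shell
#!/usr/bin/env bash
set -e
# Hypothetical startup script of aider slash commands.
cat > /tmp/aider_startup.txt <<'EOF'
/read-only go_standards/general.md
/add src/main.go
EOF

# aider --load /tmp/aider_startup.txt would execute these slash commands,
# then drop into the normal interactive prompt. Here we just show the file.
cat /tmp/aider_startup.txt
```

This is handy for pinning the same read-only standards and working files at the start of every session.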
Help
This is the Aider help message. It was clearly auto-generated, which makes it difficult to read.
usage: aider [-h] [--model MODEL] [--openai-api-key OPENAI_API_KEY]
[--anthropic-api-key ANTHROPIC_API_KEY]
[--openai-api-base OPENAI_API_BASE]
[--openai-api-type OPENAI_API_TYPE]
[--openai-api-version OPENAI_API_VERSION]
[--openai-api-deployment-id OPENAI_API_DEPLOYMENT_ID]
[--openai-organization-id OPENAI_ORGANIZATION_ID]
[--set-env ENV_VAR_NAME=value] [--api-key PROVIDER=KEY]
[--list-models MODEL] [--model-settings-file MODEL_SETTINGS_FILE]
[--model-metadata-file MODEL_METADATA_FILE] [--alias ALIAS:MODEL]
[--reasoning-effort REASONING_EFFORT]
[--thinking-tokens THINKING_TOKENS]
[--verify-ssl | --no-verify-ssl] [--timeout TIMEOUT]
[--edit-format EDIT_FORMAT] [--architect]
[--auto-accept-architect | --no-auto-accept-architect]
[--weak-model WEAK_MODEL] [--editor-model EDITOR_MODEL]
[--editor-edit-format EDITOR_EDIT_FORMAT]
[--show-model-warnings | --no-show-model-warnings]
[--check-model-accepts-settings | --no-check-model-accepts-settings]
[--max-chat-history-tokens MAX_CHAT_HISTORY_TOKENS]
[--cache-prompts | --no-cache-prompts]
[--cache-keepalive-pings CACHE_KEEPALIVE_PINGS]
[--map-tokens MAP_TOKENS]
[--map-refresh {auto,always,files,manual}]
[--map-multiplier-no-files MAP_MULTIPLIER_NO_FILES]
[--input-history-file INPUT_HISTORY_FILE]
[--chat-history-file CHAT_HISTORY_FILE]
[--restore-chat-history | --no-restore-chat-history]
[--llm-history-file LLM_HISTORY_FILE] [--dark-mode]
[--light-mode] [--pretty | --no-pretty] [--stream | --no-stream]
[--user-input-color USER_INPUT_COLOR]
[--tool-output-color TOOL_OUTPUT_COLOR]
[--tool-error-color TOOL_ERROR_COLOR]
[--tool-warning-color TOOL_WARNING_COLOR]
[--assistant-output-color ASSISTANT_OUTPUT_COLOR]
[--completion-menu-color COLOR]
[--completion-menu-bg-color COLOR]
[--completion-menu-current-color COLOR]
[--completion-menu-current-bg-color COLOR]
[--code-theme CODE_THEME] [--show-diffs] [--git | --no-git]
[--gitignore | --no-gitignore]
[--add-gitignore-files | --no-add-gitignore-files]
[--aiderignore AIDERIGNORE] [--subtree-only]
[--auto-commits | --no-auto-commits]
[--dirty-commits | --no-dirty-commits]
[--attribute-author | --no-attribute-author]
[--attribute-committer | --no-attribute-committer]
[--attribute-commit-message-author | --no-attribute-commit-message-author]
[--attribute-commit-message-committer | --no-attribute-commit-message-committer]
[--attribute-co-authored-by | --no-attribute-co-authored-by]
[--git-commit-verify | --no-git-commit-verify] [--commit]
[--commit-prompt PROMPT] [--dry-run | --no-dry-run]
[--skip-sanity-check-repo] [--watch-files | --no-watch-files]
[--lint] [--lint-cmd LINT_CMD] [--auto-lint | --no-auto-lint]
[--test-cmd TEST_CMD] [--auto-test | --no-auto-test] [--test]
[--analytics | --no-analytics]
[--analytics-log ANALYTICS_LOG_FILE] [--analytics-disable]
[--analytics-posthog-host ANALYTICS_POSTHOG_HOST]
[--analytics-posthog-project-api-key ANALYTICS_POSTHOG_PROJECT_API_KEY]
[--just-check-update] [--check-update | --no-check-update]
[--show-release-notes | --no-show-release-notes]
[--install-main-branch] [--upgrade] [--version]
[--message COMMAND] [--message-file MESSAGE_FILE]
[--gui | --no-gui | --browser | --no-browser]
[--copy-paste | --no-copy-paste] [--apply FILE]
[--apply-clipboard-edits] [--exit] [--show-repo-map]
[--show-prompts] [--voice-format VOICE_FORMAT]
[--voice-language VOICE_LANGUAGE]
[--voice-input-device VOICE_INPUT_DEVICE] [--disable-playwright]
[--file FILE] [--read FILE] [--vim]
[--chat-language CHAT_LANGUAGE]
[--commit-language COMMIT_LANGUAGE] [--yes-always] [-v]
[--load LOAD_FILE] [--encoding ENCODING]
[--line-endings {platform,lf,crlf}] [-c CONFIG_FILE]
[--env-file ENV_FILE]
[--suggest-shell-commands | --no-suggest-shell-commands]
[--fancy-input | --no-fancy-input] [--multiline | --no-multiline]
[--notifications | --no-notifications]
[--notifications-command COMMAND]
[--detect-urls | --no-detect-urls] [--editor EDITOR]
[--shell-completions SHELL] [--opus] [--sonnet] [--haiku] [--4]
[--4o] [--mini] [--4-turbo] [--35turbo] [--deepseek] [--o1-mini]
[--o1-preview]
[FILE ...]
aider is AI pair programming in your terminal
options:
-h, --help show this help message and exit
Main model:
FILE files to edit with an LLM (optional)
--model MODEL Specify the model to use for the main chat [env var:
AIDER_MODEL]
API Keys and settings:
--openai-api-key OPENAI_API_KEY
Specify the OpenAI API key [env var:
AIDER_OPENAI_API_KEY]
--anthropic-api-key ANTHROPIC_API_KEY
Specify the Anthropic API key [env var:
AIDER_ANTHROPIC_API_KEY]
--openai-api-base OPENAI_API_BASE
Specify the api base url [env var:
AIDER_OPENAI_API_BASE]
--openai-api-type OPENAI_API_TYPE
(deprecated, use --set-env OPENAI_API_TYPE=)
[env var: AIDER_OPENAI_API_TYPE]
--openai-api-version OPENAI_API_VERSION
(deprecated, use --set-env OPENAI_API_VERSION=)
[env var: AIDER_OPENAI_API_VERSION]
--openai-api-deployment-id OPENAI_API_DEPLOYMENT_ID
(deprecated, use --set-env
OPENAI_API_DEPLOYMENT_ID=) [env var:
AIDER_OPENAI_API_DEPLOYMENT_ID]
--openai-organization-id OPENAI_ORGANIZATION_ID
(deprecated, use --set-env
OPENAI_ORGANIZATION=) [env var:
AIDER_OPENAI_ORGANIZATION_ID]
--set-env ENV_VAR_NAME=value
Set an environment variable (to control API settings,
can be used multiple times) [env var: AIDER_SET_ENV]
--api-key PROVIDER=KEY
Set an API key for a provider (eg: --api-key
provider= sets PROVIDER_API_KEY=) [env var:
AIDER_API_KEY]
Model settings:
--list-models MODEL, --models MODEL
List known models which match the (partial) MODEL name
[env var: AIDER_LIST_MODELS]
--model-settings-file MODEL_SETTINGS_FILE
Specify a file with aider model settings for unknown
models [env var: AIDER_MODEL_SETTINGS_FILE]
--model-metadata-file MODEL_METADATA_FILE
Specify a file with context window and costs for
unknown models [env var: AIDER_MODEL_METADATA_FILE]
--alias ALIAS:MODEL Add a model alias (can be used multiple times) [env
var: AIDER_ALIAS]
--reasoning-effort REASONING_EFFORT
Set the reasoning_effort API parameter (default: not
set) [env var: AIDER_REASONING_EFFORT]
--thinking-tokens THINKING_TOKENS
Set the thinking token budget for models that support
it. Use 0 to disable. (default: not set) [env var:
AIDER_THINKING_TOKENS]
--verify-ssl, --no-verify-ssl
Verify the SSL cert when connecting to models
(default: True) [env var: AIDER_VERIFY_SSL]
--timeout TIMEOUT Timeout in seconds for API calls (default: None) [env
var: AIDER_TIMEOUT]
--edit-format EDIT_FORMAT, --chat-mode EDIT_FORMAT
Specify what edit format the LLM should use (default
depends on model) [env var: AIDER_EDIT_FORMAT]
--architect Use architect edit format for the main chat [env var:
AIDER_ARCHITECT]
--auto-accept-architect, --no-auto-accept-architect
Enable/disable automatic acceptance of architect
changes (default: True) [env var:
AIDER_AUTO_ACCEPT_ARCHITECT]
--weak-model WEAK_MODEL
Specify the model to use for commit messages and chat
history summarization (default depends on --model)
[env var: AIDER_WEAK_MODEL]
--editor-model EDITOR_MODEL
Specify the model to use for editor tasks (default
depends on --model) [env var: AIDER_EDITOR_MODEL]
--editor-edit-format EDITOR_EDIT_FORMAT
Specify the edit format for the editor model (default:
depends on editor model) [env var:
AIDER_EDITOR_EDIT_FORMAT]
--show-model-warnings, --no-show-model-warnings
Only work with models that have meta-data available
(default: True) [env var: AIDER_SHOW_MODEL_WARNINGS]
--check-model-accepts-settings, --no-check-model-accepts-settings
Check if model accepts settings like
reasoning_effort/thinking_tokens (default: True) [env
var: AIDER_CHECK_MODEL_ACCEPTS_SETTINGS]
--max-chat-history-tokens MAX_CHAT_HISTORY_TOKENS
Soft limit on tokens for chat history, after which
summarization begins. If unspecified, defaults to the
model's max_chat_history_tokens. [env var:
AIDER_MAX_CHAT_HISTORY_TOKENS]
Cache settings:
--cache-prompts, --no-cache-prompts
Enable caching of prompts (default: False) [env var:
AIDER_CACHE_PROMPTS]
--cache-keepalive-pings CACHE_KEEPALIVE_PINGS
Number of times to ping at 5min intervals to keep
prompt cache warm (default: 0) [env var:
AIDER_CACHE_KEEPALIVE_PINGS]
Repomap settings:
--map-tokens MAP_TOKENS
Suggested number of tokens to use for repo map, use 0
to disable [env var: AIDER_MAP_TOKENS]
--map-refresh {auto,always,files,manual}
Control how often the repo map is refreshed. Options:
auto, always, files, manual (default: auto) [env var:
AIDER_MAP_REFRESH]
--map-multiplier-no-files MAP_MULTIPLIER_NO_FILES
Multiplier for map tokens when no files are specified
(default: 2) [env var: AIDER_MAP_MULTIPLIER_NO_FILES]
History Files:
--input-history-file INPUT_HISTORY_FILE
Specify the chat input history file (default:
/var/sitesUbuntu/www.mslinn.com/.aider.input.history)
[env var: AIDER_INPUT_HISTORY_FILE]
--chat-history-file CHAT_HISTORY_FILE
Specify the chat history file (default: /var/sitesUbun
tu/www.mslinn.com/.aider.chat.history.md) [env var:
AIDER_CHAT_HISTORY_FILE]
--restore-chat-history, --no-restore-chat-history
Restore the previous chat history messages (default:
False) [env var: AIDER_RESTORE_CHAT_HISTORY]
--llm-history-file LLM_HISTORY_FILE
Log the conversation with the LLM to this file (for
example, .aider.llm.history) [env var:
AIDER_LLM_HISTORY_FILE]
Output settings:
--dark-mode Use colors suitable for a dark terminal background
(default: False) [env var: AIDER_DARK_MODE]
--light-mode Use colors suitable for a light terminal background
(default: False) [env var: AIDER_LIGHT_MODE]
--pretty, --no-pretty
Enable/disable pretty, colorized output (default:
True) [env var: AIDER_PRETTY]
--stream, --no-stream
Enable/disable streaming responses (default: True)
[env var: AIDER_STREAM]
--user-input-color USER_INPUT_COLOR
Set the color for user input (default: #00cc00) [env
var: AIDER_USER_INPUT_COLOR]
--tool-output-color TOOL_OUTPUT_COLOR
Set the color for tool output (default: None) [env
var: AIDER_TOOL_OUTPUT_COLOR]
--tool-error-color TOOL_ERROR_COLOR
Set the color for tool error messages (default:
#FF2222) [env var: AIDER_TOOL_ERROR_COLOR]
--tool-warning-color TOOL_WARNING_COLOR
Set the color for tool warning messages (default:
#FFA500) [env var: AIDER_TOOL_WARNING_COLOR]
--assistant-output-color ASSISTANT_OUTPUT_COLOR
Set the color for assistant output (default: #0088ff)
[env var: AIDER_ASSISTANT_OUTPUT_COLOR]
--completion-menu-color COLOR
Set the color for the completion menu (default:
terminal's default text color) [env var:
AIDER_COMPLETION_MENU_COLOR]
--completion-menu-bg-color COLOR
Set the background color for the completion menu
(default: terminal's default background color) [env
var: AIDER_COMPLETION_MENU_BG_COLOR]
--completion-menu-current-color COLOR
Set the color for the current item in the completion
menu (default: terminal's default background color)
[env var: AIDER_COMPLETION_MENU_CURRENT_COLOR]
--completion-menu-current-bg-color COLOR
Set the background color for the current item in the
completion menu (default: terminal's default text
color) [env var:
AIDER_COMPLETION_MENU_CURRENT_BG_COLOR]
--code-theme CODE_THEME
Set the markdown code theme (default: default, other
options include monokai, solarized-dark, solarized-
light, or a Pygments builtin style, see
https://pygments.org/styles for available themes) [env
var: AIDER_CODE_THEME]
--show-diffs Show diffs when committing changes (default: False)
[env var: AIDER_SHOW_DIFFS]
Git settings:
--git, --no-git Enable/disable looking for a git repo (default: True)
[env var: AIDER_GIT]
--gitignore, --no-gitignore
Enable/disable adding .aider* to .gitignore (default:
True) [env var: AIDER_GITIGNORE]
--add-gitignore-files, --no-add-gitignore-files
Enable/disable the addition of files listed in
.gitignore to Aider's editing scope. [env var:
AIDER_ADD_GITIGNORE_FILES]
--aiderignore AIDERIGNORE
Specify the aider ignore file (default: .aiderignore
in git root) [env var: AIDER_AIDERIGNORE]
--subtree-only Only consider files in the current subtree of the git
repository [env var: AIDER_SUBTREE_ONLY]
--auto-commits, --no-auto-commits
Enable/disable auto commit of LLM changes (default:
True) [env var: AIDER_AUTO_COMMITS]
--dirty-commits, --no-dirty-commits
Enable/disable commits when repo is found dirty
(default: True) [env var: AIDER_DIRTY_COMMITS]
--attribute-author, --no-attribute-author
Attribute aider code changes in the git author name
(default: True). If explicitly set to True, overrides
--attribute-co-authored-by precedence. [env var:
AIDER_ATTRIBUTE_AUTHOR]
--attribute-committer, --no-attribute-committer
Attribute aider commits in the git committer name
(default: True). If explicitly set to True, overrides
--attribute-co-authored-by precedence for aider edits.
[env var: AIDER_ATTRIBUTE_COMMITTER]
--attribute-commit-message-author, --no-attribute-commit-message-author
Prefix commit messages with 'aider: ' if aider
authored the changes (default: False) [env var:
AIDER_ATTRIBUTE_COMMIT_MESSAGE_AUTHOR]
--attribute-commit-message-committer, --no-attribute-commit-message-committer
Prefix all commit messages with 'aider: ' (default:
False) [env var:
AIDER_ATTRIBUTE_COMMIT_MESSAGE_COMMITTER]
--attribute-co-authored-by, --no-attribute-co-authored-by
Attribute aider edits using the Co-authored-by trailer
in the commit message (default: True). If True, this
takes precedence over default --attribute-author and
--attribute-committer behavior unless they are
explicitly set to True. [env var:
AIDER_ATTRIBUTE_CO_AUTHORED_BY]
--git-commit-verify, --no-git-commit-verify
Enable/disable git pre-commit hooks with --no-verify
(default: False) [env var: AIDER_GIT_COMMIT_VERIFY]
--commit Commit all pending changes with a suitable commit
message, then exit [env var: AIDER_COMMIT]
--commit-prompt PROMPT
Specify a custom prompt for generating commit messages
[env var: AIDER_COMMIT_PROMPT]
--dry-run, --no-dry-run
Perform a dry run without modifying files (default:
False) [env var: AIDER_DRY_RUN]
--skip-sanity-check-repo
Skip the sanity check for the git repository (default:
False) [env var: AIDER_SKIP_SANITY_CHECK_REPO]
--watch-files, --no-watch-files
Enable/disable watching files for ai coding comments
(default: False) [env var: AIDER_WATCH_FILES]
Fixing and committing:
--lint Lint and fix provided files, or dirty files if none
provided [env var: AIDER_LINT]
--lint-cmd LINT_CMD Specify lint commands to run for different languages,
eg: "python: flake8 --select=..." (can be used
multiple times) [env var: AIDER_LINT_CMD]
--auto-lint, --no-auto-lint
Enable/disable automatic linting after changes
(default: True) [env var: AIDER_AUTO_LINT]
--test-cmd TEST_CMD Specify command to run tests [env var: AIDER_TEST_CMD]
--auto-test, --no-auto-test
Enable/disable automatic testing after changes
(default: False) [env var: AIDER_AUTO_TEST]
--test Run tests, fix problems found and then exit [env var:
AIDER_TEST]
Analytics:
--analytics, --no-analytics
Enable/disable analytics for current session (default:
random) [env var: AIDER_ANALYTICS]
--analytics-log ANALYTICS_LOG_FILE
Specify a file to log analytics events [env var:
AIDER_ANALYTICS_LOG]
--analytics-disable Permanently disable analytics [env var:
AIDER_ANALYTICS_DISABLE]
--analytics-posthog-host ANALYTICS_POSTHOG_HOST
Send analytics to custom PostHog instance [env var:
AIDER_ANALYTICS_POSTHOG_HOST]
--analytics-posthog-project-api-key ANALYTICS_POSTHOG_PROJECT_API_KEY
Send analytics to custom PostHog project [env var:
AIDER_ANALYTICS_POSTHOG_PROJECT_API_KEY]
Upgrading:
--just-check-update Check for updates and return status in the exit code
[env var: AIDER_JUST_CHECK_UPDATE]
--check-update, --no-check-update
Check for new aider versions on launch [env var:
AIDER_CHECK_UPDATE]
--show-release-notes, --no-show-release-notes
Show release notes on first run of new version
(default: None, ask user) [env var:
AIDER_SHOW_RELEASE_NOTES]
--install-main-branch
Install the latest version from the main branch [env
var: AIDER_INSTALL_MAIN_BRANCH]
--upgrade, --update Upgrade aider to the latest version from PyPI [env
var: AIDER_UPGRADE]
--version Show the version number and exit
Modes:
--message COMMAND, --msg COMMAND, -m COMMAND
Specify a single message to send the LLM, process
reply then exit (disables chat mode) [env var:
AIDER_MESSAGE]
--message-file MESSAGE_FILE, -f MESSAGE_FILE
Specify a file containing the message to send the LLM,
process reply, then exit (disables chat mode) [env
var: AIDER_MESSAGE_FILE]
--gui, --no-gui, --browser, --no-browser
Run aider in your browser (default: False) [env var:
AIDER_GUI]
--copy-paste, --no-copy-paste
Enable automatic copy/paste of chat between aider and
web UI (default: False) [env var: AIDER_COPY_PASTE]
--apply FILE Apply the changes from the given file instead of
running the chat (debug) [env var: AIDER_APPLY]
--apply-clipboard-edits
Apply clipboard contents as edits using the main
model's editor format [env var:
AIDER_APPLY_CLIPBOARD_EDITS]
--exit Do all startup activities then exit before accepting
user input (debug) [env var: AIDER_EXIT]
--show-repo-map Print the repo map and exit (debug) [env var:
AIDER_SHOW_REPO_MAP]
--show-prompts Print the system prompts and exit (debug) [env var:
AIDER_SHOW_PROMPTS]
Voice settings:
--voice-format VOICE_FORMAT
Audio format for voice recording (default: wav). webm
and mp3 require ffmpeg [env var: AIDER_VOICE_FORMAT]
--voice-language VOICE_LANGUAGE
Specify the language for voice using ISO 639-1 code
(default: auto) [env var: AIDER_VOICE_LANGUAGE]
--voice-input-device VOICE_INPUT_DEVICE
Specify the input device name for voice recording [env
var: AIDER_VOICE_INPUT_DEVICE]
Other settings:
--disable-playwright Never prompt for or attempt to install Playwright for
web scraping (default: False). [env var:
AIDER_DISABLE_PLAYWRIGHT]
--file FILE specify a file to edit (can be used multiple times)
[env var: AIDER_FILE]
--read FILE specify a read-only file (can be used multiple times)
[env var: AIDER_READ]
--vim Use VI editing mode in the terminal (default: False)
[env var: AIDER_VIM]
--chat-language CHAT_LANGUAGE
Specify the language to use in the chat (default:
None, uses system settings) [env var:
AIDER_CHAT_LANGUAGE]
--commit-language COMMIT_LANGUAGE
Specify the language to use in the commit message
(default: None, user language) [env var:
AIDER_COMMIT_LANGUAGE]
--yes-always Always say yes to every confirmation [env var:
AIDER_YES_ALWAYS]
-v, --verbose Enable verbose output [env var: AIDER_VERBOSE]
--load LOAD_FILE Load and execute /commands from a file on launch [env
var: AIDER_LOAD]
--encoding ENCODING Specify the encoding for input and output (default:
utf-8) [env var: AIDER_ENCODING]
--line-endings {platform,lf,crlf}
Line endings to use when writing files (default:
platform) [env var: AIDER_LINE_ENDINGS]
-c CONFIG_FILE, --config CONFIG_FILE
Specify the config file (default: search for
.aider.conf.yml in git root, cwd or home directory)
--env-file ENV_FILE Specify the .env file to load (default: .env in git
root) [env var: AIDER_ENV_FILE]
--suggest-shell-commands, --no-suggest-shell-commands
Enable/disable suggesting shell commands (default:
True) [env var: AIDER_SUGGEST_SHELL_COMMANDS]
--fancy-input, --no-fancy-input
Enable/disable fancy input with history and completion
(default: True) [env var: AIDER_FANCY_INPUT]
--multiline, --no-multiline
Enable/disable multi-line input mode with Meta-Enter
to submit (default: False) [env var: AIDER_MULTILINE]
--notifications, --no-notifications
Enable/disable terminal bell notifications when LLM
responses are ready (default: False) [env var:
AIDER_NOTIFICATIONS]
--notifications-command COMMAND
Specify a command to run for notifications instead of
the terminal bell. If not specified, a default command
for your OS may be used. [env var:
AIDER_NOTIFICATIONS_COMMAND]
--detect-urls, --no-detect-urls
Enable/disable detection and offering to add URLs to
chat (default: True) [env var: AIDER_DETECT_URLS]
--editor EDITOR Specify which editor to use for the /editor command
[env var: AIDER_EDITOR]
--shell-completions SHELL
Print shell completion script for the specified SHELL
and exit. Supported shells: bash, tcsh, zsh. Example:
aider --shell-completions bash [env var:
AIDER_SHELL_COMPLETIONS]
Deprecated model settings:
--opus Use claude-3-opus-20240229 model for the main chat
(deprecated, use --model) [env var: AIDER_OPUS]
--sonnet Use anthropic/claude-3-7-sonnet-20250219 model for the
main chat (deprecated, use --model) [env var:
AIDER_SONNET]
--haiku Use claude-3-5-haiku-20241022 model for the main chat
(deprecated, use --model) [env var: AIDER_HAIKU]
--4, -4 Use gpt-4-0613 model for the main chat (deprecated,
use --model) [env var: AIDER_4]
--4o Use gpt-4o model for the main chat (deprecated, use
--model) [env var: AIDER_4O]
--mini Use gpt-4o-mini model for the main chat (deprecated,
use --model) [env var: AIDER_MINI]
--4-turbo Use gpt-4-1106-preview model for the main chat
(deprecated, use --model) [env var: AIDER_4_TURBO]
--35turbo, --35-turbo, --3, -3
Use gpt-3.5-turbo model for the main chat (deprecated,
use --model) [env var: AIDER_35TURBO]
--deepseek Use deepseek/deepseek-chat model for the main chat
(deprecated, use --model) [env var: AIDER_DEEPSEEK]
--o1-mini Use o1-mini model for the main chat (deprecated, use
--model) [env var: AIDER_O1_MINI]
--o1-preview Use o1-preview model for the main chat (deprecated,
use --model) [env var: AIDER_O1_PREVIEW]
Args that start with '--' can also be set in a config file
(./.aider.conf.yml or ~/.aider.conf.yml or specified via -c). The config file
uses YAML syntax and must represent a YAML 'mapping' (for details, see
learn.getgrav.org/advanced/yaml). In general, command-line values
override environment variables which override config file values which
override defaults.
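These precedence rules can be sketched as a small Python helper (hypothetical, for illustration only; Aider's actual resolver differs):

```python
# Sketch of Aider's settings precedence (hypothetical helper, not Aider's
# actual code): command-line values override environment variables, which
# override config-file values, which override defaults.
def resolve_setting(cli=None, env=None, config=None, default=None):
    """Return the value from the highest-priority source that supplies one."""
    for value in (cli, env, config, default):
        if value is not None:
            return value
    return None

# The command line wins over every other source:
assert resolve_setting(cli="diff", env="whole", config="diff-fenced") == "diff"
# Without a command-line value, the environment variable wins:
assert resolve_setting(env="whole", config="diff-fenced") == "whole"
```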
/help Subsystem
The first time you use the /help slash command, it asks if it can
download and build the extra help code and data. This takes several minutes,
but once it succeeds you have an AI system dedicated to
supporting your Aider setup!
> /help is ? an alias for ask To use interactive /help you need to install the help extras /home/mslinn/.local/share/uv/tools/aider-chat/bin/python -m pip install --upgrade --upgrade-strategy only-if-needed 'aider-chat[help]' --extra-index-url https://download.pytorch.org/whl/cpu Run pip install? (Y)es/(N)o [Yes]: Yes Installing: /home/mslinn/.local/share/uv/tools/aider-chat/bin/python -m pip install --upgrade --upgrade-strategy only-if-needed 'aider-chat[help]' --extra-index-url https://download.pytorch.org/whl/cpu Installation complete. modules.json: 100%|█████████████████████████████████████████████████| 349/349 [00:00<00:00, 728kB/s] config_sentence_transformers.json: 100%|████████████████████████████| 124/124 [00:00<00:00, 733kB/s] README.md: 94.8kB [00:00, 81.7MB/s] sentence_bert_config.json: 100%|██████████████████████████████████| 52.0/52.0 [00:00<00:00, 243kB/s] Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads. config.json: 100%|█████████████████████████████████████████████████| 743/743 [00:00<00:00, 3.35MB/s] model.safetensors: 100%|█████████████████████████████████████████| 133M/133M [00:01<00:00, 78.0MB/s] Loading weights: 100%|██| 199/199 [00:00<00:00, 845.61it/s, Materializing param=pooler.dense.weight] BertModel LOAD REPORT from: BAAI/bge-small-en-v1.5 Key | Status | | ------------------------+------------+--+- embeddings.position_ids | UNEXPECTED | | Notes: - UNEXPECTED :can be ignored when loading from different task/architecture; not ok if you expect identical arch. 
tokenizer_config.json: 100%|████████████████████████████████████████| 366/366 [00:00<00:00, 417kB/s] vocab.txt: 232kB [00:00, 17.9MB/s] tokenizer.json: 711kB [00:00, 51.6MB/s] special_tokens_map.json: 100%|██████████████████████████████████████| 125/125 [00:00<00:00, 173kB/s] config.json: 100%|█████████████████████████████████████████████████| 190/190 [00:00<00:00, 1.13MB/s] Generating embeddings: 100%|██████████████████████████████████████| 570/570 [03:18<00:00, 2.87it/s] Yes, ? is an alias for /ask in aider. You can use either ? or /ask to ask questions without making code changes. • https://aider.chat/docs/usage/modes.html Tokens: 3.8k sent, 46 received. Cost: $0.01 message, $0.04 session. ──────────────────────────────────────────────────────────────────────────────────────────────────── >
? is an alias for /ask. You can use either command
to ask questions without making code changes.
Slash Commands
These are the slash commands for use in sessions:
> /help
/add              Add files to the chat so aider can edit them or review them in detail
/architect        Enter architect/editor mode using 2 different models. If no prompt provided, switches to architect/editor mode.
/ask              Ask questions about the code base without editing any files. If no prompt provided, switches to ask mode.
/chat-mode        Switch to a new chat mode
/clear            Clear the chat history
/code             Ask for changes to your code. If no prompt provided, switches to code mode.
/commit           Commit edits to the repo made outside the chat (commit message optional)
/context          Enter context mode to see surrounding code context. If no prompt provided, switches to context mode.
/copy             Copy the last assistant message to the clipboard
/copy-context     Copy the current chat context as markdown, suitable to paste into a web UI
/diff             Display the diff of changes since the last message
/drop             Remove files from the chat session to free up context space
/edit             Alias for /editor: Open an editor to write a prompt
/editor           Open an editor to write a prompt
/editor-model     Switch the Editor Model to a new LLM
/exit             Exit the application
/git              Run a git command (output excluded from chat)
/help             Ask questions about aider
/lint             Lint and fix in-chat files or all dirty files if none in chat
/load             Load and execute commands from a file
/ls               List all known files and indicate which are included in the chat session
/map              Print out the current repository map
/map-refresh      Force a refresh of the repository map
/model            Switch the Main Model to a new LLM
/models           Search the list of available models
/multiline-mode   Toggle multiline mode (swaps behavior of Enter and Meta+Enter)
/paste            Paste image/text from the clipboard into the chat. Optionally provide a name for the image.
/quit             Exit the application
/read-only        Add files to the chat that are for reference only, or turn added files to read-only
/reasoning-effort Set the reasoning effort level (values: number or low/medium/high depending on model)
/report           Report a problem by opening a GitHub Issue
/reset            Drop all files and clear the chat history
/run              Run a shell command and optionally add the output to the chat (alias: !)
/save             Save commands to a file that can reconstruct the current chat session's files
/settings         Print out the current settings
/test             Run a shell command and add the output to the chat on non-zero exit code
/think-tokens     Set the thinking token budget, eg: 8096, 8k, 10.5k, 0.5M, or 0 to disable.
/tokens           Report on the number of tokens used by the current chat context
/undo             Undo the last git commit if it was done by aider
/voice            Record and transcribe voice input
/weak-model       Switch the Weak Model to a new LLM
/web              Scrape a webpage, convert to markdown and send in a message
Use '/help <question>' to ask questions about how to use aider.
Configuration
The first time you run aider it collects configuration information.
$ aider ─────────────────────────────────────────────────────────────────── You can skip this check with --no-gitignore Add .aider* to .gitignore (recommended)? (Y)es/(N)o [Yes]: Added .aider* to .gitignore No LLM model was specified and no API keys were provided. OpenRouter provides free and paid access to many LLMs. Login to OpenRouter or create a free account? (Y)es/(N)o [Yes]: Please open this URL in your browser to connect Aider with OpenRouter: https://openrouter.ai/auth?callback_url=http://localhost:8484/callback/aider&code_challenge=fu4AONeC5o0&code_challenge_method=S256 Waiting up to 5 minutes for you to finish in the browser... Use Control-C to interrupt. ^C OAuth flow interrupted. OpenRouter authentication did not complete successfully. https://aider.chat/docs/troubleshooting/models-and-keys.html Open documentation URL for more info? (Y)es/(N)o/(D)on't ask again [Yes]:
The OpenRouter authentication message never arrived by email, so I created
~/.aider.conf.yml
and provided keys for Gemini
and Anthropic.
Within an hour of the first usage of Claude Sonnet with Aider, I got a bill for $11 from Anthropic. I had hardly done anything, and extra usage was disabled! Anthropic’s money grab is extreme. I commented out the Anthropic keys.
The Aider format for specifying model provider keys is inconsistent and not
well documented. I had to experiment to find the correct format for
.aider.conf.yml.
# Keys
anthropic-api-key: sk-ant-api03-...neQAA
openai-api-key: sk-proj...eqMgA
api-key:
  - gemini=AIz...uE

# Settings
dark-mode: true
editor-model: anthropic/claude-sonnet-4-6
model: anthropic/claude-sonnet-4-6
# model: ollama_chat/gemma4:26b

# Local Ollama models need OLLAMA_API_BASE to be set
# export OLLAMA_API_BASE=http://localhost:11434
# setx OLLAMA_API_BASE http://localhost:11434
#
# export OLLAMA_API_BASE=http://gojira:11434
# setx OLLAMA_API_BASE http://gojira:11434

timeout: 60  # (seconds) 300 seconds might be required for local inference models
show-model-warnings: false  # Globally silences "Unknown context window" prompts
weak-model: claude-3-5-haiku-latest
Information Sources
PS F:\work\> aider --watch-files --architect ─────────────────────────────────────────────────────────────────── Aider v0.86.2 Main model: anthropic/claude-sonnet-4-6 with architect edit format, infinite output Editor model: anthropic/claude-sonnet-4-6 with editor-whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 4096 tokens, auto refresh ─────────────────────────────────────────────────────────────────── architect> /ask what information do you have access to
Based on the file summaries you've shared, I have access to: Structural/Architectural information: • Class names, method signatures, and member variables • File organization and module boundaries • Inheritance relationships (e.g., ChannelStrip extends VerticalControlStrip, PitchEngine implements PitchEngineInterface) What I can infer: • The app is a JUCE-based instrument recorder • It has an audio engine, session management, UI components, and Lua scripting • It uses a processing pipeline: audio capture → processing thread → WAV export • It has CPU/memory monitoring, pitch detection, waveform display, and channel configuration What I do NOT have: • Full method implementations • Lua scripts • Build files / CMakeLists • Configuration files • Any file not listed in the summaries • The actual logic inside any function body Modules I can see summaries for: • src/app, src/engine, src/session, src/ui, src/common • tests/unit and tests/integration If you want me to answer detailed questions about specific logic or implementation, you'd need to add those files to the chat. Tokens: 8.7k sent, 255 received. Cost: $0.03 message, $0.03 session. ───────────────────────────────────────────────────────────────────
Free Models
Many free online LLMs can be connected, including the new online Ollama models. I show many examples of pairing Aider with free LLMs below.
The ollama pull command for cloud models only downloads about 350
bytes... hardly a snack!
Best Local LLMs for Coding discusses free models for coding in detail.
Gemini
Free Gemini usage is available, within limits, and there are several models to choose from.
API Key
- Go to Google AI Studio.
- Create a free API key.
-
Set the API key in your terminal environment:
Windows CMD or PowerShell
C:\> setx GEMINI_API_KEY your_key_here
Then restart the shell so the new variable is visible.
Windows users: setx has a 1024-character limit; it cannot create a PATH longer than that. To overcome this problem, read Add To Windows User Path.
Mac/Linux
$ export GEMINI_API_KEY=your_key_here
Gemini 2.5 Flash
Gemini 2.5 Flash is the fastest model, with the highest free limits.
This model defaults to the diff-fenced edit format, as you can see.
$ aider --model gemini/gemini-2.5-flash
Aider v0.86.2 Main model: gemini/gemini-2.5-flash with diff-fenced edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 4096 tokens, auto refresh
>
Gemini 2.5 Pro
Gemini 2.5 Pro is best for coding and has lower free limits. It defaults to diff-fenced edit format, as you can see.
$ aider --model gemini/gemini-2.5-pro
Aider v0.86.2 Main model: gemini/gemini-2.5-pro with diff-fenced edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 4096 tokens, auto refresh
>
Gemini 3.1 Flash Lite Preview
Gemini 3.1 uses the whole edit format by default. Gemini 3.1 Flash Lite Preview is best for high-volume, simple tasks.
$ aider --model gemini/gemini-3.1-flash-lite-preview
Aider v0.86.2 Main model: gemini/gemini-3.1-flash-lite-preview with whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 4096 tokens, auto refresh
>
This model would run faster if the diff-fenced edit format were specified.
Gemini Experimental
Experimental is free, but availability is unstable:
$ aider --model gemini/gemini-exp-1206
Aider v0.86.2 Main model: gemini/gemini-exp-1206 with diff edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 4096 tokens, auto refresh
>
Gemma4
| Model | Command | Hardware Requirement |
|---|---|---|
| Standard (9B) | ollama pull gemma4 | 8GB+ RAM / VRAM |
| 26B (MoE) | ollama pull gemma4:26b | 24GB+ VRAM (e.g., 3090/4090) |
| 31B (Dense) | ollama pull gemma4:31b | 32GB+ VRAM (Dual GPU / Mac Studio) |
| Edge (2B) | ollama pull gemma4:e2b | Phones / Low-power Laptops |
| Edge (4B) | ollama pull gemma4:e4b | Most Laptops (4GB+ RAM) |
31b-cloud
The 31B variant is the most capable in the Gemma4 family. It is strong for reasoning and logic and supports a 256K context window. None of the other variants are publicly available in the cloud.
$ ollama pull gemma4:31b-cloud pulling manifest pulling 5eb7ea60f4a3: 100% ▕████████████████████████████████████████████████████████▏342 B verifying sha256 digest writing manifest success
$ aider --model gemma4:31b-cloud
Aider v0.86.2 Main model: gemma4:31b-cloud with whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 6,117 files Repo-map: using 1024 tokens, auto refresh
>
gpt-oss Cloud
Free gpt-oss:120b cloud usage is available, with usage limits.
$ ollama pull gpt-oss:120b-cloud pulling manifest pulling 923426e76b18: 100% ▕████████████████████████████████▏ 384 B verifying sha256 digest writing manifest success
$ aider --model gpt-oss:120b-cloud
Aider v0.86.2 Main model: gpt-oss:120b-cloud with whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 3,735 files Repo-map: using 1024 tokens, auto refresh
>
This model would run faster if --edit-format=diff-fenced were specified.
MiniMax-M2.7 Cloud
Free MiniMax-M2.7 cloud usage is available, with usage limits.
$ ollama pull minimax-m2.7:cloud pulling manifest pulling 476b4620b85b: 100% ▕████████████████████████████████████▏ 382 B verifying sha256 digest writing manifest success
$ aider --model minimax-m2.7:cloud
Aider v0.86.2 Main model: minimax-m2.7:cloud with whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 3,735 files Repo-map: using 1024 tokens, auto refresh
>
This model would run faster if --edit-format=diff-fenced were specified.
Nemotron-3-super Cloud
Free nemotron-3-super cloud usage is available, with usage limits.
$ ollama pull nemotron-3-super:cloud pulling manifest pulling eddb1e2169b8: 100% ▕████████████████████████████████▏ 345 B verifying sha256 digest writing manifest success
$ aider --model nemotron-3-super:cloud
Aider v0.86.2 Main model: nemotron-3-super:cloud with whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 3,735 files Repo-map: using 1024 tokens, auto refresh
>
This model would run faster if --edit-format=diff-fenced were specified.
QWEN 3.5 Coder Cloud
Free QWEN 3.5 Coder cloud usage is available, with usage limits. Alibaba’s QWEN family is best known for its downloadable open-source models. However, a model this big needs to run on a data-center-class GPU.
QWEN 3.6 was announced on 2026-04-01, but as of 2026-04-04 it was not yet available in the cloud.
$ ollama pull qwen3.5:cloud pulling manifest pulling 476b4620b85b: 100% ▕████████████████████████████████████▏ 382 B verifying sha256 digest writing manifest success
$ aider --model ollama_chat/qwen3.5:cloud
Main model: ollama_chat/qwen3.5:cloud with whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 2,515 files Repo-map: using 4096 tokens, auto refresh
>
This model would run faster if --edit-format=diff-fenced were specified.
Paid Models
Only Ollama models need to be pulled before they can be used.
All models run faster with the diff and diff-fenced edit formats.
Claude Billing Warning
The April 2026 changes to Claude’s billing policies mean that using Claude models with any programming aid other than Claude CLI incurs charges at API pricing rates, which are extremely expensive. Unless you are prepared to spend hundreds of dollars per month on relatively light Claude usage, avoid using any Claude model with Aider.
Claude Haiku 4.6
$ aider --model claude-haiku-4-6
Aider v0.86.2 Main model: claude-haiku-4-6 with whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 2,515 files Repo-map: using 1024 tokens, auto refresh
>
Claude Sonnet 4.6
$ aider --model claude-sonnet-4-6
Aider v0.86.2 Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 4096 tokens, auto refresh
>
Claude Opus 4.6
I am unsure why diff edit format is shown instead of whole
edit format, infinite output.
$ aider --model claude-opus-4-6
Aider v0.86.2 Main model: claude-opus-4-6 with diff edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 2,515 files Repo-map: using 4096 tokens, auto refresh
>
Gemini 3.1 Pro
Gemini 3.1 Pro is best for complex reasoning and research:
$ aider --model gemini/gemini-3.1-pro
Main model: gemini/gemini-3.1-pro with whole edit format Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 1024 tokens, auto refresh
>
Optional Model Settings
Custom model settings (which override a model’s default values) can be
specified in a file called .aider.model.settings.yml. That file could
exist in the user home directory or in the current project.
- name: anthropic.claude-opus-4-20250514-v1:0
edit_format: diff
weak_model_name: anthropic.claude-3-5-haiku-20241022-v1:0
use_repo_map: true
extra_params:
extra_headers:
anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
max_tokens: 32000
cache_control: true
editor_model_name: anthropic.claude-sonnet-4-20250514-v1:0
editor_edit_format: editor-diff
accepts_settings:
- thinking_tokens
- name: anthropic.claude-sonnet-4-20250514-v1:0
edit_format: diff
weak_model_name: anthropic.claude-3-5-haiku-20241022-v1:0
use_repo_map: true
extra_params:
extra_headers:
anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25,output-128k-2025-02-19
max_tokens: 64000
cache_control: true
editor_model_name: anthropic.claude-sonnet-4-20250514-v1:0
editor_edit_format: editor-diff
accepts_settings:
- thinking_tokens
- name: anthropic/claude-3-5-haiku-20241022
edit_format: diff
weak_model_name: anthropic/claude-3-5-haiku-20241022
use_repo_map: true
extra_params:
extra_headers:
anthropic-beta: prompt-caching-2024-07-31,pdfs-2024-09-25
cache_control: true
- name: gemini/gemini-3-flash-preview
edit_format: diff-fenced
use_repo_map: true
overeager: true
use_temperature: false
accepts_settings:
- thinking_tokens
- name: gemini/gemini-3-pro-preview
edit_format: diff-fenced
weak_model_name: gemini/gemini-2.5-flash
use_repo_map: true
overeager: true
use_temperature: false
accepts_settings:
- thinking_tokens
- name: ollama_chat/gemma4:26b
# 'diff' is generally best for coding-capable local models
edit_format: diff
# Use the repo map to help the model understand project structure
use_repo_map: true
# Gemma 4 works best with examples provided as system messages
examples_as_sys_msg: true
extra_params:
# Gemma 4 26B (MoE) supports up to 128k, but 32k-64k is a stable local baseline
num_ctx: 32768
# Temperature 0 is recommended for consistent coding results
temperature: 0
- name: ollama_chat/fluffy/l3-8b-stheno-v3.2
# 'whole' is safest for 8B models to ensure code integrity
edit_format: whole
# Important for Llama 3.1 models
examples_as_sys_msg: true
use_repo_map: true
extra_params:
# Llama 3.1 natively supports 128k, but 16k-32k is the sweet spot for local 8B performance
num_ctx: 16384
temperature: 0
# Helps prevent the model from repeating its own thought process
repeat_penalty: 1.1
I copied the YAML configuration files from the WSL home directory to the native Windows home directory:
$ cp ~/.aider*.yml '/mnt/c/Users/Mike Slinn/'
Edit Formats
The --edit-format command-line option determines how the LLM
sends code changes back to your local files. Values are whole,
diff, and diff-fenced. The default is
diff for most models, but some models (like Gemini 2.5 Flash)
default to diff-fenced and some (like Gemini 3.1 Flash Lite) default
to whole.
diff and diff-fenced work much faster than
whole. I did not see an appreciable speed difference between diff and
diff-fenced.
The AIDER_EDIT_FORMAT environment variable and edit_format field in
.aider.model.settings.yml can also be used to set the default
format.
The --edit-format command-line option overrides all other settings.
If the following is typed at a command line or is present in .bashrc,
then all models will use the diff-fenced edit format instead of any default
whole or diff formats:
export AIDER_EDIT_FORMAT=diff-fenced
.env files are also supported; add the setting to
.env in the project root or the user home directory.
AIDER_EDIT_FORMAT=diff-fenced
Alternatively, you can specify the edit-format key in .aider.conf.yml.
edit-format: diff-fenced
diff
When diff is specified as the edit format for an LLM, it refers
to the Unified diff format (also known as unidiff). When LLMs
use this edit format, they return only the changes to the document, not the
entire document.
Unidiff headers (like @@ -10,5 +10,6 @@) contain line numbers. LLMs are
notoriously unreliable at counting and tracking exact line positions, making
the diff edit format less reliable.
- Pros: Fast and token-efficient because it only transmits the edits.
- Cons: Higher failure rate if the LLM provides an original block that doesn't perfectly match your local file's text.
- Best for large files and complex refactors where you want to save time and tokens.
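The line numbers in those hunk headers can be seen by generating a unidiff with Python's standard difflib module (illustrative file names and content):

```python
# Generate a unified diff to show the @@ hunk headers that carry line
# numbers; an LLM emitting this format must get those numbers right.
import difflib

before = ["line 1\n", "line 2\n", "line 3\n"]
after = ["line 1\n", "line 2 changed\n", "line 3\n"]

diff = "".join(difflib.unified_diff(before, after, "a/file.txt", "b/file.txt"))
print(diff)
# The hunk header "@@ -1,3 +1,3 @@" encodes the start line and length
# for the original and updated file.
```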
diff-fenced
Most LLM coding tools, including the Claude Code CLI, primarily use the diff-fenced edit format.
Despite its name, the diff-fenced format is entirely different from the
diff format. The most important difference is that the
diff edit format uses line numbers, while the diff-fenced
edit format does not. Instead, the filename and search/replace blocks are
placed within standard Markdown code fences (triple backticks).
Because the diff-fenced edit format lacks line numbers, the SEARCH
block must be a 100% perfect match, including every space, tab, and newline.
If the LLM hallucinates even a single character, the search fails and the edit
cannot be applied.
Large files often contain repetitive patterns. Increasing the length of the SEARCH block linearly increases the probability of a character-level mismatch due to hallucination. Since the LLM must reproduce the existing code identically, longer context blocks actually decrease the reliability of the edit.
When a unique match is not found (usually because the LLM hallucinated a
character in the SEARCH block), most coding tools switch to the slower and
more expensive whole edit format for that file.
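This matching behavior can be sketched with a hypothetical apply_search_replace helper (for illustration only, not Aider's actual implementation):

```python
# Apply a diff-fenced style SEARCH/REPLACE edit: the SEARCH text must
# match the file content exactly, and exactly once, or the edit fails.
# (Hypothetical helper for illustration, not Aider's actual code.)
def apply_search_replace(content, search, replace):
    """Return updated content, or None when SEARCH does not match exactly once."""
    if content.count(search) != 1:
        return None  # no match (e.g. a hallucinated character) or ambiguous match
    return content.replace(search, replace, 1)

file_text = "def greet(name):\n    print('Hello, ' + name)\n"

# An exact match succeeds:
assert apply_search_replace(
    file_text, "print('Hello, ' + name)", "print(f'Hello, {name}')"
) is not None

# A single wrong character (double quotes instead of single) makes it fail:
assert apply_search_replace(
    file_text, 'print("Hello, " + name)', "print(f'Hello, {name}')"
) is None
```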
whole
The LLM returns the entire updated content of every file that needs a change.
- Pros: Extremely simple and robust; less likely to hallucinate where a change belongs because it rewrites everything.
- Cons: Slow and expensive for large files since it uses more tokens to resend unchanged code.
- Best for small files, new file creation, or models that struggle with counting.
Prompt
The aider prompt displays the current edit format when it has been overridden:
diff>
diff-fenced>
whole>
If the model does not support the specified edit format, the Aider prompt will not show the requested edit format.
Format Comparison
| Format | Output Type | Token Usage | Recommended Model |
|---|---|---|---|
| whole | Full file replacement | High | Weak/local models (e.g., Llama-3-8B) |
| diff | unidiff | Low | GPT-4o, Claude 3.5 Sonnet |
| diff-fenced | Markdown | Low | Gemini 2.5 Pro/Flash |
Watch files
Aider includes a feature called Watch Files that allows it to monitor project files for trigger comments. When you add a comment containing AI! to a file, Aider detects it, treats the comment as a prompt, and automatically applies the requested changes.
AI comments in source files can use #, // or /* */:
// AI! put the date last modified here
# AI! put the date last modified here
/* or we could be much more verbose.
- with bullet points
- and stuff
AI! */
Enable the feature like this:
PS F:\work> aider --watch-files
Aider v0.86.2 Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 4096 tokens, auto refresh
>
You will probably find this works faster with --edit-format=diff:
$ aider --watch-files --edit-format=diff
Bug: Markdown Files Are Not Processed
For AI comments in markdown files, you should be able to use #,
//, or the standard HTML comment tags. When you do so, the Aider
console output will show the file being processed. However, the file is not changed.
// AI! put the date last modified here (fails)
# AI! put the date last modified here (fails)
<!-- AI! put the date last modified here (fails) -->
I fixed this problem (with the help of Aider and Gemini 3.1 Pro Preview). The pull request is here.
Running Shell Commands
The /run command (aliased as !) executes shell
commands directly from the Aider chat. You can decide whether to share the
output with the AI after the command finishes.
> !date Sun Apr 5 21:37:28 EDT 2026 Add 0.0k tokens of command output to the chat? (Y)es/(N)o [Yes]:
> !make help Available targets: all Default target. Configure and build a Release binary. build Configure (if needed) and compile a Release binary. build-windows Build native Windows binary from WSL using Visual Studio + vcpkg. check-cmake Verify CMake version 4.3.0 or higher. check-deps Verify required system libraries are installed (headers + pkg-config). check-ninja Warn with OS-specific install instructions if Ninja is not on PATH. check-vcpkg Verify vcpkg is installed (or install it automatically). clean Remove all build artifacts (both Release and Debug trees). clean-debug Remove only the Debug build tree. clean-release Remove only the Release build tree. clean-windows Remove Windows build artifacts. configure Run CMake configuration for Release build tree. configure-debug Run CMake configuration for Debug build tree. debug Configure (if needed) and compile a Debug binary (symbols included). help List all available targets. install Install the Release binary to the system prefix (default /usr/local). rebuild Clean the Release tree and build from scratch. rebuild-debug Clean the Debug tree and build from scratch. release Alias for build. Explicit Release build. run Build Release (if needed) and launch the application. setup-vcpkg Install vcpkg on Windows (called automatically by build-windows if needed). test Build Release (only if sources changed) and run unit tests. test-all Build Release (only if sources changed) and run all tests (unit + smoke + live). test-debug Build Debug unit tests (only if sources changed) and run them. (Useful for JUCE assertions). test-live Build Release (only if sources changed) and run integration tests with live app. test-smoke Build Release (only if sources changed) and run the full pipeline smoke test. Add 0.4k tokens of command output to the chat? (Y)es/(N)o [Yes]: CTRL+C
^C again to exit
Running Tests
The /test slash command is used like this:
/test make test
The syntax makes sense once you understand that the /test slash
command runs the rest of the line as a shell command and treats its output as
compilation and test output. That output is only shared with the AI if the
command fails (returns a non-zero exit code); in that case, Aider
automatically pulls the error into the chat so the AI can attempt a fix.
Let’s see it in action:
$ aider --watch-files
Aider v0.86.2 Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output Weak model: gemini/gemini-3.1-flash-lite-preview Git repo: .git with 159 files Repo-map: using 4096 tokens, auto refresh
> /test make test Checking system dependencies... All system dependencies found. Build Env: RAM=24030MB, Cores=24, JOBS=11, Disk=11950MB (tmpfs) -- The CXX compiler identification is GNU 15.2.0 -- The C compiler identification is GNU 15.2.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Found PkgConfig: /usr/bin/pkg-config (found version "1.8.1") Cloning into 'juce-src'...
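The share-on-failure behavior of /test can be sketched in Python (hypothetical run_test helper, not Aider's actual implementation):

```python
# Mimic /test: run a shell command; only return its output (for the chat)
# when the command fails with a non-zero exit code.
# (Hypothetical helper for illustration, not Aider's actual code.)
import subprocess

def run_test(command):
    """Return combined output if the command failed, else None."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    if result.returncode == 0:
        return None  # success: nothing is sent to the LLM
    return result.stdout + result.stderr

# A passing command contributes nothing to the chat:
assert run_test("true") is None
# A failing command's output would be pulled into the chat for the AI to fix:
assert "compile error" in run_test("echo compile error; false")
```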
Web Browser
This feature is not ready for general usage.
Once launched as shown below, you can work with Aider in your web browser at localhost:8501.
$ aider --gui
You need to install the aider browser feature
/home/mslinn/.local/share/uv/tools/aider-chat/bin/python -m pip install --upgrade --upgrade-strategy only-if-needed 'aider-chat[browser]'
Run pip install? (Y)es/(N)o [Yes]:
Installing: /home/mslinn/.local/share/uv/tools/aider-chat/bin/python -m pip install --upgrade --upgrade-strategy only-if-needed 'aider-chat[browser]'
Installation complete.
CONTROL-C to exit...
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://192.168.1.127:8501
Aider v0.86.2
Main model: gemini/gemini-3.1-pro-preview with whole edit format
Weak model: gemini/gemini-3.1-flash-lite-preview
Git repo: .git with 688 files
Repo-map: using 4096 tokens, auto refresh
As you can see, this Aider feature uses Streamlit.
Some important features are lacking, most notably:
- Changes are shown in the web browser, but are not written to disk.
- Slash commands are not supported in the web browser, and once the web browser has launched, the original console chat is no longer available.
- The Markdown rendered in the web browser from the LLM’s response cannot easily be copied. Selection is awkward, and only plain text is captured.
- There is no history capability for user input.
Large Projects
If your project contains more than a thousand files, you might want to restrict Aider to a subdirectory, because Aider’s watch mechanism is not as robust as it might be.
$ cd $msp/collections/_llm
$ AIDER_SUBTREE_ONLY=true aider --watch-files
Aider v0.86.2
Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output
Weak model: gemini/gemini-3.1-flash-lite-preview
Git repo: ../../.git with 247 files
Repo-map: using 4096 tokens, auto refresh
Note: in-chat filenames are always relative to the git working dir, not the current working dir.
Cur working dir: /var/sitesUbuntu/www.mslinn.com/collections/_llm
Git working dir: /var/sitesUbuntu/www.mslinn.com
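Another way to trim what Aider indexes and watches is an .aiderignore file at the git root, which uses the same syntax as .gitignore. A hypothetical example (the entries shown are placeholders, not from my site):

```
# .aiderignore - paths Aider should not index or watch (gitignore syntax)
node_modules/
_site/
build/
*.log
```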
Debugging
--verbose is your friend.
$ AIDER_SUBTREE_ONLY=true aider --watch-files --verbose
Too soon to check version: 22.9 hours
Command Line Args: --watch-files --verbose
Environment Variables:
  AIDER_SUBTREE_ONLY: true
Config File (/home/mslinn/.aider.conf.yml):
  anthropic-api-key: ...eQAA
  api-key: ['gemini=<redacted>']
  dark-mode: True
  editor-model: anthropic/claude-sonnet-4-6
  model: anthropic/claude-sonnet-4-6
  timeout: 60
  show-model-warnings: False
  weak-model: claude-3-5-haiku-latest
Defaults:
  --set-env: []
  --model-settings-file: .aider.model.settings.yml
  --model-metadata-file: .aider.model.metadata.json
  --cache-keepalive-pings: 0
  --map-refresh: auto
  --map-multiplier-no-files: 2
  --input-history-file: /var/sitesUbuntu/www.mslinn.com/.aider.input.history
  --chat-history-file: /var/sitesUbuntu/www.mslinn.com/.aider.chat.history.md
  --user-input-color: #00cc00
  --tool-error-color: #FF2222
  --tool-warning-color: #FFA500
  --assistant-output-color: #0088ff
  --code-theme: default
  --aiderignore: /var/sitesUbuntu/www.mslinn.com/.aiderignore
  --lint-cmd: []
  --test-cmd: []
  --voice-format: wav
  --voice-language: en
  --encoding: utf-8
  --line-endings: platform
  --env-file: /var/sitesUbuntu/www.mslinn.com/.env
Option settings:
  - 35turbo: False
  - 4: False
  - 4_turbo: False
  - 4o: False
  - add_gitignore_files: False
  - aiderignore: /var/sitesUbuntu/www.mslinn.com/.aiderignore
  - alias: None
  - analytics: None
  - analytics_disable: False
  - analytics_log: None
  - analytics_posthog_host: None
  - analytics_posthog_project_api_key: None
  - anthropic_api_key: ...eQAA
  - api_key: ['gemini=<redacted>']
  - apply: None
  - apply_clipboard_edits: False
  - assistant_output_color: #00FFFF
  - attribute_author: None
  - attribute_co_authored_by: True
  - attribute_commit_message_author: False
  - attribute_commit_message_committer: False
  - attribute_committer: None
  - auto_accept_architect: True
  - auto_commits: True
  - auto_lint: True
  - auto_test: False
  - cache_keepalive_pings: 0
  - cache_prompts: False
  - chat_history_file: /var/sitesUbuntu/www.mslinn.com/.aider.chat.history.md
  - chat_language: None
  - check_model_accepts_settings: True
  - check_update: True
  - code_theme: monokai
  - commit: False
  - commit_language: None
  - commit_prompt: None
  - completion_menu_bg_color: None
  - completion_menu_color: None
  - completion_menu_current_bg_color: None
  - completion_menu_current_color: None
  - config: None
  - copy_paste: False
  - dark_mode: True
  - deepseek: False
  - detect_urls: True
  - dirty_commits: True
  - disable_playwright: False
  - dry_run: False
  - edit_format: None
  - editor: None
  - editor_edit_format: None
  - editor_model: anthropic/claude-sonnet-4-6
  - encoding: utf-8
  - env_file: /var/sitesUbuntu/www.mslinn.com/.env
  - exit: False
  - fancy_input: True
  - file: None
  - files: []
  - git: True
  - git_commit_verify: False
  - gitignore: True
  - gui: False
  - haiku: False
  - input_history_file: /var/sitesUbuntu/www.mslinn.com/.aider.input.history
  - install_main_branch: False
  - just_check_update: False
  - light_mode: False
  - line_endings: platform
  - lint: False
  - lint_cmd: []
  - list_models: None
  - llm_history_file: None
  - load: None
  - map_multiplier_no_files: 2
  - map_refresh: auto
  - map_tokens: None
  - max_chat_history_tokens: None
  - message: None
  - message_file: None
  - mini: False
  - model: anthropic/claude-sonnet-4-6
  - model_metadata_file: .aider.model.metadata.json
  - model_settings_file: .aider.model.settings.yml
  - multiline: False
  - notifications: False
  - notifications_command: None
  - o1_mini: False
  - o1_preview: False
  - openai_api_base: None
  - openai_api_deployment_id: None
  - openai_api_key: None
  - openai_api_type: None
  - openai_api_version: None
  - openai_organization_id: None
  - opus: False
  - pretty: True
  - read: None
  - reasoning_effort: None
  - restore_chat_history: False
  - set_env: []
  - shell_completions: None
  - show_diffs: False
  - show_model_warnings: False
  - show_prompts: False
  - show_release_notes: None
  - show_repo_map: False
  - skip_sanity_check_repo: False
  - sonnet: False
  - stream: True
  - subtree_only: True
  - suggest_shell_commands: True
  - test: False
  - test_cmd: []
  - thinking_tokens: None
  - timeout: 60.0
  - tool_error_color: #FF3333
  - tool_output_color: None
  - tool_warning_color: #FFFF00
  - upgrade: False
  - user_input_color: #32FF32
  - verbose: True
  - verify_ssl: True
  - vim: False
  - voice_format: wav
  - voice_input_device: None
  - voice_language: en
  - watch_files: True
  - weak_model: claude-3-5-haiku-latest
  - yes_always: None
Checking imports for version 0.86.2 and executable /home/mslinn/.local/share/uv/tools/aider-chat/bin/python
Installs file: /home/mslinn/.aider/installs.json
Installs file exists and loaded
Not first run, loading imports in background thread
Loaded model settings from:
  - /home/mslinn/.aider.model.settings.yml
Searched for model settings files:
  - /home/mslinn/.aider.model.settings.yml
  - /var/sitesUbuntu/www.mslinn.com/.aider.model.settings.yml
  - /var/sitesUbuntu/www.mslinn.com/collections/_llm/.aider.model.settings.yml
Loaded model metadata from:
  - /home/mslinn/.local/share/uv/tools/aider-chat/lib/python3.12/site-packages/aider/resources/model-metadata.json
Model metadata:
{
  "cache_creation_input_token_cost": 3.75e-06,
  "cache_read_input_token_cost": 3e-07,
  "input_cost_per_token": 3e-06,
  "litellm_provider": "anthropic",
  "max_input_tokens": 1000000,
  "max_output_tokens": 64000,
  "max_tokens": 64000,
  "mode": "chat",
  "output_cost_per_token": 1.5e-05,
  "search_context_cost_per_query": {
    "search_context_size_high": 0.01,
    "search_context_size_low": 0.01,
    "search_context_size_medium": 0.01
  },
  "supports_assistant_prefill": true,
  "supports_computer_use": true,
  "supports_function_calling": true,
  "supports_pdf_input": true,
  "supports_prompt_caching": true,
  "supports_reasoning": true,
  "supports_response_schema": true,
  "supports_tool_choice": true,
  "supports_vision": true,
  "tool_use_system_prompt_tokens": 346
}
Model settings:
accepts_settings: []
cache_control: false
caches_by_default: false
edit_format: "whole"
editor_edit_format: "editor-whole"
editor_model_name: "anthropic/claude-sonnet-4-6"
examples_as_sys_msg: false
extra_params: null
lazy: false
name: "anthropic/claude-sonnet-4-6"
overeager: false
reasoning_tag: null
reminder: "user"
remove_reasoning: null
send_undo_reply: false
streaming: true
system_prompt_prefix: null
use_repo_map: false
use_system_prompt: true
use_temperature: true
weak_model_name: "claude-3-5-haiku-latest"
RepoMap initialized with map_mul_no_files: 2
Aider v0.86.2
Main model: anthropic/claude-sonnet-4-6 with whole edit format, infinite output
Weak model: gemini/gemini-3.1-flash-lite-preview
Git repo: ../../.git with 247 files
Repo-map: using 4096 tokens, auto refresh
Note: in-chat filenames are always relative to the git working dir, not the current working dir.
Cur working dir: /var/sitesUbuntu/www.mslinn.com/collections/_llm
Git working dir: /var/sitesUbuntu/www.mslinn.com
────────────────────────────────────────────────────────── >
Voice Input
Windows
You have several options for using your voice with Aider. If your computer runs Windows 10 or 11, see Microsoft Voice Control.
Mac
I do not own an up-to-date Apple Mac, so I am unable to play with the current voice capabilities of Macs.
Via ChatGPT
If you have an OpenAI subscription, Aider can coordinate remote voice processing performed by ChatGPT, even if your computer is a modest laptop.
Faster Whisper
You can perform speech-to-text on NVIDIA GPUs and CPUs using Faster Whisper. Locally processed speech-to-text is more secure and private, and it works without an internet connection, so you might want to take a few minutes to set it up. For optimal performance, particularly for long audio, GPUs are faster than CPUs.
Since Anthropic does not provide a native speech-to-text API for Aider to use, you can run a local OpenAI-compatible speech-to-text server process like Whisper or Faster-Whisper.
$ pip install faster-whisper faster-whisper-server
Now launch Faster-Whisper, and point Aider at the local server by setting
openai-api-base, so that Aider believes OpenAI runs on your local machine:
$ aider \
--openai-api-base http://localhost:8000/v1 \
--model anthropic/claude-sonnet-4-6
To run Faster-Whisper as a native Windows application, launch it from a PowerShell or CMD console.
I had problems running Aider as a native Windows app from some WSL directories
(via the \\wsl.localhost\ UNC path to WSL’s
ext4 file system). Faster-Whisper worked from the Linux user
home directory
(\\wsl.localhost\), but not from
a project directory
(\\wsl.localhost\).
IDE Usage
There are community-developed extensions that provide a graphical interface for Aider:
- Aider Composer: Adds a GUI similar to Cursor’s composer. It supports chat modes like ask, code, and architect, and provides inline diff previews to review changes before applying them.
- VSCode Aider: Adds commands to the Command Palette (CTRL+SHIFT+P) to start Aider, select models, and manage files directly from the explorer. When I tried it, this message appeared: "Aider is not yet fully optimized for Windows. Some features may behave unexpectedly. Do you want to continue?"
- Aider: An older extension that automates adding/dropping files from the chat based on which files you have open in your editor.
Comparison
compare Aider v2026 with other popular clis for agentic programming tasks. pay particular attention to maintaining a context through a long conversation
Aider v2026 remains the gold standard for developers who want control and efficiency in a terminal environment, whereas its main competitors—particularly Claude Code—have moved toward autonomous, massive-context operation.
For maintaining context through long conversations, the philosophical divide is stark: Aider relies on a curated, structural understanding of your code (Repository Map), while competitors increasingly rely on brute-forcing massive context windows (1M+ tokens).
Quick Verdict: Aider vs. The Field
| Feature | Aider v2026 | Claude Code (Anthropic) | OpenCode / OpenDevin |
|---|---|---|---|
| Context Strategy | Repository Map (AST-based summary) + Manual File Selection | Massive Context (1M token window) + “read everything” approach | Iterative Planning (Agentic loops that refresh context per step) |
| Long Conversation Handling | Degrades gracefully. You must occasionally reset/summarize. It prioritizes relevance to save tokens. | Excellent but expensive. It keeps the entire history in context longer, but costs spike significantly. | Variable. Depends on the “plan” state; can get stuck in loops if the plan drifts. |
| Cost Efficiency | High. Uses 4-5x fewer tokens for similar tasks. | Low. “Brute force” context management burns tokens rapidly. | Medium. Varies with the specific agentic runtime. |
| Best For | Surgical, accurate edits; paired collaboration; keeping costs low. | Large-scale refactors; exploring unfamiliar codebases; “fix this whole repo.” | Autonomous tasks where you want to “set and forget.” |
Deep Dive: Context Management in Long Conversations
1. Aider: The “Surgical” Approach (Relevance > Volume)
Aider's approach to long conversations is defined by its Repository Map. Instead of stuffing your entire project into the LLM's context window, Aider builds a compressed map of your codebase based on the Abstract Syntax Tree (AST).
- How it handles drift: As a conversation gets long, Aider does not infinitely expand its memory. It relies on you to /add or /drop files. This keeps the context focused on the immediate task.
- The “Long Conversation” Bottleneck: In very long sessions (20+ turns), Aider can lose track of earlier instructions if they fall out of the sliding window.
- The Fix: Aider encourages a workflow where you “commit frequently.” Because Aider is git-native, the “context” is effectively saved to your git history. You are expected to clear the chat (/clear) after a major success and start fresh, relying on the git log and the Repo Map to maintain continuity.
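To make the Repository Map idea concrete, here is a toy sketch. Aider actually builds its map with tree-sitter across many languages; this illustration uses Python’s stdlib ast module purely to show what an AST-derived summary looks like: keep the signatures of top-level definitions, discard the bodies.

```python
import ast

def map_source(source: str) -> list[str]:
    """Toy repository map: keep only top-level signatures, drop bodies."""
    entries = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            entries.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            entries.append(f"class {node.name}")
    return entries

source = """
class Invoice:
    def total(self):
        return sum(self.lines)

def load(path):
    return open(path).read()
"""
print(map_source(source))  # → ['class Invoice', 'def load(path)']
```

A map like this can be orders of magnitude smaller than the source it summarizes, which is how Aider keeps token usage low while still giving the LLM a structural overview of the whole repository.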
2. Claude Code: The “Brute Force” Approach (Volume > Relevance)
Claude Code (often cited as Aider's primary CLI rival in 2026) utilizes the massive context window of models like Claude 3.5/3.7 Opus to “remember everything.”
- How it handles drift: It simply keeps reading. With a ~1M token window, it can theoretically hold the entire conversation history and file contents for hours.
- The Trade-off: This is incredibly effective for “recall” (“Wait, why did we decide to change that variable 10 turns ago?”), but it is prohibitively expensive for daily driving compared to Aider. It is also slower, as the “time-to-first-token” increases as the context fills up.
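The cost claim is easy to sanity-check. Using the Anthropic per-token price shown in the verbose output earlier (USD 3e-06 per input token), and ignoring prompt caching and output tokens, resending a large context on every turn adds up quickly:

```python
INPUT_COST_PER_TOKEN = 3e-06  # USD, from the Anthropic model metadata

def brute_force_cost(context_tokens: int, turns: int) -> float:
    """Cost of resending the full context on every conversational turn,
    ignoring prompt caching and output tokens."""
    return context_tokens * turns * INPUT_COST_PER_TOKEN

# A 500,000-token context resent over a 20-turn session:
print(round(brute_force_cost(500_000, 20), 2))  # → 30.0 (USD)
```

Prompt caching (also listed in the metadata, at a tenth of the price for cache reads) softens this, but the cost still scales linearly with context size, which is the point the comparison makes.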
3. OpenCode / OpenDevin: The “Agentic Loop” Approach
These tools operate more like autonomous agents than pair programmers. Their context strategy is based on plans.
- How it handles drift: They maintain a “state” or “plan” separate from the chat history. The agent looks at the plan, executes a step, and then updates the plan.
- The Risk: If the plan becomes inaccurate during a long conversation, the agent can hallucinate or get stuck in a loop, repeatedly trying to fix a bug based on outdated context. Aider avoids this by keeping the human in the loop to “prune” the context manually.
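None of these tools publish this exact loop; the toy sketch below just illustrates the failure mode: an agent that re-executes steps from a stale plan can loop forever unless something bounds the retries.

```python
from collections import Counter

def run_plan(plan: list[str], execute, max_retries: int = 2) -> list[str]:
    """Toy agentic loop: execute plan steps in order, retrying failures.
    The retry bound is the guard that keeps a drifted plan from looping."""
    attempts = Counter()
    completed = []
    while plan:
        step = plan[0]
        attempts[step] += 1
        if attempts[step] > max_retries:
            raise RuntimeError(f"stuck on {step!r}: the plan has drifted")
        if execute(step):
            completed.append(plan.pop(0))  # success: advance to the next step
        # on failure, the same step is retried on the next iteration
    return completed

print(run_plan(["lint", "test"], lambda step: True))  # → ['lint', 'test']
```

Aider sidesteps the need for such a guard entirely by making the human the loop controller: you decide when a step has failed and what to try next.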
Summary Recommendation
- Stick with Aider if: You know your codebase well. You want a tool that feels like a sharp, fast scalpel. You prefer to manage the “attention” of the AI yourself (by adding/dropping files) to ensure high-quality code generation and low costs.
- Switch to Claude Code if: You are entering a brand new, large repo and need an agent to “read everything” to explain it to you, or if you are doing a massive refactor where “remembering” a change from 30 minutes ago is critical, regardless of the cost.