lean-ctx
https://github.com/yvgude/lean-ctx
📊 Stats
⭐ Stars: 109
📝 Language: Rust
📝 Description: Hybrid Context Optimizer — Shell Hook + MCP Server. Reduces LLM token consumption by 89-99%. Single Rust binary, zero dependencies.
⭐ Star Growth (12 months)
🔬 Research Notes
Description
Hybrid Context Optimizer — Shell Hook + MCP Server. Reduces LLM token consumption by 89-99%. Single Rust binary, zero dependencies.
Topics
agentic-coding, ai, claude-code, cli, compression, context-compression, copilot, cursor, developer-tools, llm, mcp, open-source, rust, token-optimization, tokens
README Excerpt
# lean-ctx
Context Intelligence Engine with CCP + TDD. Shell Hook + MCP Server. 21 MCP tools, 90+ shell patterns, cross-session memory (CCP), LITM-aware positioning, tree-sitter AST for 14 languages. Single Rust binary.
[Website](https://leanctx.com) · [Install](#installation) · [Quick Start](#quick-start) · [CLI Reference](#cli-commands) · [MCP Tools](#21-mcp-tools) · [Changelog](CHANGELOG.md) · [vs RTK](#lean-ctx-vs-rtk) · [Discord](https://discord.gg/pTHkG9Hew9)
---
lean-ctx reduces LLM token consumption by up to 99% through three complementary mechanisms in a single binary:
1. Shell Hook — Transparently compresses CLI output (90+ patterns) before it reaches the LLM. Works without LLM cooperation.
2. MCP Server — 21 tools for cached file reads, adaptive mode selection, incremental deltas, dependency maps, intent detection, cross-file dedup, project graph, cross-session memory (CCP), and session metrics. Works with Cursor, GitHub Copilot, Claude Code, Windsurf, OpenAI Codex, Google Antigravity, OpenCode, and any MCP-compatible editor.
3. AI Tool Hooks — One-command integration for Claude Code, Cursor, Gemini CLI, Codex, Windsurf, and Cline via `lean-ctx init --agent`.
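To make the shell-hook idea concrete, here is a hypothetical sketch (not lean-ctx's actual code, which ships 90+ patterns): a compressor that collapses consecutive duplicate output lines, the kind of repetition build tools and test runners produce in bulk.

```rust
// Hypothetical sketch of the shell-hook idea: compress repetitive CLI
// output before it reaches the LLM. This toy version only collapses
// consecutive duplicate lines into "line (xN)".
fn compress_output(raw: &str) -> String {
    let mut out: Vec<String> = Vec::new();
    let mut prev: Option<(&str, usize)> = None;
    for line in raw.lines() {
        match prev {
            // Same line repeated: just bump the counter.
            Some((p, n)) if p == line => prev = Some((p, n + 1)),
            // New line: flush the previous run, then start a new one.
            Some((p, n)) => {
                out.push(if n > 1 { format!("{p} (x{n})") } else { p.to_string() });
                prev = Some((line, 1));
            }
            None => prev = Some((line, 1)),
        }
    }
    if let Some((p, n)) = prev {
        out.push(if n > 1 { format!("{p} (x{n})") } else { p.to_string() });
    }
    out.join("\n")
}

fn main() {
    let raw = "warning: unused import\nwarning: unused import\nwarning: unused import\nBuild finished";
    println!("{}", compress_output(raw));
    // prints:
    // warning: unused import (x3)
    // Build finished
}
```

A real hook would sit between the shell and the agent's transcript, so compression happens regardless of whether the LLM cooperates — which is the point of strategy 1 above.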
Token Savings (Typical Cursor/Claude Code Session)
| Operation | Frequency | Standard (tokens) | lean-ctx (tokens) | Savings |
|---|---|---|---|---|
| File reads (cached) | 15x | 30,000 | 195 | -99% |
| File reads (map mode) | 10x | 20,000 | 2,000 | -90% |
| ls / find | 8x | 6,400 | 1,280 | -80% |
| git status/log/diff | 10x | 8,000 | 2,400 | -70% |
| grep / rg | 5x | 8,000 | 2,400 | -70% |
| cargo/npm build | 5x | 5,000 | 1,000 | -80% |
| Test runners | 4x | 10,000 | 1,000 | -90% |
| curl (JSON) | 3x | 1,500 | 165 | -89% |
| docker ps/build | 3x | 900 | 180 | -80% |
| Total | | ~89,800 | ~10,620 | -88% |
> Estimates based on medium-sized TypeScript/Rust projects. MCP cache hits reduce re-reads to ~13 tokens each.
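The totals in the table can be checked with a quick sketch (values copied from the table above):

```rust
// Sanity-check the savings table: sum both columns and compute the
// overall reduction percentage.
fn table_totals() -> (u32, u32) {
    let standard = [30_000u32, 20_000, 6_400, 8_000, 8_000, 5_000, 10_000, 1_500, 900];
    let lean = [195u32, 2_000, 1_280, 2_400, 2_400, 1_000, 1_000, 165, 180];
    (standard.iter().sum(), lean.iter().sum())
}

fn main() {
    let (s, l) = table_totals();
    let saving = 100.0 * (1.0 - l as f64 / s as f64);
    println!("standard={s} lean-ctx={l} savings={saving:.0}%");
    // prints: standard=89800 lean-ctx=10620 savings=88%
}
```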
Installation
Homebrew (macOS / Linux)
```bash
brew tap yvgude/lean-ctx
brew install lean-ctx
```
Arch Linux (AUR)
```bash
yay -S lean-ctx # builds from source (crates.io)
# or
yay -S lean-ctx-bin # pre-built binary (GitHub Releases)
```
Cargo
```bash
cargo install lean-ctx
```
Build from Source
```bash
git clone https://github.com/yvgude/lean-ctx.git
cd lean-ctx/rust
cargo build --release
cp target/release/lean-ctx ~/.local/bin/
```
> Add ~/.local/bin to your PATH if needed:
> ```bash
> echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc # or ~/.bashrc
> ```
Verify Installation
```bash
lean-ctx --version # Should show "lean-ctx 2.0.0"
lean-ctx gain # Should show token savings stats
```
Token Dense Dialect (TDD)
lean-ctx introduces TDD mode — enabled by default. TDD compresses LLM communication using mathematical symbols and short identifiers:
| Symbol | Meaning |
|---|---|
| λ | function/handler |
| § | struct/class/module |
| ∂ | interface/trait |
| τ | type alias |
| ε | enum |
| α1, α2... | short identifier IDs |
How it works:
- `λ+handle(⊕,path:s)→s` replaces `pub async fn handle(&self, path: String) -> String`
- long identifiers become short IDs (`α1, α2...`), decoded via a `§MAP` legend at the end

Result: 8-25% additional savings on top of existing compression.
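The rewrite shown above can be reproduced with plain string substitutions over the symbol table — a hypothetical illustration only, not lean-ctx's actual encoder (which also assigns α-IDs and emits the `§MAP` legend):

```rust
// Toy TDD encoder: substitute Rust tokens with the dialect's symbols
// (λ for fn, ⊕ for &self, s for String, → for ->). Illustration only.
fn to_tdd(sig: &str) -> String {
    sig.replace("pub async fn ", "λ+")
        .replace("&self, ", "⊕,")
        .replace(": String", ":s")
        .replace(" -> String", "→s")
}

fn main() {
    let sig = "pub async fn handle(&self, path: String) -> String";
    let tdd = to_tdd(sig);
    let saved = sig.chars().count() - tdd.chars().count();
    println!("{tdd} ({saved} chars saved)");
    // prints: λ+handle(⊕,path:s)→s (30 chars saved)
}
```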
---
*Researched: 2026-03-25*