
Setting Up AI Development with uv — Start a Claude SDK Project in Under 1 Second

DEV Community
Jangwook Kim

Until last year, every time I started an AI project I'd go through the same ritual: `python -m venv .venv`, `source .venv/bin/activate`, `pip install anthropic openai`... and then wait. Sometimes over two minutes, watching anthropic, torch, and pydantic download one by one. The context-switching cost added up. Every new experiment branch meant environment setup that broke the flow, and the "works on my machine" problem never went away either.

Then I started using uv, a Python package manager written in Rust from Astral, the team behind Ruff. I benchmarked it today while setting up a Claude SDK project: installing anthropic along with 16 packages total took 0.874 seconds. The same operation with pip would have taken 20-40 seconds. This is a complete hands-on guide to setting up an AI development environment with uv 0.11 from scratch.

## Why not pip, Poetry, or conda?

Honestly, pip itself isn't the problem. It's a proven tool that has downloaded billions of packages. The issue is the combination of speed and environment isolation.

Why pip is slow for AI development is structural: it resolves packages sequentially and doesn't cache downloaded files efficiently. When you run `pip install anthropic openai torch`, it fetches each package's metadata, resolves dependencies, and checks for conflicts serially.

Poetry is much better at dependency management: declarative `pyproject.toml`, lock file support. But Poetry itself is written in Python, which puts a ceiling on its speed, and debugging a broken environment can get tedious fast.

conda is strong for Python version management, but environments tend to balloon to several gigabytes, and it's awkward to use in Docker-based CI.

uv is written in Rust, which gives it fundamentally different performance on parallel downloads and cache utilization.
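The structural difference is easy to model. Here's a toy sketch (not uv's actual code) that simulates per-package metadata fetches with artificial latency, resolved serially the way pip does versus concurrently with a thread pool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

PACKAGES = ["anthropic", "openai", "torch", "pydantic", "httpx"]

def fetch_metadata(name: str) -> str:
    """Stand-in for one network round trip (~50 ms of latency)."""
    time.sleep(0.05)
    return f"{name}: resolved"

# Serial, pip-style: each fetch waits for the previous one to finish.
start = time.perf_counter()
serial = [fetch_metadata(p) for p in PACKAGES]
serial_s = time.perf_counter() - start

# Parallel, uv-style: all fetches are in flight at once.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(PACKAGES)) as pool:
    parallel = list(pool.map(fetch_metadata, PACKAGES))
parallel_s = time.perf_counter() - start

print(f"serial:   {serial_s:.2f}s")
print(f"parallel: {parallel_s:.2f}s")
```

With five packages at 50 ms each, the serial pass takes roughly the sum of all round trips while the parallel pass takes roughly the slowest single one. Real resolvers also have to handle ordering constraints, but the shape of the win is the same.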
## What I measured today

- `uv init claude-agent-demo` → 0.435s
- `uv add anthropic` → 0.874s (16 packages, including the 1.9 MB pydantic-core)
- `uv add openai httpx python-dotenv` → 0.555s (3 more packages)
- `uv sync` (cache hit, 19 packages) → 0.074s (29 ms install)

## Installing uv

Almost nothing is required. Check the following by OS:

- macOS/Linux: curl or Homebrew
- Windows: PowerShell
- No need to pre-install Python; uv can manage Python versions directly

That last point matters. You don't need pyenv, conda, or a specific system Python to get started.

macOS and Linux, one line:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Or via Homebrew:

```bash
brew install uv
```

Windows PowerShell:

```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

After installation:

```bash
uv --version
# uv 0.11.11 (ed7b06001 2026-05-06)
```

This is the latest release as of today (2026-05-07). Astral releases frequently; `uv self update` keeps you current.

## Creating a project

```bash
uv init claude-agent-demo
cd claude-agent-demo
```

Output:

```
Initialized project `claude-agent-demo` at `/path/to/claude-agent-demo`
```

Completes in 0.435 seconds. The generated structure:

```
claude-agent-demo/
├── main.py
├── pyproject.toml
└── README.md
```

The pyproject.toml that uv generates:

```toml
[project]
name = "claude-agent-demo"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = []
```

pyproject.toml is the Python ecosystem's standard project file. Unlike requirements.txt, it's the source of truth for both development and packaging.

Set up your `.env` at this point, keeping your API key out of source control:

```bash
cat > .env << 'EOF'
ANTHROPIC_API_KEY=sk-ant-...
EOF
```

## Python version management

uv can list, install, and pin Python versions itself:

```bash
uv python list
```

Output (abridged):

```
cpython-3.14.5rc1-macos-aarch64-none
cpython-3.13.13-macos-aarch64-none
cpython-3.13.11-macos-aarch64-none    /opt/homebrew/bin/python3.13
cpython-3.12.8-macos-aarch64-none     /usr/local/bin/python3.12
```

Create a project pinned to a specific version:

```bash
uv init my-project --python 3.13
```

Pin an existing project:

```bash
uv python pin 3.12
```

This creates a `.python-version` file, ensuring everyone on the team uses the same Python version.
## One lock file for the whole team

One tool replaces pyenv + virtualenv + pip. Commit `uv.lock` to git: with that file checked in, anyone cloning the repo gets exactly the same package versions.

After cloning:

```bash
git clone <repo-url>
cd my-project
uv sync
```

From today's test, after deleting `.venv` and running `uv sync`:

```
Using CPython 3.11.12
Creating virtual environment at: .venv
Resolved 21 packages in 0.66ms
Installed 19 packages in 29ms
```

0.074 seconds. With a warm cache, this is what reinstalling 19 packages looks like.

## CI with GitHub Actions

```yaml
name: Test
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v4
        with:
          version: "0.11.11"
      - name: Set up Python
        run: uv python install
      - name: Install dependencies
        run: uv sync --all-extras --dev
      - name: Run tests
        run: uv run pytest
```

The official astral-sh/setup-uv action handles caching automatically. When building an MCP server in Python, this same CI pattern applies: `uv add fastmcp` for the dependency, `uv sync` in GitHub Actions.

## Global CLI tools: uvx replaces pipx

uv handles CLI tool installation too, replacing pipx.

```bash
# Run ruff without a permanent install
uvx ruff check .

# One-shot run without installing (like npx)
uvx --from httpie http https://api.anthropic.com/v1/models

# Persistent install
uv tool install ruff
uv tool install black
```

`uvx` runs a tool immediately without a permanent install. For pinning:

```bash
uvx ruff@<version> check .  # pin to a version
uvx --python 3.12 mypy .    # specify the Python version
```

Common tools for AI development:

```bash
uv tool install ruff        # linter/formatter
uv tool install mypy        # type checking
uv tool install pytest      # test runner
uv tool install pre-commit  # git hook management
```

No more separate `pip install --user` or pipx. One tool handles project dependencies and global CLI tools.

## Migrating an existing requirements.txt project

```bash
# Works on existing projects
uv pip install -r requirements.txt
```

uv supports pip subcommands. When a full migration isn't practical, `uv pip install` bridges the gap.
## Dev dependencies and extras

Dev-only dependencies:

```bash
# Add to the dev group
uv add --dev pytest ruff mypy

# Install production dependencies only
uv sync --no-dev
```

Anthropic SDK optional extras:

```bash
# Add Amazon Bedrock support
uv add "anthropic[bedrock]"

# Add Vertex AI (GCP) support
uv add "anthropic[vertex]"
```

The bracket syntax selects optional extras: `anthropic[bedrock]` pulls in boto3, `anthropic[vertex]` pulls in the GCP client.

## Inspecting the dependency tree

To understand what installed and why:

```bash
uv tree
```

Output:

```
claude-agent-demo v0.1.0
├── anthropic v0.100.0
│   ├── anyio v4.13.0
│   │   ├── idna v3.13
│   │   └── typing-extensions v4.15.0
│   ├── httpx v0.28.1
│   │   ├── certifi v2026.4.22
│   │   └── httpcore v1.0.9
│   ├── pydantic v2.13.4
│   └── ...
└── openai v2.35.1
    └── ...
```

`(*)` marks shared dependencies. Much clearer than `pip show` for tracking down where a version came from.

To remove a package:

```bash
uv remove openai
```

The lock file updates automatically.

## Honest caveats

Performance-wise, uv is nearly perfect. But a few things deserve honesty.

**Ecosystem maturity.** uv launched in 2024 and is at v0.11 as of today, still pre-1.0. For personal projects and new codebases, it's an easy recommendation. For large teams migrating existing Poetry or pip workflows, factor in the transition cost.

**conda ecosystem compatibility.** If you need torch, CUDA toolkits, or tensorflow installed from conda channels, uv can't help you; it's PyPI-only. For pure Python AI projects (API calls, text generation, agent frameworks), this isn't an issue. For deep learning work requiring specific CUDA versions, you may still need conda alongside uv.

**The `uv run` habit shift.** Typing `uv run` instead of `python` takes adjustment. The upside is that team members can't accidentally run scripts against the system Python without noticing. The LLM coding environment optimization post touches on a similar observation: the harder part of adopting a faster tool is often changing team habits, not the tool itself.
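One way to ease that habit shift is a guard at the top of entry-point scripts that fails loudly when someone runs them against the system interpreter. A minimal sketch; the venv check itself is standard CPython behavior (`sys.prefix` is redirected inside a virtual environment while `sys.base_prefix` keeps pointing at the base install), and the error message is mine:

```python
import sys

def in_virtualenv() -> bool:
    """True when running inside a virtual environment:
    sys.prefix points at .venv, sys.base_prefix at the base install."""
    return sys.prefix != sys.base_prefix

def require_project_env() -> None:
    """Abort with a hint instead of importing against system Python."""
    if not in_virtualenv():
        raise SystemExit("Run this with `uv run main.py`, not the system interpreter.")

print("virtualenv:", in_virtualenv())
```

Call `require_project_env()` before your imports that need project dependencies; a bare `python main.py` then fails with a hint rather than a confusing `ModuleNotFoundError`.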
## Cheat sheet

```bash
# Start a new AI project
uv init my-ai-project
cd my-ai-project

# Install the Claude SDK
uv add anthropic python-dotenv

# Add multiple SDKs at once
uv add anthropic openai httpx

# Run a script (no venv activation needed)
uv run main.py

# Sync the team environment from the lock file
uv sync

# Create a project with a specific Python version
uv init my-project --python 3.13

# Inspect the dependency tree
uv tree

# Remove a package
uv remove openai

# Update uv itself
uv self update
```

I now reach for `uv init` almost automatically when starting a new AI project, and I haven't found a reason to go back to pip. For conda-dependent ML work, conda is still necessary. 0.874 seconds isn't just a speed stat: the less friction each experiment costs, the more experiments you run, and that tends to produce better code.