Product · 6 min read

OpenAI Launches Codex-Spark: 128K Context Window for Developers

OpenAI's new Codex-Spark model offers a massive 128K context window optimized for software development. The model integrates directly into VS Code, JetBrains, and GitHub Copilot workflows.

Dev Desk
Feb 12, 2026

OpenAI has launched Codex-Spark, a specialized coding model built on the GPT-5.2 architecture but fine-tuned specifically for software development workflows. The model features a 128K context window and is designed to understand entire codebases rather than individual files.

Codex-Spark represents OpenAI's answer to the growing competition in AI-assisted coding, where Anthropic's Claude Opus 4.6 (80.8% SWE-bench) and Moonshot's Kimi K2.5 (76.8% SWE-bench) have been gaining developer mindshare.

Key features include:

128K context window: Load entire project directories, not just single files. The model can reason across multiple interconnected files, dependencies, and configuration files

Multi-file editing: Generate coordinated changes across multiple files in a single operation, maintaining consistency

Test generation: Automatically generate comprehensive test suites based on existing code patterns

Codebase Q&A: Ask natural language questions about how any part of your codebase works

Git-aware: Understands branch history, can generate meaningful commit messages, and suggests PR descriptions
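The codebase-level features above all depend on fitting many files into one prompt. As a rough sketch of what that looks like on the client side, the snippet below packs path-labelled file bodies into a single codebase Q&A prompt under an assumed 128K-token budget. Everything here is illustrative: the packing strategy, the ~4-characters-per-token approximation, and the function names are assumptions, not part of any published Codex-Spark client.

```python
# Hypothetical sketch: packing several source files into one prompt for
# codebase-level Q&A, staying under an assumed 128K-token context budget.
# Token counts are approximated as len(text) // 4 (~4 chars per token);
# a real client would use the model's actual tokenizer.

CONTEXT_BUDGET_TOKENS = 128_000

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def pack_files(files: dict[str, str], question: str,
               budget: int = CONTEXT_BUDGET_TOKENS) -> str:
    """Concatenate path-labelled file bodies until the budget is reached,
    then append the natural-language question."""
    parts, used = [], approx_tokens(question)
    for path, body in files.items():
        chunk = f"### {path}\n{body}\n"
        cost = approx_tokens(chunk)
        if used + cost > budget:
            break  # skip files that would overflow the context window
        parts.append(chunk)
        used += cost
    parts.append(f"Question: {question}")
    return "\n".join(parts)

prompt = pack_files(
    {"app/models.py": "class User: ...", "app/views.py": "def index(): ..."},
    "Where is the User model referenced?",
)
print(prompt)
```

A real integration would also rank files by relevance before packing, since even 128K tokens runs out quickly on large monorepos.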

Integration is available through three channels:

GitHub Copilot: Codex-Spark is now the default model for Copilot Business and Enterprise plans

VS Code extension: Direct integration with inline suggestions, chat, and multi-file editing

JetBrains plugin: Support for IntelliJ, PyCharm, WebStorm, and other JetBrains IDEs

Early benchmarks show Codex-Spark scoring 77.3% on SWE-bench Verified — competitive with, though still behind, Opus 4.6. However, OpenAI claims advantages in generation speed (2.5x faster than Opus 4.6 for typical coding tasks) and cost efficiency.

API pricing for Codex-Spark is set at $1.50 per million input tokens and $10 per million output tokens — approximately 15% cheaper than standard GPT-5.2 for input and 29% cheaper for output, reflecting the model's more focused training.
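At those rates, per-request cost is straightforward to estimate. A minimal sketch, using only the announced prices from this article (check current pricing before relying on them):

```python
# Back-of-envelope cost for a Codex-Spark API call at the announced rates:
# $1.50 per 1M input tokens, $10 per 1M output tokens.
INPUT_RATE = 1.50 / 1_000_000    # dollars per input token
OUTPUT_RATE = 10.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of a single API request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a large-context request: 100K tokens in, 5K tokens out
print(round(request_cost(100_000, 5_000), 4))  # → 0.2
```

So a near-full-context request costs on the order of 20 cents, which is where the output-token discount matters most for multi-file edits that generate a lot of code.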

Developer response has been strong. Within the first 48 hours, over 200,000 developers activated Codex-Spark through GitHub Copilot, making it one of the fastest developer tool adoptions in OpenAI's history.

The launch intensifies the AI coding tool wars. Windsurf's Wave 13 offers free SWE-1.5 at 950 tokens/second, Cursor Pro supports multiple frontier models, and Anthropic's Claude Code has become a favorite for terminal-based development. OpenAI is betting that deep integration with GitHub — owned by Microsoft, its largest investor — gives Codex-Spark a distribution advantage that pure model quality alone cannot match.
