OpenAI Launches GPT-5.2 ‘Garlic’ with 400K Context Window for Enterprise Coding
Thursday, December 11, 2025, 07:58 PM, from eWeek
On Thursday, OpenAI released GPT-5.2, codenamed “garlic” during development, which the company bills as its most capable model yet for coding and agentic workflows. The model brings a 400,000-token context window and a 128,000-token output capacity, roughly 5x the context of GPT-4.
In a blog post, OpenAI called it “the most capable model series yet.” The launch also comes as Google accelerates its own AI push, a surge strong enough that Sam Altman recently signaled a “code red” to rally OpenAI’s pace.

“We announced this code red to really signal to the company that we want to marshal resources in one particular area, and that’s a way to really define priorities and define things that can be deprioritized,” Fidji Simo, CEO of Applications at OpenAI, told reporters in a briefing on Thursday. “We have had an increase in resources focused on ChatGPT in general. I would say that helps with the release of this model, but that’s not the reason it’s coming out this week in particular.”

What’s new in GPT-5.2

GPT-5.2 is positioned as OpenAI’s flagship for enterprise development teams and agentic systems. Key specifications include:

- 400,000-token context window: Developers can process entire codebases, lengthy API documentation, or comprehensive technical specifications in a single request
- 128,000-token max output: Enables generation of complete applications, detailed technical documentation, or extensive code refactoring in one response
- Reasoning token support: Built-in capabilities for complex problem-solving and multi-step logical operations
- Aug. 31, 2025 knowledge cutoff: More recent training data than previous models
- Text and image I/O: Supports both text and image inputs/outputs for multimodal applications

The model supports streaming, function calling, and structured outputs through OpenAI’s Chat Completions API, making it compatible with existing enterprise deployments. (A minimal usage sketch appears after the rate-limit details below.)

Pricing and economics

GPT-5.2 costs $1.75 per million input tokens and $14 per million output tokens. That’s 40% more expensive than GPT-5 ($1.25 input, $10 output), but OpenAI argues the expanded context window and improved reasoning justify the premium.

For cached inputs, pricing drops to $0.175 per million tokens, a 10x reduction that makes repeated queries against large codebases or documentation significantly cheaper. Developers on the Batch API get 50% discounts, bringing costs to $0.875 input and $7 output per million tokens for non-time-sensitive workloads. (A back-of-the-envelope cost calculation follows below.)

Enterprise implications

The 400K context window addresses a major pain point for development teams. Previously, processing large codebases required splitting files across multiple API calls and managing conversation state. GPT-5.2 handles this natively, streamlining workflows for:

- Code review and refactoring: Analyze entire applications in context
- Documentation generation: Process complete APIs and generate comprehensive docs
- Debugging complex systems: Trace issues across multiple interconnected files
- Migration projects: Understand legacy systems before modernization

The model’s agentic capabilities, handling multi-step tasks with reasoning tokens, make it suitable for autonomous coding assistants and CI/CD pipeline integration.

Rate limits and availability

GPT-5.2 is available now through OpenAI’s API with tiered rate limits. Tier 1 users start at 500 requests per minute and 500,000 tokens per minute, scaling up to Tier 5’s 15,000 RPM and 40M TPM for high-volume enterprise deployments.

The model isn’t available for fine-tuning yet, but OpenAI supports distillation for teams wanting to create smaller, specialized models from GPT-5.2’s outputs.
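As noted above, GPT-5.2 is exposed through OpenAI’s existing Chat Completions API with streaming support. The sketch below is illustrative only: it assumes the official openai Python SDK, uses the gpt-5.2 alias named in this article, and the file name, prompts, and review task are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical example: stream a code review of a large codebase dump
# that fits inside the 400K-token context window described above.
with open("codebase_dump.txt") as f:  # hypothetical file
    codebase = f.read()

stream = client.chat.completions.create(
    model="gpt-5.2",  # default alias; see the snapshot note further down
    messages=[
        {"role": "system", "content": "You are a senior code reviewer."},
        {"role": "user", "content": f"Review this codebase and list refactoring targets:\n\n{codebase}"},
    ],
    stream=True,
)

# Print tokens as they arrive rather than waiting for the full response.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```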
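To make the pricing above concrete, here is a small cost estimator based only on the per-million-token prices quoted in this article; the token counts in the example calls are hypothetical.

```python
# Per-million-token prices quoted in the article, expressed per token.
PRICES = {
    "input": 1.75 / 1_000_000,
    "cached_input": 0.175 / 1_000_000,
    "output": 14.00 / 1_000_000,
}

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0, batch: bool = False) -> float:
    """Estimated USD cost of one GPT-5.2 request at the published rates."""
    uncached = input_tokens - cached_input_tokens
    cost = (uncached * PRICES["input"]
            + cached_input_tokens * PRICES["cached_input"]
            + output_tokens * PRICES["output"])
    # The article cites a 50% Batch API discount for non-time-sensitive work.
    return cost * 0.5 if batch else cost

# Example: a 300K-token codebase prompt with 20K tokens of output,
# first as a one-off request, then with most of the prompt cached.
print(f"${estimate_cost(300_000, 20_000):.2f}")
print(f"${estimate_cost(300_000, 20_000, cached_input_tokens=250_000):.2f}")
```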
Model snapshots

OpenAI released two snapshot versions:

- gpt-5.2 (default, tracks the latest stable release)
- gpt-5.2-2025-12-11 (locked version for consistent behavior)

Enterprise teams requiring reproducible outputs should use the dated snapshot to avoid unexpected changes when OpenAI updates the default alias (see the short sketch at the end of this article).

What this means for developers

GPT-5.2 represents a shift toward AI models that can handle increasingly complex, autonomous workflows. The expanded context window eliminates architectural workarounds that developers previously needed for large-scale code analysis.

For teams evaluating GPT-5.2 against GPT-5 or competitors, the decision comes down to whether the 400K context window justifies the 40% price premium. For projects involving large codebases or comprehensive documentation, the efficiency gains likely offset the higher per-token costs.
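As mentioned in the model snapshots section, teams that need reproducible behavior can pin the dated snapshot rather than the moving alias. The sketch below is an illustration under assumptions: the two model identifiers come from this article, while the client call and prompt are hypothetical and again assume the official openai Python SDK.

```python
from openai import OpenAI

# Model identifiers from the article: the alias moves with OpenAI's releases,
# the dated snapshot stays fixed for consistent behavior.
MODEL_LATEST = "gpt-5.2"             # tracks the latest stable release
MODEL_PINNED = "gpt-5.2-2025-12-11"  # locked snapshot for reproducible output

client = OpenAI()

# Hypothetical regression check: prefer the pinned snapshot in CI so results
# do not drift when the default alias is updated.
response = client.chat.completions.create(
    model=MODEL_PINNED,
    messages=[{"role": "user", "content": "Summarize the public API of module X."}],
)
print(response.choices[0].message.content)
```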
https://www.eweek.com/news/openai-launches-gpt-5-2/