Claude Opus 4.7 Just Dropped. Here's What's Actually New.

Anthropic shipped Claude Opus 4.7 yesterday, and it's a more interesting release than the version number suggests. This isn't a minor point update. It's the first generally available model to incorporate safety work from Mythos (the unrestricted research model Anthropic won't release publicly), and it closes some real gaps that Opus 4.6 users have been bumping into.
Here's what actually changed and why it matters.
The Coding Jump Is Significant
The headline number is a 13% lift on coding benchmarks, but the more telling stat is this: Opus 4.7 resolves three times as many production-level tasks as its predecessor. That's not synthetic benchmark performance. That's real-world coding work -- debugging, refactoring, building features across multi-file codebases.
Anthropic is positioning Opus 4.7 as the model you can hand your hardest engineering work to and walk away. Whether that pans out in practice will depend on the task, but the benchmark trajectory is hard to argue with. SWE-bench scores, agentic tool use, and autonomous computer operation all got meaningful bumps.
For businesses using Claude to power customer-facing tools, this matters because a smarter model gives better answers. A chatbot running on Opus 4.7 will handle nuanced customer questions more accurately than one running on 4.6. The improvements in reasoning and multi-step problem solving translate directly to better conversation quality.
High-Res Vision Is Finally Here
This one flew under the radar, but it's a big deal. Opus 4.7 is the first Claude model with high-resolution image support. Maximum image resolution jumped from about 1.15 megapixels to 3.75 megapixels -- more than 3x the detail.
That opens up use cases that were previously unreliable: reading fine print in documents, analyzing detailed product images, parsing complex charts and diagrams. If you've ever sent Claude a screenshot and gotten a vague response because it couldn't make out the details, 4.7 should fix that.
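In practice, the new ceiling mostly matters when you're preparing images for upload: anything over the cap gets downscaled, so you want to do the scaling yourself and keep the aspect ratio. Here's a minimal sketch of fitting dimensions under a megapixel cap -- the 3.75 MP figure comes from the article, and the exact enforcement behavior of the API is an assumption:

```python
import math

# Assumed cap per the article; the API's exact limit may differ.
MAX_MEGAPIXELS = 3.75  # Opus 4.7 (Opus 4.6 was ~1.15)

def fit_to_cap(width: int, height: int, cap_mp: float = MAX_MEGAPIXELS) -> tuple[int, int]:
    """Scale (width, height) down proportionally so the total pixel
    count stays at or below cap_mp megapixels. Dimensions that already
    fit are returned unchanged."""
    pixels = width * height
    cap = cap_mp * 1_000_000
    if pixels <= cap:
        return width, height
    scale = math.sqrt(cap / pixels)
    return int(width * scale), int(height * scale)

# A 4000x3000 photo (12 MP) scales to about 2236x1677 (~3.75 MP).
print(fit_to_cap(4000, 3000))  # → (2236, 1677)
```

The square-root scale factor keeps the aspect ratio while hitting the pixel budget exactly, which preserves as much of that fine-print detail as the model can actually use.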
The New "xhigh" Effort Level
Opus 4.7 introduces a new effort level called "xhigh" that sits between "high" and "max." This gives users finer control over the tradeoff between thinking depth and response speed.
In plain terms: for easy questions, you can keep effort low and get fast responses. For genuinely hard problems, you can crank it up to xhigh or max and let the model think longer. It's a knob that lets you optimize for cost and speed without sacrificing quality on the tasks that actually need deep reasoning.
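As a sketch of how that knob might look in a request, here's a hypothetical payload builder. The `effort` field name, the level names, and the model identifier are assumptions based on the levels described above, not a confirmed API schema:

```python
# Assumed effort ladder, per the article's description.
EFFORT_LEVELS = ["low", "medium", "high", "xhigh", "max"]

def build_request(prompt: str, effort: str) -> dict:
    """Build a hypothetical chat-request dict, validating the effort level."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "claude-opus-4-7",  # assumed identifier
        "effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

# Easy question: keep effort low for a fast, cheap response.
quick = build_request("What's the capital of France?", "low")
# Genuinely hard problem: crank it up and let the model think longer.
deep = build_request("Find the race condition in this scheduler.", "xhigh")
```

The point of the ladder is that effort becomes a per-request decision rather than a per-model one: the same deployment can serve cheap lookups and deep reasoning without switching models.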
The Mythos Connection
The most interesting backstory is how Opus 4.7 connects to Mythos, the research model Anthropic announced earlier this month. Mythos scored 93.9% on SWE-bench but was deemed too capable (and too dangerous in cybersecurity contexts) for general release.
Anthropic said they "differentially reduced" Opus 4.7's cyber capabilities during training -- essentially keeping the coding and reasoning improvements while dialing back the ability to autonomously find and exploit security vulnerabilities. They're encouraging security professionals who want the full capabilities to apply through a formal verification program.
This is an interesting approach: take the learnings from a model that's too powerful to release, bake the safe parts into a model that is generally available, and gate the dangerous parts behind a vetting process. Whether other labs follow this pattern will say a lot about how the industry handles increasingly capable models.
Same Price, Better Model
Opus 4.7 costs the same as Opus 4.6: $5 per million input tokens, $25 per million output tokens. It's available across the Claude API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry. No pricing games, no premium tier. Just a straight upgrade. For businesses already running on Claude, this is a free performance boost. For businesses evaluating AI tools for customer support, the timing is good -- you're getting Anthropic's best generally available model from day one.
If you want to see what a smarter Claude can do for your customer conversations, give Converzoy a try. The model upgrade means better answers, better reasoning, and fewer moments where the chatbot gets confused by complex questions.