News

Anthropic launches automated AI security tools for Claude Code that scan code for vulnerabilities and suggest fixes, ...
Claude Opus 4.1 scores 74.5% on the SWE-bench Verified benchmark, indicating major improvements in real-world programming, bug detection, and agent-like problem solving.
Anthropic says Claude Opus 4.1 improves software engineering accuracy on SWE-bench Verified to 74.5%. That compares to 62.3% with Claude 3.7 Sonnet ...
Anthropic's Claude Opus 4.1 achieves 74.5% on the SWE-bench Verified coding benchmark, leading the AI market, but faces concentration risk as nearly half of its $3.1B API revenue depends on just two customers.
It’s Christmas in August, at least for tech wonks interested in new model releases. Today’s news is a very ...
References to Anthropic's new 'Claude 4.1' AI model have leaked, suggesting enhanced problem-solving capabilities amid new ...
Anthropic has terminated OpenAI's API access to its Claude AI models, alleging violations of terms of service. OpenAI ...