News
Google DeepMind has claimed its first gold medal at the International Mathematical Olympiad (IMO) with an "advanced version" of its Gemini model running in Deep Think mode.
Tesla has shut down its Dojo supercomputer project and dissolved the team behind it, according to a report from Bloomberg.
OpenAI is offering ChatGPT Enterprise to U.S. government agencies for just one dollar per year. This special version comes with enhanced security and privacy features, but does not include access to ...
OpenAI has published a comprehensive prompting guide for GPT-5, covering agentic workflows, new API parameters, coding workflows, and concrete prompting patterns - including insights from the integration ...
Google is rolling out Opal, a new experimental tool that lets users build AI-powered mini-apps with simple natural language prompts, no coding required.
With Genie 3, Google DeepMind introduces a "world model" that creates interactive 3D environments in real time, designed for simulating complex scenarios and training autonomous AI agents.
AI-generated code is often "almost right, but not quite," and that's wearing on developers. A new survey shows most spend more time fixing AI-generated code than they expected, and still turn to human ...
Google is rolling out AI-powered shopping summaries in Chrome for users in the US. When you click the icon next to a website address, a pop-up appears with details about the reliability and quality of ...
Apple plans to upgrade its ChatGPT integration in Apple Intelligence to GPT-5 across iOS 26, iPadOS 26, and macOS Tahoe 26. Until now, the feature has relied on GPT-4o, but the switch to GPT-5 is ...
US experts warn that Nvidia’s H20 chip, developed for the Chinese market, surpasses domestic Chinese chips in memory bandwidth and outperforms Nvidia’s H100 in AI tasks, making it a critical component ...
Midjourney is adding new AI video features that let users generate clips with custom start and end images. This allows for seamless loops or smooth transitions that morph one image into another.
Large language models are supposed to handle millions of tokens - the fragments of words and characters that make up their inputs - at once. But the longer the context, the worse their performance ...