Google’s new Gemini 2.0 AI model is about to be everywhere

Gemini 2.0 logo
Google DeepMind

Less than a year after debuting Gemini 1.5, Google’s DeepMind division was back Wednesday to reveal the AI’s next-generation model, Gemini 2.0. The new model offers native image and audio output, and “will enable us to build new AI agents that bring us closer to our vision of a universal assistant,” the company wrote in its announcement blog post.

As of Wednesday, Gemini 2.0 is available at all subscription tiers, including free. As Google’s new flagship AI model, it will begin powering AI features across the company’s ecosystem in the coming months. As with OpenAI’s o1 model, the initial release of Gemini 2.0 is not the company’s full-fledged version, but rather a smaller, less capable “experimental preview” iteration that will be upgraded over time.

“Effectively,” Google DeepMind CEO Demis Hassabis told The Verge, “it’s as good as the current Pro model is. So you can think of it as one whole tier better, for the same cost efficiency and performance efficiency and speed. We’re really happy with that.”

Google is also releasing a lightweight version of the model, dubbed Gemini 2.0 Flash, for developers.

Introducing Gemini 2.0 | Our most capable AI model yet

With the release of a more capable Gemini model, Google advances its AI agent agenda, which would see smaller, purpose-built models taking autonomous action on the user’s behalf. Gemini 2.0 is expected to significantly boost Google’s efforts to roll out Project Astra, which combines Gemini Live’s conversational abilities with real-time video and image analysis to provide users with information about their surrounding environment through a smart glasses interface.

Google also announced on Wednesday the release of Project Mariner, the company’s answer to Anthropic’s computer use feature. This Chrome extension is capable of commanding a desktop computer, including keystrokes and mouse clicks, in the same way human users do. The company is also rolling out an AI coding assistant called Jules that can help developers find and improve clunky code, as well as a “Deep Research” feature that can generate detailed reports on the subjects you have it search the internet for.

Deep Research, which seems to serve the same function as Perplexity AI and ChatGPT Search, is currently available to English-language Gemini Advanced subscribers. The system works by first generating a “multi-step research plan,” which it submits to the user for approval before implementing.

Once you sign off on the plan, the research agent will conduct a search on the given subject and then hop down any relevant rabbit holes it finds. Once it’s done searching, the AI will generate a report on what it’s found, including key findings and citation links to where it found its information. You can select Deep Research from the chatbot’s drop-down model selection menu at the top of the Gemini home page.

Andrew Tarantola
Former Digital Trends Contributor