What we know so far
At the Google I/O developer conference in May 2023, CEO Sundar Pichai announced the company’s upcoming artificial intelligence (AI) system, Gemini.

The large language model (LLM) is being developed by Google DeepMind, the division formed by merging the Google Brain team with DeepMind. It could compete with, and potentially outperform, AI systems like OpenAI's ChatGPT.

While details are still scarce, here’s what we can piece together from the latest interviews and reports about Google Gemini.

Google Gemini will be multimodal

Pichai explained that Gemini combines the strengths of DeepMind’s AlphaGo system, known for its mastery of the complex game Go, with extensive language modeling capabilities.

He said it is designed to be multimodal from the ground up, integrating text, images and other data types. This could allow for more natural conversation skills.

Pichai also hinted at future skills such as memory and planning that could enable tasks requiring logical thinking.

Gemini can use tools and APIs

In an update to his professional bio over the summer, Google chief scientist Jeffrey Dean said Gemini is one of the “next-generation multimodal models” he is helping to co-lead.

He said it will use Pathways, Google’s new AI infrastructure, to expand training on various data sets.

This suggests that Gemini may be the largest language model created to date, likely surpassing GPT-3, which has 175 billion parameters.

It will be available in different sizes and features

Further details came from Demis Hassabis, CEO of DeepMind.

In June, he told Wired that AlphaGo's techniques, such as reinforcement learning and tree search, could give Gemini new skills like reasoning and problem-solving.

Hassabis explained that Gemini is a “model series” that will be available in different sizes and functions.

He also mentioned that Gemini could use memory, fact-checking from sources like Google search, and enhanced reinforcement learning to increase accuracy and reduce dangerous hallucinated content.

Early Gemini results are promising

In a Time interview in September, Hassabis reiterated that Gemini aims to combine scale and innovation.

He said the involvement of planning and memory is in the early stages of research.

Hassabis also stated that Gemini could use retrieval methods to output entire blocks of information, rather than generating text word by word, in order to improve factual consistency.
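To illustrate the idea of retrieval-based output, here is a minimal toy sketch: instead of composing an answer word by word, the system looks up a stored passage and returns it verbatim. All names and data below are hypothetical illustrations, not anything from Gemini itself.

```python
# Toy sketch of retrieval-based answering: return a whole stored passage
# verbatim instead of generating the answer word by word.
# The knowledge base and keys here are invented for illustration only.

KNOWLEDGE_BASE = {
    "gemini": "Gemini is a multimodal model series announced at Google I/O 2023.",
    "alphago": "AlphaGo used reinforcement learning and tree search to master Go.",
}

def retrieve_answer(query: str) -> str:
    """Return the first stored passage whose key appears in the query."""
    q = query.lower()
    for key, passage in KNOWLEDGE_BASE.items():
        if key in q:
            return passage
    return "No grounded passage found."

print(retrieve_answer("What is Gemini?"))
```

Because the returned passage is copied whole from a trusted store, it cannot be garbled mid-sentence the way free-form generation can, which is the factual-consistency benefit Hassabis describes.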

He revealed that Gemini builds on DeepMind's multimodal work, such as its Flamingo image-captioning system.

Overall, Hassabis said Gemini is showing “very promising early results.”

Advanced chatbots as universal personal assistants

In an interview with Wired published a few days later, Pichai provided the clearest indication of how Gemini fits into Google’s product roadmap.

He explained that conversational AI systems like Bard are “not the end state,” but rather waypoints leading to more advanced chatbots.

Pichai said Gemini and future versions will ultimately become “incredible universal personal assistants” integrated into people’s daily lives in areas such as travel, work and entertainment.

He reiterated that Gemini will combine the strengths of text and images, explaining that within a few years today’s chatbots will “look trivial” in comparison.

Competitors are interested in Gemini’s performance

OpenAI CEO Sam Altman tweeted what appeared to be a response to a paywalled article claiming that Google Gemini could outperform GPT-4.

There was no official answer to Elon Musk’s follow-up question about whether the numbers provided by SemiAnalysis were correct.

Select companies have early access to Gemini

Other clues about Gemini’s progress this week: The Information reported that Google gave early access to Gemini to a small group of developers outside of Google.

This suggests that Gemini may soon be ready for a beta release and integration with services like Google Cloud Vertex AI.

Meta is working on an LLM to compete with OpenAI

Although the news about Gemini is promising so far, Google is not the only company reportedly poised to launch a new LLM to compete with OpenAI.

According to the Wall Street Journal, Meta is also working on an AI model that will compete with the GPT model that powers ChatGPT.

Meta recently announced the release of Llama 2, an open source AI model, in collaboration with Microsoft. The company appears committed to responsibly developing more accessible AI.

The countdown to Google Gemini

What we know so far suggests that Gemini could represent a significant advance in natural language processing.

With DeepMind's latest AI research combined with Google's vast computing resources, it is difficult to overstate the potential impact.

If Gemini lives up to expectations, it could drive a transformation in interactive AI, in line with Google’s ambitions to “make AI responsibly accessible to billions of people.”

The latest news from Meta and Google comes a few days after the first AI Insight Forum, where technology CEOs met privately with part of the US Senate to discuss the future of AI.


Featured image: VDB Photos/Shutterstock