French AI startup Mistral on Tuesday released Mixtral 8x22B, a new large language model (LLM) and its latest attempt to compete with the big boys in the AI arena. Mixtral 8x22B is expected to outperform Mistral’s previous Mixtral 8x7B LLM, which itself showed signs of outshining OpenAI’s GPT-3.5 and Meta’s Llama 2, according to Gigazine.
The new Mixtral model boasts a 65,000-token context window, which refers to the amount of text that an AI model can process and reference at one time. Further, Mixtral 8x22B has a parameter count of up to 176 billion; parameters are the internal variables a model uses to make decisions or predictions. The “8x22B” in the name reflects the model’s mixture-of-experts design, which combines eight expert networks of roughly 22 billion parameters each.
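Loosely speaking, the context window caps how many tokens (word fragments, each roughly three-quarters of an English word) the model can attend to in a single request. Below is a minimal sketch of how a developer might check whether a long document fits within that budget; it assumes the tokenizer is mirrored on Hugging Face under a repo id such as mistralai/Mixtral-8x22B-v0.1, which is an assumption rather than something spelled out in Mistral’s announcement.

```python
# Minimal sketch: count the tokens in a document and compare against the
# context window. The repo id below is an assumption -- swap in whichever
# Hugging Face mirror (or local directory) actually holds the Mixtral 8x22B
# tokenizer you are using.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 65_000  # tokens, per Mistral's announcement

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-v0.1")

with open("my_long_document.txt") as f:
    prompt = f.read()

num_tokens = len(tokenizer.encode(prompt))
if num_tokens <= CONTEXT_WINDOW:
    print(f"{num_tokens} tokens: fits in a single request")
else:
    print(f"{num_tokens} tokens: too long, split or truncate the text first")
```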
Founded by former Google and Meta researchers, Mistral takes an open-source approach to its AI models, and Mixtral 8x22B is available for anyone to use after downloading a 281GB file. To do so yourself, just paste the magnet link from Mistral AI’s X post into your favorite BitTorrent client.
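Once the torrent finishes, running the model locally is left to the user. The sketch below shows one way that might look with the Hugging Face transformers library, assuming the download is in (or has been converted to) Hugging Face format and that enough GPU memory or CPU offloading is available; the local path is hypothetical, and none of this is official guidance from Mistral.

```python
# Hypothetical local-inference sketch, assuming the downloaded weights sit in
# ./mixtral-8x22b in Hugging Face format (an assumption; the torrent's exact
# layout is not described in the announcement).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./mixtral-8x22b"  # hypothetical path to the extracted download

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision roughly halves the memory footprint
    device_map="auto",          # spread layers across available GPUs and CPU RAM
)

inputs = tokenizer("Mixtral 8x22B is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Even in half precision, a model of this size needs on the order of hundreds of gigabytes of accelerator memory, so most users will likely turn to quantized builds or hosted services rather than running the full weights themselves.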
The release of Mistral’s newest LLM comes amid a flurry of new and innovative model announcements across the AI industry.
On Tuesday, OpenAI released GPT-4 Turbo with Vision, the latest GPT-4 Turbo model with vision capabilities for working with photographs, drawings, and other images uploaded by the user. On the same day, Google released its advanced Gemini Pro 1.5 LLM to developers with a free option that grants up to 50 requests per day. Not to be outdone, Meta revealed that its Llama 3 model would debut later this month.
Mixtral 8x22B and these other advanced LLMs are known as frontier models: cutting-edge systems that can handle a wide variety of tasks and requests. The term evokes the Wild West, suggesting models that push past their predecessors with more pioneering technology, but it also conjures up a sense of danger. In a July 2023 blog post, OpenAI described the risks of frontier models.
“Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model’s capabilities from proliferating broadly,” OpenAI wrote. “Industry self-regulation is an important first step. However, wider societal discussions and government intervention will be needed to create standards and to ensure compliance with them.”
Mistral’s open-source approach has also earned some criticism, according to The Guardian. By allowing anyone to download and build upon its AI models, the startup can’t prevent its systems from being used for harmful purposes. Further, the models can’t be taken offline if certain flaws or biases crop up that need to be resolved.