
Mistral AI and Its Innovative Mixtral 8x7B Model: Insights into the Paris-Based Tech Firm

Big news in the world of generative AI! Mistral AI, a startup based in the heart of Paris, has just landed a massive €450 million investment, valuing the company at roughly $2 billion. It’s a game-changer, not just for Mistral AI but for the entire European AI scene, which has often been seen as lagging behind, and it shows that Europe can be a force to be reckoned with on the global AI stage.


Mistral AI is a dynamic and innovative player in the AI industry. With a focus on scientific excellence and creativity, they aim to build efficient, effective, and trustworthy AI models. Their mission is ambitious: to advance AI technology for both the open community and enterprise clients. The company is committed to challenging the status quo with its open-weight models, offering a formidable alternative to proprietary AI services like ChatGPT or Gemini.

The Success Story of Mistral AI

The Paris-based company introduced the Mixtral 8x7B model, a powerful open-weight model that’s turning heads for all the right reasons. It’s cost-effective, high-performing, and outperforms other models on various benchmarks. According to their website, Mixtral is lightning-fast, multilingual, and a whiz at generating code, thanks to its sparse architecture. It’s also less biased and more truthful than other models, setting a new standard in AI. Plus, there’s a version fine-tuned to follow instructions, “Mixtral 8x7B Instruct,” released together with open-source deployment tools for the community.

Available Models

  1. Mistral 7B: This compact 7B dense Transformer model is quick to deploy and easy to customize. It may be small, but it’s mighty, perfect for a variety of uses. Mistral 7B handles English and code effortlessly and boasts an 8k context window.
  2. Mixtral 8x7B: This model is a step up from Mistral 7B. It’s a sparse Mixture-of-Experts model built from eight experts, and each token uses only about 12B active parameters out of roughly 45B in total. It supports multiple languages, excels at code tasks, and has a whopping 32k context window.
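To see what “sparse Mixture-of-Experts” means in practice, here is a toy sketch in NumPy: a router picks the top 2 of 8 expert networks per token, so only a fraction of the total parameters do any work. This is an illustrative simplification, not Mixtral’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8  # Mixtral routes among 8 experts
TOP_K = 2        # only 2 experts are active per token

def top2_moe(x, gate_w, expert_ws):
    """Toy top-2 mixture-of-experts layer for a single token vector x."""
    logits = x @ gate_w                   # router score for each expert
    top = np.argsort(logits)[-TOP_K:]     # indices of the 2 highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    # Only the selected experts run: this is why active params << total params.
    return sum(w * np.tanh(x @ expert_ws[i]) for w, i in zip(weights, top))

d = 16
gate_w = rng.normal(size=(d, NUM_EXPERTS))
expert_ws = [rng.normal(size=(d, d)) for _ in range(NUM_EXPERTS)]
x = rng.normal(size=d)
y = top2_moe(x, gate_w, expert_ws)
print(y.shape)  # (16,)
```

The payoff is that inference cost scales with the 2 active experts, while model capacity scales with all 8.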

License: Apache 2.0

How to Access Mistral’s Models

Getting your hands on Mistral’s models is easy if you have a powerful GPU with at least 16 GB of VRAM. You can download these models for free and start exploring. If you don’t have such high-end hardware, don’t worry – Mistral AI offers handy APIs for direct use, making these tools accessible to a broader audience.
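As a rough sketch of the download-and-run path, something like the following works with the Hugging Face `transformers` library (this assumes `transformers`, `torch`, and `accelerate` are installed and you have enough VRAM; the repo IDs are Mistral’s public Hugging Face repositories):

```python
# Hedged sketch: loading an open-weight Mistral model locally.
# Assumes a GPU with sufficient memory; weights are downloaded on first use.
MISTRAL_7B = "mistralai/Mistral-7B-Instruct-v0.1"
MIXTRAL_8X7B = "mistralai/Mixtral-8x7B-Instruct-v0.1"

def generate_locally(prompt: str, model_id: str = MISTRAL_7B) -> str:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # spread layers across available GPUs/CPU
        torch_dtype="auto",  # use the checkpoint's native precision
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Mixtral 8x7B is far heavier than the 7B model, which is why the API route below remains the practical option for most users.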

The Mixtral 8x7B model currently powers the ‘mistral-small’ endpoint, which is available in beta. Interested users can register to gain early access to all of the generative and embedding endpoints on offer, though there is currently a waitlist.
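Once you are off the waitlist and have an API key, calling the hosted endpoint is a standard JSON-over-HTTPS request. The sketch below follows the payload shape of Mistral’s platform REST API; the `MISTRAL_API_KEY` environment variable is an assumption for where you keep your key.

```python
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_payload(prompt: str, model: str = "mistral-small") -> dict:
    """Chat-completion request body: a model name plus a list of messages."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Mistral also publishes an official Python client, but a plain HTTP call like this needs no extra dependencies.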


One of Many

This news arrives alongside other recent LLM updates: Google introduced its latest Gemini models, and OpenAI’s ChatGPT remains the leader in the field. Mistral AI, however, offers something very valuable: the ability to download its open-weight models and run them on your own PC, if you have the resources. That matters because it means you’re not tied down by the usual rules and restrictions set by the big tech companies.

Mistral AI isn’t alone on the open-source side. Another big name, Meta, has released its LLaMA models for open use. The progress being made with these types of models is impressive, providing a free alternative to systems under the exclusive control of a single corporation.

The Current State and Future Outlook

As things stand, ChatGPT is still the top dog in the AI market. However, with the latest innovations from Mistral AI and Google, the tides might be turning. These developments point toward a future where everyone has their own pocket version of ChatGPT, maybe even sooner than we anticipate.

In a nutshell, Mistral AI’s huge investment and the launch of the innovative Mixtral 8x7B model highlight the company’s strong commitment to advancing AI technology. Their focus on open-source solutions and community engagement is paving the way for cutting-edge AI solutions, solidifying Mistral AI’s position as a key player in the world of generative AI.

New Quantized Model

On January 28, someone published a set of files on Hugging Face labeled “miqu-1-70b”. The files began to circulate, and benchmarks soon revealed that the model, run locally, scored remarkably close to GPT-4. Confirmation then arrived from Mistral AI directly, through Arthur Mensch on X:

An over-enthusiastic employee of one of our early access customers leaked a quantized (and watermarked) version of an old model that we trained and distributed quite openly.

Quantized means the model’s weights are stored at reduced numerical precision, so it runs faster and fits on less powerful hardware at the cost of a small amount of accuracy. The model also appears to be watermarked, meaning hidden markers were embedded in it that can be detected only by specific (and probably hard to reproduce by chance) means.
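The precision trade-off is easy to see in miniature. This toy sketch quantizes an array of float32 “weights” to int8 with a single scale factor; it is a generic symmetric-quantization illustration, not the specific scheme used for miqu-1-70b.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.02, size=1000).astype(np.float32)

# Symmetric 8-bit quantization: map each float to an int8 via one scale factor.
scale = np.abs(weights).max() / 127
q = np.round(weights / scale).astype(np.int8)  # 1 byte per weight instead of 4
dequant = q.astype(np.float32) * scale         # approximate reconstruction

print(q.nbytes, weights.nbytes)                # int8 buffer is 4x smaller
print(float(np.abs(weights - dequant).max()))  # rounding error is at most scale/2
```

The memory saving is what lets a 70B-parameter model fit on consumer-grade hardware, at the price of that bounded rounding error on every weight.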

Anyway, it seems this model is not in its final iteration, but it exists! It suggests a real opportunity to reach GPT-4-level quality with a model whose weights are openly available.
