Mistral AI has launched the Mistral 3 family of ten open-source multilingual and multimodal AI models, headlined by Mistral Large 3, which uses a Mixture of Experts (MoE) architecture and is optimized for deployment on the NVIDIA GB200 NVL72 supercomputing system. The lineup also includes nine smaller "Ministral 3" variants designed for NVIDIA's edge platforms, with co-founder Guillaume Lample highlighting that even the smallest Ministral 3 3B model outperforms models four times its size and can run locally on consumer devices without an internet connection.
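For a sense of what "runs locally" means in practice, here is a minimal sketch of offline inference with a small open-weights model using Hugging Face transformers. The model ID is an assumption for illustration, since the article does not name the repository; check Mistral AI's Hugging Face organization for the actual Ministral 3 3B checkpoint.

```python
# Hedged sketch: fully local inference with a small open-weights model via
# Hugging Face transformers. After the first download, no internet is needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository name, not confirmed by the article.
MODEL_ID = "mistralai/Ministral-3B-Instruct"

# device_map="auto" places weights on whatever hardware is available
# (GPU, Apple Silicon, or CPU); it requires the accelerate package.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Explain in one sentence why on-device inference matters."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```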
Mistral AI, with funding exceeding $1.05 billion including investment from ASML, differentiates itself from proprietary U.S. competitors by releasing its models under the Apache 2.0 license and supporting public sector initiatives such as "AI for Citizens".