Llama 3.2 3B is a 3-billion-parameter multilingual model optimized for dialogue generation, reasoning, and summarization. It supports eight languages, was trained on up to 9 trillion tokens, and performs strongly on instruction following, complex reasoning, and tool use.
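As a minimal sketch of how the model might be used for dialogue generation, the example below loads the instruction-tuned variant with Hugging Face `transformers`. The model ID `meta-llama/Llama-3.2-3B-Instruct` is an assumption (the weights are gated and require accepting Meta's license), and the prompt is illustrative only.

```python
# Minimal sketch: dialogue generation with an instruction-tuned Llama 3.2 3B
# checkpoint via Hugging Face transformers. The model ID is an assumption;
# access to the weights is gated behind Meta's license on the Hugging Face Hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed model ID
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise, helpful assistant."},
    {"role": "user", "content": "Summarize the benefits of small multilingual models."},
]

# The pipeline applies the model's chat template, generates a continuation,
# and returns the full conversation; the last message is the assistant reply.
output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])
```

The same pipeline call works for summarization or tool-use style prompts by changing the message content; only the chat template and generation settings differ.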
Throughput and time-to-first-token (TTFT) benchmark data are not yet available for this model.