News
Trained on 4 trillion tokens, this open-source model rivals full-precision LLMs in performance while delivering substantial savings in memory use, energy consumption, and inference latency.