A 7.3B-parameter model that outperforms Llama 2 13B on all benchmarks, with architectural optimizations for inference speed and longer context lengths.
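
A minimal usage sketch, assuming the weights are published on the Hugging Face Hub and loadable with the `transformers` library; `org/model-7b` is a hypothetical repository id, not the model's actual name:

```python
# Hypothetical quickstart: load the 7.3B model and generate text.
# Assumes transformers (and accelerate for device_map) are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-7b"  # hypothetical id; substitute the real repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The main advantage of a smaller language model is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding of a short continuation; tune max_new_tokens as needed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```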