The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.