Transformers acts as the model-definition framework for state-of-the-art machine
learning models in text, computer vision, audio, video, and multimodal domains,
for both inference and training.

It centralizes the model definition so that this definition is agreed upon
across the ecosystem. transformers is the pivot across frameworks: if a model
definition is supported, it will be compatible with the majority of training
frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, ...),
inference engines (vLLM, SGLang, TGI, ...), and adjacent modeling libraries
(llama.cpp, mlx, ...) which leverage the model definition from transformers.