# Text Embeddings Inference

## Docs

- [Text Embeddings Inference](https://huggingface.co/docs/text-embeddings-inference/index.md)
- [CLI arguments](https://huggingface.co/docs/text-embeddings-inference/cli_arguments.md)
- [Neuron backend for AWS Trainium and Inferentia](https://huggingface.co/docs/text-embeddings-inference/local_neuron.md)
- [Using TEI locally with Metal](https://huggingface.co/docs/text-embeddings-inference/local_metal.md)
- [Using TEI locally with GPU](https://huggingface.co/docs/text-embeddings-inference/local_gpu.md)
- [Example uses](https://huggingface.co/docs/text-embeddings-inference/examples.md)
- [Serving private and gated models](https://huggingface.co/docs/text-embeddings-inference/private_models.md)
- [Supported models and hardware](https://huggingface.co/docs/text-embeddings-inference/supported_models.md)
- [Quick Tour](https://huggingface.co/docs/text-embeddings-inference/quick_tour.md)
- [Using TEI locally with CPU](https://huggingface.co/docs/text-embeddings-inference/local_cpu.md)
- [Build a custom container for TEI](https://huggingface.co/docs/text-embeddings-inference/custom_container.md)
- [Deploying TEI on Google Cloud Run](https://huggingface.co/docs/text-embeddings-inference/tei_cloud_run.md)
- [Using TEI Container with Intel® Hardware](https://huggingface.co/docs/text-embeddings-inference/intel_container.md)
