Text to Speech

Quick Start

from pathlib import Path
from litellm import speech
import os

os.environ["OPENAI_API_KEY"] = "sk-.."

speech_file_path = Path(__file__).parent / "speech.mp3"
response = speech(
    model="openai/tts-1",
    voice="alloy",
    input="the quick brown fox jumped over the lazy dogs",
    api_base=None,
    api_key=None,
    organization=None,
    project=None,
    max_retries=1,
    timeout=600,
    client=None,
    optional_params={},
)
response.stream_to_file(speech_file_path)

Async Usage

from litellm import aspeech
from pathlib import Path
import os, asyncio

os.environ["OPENAI_API_KEY"] = "sk-.."

async def test_async_speech():
    speech_file_path = Path(__file__).parent / "speech.mp3"
    response = await aspeech(
        model="openai/tts-1",
        voice="alloy",
        input="the quick brown fox jumped over the lazy dogs",
        api_base=None,
        api_key=None,
        organization=None,
        project=None,
        max_retries=1,
        timeout=600,
        client=None,
        optional_params={},
    )
    response.stream_to_file(speech_file_path)

asyncio.run(test_async_speech())

Proxy Usage

LiteLLM provides an OpenAI-compatible /audio/speech endpoint for text-to-speech calls.

curl http://0.0.0.0:4000/v1/audio/speech \
  -H "Authorization: Bearer sk-1234" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "alloy"
  }' \
  --output speech.mp3
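The same endpoint can also be called from Python without any extra dependencies. Below is a minimal stdlib sketch, assuming the proxy is running on 0.0.0.0:4000 with the example key sk-1234 used above (the `build_request` helper is illustrative, not part of LiteLLM):

```python
import json
import urllib.request

PROXY_URL = "http://0.0.0.0:4000/v1/audio/speech"  # proxy address from the curl example
API_KEY = "sk-1234"  # virtual key from the curl example

def build_request(text: str, model: str = "tts-1", voice: str = "alloy") -> urllib.request.Request:
    """Build an OpenAI-compatible POST request for the /audio/speech endpoint."""
    payload = json.dumps({"model": model, "input": text, "voice": voice}).encode()
    return urllib.request.Request(
        PROXY_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # The response body is raw audio bytes; write them straight to disk.
    req = build_request("The quick brown fox jumped over the lazy dog.")
    with urllib.request.urlopen(req) as resp, open("speech.mp3", "wb") as f:
        f.write(resp.read())
```

Because the endpoint is OpenAI-compatible, any OpenAI SDK pointed at the proxy's base URL should work the same way.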

Setup

Add the model to your proxy config.yaml:

model_list:
  - model_name: tts
    litellm_params:
      model: openai/tts-1
      api_key: os.environ/OPENAI_API_KEY

Then start the proxy:

litellm --config /path/to/config.yaml

# RUNNING on http://0.0.0.0:4000