Selecting a specific LLM in Storytell
Manually choose which large language model (LLM) Storytell uses for your prompts. By default, Storytell selects the model that delivers the most reliable results, but you can override it at any time or bring your own API keys if you're an enterprise customer.
Written by Patrick Intervalo
Last updated about 22 hours ago
Overview
Storytell defaults to a single, strong LLM for every prompt. Currently, the default is Claude 4.5 Sonnet, chosen for its strength in reasoning, text generation, and tool calls.
The previous "Dynamic LLM Router," which automatically switched between multiple models, is no longer used. Relying on a single, strong model delivers better context retention, higher reliability, and lower cost.
Users can still manually select a different model for specific prompts. Enterprise customers can provide their own API keys for supported models to use them directly.

How it works
Storytell automatically chooses the default LLM (Claude 4.5 Sonnet) for your prompt.
You can override the default at any time using the model dropdown.
Enterprise customers can connect their own API keys to use supported LLMs directly.
Storytell no longer dynamically routes between multiple models. The current approach favors context retention and cost-efficiency while giving users flexibility when needed.
Available models
Storytell offers access to a full range of supported public and enterprise language models. You can see the current list in the model dropdown within the chat menu.
Overriding the default model
Click on the three-dot chat menu.

Locate the LLM Router button in the menu.

Select the model you want to use.

Send your prompt.
Once submitted, the model you chose overrides the default (Claude 4.5 Sonnet) for that specific prompt. You can repeat these steps any time you want more control over how your query is processed.