Intelligence Level
Choose how Storytell responds—let the system pick the best mode, or force Fast, Expert, or Deep Intelligence for speed, depth, or full analysis with agents.
Written By Mark Ku
Last updated 2 days ago
Overview

Chat modes control how Storytell handles your prompt: which model is used, whether extended “thinking” is enabled, and which tools (e.g. web search, knowledge base, agents) are available. You can let Storytell choose for you (Auto) or pick a specific mode—Fast, Expert, or Deep Intelligence—to optimize for speed, reasoning depth, or full analysis with agents.
Choosing the right mode helps you get faster answers when you need them, deeper reasoning when the question is complex, and agent-powered research when you want the system to gather and synthesize information across sources.
What are Chat modes?
Chat modes are presets that set:
Model — The AI model (GPT, Claude, etc.) used for your response. Some modes prioritize speed, while others use more advanced models with stronger reasoning.
Extended thinking — How much reasoning time the AI can use before responding. Modes with more extended thinking can handle deeper analysis and complex questions.
Tools — Which capabilities are available during the conversation. These can include web search, knowledge base access, charts, presentations, and agents for advanced research.
You don’t need to understand the technical details. In practice, each mode works like this:
Auto — Storytell picks Fast or Expert based on your prompt. Best for most day-to-day use.
Fast — Quick responses, no extended thinking, all tools except agents. Best when you want speed.
Expert — Deeper reasoning, medium thinking budget, all tools except agents. Best for complex or nuanced questions.
Deep Intelligence — Strongest reasoning, high thinking budget, all tools including agents. Best for research and multi-step analysis.
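If it helps to picture how the presets fit together, here's a rough sketch in code. This is illustrative only: the mode names, tool lists, and routing behavior come from this article, but the structure and function names are assumptions, not Storytell's actual implementation.

```python
# Illustrative sketch only -- not Storytell's actual code or API.
# Mode names and tool availability are taken from this article.

MODE_PRESETS = {
    "Fast": {"thinking": "off", "agents": False},
    "Expert": {"thinking": "medium", "agents": False},
    "Deep Intelligence": {"thinking": "high", "agents": True},
}

def resolve_mode(selected: str, looks_complex: bool) -> str:
    """Auto routes to Fast or Expert based on the prompt;
    explicit choices pass through unchanged.
    Deep Intelligence is never chosen automatically."""
    if selected == "Auto":
        return "Expert" if looks_complex else "Fast"
    return selected

print(resolve_mode("Auto", looks_complex=False))  # Fast
print(resolve_mode("Auto", looks_complex=True))   # Expert
```

The key point the sketch captures: Auto only ever resolves to Fast or Expert, so agents are never invoked unless you pick Deep Intelligence yourself.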
Where to find the mode selector

Open a project and go to Storytell (or open any chat thread).
In the prompt bar (the area where you type your message), look for the mode control next to the input. It shows the current mode name (e.g. Auto, Fast, Expert, or Deep Intelligence) with a dropdown arrow.
Click the mode label (or the chevron). A popover opens with the four modes and short descriptions.
Click the mode you want. The selector closes and your choice is applied to the next message you send.
Your selection applies to that chat. You can change it anytime before sending another message.
Auto mode
Auto lets Storytell decide whether to use Fast or Expert for each prompt. The system classifies your prompt (e.g. simple fact-check vs. open-ended analysis) and routes accordingly. You get a good default without choosing a mode yourself.
Best for: General use when you don’t want to think about mode. Often the best balance of speed and quality.
Behavior: The backend analyzes your prompt text and selects Fast or Expert. Deep Intelligence is never chosen automatically; select it yourself when you want agents and maximum depth.
💡 Tip: Leave the mode on Auto unless you have a specific reason to force Fast, Expert, or Deep Intelligence. For most prompts, Auto gives you an appropriate balance.
Fast mode
Fast uses lighter, faster models with no extended thinking. All tools are available except agents. Responses are typically shorter and quicker.
Best for: Quick lookups, simple questions, summaries, and when you care more about speed than deep analysis.
Models: Claude 4.5 Haiku, GPT-5 Mini, Gemini 3.1 Flash-Lite (we route to Anthropic primarily; GPT and Gemini are used as fallback when needed).
Thinking: Off.
Tools: Web search, web page, knowledge base, image generation, chart, presentation, Scout, Ask user. Agents are not used.
💡 Tip: Use Fast when you need a quick answer and the question is straightforward. For “explain this in depth” or multi-step reasoning, Expert or Deep Intelligence usually gives better results.
Expert mode
Expert uses premium models with a medium extended-thinking budget. All tools are available except agents. You get deeper reasoning and more nuanced answers than Fast.
Best for: Complex questions, analysis, comparisons, and when you want the model to “think” more before answering.
Models: Claude 4.6 Sonnet, GPT-5.2, Gemini 3.0 Flash (we route to Anthropic primarily; GPT and Gemini are used as fallback when needed).
Thinking: Medium (e.g. 16k-token thinking budget).
Tools: Same as Fast (web search, knowledge base, charts, etc.). Agents are not used.
Expert uses more credits per query than Fast.
Deep Intelligence mode
Deep Intelligence uses the strongest models with a high extended-thinking budget and all tools, including agents. Agents can perform tasks like web research—gathering and synthesizing information across sources—so you get full analysis and multi-step reasoning.
Best for: Research, deep analysis, and when you want the system to use agents to explore and summarize information.
Models: Claude 4.6 Opus, GPT-5.4, Gemini 3.1 Pro (we route to Anthropic primarily; GPT and Gemini are used as fallback when needed).
Thinking: High (e.g. 32k-token thinking budget).
Tools: Everything in Expert plus agents (e.g. web research agents).
⚠️ Important: Deep Intelligence mode uses on average 5x the credits of Fast. Use it when you need agent-powered research or maximum depth; for everyday questions, Auto or Fast is usually enough.
What users say about Deep Intelligence
We consistently hear from users that Deep Intelligence is like nothing they’ve used before.
Here’s an email from Dan, a CEO who uses Storytell to help with multi-billion-dollar transactions:

Choosing a mode
You can change the mode before each message. The mode selector in the prompt bar always shows the current choice.
Which LLMs we use
Each mode uses a fixed set of models. Storytell routes to Anthropic (Claude) as the primary provider; GPT and Gemini are used as fallback when needed. You cannot override which specific LLM is used—you only choose the mode (Auto, Fast, Expert, or Deep Intelligence). The system then picks the appropriate model (and fallback) for that mode.
This keeps response quality consistent per mode while allowing fallback to other providers when necessary. There is no setting to force a specific model (e.g. “use only GPT”); the mode selector is the only control you have.
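For readers curious what “primary provider with fallback” means in practice, here's a rough sketch. The model names per mode come from this article; the function names, call order, and error handling are assumptions for illustration, not Storytell's actual code.

```python
# Illustrative sketch only -- fallback order is assumed (Claude, then GPT,
# then Gemini), matching "Anthropic primarily; GPT and Gemini as fallback."

PROVIDERS_BY_MODE = {
    "Fast": ["claude-4.5-haiku", "gpt-5-mini", "gemini-3.1-flash-lite"],
    "Expert": ["claude-4.6-sonnet", "gpt-5.2", "gemini-3.0-flash"],
    "Deep Intelligence": ["claude-4.6-opus", "gpt-5.4", "gemini-3.1-pro"],
}

def call_with_fallback(mode: str, prompt: str, call_model) -> str:
    """Try the primary model first; on failure, fall back in order."""
    last_error = None
    for model in PROVIDERS_BY_MODE[mode]:
        try:
            return call_model(model, prompt)
        except RuntimeError as err:  # e.g. provider outage or rate limit
            last_error = err
    raise RuntimeError(f"all providers failed for {mode} mode") from last_error
```

For example, if the Anthropic call raised an error for an Expert-mode prompt, the sketch above would retry the same prompt against GPT-5.2 before giving up.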
Credits and cost
Auto — Uses credits according to whether the system chose Fast or Expert for that prompt. You don’t pay for Deep Intelligence unless you select it.
Fast — Lowest credit use per query.
Expert — Higher credit use per query than Fast.
Deep Intelligence — Highest credit use (on average 5x the credits of Fast).
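To make the relative costs concrete, here's a small back-of-the-envelope sketch. The 5x figure for Deep Intelligence vs. Fast comes from this article; the Expert multiplier is an assumption, since the article only says it sits above Fast.

```python
# Illustrative arithmetic only. Only the 5x Deep Intelligence figure is
# stated in this article; the Expert multiplier is an assumed placeholder.

CREDIT_MULTIPLIER = {
    "Fast": 1.0,
    "Expert": 2.0,             # assumed: between Fast and Deep Intelligence
    "Deep Intelligence": 5.0,  # "on average 5x the credits of Fast"
}

def estimated_credits(mode: str, fast_cost: float = 1.0) -> float:
    """Estimate a query's credit cost relative to a Fast-mode baseline."""
    return fast_cost * CREDIT_MULTIPLIER[mode]

print(estimated_credits("Deep Intelligence"))  # 5.0
```

So if a Fast query costs 1 credit, the same query in Deep Intelligence would cost around 5 credits on average.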