Selecting Your AI Model

Your model choice strongly affects reply quality, latency, and cost. In Chatbot → AI Settings, the provider you pick determines which chat models appear in the list (supported services include OpenAI, Google Gemini, Claude (Anthropic), DeepSeek, Grok (xAI), and OpenRouter).

Key Point: Each provider offers different model families with different capability and price trade-offs. Use the tables below for orientation, then validate results against your real traffic and budget.

Understanding Model Types

Limb Chatbot uses different model types for different jobs. Knowing the distinction helps you pick the right model for chat versus knowledge indexing (embeddings):

Chat Models

Used for conversational AI—answering questions, generating replies, and similar text tasks. These are the models visitors interact with in the chat.

Embedding Models

Used for knowledge indexing—turning text into embedding vectors so the system can run semantic search. They power retrieval from your knowledge base, not the visible chat personality.

Chat Models Comparison

Chat models power visitor-facing conversations. The table below compares common options:

| Model | Provider | Intelligence | Speed | Cost | Best For |
| --- | --- | --- | --- | --- | --- |
| gpt-4.1 | OpenAI | Highest | Medium | $$$ | Complex reasoning, technical support |
| gemini-2.5-pro | Gemini | Highest | Medium | $$$ | High-quality responses, detailed content |
| gpt-5-mini | OpenAI | High | Fast | $$ | Balanced – most general chatbots |
| gpt-4.1-mini | OpenAI | High | Fast | $$ | General purpose, good quality |
| gemini-2.5-flash | Gemini | High | Very Fast | $$ | Fast responses, customer support |
| gpt-4o | OpenAI | High | Fast | $$ | Optimized for chat, reliable |
| gpt-4o-mini | OpenAI | Good | Very Fast | $ | High volume, cost-effective |
| gpt-5-nano | OpenAI | Good | Very Fast | $ | Budget-conscious, simple queries |
| gpt-4.1-nano | OpenAI | Good | Very Fast | $ | Economical, adequate quality |
| gemini-2.5-flash-lite | Gemini | Good | Very Fast | $ | Most economical Gemini option |
| gpt-4-turbo | OpenAI | High | Fast | $$ | Fast performance, quality responses |
| gpt-4 | OpenAI | High | Medium | $$ | Original, reliable, established |
| claude-3.5-sonnet | Claude (Anthropic) | Highest | Medium | $$$ | Complex reasoning, premium support experiences |
| claude-3.5-haiku | Claude (Anthropic) | High | Fast | $$ | Balanced Claude option for most chatbots |
| claude-3.5-opus | Claude (Anthropic) | Highest | Medium | $$$ | Most advanced Claude for critical use cases |
| claude-3-opus | Claude (Anthropic) | High | Medium | $$$ | Earlier flagship Claude, still very capable |
| claude-3-sonnet | Claude (Anthropic) | High | Fast | $$ | General-purpose Claude model |
| claude-3-haiku | Claude (Anthropic) | Good | Very Fast | $ | High-volume, budget-friendly Claude |
| deepseek-chat | DeepSeek | High | Fast | $ | Cost-effective alternative for general chat |

For a quick vendor-level comparison of two of the providers listed above:

| Criteria | Claude (Anthropic) | DeepSeek |
| --- | --- | --- |
| Typical use cases | Premium support, complex reasoning, enterprise assistants | Cost-effective general chatbots and tools |
| Strengths | Very strong reasoning, safe and controlled responses | Good quality with very competitive pricing |
| When to choose | When you want the highest-quality assistant experience and can pay a bit more | When budget is important but you still want strong AI replies |

Embedding Models Comparison

Embedding models turn text into searchable vectors. Source material (for example WordPress content, uploads, or pasted text) is split into segments; each segment is embedded. When someone asks a question, the chatbot retrieves the most relevant segments and passes them to the chat model as context.
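The retrieval flow described above can be sketched end to end. This is a minimal illustration, not the plugin's actual implementation: the embed() function below is a hypothetical stand-in (a toy bag-of-words vector over a fixed vocabulary), whereas in production every vector would come from a real embedding model such as those in the table below.

```python
import math
from collections import Counter

def embed(text):
    """Hypothetical stand-in for a real embedding model: a bag-of-words
    vector over a tiny fixed vocabulary. Real embeddings come from the
    provider's API and capture meaning, not just word overlap."""
    vocab = ["refund", "shipping", "password", "reset", "order", "track"]
    counts = Counter(w.strip(".,?!") for w in text.lower().split())
    return [float(counts[w]) for w in vocab]

def cosine(a, b):
    """Similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# 1. Source material is split into segments; each segment is embedded once.
segments = [
    "To reset your password, open Account and click reset.",
    "Track your order from the Orders page using the track link.",
    "Refunds are issued within 5 days of the return arriving.",
]
index = [(seg, embed(seg)) for seg in segments]

# 2. At question time, embed the question and retrieve the closest segment.
question = "How do I reset my password?"
q_vec = embed(question)
best_segment, _ = max(index, key=lambda pair: cosine(q_vec, pair[1]))

# 3. The retrieved segment is passed to the chat model as context.
print(best_segment)  # → "To reset your password, open Account and click reset."
```

The key design point is that indexing is done once up front, while only the question is embedded at query time, which keeps per-message cost low.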

| Model | Provider | Quality | Cost | Best For |
| --- | --- | --- | --- | --- |
| text-embedding-3-large | OpenAI | Highest | $$ | Maximum search accuracy, large knowledge bases |
| text-embedding-3-small | OpenAI | High | $ | Balanced – recommended for most users |
| text-embedding-ada-002 | OpenAI | Good | $ | Legacy option, reliable |
| gemini-embedding-001 | Gemini | High | $ | Gemini users, Google ecosystem |

Important: Use one embedding model consistently for everything you index in the knowledge base so search stays coherent. Mixing embedding models can reduce accuracy.
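A small sketch shows why mixing embedding models breaks search. Different models typically return vectors of different dimensions (for example, OpenAI's text-embedding-3-small returns 1536-dimensional vectors and text-embedding-3-large 3072-dimensional ones), so similarity between them cannot even be computed; and even when dimensions happen to match, the vector spaces are incompatible, so scores are meaningless.

```python
import math

def cosine(a, b):
    # Cosine similarity is only defined for vectors of the same dimension.
    if len(a) != len(b):
        raise ValueError(f"embedding dimensions differ: {len(a)} vs {len(b)}")
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Illustrative dimensions for two different embedding models.
vec_indexed = [0.1] * 1536   # segment indexed with model A
vec_query = [0.1] * 3072     # question embedded with model B

try:
    cosine(vec_indexed, vec_query)
except ValueError as e:
    print(e)  # → "embedding dimensions differ: 1536 vs 3072"
```

The practical consequence: if you switch embedding models, re-index the entire knowledge base so all stored vectors come from the same model as query-time embeddings.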

AI Provider Comparison

AI Settings also supports Grok (xAI) and OpenRouter. The matrix below focuses on four widely used providers; for Grok or OpenRouter, use the model lists inside Chatbot → AI Settings.

| Criteria | OpenAI | Google Gemini | Claude (Anthropic) | DeepSeek |
| --- | --- | --- | --- | --- |
| Model Variety | 9+ chat models, 3 embedding models | 3 chat models, 1 embedding model | 8 chat models | 1 chat model |
| Documentation | Extensive + large community | Comprehensive official docs | Well-documented API | Good documentation |
| Ecosystem | Independent, platform-agnostic | Google ecosystem integration | Independent, enterprise-focused | Independent, cost-focused |
| Getting Started | Connecting OpenAI | Connecting Google Gemini | Connecting Claude | Connecting DeepSeek |

Cost: Stronger chat models usually cost more per message. Embedding cost spikes mainly when you (re)build the knowledge index. Monitor usage and invoices from your AI provider, then adjust models to match quality and budget.
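A back-of-envelope estimate helps compare models before committing. The sketch below multiplies token volume by per-token price; the prices are placeholders, not current rates, so substitute the figures from your provider's pricing page.

```python
def monthly_chat_cost(messages_per_day, avg_input_tokens, avg_output_tokens,
                      price_in_per_1m, price_out_per_1m, days=30):
    """Rough monthly chat spend in dollars. Prices are quoted per 1M
    tokens, as most providers list them; input and output tokens are
    usually priced differently."""
    tokens_in = messages_per_day * avg_input_tokens * days
    tokens_out = messages_per_day * avg_output_tokens * days
    return (tokens_in * price_in_per_1m + tokens_out * price_out_per_1m) / 1_000_000

# Placeholder prices -- NOT current rates; check your provider's pricing page.
cheap = monthly_chat_cost(500, 800, 300, price_in_per_1m=0.15, price_out_per_1m=0.60)
premium = monthly_chat_cost(500, 800, 300, price_in_per_1m=2.00, price_out_per_1m=8.00)
print(f"budget model:  ${cheap:.2f}/month")    # → budget model:  $4.50/month
print(f"premium model: ${premium:.2f}/month")  # → premium model: $60.00/month
```

Note that average input tokens per message include the retrieved knowledge-base context and system prompt, not just the visitor's question, so input volume is often several times larger than the visible chat text.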