Explore Models For Conversational AI Development
Dive into the world of ChatBotKit's AI models. Here we showcase a wide array of sophisticated AI technologies, each designed to cater to different aspects of conversational AI and chatbot development.
o4-mini
o4-mini is our latest small o-series model. It's optimized for fast, effective reasoning with exceptionally efficient performance in coding and visual tasks.
Tags: beta, chat, functions, image

o3
o3 is a well-rounded and powerful model across domains. It sets a new standard for math, science, coding, and visual reasoning tasks. It also excels at technical writing and instruction-following.
Tags: beta, chat, functions, image

gpt-4.1-nano
GPT-4.1 nano is the fastest, most cost-effective GPT-4.1 model.
Tags: beta, chat, functions, image

gpt-4.1-mini
GPT-4.1 mini provides a balance between intelligence, speed, and cost that makes it an attractive model for many use cases.
Tags: beta, chat, functions, image

gpt-4.1
GPT-4.1 is OpenAI's flagship model for complex tasks. It is well suited for problem solving across domains.
Tags: beta, chat, functions, image

gpt-4.5
GPT-4.5 excels at tasks that benefit from creative, open-ended thinking and conversation, such as writing, learning, or exploring new ideas.
Tags: beta, chat, functions, image

o3-mini
o3-mini is a cost-efficient reasoning model that's optimized for coding, math, and science, and supports tools and Structured Outputs.
Tags: chat, functions

o1
o1 is our most powerful reasoning model that supports tools, Structured Outputs, and vision. The model has 200K context and an October 2023 knowledge cutoff.
Tags: chat, functions, image

gpt-4o-mini
GPT-4o mini is OpenAI's most cost-efficient small model that's smarter and cheaper than GPT-3.5 Turbo, and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
Tags: chat, functions, image

gpt-4o
GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
Tags: chat, functions, image, file

gpt-4-turbo
GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision.
Tags: chat, functions, image

gpt-4
The GPT-4 model was built with broad general knowledge and domain expertise.
Tags: chat, functions

gpt-3.5-turbo
GPT-3.5 Turbo is a fast and inexpensive model for simpler tasks.
Tags: chat, functions

gpt-3.5-turbo-instruct
GPT-3.5 Turbo Instruct is a fast and inexpensive model for simpler tasks.
mistral-large-latest
Top-tier reasoning for high-complexity tasks. The most powerful model of the Mistral AI family.
Tags: beta, chat, functions

mistral-small-latest
Cost-efficient reasoning for low-latency workloads.
Tags: beta, chat, functions

deepseek-r1-distill-llama-70b
Top-tier reasoning for high-complexity tasks. The most powerful model of the DeepSeek AI family.
Tags: beta, chat, functions

llama-3.3-70b-versatile
Llama 3.3 is an auto-regressive language model that uses an optimized transformer architecture.
Tags: beta, chat, functions

sonar-deep-research
Deep Research conducts comprehensive, expert-level research and synthesizes it into accessible, actionable reports.
Tags: beta, chat

sonar-reasoning-pro
Premier reasoning offering powered by DeepSeek R1 with Chain of Thought (CoT) and advanced search grounding.
Tags: beta, chat

sonar-reasoning
Premier reasoning offering powered by DeepSeek R1 with Chain of Thought (CoT).
Tags: beta, chat

sonar-pro
Premier search offering with search grounding, supporting advanced queries and follow-ups.
Tags: beta, chat

sonar
Lightweight offering with search grounding, quicker and cheaper than Sonar Pro.
Tags: beta, chat

gemini-2.5-flash
A capable and inexpensive multi-modal model with great performance across all tasks, a 1 million token context window, and built for the era of agents.
Tags: beta, chat, functions, image

gemini-2.5-pro
A capable multi-modal model with great performance across all tasks, a 1 million token context window, and built for the era of agents.
Tags: beta, chat, functions, image

gemini-2.0-flash
A capable multi-modal model with great performance across all tasks, a 1 million token context window, and built for the era of agents.
Tags: beta, chat, functions, image

gemini-2.0-flash-lite
Small and most cost-effective model, built for at-scale usage.
Tags: beta, chat, functions, image

gemini-1.5-flash
Fast multi-modal model with great performance for diverse, repetitive tasks and a 1 million token context window.
Tags: beta, chat, functions, image

gemini-1.5-pro
Highest-intelligence Gemini 1.5 series model, with a breakthrough 2 million token context window.
Tags: beta, chat, functions, image

claude-v3-opus
Anthropic's most powerful AI model, with top-level performance on highly complex tasks. It can navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding.
Tags: beta, chat

claude-v3.5-sonnet
Anthropic's most intelligent and advanced model, Claude 3.5 Sonnet, demonstrates exceptional capabilities across a diverse range of tasks and evaluations while also outperforming Claude 3 Opus.
Tags: beta, chat

claude-v3-sonnet
Claude 3 Sonnet strikes the ideal balance between intelligence and speed, particularly for enterprise workloads. It offers maximum utility and is engineered to be dependable for scaled AI deployments.
Tags: beta, chat

claude-v3.5-haiku
Anthropic's fastest, most compact model for near-instant responsiveness. It answers simple queries and requests with speed.
Tags: beta, chat

claude-v3-haiku
Anthropic's fastest, most compact model for near-instant responsiveness. It answers simple queries and requests with speed.
Tags: beta, chat

claude-v3
Claude 3 Sonnet strikes the ideal balance between intelligence and speed, particularly for enterprise workloads. It offers maximum utility and is engineered to be dependable for scaled AI deployments.
Tags: beta, chat

claude-v2.1
Claude 2.1 is a large language model (LLM) by Anthropic with a 200K token context window, reduced hallucination rates, and improved accuracy over long documents.
Tags: beta

claude-v2
Claude 2.0 is a leading LLM from Anthropic that enables a wide range of tasks, from sophisticated dialogue and creative content generation to detailed instruction following.
Tags: beta

claude-instant-v1
Claude Instant is Anthropic's faster, lower-priced yet very capable LLM.
Tags: beta

custom
Any custom model created by the user.
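The capability tags shown above (chat, functions, image, file, beta) can help narrow down which model suits a given bot. The sketch below is purely illustrative and is not part of the ChatBotKit API: it hard-codes a few of the model IDs and tags from this page into a plain TypeScript registry and picks the first model that covers the required capabilities.

```typescript
// Illustrative only: a hand-maintained registry of a few of the model IDs
// and capability tags listed on this page. Not a ChatBotKit API.
type Capability = 'chat' | 'functions' | 'image' | 'file';

interface ModelEntry {
  id: string;
  tags: Capability[];
  beta?: boolean;
}

const models: ModelEntry[] = [
  { id: 'gpt-4o', tags: ['chat', 'functions', 'image', 'file'] },
  { id: 'gpt-4o-mini', tags: ['chat', 'functions', 'image'] },
  { id: 'o3-mini', tags: ['chat', 'functions'] },
  { id: 'claude-v3.5-sonnet', tags: ['chat'], beta: true },
  { id: 'gemini-2.0-flash', tags: ['chat', 'functions', 'image'], beta: true },
];

// Pick the first model that covers every required capability,
// optionally excluding beta models.
function pickModel(required: Capability[], allowBeta = true): string | undefined {
  return models.find(
    (m) => (allowBeta || !m.beta) && required.every((cap) => m.tags.includes(cap)),
  )?.id;
}

// Example: a bot that needs function calling and image input.
console.log(pickModel(['functions', 'image'])); // "gpt-4o"
```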
text-qaa-web-001
Fast and efficient question and answer model with web search grounding.
Tags: chat

text-qaa-005
This model belongs to the GPT-4o mini family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 128000 and provides a balance between cost and quality. It is a custom model based on the GPT model architecture.
Tags: chat, functions, image

text-qaa-004
This model belongs to the GPT-4o family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 128000 and provides a balance between cost and quality. It is a custom model based on the GPT model architecture.
Tags: chat, functions, image, file

text-qaa-003
This model belongs to the GPT-4 Turbo family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 128000 and provides a balance between cost and quality. It is a custom model based on the GPT model architecture.
Tags: chat, functions, image

text-qaa-002
This model belongs to the GPT-4 family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 8000 and provides a balance between cost and quality. It is a custom model based on the GPT model architecture.
Tags: chat, functions

text-qaa-001
This model belongs to the GPT-3.5 Turbo family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 4000 and provides a balance between cost and quality. It is a custom model based on the GPT model architecture.
Tags: chat, functions
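Each text-qaa model advertises a token limit, which bounds the combined size of the prompt (backstory, history, retrieved context) and the response. As a rough sanity check, the hypothetical helper below estimates whether a prompt fits within a listed limit while reserving room for the reply; the 4-characters-per-token estimate is only an approximation, not how ChatBotKit counts tokens.

```typescript
// Token limits as listed above for the text-qaa family.
const tokenLimits: Record<string, number> = {
  'text-qaa-005': 128000,
  'text-qaa-004': 128000,
  'text-qaa-003': 128000,
  'text-qaa-002': 8000,
  'text-qaa-001': 4000,
};

// Very rough estimate: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Check whether a prompt fits a model's limit while reserving
// `reserveForReply` tokens for the model's response.
function fitsModel(model: string, prompt: string, reserveForReply = 1000): boolean {
  const limit = tokenLimits[model];
  if (limit === undefined) return false;
  return estimateTokens(prompt) + reserveForReply <= limit;
}

console.log(fitsModel('text-qaa-001', 'x'.repeat(20000))); // false: ~5000 + 1000 > 4000
console.log(fitsModel('text-qaa-004', 'x'.repeat(20000))); // true: fits well within 128000
```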
text-algo-004
This model belongs to the GPT-4o family of ChatBotKit models.
Tags: chat, functions, image, file

text-algo-003
This model belongs to the GPT-4 family of ChatBotKit models.
Tags: chat, functions

text-algo-002
This model belongs to the Turbo family of ChatBotKit models.
Tags: chat, functions

gpt-image-1
GPT Image 1 is a state-of-the-art image generation model. It is a natively multimodal language model that accepts both text and image inputs, and produces image outputs.

dalle3
This model is based on the DALL-E 3 architecture. It is a high-quality model that can generate images from text. It is tunable and offers a balance between cost and quality.

dalle2
This model is based on the DALL-E 2 architecture. It is a high-quality model that can generate images from text. It is tunable and offers a balance between cost and quality.

stablediffusion
This model is based on the Stable Diffusion architecture. It is a high-quality model that can generate images from text. It is tunable and offers a balance between cost and quality.