Explore All AI Models For Conversational AI Development

Dive into the world of ChatBotKit's AI models. Here we showcase a wide array of sophisticated AI technologies, each designed to cater to a different aspect of conversational AI and chatbot development.
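
For example, a bot can point at any identifier from the catalog below when it is created through the ChatBotKit Node SDK. The sketch that follows is a minimal, illustrative example: the ChatBotKit client, the bot.create call, and the field names are assumptions about typical SDK usage rather than a confirmed API, so consult the SDK reference for the exact shape.

    // A minimal sketch, assuming @chatbotkit/sdk exposes a ChatBotKit client
    // whose bot.create method accepts a `model` field. The client class,
    // method, field names, and env var name are illustrative assumptions.
    import { ChatBotKit } from '@chatbotkit/sdk'

    const cbk = new ChatBotKit({
      secret: process.env.CHATBOTKIT_API_TOKEN!,
    })

    async function main() {
      // Any identifier from the catalog below can be used here, e.g. a
      // cost-efficient small model for simple support flows.
      const { id } = await cbk.bot.create({
        name: 'Support Assistant',
        model: 'gpt-4o-mini',
        backstory: 'You are a friendly assistant for the Acme help desk.',
      })

      console.log(`Created bot ${id} backed by gpt-4o-mini`)
    }

    main()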

  • gpt-4o-mini-next

    GPT-4o mini is OpenAI's most cost-efficient small model; it is smarter and cheaper than GPT-3.5 Turbo and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
    beta
  • gpt-4o-mini-classic

    GPT-4o mini is OpenAI's most cost-efficient small model; it is smarter and cheaper than GPT-3.5 Turbo and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
    beta
  • gpt-4o-mini

    GPT-4o mini is OpenAI's most cost-efficient small model; it is smarter and cheaper than GPT-3.5 Turbo and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
    beta
  • gpt-4o-next

    GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
    beta
  • gpt-4o-classic

    GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
    beta
  • gpt-4o

    GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff.
    beta
  • gpt-4-turbo-next

    GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision.
  • gpt-4-turbo-classic

    GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision.
  • gpt-4-turbo

    GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision.
  • gpt-4-next

    The GPT-4 model was built with broad general knowledge and domain expertise.
  • gpt-4-classic

    The GPT-4 model was built with broad general knowledge and domain expertise.
  • gpt-4

    The GPT-4 model was built with broad general knowledge and domain expertise.
  • gpt-3.5-turbo-next

    GPT-3.5 Turbo is a fast, inexpensive model for simpler tasks.
  • gpt-3.5-turbo-classic

    GPT-3.5 Turbo is a fast, inexpensive model for simpler tasks.
  • gpt-3.5-turbo

    GPT-3.5 Turbo is a fast, inexpensive model for simpler tasks.
  • gpt-3.5-turbo-instruct

    GPT-3.5 Turbo is a fast, inexpensive model for simpler tasks.
  • mistral-large-latest

    Top-tier reasoning for high-complexity tasks. The most powerful model of the Mistral AI family.
    beta
  • mistral-small-latest

    Cost-efficient reasoning for low-latency workloads.
    beta
  • claude-v3-opus

    Anthropic's most powerful AI model, with top-level performance on highly complex tasks. It can navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding.
    beta
  • claude-v3-sonnet

    Claude 3 Sonnet strikes the ideal balance between intelligence and speed, particularly for enterprise workloads. It offers maximum utility and is engineered to be dependable for scaled AI deployments.
    beta
  • claude-v3-haiku

    Anthropic’s fastest, most compact model for near-instant responsiveness. It answers simple queries and requests with speed.
    beta
  • claude-v3

    Claude 3 Sonnet strikes the ideal balance between intelligence and speed, particularly for enterprise workloads. It offers maximum utility and is engineered to be dependable for scaled AI deployments.
    beta
  • claude-v2.1

    Claude 2.1 is a large language model (LLM) by Anthropic with a 200K token context window, reduced hallucination rates, and improved accuracy over long documents.
    beta
  • claude-v2

    Claude 2.0 is a leading LLM from Anthropic that enables a wide range of tasks from sophisticated dialogue and creative content generation to detailed instruction.
    beta
  • claude-instant-v1

    Claude Instant is Anthropic's faster, lower-priced yet very capable LLM.
    beta
  • custom

    Any custom model created by the user.
  • text-qaa-003

    This model belongs to the GPT-4 Turbo family of ChatBotKit models. It is designed for question-and-answer applications. The model has a 128K token limit and provides a balance between cost and quality. It is a custom model based on the GPT model architecture.
    beta
  • text-qaa-002

    This model belongs to the GPT-4 family of ChatBotKit models. It is designed for question-and-answer applications. The model has an 8K token limit and provides a balance between cost and quality. It is a custom model based on the GPT model architecture.
    beta
  • text-qaa-001

    This model belongs to the Turbo family of ChatBotKit models. It is designed for question-and-answer applications. The model has a 4K token limit and provides a balance between cost and quality. It is a custom model based on the GPT model architecture.
    beta
  • text-algo-003

    This model belongs to the GPT-4 family of ChatBotKit models.
  • text-algo-002

    This model belongs to the Turbo family of ChatBotKit models.
  • dalle3

    This model is based on the DALL-E 3 architecture. It is a high-quality model that can generate images from text. It is tunable and offers a balance between cost and quality.
  • dalle2

    This model is based on the DALL-E 2 architecture. It is a high-quality model that can generate images from text. It is tunable and offers a balance between cost and quality.
  • stablediffusion

    This model is based on the Stable Diffusion architecture. It is a high-quality model that can generate images from text. It is tunable and offers a balance between cost and quality.
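
Because model identifiers are plain strings, it can help to pin the ones your application supports in code before any request is made. The following sketch simply mirrors a subset of the identifiers listed above as a TypeScript union and validates a selection; the helper itself is illustrative and not part of any ChatBotKit API.

    // Illustrative only: mirror a subset of the catalog above as literal types
    // so an invalid model name is caught before it reaches ChatBotKit.
    const CHAT_MODELS = [
      'gpt-4o',
      'gpt-4o-mini',
      'gpt-4-turbo',
      'gpt-4',
      'gpt-3.5-turbo',
      'mistral-large-latest',
      'mistral-small-latest',
      'claude-v3-opus',
      'claude-v3-sonnet',
      'claude-v3-haiku',
    ] as const

    type ChatModel = (typeof CHAT_MODELS)[number]

    // Narrow an arbitrary string (for example, a value read from configuration)
    // to a known chat model identifier.
    function isChatModel(value: string): value is ChatModel {
      return (CHAT_MODELS as readonly string[]).includes(value)
    }

    const requested = process.env.BOT_MODEL ?? 'claude-v3-haiku'

    console.log(
      isChatModel(requested)
        ? `Using chat model: ${requested}`
        : `Unknown chat model: ${requested}`
    )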