Discover the different models supported by ChatBotKit, including base OpenAI models like GPT-4 and GPT-3.5, as well as in-house models for various use cases.

ChatBotKit supports various models for creating engaging conversational AI experiences. These include foundational OpenAI models such as GPT-4o, GPT-4, and GPT-3.5, along with models from Anthropic, Mistral, and others. Additionally, ChatBotKit uses several of its own models, including text-algo-004 and text-algo-003, for our in-house general assistant.

Below is a table that summarizes the different models. It includes their names, short descriptions, token ratios, and context sizes (the maximum number of tokens).

| Model Name | Short Description | Token Ratio | Context Size |
| --- | --- | --- | --- |
| o1-next | o1-preview is OpenAI's new reasoning model for complex tasks that require broad general knowledge. The model has 128K context and an October 2023 knowledge cutoff. | 3.3333 | 128000 |
| o1-classic | o1-preview is OpenAI's new reasoning model for complex tasks that require broad general knowledge. The model has 128K context and an October 2023 knowledge cutoff. | 3.3333 | 128000 |
| o1 | o1-preview is OpenAI's new reasoning model for complex tasks that require broad general knowledge. The model has 128K context and an October 2023 knowledge cutoff. | 3.3333 | 128000 |
| o1-mini-next | o1-mini is a fast, cost-efficient reasoning model tailored to coding, math, and science use cases. The model has 128K context and an October 2023 knowledge cutoff. | 0.6667 | 128000 |
| o1-mini-classic | o1-mini is a fast, cost-efficient reasoning model tailored to coding, math, and science use cases. The model has 128K context and an October 2023 knowledge cutoff. | 0.6667 | 128000 |
| o1-mini | o1-mini is a fast, cost-efficient reasoning model tailored to coding, math, and science use cases. The model has 128K context and an October 2023 knowledge cutoff. | 0.6667 | 128000 |
| gpt-4o-mini-next | GPT-4o mini is OpenAI's most cost-efficient small model that's smarter and cheaper than GPT-3.5 Turbo, and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.0333 | 128000 |
| gpt-4o-mini-classic | GPT-4o mini is OpenAI's most cost-efficient small model that's smarter and cheaper than GPT-3.5 Turbo, and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.0333 | 128000 |
| gpt-4o-mini | GPT-4o mini is OpenAI's most cost-efficient small model that's smarter and cheaper than GPT-3.5 Turbo, and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.0333 | 128000 |
| gpt-4o-next | GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.5556 | 128000 |
| gpt-4o-classic | GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.8333 | 128000 |
| gpt-4o | GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.8333 | 128000 |
| gpt-4-turbo-next | GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision. | 1.6667 | 128000 |
| gpt-4-turbo-classic | GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision. | 1.6667 | 128000 |
| gpt-4-turbo | GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision. | 1.6667 | 128000 |
| gpt-4-next | The GPT-4 model was built with broad general knowledge and domain expertise. | 3.3333 | 8192 |
| gpt-4-classic | The GPT-4 model was built with broad general knowledge and domain expertise. | 3.3333 | 8192 |
| gpt-4 | The GPT-4 model was built with broad general knowledge and domain expertise. | 3.3333 | 8192 |
| gpt-3.5-turbo-next | GPT-3.5 Turbo is a fast, inexpensive model for simpler tasks. | 0.0833 | 16384 |
| gpt-3.5-turbo-classic | GPT-3.5 Turbo is a fast, inexpensive model for simpler tasks. | 0.2222 | 4096 |
| gpt-3.5-turbo | GPT-3.5 Turbo is a fast, inexpensive model for simpler tasks. | 0.0833 | 16384 |
| gpt-3.5-turbo-instruct | GPT-3.5 Turbo is a fast, inexpensive model for simpler tasks. | 0.1111 | 4096 |
| mistral-large-latest | Top-tier reasoning for high-complexity tasks. The most powerful model of the Mistral AI family. | 0.6667 | 32000 |
| mistral-small-latest | Cost-efficient reasoning for low-latency workloads. | 0.1667 | 32000 |
| claude-v3-opus | Anthropic's most powerful AI model, with top-level performance on highly complex tasks. It can navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding. | 4.1667 | 200000 |
| claude-v3-sonnet | Claude 3 Sonnet strikes the ideal balance between intelligence and speed, particularly for enterprise workloads. It offers maximum utility and is engineered to be dependable for scaled AI deployments. | 0.8333 | 200000 |
| claude-v3-haiku | Anthropic's fastest, most compact model for near-instant responsiveness. It answers simple queries and requests with speed. | 0.0694 | 200000 |
| claude-v3 | Claude 3 Sonnet strikes the ideal balance between intelligence and speed, particularly for enterprise workloads. It offers maximum utility and is engineered to be dependable for scaled AI deployments. | 0.8333 | 200000 |
| claude-v2.1 | Claude 2.1 is a large language model (LLM) by Anthropic with a 200K token context window, reduced hallucination rates, and improved accuracy over long documents. | 1.3333 | 200000 |
| claude-v2 | Claude 2.0 is a leading LLM from Anthropic that enables a wide range of tasks from sophisticated dialogue and creative content generation to detailed instruction. | 1.3333 | 100000 |
| claude-instant-v1 | Claude Instant is Anthropic's faster, lower-priced yet very capable LLM. | 0.1333 | 100000 |
| custom | Any custom model created by the user. | 0.0278 | 4096 |
| text-qaa-003 | This model belongs to the GPT-4 Turbo family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 128000 and provides a balance between cost and quality. It is a custom model based on the gpt model architecture. | 1.6667 | 128000 |
| text-qaa-002 | This model belongs to the GPT-4 family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 8192 and provides a balance between cost and quality. It is a custom model based on the gpt model architecture. | 3.3333 | 8192 |
| text-qaa-001 | This model belongs to the Turbo family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 4096 and provides a balance between cost and quality. It is a custom model based on the gpt model architecture. | 0.1 | 4096 |
| text-algo-003 | This model belongs to the GPT-4 family of ChatBotKit models. | 3.3333 | 8192 |
| text-algo-002 | This model belongs to the Turbo family of ChatBotKit models. | 0.1 | 4096 |
About our latest models

We try to keep this page up to date; however, the definitive list of supported models and their configurations is always available in the platform itself.

About token costs

The token ratio serves as a key indicator for token cost. A higher ratio corresponds to a more expensive token type.

ChatBotKit uses the token ratio as a multiplier to calculate the actual number of tokens consumed by the model. Each model token is multiplied by the token ratio to determine the number of tokens ChatBotKit records. This ensures accurate tracking of the resources each model uses and correct user billing.
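The multiplication described above can be sketched in a few lines. The rounding behavior and the example ratios are taken from the table on this page; how ChatBotKit rounds fractional results internally is an assumption.

```python
# Sketch of how recorded token usage can be derived from a model's raw
# token count and its token ratio. Ratios come from the table above;
# the rounding behavior is an assumption.

TOKEN_RATIOS = {
    "gpt-4o": 0.8333,
    "gpt-4": 3.3333,
    "gpt-3.5-turbo": 0.0833,
}

def recorded_tokens(model: str, model_tokens: int) -> int:
    """Multiply raw model tokens by the model's token ratio."""
    return round(model_tokens * TOKEN_RATIOS[model])

# A 1,000-token gpt-4 interaction is recorded as roughly 3,333 tokens,
# while the same interaction on gpt-3.5-turbo is recorded as roughly 83.
```

This is why a higher token ratio translates directly into higher effective cost for the same conversation.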

The context size refers to the maximum number of tokens (roughly, word fragments and symbols) the model can consider when generating a response. A larger context size allows more information to be taken into account, potentially leading to more accurate and relevant responses.

When choosing a model, it's essential to evaluate not just its capabilities, but also its cost and size. Larger and more expensive models aren't always the best choice for every task. Often, a smaller model can perform equally well or even better. As a rule of thumb, gpt-4o and gpt-4 are the best choices if you need the most advanced and capable model. However, if you're looking for a capable model that's also smaller, gpt-3.5-turbo might be a better fit.

Bring Your Own Model

ChatBotKit offers the option of bringing your own model and keys to the platform. This feature is designed for those who want more control over their models and costs: if you have a model that you've trained and refined for your specific use case, you can bring it to our platform and use your own keys, paying the provider for model usage directly. This can be beneficial if you have particular budget constraints or cost strategies. In short, you're not limited to our pre-built models; you can introduce your own custom models for greater flexibility and control.

Here is an outline of the steps required to create your own custom model.

  1. Navigate to the Bot Configuration Screen

    • From the main dashboard, click on the "Bots" section in the left-hand menu.
    • Select the bot you want to configure or create a new bot.
  2. Choose the Model

    • Under the "Model" section, select "custom" from the dropdown menu as shown in the first screenshot.

    • Press the “Settings” button.

  3. Model Configuration Window

    • Enter a name for your custom model in the "Name" field. For example, "gpt-3.5-turbo."

    • Choose the provider of your custom model from the "Provider" dropdown menu. In this case, select "OpenAI."

    • Provide the necessary credentials for accessing the custom model. Click on the credentials field and enter the required information.

    • Define the maximum number of tokens the chatbot will use for each interaction in the "Max Tokens" field. The default value is 4096.
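The steps above amount to assembling a small set of configuration values. The sketch below mirrors them as a plain payload; the field names are illustrative assumptions, not the exact ChatBotKit API schema.

```python
# Illustrative sketch of a custom-model configuration mirroring the
# steps above. Field names are assumptions, not the exact ChatBotKit
# schema; consult the platform for the real contract.

def build_custom_model_config(name: str, provider: str,
                              credentials: str,
                              max_tokens: int = 4096) -> dict:
    """Assemble the settings entered in the Model Configuration window."""
    if max_tokens <= 0:
        raise ValueError("max_tokens must be positive")
    return {
        "model": "custom",
        "name": name,                # e.g. "gpt-3.5-turbo"
        "provider": provider,        # e.g. "openai"
        "credentials": credentials,  # your own key (BYOK)
        "maxTokens": max_tokens,     # defaults to 4096, as in the UI
    }

config = build_custom_model_config("gpt-3.5-turbo", "openai", "sk-...")
```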

BYOK Caveats

When you opt to use your own key (BYOK) for model access, you assume full responsibility for the model's availability and operational limits. This shift occurs because you are no longer utilizing the default ChatBotKit service tiers, which may offer different capabilities and restrictions.

Customising Model Settings

To customize a model's settings, click on the settings icon next to the model name.

There are seven main properties that can be customized: Max Tokens, Temperature, Interaction Max Messages, Region, Frequency Penalty, Presence Penalty, and Vision.

Max Tokens: This property determines the maximum number of tokens that the model can consume when generating a response. By default, this is set to the maximum context size for the model, but you can reduce it to limit the amount of resources used by the model. This can help save token cost but may also reduce the ability of the chatbot to keep up with the conversation.

Temperature: This property determines the level of randomness or creativity in the model's responses. A higher temperature value will result in more diverse and creative responses, while a lower value will result in more conservative and predictable responses.

Interaction Max Messages: The maximum number of messages to use per model interaction. Setting this value low will make the model more deterministic, while increasing it will result in more creativity. For Q&A-style conversations, it is recommended to keep the value at 2.

Region: The region property allows you to specify the geographical region for the model. This can be particularly useful for services that have specific regional requirements or restrictions. However, it's important to note that the availability of certain models may vary depending on the region.

Frequency Penalty: This property determines how much the model penalizes the repetition of certain words or phrases in its responses. A higher frequency penalty value will result in responses that are more varied and less repetitive.

Presence Penalty: This property determines how much the model penalizes the use of certain words or phrases in its responses. A higher presence penalty value will result in responses that are less likely to contain specific words or phrases.

Vision: This property applies solely to vision models. It enables bots to utilize native vision capabilities as opposed to Skillset Vision Actions. While we generally recommend Skillset for cost-efficiency and control, there are situations where native vision capabilities may be preferred.
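The properties above can be combined into a single settings payload. In the sketch below, the keys and accepted ranges are assumptions (the ranges follow common OpenAI conventions); check the ChatBotKit settings screen for the authoritative values.

```python
# Illustrative model-settings payload covering the properties described
# above. Keys and ranges are assumptions, not the exact ChatBotKit
# schema.

def build_model_settings(max_tokens: int = 4096,
                         temperature: float = 0.7,
                         interaction_max_messages: int = 2,
                         region: str = "",
                         frequency_penalty: float = 0.0,
                         presence_penalty: float = 0.0,
                         vision: bool = False) -> dict:
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature is typically between 0 and 2")
    if not -2.0 <= frequency_penalty <= 2.0:
        raise ValueError("frequency_penalty is typically between -2 and 2")
    if not -2.0 <= presence_penalty <= 2.0:
        raise ValueError("presence_penalty is typically between -2 and 2")
    settings = {
        "maxTokens": max_tokens,
        "temperature": temperature,
        "interactionMaxMessages": interaction_max_messages,
        "frequencyPenalty": frequency_penalty,
        "presencePenalty": presence_penalty,
        "vision": vision,
    }
    if region:
        settings["region"] = region
    return settings

# A conservative Q&A setup: low temperature, two-message interactions.
qa_settings = build_model_settings(temperature=0.2,
                                   interaction_max_messages=2)
```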

By customizing these properties, you can fine-tune the behavior of the model to better suit your specific use case and requirements. However, it's important to note that changing these properties can have a significant impact on the model's performance and accuracy, so it's recommended to experiment with different settings to find the best balance between performance and creativity.

FAQ

Can I get regional access to some models?

Yes. Some models such as Claude can be accessed within your own designated region. Please contact us for more information.

Can I bring my own model?

Our models are designed to scale no matter the circumstances. However, customers who wish to bring their own model can do so on some of our higher-tier plans, such as Pro, Pro Plus, and Team.

How is token usage calculated?

Many factors affect your monthly usage, including but not limited to the model you are using, the number of datasets and skillsets, and their types.