Models
ChatBotKit supports various models to create engaging conversational AI experiences. These include foundational OpenAI models such as GPT-4o, GPT-4, and GPT-3.5, along with models from Anthropic, Mistral, and others. Additionally, ChatBotKit provides several of its own models, including text-algo-002 and text-algo-003, which power our in-house general assistant.
Below is a table that summarizes the different models. It includes their names, short descriptions, token ratios, and context sizes (the maximum number of tokens).
Model Name | Short Description | Token Ratio | Context Size |
---|---|---|---|
o1-next | o1-preview is OpenAI's new reasoning model for complex tasks that require broad general knowledge. The model has 128K context and an October 2023 knowledge cutoff. | 3.3333 | 128000 |
o1-classic | o1-preview is OpenAI's new reasoning model for complex tasks that require broad general knowledge. The model has 128K context and an October 2023 knowledge cutoff. | 3.3333 | 128000 |
o1 | o1-preview is OpenAI's new reasoning model for complex tasks that require broad general knowledge. The model has 128K context and an October 2023 knowledge cutoff. | 3.3333 | 128000 |
o1-mini-next | o1-mini is a fast, cost-efficient reasoning model tailored to coding, math, and science use cases. The model has 128K context and an October 2023 knowledge cutoff. | 0.6667 | 128000 |
o1-mini-classic | o1-mini is a fast, cost-efficient reasoning model tailored to coding, math, and science use cases. The model has 128K context and an October 2023 knowledge cutoff. | 0.6667 | 128000 |
o1-mini | o1-mini is a fast, cost-efficient reasoning model tailored to coding, math, and science use cases. The model has 128K context and an October 2023 knowledge cutoff. | 0.6667 | 128000 |
gpt-4o-mini-next | GPT-4o mini is OpenAI's most cost-efficient small model that's smarter and cheaper than GPT-3.5 Turbo, and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.0333 | 128000 |
gpt-4o-mini-classic | GPT-4o mini is OpenAI's most cost-efficient small model that's smarter and cheaper than GPT-3.5 Turbo, and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.0333 | 128000 |
gpt-4o-mini | GPT-4o mini is OpenAI's most cost-efficient small model that's smarter and cheaper than GPT-3.5 Turbo, and has vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.0333 | 128000 |
gpt-4o-next | GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.5556 | 128000 |
gpt-4o-classic | GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.8333 | 128000 |
gpt-4o | GPT-4o is faster and cheaper than GPT-4 Turbo with stronger vision capabilities. The model has 128K context and an October 2023 knowledge cutoff. | 0.8333 | 128000 |
gpt-4-turbo-next | GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision. | 1.6667 | 128000 |
gpt-4-turbo-classic | GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision. | 1.6667 | 128000 |
gpt-4-turbo | GPT-4 Turbo is offered at 128K context with an April 2023 knowledge cutoff and basic support for vision. | 1.6667 | 128000 |
gpt-4-next | The GPT-4 model was built with broad general knowledge and domain expertise. | 3.3333 | 8192 |
gpt-4-classic | The GPT-4 model was built with broad general knowledge and domain expertise. | 3.3333 | 8192 |
gpt-4 | The GPT-4 model was built with broad general knowledge and domain expertise. | 3.3333 | 8192 |
gpt-3.5-turbo-next | GPT-3.5 Turbo is a fast and inexpensive model for simpler tasks. | 0.0833 | 16384 |
gpt-3.5-turbo-classic | GPT-3.5 Turbo is a fast and inexpensive model for simpler tasks. | 0.2222 | 4096 |
gpt-3.5-turbo | GPT-3.5 Turbo is a fast and inexpensive model for simpler tasks. | 0.0833 | 16384 |
gpt-3.5-turbo-instruct | GPT-3.5 Turbo is a fast and inexpensive model for simpler tasks. | 0.1111 | 4096 |
mistral-large-latest | Top-tier reasoning for high-complexity tasks. The most powerful model of the Mistral AI family. | 0.6667 | 32000 |
mistral-small-latest | Cost-efficient reasoning for low-latency workloads. | 0.1667 | 32000 |
claude-v3-opus | Anthropic's most powerful AI model, with top-level performance on highly complex tasks. It can navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding. | 4.1667 | 200000 |
claude-v3-sonnet | Claude 3 Sonnet strikes the ideal balance between intelligence and speed, particularly for enterprise workloads. It offers maximum utility and is engineered to be dependable for scaled AI deployments. | 0.8333 | 200000 |
claude-v3-haiku | Anthropic's fastest, most compact model for near-instant responsiveness. It answers simple queries and requests with speed. | 0.0694 | 200000 |
claude-v3 | Claude 3 Sonnet strikes the ideal balance between intelligence and speed, particularly for enterprise workloads. It offers maximum utility and is engineered to be dependable for scaled AI deployments. | 0.8333 | 200000 |
claude-v2.1 | Claude 2.1 is a large language model (LLM) by Anthropic with a 200K token context window, reduced hallucination rates, and improved accuracy over long documents. | 1.3333 | 200000 |
claude-v2 | Claude 2.0 is a leading LLM from Anthropic that enables a wide range of tasks from sophisticated dialogue and creative content generation to detailed instruction. | 1.3333 | 100000 |
claude-instant-v1 | Claude Instant is Anthropic's faster, lower-priced yet very capable LLM. | 0.1333 | 100000 |
custom | Any custom model created by the user. | 0.0278 | 4096 |
text-qaa-003 | This model belongs to the GPT-4 Turbo family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 128000 and provides a balance between cost and quality. It is a custom model based on the gpt model architecture. | 1.6667 | 128000 |
text-qaa-002 | This model belongs to the GPT-4 family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 8192 and provides a balance between cost and quality. It is a custom model based on the gpt model architecture. | 3.3333 | 8192 |
text-qaa-001 | This model belongs to the Turbo family of ChatBotKit models. It is designed for question and answer applications. The model has a token limit of 4096 and provides a balance between cost and quality. It is a custom model based on the gpt model architecture. | 0.1 | 4096 |
text-algo-003 | This model belongs to the GPT-4 family of ChatBotKit models. | 3.3333 | 8192 |
text-algo-002 | This model belongs to the Turbo family of ChatBotKit models. | 0.1 | 4096 |
About our latest models
We will try to keep this page up to date. The latest list of supported models and their configurations can be found here.
About token costs
The token ratio serves as a key indicator of token cost. A higher ratio corresponds to a more expensive token type.
ChatBotKit uses the token ratio as a multiplier to calculate the actual number of tokens consumed by the model. Each model token is multiplied by the token ratio to determine the number of tokens ChatBotKit records. This ensures accurate tracking of the resources each model uses and correct user billing.
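For example, with ratios taken from the table above, the recorded usage works out as in the following sketch. This is illustrative only and not part of the ChatBotKit SDK; the rounding behavior is an assumption.

```ts
// Illustrative sketch only (not part of the ChatBotKit SDK): how a
// token ratio converts model tokens into the tokens ChatBotKit records.
// Ratios are taken from the table above; rounding up is an assumption.
const TOKEN_RATIOS: Record<string, number> = {
  'gpt-4o': 0.8333,
  'gpt-4o-mini': 0.0333,
  'claude-v3-haiku': 0.0694,
}

function recordedTokens(model: string, modelTokens: number): number {
  const ratio = TOKEN_RATIOS[model]
  if (ratio === undefined) {
    throw new Error(`unknown model: ${model}`)
  }
  // Each model token is multiplied by the token ratio.
  return Math.ceil(modelTokens * ratio)
}

console.log(recordedTokens('gpt-4o', 1000)) // 834
console.log(recordedTokens('gpt-4o-mini', 1000)) // 34
```

As the last two lines show, the same 1000 model tokens cost far fewer recorded tokens on a small model than on a flagship one, which is why the token ratio matters when picking a model.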
The context size refers to the maximum number of tokens (units of text roughly corresponding to words or word fragments) the model can consider when generating a response. A larger context size allows more information to be taken into account, potentially leading to more accurate and relevant responses.
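In practice, this means the conversation history must fit within the context size. The minimal sketch below trims the oldest messages first; the four-characters-per-token estimate is a rough heuristic, not ChatBotKit's actual tokenizer.

```ts
// Minimal sketch: keep conversation history within a model's context
// size by dropping the oldest messages first. The four-characters-per-
// token estimate is a rough heuristic, not ChatBotKit's tokenizer.
interface Message {
  role: 'user' | 'bot'
  text: string
}

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

function fitToContext(messages: Message[], contextSize: number): Message[] {
  const kept: Message[] = []
  let used = 0

  // Walk from newest to oldest, keeping messages while they still fit.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].text)
    if (used + cost > contextSize) break
    kept.unshift(messages[i])
    used += cost
  }

  return kept
}
```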
When choosing a model, it's essential to evaluate not just its capabilities, but also its cost and size. Larger and more expensive models aren't always the best choice for every task. Often, a smaller model can perform equally well or even better. As a rule of thumb, gpt-4o and gpt-4 are the best choices if you need the most advanced and capable model. However, if you're looking for a capable model that's also smaller, gpt-3.5-turbo might be a better fit.
Bring Your Own Model
ChatBotKit offers the option of bringing your own model and keys to the platform. This feature is designed for those who want more control over their models and costs. If you have a model that you've trained and refined for your specific use case, you're free to bring it to our platform. Using your own keys means you pay for model usage directly with the provider, which can be beneficial if you have particular budget constraints or cost strategies. In short, you're not limited to our pre-built models: you can also introduce your own custom models, giving you more flexibility and control to meet your specific needs.
Here is an outline of the steps required to create your own custom model. A sketch of how the resulting configuration fits together follows the steps.

1. Navigate to the Bot Configuration Screen
   - From the main dashboard, click on the "Bots" section in the left-hand menu.
   - Select the bot you want to configure or create a new bot.
2. Choose the Model
   - Under the "Model" section, select "custom" from the dropdown menu.
   - Press the "Settings" button.
3. Model Configuration Window
   - Enter a name for your custom model in the "Name" field. For example, "gpt-3.5-turbo".
   - Choose the provider of your custom model from the "Provider" dropdown menu. In this case, select "OpenAI".
   - Provide the necessary credentials for accessing the custom model. Click on the credentials field and enter the required information.
   - Define the maximum number of tokens the chatbot will use for each interaction in the "Max Tokens" field. The default value is 4096.
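To make these fields concrete, the values from the steps above map onto a configuration along the following lines. This is purely illustrative; the property names are assumptions, not the actual ChatBotKit API.

```ts
// Purely illustrative: the custom model settings from the steps above
// gathered into one object. Property names are assumptions, not the
// actual ChatBotKit API.
const customModel = {
  name: 'gpt-3.5-turbo', // the "Name" field
  provider: 'openai',    // the "Provider" dropdown
  credentials: 'sk-...', // your own provider API key (BYOK)
  maxTokens: 4096,       // the "Max Tokens" field (default 4096)
}
```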
BYOK Caveats
When you opt to use your own key (BYOK) for model access, you assume full responsibility for the model's availability and operational limits. This shift occurs because you are no longer utilizing the default ChatBotKit service tiers, which may offer different capabilities and restrictions.
Customizing Model Settings
To customize a model's settings, click on the settings icon next to the model name.
There are several properties that can be customized: Max Tokens, Temperature, Interaction Max Messages, Region, Frequency Penalty, Presence Penalty, and Vision; a combined sketch of these settings follows the list below.
Max Tokens: This property determines the maximum number of tokens that the model can consume when generating a response. By default, this is set to the maximum context size for the model, but you can reduce it to limit the amount of resources used by the model. This can help save token cost but may also reduce the ability of the chatbot to keep up with the conversation.
Temperature: This property determines the level of randomness or creativity in the model's responses. A higher temperature value will result in more diverse and creative responses, while a lower value will result in more conservative and predictable responses.
Interaction Max Messages: The maximum number of messages to include per model interaction. Setting this value low makes the model more deterministic, while increasing it allows for more context and creativity. For Q&A-style conversations, it is recommended to keep the value at 2.
Region: The region property allows you to specify the geographical region for the model. This can be particularly useful for services that have specific regional requirements or restrictions. However, it's important to note that the availability of certain models may vary depending on the region.
Frequency Penalty: This property determines how much the model penalizes tokens based on how often they have already appeared in the response. A higher frequency penalty value results in responses that are more varied and less repetitive.
Presence Penalty: This property determines how much the model penalizes tokens that have already appeared at least once, regardless of how often. A higher presence penalty value makes the model more likely to introduce new words and topics.
Vision: This property applies solely to vision models. It enables bots to utilize native vision capabilities as opposed to Skillset Vision Actions. While we generally recommend Skillset for cost-efficiency and control, there are situations where native vision capabilities may be preferred.
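Taken together, these settings could be represented as in the sketch below. The property names are illustrative assumptions, not the exact ChatBotKit schema.

```ts
// Illustrative only: the customizable settings described above gathered
// into one object. Property names are assumptions, not the exact
// ChatBotKit schema.
const modelSettings = {
  maxTokens: 8192,           // cap token usage below the model's context size
  temperature: 0.7,          // higher = more creative, lower = more predictable
  interactionMaxMessages: 2, // recommended for Q&A-style conversations
  region: 'us',              // for models with regional availability
  frequencyPenalty: 0.5,     // discourage repeating frequent tokens
  presencePenalty: 0.5,      // encourage introducing new words and topics
  vision: false,             // enable native vision on vision-capable models
}
```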
By customizing these properties, you can fine-tune the behavior of the model to better suit your specific use case and requirements. However, it's important to note that changing these properties can have a significant impact on the model's performance and accuracy, so it's recommended to experiment with different settings to find the best balance between performance and creativity.
FAQ
Can I get regional access to some models?
Yes. Some models such as Claude can be accessed within your own designated region. Please contact us for more information.
Can I bring my own model?
Yes. Our built-in models are designed to scale no matter the circumstances, but customers who wish to bring their own model can do so on some of our higher-tier plans, such as Pro, Pro Plus, and Team.