Models
ChatBotKit supports a variety of models for generating responses, including base OpenAI models such as the Davinci and Curie families, the OpenAI instruct family, and the newest conversational models such as gpt-4 and gpt-3.5-turbo (also known as ChatGPT). Additionally, ChatBotKit offers several in-house models, such as text-algo-001 and text-algo-002, which power our in-house general assistant.
Below is a table summarizing the available models, including their name, a short description, and their context size (the maximum number of tokens).
| Model Name | Short Description | Context Size (tokens) |
| --- | --- | --- |
| gpt-4 | Most capable model, with enhanced reasoning and conversational abilities. | 8192 |
| gpt-3.5-turbo | Fast, capable model optimized for conversational use (also known as ChatGPT). | 4096 |
| text-davinci-003 | Most capable model in the GPT-3 instruct family. | 4096 |
| text-curie-001 | Capable model with a smaller footprint. | 2048 |
| davinci-instruct-beta | Instruct model family optimized for instruction-following use cases. | 2048 |
| code-davinci-002 | Model optimized for code generation. | 2048 |
| text-algo-003 | In-house model optimized for general assistant use. | 8192 |
| text-algo-002 | In-house model optimized for general assistant use. | 4096 |
| text-algo-001 | In-house model optimized for general assistant use. | 4096 |
The context size refers to the maximum number of tokens (words or parts of words) that the model can take into account when generating a response. A larger context size allows the model to consider more information, potentially resulting in more accurate and relevant responses.
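As a rough back-of-the-envelope check, you can estimate whether a prompt will fit within a model's context size. The sketch below assumes roughly four characters per token, which is only an approximation; exact counts depend on the model's tokenizer.

```typescript
// Rough heuristic: English text averages about 4 characters per token.
// This is an estimate only; use the model's actual tokenizer for exact counts.
const APPROX_CHARS_PER_TOKEN = 4

function estimateTokens(text: string): number {
  return Math.ceil(text.length / APPROX_CHARS_PER_TOKEN)
}

function fitsInContext(prompt: string, contextSize: number): boolean {
  // Reserve headroom for the generated response, not just the prompt.
  const reservedForResponse = 500

  return estimateTokens(prompt) + reservedForResponse <= contextSize
}

// Example: check a prompt against gpt-3.5-turbo's 4096-token context.
console.log(fitsInContext('Summarize the following article...', 4096))
```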
Choose the appropriate model depending on your specific use case and desired performance.
If you're looking for the most advanced and capable model, gpt-4 is your best bet. On the other hand, if you're looking for a capable model with a smaller footprint, gpt-3.5-turbo and text-curie-001 might be more suitable.
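For illustration only, the sketch below encodes this trade-off as a small lookup. The model names and context sizes come from the table above, but the helper itself is a hypothetical example, not part of the ChatBotKit SDK or API.

```typescript
// Context sizes from the table above, in tokens.
const MODELS = {
  'gpt-4': 8192,
  'gpt-3.5-turbo': 4096,
  'text-davinci-003': 4096,
  'text-curie-001': 2048,
  'text-algo-003': 8192,
  'text-algo-002': 4096,
  'text-algo-001': 4096,
} as const

type ModelName = keyof typeof MODELS

// Pick the smallest-context model that still fits the estimated prompt size,
// preferring smaller-footprint models when they are sufficient.
function pickModel(estimatedTokens: number, candidates: ModelName[]): ModelName | undefined {
  return candidates
    .slice()
    .sort((a, b) => MODELS[a] - MODELS[b])
    .find((name) => MODELS[name] >= estimatedTokens)
}

// Example: a ~3000-token conversation fits gpt-3.5-turbo but not text-curie-001.
console.log(pickModel(3000, ['text-curie-001', 'gpt-3.5-turbo', 'gpt-4']))
```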