Introducing DeepSeek V4 Pro and DeepSeek V4 Flash
ChatBotKit now supports DeepSeek V4 Pro and DeepSeek V4 Flash through the Vercel model catalog, expanding the DeepSeek lineup with two new options built for large-context reasoning, coding, and tool-driven workflows. Both models arrive with a 1-million-token context window and up to 384,000 output tokens, making them strong choices for teams that need to work across large repositories, long research documents, or extended multi-step agent sessions without constantly trimming context.
The split between the two models is practical and easy to understand. DeepSeek V4 Pro is the premium option for teams optimizing for deeper reasoning and higher-end performance on demanding production tasks, priced at $1.74 per million input tokens and $3.48 per million output tokens. DeepSeek V4 Flash is the faster and more cost-efficient option for high-throughput workloads, priced at $0.14 per million input tokens and $0.28 per million output tokens, while still preserving the same large context and output envelope.
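To make the cost gap concrete, here is a quick worked example in TypeScript using the published per-million-token rates. The model ID strings are placeholders for illustration, not confirmed ChatBotKit identifiers:

```typescript
// Published per-million-token rates (USD) for each model.
const rates = {
  'deepseek-v4-pro': { input: 1.74, output: 3.48 },
  'deepseek-v4-flash': { input: 0.14, output: 0.28 },
} as const;

type ModelId = keyof typeof rates;

// Cost in USD for a single request, given input and output token counts.
function requestCost(model: ModelId, inputTokens: number, outputTokens: number): number {
  const r = rates[model];
  return (inputTokens / 1_000_000) * r.input + (outputTokens / 1_000_000) * r.output;
}

// Example: a large-context request with 500K input tokens and 20K output tokens.
console.log(requestCost('deepseek-v4-pro', 500_000, 20_000).toFixed(4));   // "0.9396"
console.log(requestCost('deepseek-v4-flash', 500_000, 20_000).toFixed(4)); // "0.0756"
```

At these rates, the same large-context request runs roughly an order of magnitude cheaper on Flash, which is what makes the Pro/Flash split a per-workload decision rather than a one-time commitment.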
That combination gives builders more flexibility inside ChatBotKit. You can use DeepSeek V4 Pro when accuracy, sustained reasoning, and harder coding tasks matter most, then shift to DeepSeek V4 Flash for assistants, automations, and operational workflows where response speed and token efficiency are the tighter constraints. Because both models are available through the same platform surface, switching between them is straightforward as workload requirements change.
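One way to encode that split in application code is a small routing helper that maps workload types to a model. This is an illustrative sketch only: the workload categories and model ID strings below are assumptions, not ChatBotKit API surface.

```typescript
// Hypothetical per-workload model router.
type Workload = 'deep-reasoning' | 'coding' | 'assistant' | 'automation';

function pickModel(workload: Workload): string {
  switch (workload) {
    case 'deep-reasoning':
    case 'coding':
      // Accuracy and sustained reasoning matter most: use the premium model.
      return 'deepseek-v4-pro';
    case 'assistant':
    case 'automation':
      // Speed and token efficiency are the tighter constraints: use Flash.
      return 'deepseek-v4-flash';
  }
}
```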
DeepSeek V4 Pro and DeepSeek V4 Flash are available now in the ChatBotKit model picker and through the API.
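For API usage, a minimal sketch along the lines of the @chatbotkit/sdk conversation-completion pattern might look like the following. The exact method names and the model identifier strings here are assumptions, so confirm both against the ChatBotKit API documentation:

```typescript
// Sketch: selecting a DeepSeek V4 model through the ChatBotKit SDK.
// Client and method names follow the @chatbotkit/sdk conversation pattern;
// the model ID is a placeholder -- check both against the docs.
import { ChatBotKit } from '@chatbotkit/sdk';

const cbk = new ChatBotKit({ secret: process.env.CHATBOTKIT_API_SECRET! });

const { text } = await cbk.conversation.complete(null, {
  model: 'deepseek-v4-flash', // or 'deepseek-v4-pro' for harder tasks
  messages: [{ type: 'user', text: 'Summarize the attached design doc.' }],
});

console.log(text);
```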