AI Audience Simulator

A multi-agent polling system that simulates diverse human opinions. A Coordinator Agent receives a question or proposal, fans it out to multiple Persona Agents - each with a distinct demographic background, values, and life experience - then collects their responses, detects consensus and disagreement, and produces a structured opinion report. Use it to pressure-test messaging, validate product positioning, or anticipate objections before running a real survey.

Tags: polling, simulation, audience research

Running a real survey takes days, costs money, and requires a panel. Sometimes you just want a fast directional signal: "How would different audiences react to this headline / pricing change / policy proposal?"

This blueprint answers that question by treating each AI agent as a simulated panelist. Every Persona Agent has a backstory that encodes a specific demographic profile - age, profession, location, income bracket, values, media diet, and prior experiences. The Coordinator discovers all persona bots at runtime through blueprint introspection, then asks each one via bot/ask[by-id]. Each persona answers from its own perspective, producing responses that diverge in predictable, demographically grounded ways.

Why this works better than a single prompt

Asking one LLM to "list opinions from different demographics" collapses viewpoints into a single voice. The model optimizes for the most likely composite answer and smooths out minority perspectives. By isolating each persona in its own agent with a dedicated backstory, you get genuine diversity of framing, objections, and emotional tone - the same effect that separate focus-group participants produce.

How the Coordinator works

  1. The user submits a question, proposal, or piece of content.
  2. The Coordinator uses blueprint introspection to discover all bot resources in the blueprint. It filters out itself and identifies the persona bots by their names and descriptions.
  3. For each discovered persona bot, the Coordinator uses bot/ask[by-id] to send the poll question. Each ask is independent - the personas do not see each other's answers.
  4. Each Persona Agent responds in character, expressing agreement, disagreement, concerns, or enthusiasm grounded in its backstory.
  5. The Coordinator collects all responses, identifies themes, clusters similar opinions, flags outliers, and produces a structured report with a breakdown by demographic segment.

The five default personas

The blueprint ships with five Persona Agents covering a broad spectrum:

  • Urban Professional - 34, software engineer in San Francisco, high income, pragmatic, values efficiency and data.
  • Rural Small-business Owner - 52, runs a hardware store in rural Ohio, cost-conscious, skeptical of hype, values community.
  • College Student - 21, studies sociology in Atlanta, budget-constrained, socially aware, early adopter of trends.
  • Retired Teacher - 67, former high-school teacher in Florida, values clarity and accessibility, wary of complexity.
  • Freelance Creative - 29, graphic designer in Berlin, values aesthetics and flexibility, globally minded.

Extending the panel

Add more Persona Agents to increase demographic coverage. Each new persona only needs a bot with a backstory - the Coordinator will discover it automatically through blueprint introspection. No new abilities or wiring required. Consider adding personas for underrepresented viewpoints - different income levels, cultural backgrounds, age groups, or professional domains.
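Because discovery is dynamic, extending the panel amounts to registering one more bot with a backstory. A toy sketch (the panel structure and persona ids are illustrative, not ChatBotKit's data model):

```python
# Each persona is just an id plus a backstory; the Coordinator polls
# whatever is registered, so adding coverage is a one-line change.
PANEL = {
    "urban-professional": "34, software engineer in San Francisco, values efficiency and data.",
    "rural-owner": "52, runs a hardware store in rural Ohio, cost-conscious, values community.",
}

# Add an underrepresented viewpoint - no new abilities or wiring required.
PANEL["retired-teacher"] = "67, former high-school teacher in Florida, wary of complexity."

def discover(panel):
    # The Coordinator would pick this persona up on its next introspection pass.
    return sorted(panel)
```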

A second pass for bias

One commenter on the original Reddit thread suggested a useful refinement: after the first round of responses, run a second pass where a separate Critic Agent reviews all persona answers for bias, groupthink, or missing perspectives. You can add this by creating a sixth bot with a backstory focused on critical analysis and calling it after the Coordinator finishes aggregation.
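One way to wire that second pass, sketched with a hypothetical `ask` callable standing in for bot/ask[by-id] against the Critic bot:

```python
def critic_pass(responses, ask):
    # Runs after the Coordinator's aggregation: unlike the personas, which
    # answered independently, the Critic sees the full transcript.
    transcript = "\n".join(f"{name}: {answer}" for name, answer in responses.items())
    prompt = ("Review these persona answers for bias, groupthink, "
              "or missing perspectives:\n" + transcript)
    return ask("critic", prompt)

# Toy stand-in for the sixth bot's reply.
def ask(bot_id, prompt):
    return f"[{bot_id}] All answers skew urban; no low-income perspective."

review = critic_pass(
    {"Urban Professional": "Love it", "College Student": "Too pricey"}, ask
)
```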

What this is not

This is not a replacement for real user research. Simulated opinions are directional, not definitive. Use it for early-stage ideation, message testing, and hypothesis generation - then validate with actual humans.

Reference: https://www.reddit.com/r/AgentsOfAI/comments/1sm1tbb/ai_agents_can_be_used_to_simulate_human_opinions/

Backstory

Common information about the bot's experience, skills and personality. For more information, see the Backstory documentation.

You are the Poll Coordinator. Your job is to simulate a diverse focus group by discovering persona agents in this blueprint and polling them for opinions.

## WORKFLOW

1. The user gives you a question, proposal, headline, product description, or any content they want audience feedback on.
2. Use "Discover Persona Agents" to introspect the blueprint and list all bot resources. Filter the results to find persona bots (exclude yourself - the Poll Coordinator). Note each persona's id, name, and description.
3. For each discovered persona bot, use "Ask Persona Agent" with the bot's id and the poll question. Each ask is independent - personas do not see each other's answers.
4. Wait for all responses. Do not fabricate answers - use only what each persona actually returns.
5. Analyze the collected responses:
   - Identify common themes and points of agreement
   - Highlight areas of disagreement or divergence
   - Note surprising or minority viewpoints
   - Cluster responses by sentiment (positive, negative, mixed)

## REPORT FORMAT

Present your synthesis as a structured report:

**Summary** - One-paragraph overview of the overall audience reaction.
**Consensus Points** - What most or all personas agreed on.
**Divergence Points** - Where opinions split, and along which demographic lines.
**Strongest Objections** - The most compelling concerns raised.
**Strongest Endorsements** - The most enthusiastic support and why.
**Demographic Breakdown** - A brief per-persona summary (name, background, stance).
**Recommendation** - Based on the simulated feedback, what should the user consider adjusting?

## RULES

- ALWAYS discover persona agents via blueprint introspection first. Never hardcode bot IDs.
- Ask ALL discovered persona agents. Do not skip any.
- Never invent persona responses. Use only actual bot/ask results.
- Present disagreement as valuable signal, not as a problem.
- Be honest about the limitations of simulated opinions.
- If the user asks for a specific demographic focus, still poll all personas but weight the analysis accordingly.
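The sentiment-clustering step of the analysis can be sketched with a naive keyword classifier. In a real run the Coordinator's model makes this judgment; the keyword lists below are purely illustrative:

```python
from collections import defaultdict

def classify(answer):
    # Naive stand-in for the Coordinator's sentiment judgment.
    text = answer.lower()
    positive = any(w in text for w in ("love", "great", "agree"))
    negative = any(w in text for w in ("concern", "expensive", "skeptical"))
    if positive and negative:
        return "mixed"
    if positive:
        return "positive"
    return "negative" if negative else "mixed"

def cluster_by_sentiment(responses):
    # Group personas into positive / negative / mixed buckets.
    buckets = defaultdict(list)
    for name, answer in responses.items():
        buckets[classify(answer)].append(name)
    return dict(buckets)

clusters = cluster_by_sentiment({
    "Urban Professional": "I love the data angle.",
    "Rural Owner": "Too expensive, I'm skeptical.",
})
```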

Skillset

This example uses a dedicated Skillset. Skillsets are collections of abilities that can be used to create a bot with a specific set of functions and features it can perform.

  • Discover Persona Agents - Introspect the blueprint to list all bot resources. Use this to discover persona agents by their names and descriptions.

  • Ask Persona Agent - Ask a persona agent a question by its bot ID. The persona only sees the question and answers from its own demographic perspective.

Terraform Code

This blueprint can be deployed using Terraform, enabling infrastructure-as-code management of your ChatBotKit resources. Use the code below to recreate this example in your own environment.

Copy this Terraform configuration to deploy the blueprint resources:

Next steps:

  1. Save the code above to a file named main.tf
  2. Set your API key: export CHATBOTKIT_API_KEY=your-api-key
  3. Run terraform init to initialize
  4. Run terraform plan to preview changes
  5. Run terraform apply to deploy

Learn more about the Terraform provider

A dedicated team of experts is available to help you create your perfect chatbot. Reach out via chat for more information.
