Leo Poll

PhD – President Akendi UK

Research First, AI Second: Virtual Users Done Right

Most AI persona tools are built on assumptions. Ours isn’t, and that distinction matters more than you might think.

The market is flooded with virtual user tools that promise instant insights. Feed them demographic data, add some preferences, and you can chat with a simulated customer. Convenient, yes. Reliable? That’s another question entirely.

At Akendi we’ve developed something different, and the difference starts before a single line of AI code is written. Our Virtual Users & Customers are trained on actual research – user interviews, surveys, ethnographic studies, behavioural data. Not what we think users are like. What they’ve actually told us and what we’ve observed them doing.

The Research Foundation Changes Everything

Here’s why that matters. When you ask a typical AI chatbot to simulate a user, it invents plausible responses based on patterns in its training data. Those patterns might reflect users in California, or users from five years ago, or users of a completely different product. Plausible is not the same as accurate.

When you ask our Virtual Users a question, you’re querying an AI that’s been trained on the actual voices of people who use your service. The difference between “this is what users generally want” and “this is what your users specifically need” is the difference between generic advice and actionable insight.

Companies make decisions based on assumptions. Sales teams think they know what customers want. Development teams build what seems logical. Then reality hits and the gap between assumption and fact becomes expensive. Research eliminates that gap, and AI trained on research scales that knowledge across your entire organization.

Beyond the Chatbot

What makes our tool particularly useful isn’t just the foundation of research – it’s what you can do with it. You’re not limited to typing questions at a single persona. You can engage multiple users simultaneously, which is closer to how actual user research works. Different users have different needs, and seeing those differences in real time reveals insights you’d miss in sequential conversations.

More importantly, you can share webpages and applications with these personas and discuss them. Want to know if a checkout flow makes sense? Share it. Need feedback on a homepage layout? Walk through it with three different user types and watch how their responses differ. It’s the equivalent of having a focus group available at any moment, except this focus group is trained on months of actual research with your real users.

Making Research Available to Everyone

The traditional problem with user research is access. A research team conducts studies and produces reports, and those insights sit filed away in documents. Meanwhile, product managers make daily decisions that could benefit from those insights but don’t have time to dig through archived findings.

Our Virtual Users solve that accessibility problem. The knowledge from comprehensive research becomes available anytime, anywhere, to anyone in the organization who needs it. A developer wondering about edge cases can ask. A strategist exploring new features can test concepts. A support team member can understand underlying user frustrations without scheduling interviews.

This isn’t about replacing research – it’s about amplifying it. We continue conducting fresh research and updating the AI models as your services evolve and user needs shift. The virtual users stay current because they’re grounded in ongoing investigation, not static assumptions.

The Caution

I’ll be direct about the limitation. This only works if the research foundation is solid. Garbage in, garbage out applies here as forcefully as anywhere else. An AI trained on superficial data or biased samples will give you superficial, biased insights, delivered with the conviction that makes AI particularly dangerous when it’s wrong.

That’s why we insist on comprehensive UX research as the starting point. If you want to skip that step and just get some virtual users quickly, there are cheaper tools available. They’ll tell you what you want to hear, based on what generally makes sense. Ours will tell you what your users actually need, based on what they’ve shown us.

The Practical Difference

The shift this creates in organizations is subtle but significant. Teams stop debating opinions about users and start referencing evidence. “I think users would prefer…” becomes “When we tested this pattern with users, they said…”. That evidence is now available conversationally, in meetings, during development, whenever decisions get made.

It’s the democratization of research insights, but only because the research was done properly in the first place. Without that foundation, you’re just democratizing speculation.

If you want to see how this works in practice, book a demo and we’ll show you the difference between AI built on research and AI built on assumptions.


Key Points:

  • Virtual Users trained on actual research data, not assumptions
  • Multi-persona conversations reveal different user needs simultaneously
  • Share webpages and apps to get realistic feedback in real time
  • Research insights become accessible across the organization
  • Models updated with ongoing research as user needs evolve
  • Only works with comprehensive UX research as foundation

Leo Poll

PhD – President Akendi UK

Since 1996, Leo has been helping organizations provide an intentional customer experience while matching technical innovations to market needs. He uses the Akendi blog to share his thoughts about the challenges of addressing business problems from an end-user perspective and finding solutions that work for real people.

