UX Evals
Evaluate AI through real user experience with Outset UX Evals
Taking traditional usability into the world of tokens and non-deterministic outcomes to modernize UX evaluation for real AI-driven user experiences.
Conceptualized by Microsoft's Copilot team, powered by Outset.

Leading research teams are using Outset to rethink how AI should be evaluated
Use UX Evals to understand AI experiences — not just outputs
AI Experience Evaluation
Understand whether interactions with AI feel helpful and trustworthy to users through first-party user experience evaluation.
Model & System Comparisons
Compare AI systems and tools based on real user experience — not just benchmark scores.
Decision Support & Trust
Evaluate whether AI helps users make decisions, build confidence, or move forward.
Uncover Consensus Through Scale
Every AI interaction is unique. UX Evals reveal shared patterns by scaling user experience evaluation across hundreds of real conversations.
The advantages of UX Evals
As products move from pixels to tokens, every user's experience becomes unique to them.
Traditional AI evals and usability testing both fall short of deriving insights from first-person, multi-modal, multi-turn interactions.
UX Evals ground UX evaluation in how AI is actually experienced by users, across real conversations and real contexts.

UX Evals are built for real-world AI use
First-Person Conversations
Researchers bring their own goals, context, and questions — not pre-written prompts.

Multi-Turn Evaluation
Value is assessed across the entire conversation, not a single response.

Real-World Conditions
Prompts are messy, imperfect, and emotional, just like the people who write them.

Scaled Qualitative Signal
Patterns emerge by observing experience across many users, not isolated anecdotes.

Why UX Evals go beyond traditional approaches

Beyond traditional AI evals
Machine evals and human graders test whether AI works against predefined criteria.
UX Evals test whether users actually prefer and value the AI’s experience — as judged by the user.

Beyond usability testing
Usability testing was built for static pixels and flows.
UX Evals are built for conversations — where outcomes are non-deterministic and value is subjective.
Resources for researchers running UX Evals

White paper
Introducing UX Evals
The “why” behind the net-new methodology, written by the Microsoft team that developed it.
Jan 22, 2026


Guide
Outset UX Evals: A How To Guide
A step-by-step resource for researchers looking to adopt UX Evals.
Jan 22, 2026



Event
From Pixels to Tokens: A UX Evals Workshop
A hands-on workshop on implementing UX Evals, hosted by the Microsoft Copilot team that developed the methodology. RSVP now.
Feb 4, 2025 • 12-1pm PST (Virtual)
UX Evals FAQ
Answers to common questions about UX Evals.
What are UX Evals?
UX Evals are a methodology for evaluating AI through real user experience, conceptualized by Microsoft's Copilot team and powered by Outset. They take traditional usability into the world of tokens and non-deterministic outcomes.
How are UX Evals different from traditional UX evaluation?
Traditional usability testing was built for static pixels and flows, and machine evals test AI against predefined criteria. UX Evals are built for conversations, where outcomes are non-deterministic and value is judged by the user.
Why is user experience evaluation important for AI products?
As products move from pixels to tokens, every user's experience is unique to them. Benchmark scores alone can't tell you whether interactions feel helpful and trustworthy, or whether the AI helps users make decisions, build confidence, and move forward.
What makes an effective AI evaluation platform?
An effective platform grounds evaluation in how AI is actually experienced: first-person conversations where users bring their own goals and context, value assessed across the entire conversation, real-world conditions, and qualitative signal at scale.
How do UX Evals help teams evaluate AI at scale?
Every AI interaction is unique, so isolated anecdotes aren't enough. UX Evals reveal shared patterns by scaling user experience evaluation across hundreds of real conversations.








