🎯 Before You Hit Send: How AI Changes Decision Confidence in Professional Communication

Evaluative / Quant UXR experiment: how AI changes anxiety, uncertainty, confidence, and ownership right before users hit send.
Evaluative UXR · Experiment design · A/B testing · Metrics · Human–AI collaboration

A controlled, scenario-based experiment that isolates how AI is used (generate vs edit) and measures its impact on the decision state right before sending—the moment where users choose to send, revise again, or abandon.

Between-subjects · 3 conditions · Pre/Post measures · Actionable metrics

Decision: When does AI help users feel safe enough to send?
Comparison: No AI vs AI Generate vs AI Edit
Outcomes: Anxiety · Uncertainty · Confidence · Ownership
Product link: Edit cycles · time-to-send · abandonment

Research question

How does AI assistance change users’ decision state immediately before professional communication—and does that effect depend on how AI is used?

Why this is evaluative

Messaging copilots are often evaluated by text quality, but user behavior is driven by pre-send friction and perceived risk. This study operationalizes “helpfulness” as measurable shifts in decision state that predict sending behavior.

Conceptual model

[Figure] Incoming message → decision state before sending → condition (No AI vs AI Generate vs AI Edit) → psychological outcomes

Experimental design

Condition A — No AI

Baseline authorship. Participants write the reply themselves.

What it isolates: natural uncertainty + self-authored confidence.

Condition B — AI Generate

AI as external authority. Participants evaluate a fully AI-written reply they are asked to send without editing.

What it isolates: confidence boost vs ownership loss when control is low.

Condition C — AI Edit

Human + AI collaboration. Participants draft a reply, use AI to revise, then evaluate the final version.

What it isolates: whether collaboration preserves ownership while reducing uncertainty.

Scenarios

Participants respond to multiple realistic academic/workplace prompts (e.g., advisor criticism, co-author edits, disagreement, follow-up requests). Scenarios are designed to be high-stakes but common, where social risk is salient.

Measures (metrics-first)

Primary metrics (pre-send state)
  • Anxiety — social pressure / fear of negative evaluation
  • Uncertainty — “Is this reply appropriate?”
  • Confidence — “I’m ready to send”
  • Ownership — “This sounds like something I would genuinely say”
Two timepoints (clean causal story)
  • T1: immediately after reading the incoming message (initial reaction)
  • T2: right before sending (decision state after writing / AI assistance)
Key quantity: Δ (T2 − T1) by condition.
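The Δ (T2 − T1) key quantity can be sketched in code. This is a minimal illustration, not the study's actual pipeline: the long-format rows, the 1–7 rating values, and the column order are all invented for the example.

```python
# Hypothetical long-format responses: one row per participant x timepoint.
# Ratings, participant IDs, and column order are illustrative assumptions.
responses = [
    # (participant, condition, timepoint, anxiety, uncertainty, confidence, ownership)
    ("p1", "no_ai",    "T1", 5, 5, 3, 6),
    ("p1", "no_ai",    "T2", 4, 4, 5, 6),
    ("p2", "generate", "T1", 5, 5, 3, 6),
    ("p2", "generate", "T2", 2, 2, 6, 3),
]

def delta_by_condition(rows, metric_index):
    """Mean Δ (T2 - T1) per condition for one metric (0=anxiety ... 3=ownership)."""
    t1, t2, cond = {}, {}, {}
    for pid, c, tp, *metrics in rows:
        cond[pid] = c
        (t1 if tp == "T1" else t2)[pid] = metrics[metric_index]
    deltas = {}
    for pid in t1:
        deltas.setdefault(cond[pid], []).append(t2[pid] - t1[pid])
    return {c: sum(v) / len(v) for c, v in deltas.items()}

# Δ confidence (metric index 2) per condition:
print(delta_by_condition(responses, 2))  # → {'no_ai': 2.0, 'generate': 3.0}
```

Note that Δ ownership for the generate row goes negative here (6 → 3), which is exactly the confidence-vs-ownership trade-off the planned comparisons probe.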

Analysis plan (A/B mindset)

Planned comparisons
  • AI Generate vs No AI: Does AI-as-author reduce uncertainty / increase confidence, and at what cost to ownership?
  • AI Edit vs No AI: Does collaboration reduce anxiety/uncertainty while maintaining ownership?
  • AI Edit vs AI Generate: Does preserving user input improve authenticity without losing the confidence benefit?
Recommended reporting: effect sizes + confidence intervals (not only p-values) to support product decisions.
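As a sketch of that reporting style, the snippet below computes Cohen's d plus a percentile-bootstrap CI on the mean difference for one comparison (AI Edit vs No AI on Δ ownership). The Δ arrays and sample size are invented for illustration; they are not study data.

```python
import random
import statistics

def cohens_d(a, b):
    """Standardized mean difference (pooled SD) between two independent groups."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

def bootstrap_ci(a, b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean difference (a - b)."""
    rng = random.Random(seed)
    diffs = sorted(
        statistics.mean([rng.choice(a) for _ in a]) -
        statistics.mean([rng.choice(b) for _ in b])
        for _ in range(n_boot)
    )
    return diffs[int(alpha / 2 * n_boot)], diffs[int((1 - alpha / 2) * n_boot) - 1]

# Invented Δ-ownership scores (T2 - T1) for two conditions:
edit_delta  = [1, 2, 1, 0, 2, 1, 1, 2]   # AI Edit
no_ai_delta = [0, 1, 0, 1, 0, 0, 1, 1]   # No AI

d = cohens_d(edit_delta, no_ai_delta)
lo, hi = bootstrap_ci(edit_delta, no_ai_delta)
print(f"AI Edit vs No AI: d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting "d = 1.20, 95% CI [x, y]" rather than a bare p-value lets a product team judge whether the ownership gain is large enough to justify defaulting to the edit flow.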

What this enables for product teams

Choosing defaults

Decide whether the product should default to generate (speed/confidence) or edit (ownership/control).

Designing the UI

Make authorship legible: “your draft + AI polish,” diff views, or editable “tone controls” to protect ownership.

Evaluating success

Connect psych metrics to behavioral KPIs: edit cycles, time-to-send, abandonment, and user-reported trust.
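Those behavioral KPIs can be derived from instrumented pre-send sessions. The event schema below (event names, timestamps in seconds since open) is a hypothetical sketch, not an existing product log format.

```python
# Hypothetical per-session event logs: (event_name, seconds since open).
sessions = {
    "s1": [("open", 0), ("edit", 20), ("edit", 55), ("send", 90)],
    "s2": [("open", 0), ("edit", 30), ("abandon", 120)],
}

def kpis(events):
    """Edit cycles, time-to-send (None if never sent), and an abandonment flag."""
    names = [n for n, _ in events]
    return {
        "edit_cycles": names.count("edit"),
        "time_to_send": next((t for n, t in events if n == "send"), None),
        "abandoned": "abandon" in names,
    }

for sid, events in sessions.items():
    print(sid, kpis(events))
# s1 sends after 2 edit cycles; s2 abandons, so time_to_send stays None
```

Joining these per-session KPIs to the psych Δ scores by condition is what lets the team test whether, say, higher ownership actually predicts fewer edit cycles and less abandonment.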