Decision-Criteria Elicitation Before Solutioning

By Grais Research Team, Communication Science

Teams often think they have a persuasion problem when they actually have a criteria problem.

In early conversations, people frequently request a recommendation before they have agreed on what a good outcome means. They ask for a solution, but the decision frame is incomplete: priorities are unstated, tradeoffs are hidden, and constraints are only partially visible.

When we answer too quickly, we create temporary momentum with fragile commitment. The conversation looks efficient in the moment, then slows down later through rework, second-guessing, and reversals.

A stronger sequence is to define decision criteria first and propose options second.

This article introduces a practical protocol for decision-criteria elicitation before solutioning. It combines evidence from shared decision-making and motivational interviewing research with guidance on plain-language communication, risk communication, and lifecycle governance for AI-assisted workflows [1] [2] [3] [4] [5] [6].

Quick Takeaways

  • Recommendation quality depends on criteria quality, not only answer quality.
  • A usable first exchange names decision purpose, hard constraints, and acceptable risk.
  • Question-prompt methods improve participation when stakes and uncertainty are high.
  • Plain-language compression reduces interpretation drift across stakeholders.
  • Protocol discipline prevents fast-but-fragile commitments.

The Core Mechanism: Why Criteria Come Before Advice

Most communication failures in complex decisions are not caused by missing options. They are caused by missing filters.

If two people evaluate the same option using different criteria, disagreement is guaranteed even when both are competent and aligned in good faith. One side optimizes speed, the other side optimizes reliability. One side protects short-term budget, the other side protects downstream risk. Without explicit criteria, each recommendation sounds reasonable while still failing the actual decision.

The shared decision-making literature captures this directly: higher-quality decisions emerge when the conversation makes the choice explicit, compares alternatives, and surfaces preferences before commitment [1]. In practice, this means advice should be downstream of criteria elicitation, not a substitute for it.

Motivational interviewing findings reinforce the same operational lesson. Outcomes improve when communication elicits a person’s own reasons and priorities rather than imposing an external script [2]. The process matters because autonomy and clarity interact: people follow through more consistently on decisions they can explain in their own terms.

So the sequence is straightforward:

  1. Define what success requires.
  2. Define what failure must avoid.
  3. Only then compare solution paths.

What Research Suggests About Better Early Exchanges

Three evidence-backed behaviors are especially actionable:

  1. Structured prompting increases useful participation. Question prompt list interventions tend to raise question-asking and communication quality in high-stakes contexts [3].
  2. Plain-language design improves comprehension under pressure. Public-health communication guidance emphasizes organizing around what people need to know and do, in language they can act on quickly [4].
  3. Decision-enabling communication is an operational capability. WHO’s risk communication framing treats timely, understandable, context-fit information as essential to protective action [5].

The synthesis: robust early conversations are not naturally emergent. They are designed interactions with clear information structure.

The Decision-Criteria Elicitation Protocol

Use this protocol when a request is broad, ambiguous, or emotionally charged.

Step 1: Frame the decision object

Name the decision in one sentence before discussing methods.

Example:

"Before I recommend an approach, let’s define the decision this recommendation must support."

This shifts the exchange from answer-hunting to decision design.

Step 2: Surface non-negotiables

Ask for hard constraints first.

  • Time boundary: what deadline cannot move?
  • Resource boundary: what budget or capacity is fixed?
  • Policy boundary: what rules or obligations cannot be violated?

Non-negotiables determine feasibility. If they stay implicit, solution quality is mostly luck.

Step 3: Define success criteria as observable signals

Replace vague goals with measurable indicators.

Weak:

"We need this to work better."

Stronger:

"We need fewer stalled decisions, faster first commitment, and fewer rollback requests."

Observable criteria reduce interpretive conflict later.

Step 4: Elicit acceptable risk and tradeoff tolerance

Every decision trades one value against another. Make that explicit.

Ask:

  • Which matters more right now: speed, certainty, or reversibility?
  • What downside is unacceptable?
  • What downside is tolerable if upside is strong?

This step prevents false consensus where people appear aligned but evaluate outcomes differently.

Step 5: Generate bounded options

Only after the criteria are explicit should you generate two or three options and map each one to those criteria.

For each option, state:

  • expected benefit,
  • major risk,
  • required effort,
  • best-fit conditions.

This matches the structure described in shared decision models: alternatives become comparable when evaluation criteria are shared [1].
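As a minimal sketch of this option-to-criteria mapping (all option names, criteria, scores, and weights below are hypothetical examples, not a prescribed scoring system):

```python
# Minimal sketch of Step 5: map each option to the shared criteria.
# Option names, criteria, and 1-3 ratings are hypothetical examples.

CRITERIA = ["speed", "reliability", "reversibility"]

options = {
    "quick-fix": {"speed": 3, "reliability": 1, "reversibility": 3},
    "redesign":  {"speed": 1, "reliability": 3, "reversibility": 1},
    "pilot":     {"speed": 2, "reliability": 2, "reversibility": 3},
}

def rank(options, weights):
    """Score each option as the weighted sum over shared criteria."""
    scores = {
        name: sum(weights[c] * ratings[c] for c in weights)
        for name, ratings in options.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: suppose Steps 2-4 revealed that reliability matters most.
weights = {"speed": 1, "reliability": 3, "reversibility": 2}
print(rank(options, weights))
# -> [('pilot', 14), ('quick-fix', 12), ('redesign', 12)]
```

The point of the structure, not the arithmetic, is what matters: once the weights are elicited and shared, the options become comparable on the same terms.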

Step 6: Force restatement before close

Ask the counterpart to restate the selected option and rationale in their own words.

If restatement is fuzzy, criteria are still unstable. Do not move to execution yet.

Step 7: Set a review trigger

End with one progress signal and one pivot trigger.

  • Progress signal: what indicates the decision is working?
  • Pivot trigger: what evidence requires changing course?
  • Review point: when do we check?

This makes the decision adaptive without making it unstable.

Common Edge Cases and How to Handle Them

Edge Case A: "Just tell me what to do"

When someone asks for direct advice immediately, treat it as urgency plus uncertainty, not resistance.

Use a compact criteria check:

  1. one desired outcome,
  2. one hard constraint,
  3. one unacceptable downside.

Then provide options. This preserves speed while avoiding blind recommendations.

Edge Case B: Multi-stakeholder conversations

Different stakeholders often optimize different objectives. One wants predictability, another wants innovation, another wants low short-term cost.

Use a criteria board with three columns:

  • shared criteria,
  • stakeholder-specific criteria,
  • conflict criteria.

Then evaluate options against all three columns. This keeps disagreement in the open where it can be negotiated.
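A minimal sketch of such a criteria board as a data structure (the stakeholder names and criteria are hypothetical examples):

```python
# Minimal sketch of the three-column criteria board (Edge Case B).
# Stakeholder names and criteria are hypothetical examples.

criteria_board = {
    "shared": ["stays within Q3 budget", "no customer-facing downtime"],
    "stakeholder_specific": {
        "operations": ["predictable release cadence"],
        "product": ["room for experimentation"],
    },
    "conflict": ["innovation pace vs. change-control strictness"],
}

def evaluate(option_notes, board):
    """Check an option against shared criteria; unresolved conflicts stay visible."""
    report = {"meets_shared": [], "open_conflicts": list(board["conflict"])}
    for criterion in board["shared"]:
        report["meets_shared"].append((criterion, criterion in option_notes))
    return report

print(evaluate({"stays within Q3 budget"}, criteria_board))
```

The design choice worth keeping is that conflict criteria are carried through evaluation rather than dropped, so disagreement stays negotiable instead of hidden.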

Edge Case C: High emotion, low articulation

When stress is high, language precision drops.

Start with acknowledgment, then simplify the prompt structure:

"Let’s make this manageable. What outcome matters most today, and what can we not afford to get wrong?"

Plain-language sequencing matters here because cognitive load is already elevated [4].

Failure Modes

Watch for these recurrent breakdowns:

  • Criteria inflation: listing too many criteria makes comparison impossible.
  • Criteria drift: criteria shift mid-conversation without explicit acknowledgment.
  • Surrogate certainty: strong tone used to hide weak decision framing.
  • Premature optimization: detailed tactics before feasibility checks.

Protocol quality is not "more questions." It is asking the minimum set of high-yield questions in the right order.

Implementation Example

A team receives the request: "We need to improve conversation outcomes this quarter."

Fast but weak response:

"Use this script and enforce it across all channels."

Protocol-based response:

"Before selecting a script, which outcome has priority this quarter: faster commitments, fewer escalations, or higher completion reliability? What constraint is fixed? Which failure is unacceptable?"

The criteria discussion reveals the real objective is fewer reversals after initial agreement, not simply faster first replies. That changes the intervention from generic scripting to decision-checkpoint design, including explicit tradeoff statements and restatement checks.

The visible difference is not rhetorical style. It is decision architecture.

Lab Appendix: How We Measure This (Reproducible)

Run an A/B operational comparison on matched conversation cohorts:

  • Variant A: immediate recommendation.
  • Variant B: criteria-first protocol.

Track:

  • time-to-clear-decision,
  • restatement clarity score,
  • first-action completion rate,
  • reversal rate within the review window,
  • escalation rate for unresolved ambiguity.

Use one rubric for both human and AI-assisted conversations. For AI-supported workflows, keep governance explicit: map risks, measure decision quality indicators, and manage remediation when failure modes repeat [6].
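The per-variant metrics above can be computed from conversation records with a straightforward aggregation. The sketch below assumes a simple flat record format; the field names and sample data are hypothetical:

```python
# Minimal sketch of the lab-appendix metrics, aggregated per variant.
# Field names and sample records are hypothetical.

from statistics import mean

conversations = [
    {"variant": "A", "minutes_to_decision": 12, "first_action_done": True,  "reversed": True},
    {"variant": "A", "minutes_to_decision": 9,  "first_action_done": False, "reversed": False},
    {"variant": "B", "minutes_to_decision": 15, "first_action_done": True,  "reversed": False},
    {"variant": "B", "minutes_to_decision": 14, "first_action_done": True,  "reversed": False},
]

def variant_metrics(records, variant):
    """Aggregate the tracked indicators for one variant's cohort."""
    rows = [r for r in records if r["variant"] == variant]
    return {
        "time_to_clear_decision": mean(r["minutes_to_decision"] for r in rows),
        "first_action_completion_rate": mean(r["first_action_done"] for r in rows),
        "reversal_rate": mean(r["reversed"] for r in rows),
    }

for v in ("A", "B"):
    print(v, variant_metrics(conversations, v))
```

Restatement clarity and escalation rate would be scored by the shared rubric and added as further fields in the same record format.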

References

  1. Makoul G, Clayman ML. An integrative model of shared decision making in medical encounters. Patient Education and Counseling. 2006.
  2. Rubak S, Sandbaek A, Lauritzen T, Christensen B. Motivational interviewing: a systematic review and meta-analysis. British Journal of General Practice. 2005.
  3. Wang SJ, Hu WY, Chang YC. Question prompt list intervention for patients with advanced cancer: a systematic review and meta-analysis.
  4. Centers for Disease Control and Prevention. Plain Language Materials & Resources.
  5. World Health Organization. Risk communication and community engagement.
  6. National Institute of Standards and Technology. AI Risk Management Framework.
