Stop Scoring, Start Seeing
RICE, WSJF, and weighted priority matrices feel rigorous but produce numbers that tell you what you already decided. Visual roadmapping offers something more honest.
At some point almost every product team adopts a scoring framework. RICE. WSJF. A custom spreadsheet with weighted columns for impact, effort, strategic fit, and confidence. The logic is appealing: remove politics from prioritization by making it quantitative.
It doesn’t work. Not because the frameworks are stupid, but because they’re solving the wrong problem.
What scoring actually does
Walk through how a RICE score gets built. Reach: how many users will this affect? Impact: how much will it move the metric? Confidence: how sure are you? Effort: how long will it take?
Every one of these is an estimate. Reach is a guess, often anchored to what the feature lead thinks leadership wants to hear. Impact is a guess backed by whatever user research was done, which may be six months old and n=12. Confidence is a measure of how much you don’t know, which is itself largely unknown. Effort is a guess from engineers asked for a number in a meeting.
You multiply three guesses together, divide by the fourth, and get a number with two decimal places. Then you sort by that number and call the result a prioritized backlog.
The number doesn’t tell you what to build. It tells you what the team had already decided, expressed in a format that’s harder to argue with.
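For concreteness, the standard RICE formula is (Reach × Impact × Confidence) / Effort. A minimal sketch of the ritual, with made-up feature names and guesses standing in for real estimates:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # users affected per quarter (a guess)
    impact: float      # 0.25-3 scale (a guess)
    confidence: float  # 0-1 (a guess about the other guesses)
    effort: float      # person-months (a guess)

    def rice(self) -> float:
        # Three guesses multiplied, one divided, reported with
        # more precision than any input deserves.
        return self.reach * self.impact * self.confidence / self.effort

backlog = [
    Feature("export to CSV", reach=4000, impact=1.0, confidence=0.8, effort=2),
    Feature("onboarding revamp", reach=9000, impact=2.0, confidence=0.5, effort=6),
]
for f in sorted(backlog, key=Feature.rice, reverse=True):
    print(f"{f.name}: {f.rice():.2f}")
```

The sort is deterministic and tidy; the inputs are not.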
The false precision problem
The danger isn’t that the score is sometimes wrong. The danger is that it feels authoritative. A score of 84.3 versus 61.7 looks like a significant difference. It probably isn’t. Both numbers are within the margin of error of the underlying estimates, which nobody calculated.
When teams trust the score, they stop having the harder conversation: do we actually believe this feature will drive the outcome we care about, and why? That conversation is uncomfortable. The spreadsheet makes it avoidable. That’s the problem.
What you lose when you stop looking
Priority frameworks also have a visual problem: they produce ranked lists. A ranked list shows you which item is number one and which is number seventeen. It doesn’t show you the shape of your roadmap.
Some things that a list hides:
Cluster imbalance. You might have twelve features aimed at retention and two aimed at acquisition. A ranked list doesn’t surface this. A canvas where you group by theme makes it obvious.
Dependency chains. Item four might depend on items nine and eleven. A list can’t represent this. The score doesn’t care.
Load-bearing features. Some items unblock three other items. Some items are isolated. Scoring treats them identically. Spatial arrangement on a canvas makes load-bearing nodes obvious — they’re the ones with many outgoing edges.
Strategic holes. Sometimes the most important thing to prioritize is the gap — the user problem nobody has put on the roadmap yet. A list can only sort what’s already in it.
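The load-bearing idea above can be made concrete: represent the roadmap as a directed graph of "unblocks" edges and look at out-degree. The feature names here are hypothetical; the point is only that this structure is invisible to a score.

```python
# Hypothetical roadmap: each key maps a feature to the items it unblocks.
unblocks = {
    "auth refactor": ["SSO", "API keys", "audit log"],
    "SSO": [],
    "API keys": ["partner integrations"],
    "audit log": [],
    "partner integrations": [],
    "dark mode": [],  # isolated: unblocks nothing
}

# Load-bearing nodes are the ones with many outgoing edges.
out_degree = {item: len(deps) for item, deps in unblocks.items()}
load_bearing = max(out_degree, key=out_degree.get)
print(load_bearing, "unblocks", out_degree[load_bearing], "items")
```

A RICE score for "auth refactor" and "dark mode" treats them as interchangeable rows; the graph shows one of them is holding up three others.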
What visual prioritization looks like
The alternative isn’t to abandon structure. It’s to prioritize with your eyes.
When every feature is a card on a canvas, you’re doing something cognitively different from sorting a spreadsheet. You’re comparing wholes, not numbers. You’re asking: does the spatial arrangement of this quarter’s work tell a coherent story? Are the high-priority items actually high-leverage? Does cutting the bottom third change the shape of anything important?
You can move cards. You can cluster related items. You can draw the dependency edges and see what’s load-bearing. You can mark things “scope” (under consideration) versus “todo” (committed) and see the ratio.
This is what prioritization actually requires: a spatial, relational view of the work — not a leaderboard.
When scores are useful
This isn’t an argument against all quantitative thinking. Scoring frameworks help in specific situations:
- When you have a large, mostly-independent backlog and need a fast first pass to cull obvious low-priority items
- When you need to document prioritization decisions for stakeholders who require a paper trail
- When the team is new and needs a shared starting point for calibrating what “high impact” means
In these cases, use a framework, but treat it as a conversation starter, not an answer. Build the score, then look at the results and ask: does this match what we’d decide if we were just being honest?
If yes, the score was a useful sanity check. If no — and it often won’t — the conversation that follows is the actual prioritization. The score just surfaced a disagreement.
Put the work where you can see it
The best argument for visual roadmapping isn’t aesthetic. It’s epistemic. You cannot reason well about things you cannot see. When the full scope of your quarter is visible on a canvas — with statuses, dependencies, and clusters — you are in a better position to make good decisions than when you are reading a sorted list of numbers.
Stop optimizing the score. Look at the map.