Justin McKelvey
Fractional CTO · 15 years, 50+ products shipped
The Prioritization Formula: A Product Prioritization Framework for Solo Founders
TL;DR: For Founders Re-Litigating Priorities Every Monday
If your team relitigates priorities every Monday, your prioritization framework is broken — or you don't have one at all. Here's the formula: (Customer Fit + Metric Impact + Validation Confidence + Speed) − Cost = Priority Score. Five inputs, scored 0-20 each, combined into a single 0-100 priority score. You can run it on your whole roadmap in 10 minutes. It exists because every prioritization framework you've been handed — RICE, ICE, Kano, MoSCoW, Lean Prioritization — was built for a product team with hundreds of thousands of users, not a solo founder with 50. The Prioritization Formula takes the best of all five and recalibrates them for the stage you're actually at.
The 5 Prioritization Frameworks Every Founder Gets Recommended (And Why Each One Fails Solo Founders)
If you've asked any product person, podcast, or AI assistant "how do I prioritize features?", you've heard the same five answers. Here they are, briefly, with the specific way each one breaks down at the solo-founder stage.
ICE Score (Impact, Confidence, Ease). Score each feature 1-10 on three dimensions — how much value it delivers (Impact), how sure you are (Confidence), how fast it ships (Ease). Multiply for a single number. Fast and easy to apply. The breakdown: ICE doesn't account for whether the feature serves your specific customer or works against your strategic focus. A feature with high impact for the wrong customer scores high but should be a cut.
RICE Score (Reach, Impact, Confidence, Effort). The RICE framework adds Reach — how many users will be affected — to ICE. Reach × Impact × Confidence ÷ Effort. The breakdown: solo founders don't have reliable reach data. With 50 users, every reach estimate is noise. RICE works at scale but makes solo-founder scoring feel scientific when it's actually guesswork.
Kano Model. Categorizes features as Basic Needs (must-haves), Performance Features (more is better), Delighters (unexpected wins), or Indifferent (don't bother). Categorical, not numerical. The breakdown: Kano is great for thinking about features but bad for ordering them. You end up with three Performance Features and four Delighters and no rule for which to ship first.
MoSCoW (Must-have, Should-have, Could-have, Won't-have). Sort features into four buckets. Simplest of the bunch. The breakdown: MoSCoW makes you feel decisive without forcing real evidence. Half a roadmap ends up labeled "Must-have" because nobody wants their feature in "Could-have." It works for project scoping; it fails as a prioritization framework.
Lean Prioritization (Validation First). Build whatever tests your riskiest assumption first. Don't score features — score hypotheses. The breakdown: Lean is the right mindset but doesn't help when you have 12 features that all test reasonable hypotheses. You still need a way to choose between them.
All five are correct in some context. None of them are calibrated for the solo founder with 50 users, a product they shipped 3 months ago, and a roadmap of 14 reasonable-sounding features.
Feature prioritization at the solo-founder stage isn't the same activity as feature prioritization at a product team with PMs and analytics. The inputs are different, the data is thinner, and the cost of a wrong call is more concentrated. A scoring framework built for the second stage produces noise at the first.
What We Actually Need: A 6th Option Built for Solo Founders
The Prioritization Formula is the synthesis. It takes ICE's speed, replaces RICE's broken Reach with Customer Fit (the same idea calibrated for small numbers), borrows Kano's user-satisfaction logic to define Metric Impact, takes Lean's validation discipline as a standalone input, and borrows MoSCoW's decisiveness by enforcing a minimum threshold below which features are simply cuts.
The formula is intentionally simple. Five inputs, each scored 0-20, combined into a 0-100 priority score. Anything below 50 is a cut. Anything 50-69 is "park, revisit." 70-84 is "ship this quarter." 85+ is "ship next."
The Five Inputs, Defined
Input 1 — Customer Fit (0-20). Does this feature serve your one core customer? Score 20 if it directly serves them. 10 if it serves them indirectly (helps their workflow but isn't core to the value). 0 if it serves a different customer segment you're not currently pursuing. This input replaces RICE's Reach. With 50 users, you don't measure how many — you measure whether the right ones get value.
Input 2 — Metric Impact (0-20). Pick your one metric — activation, retention, revenue, whatever the stage demands. Score how much this feature plausibly moves it. 20 if you have direct evidence (similar feature in an adjacent product, prototype data, customer interviews). 15 if the logic is strong but unproven. 10 if it's a defensible bet. 5 if it's a guess. 0 if it doesn't move the metric at all (a 0 here usually means you're looking at the wrong feature, not the wrong score).
Input 3 — Validation Confidence (0-20). How sure are you the approach works the way you think it works? 20 if you've tested it, even crudely. 15 if you've talked to users who validated the design. 10 if it's standard practice in your space. 5 if it's a strong hypothesis but unvalidated. 0 if you're guessing. Most founders' first instinct is to score everything 15+ here. Don't. Score honestly. This is the input that catches the "I'm sure this will work" features that won't.
Input 4 — Speed (0-20). How fast can you ship it? 20 if it's a 1-3 day build. 15 if it's a week. 10 if it's 2-3 weeks. 5 if it's a month or more. 0 if you can't estimate (which means you don't understand the work yet — go define it before scoring). Speed matters more than founders admit because slow features delay the next experiment.
Input 5 — Cost (0-20, subtracted). Real cost beyond build time. Score 20 if the feature locks you in (data shape changes, deep integrations, training users to expect behavior). 15 if it adds substantial complexity. 10 if it's moderate. 5 if it's mostly contained. 0 if it's trivially cheap to remove or ignore. Subtract this from the sum of the other four. Cost is the input that kills the most "looks like an easy win" features once you score it honestly.
Total: (Customer Fit + Metric Impact + Validation Confidence + Speed) − Cost + 20. Without the constant the raw range is -20 to 80; the +20 keeps every score positive and on the 0-100 scale the thresholds above use. The constant never changes the relative order, which is what actually matters.
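If your roadmap lives in code rather than a spreadsheet, the formula fits in a few lines. Here's a minimal Python sketch of the scoring and the thresholds above; the function names are mine, not part of the framework.

```python
def priority_score(customer_fit, metric_impact, validation_confidence, speed, cost):
    """Each input is scored 0-20. Cost is subtracted; the +20 keeps the result on a 0-100 scale."""
    for value in (customer_fit, metric_impact, validation_confidence, speed, cost):
        if not 0 <= value <= 20:
            raise ValueError("every input must be scored 0-20")
    return (customer_fit + metric_impact + validation_confidence + speed) - cost + 20


def bucket(score):
    """Map a 0-100 priority score to the action thresholds above."""
    if score >= 85:
        return "ship next"
    if score >= 70:
        return "ship this quarter"
    if score >= 50:
        return "park, revisit"
    return "cut"
```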
The Copy-Paste Scoring Table
Here's the table to use. Copy it into a spreadsheet, paste your features in column 1, score honestly, and compute the total with the formula above (sum the first four inputs, subtract Cost, add 20). Sort by total descending. Ship the top 1-3 in the next sprint.
| Feature | Customer Fit (0-20) | Metric Impact (0-20) | Validation Confidence (0-20) | Speed (0-20) | Cost (0-20, subtracted) | Total (0-100) |
| --- | --- | --- | --- | --- | --- | --- |
| Feature 1 | 20 | 15 | 10 | 15 | 5 | 75 |
| Feature 2 | 10 | 20 | 5 | 20 | 10 | 65 |
| Feature 3 | 20 | 20 | 15 | 10 | 5 | 80 |
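If your roadmap lives in a file instead of a spreadsheet, the same score-and-sort step is a few lines of code. A sketch reusing the `priority_score` and `bucket` helpers from the earlier snippet, with the three placeholder rows from the table:

```python
features = [
    # (name, customer_fit, metric_impact, validation_confidence, speed, cost)
    ("Feature 1", 20, 15, 10, 15, 5),
    ("Feature 2", 10, 20, 5, 20, 10),
    ("Feature 3", 20, 20, 15, 10, 5),
]

# Score every feature, sort descending, and print the action for each.
scored = sorted(
    ((name, priority_score(*inputs)) for name, *inputs in features),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in scored:
    print(f"{name}: {score} -> {bucket(score)}")
# Feature 3: 80 -> ship this quarter
# Feature 1: 75 -> ship this quarter
# Feature 2: 65 -> park, revisit
```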
The pattern you'll see almost immediately: features the team is excited about score worse than features that are obvious-but-boring. That's not a bug. The formula is doing the job. Boring features that serve the right customer with high validation confidence beat exciting features that don't, every time.
Three Worked Examples
Example 1: A founder building a directory tool for tennis players. Roadmap item: "Add player skill-level filtering."
Customer Fit: 20 (tennis players directly want this). Metric Impact: 15 (filtering plausibly improves successful match rate; users in interviews keep mentioning skill mismatch as the reason past matches didn't repeat). Validation Confidence: 15 (users have explicitly asked for it in 6 interviews). Speed: 15 (1-week build). Cost: 5 (low — straightforward filter, easy to remove). Total: (20 + 15 + 15 + 15) − 5 + 20 = 80. Ship it.
Example 2: A founder building a lead-qualification tool. Roadmap item: "Build admin dashboard for managing API keys."
Customer Fit: 10 (real estate teams need it eventually but it's not what they're paying for). Metric Impact: 5 (doesn't move qualification accuracy or volume). Validation Confidence: 10 (standard pattern but no direct evidence it matters at this stage). Speed: 10 (2-3 weeks). Cost: 10 (adds an entire admin surface to maintain). Total: (10 + 5 + 10 + 10) − 10 + 20 = 45. Cut. The founder will think about this feature 8 more times before realizing the formula was right; the cost of building it would have been 3 weeks of foregone customer-facing work.
Example 3: A founder building a SaaS product. Roadmap item: "Add a referral program."
Customer Fit: 15 (existing customers benefit). Metric Impact: 20 if you have evidence referrals work in your space, 10 if you're guessing. Let's say 10 (no evidence yet — first SaaS product). Validation Confidence: 5 (pure hypothesis). Speed: 10 (2-3 weeks). Cost: 10 (referral logic adds complexity to the user model). Total: (15 + 10 + 5 + 10) − 10 + 20 = 50. Right on the park line: don't build it yet. Rerun it as a Lean experiment first and fake-door test a referral program with a landing page before building.
How This Compares to Other Feature Prioritization Frameworks
Side-by-side against the named frameworks: ICE is faster but doesn't account for customer fit. RICE is more rigorous but requires reach data solo founders don't have. Kano is great for thinking about features categorically but doesn't order them. MoSCoW is decisive but doesn't force evidence. Lean Prioritization is the right mindset but skips the scoring step. The Prioritization Formula is the synthesis: it preserves Lean's validation discipline as a standalone input, replaces RICE's broken Reach with Customer Fit at small-team scale, and produces a single number you can sort by.
Common Mistakes
Scoring optimistically when you're attached to a feature. Everyone does this. The fix is the blind-scoring trick from the FAQ — score every input before computing the total. Better fix: have someone else score the same features independently and look at the disagreement.
Confusing Customer Fit with "could be useful." Customer Fit is binary at the high end. Either it serves the core customer directly or it doesn't. "Some users might want this" scores 5-10, not 15-20. The line catches a lot of well-meaning feature requests that don't belong on this product.
Underweighting Cost. Cost is the input founders most often wave their hand at. The actual cost of a feature isn't build time — it's the years of carrying it, the surface area in onboarding, the bugs in adjacent features when this one changes, the cognitive load on the team. Score Cost honestly. It's almost always higher than your gut estimate.
Treating the score as the answer. The score is an input to the decision, not the decision itself. If a 60-point feature feels wrong to ship, the formula is telling you one of the inputs is miscalibrated. Find the input you don't believe and re-score it. The formula is a forcing function for honesty, not a replacement for judgment.
If You Want Help Calibrating the Formula
Most teams need 1-2 cycles of using the formula before they trust the inputs. The first round usually surfaces 2-3 inputs that need recalibration for your specific business. If you want a faster path, book a strategy call and I'll run your current roadmap through the formula with you, score by score. It's the same exercise that drives Month 2 of the McKelvey Method.
If your roadmap has more features than feels reasonable, the cuts come first. Read The Clarity Filter and run that before scoring. The formula is for choosing among the survivors. And if you're earlier than this — still working out which MVP to ship — start with the 6-week MVP framework. Prioritization is a luxury you earn after you've shipped.
Frequently Asked Questions
- How is the Prioritization Formula different from RICE?
- RICE is built for product teams with reach data — they have hundreds of thousands of users and can estimate how many would touch a given feature. Solo founders almost always have wrong reach numbers because their user base is too small. The Prioritization Formula replaces Reach with Customer Fit (does this serve our one core customer?) and adds Validation Confidence as a separate input. The output is a single 0-100 score per feature you can sort by, like RICE's single number, but the inputs are calibrated for stages where you have 50 users instead of 50,000.
- How do I score features honestly when I'm biased toward what I've already built?
- The honest-scoring trick is to score every feature blind first — write the inputs without seeing the running total. Then add up. Most founders' first instinct is to back into the score they want; doing the inputs blind makes that harder. A second trick: have a peer score the same feature independently. If their score is more than 15 points off yours, the gap is usually you flattering a feature you're attached to.
- Who should be doing the scoring — founder or team?
- Solo founders should do it themselves but pressure-test the inputs with one or two trusted advisors. Small teams (2-5 people) should have whoever owns the feature score it, then have a second person score it independently. The score isn't the point — the disagreement is. Where two scorers diverge, you've found the part of the feature that's underspecified or unevidenced. Resolve that gap and the score gets cheap to settle.
- How often should I rescore features?
- Rescore the top 5-10 features on the roadmap once a month. The whole list once a quarter. Inputs change — what you learned from users, what your metrics did, what the market did. A score from three months ago is stale. The cadence isn't optional; an old prioritization list quietly becomes wrong before you notice.
- When does the formula break?
- Three cases. First: when a single feature is genuinely existential (security, compliance, something your top customer demanded). Score it anyway, but trust your judgment over the score. Second: when you're at a strategic inflection point — entering a new market, repositioning the product. The score assumes your strategy is fixed; if it's not, the score is meaningless. Third: when the team has lost trust in the inputs. If people are gaming the scores, fix the trust issue, not the formula.
- How do I weight the inputs?
- Default to equal weights for the first quarter — 5 inputs, 20 points each. After 90 days of using the formula, look at which features you scored high and shipped: did the high-scorers actually move the metric? If yes, your weights are calibrated. If not, the input that's least predictive is the one to weight down. Most solo founders end up over-weighting Customer Fit and under-weighting Cost, because cost is harder to estimate accurately. (There's a short sketch of a weighted variant after this FAQ if you want the mechanics.)
- Can I use this with a team that's already on RICE or another framework?
- Yes — but don't switch frameworks just to switch. The Prioritization Formula is most useful when the existing framework is producing bad decisions. Symptoms: the team always agrees on the score but disagrees on what to ship next; high-scoring features keep failing; the scoring takes 3+ hours and people stop doing it. If your current framework isn't broken, leave it alone. If it is, swap to the formula and watch how the new inputs change which features rise to the top.
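On the weighting question above: a hedged sketch of a weighted variant of the formula. Equal weights of 1.0 reproduce the standard formula; anything else is your own calibration, and non-unit weights shift the 0-100 scale, so recalibrate the cut/park/ship thresholds if you change them. The specific weight values below are illustrative only.

```python
def weighted_priority_score(inputs, weights=None):
    """inputs and weights are dicts keyed by the five input names.

    Equal weights of 1.0 reproduce the standard formula. Non-unit weights
    shift the 0-100 scale, so recalibrate the thresholds if you use them.
    """
    names = ("customer_fit", "metric_impact", "validation_confidence", "speed", "cost")
    weights = weights or {name: 1.0 for name in names}
    positive = sum(inputs[name] * weights[name] for name in names if name != "cost")
    return positive - inputs["cost"] * weights["cost"] + 20


# Hypothetical calibration: after a quarter of shipped features you decide
# Speed was over-predictive and Cost under-predictive, so you adjust both.
score = weighted_priority_score(
    {"customer_fit": 20, "metric_impact": 15, "validation_confidence": 15, "speed": 15, "cost": 5},
    weights={"customer_fit": 1.0, "metric_impact": 1.0, "validation_confidence": 1.0,
             "speed": 0.8, "cost": 1.2},
)
print(score)  # 76.0, versus 80 with equal weights
```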
More on Product Leadership
The Clarity Filter: How to Beat Feature Creep and Know What to Stop Building
Feature creep is a clarity problem, not a discipline problem. Here's the 4-question filter I run when a roadmap is bloated and the founder can't tell what to cut.
The McKelvey Method: A Founder Coaching Framework for the Post-MVP Stage
Founder coaching for the awkward stage between shipping an MVP and scaling. A 90-day framework — Clarity, Systems, Velocity — built for founders who already proved they can build but don't yet know what to build next.
How to Build an MVP in 2026: The 6-Week Framework
Most MVPs fail because founders build too much. Here's the 6-week framework I use with clients to go from idea to paying customers — with real examples from products I've shipped.
The MVP Trap: Why Most Founders Build Too Much
Your MVP should take 4-6 weeks, not 6 months. After building 50+ products, I've watched the same mistake kill startups over and over: building too much before talking to customers.