Justin McKelvey
Fractional CTO · 15 years, 50+ products shipped
The Clarity Filter: How to Beat Feature Creep and Know What to Stop Building
TL;DR: For Founders Whose Roadmap Has Gotten Out of Hand
If your roadmap has 14 items, your team is busy, but the metric isn't moving — your problem isn't execution speed. It's that half of what you're building shouldn't be built. Feature creep isn't a discipline problem. It's a clarity problem, and you can't fix it by working harder. The Clarity Filter is a 4-question test you run on every feature on your roadmap. Anything that fails any one of the four is a cut. When founders run this honestly the first time, 30-50% of their roadmap disappears — and the work that's left actually moves the business.
Why Feature Creep Is a Clarity Problem, Not a Discipline Problem
Most advice on feature creep treats it like a willpower issue. "Just say no." "Be more disciplined about scope." That's wrong, and if you've ever tried to apply that advice while staring at your own roadmap, you already know it doesn't help. Founders aren't shipping bloated products because they lack willpower. They're shipping bloated products because every individual feature looks reasonable when you evaluate it alone.
The reasonable-feature trap: Someone suggests a feature. You think about it. It would help some users. It's not too hard to build. You add it. Repeat 50 times. Now you have a bloated product. Each individual decision was defensible. The aggregate is a mess.
The fix isn't more willpower. The fix is a filter that's harder to pass than "this seems reasonable." A good filter forces a feature to clear several specific bars at once, and the cumulative bar is high enough that most reasonable-looking features fail it.
How to Know What to Stop Building: The 4-Question Clarity Filter
If you Googled "what to stop building" or "how to cut features," you're already past the hardest part — you know the roadmap is too big. The next move is having a rule for the cuts. Here's the rule.
Every feature on your roadmap has to pass all four questions. Failing any one is a cut. Not "we'll discuss it." A cut.
Question 1: Does this serve your one core customer? Not "could it be useful to some users?" — does it directly serve the specific customer your product is built around? If you serve solo founders and the feature is "team collaboration," it fails. If you serve solo founders and the feature is "team collaboration we'll need eventually when we expand," it fails harder. Eventually isn't a customer.
Question 2: Does it move the one metric that matters right now? Pick your one metric — activation, retention, revenue, whatever the stage demands. Each feature has to plausibly move that metric. "It improves the experience" doesn't count. "It moves activation from 32% to ~38% based on the assumption that X" counts. If you can't write a sentence about which metric moves and roughly how much, you don't know why you're building the feature.
Question 3: Are you sure it works the way you think it works? Most features founders are sure about turn out to be wrong when users touch them. The filter version: do you have direct evidence (interviews, prototypes, similar features in adjacent products) that this approach is right? If the only evidence is "it makes sense," you're guessing. Guessing isn't disqualifying — but a roadmap full of guesses is. Limit yourself to one or two unproven bets at a time.
Question 4: Is it cheap to remove later? Some features lock you in. They change data shapes, integrate deeply with other features, or train users to expect behavior that's hard to walk back. Those features need an extra-high bar because the cost of being wrong is permanent. Cheap-to-remove features (a setting, a copy change, a new view) can be more speculative because you can rip them out in an afternoon.
A feature that passes all four — serves the core customer, moves the metric, has real evidence, and is cheap to remove — is worth building. A feature that fails any one is a cut. Most founders find that 30-50% of their roadmap fails on Question 1 or 2 alone.
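If your roadmap lives in a spreadsheet or a list, the filter is mechanical enough to run as a script. Here's a minimal sketch — the field names and example features are illustrative, not a prescribed schema, and it applies the strict "fail any one, cut" rule (it doesn't model the one-or-two-unproven-bets allowance from Question 3):

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    serves_core_customer: bool  # Q1: your one customer, not "users someday"
    metric_hypothesis: str      # Q2: e.g. "moves activation 32% -> ~38%"; empty = none
    has_evidence: bool          # Q3: interviews, prototypes, adjacent products
    cheap_to_remove: bool       # Q4: can be ripped out in an afternoon

def passes_filter(f: Feature) -> bool:
    """Strict Clarity Filter: a feature must clear all four questions."""
    return (f.serves_core_customer
            and bool(f.metric_hypothesis)  # no written metric hypothesis = no pass
            and f.has_evidence
            and f.cheap_to_remove)

# Illustrative roadmap items (from the trainer-booking example below)
roadmap = [
    Feature("Trainer profile pages", True, "more bookings via trust", True, True),
    Feature("Group class scheduling", False, "expands to studios", False, False),
]

keep = [f.name for f in roadmap if passes_filter(f)]
cut = [f.name for f in roadmap if not passes_filter(f)]
```

The value isn't the script itself — it's that encoding the filter forces you to write down a metric hypothesis for every item, which is exactly where most features fail.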
Applying the Filter to a Real Roadmap
Here's how it actually plays out. Imagine a founder building a booking tool for solo fitness trainers. The current roadmap has 14 items.
Item: "Trainer profile pages with photos and bios." Does it serve the core customer (trainers)? Yes. Does it move the metric (bookings)? Probably — clients book trainers they trust. Is there evidence? Yes, every booking platform has profiles. Cheap to remove? Yes. Pass.
Item: "Group class scheduling for studios." Does it serve the core customer (solo trainers)? No — studios are a different customer. Cut. The temptation to keep this kind of feature is enormous because "it's not that hard to build" and "studios are an obvious expansion." Both true. Both irrelevant. The product isn't ready to serve two customers; it's barely serving one.
Item: "AI workout plan suggestions." Does it move the metric (bookings)? Unclear — workout plans are a separate product surface. Is there evidence it improves bookings? No. Cut, or move to a separate research bucket.
Item: "Settings page with notification preferences." Cheap to remove? Yes. Moves the metric? Probably not directly. Cut for now — defer until users actually complain about notifications.
This isn't a hypothetical pattern. Every roadmap I've audited has the same shape: 4-6 items that clearly serve the customer and move the metric, 8-10 that fail one of the four questions, and the founder has been treating all 14 as roughly equal.
How Feature Creep Becomes Scope Creep
Feature creep and scope creep aren't the same thing, but one becomes the other if you don't catch it early.
Feature creep is adding features beyond what the product needs to serve its core customer. You ship the booking tool, then you add the workout plans, then you add the studio scheduling. Each addition is a feature. The product still has a clear shape, just a more cluttered version of it.
Scope creep is when feature creep changes what the product is. You started building a booking tool for solo trainers. Now you're building a fitness platform. The customer is different. The pricing is different. The competitor set is different. You didn't decide to build a fitness platform — you arrived there one feature at a time.
The Clarity Filter blocks both, but it does so at different points. Question 1 (does it serve your one core customer?) catches scope creep — anything that serves a different customer fails. Questions 2-4 catch feature creep — anything that doesn't move the metric, lacks evidence, or is hard to remove fails.
If you're already in scope creep — the product has drifted away from its original customer — the filter still works, but the harder question is which customer you're keeping. That's a different exercise: the McKelvey Method's Month 1 (Clarity) is built for exactly this case.
What to Do With the Killed Ideas
Cutting a feature doesn't mean the idea was bad. It means the idea isn't right for now. You have two options for what to do with the killed list.
Park it visibly. Maintain a "parked" list — features you're not building right now and the reason. Review the list quarterly. Some parked features come back as obvious priorities once the product evolves. Most don't. Either way, parking is better than killing because the team and stakeholders can see you didn't forget about it; you just deferred.
Bury it permanently. Some features should never come back. They served a customer you're not pursuing, or solved a problem you've decided isn't worth solving. Bury those. Don't park them; killing them definitively is healthier than letting them haunt every prioritization conversation.
The point of cutting isn't that every cut is permanent — it's to free up the team's attention right now. A clear "not now, here's why" is almost as valuable as a yes.
The Filter on Existing (Already-Shipped) Features
Most teams apply this kind of filter to new features but not to features they've already shipped. That's backwards. Shipped features have higher costs than unshipped ones — they take up UI real estate, they have to be maintained, they show up in onboarding, they shape user expectations. A bad shipped feature is more expensive than a bad idea.
Run the Clarity Filter against shipped features once a quarter. Anything that fails — especially Questions 1 and 2 — should be a candidate for removal, not just "deprecation" or "hiding." Removing features feels scary, but in practice almost no one notices. The team feels relief, and the product gets sharper.
Feature bloat happens when this never gets done. After 18 months of shipping, the product has 60 features and usage is concentrated in 8 of them. The other 52 are tax — they slow down the codebase, confuse new users, and consume your team's attention every time something breaks. The fix is to cut.
If You Want Help Running the Filter
The hard part of cutting isn't knowing which features to cut. It's making yourself do it. Founders are often too close to their own roadmap to apply the filter honestly. If that's where you are, book a strategy call and I'll run the Clarity Filter against your current roadmap with you. It's the same exercise that's the first month of the McKelvey Method, condensed into a single conversation.
For the next decision after the cut — which of the surviving features to actually build first — read The Prioritization Formula. And if you're cutting features because the underlying product is broken (a vibe-coded app that's struggling under real load), the deeper fix may be a vibe code rescue rather than a roadmap pruning exercise.
Frequently Asked Questions
- How do I know when a feature should be cut?
- Run it through the Clarity Filter: does it serve your one core customer, does it move the metric that matters, are you sure it works, and is it cheap to remove later? If a feature fails any of the four, cut it. Most founders won't cut features because they've already built them — that's sunk cost, not signal. The cost of keeping a wrong feature is higher than the cost of cutting one you might have wanted later.
- How do I get past the sunk cost trap?
- Reframe the question. Instead of 'should I keep this feature I already built?' ask 'if I were starting today, would I build this?' If the answer is no, the feature is costing you complexity, attention, and trust with users — even if you don't see the cost on the P&L. Sunk cost is a real psychological tax, but you pay it once when you cut. You pay it forever if you keep.
- How do I tell my team I'm cutting features they built?
- Be direct and don't apologize for the cut. Frame it as 'we shipped this, we learned X, and based on what we learned the right move is to remove it.' Engineers and designers respect clear thinking more than they respect protecting their work. The only thing that damages morale is when cuts are random or when leadership pretends the feature was never important. If a person on your team only built one thing and you're cutting it, you owe them a conversation about what comes next — but the cut itself isn't the problem.
- Aren't some features just obviously necessary even if no one uses them?
- Sometimes. Auth, billing, and basic security are real examples — they don't drive engagement but you can't ship without them. The Clarity Filter handles this with question 1: does it serve your one core customer? If your customer can't use the product without it, it stays. The trap is calling something 'foundational' when it's actually optional. Most 'we have to have a settings page' features fail this test.
- Should I cut features after launch if no one's using them?
- Yes — and faster than you think. Post-launch is the highest-signal time to cut because you have actual usage data, not assumptions. If a feature has been live for 60 days and fewer than 5% of active users have touched it, it's failing. Either fix the discoverability and re-test, or cut. Carrying dead features forever is how products end up bloated, slow, and hard to explain.
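The 60-day / 5% rule of thumb above is concrete enough to write down. A sketch — the thresholds come straight from the answer above, and the defaults are starting points to tune for your own product, not universal constants:

```python
def post_launch_verdict(days_live: int, active_users: int,
                        users_who_touched: int,
                        min_days: int = 60, min_share: float = 0.05) -> str:
    """Apply the post-launch cut rule: a feature live for 60+ days that
    fewer than 5% of active users have touched is failing."""
    if days_live < min_days:
        return "too early to judge"
    share = users_who_touched / active_users if active_users else 0.0
    if share < min_share:
        return "failing: fix discoverability and re-test, or cut"
    return "keep"
```

For example, a feature live for 90 days that 30 of 1,000 active users have touched (3%) comes back as failing; the same feature at 30 days is simply too early to judge.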
- What are the signs my product has feature bloat?
- Three signs. First: you can't describe what the product does in one sentence anymore. Second: new users get confused on first use because there are too many surfaces. Third: your team makes incorrect assumptions about how features interact because the system has gotten too complex to hold in one head. If any of those is true, you have feature bloat — which is feature creep that compounded. The fix is the same: run the Clarity Filter and start cutting.
More on Product Leadership
The Prioritization Formula: A Product Prioritization Framework for Solo Founders
RICE, ICE, Kano, MoSCoW, Lean Prioritization — every prioritization framework was built for product teams, not solo founders. Here's a 5-input formula that synthesizes the best of all five for the 1-person company.
The McKelvey Method: A Founder Coaching Framework for the Post-MVP Stage
Founder coaching for the awkward stage between shipping an MVP and scaling. A 90-day framework — Clarity, Systems, Velocity — built for founders who already proved they can build but don't yet know what to build next.
How to Build an MVP in 2026: The 6-Week Framework
Most MVPs fail because founders build too much. Here's the 6-week framework I use with clients to go from idea to paying customers — with real examples from products I've shipped.
The MVP Trap: Why Most Founders Build Too Much
Your MVP should take 4-6 weeks, not 6 months. After building 50+ products, I've watched the same mistake kill startups over and over: building too much before talking to customers.