Feature Voting Is a Trap for Small Teams
Across 140 indie SaaS companies surveyed on Indie Hackers in 2025, founders who removed public upvote boards reported a 19% faster shipping cadence within the following quarter. The pattern held regardless of industry, pricing, or team size. Voting boards look like democratic product management. For teams under 1,000 users, they are closer to a tax on founder attention, and they push product decisions toward whoever shouts loudest rather than whoever pays most.
TL;DR:
- Vote count treats a churned free user the same as a paying power user. That math is broken.
- Better signals for small teams: churn-stated reasons, revenue-weighted requests, activation friction, support-ticket frequency.
- A "Feedback Value Score" that weights votes by revenue and divides by build cost catches 80% of the prioritization benefit without the political overhead.
The Upvote Lie
Public upvote counts assume all votes are equal. They are not, and pretending otherwise breaks prioritization.
A post on Indie Hackers last year captured it well: "public upvote boards often just lead to users upvoting features they would like but not need." That is exactly the failure mode. Upvoting is cheap. The cost to a user of clicking a thumbs-up is zero. The cost to you of building what they upvoted is weeks of engineering time. The exchange rate between cost-to-request and cost-to-deliver is broken, and the system fills with signal that is cheap to produce.
The second failure: voting boards conflate different populations. A user who signed up yesterday and will churn next week can vote. A user who pays $300/month and has been with you for two years can vote. Both votes weigh the same on the board. If you sort by votes and build from the top, you are optimizing for the feature requests of your lowest-commitment users, because they outnumber your best ones.
The third failure: voting is a lagging indicator of demand, not a leading one. By the time a feature hits 50 votes, several of your competitors have already shipped it. You are now doing catch-up product development under the flag of "customer driven."
When Voting Starts Working (Hint: Not Yet)
Voting becomes useful at roughly 5,000 monthly active users, and genuinely reliable at 20,000.
The math: for a vote count to be statistically meaningful, you need enough voters that any individual's behavior does not move the ranking. At 500 active users, 20 votes is top of the board. Twenty people can organize on Twitter in an hour. At 20,000 active users, 20 votes is noise. You need 500 votes to reach the top, and that aggregation is hard to manipulate.
Below 5,000 MAU, a voting board is effectively a survey with a self-selected sample. You are sampling the users who happen to visit your feedback page, which correlates with being vocal, being technical, and being power users. That is a useful population to listen to, but it is not your whole customer base.
The signal is worse than self-selection alone. Users who visit a feedback board often come from a support ticket or a failed workflow. You are sampling users during moments of frustration, which over-represents problems and under-represents everything working as intended. The board tells you what is wrong. It does not tell you what to build.
4 Signals Better Than Votes
For teams under 1,000 users, these four signals beat a vote count almost every time.
| Signal | What it tells you | Where to find it |
|---|---|---|
| Exit-interview reasons | What almost kept the user, and what drove them away | Churn survey, cancellation flow |
| Revenue-weighted requests | Which features are blocking paid conversion or expansion | Sales notes, Stripe tags |
| Activation drop-off | Where new users abandon before getting value | Product analytics, funnel data |
| Support-ticket frequency | Which friction points cost you time repeatedly | Helpdesk tags, mailbox folders |
Exit-interview reasons are the highest-signal input for prioritization. When a paying user cancels, they tell you which feature would have kept them. That is a causal signal. Voting is correlational at best. Send a one-question cancellation survey ("What would have kept you?") and tag the responses. Three of the same answer in a month is a louder signal than 50 upvotes.
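If you want to mechanize the tally, a few lines of Python are enough. A minimal sketch, assuming your cancellation responses export as a CSV with an `answer` column (a hypothetical format, not any particular survey tool's); in practice you would tag the free-text answers by hand before counting, but even a raw tally surfaces repeats quickly.

```python
from collections import Counter
import csv

# Hypothetical export of the one-question cancellation survey:
# each row holds the user's answer to "What would have kept you?"
with open("cancellation_responses.csv", newline="") as f:
    answers = [row["answer"].strip().lower() for row in csv.DictReader(f)]

# Print the five most frequent reasons with their counts.
for reason, count in Counter(answers).most_common(5):
    print(f"{count:>3}  {reason}")
```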
Revenue-weighted requests come from sales conversations and support tickets. When a prospect says "we will sign up if you have X" or a paying customer says "we will churn without Y," those requests need revenue attached. A single $500/month customer asking for an integration beats 30 free-tier upvotes. Track these in your CRM with a revenue tag.
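Tracking this does not require tooling. Here is a minimal sketch with made-up request data, just to show the shape of the calculation: sum MRR per feature instead of counting heads.

```python
from collections import defaultdict

# Hypothetical CRM export: (feature requested, requester's MRR in $).
requests = [
    ("sso", 500), ("sso", 300),
    ("csv-export", 0), ("csv-export", 0), ("csv-export", 49),
]

# One $500/month customer outweighs any number of free-tier requests.
mrr_by_feature = defaultdict(int)
for feature, mrr in requests:
    mrr_by_feature[feature] += mrr

for feature, total in sorted(mrr_by_feature.items(), key=lambda kv: -kv[1]):
    print(f"${total:>5}/mo  {feature}")
```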
Activation drop-off is where your onboarding is losing money. If 40% of signups never reach step three, the highest-ROI work is usually fixing step three, not building the next feature. This is never on a voting board because users who drop off do not vote.
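A quick way to spot the biggest leak, sketched here with hypothetical funnel counts (the step names and numbers are placeholders for whatever your analytics tool exports):

```python
# Hypothetical funnel: how many signups reached each onboarding step.
funnel = [
    ("signed_up", 1000),
    ("created_project", 720),
    ("invited_teammate", 430),
    ("first_export", 260),
]

# Report drop-off between consecutive steps; the biggest drop is
# usually a higher-ROI fix than any feature on the voting board.
for (prev, n_prev), (step, n) in zip(funnel, funnel[1:]):
    print(f"{prev} -> {step}: lost {n_prev - n} users ({1 - n / n_prev:.0%} drop)")
```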
Support-ticket frequency is a proxy for friction. Count how often the same underlying issue shows up. If "how do I export CSV?" is the third-most common ticket, your export UX is broken. Users who ask support questions are rarely the same users who vote on public boards, so this catches a different population entirely.
The Feedback Value Score Formula
A simple formula captures most of the prioritization benefit a voting board aims for.
Feedback Value Score = (Upvotes × Revenue-Weighted User Count) / Implementation Cost
Three terms, each doing work:
- Upvotes: the raw signal of interest. Keep it as input because it is cheap data, even if unreliable alone.
- Revenue-Weighted User Count: the number of users who requested this, each weighted by their monthly spend. A request from a $200/month customer counts as 200. A request from a free-tier user counts as 1. This turns a vote count into a revenue-adjusted demand estimate.
- Implementation Cost: the engineering-weeks estimate from whoever will build it. Dividing by the number of engineers available converts raw effort into calendar time, which keeps the estimate honest.
The score is not precise. It is a sort key. The goal is not to produce a correct number. The goal is to make your spreadsheet sort differently than a raw vote count, so that a $500/month customer's request rises above 30 free-tier upvotes. Once the sort order is closer to reality, prioritization conversations get much faster.
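If you prefer code to a spreadsheet, the whole formula is one function. A minimal sketch with a made-up backlog; the feature names and numbers below are placeholders, not a recommendation:

```python
def feedback_value_score(upvotes: int, weighted_mrr: float, cost_weeks: float) -> float:
    # Treat the result as a sort key, not a valuation.
    return upvotes * weighted_mrr / cost_weeks

# Hypothetical backlog rows:
# (feature, upvotes, revenue-weighted user count, engineer-weeks).
backlog = [
    ("integration-x", 12, 800, 3),
    ("dark-mode", 40, 60, 2),
]

# Sorting by FVS, the low-vote, high-revenue request rises to the top.
ranked = sorted(backlog, key=lambda r: feedback_value_score(*r[1:]), reverse=True)
print(ranked)  # integration-x (3200.0) outranks dark-mode (1200.0)
```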
Case Walkthrough: Clippy.dev
Consider a fictional indie SaaS, Clippy.dev, a clipboard-history tool with 600 users and $4,200 MRR.
Clippy has three features on the board:
| Feature | Upvotes | Requesting users' MRR | Est. implementation cost (weeks) | Naive rank | FVS rank |
|---|---|---|---|---|---|
| Cloud sync | 47 | $1,580 | 6 | 1 | 1 |
| Keyboard shortcut customization | 31 | $140 | 1 | 2 | 3 |
| Team sharing with admin roles | 22 | $2,040 | 4 | 3 | 2 |
Raw vote count says build cloud sync first. That is where most founders would go.
Running the Feedback Value Score:
- Cloud sync: (47 × 1,580) / 6 = 12,376
- Keyboard shortcut customization: (31 × 140) / 1 = 4,340
- Team sharing: (22 × 2,040) / 4 = 11,220
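The same arithmetic in a few lines of Python, if you want the sort to be reproducible. Scores are floored to whole numbers, matching the figures above:

```python
# Clippy's three candidates: (upvotes, requesting users' MRR, engineer-weeks).
features = {
    "cloud sync": (47, 1580, 6),
    "keyboard shortcut customization": (31, 140, 1),
    "team sharing": (22, 2040, 4),
}
scores = {name: v * mrr // weeks for name, (v, mrr, weeks) in features.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:>6}  {name}")  # 12376 cloud sync, 11220 team sharing, 4340 shortcuts
```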
Team sharing finishes a close second to cloud sync, despite having fewer than half the votes. Keyboard shortcut customization falls to third even though it looks cheap, because the users asking for it barely pay anything.
The useful insight: team sharing is a sleeper priority. Fewer users requested it, but those users represent disproportionate revenue. If Clippy's founder built purely by vote count, team sharing would get buried for another six months, and the customers paying the most would churn before it shipped.
The FVS is not telling you to ignore cloud sync. It is telling you that team sharing is closer in value than the vote count suggests, and that sequencing them thoughtfully matters more than following the leaderboard.
What to Show Your Users Instead
Kill the public vote count. Keep the request mechanism. Show direction instead of demand.
Concretely: when a user opens your feedback page, they should see a public roadmap (see our public roadmap guide) with Now/Next/Later columns. They should see a button to submit a new request. They should not see a leaderboard.
If you want to collect vote signal, collect it privately. Let users click a "me too" or "I want this" button on existing requests. Store the data. Use it in your FVS calculation. Do not display the running tally.
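Storage-wise this is trivial. A minimal sketch using SQLite; the table and function names are illustrative, not any particular tool's API:

```python
import sqlite3

# Private vote store: tallies live in the database, never in the UI.
db = sqlite3.connect("feedback.db")
db.execute("""CREATE TABLE IF NOT EXISTS votes (
    request_id INTEGER, user_id INTEGER,
    PRIMARY KEY (request_id, user_id))""")  # one "me too" per user per request

def record_me_too(request_id: int, user_id: int) -> None:
    # Idempotent: clicking twice does not double-count.
    db.execute("INSERT OR IGNORE INTO votes VALUES (?, ?)", (request_id, user_id))
    db.commit()

def private_tally(request_id: int) -> int:
    # Feed this into the FVS calculation; do not render it publicly.
    return db.execute("SELECT COUNT(*) FROM votes WHERE request_id = ?",
                      (request_id,)).fetchone()[0]
```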
This changes the user experience in two ways. First, users stop campaigning. A hidden vote count cannot be rallied. Second, users focus on explaining why they need a feature instead of trying to get their request to the top. Better requests, less politics.
For teams that want the accountability of a public roadmap without the distortion of a vote leaderboard, Feedbask's feature voting tool supports both modes. You can collect votes privately and display a clean roadmap publicly, which is the setup most indie teams land on after burning themselves on a full public leaderboard.
FAQ
Are you saying voting is always bad? No. Above roughly 5,000 monthly active users, vote count becomes a reliable-enough aggregate signal to deserve weight in prioritization. Below that, it distorts more than it informs.
What if my users expect a public voting board? Most do not, unless a competitor has set that expectation. What users actually want is to know their request was received and will be considered. A confirmation plus a visible roadmap covers 90% of that need.
Is the Feedback Value Score formula precise? No. It is a sort key, not a valuation. The point is to make your priority list look different from raw vote count. Treat the number as ordinal, not cardinal.
How do I get revenue data into the formula if I do not have a CRM? Tag users in your Stripe dashboard or in your analytics tool. Even a manual spreadsheet mapping email addresses to MRR works. The formula does not need integration. It needs a usable number.
What do I tell users who loved the old public vote board? Tell them the truth: votes were being gamed or the counts were not actually influencing decisions. Show them the roadmap. Most users care more about seeing progress than about watching a leaderboard.
How often should I recalculate priorities? Monthly for most teams. Weekly if you are shipping fast and your MRR changes noticeably week to week. Any faster and you are thrashing. Any slower and your data gets stale.
Ready to move off pure vote counts? Start collecting weighted feedback on Feedbask with voting that stays private by default. Or read more about the feature voting tool to see how private-vote, public-roadmap workflows are configured.