WordPress excerpt: Most civic projects measure the wrong things — attention, signatures, media coverage — and call it progress. This article explains what America’s Plan is actually trying to produce, how that gets measured at different timescales, and what cannot honestly be claimed yet.
The metrics problem
The standard ways civic and advocacy projects measure success are mostly measuring the wrong things.
Page views and social media followers measure attention. Petition signatures measure willingness to click. Funding raised measures organizational capacity. Media coverage measures salience. None of these are useless — a project with no attention, no participants, and no resources is not going to accomplish anything — but none of them answer the question that actually matters: is the civic work moving forward?
A project can have growing traffic and a stalled issue pipeline. It can have thousands of petition signatures and no plan specific enough to pressure an institution. It can get favorable coverage and produce nothing that persists past the news cycle. Measuring those things as success proxies is how organizations end up optimizing for metrics that bear little relationship to whether the underlying problem is getting addressed.
America’s Plan is trying to build civic infrastructure — a structured process through which affected parties can turn their knowledge of a problem into plans, pressure, and verified accountability. What counts as progress on that goal is different from what counts as progress on an engagement campaign. This article is an attempt to be specific about the difference, honest about the timeline, and clear about what cannot yet be claimed.
Platform health — necessary but not sufficient
The first level of progress is whether people are showing up and doing the work. This is the most straightforward thing to measure, and it is the least meaningful on its own.
Relevant indicators at this level: substantive participation in issue hub threads (not page views — actual contributions that advance the conversation); threads completing their deliberative stages and producing closing summaries; knowledge accumulating in the commons in forms that are specific, documented, and reusable.
A thread that runs through the dialogue stage, produces a summary that captures what was established, and advances to analysis is measurable progress. A thread that generates ten posts and stalls with no summary is not measurable progress, regardless of how many people read it.
The commons layer matters more here than it might appear. The Issue Pipeline is designed around the idea that civic knowledge should compound — that what one group figures out about a problem becomes the starting point for the next group, rather than being lost or reinvented. If the commons is empty, that compounding cannot happen. If it is accumulating specific, reusable knowledge — named mechanisms, documented failure modes, identified trade-offs — the platform’s long-term function is becoming real.
But platform health is a necessary condition, not a sufficient one. A healthy platform that produces plans with no real-world traction has not succeeded. It has produced a well-functioning internal process that has not yet connected to the institutions and conditions that determine whether anything actually changes.
Issue pipeline progress — more meaningful, harder to measure
The second level is whether issues are moving through the pipeline.
Progress here has specific, observable markers. Has an issue moved from Sentiment to Plan? That is a documented advancement — the group has produced something more organized than shared frustration. Does the plan have enough specificity to be tested against real institutions? A plan that names which officials have authority over the issue, what a measurable change would look like, what the timeline is, and how verification would work is measurably more advanced than a plan that says things should be better.
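The four-part specificity test above is essentially a checklist, and it can be sketched as one. This is purely illustrative: the field names and the example plan below are invented for this sketch and do not come from any actual platform code.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    """Hypothetical model of the specificity test described in this article."""
    responsible_officials: list = field(default_factory=list)  # who has authority
    measurable_change: str = ""    # what a verifiable change would look like
    timeline: str = ""             # when it should happen
    verification_method: str = ""  # how implementation would be checked

    def is_testable(self) -> bool:
        # A plan is specific enough to test against real institutions
        # only when all four elements are present.
        return bool(
            self.responsible_officials
            and self.measurable_change
            and self.timeline
            and self.verification_method
        )

# "Things should be better" fails the test; a plan that names an authority,
# a measurable change, a timeline, and a verification method passes it.
vague = Plan(measurable_change="things should be better")
specific = Plan(
    responsible_officials=["the city transit board"],  # hypothetical example
    measurable_change="bus headways under 15 minutes on route 7",
    timeline="within 18 months",
    verification_method="published monthly headway data",
)
print(vague.is_testable(), specific.is_testable())  # False True
```

The point of the sketch is that each element is checkable independently, so "not specific enough yet" can name exactly which element is missing rather than being a vague judgment.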
Has any plan been formally acknowledged or adopted by a responsible institution? That is real progress — the pipeline connecting to the external world it is designed to influence. Has any commitment made under sustained pressure been tracked through the Accountability stage and verified as implemented? That is the full pipeline working as intended.
As of April 2026, no issue on this platform has run the full pipeline. That is an early-stage fact, not a permanent condition, and it does not indicate the platform is failing — it indicates it is at the beginning. The pipeline takes time to run, particularly for structural civic issues where the responsible institutions have their own timelines, political constraints, and capacity limits. Measuring progress at this level requires accepting that the indicators are real but slow to accumulate.
What should be resisted: counting issues opened as a proxy for issues advanced, counting plans drafted as a proxy for plans adopted, or treating the existence of a thread as evidence that the pipeline is moving. The pipeline moves when stages complete and produce documented outputs that carry forward — not when people post.
Knowledge compounding — harder to observe, highly meaningful
The third level is the one most likely to be invisible in the short term and most important in the long term.
The platform’s theory of change rests on a specific claim: that civic knowledge, if documented and made reusable, can accumulate in ways that make each successive effort more effective than the last. The NAACP’s legal campaign did not win Brown v. Board of Education in isolation — it built on twenty-four years of prior litigation, each case adding to the evidentiary record that the next case required. The disability rights movement’s forty-year campaign produced fifty pieces of legislation before the ADA, each one creating new rights, new legal hooks, and new precedents that the next campaign could use.
This kind of progress shows up as: groups solving problems faster because documented prior work exists; analysis stages that begin with more accurate pictures of the problem because prior Sentiment stage work is on record; plans that are more specific because prior attempts have documented what failed and why. It is visible only over years. In the short term, it looks like documentation work — which is easy to undervalue because it does not produce immediate visible wins.
The commons is where this compounding happens. A commons that is growing with specific, attributed, correctable civic knowledge is showing real progress at this level, even before any single issue has run the full pipeline. A platform without a functioning commons is not building the infrastructure that makes the long-term theory of change credible.
Structural change — the long-horizon outcome
The fourth level is the one the platform will ultimately be judged by, and the one that cannot be meaningfully assessed on any timeline shorter than years.
Our strategy is not primarily aimed at producing individual policy wins. It is aimed at building the civic infrastructure that makes sustained, affected-party-led policy change possible — the kind of infrastructure that does not depend on a single election, a single news cycle, or a single organization’s survival. The question at this level is: does the infrastructure exist and function? Are affected parties actually in leadership roles on the issues that affect them? Is the documented record being built in ways that persist, compound, and become a usable foundation for the next effort?
These outcomes are measured in years and decades. They are also the outcomes that matter most — the difference between a platform that produces a few notable wins before fading and one that builds something that outlasts its founders and compounds over time. The NAACP’s legal campaign took twenty-four years. The ADA was forty years in the making. Civic infrastructure that actually works is not built or evaluated on quarterly timelines.
The time problem and intermediate indicators
This creates a real accountability problem. If the long-horizon outcomes take years to verify, how does the platform demonstrate that the slow work is worth continuing?
The answer is intermediate indicators — things that are genuinely predictive of long-horizon success even if they are not the final outcome. Platform health and pipeline progress are intermediate indicators: if threads are completing stages, producing summaries, and advancing, the platform is doing the work that the theory of change requires. Knowledge compounding is a slower intermediate indicator: if the commons is accumulating specific, correctable, reusable knowledge, the foundation is being built. These are not proof that structural change will follow — the conditions outside the platform matter enormously and are not within its control — but they are evidence that the work is being done correctly.
What contributors and potential participants deserve is an honest account of this timeline. The platform is in early build stage, run by one person, with no issue yet having run the full pipeline. The intermediate indicators matter for exactly that reason: they are the honest answer to “how will you know if this is worth continuing?” before the long-horizon outcomes are visible.
What cannot honestly be claimed yet
The Transparency Report covers the platform’s structural limitations. On the question of progress specifically, there are several things that cannot yet be claimed.
Whether any specific policy has changed as a result of this platform’s work: too early. Whether the accumulated civic knowledge is being used by institutions outside the platform: not yet. Whether the platform is reaching people who are not already civically engaged: no data yet.
What the platform should resist measuring as success: engagement metrics as a proxy for civic impact; “reach” as a proxy for effectiveness; number of issues opened as a proxy for issues advanced; media mentions as a proxy for anything.
What it should measure: whether threads are completing stages and producing usable outputs; whether the commons is accumulating; whether plans are becoming specific enough to test against real institutions; whether any commitments made under pressure have been tracked through the Accountability stage and verified.
That is a shorter list than most project dashboards contain. It is also a more honest one.
This article was researched and drafted with AI assistance under human review. See our full AI and editorial practices.