Grant program whitepaper (work-in-progress)
-
Sharing my outline for the Season 1 grants program here, so anyone interested can follow along, give feedback or contribute.
The goal is to move us to something state-of-the-art quickly, so this design deals with a number of factors that other grants programs are also working on. I'd love broad input from inside and outside Polygon.
Goals and parameters
The goals of this design are:
- Pipeline yield - treat the talent pipeline as Polygon's primary asset, aligning with the builders in it to maximise their growth and achievement
- Decentralised-first - wherever possible, the process should be politically and architecturally decentralised from the outset. When that’s not possible, use proxies for decentralised primitives to allow for a later transition to on-chain.
- Simple process - we want this to be understandable to all stakeholders, allowing for easier engagement, especially for grantees. “Simplicity is complexity resolved.” The simpler we can make our interfaces to each other, the easier the process will be to understand and improve.
To this end, the design has several guiding principles:
- Impartial - increasing perceived fairness and thus the program's reputation, and setting it up with democratic mechanisms consistent with self-government.
- Transparent - increasing community agency and allowing for a strong feedback and improvement loop.
- Consistent - consistent growth outperforms sporadic growth, since it allows the support organisation to grow in concert with the grant program and produces a more repeatable system that is less dependent on the performance of individual actors.
Multiple Streams
- The process needs to support a changing set of grant streams. For example, smaller grants to endorse dApps that need a little market boost, larger grants to transition running projects onto Polygon, NFT-specific grants, etc.
- Streams are explicitly defined (see the sketch below) based on:
- amount requested
- targets such as success metric or type of milestone/achievement
- type of support needed
- qualification parameters such as past achievements, metrics, team composition, endorsements…
This creates a decentralised and permissionless architecture. Various Polygon and Polygon-related grants programs can participate in the process, using the DAO grant infrastructure and resources as their backend or platform.
It also creates cooperation incentives among the grantors, such as pooling non-monetary support resources to accelerate selected projects, and more diverse and available pools of grant evaluators to improve throughput, feedback and scrutiny.
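As a concrete illustration of how a stream could be explicitly defined, here is a minimal TypeScript sketch. The field names and types are my assumptions for discussion, not a finalised schema:

```ts
// Illustrative only: field names and types are assumptions, not a finalised schema.
interface Application {
  project: string;
  requestedAmount: number;
  evidence: Record<string, unknown>;   // whatever the qualifier checks need to inspect
}

// A qualifier is anything an evaluator can check before approval:
// past achievements, metrics, team composition, endorsements, etc.
interface Qualifier {
  description: string;                 // e.g. "has a deployed testnet contract"
  check: (application: Application) => boolean;
}

interface GrantStream {
  name: string;                        // e.g. "dApp market boost", "project migration"
  minAmount: number;                   // smallest grant offered in this stream
  maxAmount: number;                   // largest grant offered in this stream
  targets: string[];                   // success metrics or milestone types the stream optimises for
  supportTypes: string[];              // non-monetary support offered, e.g. "marketing", "audit"
  qualifiers: Qualifier[];             // measurable/observable requirements checked in due diligence
}
```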
GX - Grantee experience
For the program to attract a higher calibre of applicants, we want to provide a great GX. Our feedback tells us grantees want fast feedback, but that's just the tip of the iceberg. If we consider the talent pipeline to be a strategic asset, then we need to develop and support applicants regardless of whether they qualify for a particular grant within a particular funding window. These are motivated builders ready to work on Polygon, and this design aims to deliver value to them whether or not their application is accepted.
Response time
To this end, we aim to respond as quickly as possible, which in turn requires operational optimisations:
- Obvious rejections are processed in real-time, reducing the pipeline volume for the evaluation team
- The process aims to return applications quickly, with feedback, giving the applicant more time to re-apply with improvements. We remove ambiguity by returning (rejecting) any application that requires additional information, while inviting reapplication.
Focus on application improvement
- Applications are returned with:
- Specific reasons and questions, with reference to documentation such as the relevant rubrics and application examples, and
- In cases of strong applications or projects, a dedicated champion to assist them in reapplying
Post-application GX
- easy review process
- automated payments
- megaphone support (Twitter, community call shoutouts with results, AMAs)
Operational process
Note that the process optimises for fast response via fast rejections. This entails a supporting process for real-time improvement of public-facing documentation. For example, quickly updating the website with a clearer example application. The strategy here is to use rejections as input for improving our scalable resources like website content.
When there are good applications that don't technically qualify, we assign them a personal support champion so they can reapply and be accepted as conveniently as possible. This also allows the evaluators to better understand deficiencies in the process, and to amend the criteria or adjust the available policy levers accordingly.
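To make the triage flow concrete, here is a rough sketch of the routing described above. The thresholds and the `strongBar` parameter are illustrative assumptions, not agreed policy (the bars themselves are the policy levers described further down):

```ts
// Illustrative triage routing only; thresholds and outcome names are assumptions for discussion.
type TriageOutcome =
  | { kind: "auto-rejected"; reason: string }             // obvious rejection, processed in real time
  | { kind: "returned"; feedback: string[] }              // returned with reasons and doc references
  | { kind: "returned-with-champion" }                    // strong but unqualified: assign a champion
  | { kind: "forwarded" };                                // passes triage, goes to full evaluation

function triage(
  score: number,
  qualifies: boolean,
  autoRejectionBar: number,
  strongBar: number,
): TriageOutcome {
  if (score < autoRejectionBar) {
    return { kind: "auto-rejected", reason: "below the automatic rejection bar" };
  }
  if (!qualifies) {
    // Strong projects that don't technically qualify get a personal support champion.
    return score >= strongBar
      ? { kind: "returned-with-champion" }
      : { kind: "returned", feedback: ["see rubric and example applications"] };
  }
  return { kind: "forwarded" };
}
```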
Performance measurement
This design looks at performance measurement from several perspectives:
- Absolute performance: for any given funding stream, what has the portfolio achieved and at what cost?
- Iterative performance: is the funding stream improving its performance over time? Time-based and policy-based cohorts allow us to adapt and improve the performance of the fund as funds are being deployed.
- Operational performance: how effective and efficient is the operation on a weekly basis? This looks at measures such as response time, throughput rate, throughput volume, and efficacy measures such as accepted applications and milestone success rate.
Cohort-based
To this end, we need to measure ourselves using weekly cohorts. This has the additional benefit of letting us plan support operations, like announcements and payments, on a recurring weekly schedule.
Over time, weekly cohorts can be compared against each other for long-term performance, revealing which policies, marketing channels and partners produce the best performers.
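Here is a sketch of what the weekly-cohort bookkeeping could look like, assuming a simple per-application record; the metric names are illustrative, not the final measurement set:

```ts
// Sketch of weekly-cohort bookkeeping; the metric names are assumptions, not the final set.
interface CohortApplication {
  submittedAt: Date;
  accepted: boolean;
  milestonesMet: number;
  milestonesTotal: number;
}

// Key each application by the Monday of its submission week (UTC), e.g. "2022-04-04".
function weekKey(d: Date): string {
  const monday = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
  const day = monday.getUTCDay() || 7;                 // Sunday counts as day 7
  monday.setUTCDate(monday.getUTCDate() - (day - 1));
  return monday.toISOString().slice(0, 10);
}

// Per-cohort acceptance and milestone success rates, comparable week against week.
function cohortMetrics(apps: CohortApplication[]) {
  const byWeek = new Map<string, CohortApplication[]>();
  for (const app of apps) {
    const key = weekKey(app.submittedAt);
    byWeek.set(key, [...(byWeek.get(key) ?? []), app]);
  }
  return Array.from(byWeek.entries()).map(([week, group]) => ({
    week,
    applications: group.length,
    acceptanceRate: group.filter(a => a.accepted).length / group.length,
    milestoneSuccessRate:
      group.reduce((s, a) => s + a.milestonesMet, 0) /
      Math.max(1, group.reduce((s, a) => s + a.milestonesTotal, 0)),
  }));
}
```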
Policy Levers
The process is designed with several policy levers, allowing for real-time adaptation of the grant faucet to external and internal factors:
- max total grant capital allocated per week – capital deployment rate
- selection bar - top X or top X% – for balancing capital effectiveness vs capital deployment
- automatic rejection bar - score – for managing the operational processing rate (a higher bar means faster processing of applications overall, with a trade-off of more false negatives)
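The levers above lend themselves to a simple configuration object. A minimal sketch follows, with placeholder names and an illustrative selection routine that applies the rejection bar, the selection bar and the weekly capital cap in that order:

```ts
// The levers as a configuration object; all names and values here are placeholders.
interface PolicyLevers {
  maxWeeklyCapital: number;             // capital deployment rate: cap per weekly cohort
  selectionBar:                         // top X or top X% of scored applications
    | { kind: "topN"; n: number }
    | { kind: "topPercent"; percent: number };
  autoRejectionScore: number;           // scores below this are rejected in real time
}

// Apply the levers to a scored weekly cohort: drop everything below the rejection bar,
// keep the top slice, then fund in score order until the weekly capital cap is hit.
function selectCohort(
  scored: { id: string; score: number; requested: number }[],
  levers: PolicyLevers,
) {
  const eligible = scored
    .filter(a => a.score >= levers.autoRejectionScore)
    .sort((a, b) => b.score - a.score);

  const bar = levers.selectionBar;
  const cutoff = bar.kind === "topN" ? bar.n : Math.floor((bar.percent / 100) * eligible.length);

  const selected: typeof eligible = [];
  let committed = 0;
  for (const app of eligible.slice(0, cutoff)) {
    if (committed + app.requested > levers.maxWeeklyCapital) break;
    committed += app.requested;
    selected.push(app);
  }
  return selected;
}
```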
Due Diligence
Due Diligence is weak across the blockchain grants landscape, and if we're being honest, it's underused even in conventional government, charity and corporate application processes. Nonetheless, weak or no due diligence wastes funds on ineffective projects and sends a signal that attracts the fraudulent and the inept.
Each stream must define certain measurable or observable qualifiers, and these need to be checked by evaluators before final approval.
Milestones and tranching
For larger grants, it's recommended that a minimum bar of target success metrics also be required for each application, and that payments be tranched against milestones measured by those metrics.
This approach protects the funds from abuse and improves our visibility into what works and what doesn't, allowing us to improve or adjust as necessary.
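A data-model sketch of what tranched milestones could look like, assuming evaluators verify each metric before its tranche is released; field names are illustrative, and this is not a payment contract:

```ts
// Data-model sketch of tranched payouts tied to measurable milestones; names are assumptions.
interface Milestone {
  description: string;       // e.g. "reach the agreed weekly active wallet target"
  metric: string;            // the success metric that gates this tranche
  target: number;            // minimum value the metric must reach
  trancheAmount: number;     // portion of the grant released when the target is met
  verified: boolean;         // set by evaluators after checking the metric
}

interface TranchedGrant {
  totalAmount: number;
  milestones: Milestone[];
}

// Only verified milestones release funds; the remainder of the grant stays protected.
function releasable(grant: TranchedGrant): number {
  return grant.milestones
    .filter(m => m.verified)
    .reduce((sum, m) => sum + m.trancheAmount, 0);
}
```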
Evaluators
- Self-selection bias - Optimism's first round of retrospective grants revealed that the group was biased heavily towards projects they were already familiar with. This reveals two weaknesses in typical blockchain grants programs:
- They favour community insiders, taking opportunities away from the nascent and diverse projects the grant endowments are meant to support
- They crystallise around known categories of expertise, rejecting emerging categories by omission
- Lack of oversight - granting bodies tend to be structured in ways that conflate operational responsibilities with oversight, and as such there is usually almost no oversight. Do the grant programs actually work? Do they achieve results beyond allocating capital and enabling early milestones? And how do token-holders and other stakeholders exercise governance influence over the process?
The answers to these will come from a combination of factors:
- Long-term success metrics
- Decision-making transparency, ideally on-chain
- Separate oversight bodies, elected quadratically
- Separating operational responsibilities from evaluation responsibilities, allowing for easier independent evaluation by community members
- Operational visibility of difficult-to-assess application categories, revealing when broader domain experience is needed in the evaluator group
To simplify this for the moment, organising the evaluators and the evaluation process like TCRs (token-curated registries) would align it with the most well-used on-chain primitive that applies. Though, as a community, we'll need to develop more primitives for grant and resource distribution generally.
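For reference, the TCR pattern boils down to a deposit-challenge-vote cycle. Below is a highly simplified, off-chain sketch of that cycle applied to an evaluator registry; every name and mechanic here is an assumption for discussion, not a proposed implementation:

```ts
// Simplified TCR-style sketch for an evaluator registry: apply with a deposit,
// anyone can challenge, token-holders vote, and the loser's stake is slashed.
interface EvaluatorListing {
  evaluator: string;     // address or identifier of the candidate evaluator
  deposit: number;       // stake at risk while listed
  challenged: boolean;
  listed: boolean;
}

// Anyone can challenge a listing by matching the deposit (deposit bookkeeping omitted here).
function challenge(listing: EvaluatorListing): EvaluatorListing {
  return { ...listing, challenged: true };
}

// Resolve a challenge from a token-weighted vote; on-chain, the losing side's stake is slashed.
function resolve(listing: EvaluatorListing, votesFor: number, votesAgainst: number): EvaluatorListing {
  const keep = votesFor > votesAgainst;
  return { ...listing, challenged: false, listed: keep, deposit: keep ? listing.deposit : 0 };
}
```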
-
saintsal