Request for Comments: CommonGrants protocol v0.1.0

Overview

As part of the SimplerGrants initiative, we are drafting an open standard for sharing data about funding opportunities, applications, and awards across the grants ecosystem.

This standard, called the CommonGrants protocol, aims to:

  • Promote interoperability between SimplerGrants and other grant management systems
  • Streamline grant discovery and reporting across platforms
  • Lay the groundwork for a “CommonApp” for grants

We’re seeking community feedback to refine the v0.1.0 draft specification before the first stable release.

:hammer_and_wrench: Areas for Feedback

We’re particularly interested in your input on:

  • :scroll: Specification & protocol design – Do the proposed data types, routes, and patterns for customization meet your needs?

  • :laptop: Developer tools – What tools will make it easier for you to adopt the protocol?

  • :open_book: Documentation – What content, guides, and examples will help you work with the protocol?

  • :rocket: Adoption & governance – How would you like to contribute to the protocol’s development and adoption?

:link: Read the full RFC and share your feedback below.

:date: Comment Period Open Through May 17

Thanks for helping us make the grants ecosystem more interconnected!


Subject: Feedback on CommonGrants Protocol v0.1.0

Hi Billy & the SimplerGrants Team,

Thank you for putting this RFC together! Our team at GrantMatch AI+ has been actively working on leveraging the SimplerGrants API to create a more efficient, AI-powered grant discovery experience for small businesses and organizations. We’re excited to see how the CommonGrants protocol could enhance interoperability and streamline funding opportunities even further.

Here’s our feedback based on our current work:

1. Specification & Protocol Design

  • The base types and core fields align well with our project, but we’d love to see a standardized way to handle real-time notifications for new grants matching a user’s profile.
  • Having API endpoints for saved searches and automated alerts would improve the user experience.

2. Developer Tools

  • A ready-to-use SDK or boilerplate templates for different frameworks (especially for AI/ML integrations) would speed up adoption.
  • Support for low-code/no-code integrations (e.g., Airtable, Zapier) could help non-technical users leverage the protocol easily.

3. Documentation & Adoption

  • Clearer use case examples (e.g., “How to build an AI-powered grant search tool using this protocol”) would be valuable.
  • We’re open to testing and providing feedback on early versions of the protocol to help refine it.

4. Interest in Adoption & Governance

  • We’re interested in exploring how GrantMatch AI+ could adopt the protocol once it’s stable.
  • If there’s a working group or governance model in place, we’d love to participate and share insights from our ongoing project.

Looking forward to hearing thoughts from the community! :rocket:

Bash
GrantMatch AI+ | InterGemm LLC


Hi Bash, thanks so much for taking the time to review our RFC and respond with some comments!

If you’d be open to chatting further about your feedback and use case, would you mind filling out this interest form so we can follow up to find a time?

In the meantime, I’ll briefly respond to these excellent suggestions here:

1. Specification & Protocol Design

  • It’s super helpful to know that real-time notifications, saved searches, and alerts would be useful functionality to create some standards around.
  • I can imagine a number of platforms that will have a similar use case, so these seem like great candidates for optional or experimental routes to incorporate in future versions of the protocol.
  • If/when we get to chat further, I’d love to talk through what notifications might look like if grant seekers were subscribing to opportunities across multiple systems. We’ve talked a little bit internally about whether ActivityPub might serve as a source of inspiration for that subscription model.

2. Developer Tools

  • Glad to hear that SDKs are a top priority for you! It’s something we’re hoping to develop in the next 4-6 months.
  • Is there a particular language you’d be most interested in? We were thinking about Python (might work well with AI tools), TypeScript, and Go.
  • Love the idea of having integrations with low-code tools like Zapier. That suggestion also highlights a secondary goal of the protocol, which is to create a robust community in which open source developers can create and share community-managed tools.
  • Custom Zaps and other low-code integrations seem like a great candidate for a community-managed tool.

3. Documentation & Adoption

  • This is exactly the kind of feedback we’re looking for! I was struggling to come up with some additional guides that would be useful, and love the idea of grounding them in specific use cases.
  • Thanks for this offer! The protocol is still in its early stages, so it would be great to get some input on how it aligns with a specific use case. We can chat more about ways to test it out if you get a chance to fill out the interest form linked above.

4. Interest in Adoption & Governance

  • So excited to hear you’re interested in adopting it for GrantMatch AI+. Tools that make grant discovery more streamlined are precisely the kind of use case we were hoping folks would want to leverage the protocol for.
  • We don’t have a formal governance committee in place at the moment, but it’s definitely helpful to know you’d be interested in joining a working group.
  • When we connect separately, maybe we can chat more about what that might look like?

Thank you for your thoughtful response and for considering our feedback! I appreciate the opportunity to contribute to the CommonGrants protocol and would love to continue the discussion.

I will be filling out the interest form shortly so we can schedule a time to chat further. In the meantime, here are a few quick thoughts:

  1. Specification & Protocol Design – The idea of using ActivityPub as inspiration for real-time notifications and cross-platform subscription models sounds intriguing. I’d love to explore what that could look like for grant seekers using multiple systems.
  2. Developer Tools – I’m particularly excited about the SDKs you mentioned. Python would be my top choice, as it aligns well with AI tools and automation. TypeScript could also be valuable, depending on front-end integration needs.
  3. Documentation & Adoption – I appreciate your focus on making guides more use-case-driven. I’d be happy to share insights based on our implementation of GrantMatch AI+, which aims to streamline grant discovery.
  4. Governance & Adoption – I’d love to learn more about what involvement in a working group might look like. If there are specific areas where community members can contribute early on, I’d be happy to explore ways to get involved.

Looking forward to continuing the conversation!


Hey team! Chiming in with a few thoughts of my own + highlighting previous comments.

Data Fields
This guide seems to have standardized and codified the existing data structures listed on grants.gov, which is a great starting point. But I noticed that some of the more descriptive data fields (ex. Eligibility, Funding Instrument, Category, etc.) are not included in the OpportunityBase model. I’m curious why the team decided not to standardize these fields. I worry that various grant posters will pass their own version of an “eligibility criteria” field, for example, with conflicting, overlapping, or unclear options.

I understand why you want to start lean and introduce new fields only when you’re sure they’d be widely relevant. Perhaps some sort of regular analysis of custom fields in use across grant posters could help identify and codify new standardized data fields?

Saved Search / Proactive Notifications (as mentioned by @InterGemm)
As you may know, I’m a strong advocate for this capability, but for this particular project it seems out of scope. IMO this is something the grant platforms should develop, but the CommonGrants protocol should be sure to enable (ex. proper rate limiting to allow services to frequently query the API to serve proactive notifications to grant seekers based on specific criteria).

Use Case Development via UX Research
I’m sure the research practice is already well established on the simpler grants team, but wanted to formally clear the air and make the case for UX research on this project in particular (since sometimes technical projects don’t get enough research attention).

To ensure wide and continuous adoption, it’s important to introduce robust yet simple feedback mechanisms as early as possible. You could establish close communication with some partners (thinking grant posters and platforms in particular) that would be willing to share their progress and concerns as they implement.


@paul_gehrig Thanks for the thoughtful breakdown and for looping back to some of the earlier ideas!

I completely agree with your take on standardized descriptive fields like Eligibility and Funding Instrument. From the perspective of someone actively building with grant-seeking users in mind (GrantMatch AI+), inconsistent formatting across those fields has a big ripple effect. If these data points were standardized, it would seriously improve downstream usability for platforms that rely on structured querying and AI interpretation.

Your suggestion to analyze custom field usage across posters is excellent. It could even feed into a community-driven schema evolution process — something like an RFC-lite workflow for new field proposals?

And yes, huge +1 on saved searches and notifications. While it may sit outside the protocol’s MVP, I think enabling such a layer via smart rate limits and webhook support would open the door for platforms like mine to build meaningful alert systems around real-time updates — especially for smaller orgs and solo seekers who don’t have time to check listings every day.

Totally on board with what you said around UX research, too. Sometimes the data spec gets all the love, but adoption hinges on how usable that spec feels on the ground. If the protocol can flexibly support real-world use cases while keeping the dev experience smooth, it’s going to go far.

Appreciate the energy you’re bringing to this conversation — it’s been a great space to learn from each other and push new ideas forward. :light_bulb:

— Bash
Founder, InterGemm | Creator of GrantMatch AI+
:link: https://dot.cards/intergemm



Hey @paul_gehrig and @InterGemm,

Thanks for taking the time to share your thoughts on the first draft, and appreciate your patience as I got a chance to respond!

These are great suggestions and I’ll address them in the sections below:

Data fields

This is a great callout! We were trying to be intentionally conservative with the set of fields that we included in the initial opportunity data model to avoid including fields that didn’t apply to opportunities outside of federal grants.

That being said, the ones you identified, in particular Funding Instrument, seem like fields that would apply to most grant systems, even if not all domain values (e.g. Grant, Loan, etc.) apply to all systems. We can aim to include some subset of these values in the official v0.1.0 release after the RFC closes.

Perhaps some sort of regular analysis of custom fields in use across grant posters could help identify and codify new standardized data fields?

This suggestion is a great one! We’ve been actively trying to think through a formal “promotion” strategy to add new fields to the standard, and were roughly thinking of the following workflow:

  1. Someone defines a custom field and (optionally) publishes this field as part of an npm package that other platforms can use.
  2. If multiple platforms are using the same custom field, someone can submit a PR with a lightweight RFC (per @InterGemm’s suggestion).
  3. After the RFC period, the governance committee decides whether to add this to the standard model.
  4. If approved, it gets added (ideally as an optional field) in the next release of the protocol and core library.

We could follow a similar process for adding missing options to enumerated fields (e.g. Opportunity status).

Saved search

Appreciate the dialogue around this requested feature!

As y’all discussed there may be alternative strategies for supporting something like this in the interim (e.g. through improved search and rate limits).

If we find that notifications are a consistent feature request and would benefit from standardization across platforms, I could imagine following a similar RFC pattern for standardizing notification triggers, data formats, and endpoints.

UX research

Love this idea! We conducted some initial user research and usability testing around the CLI and the CommonGrants website, but engaging users should absolutely be core to the continued development of the protocol and the prioritization of features in our roadmap.

Recently we’ve been talking to a few grant management systems that may be interested in adopting the protocol, and one consistent feature request that emerged from those conversations has been a standard MCP server for CommonGrants, to enable AI-enabled systems to more easily interact with grant data.

Summary

Thanks for the great suggestions and rich dialogue here. It sounds like at a minimum the following would be helpful:

  • Identifying other data fields that are currently used by Simpler.Grants.gov but also fairly universal across grant systems, in particular things like “Funding Instrument”, “Eligibility”, etc.
  • Defining a clearer process by which community members can propose additions to the standard (e.g. new fields, enum values, or endpoints)
  • Continuing to conduct user research with prospective adopters to guide the prioritization of features in the CommonGrants roadmap

Thanks,
Billy


Hey all,

I’m working on a system for creating NOFO documents. Ours is not anticipated to be a publicly searchable directory in the same way as some other platforms, but we do want to produce NOFOs that are compatible with the format being defined here.

I am writing this message between meetings, but I wanted to briefly describe some of the thoughts I had while looking through the CommonGrants proposal from my perspective of someone working with an existing data model.

OpportunityBase

Overall, I really like this. We have some fields already with (almost) the same name and purpose so that’s nice to see. For example, we have a title, description, created and modified fields, so we’re already on the right track.

camelCase vs other casings

The Python convention is snake_case, so our data elements which are 2 words or more currently look like this: nofo.cover_image.

I am assuming there is a strong expectation that CommonGrants fields should be camelCased (e.g., nofo.coverImage)?

What to do with extra fields?

Our model has plenty of extra object-level fields: as an example, we have a nofo.cover_image for NOFOs with an image on their title page.

If we include all of the OpportunityBase fields in our JSON response but also include ~20 extra fields, does this break the spec?

On the one hand, we can make sure everything explicitly defined in OpportunityBase would be there in our response. On the other, there would be plenty of extra data floating around and in future these might conflict.

One potential approach here would be to stash all those extra fields in CustomFields, so let’s talk about that next.

CustomFields

The intent of CustomFields seems to be that it would be a place for putting extra context about your NOFO that other systems might want to know.

But if there are no real expectations around what is in CustomFields, then you wouldn’t know if other systems are accessing them.

If you are going through the effort to move your data into CustomFields in the specified format, but there are no other systems that use it or expect it, you are just creating work for yourself.

So (related to my last question), I am wondering what the best approach for additional fields is. To use my example of nofo.cover_image, would the best thing be to:

  • not show it at all?
  • leave it as nofo.coverImage?
  • move it to nofo.customFields.coverImage?

Status

This is the one field we have that collides with an existing field in the CommonGrants proposal. Since we don’t post NOFOs, we won’t ever have a use for “status” the way it is described here. Our status field describes where the NOFO document is in the editing process (‘draft’, ‘active’, ‘in review’, etc).

Seems like we should just not make this visible in our API response rather than include this for other systems to consume.

Source (url)

The public URL can only be created once the NOFO leaves our system. Would there be any need/expectation for us to go back and add a URL to our ‘published’ NOFOs? I would assume not, since we don’t anticipate anyone accessing our NOFO data once it has been posted.

ID (uuid)

We use incremental ints for ids (/nofos/50, etc). I notice that grants.gov grants are also using incrementing integers (eg: grants.gov/search-results-detail/355833).

The id field in the CommonGrants spec is a uuid and the description says “Globally unique id for the opportunity”.

The obvious downside of us using an int for nofo.id is that it is not globally unique between systems. However, if grants.gov is using int ids, theirs are not unique either.

Does CommonGrants have an opinion on if we should migrate our id variable to use a uuid here, or do we not care what individual systems are using for their id variables?

I would say that if grants.gov was using a uuid, I would change ours as well.


Overall, I want to commend the clarity and specificity of this proposal. I appreciate the level of detail, and think it is a great starting place for the CommonGrants protocol. I came away with two smaller detail questions:

  • What is the relationship between OpportunityBase and OppFunding, OppStatus, and OppTimeline? My gut says these would be children of an OpportunityBase, but that is not documented.
  • I want to elevate the question above about the usability of CustomFields.

But if there are no real expectations around what is in CustomFields, then you wouldn’t know if other systems are accessing them.

I’d be curious to hear the proposed use cases for custom fields. I also noticed that the schema link is not required. What is the default schema for a custom field?

Great work on this so far!


Hi Billy,

I just wanted to make a few comments and add a few thoughts from across the pond (and apologies for any misunderstandings as a result of that geographic and cultural distance). This is a really interesting idea, and having the backing of Grants.gov definitely feels like it could give the project some of the necessary critical mass.

Data Model

On the specifics, the data model and fields are pretty sensible. In our experience, there are often grants that don’t fit into even the broadest criteria - even an application “date” can be something quite complex and nebulous if you have programmes with irregular rolling periods. So, the decision to keep the field set conservative and limited seems appropriate.

UK experiences:

There’s been a number of successful and unsuccessful approaches to standardisation in grant applications/awards over here.

360Giving:

Definitely the most stand-out - this is an opt-in platform for grant-makers to publish their grant awards in a standardised format. They have a data standard that they help grant-makers to implement. Sometimes this is through helping them build integrations with Grant Management Systems (e.g. UK Community Foundations have a plugin to their Salesforce instance that auto-formats their grant awards data into the right structure). Other times, they collect spreadsheets via email and format them manually.

It’s run by a charity, and the value is in giving a greater understanding of the funding ecosystem, rather than showcasing available and open funding opportunities. It’s also good for transparency. Over 300 funders have signed up to it to share their awards publicly.

Brevio

This was a platform that attempted to standardise the grant application process - going further than the grant opportunity discovery, and helping non-profits to make 1 application to multiple funders. This has also happened in London specifically with the “Propel” grant application portal. These have never been very successful.

Impact of LLMs:

Streamlining the application process is nowhere near as pressing as it was. It is already possible to use AI to reformat applications to one funder into the required format for another. The only things holding this back from wider usage are familiarity with AI within the sector… and horrendously inaccessible grant application portals.

As a result, I think it would be a mistake to try and extend this data standard too far into the application form/requirements.

Focusing back on the grant opportunity discovery process…

Centralisation vs Federation vs Standardisation

Perhaps obvious, but worth stating precisely: a data standard is only valuable if multiple organisations/data sources start using it, otherwise it’s just your database schema.

In my view, the only reason multiple organisations will want to use the data format is if there is some benefit to centralising or federating the information (as you alluded to with ActivityPub).

Clearly the ultimate benefit is an easy way to query all the grants that might be available for a particular theme, area or organisation. The most obvious user story is the non-profit looking for funding for their own activity, but there are ancillary benefits in terms of other funders being better aware of the wider grant-making picture, and where there might be hotspots/coldspots, helping them better allocate their funding.

Data availability vs motivation

It’s not enough to have connections to the platforms to collect opportunity data in the standard format, you also need both those platforms and the funders listing opportunities to agree to share to a federated or centralised database of grants.

Many funders won’t see this as a high priority, others may even see this as a negative - potentially increasing the number of applications they receive to an unmanageable level.

Crowdsourcing

In the UK, local capacity-building/support organisations often maintain their own mini-directories of local grant opportunities. I imagine the same is true in the US?

Some way of allowing these opportunities to be linked into a wider network could potentially reduce the duplicated effort and expand the breadth of these smaller local directories.

Our interest:

At Plinth, we have both a Grant Management System and an AI Grant Fundraiser. We would be interested in allowing the funders who use our GMS to publish data in this format so that other platforms could subscribe to it.

We would also be very interested in subscribing to data about potential grant opportunities from multiple other sources.


@pcraig3, @mepps, and @tom_plinth,

Thank you all so much for your thoughtful responses!

It was really helpful to have this additional input as we synthesized the first round of feedback for HHS leadership. And I appreciate your patience as I got a chance to respond more thoughtfully to your comments.

I might not be able to touch on all of the individual questions and suggestions, but I’ll try to cover the main themes below:

Data model

Thanks for the deep dive into the proposed data model @pcraig3! I know we’ve chatted a bit about your questions directly, but I’ll recap here for other folks who might have similar questions.

Before we dive into the specific questions though, I wanted to highlight a couple of important distinctions about CommonGrants:

  • The CommonGrants models and requirements only apply to CommonGrants-defined API endpoints (e.g. GET /common-grants/opportunities/). Platforms interested in adopting the standard can continue to maintain their own representation of the data internally and in non-CommonGrants public API endpoints (i.e. endpoints not prefixed with /common-grants/).
  • Systems or tools that want to interface with other CommonGrants APIs need to be able to translate data into the proposed CommonGrants format, but DO NOT need to implement the required routes themselves.
  • Because we anticipate most tools will opt to translate their current data models into CommonGrants-compatible formats rather than changing how the data is represented internally, we’re working on a proposed mapping format and set of SDKs that will enable adopters to automate this data transformation using a declarative mapping config.
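As a rough illustration of what a declarative mapping config could enable, a platform might translate its internal model along these lines. Everything here is a hypothetical sketch: the actual mapping format and SDK APIs are still in development, and the field names are made up for the example (real custom fields also carry more structure than shown, e.g. a name/type/value wrapper).

```python
# Hypothetical sketch of a declarative field mapping; the real
# CommonGrants mapping format and SDKs are not yet published.
FIELD_MAPPING = {
    # internal (snake_case) name -> CommonGrants (camelCase) path
    "title": "title",
    "summary": "description",
    "cover_image": "customFields.coverImage",  # non-standard field, nested
}

def to_common_grants(record: dict) -> dict:
    """Translate an internal record into a CommonGrants-shaped dict."""
    out: dict = {}
    for internal_name, cg_path in FIELD_MAPPING.items():
        if internal_name not in record:
            continue
        # Walk dotted paths so custom fields land under customFields
        target = out
        *parents, leaf = cg_path.split(".")
        for parent in parents:
            target = target.setdefault(parent, {})
        target[leaf] = record[internal_name]
    return out
```

For example, `to_common_grants({"title": "Grant A", "cover_image": "img.png"})` produces `{"title": "Grant A", "customFields": {"coverImage": "img.png"}}`.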

Field name casing

Yes, the fields defined explicitly by the standard are in camelCase, and matching the case for those properties is REQUIRED; camelCase is RECOMMENDED (but not required) for all other custom fields. We should add that guidance to the specification though.

Additionally, we’re considering building the SDKs to automatically convert custom field names into camelCase unless directed to preserve the original case.
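For illustration, that conversion could be as simple as the following (a hypothetical helper, not a published SDK function):

```python
def to_camel_case(name: str) -> str:
    """Convert a snake_case field name to camelCase."""
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)
```

So `to_camel_case("cover_image")` returns `"coverImage"`, while single-word names like `"created"` pass through unchanged.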

IDs

While systems looking to adopt the CommonGrants standard can continue to support serial or integer IDs for their internal systems or custom API endpoints, they should support some indexed UUID field as well if they plan to implement the CommonGrants API routes.

As mentioned above, if a system doesn’t plan to implement the CommonGrants API endpoints directly and only needs to fetch data from or write data to other systems that support CommonGrants, then adoption of a UUID is a little less critical.

Source (url)

The intention behind this field is for CommonGrants systems that are designed to serve as aggregators of opportunities from multiple sources (e.g. Grants.gov, state-specific grant platforms, etc.) to be able to link users back to the original grant posting on the host system.

As such, it doesn’t really apply to the NOFO builder, since y’all are upstream of the final posting and, as you mentioned, members of the public wouldn’t be fetching data from your system directly.

Status

Since we don’t post NOFOs, we won’t ever have a use for “status” the way it is described here. Our status field describes where the NOFO document is in the editing process (‘draft’, ‘active’, ‘in review’, etc).

Yup! Because y’all’s status is semantically different from the opportunity status field defined by CommonGrants, there are two main options:

  • As you suggested, hide it from the related CommonGrants endpoint
  • Move it to a separate custom field (e.g. nofoStatus) that indicates it relates to the NOFO rather than the opportunity as a whole

Again this is only really necessary when we’re talking about translating data from the NOFO builder to another system that is using CommonGrants.
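As a sketch of that second option (the exact serialization here is illustrative, not final), the NOFO editing status might be carried like this:

```python
# Illustrative only: the standard opportunity "status" field is hidden,
# and the NOFO editing status is exposed as a custom field instead.
opportunity_fragment = {
    "customFields": {
        "nofoStatus": {
            "name": "nofoStatus",
            "type": "string",
            "value": "in review",
            "description": "Where the NOFO document is in the editing process",
        }
    }
}
```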

Custom fields

Thanks for these additional thoughts about custom fields @pcraig3 and @mepps!

Summarizing a bit, it sounds like there are three main questions it would be helpful to address:

  • Why move custom fields into a nested property rather than keeping them at the root of the opportunity model?
  • How can we provide more structure or context around custom fields to make them useful?
  • How can we avoid duplication of custom fields that are semantically equivalent but defined differently across systems?

This is definitely an area for further design as we incorporate feedback from the RFC into the first release of the protocol, but I’ll try to offer some initial clarification around these questions below:

Why nest custom fields?

The primary use case for custom fields is for platforms that have implemented the CommonGrants-defined API endpoints (e.g. GET /common-grants/opportunities/) to enrich their response with fields that aren’t defined by the CommonGrants standard but are important for their specific platform (e.g. CFDA listing for federal opportunities).

There are two main reasons we require API responses for CommonGrants endpoints to follow the custom fields format:

  • It distinguishes standard fields from non-standard fields within the response body, and requires custom fields to provide a bit more context for parsing and interpretation.
  • It avoids collisions with future additions to the standard data model by reserving the top-level namespace for CommonGrants-defined fields.

However, again, these restrictions only apply to the specific endpoints defined by the CommonGrants API. Other platform-specific endpoints can include other properties at the root level.

Custom field structure

It might be helpful to explain that custom fields are really defined at two levels:

  • An arbitrary custom field in the CommonGrants standard
  • A specific custom field in a given CommonGrants implementation

At the CommonGrants standard level, custom fields only need to satisfy a basic set of criteria:

  • They are an object nested in the customFields property on a model that supports custom fields
  • They have name, type, and value fields, with optional description and schema fields
  • The type field describes the JSON-compliant type used to deserialize the value field

Within a given CommonGrants implementation, ideally the OpenAPI spec explicitly defines which custom fields it supports and/or requires. So in addition to the general criteria above, a given CommonGrants API will likely define:

  • A specific key for each custom field (e.g. cfdaListing or legacyId) that allows the API consumer to parse that field deterministically.
  • A specific format that the value field must satisfy (could be a simple type like string, int, etc. or compound types like an array of strings)
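Putting those two levels together, a single custom field entry might look like the following. The shape is illustrative (based on the criteria above, not a final serialization), and the CFDA value is a placeholder:

```python
# Illustrative custom field entry: satisfies the general criteria
# (name, type, value, optional description/schema) and uses a specific
# key so API consumers can parse it deterministically.
custom_field = {
    "name": "cfdaListing",
    "type": "string",   # JSON-compliant type used to deserialize "value"
    "value": "12.345",  # placeholder CFDA listing number
    "description": "CFDA listing number for this federal opportunity",
    # optional: "schema": URL of a JSON Schema describing "value"
}
```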

What I’m potentially hearing, though, is that in addition to having a CommonGrants API define these custom fields in its OpenAPI spec, it could be helpful to include a mapping of all supported custom fields and their expected schemas at the root of the API response?

If so, I’ll have to think a bit more about how best to represent that in the OpenAPI spec itself since it’s a bit meta.

Avoiding duplicate custom fields

This is a much deeper problem space that we haven’t fully explored.

The ideal vision is to have CommonGrants APIs optionally publish packages for frequently used custom fields, so that other platforms could re-use them. It’s part of the reason we chose TypeSpec as the language for defining the protocol itself, since developers could create custom npm packages.

How we make those custom field packages discoverable and usable across multiple language-specific SDKs, though, is something we’re still figuring out. But ideally CommonGrants would be a repository for these custom fields so that users can discover and re-use them.

Similar efforts and strategic considerations

@tom_plinth Thank you for all of the resources and notes you shared about past standardization efforts that you’ve seen succeed and struggle in the UK.

I’ve been looking into 360Giving more since you mentioned it and anticipate making a few changes to our proposed data model to align more closely with their existing standard.

Your broader suggestions, about the role of LLMs and the importance of defining a clear value proposition and use case for adopting this standard in a way that balances centralization with federation, are now the focus of our strategic conversations around next steps for CommonGrants. We’d love to continue engaging you in those conversations and maybe pitch some ideas for incentivizing the kind of reciprocal data sharing you and I discussed.

Hi CommonGrants team, I’m Matt from the 360Giving and Open Data Services Teams.

We just discovered your spec recently and are very excited to see such work in the space. The 360Giving standard has existed for over a decade and is in heavy use in the UK, resulting in a thriving ecosystem of shared grants data. Below we’d like to share some considerations for CommonGrants derived from our lessons and experience doing work in this space.

Apparently new users can only put two links in posts; so you can find out more about 360Giving here:

Specification

Most of these are based on the fact that the About page states that CommonGrants is for sharing information about awards, but a lot of the models seem oriented around modelling opportunities. If we’ve got the wrong end of the stick please let us know!

  • Consider adding a model for an Organization rather than relying on people implementing this as a custom field. Lots of different use cases/analyses for grants data rely on having good and well-structured data about recipients and funders.
  • In particular, we’d encourage stipulating or recommending the use of proper org-ids (see org-id.guide) for identifying organisations. This allows for cross-referencing organisations in various registers, and works around issues such as organisations being known by many different names (as well as typos!). For example, data published to the 360Giving standard can be combined with data from other datasets that use the same identifiers.
  • It’s possible you might want to expand your model to ensure you capture fields which have a high impact on the usability of the data. Through our experience we’ve discovered that it’s very important to capture data about the following:
    • a unique identifier for the grant (you have this)
    • a title for the grant (you have this)
    • a description for the grant (you have this)
    • the amount awarded and currency (you have this)
    • the date the award was made (it’s unclear to me whether this is in OppTimeline, or whether it would fall under otherDates, which would risk it being missed and not standardised across different grants)
    • each of the funding and recipient organisations. These should have identifiers which follow org-id.guide practices as stated above.
      • also it’s worth bearing in mind whether grants are ever made to individuals in your grantmaking context. In the UK, individuals can be in receipt of grants and there are obvious privacy concerns around making this data available. We explained how we handled this when we launched new fields to describe grants to individuals.
    • location data for the recipient organisation as well as for beneficiaries or where activity is to take place. This supports understanding where funding is going.
    • Grant duration and/or planned start and end dates. Is it one-off funding, or funding across a period? (I don’t see details for this in the OppTimeline)
    • Grant programme. In the UK a funder (or funders working together) can make grants under a “programme” where grants are related in some way. This allows that to be tracked. Again, if modelling this then you’ll need to consider interoperability (do programmes have an agreed identifier), human legibility (a title or description field), and possibly a URI to discover more.
  • If you’re mostly focusing on the Opportunities at the moment rather than the award of grants, then it’s possible to do some inter-standard linking where funders publish their awards as 360Giving data and then these identifiers and urls to datasets can be linked to from CommonGrants.
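To make the organisation and award-field suggestions above concrete, here’s a rough sketch of what a single award record could look like with both parties carrying org-ids. Every field name here is illustrative only, not taken from either standard:

```json
{
  "id": "360G-examplefunder-0001",
  "title": "Community Garden Grant",
  "description": "Funding for a new community garden.",
  "amountAwarded": 25000,
  "currency": "USD",
  "awardDate": "2025-03-14",
  "plannedStartDate": "2025-06-01",
  "plannedEndDate": "2026-05-31",
  "grantProgramme": {"id": "example-prog-01", "title": "Green Spaces Programme"},
  "fundingOrganization": {"id": "US-EIN-12-3456789", "name": "Example Foundation"},
  "recipientOrganization": {"id": "GB-CHC-1234567", "name": "Example Charity"},
  "beneficiaryLocation": {"name": "Springfield", "countryCode": "US"}
}
```

The identifier prefixes (US-EIN for a US employer identification number, GB-CHC for a charity registered in England and Wales) follow org-id.guide conventions, which is what makes records like this joinable across datasets.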

Developer Tools

  • It is important to ensure that you produce some tooling which allows adopters (developers, other implementers, etc.) to validate that they are compliant. 360Giving has an online Data Quality Tool which publishers use to submit data to check that it’s valid, as well as perform some additional quality checks. Validating API compliance is a little different and tougher, but providing tools to check validity and quality is important to support adoption. The needs and shape of validation tooling will obviously be different based on your goals and requirements, but it’s still an important piece of the puzzle.

Documentation

  • It’s useful to separate out normative (reference) content from guidance content which isn’t normative. Normative content should ideally be governed using the same process as the specification itself, as it should only change with the specification (or some PATCH level changes for clarity in some cases) whereas guidance content might change faster in response to user need. Consider having a separate guidance site or else forming a policy on which parts of the reference documentation site are considered normative vs which are guidance.

Adoption and Governance

Re adoption

  • This seems to be more focused on federating data across existing platforms directly, rather than being an initiative to encourage publication of grants data as open data; have we got that right? Are systems compliant with the specification expected to have their APIs open?
  • Stemming from this, it seems that you’re mostly targeting software vendors for grant platforms (which is great!), but is there a missed opportunity in allowing smaller funders to publish grants as files (e.g. a spreadsheet serialisation of the CommonGrants models), which can then be consumed, transformed, and integrated into downstream tools?
  • Have you got specific use-cases in mind for the data, or the specification in general? In our experience we’ve found that adoption is largely driven by a desire to exchange and use data for practical purposes. For example, 360Giving was founded by a philanthropist who felt she was giving “in the dark” and the standard enables grantmakers in the UK to be more informed and thus more strategic and effective with their grantmaking. Use cases also help with benchmarking compliance.

Re Governance:

  • Will the models and the APIs be governed together tightly? For example, if you add a new optional endpoint to the API spec, does the entire CommonGrants spec update even if the models don’t? There are benefits and drawbacks to governing these things either independently or together.
  • We’ve found that it’s also useful to set out the expectations/mechanisms for making PATCH level changes to the standard/specification to allow for quickly attending to bug fixes and clarifications in the documentation.

All the best and keep up the good work, this is very exciting to see :slight_smile:

1 Like

Thanks @mrshll / Matt, this is incredibly helpful — especially the note on organization identifiers and validation tools. As a founder working on GrantMatch AI+ from the grant seeker side, I can see how these elements will directly improve usability for small teams and solo applicants.

1 Like

Hi Billy,

Just following up about ids. It’s a relatively minor point considering some of the other topics here, but based on your recommendation above, we have migrated our internal data to use UUID instead of integer ids.

Ultimately, we decided that our opportunity ids should be unique between systems, so we made that change, and we will keep in sync with it going forward.

1 Like

Hi @mrshll,

Thanks for your thoughtful comments! And apologies for my delayed response. I was OOO when you initially posted and didn’t get a chance to follow up on this thread when I returned to the office last week.

I’ve heard great things about 360 Giving, especially from @tom_plinth, and the work that y’all have done to standardize award data in the UK. So we really appreciate you taking the time to offer some feedback and share some insights with us.

I’ll try to respond directly to most of the suggestions and questions you raised below, but would also love to connect 1:1 if you’d be open to finding a time to chat. Here’s an interest form we’ve been using to collect contact information from folks: https://forms.gle/3rVuyM7YE6aR4wxf8

I can also try to find you on LinkedIn if it would be easier to connect there!

Specification

Thank you for these detailed notes! We’re planning to make some updates to current data models based on RFC feedback in the next 2-3 weeks, so these targeted suggestions are extremely helpful to consider.

I’ve also been referencing the 360 Giving standard to see if there are opportunities to align the names and formats of semantically related data elements.

On the following note:

About page states that CommonGrants is for sharing information about awards, but a lot of the models seem oriented around modelling opportunities.

We probably didn’t do a great job of clarifying this in the RFC, but we’ll be evolving the standard in a few phases:

  • Phase 1: Opportunity search and discovery (the primary focus of this RFC)
  • Phase 2: Grant applications and forms (what we’ve shifted our attention to over the last 3-4 weeks)
  • Phase 3: Grant awards and reporting (likely something we’ll prioritize in the next 4-6 months)

As we shift to grant awards, we’d love to borrow or adapt as much as possible of the work that 360 Giving has already done to standardize award reporting. At a minimum, we’d like to maintain a mapping between the CommonGrants model and the 360 Giving model.

If we get a chance to connect directly, I’d love to talk through this point further:

If you’re mostly focusing on the Opportunities at the moment rather than the award of grants, then it’s possible to do some inter-standard linking where funders publish their awards as 360Giving data and then these identifiers and urls to datasets can be linked to from CommonGrants.

The ultimate goal is to support the kind of federated linking that you’re describing, and I think this could be particularly powerful once we integrate grant opportunity data with the post-award reporting on federal grants that is currently hosted by USAspending.gov.

Developer tools

These are also excellent suggestions!

360Giving has an online Data Quality Tool which publishers use to submit data to check that it’s valid, as well as perform some additional quality checks.

Would love to learn more about what this tool looks like and how it’s implemented! Part of the reason we prioritized defining our models using TypeSpec (and compiling to JSON Schema) is to leverage those schemas for validation.

Validating API compliance is a little different and tougher but providing tools to check validity and quality is important to support adoption.

Excellent point! The preliminary CLI tool that we’ve published attempts to provide this kind of validation with its cg check spec command which validates that a given OpenAPI spec is compliant with the CommonGrants OpenAPI spec.

We’re currently working on improving the output of that command to make it a bit more useful, but I’d welcome any other thoughts you have on ways of making this API-level validation more useful for potential adopters.

Documentation

It’s useful to separate out normative (reference) content from guidance content which isn’t normative.

This is such a helpful suggestion, and articulates a tension I’ve been grappling with around versioning as we seek to add new models, reference material, and guides for supporting grant applications.

Follow-up question: Are there any other sites (in addition to 360 Giving) that you feel demonstrate this separation of concerns well?

Adoption

This seems to be more focused on federating data across existing platforms directly, rather than being an initiative to encourage publication of grants data as open data; have we got that right? Are systems compliant to the specification expected to have their APIs open?

Our thinking is still evolving a little, so we’d welcome your input, but I think your description is largely accurate.

Rather than become a central repository for data, we’re aiming to do three distinct but complementary things with CommonGrants:

  1. Defining a set of data models that can be used as an extensible intermediate representation of grant data exchanged between existing systems.
  2. Supporting translation between existing formats by hosting a set of publicly maintained crosswalks/mappings, as well as tools that can leverage those mappings to automate those transformations.
  3. Proposing a set of APIs that support standard grant management activities using the CommonGrants models, so that third-party developers can build tools that integrate with multiple GMSs.
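As a toy sketch of use case #2, crosswalk-driven translation could be as simple as a field-name mapping applied mechanically to each record. The field names and mapping below are invented for illustration; a real crosswalk would be published and maintained publicly:

```python
# Hypothetical crosswalk: illustrative CommonGrants field names mapped to
# spreadsheet-style column titles in another standard's format.
CROSSWALK = {
    "title": "Title",
    "description": "Description",
    "amountAwarded": "Amount Awarded",
    "currency": "Currency",
}

def translate(record, crosswalk):
    """Rename a record's fields according to a crosswalk, keeping unmapped keys."""
    return {crosswalk.get(key, key): value for key, value in record.items()}

cg_record = {"title": "Community Garden Grant", "amountAwarded": 25000}
other_record = translate(cg_record, CROSSWALK)
# other_record == {"Title": "Community Garden Grant", "Amount Awarded": 25000}
```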

As part of use cases #1 and #2, though, we’d love to partner with existing repositories of grant data like 360 Giving and Philanthropy Data Commons (PDC) to encourage folks to share data for aggregation and reporting.

Have you got specific use-cases in mind for the data, or the specification in general? In our experience we’ve found that adoption is largely driven by a desire to exchange and use data for practical purposes.

In the interest of full transparency, we’re still refining the exact use case and value proposition.

Since we’re connected to the SimplerGrants initiative, one medium- to long-term goal would be to:

  • Enable grants.gov to host opportunities from state and local government and even private philanthropy
  • Make it easier for those opportunities to also appear on existing GMSs like Plinth, Fluxx, Temelio, etc.
  • Enable third-party search and applications across platforms, so that applicants could use whichever portal they’re already in to discover and apply to those opportunities.

Governance

Will the models and the APIs be governed together tightly? for example, if you add a new optional endpoint to the API spec does the entire CommonGrants spec update even if the models don’t update?

This is an excellent question! Initially I think we were anticipating keeping them tightly coupled. However, our work on applications has highlighted some of the advantages of maintaining and versioning them separately.

This is another area in which I’d strongly welcome your feedback. Since we’ve still only published pre-releases of our models and associated developer tools, we expect there to be some breaking changes made in the next few iterations. But I’d love to land on a sensible versioning strategy for our tools, data model, and proposed API early to provide prospective adopters with more stability and confidence in the standard.

Summary

Thank you again for such detailed input! I would LOVE to connect directly and talk through many of the points you shared above. In particular, it would be great to dive deeper into the following topics:

  • Changes we should make to our initial set of data models to align more closely with 360 Giving’s data schema.
  • Clarifying the foundational use case(s) and value proposition(s) for CommonGrants, including opportunities to share data with existing data centralization efforts like 360 Giving or PDC.
  • Patterns and approaches for versioning the data model vs. API and reflecting that accurately in our website and documentation.

Hi @Billy

Absolutely no worries about late replies!

I’ve spoken to Marion at 360Giving and we’d both love to chat with you synchronously. I’ve filled out your contact form for myself and on her behalf, so feel free to wing us an email to arrange a time. Marion also has a link to book time with her directly if you prefer that way, although I’m not sure if it accounts for time zones.

Either way, Marion and I would love to chat with you about your work :slight_smile:

Specification

As we shift to grant awards, we’d love to borrow or adapt as much as possible of the work that 360 Giving has already done to standardize award reporting. At a minimum, we’d like to maintain a mapping between the CommonGrants model and the 360 Giving model.

The ultimate goal is to support the kind of federated linking that you’re describing, and I think this could be particularly powerful once we integrate grant opportunity data with the post-award reporting on federal grants that is currently hosted by USAspending.gov.

This all sounds super exciting. On the topic of mapping, having a mapping between the two models is definitely worthwhile.

We also know that 360Giving is currently skewed towards UK contexts at the moment but we’re at the start of planning our first MAJOR update. As part of that I’ll be investigating ways to make it more of an international standard, and it’ll be very useful to have your insight from the US context to help us understand concepts in grantmaking at a broader level. This could lead to stronger alignment between the standards, or better opportunities for inter-standard linking between datasets.

Developer tools

Would love to learn more about what this tool looks like and how it’s implemented!

The 360Giving Data Quality Tool is available at the following link:

Under the hood, it is an instance of some software we develop at Open Data Services called “CoVE”, which stands for “Convert, Validate, and Explore”. I’d link to it here, but new users are limited to two links in posts!

CoVE uses some other tooling we built at Open Data Services called flatten-tool which supports round-tripping between JSON data and spreadsheet/flat formats given a schema. 360Giving publishers mostly publish in spreadsheet formats, since the data is often compiled by “non-technical” administrative workers. This also means we get a boost in adoption, since grantmakers don’t need to invest in developing APIs; they can just host a spreadsheet file on their website and our tooling will pick it up and convert it.
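As a toy illustration of the round-tripping idea (this is not flatten-tool’s actual implementation, and real spreadsheet conventions are more involved): nested JSON keys become delimited column headers, and the mapping can be reversed to reconstruct the JSON.

```python
def flatten(obj, prefix=""):
    """Flatten a nested dict into {"a/b": value} column form."""
    row = {}
    for key, value in obj.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, path + "/"))
        else:
            row[path] = value
    return row

def unflatten(row):
    """Rebuild the nested dict from slash-delimited columns."""
    obj = {}
    for path, value in row.items():
        parts = path.split("/")
        node = obj
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return obj

grant = {
    "id": "360G-ex-1",
    "recipientOrganization": {"id": "GB-CHC-1234567", "name": "Example Charity"},
}
row = flatten(grant)
# row == {"id": "360G-ex-1",
#         "recipientOrganization/id": "GB-CHC-1234567",
#         "recipientOrganization/name": "Example Charity"}
assert unflatten(row) == grant
```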

CoVE specifically solves the problem of letting people upload data and get useful feedback about it. It validates it using the JSON Schemas but massages the messages you get from the validation library into something more useful for people.

The Data Quality Tool has also developed further than being an instance of CoVE. It performs additional testing on data beyond validation, to look at issues of data quality. In the case of 360Giving this will be things such as including particular fields for specific use cases, or looking at the format of certain fields like organization identifiers to see whether they’re using org-ids. These tests pick up where the validation leaves off. Schema validation gets us valid data, but the additional testing gets us data which is more interoperable and useful.
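To sketch the distinction (an invented example, not the Data Quality Tool’s actual checks, with a made-up prefix list): a quality check can flag data that is schema-valid but not interoperable.

```python
# A string identifier can pass schema validation yet still be useless for
# cross-referencing if it doesn't use a recognised org-id.guide prefix.
KNOWN_ORG_ID_PREFIXES = {"GB-CHC", "GB-COH", "US-EIN"}  # tiny illustrative subset

def check_org_id(identifier):
    """Return a quality warning if the id doesn't look like an org-id."""
    prefix = "-".join(identifier.split("-")[:2])
    if prefix not in KNOWN_ORG_ID_PREFIXES:
        return f"'{identifier}' does not use a recognised org-id prefix"
    return None

ok = check_org_id("GB-CHC-1234567")   # None: recognised prefix
warning = check_org_id("Acme Charity")  # flagged, even though it's a valid string
```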

We’re currently working on improving the output of that command to make it a bit more useful, but I’d welcome any other thoughts you have on ways of making this API-level validation more useful for potential adopters.

API validation is something I’m only just starting to grapple with myself. With my Open Data Services hat on for a second, rather than just my 360Giving hat, if you get in touch I’ll link you up with the folks at Open Referral who are also grappling with API Validation at the moment. They’re also based in the US, so I think there might be scope for collaboration and insight-sharing between you two.

Documentation

Follow-up question: Are there any other sites (in addition to 360 Giving) that you feel demonstrate this separation of concerns well?

Yes! At Open Data Services we help a number of data initiatives develop and maintain data standards so we have a few examples. Again, I can’t link directly to them but I would check out:

  • The Open Contracting Data Standard (OCDS) – standard [dot] open-contracting [dot] org
  • The Beneficial Ownership Data Standard (BODS) – standard [dot] openownership [dot] org

In particular, the OCDS docs are quite advanced and really representative of good data standard documentation imho. 360Giving were an earlier standard so OCDS benefitted from a lot of the lessons we learned at 360Giving, and we’re about to start revisiting the structure of the docs, likely in accordance with Diátaxis: dividing docs into Reference, Tutorials, How-tos, and Explanation oriented material. I think where we’re headed with 360Giving is paring back the content on the standard docs site to be mostly reference material and putting up other material onto another site.

Adoption

Rather than become a central repository for data, we’re aiming to do three distinct but complementary things with CommonGrants:

This is a great model. 360Giving both is and isn’t a central repository for data. 360Giving’s publication model is decentralised. Publishers host their data on their own site; mostly this looks like a link to a spreadsheet file but there are one or two publishers who give us an API endpoint which produces 360Giving JSON.

We then have a registry of publications. Specifically files, as publishers may want to chop up their grants data into multiple files rather than maintain one big file. The registry has recently become “self-service” where publishers can update the links to their files and add new files etc.

We then have some (open source) tooling called the “datagetter” which fetches data from the registry to a machine, and works in tandem with flatten-tool and our validation libraries to put valid data into a datastore. The datastore is also open source. 360Giving host the only known instance of the datastore, but in theory people can spin up their own (with an alternative registry if they want!). The datastore powers applications like GrantNav, which in practice acts as a centralised source of grants data because people love using it, but in theory people can again spin up their own instance.

This is a bit different from a purely federated model such as you describe, but we definitely agree with a decentralised approach.

Governance

This is another area in which I’d strongly welcome your feedback. Since we’ve still only published pre-releases of our models and associated developer tools, we expect there to be some breaking changes made in the next few iterations. But I’d love to land on a sensible versioning strategy for our tools, data model, and proposed API early to provide prospective adopters with more stability and confidence in the standard.

Since 360Giving don’t actually have an API specification (for reasons outlined above), that might be outside my jurisdiction for this thread, but I am interested in this as well. I can put you in touch with some folks at Open Referral, who have a similar model to yourselves. They used to version these separately, but as of HSDS 3.0 they now version them together as part of the same specification. It might be useful to have a chat with them about what motivated that move and the tradeoffs it brings.