If you have any questions, our engineers are available to help.

Where do I start?

You can deploy your first indexing cluster for free by following the Getting Started guide. The free trial period provides ample time to learn about the features and developer experience.

What are the differences between Flair and in-house solutions, the Graph Protocol, or other providers such as Alchemy?

Notable differences with "in-house solutions"

  • No need to reinvent the wheel when indexing isn't a core competency of your product or business. For a production-ready solution, you would need to:

    • Build block processors and ingestors with failover mechanisms, automatically redrive missing blocks/logs due to RPC failures, monitor RPC sources throughout your stack, and more.

    • Understand the quirks of each chain regarding data fetching, event listening limitations (max block range, max addresses, etc), as well as re-org handling.

    • Handle the eventually consistent nature of blockchain data (re-orgs, retries, etc.).

    • Find ways to save on RPC call costs as they can easily skyrocket.

  • Lower costs due to economies of scale: as a managed service with hundreds of optimizations (caching, failovers, 24/7 monitoring, and more), Flair can offer a higher-quality solution at a lower cost.

  • Less DevOps work and fewer resources to maintain on a daily basis (compute resources, databases, on-call monitoring, fixing issues, etc.)

When should you build in-house?

If the indexing primitives don't satisfy a complex need, it might make sense to build your own backend infrastructure for indexing. In these cases, feel free to discuss with our engineers; we might be able to help with part of the stack.

Notable differences with "Graph Protocol"

  • Uses a Node.js runtime, which means you can run any JavaScript code as you normally would (as opposed to the limited WASM environment the Graph uses). This means:

    • You can call external APIs (like CoinGecko), interact with contracts on other chains, run SQL queries on your historical data right within the handler, etc.

  • Offers more primitives, like scheduled Workers and Aggregations, which allow you to execute arbitrary logic, not only logic triggered by chain events. For example, tracking Health Factor over time as it changes with the underlying asset price.

  • One global database for all chains and all entities, with SQL access to run arbitrary "Aggregation" or "Join" queries.

  • Flexible backfilling mechanism allows you to execute your scripts for a certain block range, or for the full history of a specific contract, as many times as needed, whether for testing purposes or for loading historical data.

  • Due to the parallel design of the indexer, you can backfill up to 80,000 blocks per second and load a full 4 years' worth of history (with millions of events) in just 30 minutes.

  • Going live to production on the Graph Protocol's free hosted service is not performant enough (and the service is being shut down in favor of their decentralized network). The alternative would be to host your own Graph Node and an RPC node for each chain, stake $GRT tokens, and have a DevOps person on-site to help with maintenance.

  • The Graph Protocol does not support every EVM chain; their hosted service or decentralized network must opt in for a certain chain to be supported. In Flair, you only need to provide an RPC URL to add a new chain.

  • Using Graph Protocol results in separate APIs/Graphs for each subgraph or chain. With Flair, there's only one global database to query from.
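To illustrate the Node.js flexibility mentioned above, here is a minimal sketch of a handler that enriches a transfer event with an external price lookup. The names `TransferEvent`, `handleTransfer`, and the injected `fetchUsdPrice` function are illustrative assumptions, not Flair's actual API.

```typescript
// Hypothetical sketch: a handler running in a plain Node.js runtime can
// await any external call (e.g. a CoinGecko price API) before writing an
// entity. None of these names come from Flair's real SDK.
interface TransferEvent {
  from: string;
  to: string;
  amount: number; // token amount, already decimal-adjusted
  token: string;
}

type PriceLookup = (token: string) => Promise<number>;

// Enrich the raw event with its USD value using an injected price lookup.
async function handleTransfer(
  event: TransferEvent,
  fetchUsdPrice: PriceLookup,
) {
  const priceUsd = await fetchUsdPrice(event.token);
  return { ...event, amountUsd: event.amount * priceUsd };
}
```

In a real handler, `fetchUsdPrice` would wrap an HTTP call; injecting it as a parameter simply keeps the sketch self-contained and testable.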

When should you use Graph Protocol?

If your end-users and consumers require decentralization of your frontend dApp, and need a backup plan in case your team goes out of business, the Graph Protocol might be a suitable alternative. Another team could pick up your Graph and continue serving those end-users.

Notable differences with RPC providers like Alchemy

  • Most other providers only sell "data" to you, so you cannot offload the indexing workload onto their product.

  • When using their raw-data solutions (like events), you need to rebuild the indexing infra from scratch, as explained above in the "in-house solutions" section.

  • Usually, they only support certain chains and you need to wait for them to support a new chain. With Flair, you only need to provide an RPC URL to add a new chain.

When should you use Alchemy APIs or other providers?

If your use-case needs certain popular data (such as NFTs), the chains you're interested in are already supported by a provider, and their data quality is satisfactory (no missing NFTs, transfers, metadata, etc.), that provider might be a better alternative.

In cases where you have NFTs across multiple chains, using Flair's indexer can increase your developers' speed (as it's only one API) and data quality (the indexer focuses only on your own data, not the entire world).

Can I have access to my indexing resources directly?

The Flair indexer can be deployed in your own cloud account (aka on-premise). You can discuss this option with our engineers.

Is your product in beta?

The product has been in development since Jan 2023, and several projects have been using it for various workloads (from staging tests to production usage). Production-readiness features (such as failover handling, 24/7 monitoring, etc.) have been in place for months. This means the indexer is ready for certain use-cases today.

We encourage you to start with small use-cases to fully understand how the product works for you, and gradually build up from there.

We are more than happy to help on your journey to production.

Can I load the data in my own database?

All your namespace data can be synced in real time to your database. Since the data always exists in Flair's underlying storage, you can always re-sync all or portions of your data again to your tables/collections.

What if my processors fail for a specific event log?

The processing platform will retry each event up to 5 times with exponential backoff to make sure intermittent issues are resolved.

If a processor fails fatally for a specific event (e.g. due to a logic bug or RPC downtime), it will not stop the processing of other events, because the Flair platform is built for parallel and distributed processing.

Those failures will be logged for future investigation, and they'll be stored in a DLQ (dead-letter queue) so you can re-drive them later, for example after solving the problem (e.g. fixing a logic issue or resolving RPC downtime), to make sure all events are processed.
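The retry behavior described above can be sketched as follows. This is a minimal illustration, not Flair's actual implementation: the constants (500 ms base delay, 30 s cap) and the function names `backoffDelayMs` and `processWithRetry` are assumptions made for the example.

```typescript
// Delay before retry N (1-based): 500 ms, 1 s, 2 s, ... capped at 30 s.
// The base and cap are illustrative assumptions, not Flair's real settings.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** (attempt - 1));
}

// Run a handler up to `maxAttempts` times with exponential backoff.
// Events that still fail after the last attempt would be routed to a
// dead-letter queue (DLQ) instead of blocking other events.
async function processWithRetry<T>(
  handler: () => Promise<T>,
  maxAttempts = 5,
  sleep: (ms: number) => Promise<void> = (ms) =>
    new Promise((resolve) => setTimeout(resolve, ms)),
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await handler();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) await sleep(backoffDelayMs(attempt));
    }
  }
  // In the real platform this is where the event would go to the DLQ
  // for later re-drive.
  throw lastError;
}
```

The `sleep` function is injected so tests can skip the real delays; production code would use the default `setTimeout`-based implementation.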

Can I get higher capacity for ingestion, processing, or aggregations if I have heavier load requirements?

Flair's architecture is elastic, meaning you can scale capacity as needed, up to millions of transactions per second and petabytes of data (which is unlikely to be produced by current chains' limited architectures, even on L2s).

We can deploy all these resources in your own cloud environment, so everything is physically owned by you, only managed by Flair's orchestrator.

If you'd like to explore this option, reach out to our engineers.

When not to use Flair indexing?

As an indexer, Flair listens to all data and processes it depending on your custom use-case. This means if a provider sells already-indexed data (e.g. standard NFT data), it might be cheaper to use such providers, like SimpleHash or Dune Analytics.

In cases where you work with many chains, even for standard data it might be better to use Flair, because you don't need to wait for those providers to support the chains you need.
