If you have any questions, our engineers are available to help.
What are the differences compared to in-house solutions, the Graph Protocol, or other providers such as Alchemy?
- No need to reinvent the wheel when indexing isn't the core competency of your product or business. For a production-ready solution, you would need to:
- Build block processors and ingestors with failover mechanisms, automatically redrive missing blocks/logs due to RPC failures, monitor RPC sources throughout your stack, and more.
- Understand the quirks of each chain regarding data fetching and event-listening limitations (max block range, max addresses, etc.), as well as re-org handling.
- Handle the eventually consistent nature of blockchain data (re-orgs, retries, etc.).
- Find ways to save on RPC call costs, as they can easily skyrocket.
- Lower costs due to economies of scale: as a managed service with hundreds of optimizations (caching, failovers, 24/7 monitoring, and more), it can offer a higher-quality solution at a lower cost.
- Less DevOps work and fewer resources to maintain on a daily basis (compute resources, databases, on-call monitoring and issue fixing, etc.).
If the indexing primitives don't satisfy a complex need, it might make sense to build your own backend indexing infrastructure. In these cases, feel free to discuss with our engineers; we might be able to help with part of the stack.
- You can call external APIs (like CoinGecko), interact with contracts on other chains, run SQL queries on your historical data right within the handler, etc.
- Offers more primitives like Scheduled Jobs, which allow you to execute arbitrary logic, not only logic triggered by chain events. For example, tracking a Health Factor over time as it changes when the underlying asset price changes.
- A flexible backfilling mechanism allows you to execute your scripts for a certain block range, or for the full history of a specific contract, as many times as needed, for testing purposes or for loading historical data.
- Due to the parallel design of the indexer, you can backfill up to 80,000 blocks per second and backfill a full 4 years' worth of history (with millions of events) in just 30 minutes.
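As a concrete illustration of the point above about calling external APIs from a handler, here is a minimal TypeScript sketch of a transfer handler that enriches an event with a USD price before saving it. The `TransferEvent` type, the injected `getUsdPrice` lookup, and the `save` callback are hypothetical names for illustration, not Flair's actual SDK.

```typescript
// Hypothetical event shape; a real processor would receive a decoded log.
type TransferEvent = {
  blockNumber: number;
  args: { from: string; to: string; value: bigint };
};

// The price lookup and the persistence callback are injected so the handler
// stays testable; in a real processor the lookup could call an external API
// such as CoinGecko, and `save` could write to the managed database.
async function handleTransfer(
  event: TransferEvent,
  getUsdPrice: (token: string) => Promise<number>,
  save: (doc: Record<string, unknown>) => Promise<void>,
): Promise<void> {
  const price = await getUsdPrice("ethereum");
  await save({
    entityType: "Transfer",
    from: event.args.from,
    to: event.args.to,
    amount: event.args.value.toString(),
    // Assumes an 18-decimal token for this illustration.
    usdValue: (Number(event.args.value) / 1e18) * price,
    blockNumber: event.blockNumber,
  });
}
```

Because the external dependencies are injected, the same handler can run against a stubbed price feed in tests and a live API in production.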
- Going live to production on Graph Protocol's free hosted service is not performant enough (and the service is being shut down in favor of their decentralized network). The alternative would be to host your own Graph Node and an RPC node for each chain, stake GRT tokens, and have a DevOps person on-site to help with maintenance.
- Graph Protocol does not support arbitrary EVM chains; their hosted service or decentralized network must opt in to support each chain. In Flair, you only need to provide an RPC URL to add a new chain.
- Using Graph Protocol results in separate APIs/Graphs for each subgraph or chain. With Flair, there's only one global database to query from.
If your end-users and consumers require decentralization of your frontend dApp, and need a backup plan in case your team goes out of business, the Graph Protocol might be a suitable alternative. Another team could pick up your Graph and continue serving those end-users.
- Most other providers only sell "data" to you, so you cannot offload your indexing workload onto their product.
- Usually, they only support certain chains and you need to wait for them to support a new chain. With Flair, you only need to provide an RPC URL to add a new chain.
If your use case needs certain popular data (such as NFTs), the chains you're interested in are already supported by a provider, and their data quality is satisfactory (no missing NFTs, transfers, metadata, etc.), that provider might be a better alternative.
In cases where you have NFTs across multiple chains, using Flair's indexer can increase your developers' speed (as there's only 1 API) and data quality (the indexer focuses only on your own data, not the entire world).
The product has been in development since Jan 2023, and several projects have been using it for various workloads (from staging tests to production usage). Production-readiness features (such as failover handling and 24/7 monitoring) have been in place for months. This means the indexer is ready for certain use cases today.
You can write anything in your custom scripts, including pushing the data to your own database right from the source as soon as any event occurs.
For the majority of use cases, the managed database offers development velocity and covers the requirements (~200ms query speed, ~1,000 RPS on SQL, and up to 30,000 RPS on the API). The main benefits of using this managed database:
- No need to do migrations or define schemas as it's schemaless.
- Can run any complex SQL query with real-time results
- Latency is suitable for transactional real-time scenarios like front-end dApps
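To illustrate the kind of ad-hoc SQL described above, the sketch below builds a query that ranks transfer recipients by total amount. The `entities` table, the `entityType`/`amount` columns, and the query-building helper are assumptions for illustration only, not Flair's documented schema or API.

```typescript
// Hypothetical helper: composes a SQL query over schemaless indexed
// entities. Table and column names here are illustrative assumptions.
function buildTopRecipientsQuery(entityType: string, limit: number): string {
  // Naive whitelist guard so the illustrative string interpolation
  // cannot be abused for SQL injection.
  if (!/^[A-Za-z0-9_]+$/.test(entityType)) {
    throw new Error("invalid entity type");
  }
  return (
    `SELECT "to" AS recipient, SUM(CAST(amount AS DOUBLE)) AS total ` +
    `FROM entities WHERE entityType = '${entityType}' ` +
    `GROUP BY "to" ORDER BY total DESC LIMIT ${Math.floor(limit)}`
  );
}
```

In a real deployment, this string would be sent to the managed database's SQL endpoint; since the database is schemaless, no migration is needed before new fields like `amount` can be queried.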
The processing platform will retry each event up to 5 times with exponential backoff to make sure intermittent issues are resolved.
If a processor fails fatally for a specific event (e.g., due to a logic bug or RPC downtime), it will not stop the processing of other events, because the Flair platform is built for parallel and distributed processing.
Those failures will be logged for future investigation and stored in a DLQ (dead-letter queue), so you can re-drive them later, for example after solving the problem (e.g., fixing a logic issue or RPC downtime), to make sure all events are processed.
Can I get higher capacity for my resources (for ingestion, processing, or Graph/SQL access) if we have higher load requirements?
Flair's architecture is elastic, meaning you can scale capacity as needed, up to millions of transactions per second and petabytes of data (far more than current chain architectures are likely to produce, even on L2s).
We can even deploy all these resources in your own cloud environment, so everything is physically owned by you, only managed by Flair's orchestrator.
The main focus of Flair's indexer is real-time event indexing for transactional (OLTP) purposes such as end-user experiences, real-time notifications, and operational dashboards.
This means it's not suited for analytical purposes that require data on all transactions of every single address and every single contract across the whole history. For that, you can try Dune Analytics.