The main config file that describes your indexing components.

Getting Started

As described in the Getting Started guide, you can generate a new manifest.yml based on the starter boilerplate and examples repository.

Full Example

manifest: 1.2.0
# Default log level for all processors, unless overridden per processor
defaultLogLevel: DEBUG
# Information about your indexing cluster.
# As a good practice, create a separate cluster for each project + environment.
# The cluster ID is unique within your own organization (e.g. dev, prod, v2-dev, v3-prod).
cluster:
  id: dev
  size: small
# A namespace is used to group all your entities.
# It must be globally unique, therefore it is recommended to prefix it with your org name.
namespace: fuji-finance-dev
# Filters define which contract addresses (and/or event topics) to ingest
# from the RPC nodes.
# Each project starts with a default filter group that keeps your contract addresses.
# If your protocol has "factory" contracts that deploy new contracts (such as Pools),
# you only need to put the factory addresses in contracts.csv, then have a
# factory-tracker processor which automatically adds newly deployed contracts
# to this filter group.
# See /reference/custom-scripting/examples for a factory-tracker example.
filterGroups:
  - id: default
    # The "preserve" strategy means that every time you deploy the cluster,
    # it keeps any existing entries (e.g. programmatically added contracts) as-is
    # and only upserts new ones from contracts.csv.
    # If you want to always and only track the contracts.csv addresses, use "replace".
    updateStrategy: preserve
    description: |
      All factory and market addresses that are being tracked.
      Pools will be imported from CSV and newly listed markets will be added
      using the processor below (track-newly-listed-market).
    addresses:
      # You can import addresses from a CSV file
      # with headers like "chainId,address"
      - fromFile: ./contracts.csv
      # You can also add manual address entries
      - chainId: 137
        address: "0x0000...0000"
      # You can also track the same address across all chains
      - chainId: "*"
        address: "0x0000...0000"
# Each indexer is an instance that listens to one specific chain ID
# using the RPC nodes you provide under "sources".
# You can provide ANY chain ID as long as you have a "websocket" or "https"
# RPC URL for that chain.
indexers:
  - chainId: 1
    # Setting enabled to true will capture data in real-time.
    # Setting enabled to false disables real-time capture and requires
    # backfilling to get the data.
    enabled: true
    # Indexers filter the incoming events based on the ingestion filter group,
    # and then broadcast all matched events to all your processors.
    ingestionFilterGroup: default
    # For certain advanced use-cases the "processing" filter group might differ
    # from the ingestion group. Consult our engineers for more info.
    processingFilterGroup: default
    # You can provide up to 10 RPC sources "of the same chain" for each indexer
    # for higher reliability and fail-over.
    # This makes sure intermittent network or server-side issues on any RPC
    # do not disrupt your indexing cluster.
    # Remember it is highly recommended to provide at least one "websocket" RPC,
    # otherwise your HTTP providers will receive high traffic and cost you $$$.
    # Flair already provides fallback RPC endpoints for certain popular chains;
    # ask our engineers which chains have an internal fallback RPC.
    sources:
      - endpoint: wss://
      - endpoint:
# Each processor has a unique ID, a handler, and an abi.json.
# The abi.json is used to define which "event topics" to listen to:
# - Processors will ignore any other topic broadcasted by your indexers.
# - Processors receive events from any indexer/chain/contract
#   as long as the topic matches.
processors:
  - id: track-newly-listed-market
    handler: ./src/market-listing/handler.ts
    abi: ./src/market-listing/abi.json
    # Set env variables that can be used inside the processor defined above.
    # Access them in the processor via `const SOME_VARIABLE = process.env.SOME_VARIABLE;`
    env:
      - name: SOME_VARIABLE
        value: xxxxxxxxxxxx
  - id: process-market-events
    handler: ./src/market-events/handler.ts
    abi: ./src/market-events/abi.json
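The contracts.csv imported by the default filter group might look like the following, using the "chainId,address" headers mentioned in the comments (the addresses shown are placeholders, not real contracts):

```csv
chainId,address
1,0x0000000000000000000000000000000000000000
137,0x0000000000000000000000000000000000000000
```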

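To illustrate the shape of a processor, here is a minimal TypeScript sketch of a handler like the track-newly-listed-market one above. The `EventInput` interface, the exported `processEvent` function name, and the "MarketListed" event are assumptions for illustration only; consult /reference/custom-scripting/examples for the real handler API.

```typescript
// Hypothetical event shape; the actual input passed by the indexing
// runtime may differ -- see /reference/custom-scripting/examples.
interface EventInput {
  chainId: number;
  contractAddress: string;
  eventName: string;
  args: Record<string, unknown>;
}

// Env variables defined under the processor's `env` section in manifest.yml
// are available via process.env, as noted in the comments above.
const SOME_VARIABLE = process.env.SOME_VARIABLE ?? "default-value";

// Reacts to a hypothetical factory "MarketListed" event and returns the
// newly deployed market address (e.g. to add it to the filter group).
// Events whose topic is not in abi.json never reach the processor anyway.
export function processEvent(event: EventInput): string | null {
  if (event.eventName !== "MarketListed") {
    return null;
  }
  return String(event.args["market"]);
}
```

Only topics present in the processor's abi.json are delivered, so the `eventName` check here is a defensive guard rather than the primary filter.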
Need help or new features?

Reach out to our engineers 🙂