The main config that describes your indexing components.
# Default log level for all processors, unless overridden per processor
# Information about your indexing cluster
# As a good practice, create a separate cluster for each project + env
# Cluster ID is unique within your own organization (e.g. dev, prod, v2-dev, v3-prod)
# A namespace is used to group all your entities.
# Must be globally unique, so it is recommended to prefix it with your org name.
# Filters define which contract addresses (and/or event topics) to ingest
# from the RPC nodes.
# Each project starts with a default filter group that contains your contract addresses.
# If your protocol has "factory" contracts that deploy new contracts (such as pools),
# you only need to put the factory addresses in contracts.csv, then have a factory-tracker processor
# that automatically adds newly deployed contracts to this filter group.
# See /reference/custom-scripting/examples for a factory-tracker example.
- id: default
# The "preserve" strategy means that every time you deploy the cluster
# it keeps any existing entries (e.g. programmatically added contracts) as-is
# and only upserts new ones from contracts.csv.
# If you want to always and only track the addresses in contracts.csv, use "replace".
# All factory and market addresses that are being tracked.
# Pools will be imported from CSV, and new markets will be added
# by the processor below (track-newly-listed-market).
# You can import addresses from a CSV file
# with headers like "chainId,address"
- fromFile: ./contracts.csv
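Assuming the "chainId,address" headers mentioned above, a minimal contracts.csv could look like this sketch (the addresses are placeholders):

```csv
chainId,address
1,0x0000000000000000000000000000000000000001
137,0x0000000000000000000000000000000000000002
```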
# You can also add manual address entries
- chainId: 137
# You can also track the same address across all chains
- chainId: "*"
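The entries above are abbreviated; a complete manual entry presumably pairs a chainId with a contract address. A hypothetical sketch (addresses are placeholders, and the exact field names should be checked against your manifest schema):

```yaml
- fromFile: ./contracts.csv
# Manual entry for a single chain
- chainId: 137
  address: "0x0000000000000000000000000000000000000001" # placeholder
# Same placeholder address tracked across all chains
- chainId: "*"
  address: "0x0000000000000000000000000000000000000002" # placeholder
```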
# Each indexer is an instance that listens for one specific chain ID
# using the RPC nodes you provide under "sources".
# You can provide ANY chain ID as long as you have a "websocket" or "https"
# RPC URL for that chain.
- chainId: 1
# Setting enabled to true captures data in real-time.
# Setting enabled to false disables real-time capture; you must backfill to get the data.
# Indexers filter incoming events based on the ingestion filter group,
# then broadcast all matched events to all your processors.
# For certain advanced use-cases the "processing" filter group might differ
# from the ingestion group. Consult our engineers for more info.
# You can provide up to 10 RPC sources "of the same chain" for each indexer
# for higher reliability and fail-over.
# This will make sure intermittent network or server-side issues on any RPC
# is not going to disrupt your indexing cluster.
# Remember, it is highly recommended to provide at least one "websocket" RPC;
# otherwise your HTTP providers will receive high traffic and cost you $$$.
# Flair already provides fallback RPC endpoints for certain popular chains;
# ask our engineers which chains have an internal fallback RPC.
- endpoint: wss://chain-1.rpc.internal.flair.build
- endpoint: https://chain-1.rpc.internal.flair.build
# Each processor has a unique ID and a handler.js and abi.json
# The abi.json will be used to define which "event topics" to listen to.
# - Processors will ignore any other topic broadcast by your indexers.
# - Processors receive events from any indexer/chain/contract
# as long as the topic matches.
- id: track-newly-listed-market
# Env variables that can be used inside the processor defined above.
# Access them in the processor via `const SOME_VARIABLE = process.env.SOME_VARIABLE;`
- name: SOME_VARIABLE
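A sketch of how such an env entry might be completed; the `value` field name and the variable content are assumptions, not confirmed schema:

```yaml
env:
  - name: SOME_VARIABLE
    value: "some-value" # placeholder
```

Inside handler.js you would then read it as `const SOME_VARIABLE = process.env.SOME_VARIABLE;`.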
- id: process-market-events
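Combining the notes above (a unique ID plus a handler.js and abi.json), a full processor entry might look like this sketch; the `abi` and `handler` field names and the file paths are assumptions, not confirmed schema:

```yaml
- id: process-market-events
  # abi.json defines which event topics this processor listens to
  abi: ./abis/market.json
  # handler.js is invoked for every matched event
  handler: ./src/process-market-events/handler.js
```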