Database
Deliver all your indexed data to your own backend database in real-time
A namespace groups many entities together; you can think of a namespace as a "database instance". You can create one or more namespaces for your project (for versioning purposes, or to separate dev and prod environments, etc.). We recommend having a prod namespace such as sushiswap and a development namespace such as sushiswap-dev.
As described in the Getting Started guide, you will define your own namespace in your project configuration.
Refer to the related guides to see how to write data to or read from your namespace in custom processor scripts.
Processors use the database integration to store entities in your namespace (e.g. sushiswap). You can define any destination for your data using Flair's managed engine.
At a high level, database syncing involves creating a table in your destination database (e.g. Postgres, MySQL, MongoDB, etc.) and then defining a SQL INSERT statement. You will define this for both historical and real-time syncing, as described below.
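As a rough sketch, assuming a Postgres destination and a hypothetical Swap entity: the swaps table, its columns, and the source_swaps relation below are illustrative placeholders, not Flair's actual schema.

```sql
-- 1) Create the destination table in your own database (hypothetical schema)
CREATE TABLE swaps (
    entity_id       VARCHAR(255) PRIMARY KEY,
    chain_id        INTEGER,
    tx_hash         VARCHAR(66),
    amount_usd      NUMERIC(38, 18),
    block_timestamp TIMESTAMP
);

-- 2) Define the INSERT statement the sync job runs for each batch of entities
--    ("source_swaps" stands in for the indexed entities exposed to the sync engine)
INSERT INTO swaps (entity_id, chain_id, tx_hash, amount_usd, block_timestamp)
SELECT "entityId", "chainId", "txHash", "amountUsd", to_timestamp("blockTimestamp")
FROM source_swaps;
```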
That's it! You can see the status of your real-time database sync in the job manager GUI.
To fix data issues, or if your destination database was down for some time, you can re-run the batch job you created in step 3 above, either for the full data set or just a specific period.
Avoid changing field data types on RDBMS databases; instead, CAST the types in the INSERT SELECT statement, for example:
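A minimal sketch of such a cast, reusing the hypothetical swaps table and source_swaps relation from the sketch above:

```sql
-- Cast in the INSERT ... SELECT instead of ALTERing the destination column type
INSERT INTO swaps (entity_id, amount_usd)
SELECT "entityId", CAST("amountUsd" AS NUMERIC(38, 18))
FROM source_swaps;
```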
Define all indexes and check the schema before syncing a huge table to avoid timeouts on your database engine.
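For instance, with the hypothetical swaps table from above, you might create the indexes your queries need before the initial sync:

```sql
-- Adding indexes to an already-huge table can time out,
-- so define them before the first bulk sync
CREATE INDEX idx_swaps_chain_id ON swaps (chain_id);
CREATE INDEX idx_swaps_block_timestamp ON swaps (block_timestamp);
```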
In cases where your processing logic has changed (e.g. you added a new USD price field to your Swap entities), you will need to use the backfill mechanism to apply those changes. In such a scenario you do NOT need to re-sync the database, because all the changed entities will be applied in real-time.
Preferably use the eu-central-1 region (Frankfurt, Central Europe) for the highest performance. If you need other regions, ping our team.