Is there a best practice for running Netdata on a large-scale distributed cluster?

What do I expect?

I want to deploy Netdata in a large-scale cluster with over 100 nodes, all of which collect metrics at 1s granularity. The collected data is posted to a cloud database.

What’s the problem?

I have found a solution: all Netdata agents export data via Prometheus remote write. I selected Promscale as the connector and TimescaleDB as the backend database.
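For reference, this is roughly how each agent is configured, using Netdata's `prometheus_remote_write` connector in `exporting.conf` (the hostname `promscale-host` and port `9201` are placeholders for my setup, not defaults you must use):

```ini
# /etc/netdata/exporting.conf
[exporting:global]
    enabled = yes

[prometheus_remote_write:promscale]
    enabled = yes
    # Promscale's remote-write endpoint (placeholder host/port)
    destination = promscale-host:9201
    remote write URL path = /write
    # match the 1s collection granularity
    update every = 1
```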

Since there are over 100 nodes in the cluster, the data flow may be large. What is the best practice for a large-scale cluster with Promscale as the connector?

- 100 agents → 1 single Promscale instance → database?
- 100 agents → 100 Promscale instances (one per agent) → database?

Is Redis/RabbitMQ needed as a cache between Promscale and the database?