Enabling the CouchDB collector

Hello,
I am trying to enable the CouchDB collector. Based on the docs, I should be able to run

sudo ./edit-config go.d/couchdb.conf

But it appears that I do not have the go.d orchestrator installed. My /etc/netdata/netdata.conf also seems to be missing some information, especially a [plugins] section:

bash-4.2$
# netdata configuration
#
# You can download the latest version of this file, using:
#
#  wget -O /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
# or
#  curl -o /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
#
# You can uncomment and change any of the options below.
# The value shown in the commented settings, is the default value.
#

# global netdata configuration
[global]
    run as user = netdata

    # the default database size - 1 hour
    history = 3600

    # some defaults to run netdata with least priority
    process scheduling policy = idle
    OOM score = 1000

    stock config directory = /etc/netdata/conf.d

[web]
    web files owner = root
    web files group = netdata

    # by default do not expose the netdata port
    bind to = localhost

[health]
    stock health configuration directory = /etc/netdata/conf.d/health.d
/etc/netdata/netdata.conf (END)

Here is my netdata buildinfo:

bash-4.2$ netdata -W buildinfo
Version: netdata v1.34.1
Configure options:  '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--enable-plugin-freeipmi' '--with-bundled-protobuf' '--with-zlib' '--with-math' '--with-user=netdata' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1  -m64 -mtune=generic' 'LDFLAGS=-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld' 'CXXFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1  -m64 -mtune=generic' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
Install type: custom
Features:
    dbengine:                   YES
    Native HTTPS:               YES
    Netdata Cloud:              YES
    ACLK Next Generation:       YES
    ACLK-NG New Cloud Protocol: YES
    ACLK Legacy:                NO
    TLS Host Verification:      YES
    Machine Learning:           YES
    Stream Compression:         NO
Libraries:
    protobuf:                YES (bundled)
    jemalloc:                NO
    JSON-C:                  YES
    libcap:                  YES
    libcrypto:               YES
    libm:                    YES
    tcalloc:                 NO
    zlib:                    YES
Plugins:
    apps:                    YES
    cgroup Network Tracking: YES
    CUPS:                    NO
    EBPF:                    NO
    IPMI:                    YES
    NFACCT:                  NO
    perf:                    YES
    slabinfo:                YES
    Xen:                     NO
    Xen VBD Error Tracking:  NO
Exporters:
    AWS Kinesis:             NO
    GCP PubSub:              NO
    MongoDB:                 NO
    Prometheus Remote Write: YES
bash-4.2$

I also noticed “Install type: custom” in the output above. I'm not sure why that is, since I used the script command provided by my Netdata Cloud instance, as follows:

wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh --claim-token <TOKEN> --claim-url https://app.netdata.cloud

Can anyone suggest how I can enable the Go orchestrator so that I can use its collector plugins?
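For reference, here is how I checked whether the go.d plugin files exist at all. The paths are the stock layout for native packages (an assumption on my part; they may differ on other installs):

```shell
# Check whether the go.d orchestrator shipped with this install.
# Paths below assume the stock native-package layout; adjust if your
# install uses a different prefix.
plugin=/usr/libexec/netdata/plugins.d/go.d.plugin
stock_conf=/usr/lib/netdata/conf.d/go.d.conf
for f in "$plugin" "$stock_conf"; do
    if [ -e "$f" ]; then
        echo "found:   $f"
    else
        echo "missing: $f"
    fi
done
```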

Thank you.

Hello @amit and welcome to our community.

We need some clarifications:

  1. Did you run this wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh && sh /tmp/netdata-kickstart.sh --claim-token <TOKEN> --claim-url https://app.netdata.cloud command exactly, without any other arguments? Were there any previous attempts to install the Agent on this particular node?
  2. What’s the error message of the sudo ./edit-config go.d/couchdb.conf command? Can you make sure that the edit-config script is under /etc/netdata? If it isn’t, please also check under /etc/netdata/conf.d.
  3. To check your whole configuration (default values have been omitted in the file you pasted), access http://localhost:19999/netdata.conf
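For context, edit-config essentially copies a stock config file into the user config directory (if no user copy exists yet) and then opens it in an editor. A minimal sketch of that merge step, using temporary directories as stand-ins for the real stock and user config directories:

```shell
# Sketch of edit-config's copy-on-first-edit behavior.
# mktemp dirs stand in for /usr/lib/netdata/conf.d and /etc/netdata;
# the couchdb.conf contents here are illustrative, not the real stock file.
stock=$(mktemp -d)   # stand-in for the stock config directory
user=$(mktemp -d)    # stand-in for /etc/netdata

mkdir -p "$stock/go.d"
printf 'jobs:\n  - name: local\n    url: http://127.0.0.1:5984\n' \
    > "$stock/go.d/couchdb.conf"

file="go.d/couchdb.conf"
if [ ! -f "$user/$file" ]; then
    # first edit: seed the user copy from the stock copy
    mkdir -p "$user/$(dirname "$file")"
    cp "$stock/$file" "$user/$file"
fi
cat "$user/$file"    # this is the file edit-config would open in $EDITOR
```

After this copy step, the real script simply launches your editor on the user copy; Netdata reads the user copy in preference to the stock one.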

Tasos.

Hi @Tasos_Katsoulas ,

Thank you for replying!

  1. Yes, that’s the exact command I ran, without any additional arguments. It’s the script from our Netdata Cloud space, so it claims the node. The system did not have any previous Netdata Agents.

  2. There is no go.d folder in /etc/netdata, so I cannot run the command. The edit-config file is not found in /etc/netdata, nor in /etc/netdata/conf.d. However, I did find edit-config in the /usr/libexec/netdata/ folder.

  3. Yes, I realized that edit-config is a helper script that merges config files. Since the remote server has no GUI, I created an SSH tunnel so I could access its localhost in a browser on my local machine, and now I can view the entire merged config. See my next post for a copy of netdata.conf.
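In case it helps anyone else, the tunnel is a plain SSH local port forward. This snippet only assembles the command as a string (user and remote-host are placeholders for my actual credentials):

```shell
# Build the SSH port-forward command that exposes the remote agent's
# dashboard on the local machine. "user@remote-host" is a placeholder;
# the command is printed here, not executed.
remote="user@remote-host"
port=19999
cmd="ssh -N -L ${port}:localhost:${port} ${remote}"
echo "$cmd"
```

Running the printed command and then browsing to http://localhost:19999/netdata.conf locally shows the remote node's merged configuration.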

The node is reporting stats as expected in Netdata Cloud. However, when the script ran, there were some failures about reaching certain remote repos, and if memory serves, I think I saw it continue the installation by falling back to a local install? Does that make sense?

I’ve been struggling with these agent installs in general because the script seems to install different versions of the agent on different nodes of the same type. For example, I will install the agent on the same day on multiple CentOS 7.9 nodes: some of them receive v1.34.1, others receive v1.34.0-285-nightly, and still others receive v1.34.0-285-g5902130c6. There may even be more versions!

The one thing they all have in common is that they are missing the go.d orchestrator.
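In case it's relevant, here is a check I can run on these RPM-based nodes to see which package owns the netdata binary (a best-effort sketch; the binary path comes from the --sbindir=/usr/sbin build option above, and it falls back gracefully when rpm or the binary is absent):

```shell
# Identify the package that installed the netdata binary on an RPM system.
# /usr/sbin/netdata matches the build's --sbindir; this is a diagnostic
# sketch, not a fix.
if command -v rpm >/dev/null 2>&1 && [ -e /usr/sbin/netdata ]; then
    pkg=$(rpm -qf /usr/sbin/netdata)
else
    pkg="unknown (rpm or the netdata binary is not present on this system)"
fi
echo "netdata binary provided by: $pkg"
```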

Any suggestions would be greatly appreciated!
Thanks.
Amit

I need to break the copy of my netdata.conf into two posts, since it exceeds the permissible 32,000 characters.

First half:

# netdata configuration
#
# You can download the latest version of this file, using:
#
#  wget -O /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
# or
#  curl -o /etc/netdata/netdata.conf http://localhost:19999/netdata.conf
#
# You can uncomment and change any of the options below.
# The value shown in the commented settings, is the default value.
#

# global netdata configuration

[global]
	run as user = netdata
	history = 3996
	process scheduling policy = idle
	OOM score = 1000
	stock config directory = /etc/netdata/conf.d
	# glibc malloc arena max for plugins = 1
	# glibc malloc arena max for netdata = 1
	# hostname = modb01-central.sipengines.com
	# update every = 1
	# config directory = /etc/netdata
	# log directory = /var/log/netdata
	# web files directory = /usr/share/netdata/web
	# cache directory = /var/cache/netdata
	# lib directory = /var/lib/netdata
	# home directory = /var/log/netdata
	# lock directory = /var/lib/netdata/lock
	# plugins directory = "/usr/libexec/netdata/plugins.d" "/etc/netdata/custom-plugins.d"
	# memory mode = dbengine
	# page cache size = 32
	# dbengine disk space = 256
	# dbengine multihost disk space = 256
	# host access prefix = 
	# memory deduplication (ksm) = yes
	# debug flags = 0x0000000000000000
	# debug log = /var/log/netdata/debug.log
	# error log = /var/log/netdata/error.log
	# access log = /var/log/netdata/access.log
	# facility log = daemon
	# errors flood protection period = 1200
	# errors to trigger flood protection = 200
	# TZ environment variable = :/etc/localtime
	# timezone = UTC
	# pthread stack size = 8388608
	# cleanup obsolete charts after seconds = 3600
	# gap when lost iterations above = 1
	# cleanup orphan hosts after seconds = 3600
	# delete obsolete charts files = yes
	# delete orphan hosts files = yes
	# enable zero metrics = no
	# dbengine extent pages = 64

[web]

	# option 'web files owner' is not used.
	web files owner = root

	# option 'web files group' is not used.
	web files group = netdata
	bind to = localhost
	# ssl key = /etc/netdata/ssl/key.pem
	# ssl certificate = /etc/netdata/ssl/cert.pem
	# tls version = 1.3
	# tls ciphers = none
	# ses max window = 15
	# des max window = 15
	# mode = static-threaded
	# listen backlog = 4096
	# default port = 19999
	# disconnect idle clients after seconds = 60
	# timeout for first request = 60
	# accept a streaming request every seconds = 0
	# respect do not track policy = no
	# x-frame-options response header = 
	# allow connections from = localhost *
	# allow connections by dns = heuristic
	# allow dashboard from = localhost *
	# allow dashboard by dns = heuristic
	# allow badges from = *
	# allow badges by dns = heuristic
	# allow streaming from = *
	# allow streaming by dns = heuristic
	# allow netdata.conf from = localhost fd* 10.* 192.168.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* UNKNOWN
	# allow netdata.conf by dns = no
	# allow management from = localhost
	# allow management by dns = heuristic
	# enable gzip compression = yes
	# gzip compression strategy = default
	# gzip compression level = 3
	# web server threads = 6
	# web server max sockets = 256
	# custom dashboard_info.js = 

[health]
	stock health configuration directory = /etc/netdata/conf.d/health.d
	# silencers file = /var/lib/netdata/health.silencers.json
	# enabled = yes
	# default repeat warning = never
	# default repeat critical = never
	# in memory max health log entries = 1000
	# script to execute on alarm = /usr/libexec/netdata/plugins.d/alarm-notify.sh
	# health configuration directory = /etc/netdata/health.d
	# enable stock health configuration = yes
	# rotate log every lines = 2000
	# run at least every seconds = 10
	# postpone alarms during hibernation for seconds = 60

[plugins]
	# PATH environment variable = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin
	# PYTHONPATH environment variable = 
	# timex = yes
	# checks = no
	# idlejitter = yes
	# tc = yes
	# diskspace = yes
	# proc = yes
	# cgroups = yes
	# enable running new plugins = yes
	# check for new plugins every = 60
	# slabinfo = no
	# apps = yes
	# charts.d = yes
	# fping = yes
	# ioping = yes
	# node.d = yes
	# perf = yes
	# python.d = yes

[ml]
	# enabled = no
	# maximum num samples to train = 14400
	# minimum num samples to train = 3600
	# train every = 3600
	# dbengine anomaly rate every = 60
	# num samples to diff = 1
	# num samples to smooth = 3
	# num samples to lag = 5
	# random sampling ratio = 0.20000
	# maximum number of k-means iterations = 1000
	# dimension anomaly score threshold = 0.99000
	# host anomaly rate threshold = 0.01000
	# minimum window size = 30.00000
	# maximum window size = 600.00000
	# idle window size = 30.00000
	# window minimum anomaly rate = 0.25000
	# anomaly event min dimension rate threshold = 0.05000
	# hosts to skip from training = !*
	# charts to skip from training = netdata.*
	# stream anomaly detection charts = yes

[registry]
	# enabled = no
	# registry db directory = /var/lib/netdata/registry
	# netdata unique id file = /var/lib/netdata/registry/netdata.public.unique.id
	# registry db file = /var/lib/netdata/registry/registry.db
	# registry log file = /var/lib/netdata/registry/registry-log.db
	# registry save db every new entries = 1000000
	# registry expire idle persons days = 365
	# registry domain = 
	# registry to announce = https://registry.my-netdata.io
	# registry hostname = modb01-central.sipengines.com
	# verify browser cookies support = yes
	# enable cookies SameSite and Secure = yes
	# max URL length = 1024
	# max URL name length = 50
	# netdata management api key file = /var/lib/netdata/netdata.api.key
	# allow from = *
	# allow by dns = heuristic

[cloud]
	# proxy = env
	# aclk implementation = ng
	# query thread count = 4
	# statistics = yes

[statsd]
	# enabled = yes
	# update every (flushInterval) = 1
	# udp messages to process at once = 10
	# create private charts for metrics matching = *
	# max private charts allowed = 200
	# max private charts hard limit = 1000
	# private charts memory mode = dbengine
	# private charts history = 3996
	# decimal detail = 1000
	# disconnect idle tcp clients after seconds = 600
	# private charts hidden = no
	# histograms and timers percentile (percentThreshold) = 95.00000
	# add dimension for number of events received = yes
	# gaps on gauges (deleteGauges) = no
	# gaps on counters (deleteCounters) = no
	# gaps on meters (deleteMeters) = no
	# gaps on sets (deleteSets) = no
	# gaps on histograms (deleteHistograms) = no
	# gaps on timers (deleteTimers) = no
	# statsd server max TCP sockets = 256
	# listen backlog = 4096
	# default port = 8125
	# bind to = udp:localhost tcp:localhost


# per plugin configuration

[plugin:cgroups]
	# cgroups plugin resource charts = yes
	# update every = 1
	# check for new cgroups every = 10
	# use unified cgroups = auto
	# containers priority = 40000
	# enable cpuacct stat (total CPU) = auto
	# enable cpuacct usage (per core CPU) = auto
	# enable cpuacct cpu throttling = yes
	# enable memory = auto
	# enable detailed memory = auto
	# enable memory limits fail count = auto
	# enable swap memory = auto
	# enable blkio bandwidth = auto
	# enable blkio operations = auto
	# enable blkio throttle bandwidth = auto
	# enable blkio throttle operations = auto
	# enable blkio queued operations = auto
	# enable blkio merged operations = auto
	# enable cpu pressure = auto
	# enable io some pressure = auto
	# enable io full pressure = auto
	# enable memory some pressure = auto
	# enable memory full pressure = auto
	# recheck zero blkio every iterations = 10
	# recheck zero memory failcnt every iterations = 10
	# recheck zero detailed memory every iterations = 10
	# enable systemd services = yes
	# enable systemd services detailed memory = no
	# report used memory = yes
	# path to /sys/fs/cgroup/cpuacct = /sys/fs/cgroup/cpu,cpuacct
	# path to /sys/fs/cgroup/cpuset = /sys/fs/cgroup/cpuset
	# path to /sys/fs/cgroup/blkio = /sys/fs/cgroup/blkio
	# path to /sys/fs/cgroup/memory = /sys/fs/cgroup/memory
	# path to /sys/fs/cgroup/devices = /sys/fs/cgroup/devices
	# max cgroups to allow = 1000
	# max cgroups depth to monitor = 0
	# enable new cgroups detected at run time = yes
	# enable by default cgroups matching =  !*/init.scope  !/system.slice/run-*.scope  *.scope  /machine.slice/*.service  /kubepods/pod*/*  /kubepods/*/pod*/*  !/kubepods*  !*/vcpu*  !*/emulator  !*.mount  !*.partition  !*.service  !*.socket  !*.slice  !*.swap  !*.user  !/  !/docker  !*/libvirt  !/lxc  !/lxc/*/*  !/lxc.monitor*  !/lxc.pivot  !/lxc.payload  !/machine  !/qemu  !/system  !/systemd  !/user  * 
	# search for cgroups in subpaths matching =  !*/init.scope  !*-qemu  !*.libvirt-qemu  !/init.scope  !/system  !/systemd  !/user  !/user.slice  !/lxc/*/*  !/lxc.monitor  !/lxc.payload/*/*  !/lxc.payload.*  * 
	# script to get cgroup names = /usr/libexec/netdata/plugins.d/cgroup-name.sh
	# script to get cgroup network interfaces = /usr/libexec/netdata/plugins.d/cgroup-network
	# run script to rename cgroups matching =  !/  !*.mount  !*.socket  !*.partition  /machine.slice/*.service  !*.service  !*.slice  !*.swap  !*.user  !init.scope  !*.scope/vcpu*  !*.scope/emulator  *.scope  *docker*  *lxc*  *qemu*  *kubepods*  *.libvirt-qemu  * 
	# cgroups to match as systemd services =  !/system.slice/*/*.service  /system.slice/*.service 
	# enable cgroup / = no
	# search for cgroups under / = yes
	# enable cgroup system.slice = no
	# search for cgroups under /system.slice = yes
	# enable cgroup system.slice/system-getty.slice = no

[plugin:timex]
	# timex plugin resource charts = yes
	# update every = 10
	# clock synchronization state = yes
	# time offset = yes

[plugin:apps]
	# update every = 1
	# command options = 

[plugin:charts.d]
	# update every = 1
	# command options = 

[plugin:fping]
	# update every = 1
	# command options = 

[plugin:ioping]
	# update every = 1
	# command options = 

[plugin:node.d]
	# update every = 1
	# command options = 

[plugin:perf]
	# update every = 1
	# command options = 

[plugin:idlejitter]
	# loop time in ms = 20

[plugin:tc]
	# script to run to get tc values = /usr/libexec/netdata/plugins.d/tc-qos-helper.sh

[plugin:python.d]
	# update every = 1
	# command options = 

[plugin:proc]
	# netdata server resources = yes
	# /proc/pagetypeinfo = no
	# /proc/stat = yes
	# /proc/uptime = yes
	# /proc/loadavg = yes
	# /proc/sys/kernel/random/entropy_avail = yes
	# /proc/pressure = yes
	# /proc/interrupts = yes
	# /proc/softirqs = yes
	# /proc/vmstat = yes
	# /proc/meminfo = yes
	# /sys/kernel/mm/ksm = yes
	# /sys/block/zram = yes
	# /sys/devices/system/edac/mc = yes
	# /sys/devices/system/node = yes
	# /proc/net/dev = yes
	# /proc/net/wireless = yes
	# /proc/net/sockstat = yes
	# /proc/net/sockstat6 = yes
	# /proc/net/netstat = yes
	# /proc/net/snmp = yes
	# /proc/net/snmp6 = yes
	# /proc/net/sctp/snmp = yes
	# /proc/net/softnet_stat = yes
	# /proc/net/ip_vs/stats = yes
	# /sys/class/infiniband = yes
	# /proc/net/stat/conntrack = yes
	# /proc/net/stat/synproxy = yes
	# /proc/diskstats = yes
	# /proc/mdstat = yes
	# /proc/net/rpc/nfsd = yes
	# /proc/net/rpc/nfs = yes
	# /proc/spl/kstat/zfs/arcstats = yes
	# /proc/spl/kstat/zfs/pool/state = yes
	# /sys/fs/btrfs = yes
	# ipc = yes
	# /sys/class/power_supply = yes

[plugin:proc:diskspace]
	# remove charts of unmounted disks = yes
	# update every = 1
	# check for new mount points every = 15
	# exclude space metrics on paths = /proc/* /sys/* /var/run/user/* /run/user/* /snap/* /var/lib/docker/*
	# exclude space metrics on filesystems = *gvfs *gluster* *s3fs *ipfs *davfs2 *httpfs *sshfs *gdfs *moosefs fusectl autofs
	# space usage for all disks = auto
	# inodes usage for all disks = auto

[plugin:proc:/proc/stat]
	# cpu utilization = yes
	# per cpu core utilization = yes
	# cpu interrupts = yes
	# context switches = yes
	# processes started = yes
	# processes running = yes
	# keep per core files open = yes
	# keep cpuidle files open = yes
	# core_throttle_count = auto
	# package_throttle_count = no
	# cpu frequency = yes
	# cpu idle states = yes
	# core_throttle_count filename to monitor = /sys/devices/system/cpu/%s/thermal_throttle/core_throttle_count
	# package_throttle_count filename to monitor = /sys/devices/system/cpu/%s/thermal_throttle/package_throttle_count
	# scaling_cur_freq filename to monitor = /sys/devices/system/cpu/%s/cpufreq/scaling_cur_freq
	# time_in_state filename to monitor = /sys/devices/system/cpu/%s/cpufreq/stats/time_in_state
	# schedstat filename to monitor = /proc/schedstat
	# cpuidle name filename to monitor = /sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/name
	# cpuidle time filename to monitor = /sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/time
	# filename to monitor = /proc/stat

[plugin:proc:diskspace:/]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:diskspace:/dev]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:diskspace:/dev/shm]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:diskspace:/dev/hugepages]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:diskspace:/proc/sys/fs/binfmt_misc]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/kernel/security]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/systemd]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/cpu,cpuacct]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/cpuset]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/memory]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/hugetlb]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/net_cls,net_prio]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/blkio]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/pids]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/perf_event]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/devices]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/cgroup/freezer]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/fs/pstore]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/sys/kernel/config]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/run]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:diskspace:/run/user/0]
	# space usage = no
	# inodes usage = no

[plugin:proc:diskspace:/boot]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:diskspace:/run/netdata]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:diskspace:/run/user]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:diskspace:/var/spool/postfix/maildrop]
	# space usage = auto
	# inodes usage = auto

[plugin:proc:/proc/uptime]
	# filename to monitor = /proc/uptime

[plugin:proc:/proc/loadavg]
	# filename to monitor = /proc/loadavg
	# enable load average = yes
	# enable total processes = yes

[plugin:proc:/proc/sys/kernel/random/entropy_avail]
	# filename to monitor = /proc/sys/kernel/random/entropy_avail

[plugin:proc:/proc/pressure]
	# base path of pressure metrics = /proc/pressure
	# enable cpu some pressure = yes
	# enable memory some pressure = yes
	# enable memory full pressure = yes
	# enable io some pressure = yes
	# enable io full pressure = yes

[plugin:proc:/proc/interrupts]
	# interrupts per core = auto
	# filename to monitor = /proc/interrupts

[plugin:proc:/proc/softirqs]
	# interrupts per core = auto
	# filename to monitor = /proc/softirqs

[plugin:proc:/proc/vmstat]
	# filename to monitor = /proc/vmstat
	# swap i/o = auto
	# disk i/o = yes
	# memory page faults = yes
	# out of memory kills = yes
	# system-wide numa metric summary = auto

[plugin:proc:/sys/devices/system/node]
	# directory to monitor = /sys/devices/system/node
	# enable per-node numa metrics = auto

[plugin:proc:/proc/meminfo]
	# system ram = yes
	# system swap = auto
	# hardware corrupted ECC = auto
	# committed memory = yes
	# writeback memory = yes
	# kernel memory = yes
	# slab memory = yes
	# hugepages = auto
	# transparent hugepages = auto
	# filename to monitor = /proc/meminfo

[plugin:proc:/sys/kernel/mm/ksm]
	# /sys/kernel/mm/ksm/pages_shared = /sys/kernel/mm/ksm/pages_shared
	# /sys/kernel/mm/ksm/pages_sharing = /sys/kernel/mm/ksm/pages_sharing
	# /sys/kernel/mm/ksm/pages_unshared = /sys/kernel/mm/ksm/pages_unshared
	# /sys/kernel/mm/ksm/pages_volatile = /sys/kernel/mm/ksm/pages_volatile

[plugin:proc:/sys/devices/system/edac/mc]
	# directory to monitor = /sys/devices/system/edac/mc

[plugin:proc:/proc/net/dev]
	# filename to monitor = /proc/net/dev
	# path to get virtual interfaces = /sys/devices/virtual/net/%s
	# path to get net device speed = /sys/class/net/%s/speed
	# path to get net device duplex = /sys/class/net/%s/duplex
	# path to get net device operstate = /sys/class/net/%s/operstate
	# path to get net device carrier = /sys/class/net/%s/carrier
	# path to get net device mtu = /sys/class/net/%s/mtu
	# enable new interfaces detected at runtime = auto
	# bandwidth for all interfaces = auto
	# packets for all interfaces = auto
	# errors for all interfaces = auto
	# drops for all interfaces = auto
	# fifo for all interfaces = auto
	# compressed packets for all interfaces = auto
	# frames, collisions, carrier counters for all interfaces = auto
	# speed for all interfaces = auto
	# duplex for all interfaces = auto
	# operstate for all interfaces = auto
	# carrier for all interfaces = auto
	# mtu for all interfaces = auto
	# disable by default interfaces matching = lo fireqos* *-ifb

[plugin:proc:/proc/net/dev:eth0]
	# enabled = yes
	# virtual = no
	# bandwidth = auto
	# packets = auto
	# errors = auto
	# drops = auto
	# fifo = auto
	# compressed = auto
	# events = auto
	# speed = auto
	# duplex = auto
	# operstate = auto
	# carrier = auto
	# mtu = auto

[plugin:proc:/proc/net/dev:lo]
	# enabled = no
	# virtual = yes

[plugin:proc:/proc/net/wireless]
	# filename to monitor = /proc/net/wireless
	# status for all interfaces = auto
	# quality for all interfaces = auto
	# discarded packets for all interfaces = auto
	# missed beacon for all interface = auto

[plugin:proc:/proc/net/sockstat]
	# ipv4 sockets = auto
	# ipv4 TCP sockets = auto
	# ipv4 TCP memory = auto
	# ipv4 UDP sockets = auto
	# ipv4 UDP memory = auto
	# ipv4 UDPLITE sockets = auto
	# ipv4 RAW sockets = auto
	# ipv4 FRAG sockets = auto
	# ipv4 FRAG memory = auto
	# update constants every = 60
	# filename to monitor = /proc/net/sockstat

[plugin:proc:/proc/net/sockstat6]
	# ipv6 TCP sockets = auto
	# ipv6 UDP sockets = auto
	# ipv6 UDPLITE sockets = auto
	# ipv6 RAW sockets = auto
	# ipv6 FRAG sockets = auto
	# filename to monitor = /proc/net/sockstat6

[plugin:proc:/proc/net/netstat]
	# bandwidth = auto
	# input errors = auto
	# multicast bandwidth = auto
	# broadcast bandwidth = auto
	# multicast packets = auto
	# broadcast packets = auto
	# ECN packets = auto
	# TCP reorders = auto
	# TCP SYN cookies = auto
	# TCP out-of-order queue = auto
	# TCP connection aborts = auto
	# TCP memory pressures = auto
	# TCP SYN queue = auto
	# TCP accept queue = auto
	# filename to monitor = /proc/net/netstat

[plugin:proc:/proc/net/snmp]
	# ipv4 packets = auto
	# ipv4 fragments sent = auto
	# ipv4 fragments assembly = auto
	# ipv4 errors = auto
	# ipv4 TCP connections = auto
	# ipv4 TCP packets = auto
	# ipv4 TCP errors = auto
	# ipv4 TCP opens = auto
	# ipv4 TCP handshake issues = auto
	# ipv4 UDP packets = auto
	# ipv4 UDP errors = auto
	# ipv4 ICMP packets = auto
	# ipv4 ICMP messages = auto
	# ipv4 UDPLite packets = auto
	# filename to monitor = /proc/net/snmp

[plugin:proc:/proc/net/snmp6]
	# ipv6 packets = auto
	# ipv6 fragments sent = auto
	# ipv6 fragments assembly = auto
	# ipv6 errors = auto
	# ipv6 UDP packets = auto
	# ipv6 UDP errors = auto
	# ipv6 UDPlite packets = auto
	# ipv6 UDPlite errors = auto
	# bandwidth = auto
	# multicast bandwidth = auto
	# broadcast bandwidth = auto
	# multicast packets = auto
	# icmp = auto
	# icmp redirects = auto
	# icmp errors = auto
	# icmp echos = auto
	# icmp group membership = auto
	# icmp router = auto
	# icmp neighbor = auto
	# icmp mldv2 = auto
	# icmp types = auto
	# ect = auto
	# filename to monitor = /proc/net/snmp6

[plugin:proc:/proc/net/sctp/snmp]
	# established associations = auto
	# association transitions = auto
	# fragmentation = auto
	# packets = auto
	# packet errors = auto
	# chunk types = auto
	# filename to monitor = /proc/net/sctp/snmp

[plugin:proc:/proc/net/softnet_stat]
	# softnet_stat per core = yes
	# filename to monitor = /proc/net/softnet_stat

[plugin:proc:/proc/net/ip_vs_stats]
	# IPVS bandwidth = yes
	# IPVS connections = yes
	# IPVS packets = yes
	# filename to monitor = /proc/net/ip_vs_stats

[plugin:proc:/sys/class/infiniband]
	# dirname to monitor = /sys/class/infiniband
	# bandwidth counters = yes
	# packets counters = yes
	# errors counters = yes
	# hardware packets counters = auto
	# hardware errors counters = auto
	# monitor only active ports = auto
	# disable by default interfaces matching = 
	# refresh ports state every seconds = 30

[plugin:proc:/proc/net/stat/nf_conntrack]
	# filename to monitor = /proc/net/stat/nf_conntrack
	# netfilter new connections = no
	# netfilter connection changes = no
	# netfilter connection expectations = no
	# netfilter connection searches = no
	# netfilter errors = no
	# netfilter connections = no

[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_max]
	# filename to monitor = /proc/sys/net/netfilter/nf_conntrack_max
	# read every seconds = 10

[plugin:proc:/proc/sys/net/netfilter/nf_conntrack_count]
	# filename to monitor = /proc/sys/net/netfilter/nf_conntrack_count

[plugin:proc:/proc/net/stat/synproxy]
	# SYNPROXY cookies = auto
	# SYNPROXY SYN received = auto
	# SYNPROXY connections reopened = auto
	# filename to monitor = /proc/net/stat/synproxy

[plugin:proc:/proc/diskstats]
	# enable new disks detected at runtime = yes
	# performance metrics for physical disks = auto
	# performance metrics for virtual disks = auto
	# performance metrics for partitions = no
	# bandwidth for all disks = auto
	# operations for all disks = auto
	# merged operations for all disks = auto
	# i/o time for all disks = auto
	# queued operations for all disks = auto
	# utilization percentage for all disks = auto
	# extended operations for all disks = auto
	# backlog for all disks = auto
	# bcache for all disks = auto
	# bcache priority stats update every = 0
	# remove charts of removed disks = yes
	# path to get block device = /sys/block/%s
	# path to get block device bcache = /sys/block/%s/bcache
	# path to get virtual block device = /sys/devices/virtual/block/%s
	# path to get block device infos = /sys/dev/block/%lu:%lu/%s
	# path to device mapper = /dev/mapper
	# path to /dev/disk/by-label = /dev/disk/by-label
	# path to /dev/disk/by-id = /dev/disk/by-id
	# path to /dev/vx/dsk = /dev/vx/dsk
	# name disks by id = no
	# preferred disk ids = *
	# exclude disks = loop* ram*
	# filename to monitor = /proc/diskstats
	# performance metrics for disks with major 11 = yes
	# performance metrics for disks with major 8 = yes
	# performance metrics for disks with major 253 = yes

[plugin:proc:/proc/diskstats:sr0]
	# enable = yes
	# enable performance metrics = yes
	# bandwidth = auto
	# operations = auto
	# merged operations = auto
	# i/o time = auto
	# queued operations = auto
	# utilization percentage = auto
	# extended operations = auto
	# backlog = auto

[plugin:proc:/proc/diskstats:sda]
	# enable = yes
	# enable performance metrics = yes
	# bandwidth = auto
	# operations = auto
	# merged operations = auto
	# i/o time = auto
	# queued operations = auto
	# utilization percentage = auto
	# extended operations = auto
	# backlog = auto

[plugin:proc:/proc/diskstats:sda1]
	# enable = yes
	# enable performance metrics = no
	# bandwidth = no
	# operations = no
	# merged operations = no
	# i/o time = no
	# queued operations = no
	# utilization percentage = no
	# extended operations = no
	# backlog = no

[plugin:proc:/proc/diskstats:sda2]
	# enable = yes
	# enable performance metrics = no
	# bandwidth = no
	# operations = no
	# merged operations = no
	# i/o time = no
	# queued operations = no
	# utilization percentage = no
	# extended operations = no
	# backlog = no

[plugin:proc:/proc/diskstats:centos-root]
	# enable = yes
	# enable performance metrics = yes
	# bandwidth = auto
	# operations = auto
	# merged operations = auto
	# i/o time = auto
	# queued operations = auto
	# utilization percentage = auto
	# extended operations = auto
	# backlog = auto

[plugin:proc:/proc/diskstats:centos-swap]
	# enable = yes
	# enable performance metrics = yes
	# bandwidth = auto
	# operations = auto
	# merged operations = auto
	# i/o time = auto
	# queued operations = auto
	# utilization percentage = auto
	# extended operations = auto
	# backlog = auto

[plugin:proc:/proc/mdstat]
	# faulty devices = yes
	# nonredundant arrays availability = yes
	# mismatch count = auto
	# disk stats = yes
	# operation status = yes
	# make charts obsolete = yes
	# filename to monitor = /proc/mdstat
	# mismatch_cnt filename to monitor = /sys/block/%s/md/mismatch_cnt

[plugin:proc:/proc/net/rpc/nfsd]
	# filename to monitor = /proc/net/rpc/nfsd

[plugin:proc:/proc/net/rpc/nfs]
	# filename to monitor = /proc/net/rpc/nfs

[plugin:proc:/proc/spl/kstat/zfs/arcstats]
	# filename to monitor = /proc/spl/kstat/zfs/arcstats

[plugin:proc:/proc/spl/kstat/zfs]
	# directory to monitor = /proc/spl/kstat/zfs

[plugin:proc:/sys/fs/btrfs]
	# path to monitor = /sys/fs/btrfs
	# check for btrfs changes every = 60
	# physical disks allocation = auto
	# data allocation = auto
	# metadata allocation = auto
	# system allocation = auto

[plugin:proc:ipc]
	# message queues = yes
	# semaphore totals = yes
	# shared memory totals = yes
	# msg filename to monitor = /proc/sysvipc/msg
	# shm filename to monitor = /proc/sysvipc/shm
	# max dimensions in memory allowed = 50

[plugin:proc:/sys/class/power_supply]
	# battery capacity = yes
	# battery charge = no
	# battery energy = no
	# power supply voltage = no
	# keep files open = auto
	# directory to monitor = /sys/class/power_supply


Second half of netdata.conf:



# per chart configuration

[CONFIG_SECTION_GLOBAL_STATISTICS]
	# update every = 1

[system.idlejitter]
	# enabled = yes

[netdata.statsd_metrics]
	# enabled = yes

[netdata.statsd_useful_metrics]
	# enabled = yes

[netdata.statsd_events]
	# enabled = yes

[netdata.statsd_reads]
	# enabled = yes

[netdata.statsd_bytes]
	# enabled = yes

[netdata.statsd_packets]
	# enabled = yes

[netdata.tcp_connects]
	# enabled = yes

[netdata.tcp_connected]
	# enabled = yes

[netdata.private_charts]
	# enabled = yes

[netdata.plugin_statsd_charting_cpu]
	# enabled = yes

[netdata.plugin_statsd_collector1_cpu]
	# enabled = yes

[netdata.plugin_tc_cpu]
	# enabled = yes

[netdata.plugin_tc_time]
	# enabled = yes

[netdata.aclk_status]
	# enabled = yes

[netdata.server_cpu]
	# enabled = yes

[netdata.uptime]
	# enabled = yes

[disk_space._]
	# enabled = yes

[system.cpu]
	# enabled = yes

[netdata.aclk_query_per_second]
	# enabled = yes

[netdata.plugin_cgroups_cpu]
	# enabled = yes

[netdata.clients]
	# enabled = yes

[netdata.requests]
	# enabled = yes

[disk_inodes._]
	# enabled = yes

[netdata.net]
	# enabled = yes

[netdata.aclk_cloud_req]
	# enabled = yes

[netdata.aclk_processed_query_type]
	# enabled = yes

[cpu.cpu0]
	# enabled = yes

[netdata.response_time]
	# enabled = yes

[disk_space._dev]
	# enabled = yes

[netdata.compression_ratio]
	# enabled = yes

[cpu.cpu1]
	# enabled = yes

[netdata.dbengine_compression_ratio]
	# enabled = yes

[netdata.aclk_cloud_req_http_type]
	# enabled = yes

[disk_inodes._dev]
	# enabled = yes

[netdata.page_cache_hit_ratio]
	# enabled = yes

[cpu.cpu2]
	# enabled = yes

[netdata.aclk_query_threads]
	# enabled = yes

[netdata.page_cache_stats]
	# enabled = yes

[disk_space._dev_shm]
	# enabled = yes

[disk_inodes._dev_shm]
	# enabled = yes

[disk_space._run]
	# enabled = yes

[disk_inodes._run]
	# enabled = yes

[disk_space._boot]
	# enabled = yes

[netdata.dbengine_long_term_page_stats]
	# enabled = yes

[netdata.aclk_query_time]
	# enabled = yes

[disk_inodes._boot]
	# enabled = yes

[netdata.aclk_protobuf_rx_types]
	# enabled = yes

[netdata.dbengine_io_throughput]
	# enabled = yes

[netdata.dbengine_io_operations]
	# enabled = yes

[disk_space._run_netdata]
	# enabled = yes

[cpu.cpu3]
	# enabled = yes

[netdata.dbengine_global_errors]
	# enabled = yes

[disk_inodes._run_netdata]
	# enabled = yes

[netdata.apps_cpu]
	# enabled = yes

[netdata.apps_sizes]
	# enabled = yes

[disk_space._var_spool_postfix_maildrop]
	# enabled = yes

[netdata.dbengine_global_file_descriptors]
	# enabled = yes

[cpu.cpu4]
	# enabled = yes

[disk_inodes._var_spool_postfix_maildrop]
	# enabled = yes

[netdata.dbengine_ram]
	# enabled = yes

[netdata.plugin_diskspace]
	# enabled = yes

[netdata.apps_fix]
	# enabled = yes

[netdata.plugin_diskspace_dt]
	# enabled = yes

[cpu.cpu5]
	# enabled = yes

[netdata.apps_children_fix]
	# enabled = yes

[cpu.cpu6]
	# enabled = yes

[system.processes_state]
	# enabled = yes

[cpu.cpu7]
	# enabled = yes

[apps.cpu]
	# enabled = yes

[system.intr]
	# enabled = yes

[system.ctxt]
	# enabled = yes

[system.forks]
	# enabled = yes

[system.processes]
	# enabled = yes

[cpu.cpu0_cpuidle]
	# enabled = yes

[cpu.cpu1_cpuidle]
	# enabled = yes

[cpu.cpu2_cpuidle]
	# enabled = yes

[apps.mem]
	# enabled = yes

[cpu.cpu3_cpuidle]
	# enabled = yes

[cpu.cpu4_cpuidle]
	# enabled = yes

[cpu.cpu5_cpuidle]
	# enabled = yes

[cpu.cpu6_cpuidle]
	# enabled = yes

[cpu.cpu7_cpuidle]
	# enabled = yes

[system.uptime]
	# enabled = yes

[system.load]
	# enabled = yes

[system.active_processes]
	# enabled = yes

[system.entropy]
	# enabled = yes

[apps.vmem]
	# enabled = yes

[system.interrupts]
	# enabled = yes

[apps.threads]
	# enabled = yes

[cpu.cpu0_interrupts]
	# enabled = yes

[apps.processes]
	# enabled = yes

[cpu.cpu1_interrupts]
	# enabled = yes

[cpu.cpu2_interrupts]
	# enabled = yes

[apps.uptime]
	# enabled = yes

[cpu.cpu3_interrupts]
	# enabled = yes

[cpu.cpu4_interrupts]
	# enabled = yes

[apps.cpu_user]
	# enabled = yes

[cpu.cpu5_interrupts]
	# enabled = yes

[cpu.cpu6_interrupts]
	# enabled = yes

[cpu.cpu7_interrupts]
	# enabled = yes

[apps.cpu_system]
	# enabled = yes

[system.softirqs]
	# enabled = yes

[cpu.cpu0_softirqs]
	# enabled = yes

[cpu.cpu1_softirqs]
	# enabled = yes

[apps.swap]
	# enabled = yes

[cpu.cpu2_softirqs]
	# enabled = yes

[cpu.cpu3_softirqs]
	# enabled = yes

[apps.major_faults]
	# enabled = yes

[cpu.cpu4_softirqs]
	# enabled = yes

[cpu.cpu5_softirqs]
	# enabled = yes

[cpu.cpu6_softirqs]
	# enabled = yes

[cpu.cpu7_softirqs]
	# enabled = yes

[apps.minor_faults]
	# enabled = yes

[system.pgpgio]
	# enabled = yes

[apps.preads]
	# enabled = yes

[mem.pgfaults]
	# enabled = yes

[system.ram]
	# enabled = yes

[mem.available]
	# enabled = yes

[system.swap]
	# enabled = yes

[mem.committed]
	# enabled = yes

[mem.writeback]
	# enabled = yes

[apps.pwrites]
	# enabled = yes

[mem.kernel]
	# enabled = yes

[mem.slab]
	# enabled = yes

[mem.transparent_hugepages]
	# enabled = yes

[net.eth0]
	# enabled = yes

[net_operstate.eth0]
	# enabled = yes

[net_carrier.eth0]
	# enabled = yes

[net_mtu.eth0]
	# enabled = yes

[net_packets.eth0]
	# enabled = yes

[system.net]
	# enabled = yes

[apps.lreads]
	# enabled = yes

[ipv4.sockstat_sockets]
	# enabled = yes

[ipv4.sockstat_tcp_sockets]
	# enabled = yes

[ipv4.sockstat_tcp_mem]
	# enabled = yes

[ipv4.sockstat_udp_sockets]
	# enabled = yes

[ipv6.sockstat6_tcp_sockets]
	# enabled = yes

[ipv6.sockstat6_udp_sockets]
	# enabled = yes

[ipv6.sockstat6_raw_sockets]
	# enabled = yes

[ip.tcpconnaborts]
	# enabled = yes

[apps.lwrites]
	# enabled = yes

[ip.tcpofo]
	# enabled = yes

[ip.tcpsyncookies]
	# enabled = yes

[system.ip]
	# enabled = yes

[ip.ecnpkts]
	# enabled = yes

[ipv4.packets]
	# enabled = yes

[ipv4.errors]
	# enabled = yes

[apps.files]
	# enabled = yes

[ipv4.icmp]
	# enabled = yes

[ipv4.icmp_errors]
	# enabled = yes

[ipv4.icmpmsg]
	# enabled = yes

[apps.sockets]
	# enabled = yes

[ipv4.tcpsock]
	# enabled = yes

[ipv4.tcppackets]
	# enabled = yes

[ipv4.tcperrors]
	# enabled = yes

[ipv4.tcpopens]
	# enabled = yes

[ipv4.tcphandshake]
	# enabled = yes

[apps.pipes]
	# enabled = yes

[ipv4.udppackets]
	# enabled = yes

[ipv4.udperrors]
	# enabled = yes

[system.ipv6]
	# enabled = yes

[ipv6.packets]
	# enabled = yes

[users.cpu]
	# enabled = yes

[ipv6.udppackets]
	# enabled = yes

[ipv6.udperrors]
	# enabled = yes

[ipv6.mcast]
	# enabled = yes

[users.mem]
	# enabled = yes

[ipv6.mcastpkts]
	# enabled = yes

[ipv6.icmp]
	# enabled = yes

[ipv6.icmperrors]
	# enabled = yes

[users.vmem]
	# enabled = yes

[netdata.web_thread1_cpu]
	# enabled = yes

[ipv6.icmprouter]
	# enabled = yes

[users.threads]
	# enabled = yes

[ipv6.icmpneighbor]
	# enabled = yes

[ipv6.icmpmldv2]
	# enabled = yes

[ipv6.icmptypes]
	# enabled = yes

[users.processes]
	# enabled = yes

[ipv6.ect]
	# enabled = yes

[users.uptime]
	# enabled = yes

[system.softnet_stat]
	# enabled = yes

[cpu.cpu0_softnet_stat]
	# enabled = yes

[cpu.cpu1_softnet_stat]
	# enabled = yes

[users.cpu_user]
	# enabled = yes

[cpu.cpu2_softnet_stat]
	# enabled = yes

[cpu.cpu3_softnet_stat]
	# enabled = yes

[users.cpu_system]
	# enabled = yes

[cpu.cpu4_softnet_stat]
	# enabled = yes

[cpu.cpu5_softnet_stat]
	# enabled = yes

[users.swap]
	# enabled = yes

[cpu.cpu6_softnet_stat]
	# enabled = yes

[cpu.cpu7_softnet_stat]
	# enabled = yes

[users.major_faults]
	# enabled = yes

[disk.sda]
	# enabled = yes

[users.minor_faults]
	# enabled = yes

[disk_ext.sda]
	# enabled = yes

[disk_ops.sda]
	# enabled = yes

[disk_ext_ops.sda]
	# enabled = yes

[disk_backlog.sda]
	# enabled = yes

[disk_busy.sda]
	# enabled = yes

[disk_util.sda]
	# enabled = yes

[disk_mops.sda]
	# enabled = yes

[disk_ext_mops.sda]
	# enabled = yes

[disk_iotime.sda]
	# enabled = yes

[disk_ext_iotime.sda]
	# enabled = yes

[disk.dm-0]
	# enabled = yes

[users.preads]
	# enabled = yes

[disk_ext.dm-0]
	# enabled = yes

[disk_ops.dm-0]
	# enabled = yes

[users.pwrites]
	# enabled = yes

[disk_ext_ops.dm-0]
	# enabled = yes

[disk_backlog.dm-0]
	# enabled = yes

[disk_busy.dm-0]
	# enabled = yes

[disk_util.dm-0]
	# enabled = yes

[disk_iotime.dm-0]
	# enabled = yes

[disk_ext_iotime.dm-0]
	# enabled = yes

[users.lreads]
	# enabled = yes

[disk.dm-1]
	# enabled = yes

[disk_ext.dm-1]
	# enabled = yes

[disk_ops.dm-1]
	# enabled = yes

[users.lwrites]
	# enabled = yes

[disk_ext_ops.dm-1]
	# enabled = yes

[disk_backlog.dm-1]
	# enabled = yes

[disk_busy.dm-1]
	# enabled = yes

[disk_util.dm-1]
	# enabled = yes

[disk_iotime.dm-1]
	# enabled = yes

[users.files]
	# enabled = yes

[disk_ext_iotime.dm-1]
	# enabled = yes

[system.io]
	# enabled = yes

[system.ipc_semaphores]
	# enabled = yes

[users.sockets]
	# enabled = yes

[system.ipc_semaphore_arrays]
	# enabled = yes

[system.shared_memory_segments]
	# enabled = yes

[system.shared_memory_bytes]
	# enabled = yes

[netdata.plugin_proc_cpu]
	# enabled = yes

[netdata.plugin_proc_modules]
	# enabled = yes

[users.pipes]
	# enabled = yes

[groups.cpu]
	# enabled = yes

[groups.mem]
	# enabled = yes

[groups.vmem]
	# enabled = yes

[groups.threads]
	# enabled = yes

[groups.processes]
	# enabled = yes

[groups.uptime]
	# enabled = yes

[groups.cpu_user]
	# enabled = yes

[groups.cpu_system]
	# enabled = yes

[groups.swap]
	# enabled = yes

[groups.major_faults]
	# enabled = yes

[groups.minor_faults]
	# enabled = yes

[groups.preads]
	# enabled = yes

[groups.pwrites]
	# enabled = yes

[groups.lreads]
	# enabled = yes

[groups.lwrites]
	# enabled = yes

[groups.files]
	# enabled = yes

[groups.sockets]
	# enabled = yes

[groups.pipes]
	# enabled = yes

[disk_await.sda]
	# enabled = yes

[disk_ext_await.sda]
	# enabled = yes

[disk_avgsz.sda]
	# enabled = yes

[disk_ext_avgsz.sda]
	# enabled = yes

[disk_svctm.sda]
	# enabled = yes

[disk_await.dm-0]
	# enabled = yes

[disk_ext_await.dm-0]
	# enabled = yes

[disk_avgsz.dm-0]
	# enabled = yes

[disk_ext_avgsz.dm-0]
	# enabled = yes

[disk_svctm.dm-0]
	# enabled = yes

[disk_await.dm-1]
	# enabled = yes

[disk_ext_await.dm-1]
	# enabled = yes

[disk_avgsz.dm-1]
	# enabled = yes

[disk_ext_avgsz.dm-1]
	# enabled = yes

[disk_svctm.dm-1]
	# enabled = yes

[netdata.runtime_postfix_local]
	# enabled = yes

[system.clock_sync_state]
	# enabled = yes

[system.clock_status]
	# enabled = yes

[system.clock_sync_offset]
	# enabled = yes

[netdata.plugin_timex]
	# enabled = yes

[netdata.plugin_timex_dt]
	# enabled = yes

[postfix_local.qemails]
	# enabled = yes

[postfix_local.qsize]
	# enabled = yes

[netdata.queries]
	# enabled = yes

[netdata.db_points]
	# enabled = yes

UPDATE:

I completely removed the agent from the node and then re-installed it using the instructions from the docs (Installation → Linux from Git). I then ran the script from our Netdata Cloud instance to claim the node.

This time the agent was installed with the go.d orchestrator, and I was able to configure and enable the CouchDB collector. It is now collecting CouchDB stats from the local node.
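For anyone else following along, the working setup looks roughly like the minimal sketch below. The URL, node name, and credentials are assumptions for a default local CouchDB install; adjust them to your environment:

```yaml
# /etc/netdata/go.d/couchdb.conf (opened via: sudo ./edit-config go.d/couchdb.conf)
jobs:
  - name: local
    url: http://127.0.0.1:5984    # default CouchDB port; change if yours differs
    node: couchdb@127.0.0.1       # Erlang node name your CouchDB reports under
    # username: admin             # uncomment if your instance requires auth
    # password: secret
```

After saving, restart the agent (e.g. `sudo systemctl restart netdata`) so the go.d plugin picks up the change.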

Additionally, the edit-config script and the .environment file are in the /etc/netdata/ directory, as expected.

It seems like the kickstart.sh script doesn't like my CentOS VMs.

I will double-check the kickstart script on CentOS 7.9.

So are you ok now?

Yes, I am technically OK, since I now have a working agent.
Looking forward to hearing back about the script for CentOS 7.9, since it's definitely less time-consuming to use it.
Thanks Tasos!