Hi guys!
I have a server with 4 NVMe disks formatted as a btrfs RAID.
I only mount one root btrfs subvolume, with a simple command like this:
mount /dev/nvme0n1 /home
My problem is that Netdata is reporting wrong disk usage (at least I think and hope that it's wrong :slight_smile:).
For example, it shows that nvme0n1 has 100% disk utilization, but I cannot see that with other tools, like iostat.
The iostat output for the nvme0n1 disk shows something close to ~2% util, while Netdata reports 100% util.
Netdata also shows that my disk backlog is 94:33:22.
Is there a way for me to confirm this utilization and disk backlog with some Linux tools? I don't see any slowness during server usage, so I'm wondering if these Netdata checks are correct.
ilyam8
March 12, 2021, 8:03am
2
Hi @Stanislav_Rybak
No worries, I think your disk is OK.
Utilization and backlog are field 10 and field 11 from /proc/diskstats.
Let's do the following:
for ((i=0; i<10; i++)); do cat /proc/diskstats; echo; sleep 1; done
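If you want to cross-check Netdata yourself, utilization over an interval is just the growth of field 10 (time spent doing I/O, in milliseconds) divided by the interval length. A rough sketch, assuming a 1-second interval (field 10 of the stats is column 13 of the file; replace nvme0n1 with the device you care about):
a=$(awk '$3=="nvme0n1" {print $13}' /proc/diskstats)
sleep 1
b=$(awk '$3=="nvme0n1" {print $13}' /proc/diskstats)
echo "busy $((b - a)) ms out of 1000 ms = $(( (b - a) / 10 ))% util"
Backlog is the same idea with field 11 (weighted time spent doing I/O).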
ilyam8
March 12, 2021, 8:07am
3
OK, I see; then both of those metrics make no sense for you. I believe you can ignore both alarms (they are to: silent, so you only see them on the dashboard).
That utilization is in fact busy_ms; 100% busy doesn't mean the disk can't handle any additional operations.
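For example, if at least one request is in flight during every millisecond of a 1-second window, field 10 grows by 1000 ms and the chart reads 100%, even though an NVMe drive services many commands in parallel and may still have plenty of headroom.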
Hi @ilyam8, thanks for the quick reply!
So, if I understand correctly, these checks don't really work for RAID setups?
Output of the command you suggested above (I only show the lines related to the NVMe disks):
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132976 0 15230188912 122981367 0 18206888 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990071 0 15044468056 128045405 0 19637016 129053930 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990071 0 15044468056 127307887 0 19624152 128294735 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132976 0 15230188912 127702000 0 18146924 128718110 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132976 0 15230188912 122981367 0 18206888 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990078 0 15044468112 128045406 0 19637044 129053930 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990078 0 15044468112 127307887 0 19624180 128294735 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132976 0 15230188912 127702000 0 18146924 128718110 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132976 0 15230188912 122981367 0 18206888 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990084 0 15044468160 128045406 0 19637068 129053931 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990084 0 15044468160 127307887 0 19624204 128294735 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132976 0 15230188912 127702000 0 18146924 128718110 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132979 0 15230189016 122981367 0 18206892 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990090 0 15044468208 128045406 0 19637092 129053931 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990090 0 15044468208 127307887 0 19624228 128294735 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132979 0 15230189016 127702000 0 18146928 128718111 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132979 0 15230189016 122981367 0 18206892 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990097 0 15044468264 128045407 0 19637120 129053931 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990097 0 15044468264 127307888 0 19624256 128294735 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132979 0 15230189016 127702000 0 18146928 128718111 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132979 0 15230189016 122981367 0 18206892 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990104 0 15044468320 128045407 0 19637148 129053932 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990104 0 15044468320 127307888 0 19624284 128294736 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132979 0 15230189016 127702000 0 18146928 128718111 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132979 0 15230189016 122981367 0 18206892 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990111 0 15044468376 128045407 0 19637176 129053932 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990111 0 15044468376 127307888 0 19624312 128294736 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132979 0 15230189016 127702000 0 18146928 128718111 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132979 0 15230189016 122981367 0 18206892 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990118 0 15044468432 128045408 0 19637204 129053932 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990118 0 15044468432 127307888 0 19624340 128294736 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132979 0 15230189016 127702000 0 18146928 128718111 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132985 0 15230189312 122981367 0 18206896 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990124 0 15044468480 128045408 0 19637228 129053933 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990124 0 15044468480 127307889 0 19624368 128294737 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132985 0 15230189312 127702001 0 18146932 128718111 0 0 0 0 0 0
0 0 nvme0c0n1 4305520 0 1018798892 1013570 143132985 0 15230189312 122981367 0 18206896 123994937 0 0 0 0 0 0
0 0 nvme1c1n1 4209251 0 1013070276 1008524 138990131 0 15044468536 128045408 0 19637256 129053933 0 0 0 0 0 0
0 0 nvme2c2n1 4148287 0 1004127380 986847 138990131 0 15044468536 127307889 0 19624396 128294737 0 0 0 0 0 0
0 0 nvme3c3n1 4234757 0 1020006540 1016110 143132985 0 15230189312 127702001 0 18146932 128718111 0 0 0 0 0 0
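If I read these numbers right, nvme1c1n1's field 10 only grows from 19637016 to 19637256 across these ten samples, i.e. about 240 ms of busy time over roughly 9 seconds, or ~2-3% utilization. That matches what iostat shows rather than the 100% Netdata reports.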
Look at #5744 (comment). In your case the problem is the same: wrong major and minor numbers.
0 0 nvme0c0n1
0 0 nvme1c1n1
You can delete the alarms (they are not accurate anyway) or try applying #5799 as a temporary workaround, but in general, it is better to look for a correctly working NVMe driver.
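If you want to see where those 0 0 entries come from, you can compare what is registered under /sys/dev/block with what /proc/diskstats reports, and check whether native NVMe multipath is enabled (just a sketch; the last file only exists if the kernel was built with CONFIG_NVME_MULTIPATH):
ls -l /sys/dev/block | grep nvme
grep nvme /proc/diskstats | awk '{print $1":"$2, $3}'
cat /sys/module/nvme_core/parameters/multipath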
Hey @vlvkobal,
What do you mean by the alarms not being accurate? I know that we are working on improving our sane defaults, so should I ping the product team to look into these alarms as well?
Hi @vlvkobal, thanks for the explanation.
I was checking the GitHub issue, where you mention that this happens with specific kernel versions. I have version 5.8.0. I also saw some reports that changing the I/O scheduler for NVMe disks may produce correct values, but apparently I already have it set:
sudo cat /sys/block/nvme0c0n1/queue/scheduler
[none] mq-deadline
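If I read that correctly, the entry in brackets is the active scheduler, so none is what is actually in use. I could switch to mq-deadline at runtime with something like this (just a sketch; it does not persist across reboots):
echo mq-deadline | sudo tee /sys/block/nvme0c0n1/queue/scheduler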
More output from /proc/diskstats:
259 1 nvme0n1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 nvme1c1n1 4209279 0 1013071868 1008528 139996859 0 15150482400 128970122 0 19902480 129978651 0 0 0 0 0 0
259 3 nvme1n1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 nvme2c2n1 4148293 0 1004127548 986848 139996859 0 15150482400 128216599 0 19889420 129203448 0 0 0 0 0 0
259 5 nvme2n1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 nvme3c3n1 4240838 0 1020201132 1016674 143845232 0 15300125192 128137941 0 18301524 129154615 0 0 0 0 0 0
259 7 nvme3n1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
As you can see, those entries have the correct major and minor device numbers, but they don't actually show any usage data.
Disk utilization can be misleading depending on actual hardware. There is nothing we can do to improve it.
@Stanislav_Rybak can you show the whole content of /proc/diskstats, /proc/self/mountinfo, and ls -al /sys/dev/block?
ilyam8
March 12, 2021, 2:29pm
10
@Stanislav_Rybak and show iostat -x 1 5
@vlvkobal
cat /proc/diskstats
7 0 loop0 331 0 5140 2271 0 0 0 0 0 2548 2271 0 0 0 0 0 0
7 1 loop1 131 0 2500 36 0 0 0 0 0 84 36 0 0 0 0 0 0
7 2 loop2 1445 0 124780 2370 0 0 0 0 0 6992 2370 0 0 0 0 0 0
7 3 loop3 131 0 3421 40 0 0 0 0 0 160 40 0 0 0 0 0 0
7 4 loop4 481 0 30440 21 0 0 0 0 0 1244 21 0 0 0 0 0 0
7 5 loop5 11 0 28 1 0 0 0 0 0 8 1 0 0 0 0 0 0
7 6 loop6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 7 loop7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 nvme0c0n1 5168748 0 1174237820 1086093 150468469 0 15979669792 128958373 0 19646296 130044467 0 0 0 0 0 0
259 1 nvme0n1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 nvme1c1n1 4209279 0 1013071868 1008528 147474769 0 15907407376 136242072 0 22185696 137250601 0 0 0 0 0 0
259 3 nvme1n1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 nvme2c2n1 4148293 0 1004127548 986848 147474769 0 15907407376 135619159 0 22172020 136606008 0 0 0 0 0 0
259 5 nvme2n1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 nvme3c3n1 4240838 0 1020201132 1016674 150468469 0 15979669792 133776221 0 19522376 134792895 0 0 0 0 0 0
259 7 nvme3n1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 0 sda 67773 17915 3819581 54673 6807560 12496849 919745212 14022152 0 2286072 14076826 0 0 0 0 0 0
8 1 sda1 67215 16252 3797268 54407 6807558 12496849 919745210 14022150 0 2285824 14076558 0 0 0 0 0 0
8 14 sda14 175 0 1550 77 0 0 0 0 0 148 77 0 0 0 0 0 0
8 15 sda15 247 1663 17994 126 2 0 2 1 0 196 128 0 0 0 0 0 0
cat /proc/self/mountinfo | grep nvme
(I scoped it to the nvme disks, as the full output is huge!)
862 29 0:51 / /home rw,relatime shared:538 - btrfs /dev/nvme0n1 rw,ssd,space_cache,subvolid=5,subvol=/
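Since the filesystem is mounted via /dev/nvme0n1 but spans all four devices, I can also check the per-device btrfs layout directly (standard btrfs-progs commands, shown just for completeness):
sudo btrfs filesystem show /home
sudo btrfs filesystem usage /home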
@ilyam8 iostat -x 1 5
Linux 5.8.0-44-generic (oracle01) 03/13/2021 _x86_64_ (128 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
9.69 0.00 3.59 0.01 0.00 86.71
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
loop0 0.00 0.00 0.00 0.00 6.86 7.76 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop1 0.00 0.00 0.00 0.00 0.27 9.54 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.07 0.00 0.00 1.64 43.18 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop3 0.00 0.00 0.00 0.00 0.31 13.06 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop4 0.00 0.02 0.00 0.00 0.04 31.64 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop5 0.00 0.00 0.00 0.00 0.09 1.27 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme0c0n1 5.39 612.43 0.00 0.00 0.21 113.59 156.96 8334.52 0.00 0.00 0.86 53.10 0.00 0.00 0.00 0.00 0.00 0.00 0.14 2.05
nvme1c1n1 4.39 528.38 0.00 0.00 0.24 120.34 153.84 8296.67 0.00 0.00 0.92 53.93 0.00 0.00 0.00 0.00 0.00 0.00 0.14 2.31
nvme2c2n1 4.33 523.71 0.00 0.00 0.24 121.03 153.84 8296.67 0.00 0.00 0.92 53.93 0.00 0.00 0.00 0.00 0.00 0.00 0.14 2.31
nvme3c3n1 4.42 532.09 0.00 0.00 0.24 120.28 156.96 8334.52 0.00 0.00 0.89 53.10 0.00 0.00 0.00 0.00 0.00 0.00 0.14 2.04
sda 0.07 1.99 0.02 20.91 0.81 28.18 7.10 479.70 13.04 64.74 2.06 67.55 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.24
avg-cpu: %user %nice %system %iowait %steal %idle
15.61 0.00 4.11 0.01 0.00 80.27
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme0c0n1 0.00 0.00 0.00 0.00 0.00 0.00 1.00 4.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40
nvme1c1n1 0.00 0.00 0.00 0.00 0.00 0.00 5.00 20.00 0.00 0.00 0.20 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00
nvme2c2n1 0.00 0.00 0.00 0.00 0.00 0.00 5.00 20.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00
nvme3c3n1 0.00 0.00 0.00 0.00 0.00 0.00 1.00 4.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
16.45 0.00 4.00 0.01 0.00 79.54
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme0c0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme1c1n1 0.00 0.00 0.00 0.00 0.00 0.00 5.00 20.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00
nvme2c2n1 0.00 0.00 0.00 0.00 0.00 0.00 5.00 20.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00
nvme3c3n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
16.74 0.00 4.05 0.01 0.00 79.20
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme0c0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme1c1n1 0.00 0.00 0.00 0.00 0.00 0.00 5.00 20.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00
nvme2c2n1 0.00 0.00 0.00 0.00 0.00 0.00 5.00 20.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00
nvme3c3n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
9.78 0.00 3.42 0.00 0.00 86.80
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
loop0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme0c0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme1c1n1 0.00 0.00 0.00 0.00 0.00 0.00 7.00 28.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.80
nvme2c2n1 0.00 0.00 0.00 0.00 0.00 0.00 7.00 28.00 0.00 0.00 0.14 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.80
nvme3c3n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
@Stanislav_Rybak could you provide the output of ls -al /sys/dev/block?
ilyam8
March 19, 2021, 1:56pm
13
And tree -l -d -L 3 /sys/block/
Hi guys!
I updated to the latest version and it seems that btrfs does not have false reports anymore. Looking good!
Thanks!
ilyam8
April 6, 2021, 3:37pm
16
Thanks for letting us know @Stanislav_Rybak