I decided to try out Austin Hemmelgarn's method, and I can say definitively that it works... almost too well.
I wanted to run a quick test with the Fedora 40 Workstation VM I already have installed, so first I booted it up and ran a baseline disk benchmark in GNOME Disks (because the pictures are pretty and precision wasn't my concern). That run looked like this:
!["before" benchmark](https://cdn.statically.io/img/i.sstatic.net/QSgT3OMn.png)
(My system has a lot of other stuff going on, and it's an older machine, so despite the SSD the speeds are kind of all over the place in VM-world.)
Then I shut down the VM, opened up its KVM config, and made the following edit, setting the max total transfer rate for the VM's boot (and only) virtual disk waaaaay down to 100 KB/s:
```diff
diff --git a/fedora40-wor.txt b/fedora40-wor.txt
index 25e7f9d..d46595a 100644
--- a/fedora40-wor.txt
+++ b/fedora40-wor.txt
@@ -53,6 +53,10 @@
       <driver name="qemu" type="qcow2" cache="writeback" discard="unmap"/>
       <source file="/home/ferd/.local/share/gnome-boxes/images/fedora40-wor"/>
       <target dev="vda" bus="virtio"/>
+      <iotune>
+        <total_bytes_sec>100000</total_bytes_sec>
+        <read_iops_sec>20000</read_iops_sec>
+      </iotune>
       <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
     </disk>
     <disk type="file" device="cdrom">
```
(You can also add a `<write_iops_sec>` limit if you want to constrain write operations.)
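For reference, `<iotune>` accepts a few more throttles than the two I used. Here's a rough sketch of the basic set (the element names come from the libvirt domain-XML schema; the numbers are just placeholders):

```xml
<!-- Illustrative values only; adjust to taste.
     For each kind (bytes or iops), use EITHER the total_* element OR the
     read_*/write_* pair; libvirt won't accept both at once. -->
<iotune>
  <!-- throughput caps, in bytes per second -->
  <total_bytes_sec>10000000</total_bytes_sec>
  <!-- <read_bytes_sec>5000000</read_bytes_sec> -->
  <!-- <write_bytes_sec>5000000</write_bytes_sec> -->
  <!-- operation-rate caps, in I/O operations per second -->
  <total_iops_sec>20000</total_iops_sec>
  <!-- <read_iops_sec>10000</read_iops_sec> -->
  <!-- <write_iops_sec>10000</write_iops_sec> -->
</iotune>
```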
...Then I booted it back up. Well... started to. I was able to monitor the VM's disk activity in virsh with `domstats <domain> --block`, and as I watched the `block.0.rd.bytes=` value slooowly creep upwards, I realized I'd forgotten just how much disk activity is involved in booting up a Linux system.
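If you want to watch the same counters yourself, something like this works from the host. (A minimal sketch: I'm assuming the libvirt domain is named `fedora40-wor` to match the disk image above, so substitute your own domain name; GNOME Boxes VMs live in the user session, so you may also need `-c qemu:///session`.)

```bash
# Poll the block-device stats once a second and pull out the cumulative
# read/write byte counters (block.N.rd.bytes / block.N.wr.bytes).
watch -n 1 "virsh domstats fedora40-wor --block | grep -E 'rd\.bytes|wr\.bytes'"
```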
So after around 60 seconds of getting nowhere, I gave up, `destroy <domain>`'d the VM instance, went back into the configuration, and added a pair of zeros to the `total_bytes_sec` value to make it a more-tolerable 10 MB/s max rate.
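(As an aside, not something I did here: the same limits can also be changed on the fly, without editing the XML at all, using `virsh blkdeviotune`. A sketch, again assuming the domain is named `fedora40-wor` and the disk target is `vda` as in the config above:)

```bash
# With no tuning options, this just prints the device's current throttle settings.
virsh blkdeviotune fedora40-wor vda

# Raise the cap to 10 MB/s on the running guest (--live) and also persist it
# into the domain XML for future boots (--config).
virsh blkdeviotune fedora40-wor vda --total-bytes-sec 10000000 --live --config
```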
That got the system to boot up in under a minute (actually, in only a few seconds), and I headed back into GNOME Disks to repeat the benchmark.
It's a good thing I wasn't looking for precision, because GNOME Disks definitely does NOT deliver it. As I said, I set the max transfer rate to 10 MB/s. As my benchmark run of 100 20 MB samples progressed, I could see that each sample was taking roughly 2 seconds, which checks out (20 MB ÷ 10 MB/s = 2 s per sample, or about 200 s for the whole run).
Disks, though, continuously estimated the read speed at an impossible 30 MB/s for most of the run, except for a couple of spots where it jumped up to really impossible values, hence the crazy spike and nonsense numbers in this image. But the actual benchmark run took right around 200 seconds, 2 seconds per sample, which is consistent with the limits I'd set.
!["after" benchmark](https://cdn.statically.io/img/i.sstatic.net/04qcjICY.png)