
I wonder: rpm uses a (simple) database to store things like package names, but querying that database with rpm -qa is terribly slow.

For example, this command took almost a minute to complete while the disk was "shuffling":

# rpm -qa rtl8812au-kmp\*
rtl8812au-kmp-preempt-5.9.3.2+git20210427.6ef5d8f_k5.3.18_57-lp153.1.1.x86_64
rtl8812au-kmp-default-5.9.3.2+git20210427.6ef5d8f_k5.3.18_57-lp153.1.1.x86_64

Of course the second invocation was much faster, as all the data was in the disk cache by then, so it took "only" 2.2 seconds. Fast is spelled differently, however.
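
As an aside, the cold-cache case can be reproduced without rebooting by dropping the page cache first; a minimal sketch (run as root; it evicts cached data system-wide, so everything else gets slower for a moment too):

sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes
time rpm -qa 'rtl8812au-kmp*'         # repeat the query against a cold cache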

# grep MHz /proc/cpuinfo 
cpu MHz     : 3998.528
...
# LANG= time dd if=/var/lib/rpm/Packages of=/dev/null
748816+0 records in
748816+0 records out
383393792 bytes (383 MB, 366 MiB) copied, 0.606612 s, 632 MB/s
0.30user 0.29system 0:00.60elapsed 99%CPU (0avgtext+0avgdata 1876maxresident)k
0inputs+0outputs (0major+81minor)pagefaults 0swaps
# LANG= time rpm -qa rtl8812au-kmp\*
rtl8812au-kmp-preempt-5.9.3.2+git20210427.6ef5d8f_k5.3.18_57-lp153.1.1.x86_64
rtl8812au-kmp-default-5.9.3.2+git20210427.6ef5d8f_k5.3.18_57-lp153.1.1.x86_64
2.17user 0.06system 0:02.24elapsed 99%CPU (0avgtext+0avgdata 32472maxresident)k
0inputs+0outputs (0major+6302minor)pagefaults 0swaps

So reading the complete Packages database sequentially is faster than looking up a single package name!
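
For reference, with rpm 4.13/4.14 (as used here) the package database under /var/lib/rpm consists of Berkeley DB files; a quick sketch to confirm what is actually on disk:

file /var/lib/rpm/Packages     # should report a Berkeley DB (Hash) file on these rpm versions
rpm --eval '%{_db_backend}'    # prints the configured backend, if this rpm version defines the macro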

The /var (root) filesystem is ext3 on an LV whose VG's PV is a LUKS-encrypted volume made from a partition of an imsm RAID1:

# hdparm -t /dev/mapper/system-root
/dev/mapper/system-root:
 Timing buffered disk reads: 242 MB in  3.02 seconds =  80.19 MB/sec

# hdparm -t /dev/mapper/cr_md-uuid-fe1588c4:a2140388:5b4117ba:4e4339b9-part5
/dev/mapper/cr_md-uuid-fe1588c4:a2140388:5b4117ba:4e4339b9-part5:
 Timing buffered disk reads: 240 MB in  3.03 seconds =  79.33 MB/sec

# hdparm -t /dev/disk/by-id/md-uuid-fe1588c4:a2140388:5b4117ba:4e4339b9-part5
/dev/disk/by-id/md-uuid-fe1588c4:a2140388:5b4117ba:4e4339b9-part5:
 Timing buffered disk reads: 290 MB in  3.01 seconds =  96.32 MB/sec

# hdparm -t /dev/md126
/dev/md126:
 Timing buffered disk reads: 408 MB in  3.01 seconds = 135.63 MB/sec

# hdparm -t /dev/sda
/dev/sda:
 Timing buffered disk reads: 394 MB in  3.01 seconds = 130.92 MB/sec

# hdparm -t /dev/sdb
/dev/sdb:
 Timing buffered disk reads: 444 MB in  3.03 seconds = 146.61 MB/sec
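
For completeness, the whole stack (disks → RAID1 → partition → LUKS → LVM → ext3) can be shown in one picture; a minimal sketch:

lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT /dev/sda /dev/sdb   # prints the md/crypt/LVM layering as a tree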

Timing example

Sorry, this is a late addition: I had to remember to run the timing test right after boot, before the RPM database files are in the cache.

~> time rpm -qa libavd\*
libavdevice58_13-4.4-pm153.2.8.x86_64
libavdevice57-3.4.9-pm153.1.2.x86_64

real    1m15,788s
user    0m2,727s
sys 0m0,613s
~> time rpm -qa libavd\*
libavdevice58_13-4.4-pm153.2.8.x86_64
libavdevice57-3.4.9-pm153.1.2.x86_64

real    0m2,300s
user    0m2,220s
sys 0m0,080s

So even with all files in the RAM cache, the command still needs 2.2 seconds of user CPU.
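
To see where those 2.2 seconds of user CPU go once everything is cached, the query can be profiled; a minimal sketch, assuming the perf tool is installed:

perf record -g -- rpm -qa 'libavd*' > /dev/null   # sample the warm-cache run with call graphs
perf report --stdio | head -n 30                  # show the functions that consume the most CPU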

Effect of rebuilding the database

(Sorry, the locale is de_DE)

# ll /var/lib/rpm/
insgesamt 447440
-rw-r--r-- 1 root root  20901888 28. Jan 20:35 Basenames
-rw-r--r-- 1 root root     28672 28. Jan 20:35 Conflictname
-rw-r--r-- 1 root root  12189696 28. Jan 20:35 Dirnames
-rw-r--r-- 1 root root      8192 27. Jan 21:40 Enhancename
-rw-r--r-- 1 root root      8192 22. Jan 22:51 Filetriggername
-rw-r--r-- 1 root root     77824 28. Jan 20:35 Group
-rw-r--r-- 1 root root    184320 28. Jan 20:35 Installtid
-rw-r--r-- 1 root root    319488 28. Jan 20:35 Name
-rw-r--r-- 1 root root     94208 28. Jan 20:35 Obsoletename
-rw-r--r-- 1 root root 409018368 28. Jan 20:35 Packages
-rw-r--r-- 1 root root  11694080 28. Jan 20:35 Providename
-rw-r--r-- 1 root root    114688 28. Jan 20:35 Recommendname
-rw-r--r-- 1 root root   1339392 28. Jan 20:35 Requirename
-rw-r--r-- 1 root root         0 26. Okt 2014  .rpm.lock
-rw-r--r-- 1 root root    634880 28. Jan 20:35 Sha1header
-rw-r--r-- 1 root root    360448 28. Jan 20:35 Sigmd5
-rw-r--r-- 1 root root     20480 27. Jan 21:40 Suggestname
-rw-r--r-- 1 root root    667648 28. Jan 20:34 Supplementname
-rw-r--r-- 1 root root      8192 20. Nov 2018  Transfiletriggername
-rw-r--r-- 1 root root      8192 31. Dez 02:06 Triggername
# rpm --rebuilddb 
# ll /var/lib/rpm/
insgesamt 373392
-rw-r--r-- 1 root root  16920576  3. Feb 21:26 Basenames
-rw-r--r-- 1 root root     24576  3. Feb 21:26 Conflictname
-rw-r--r-- 1 root root   7684096  3. Feb 21:26 Dirnames
-rw-r--r-- 1 root root      8192  3. Feb 21:26 Enhancename
-rw-r--r-- 1 root root      8192  3. Feb 21:26 Filetriggername
-rw-r--r-- 1 root root     77824  3. Feb 21:26 Group
-rw-r--r-- 1 root root    139264  3. Feb 21:26 Installtid
-rw-r--r-- 1 root root    311296  3. Feb 21:26 Name
-rw-r--r-- 1 root root     77824  3. Feb 21:26 Obsoletename
-rw-r--r-- 1 root root 343470080  3. Feb 21:26 Packages
-rw-r--r-- 1 root root  10416128  3. Feb 21:26 Providename
-rw-r--r-- 1 root root    114688  3. Feb 21:26 Recommendname
-rw-r--r-- 1 root root   1204224  3. Feb 21:26 Requirename
-rw-r--r-- 1 root root    503808  3. Feb 21:26 Sha1header
-rw-r--r-- 1 root root    311296  3. Feb 21:26 Sigmd5
-rw-r--r-- 1 root root     16384  3. Feb 21:26 Suggestname
-rw-r--r-- 1 root root    647168  3. Feb 21:26 Supplementname
-rw-r--r-- 1 root root      8192  3. Feb 21:26 Transfiletriggername
-rw-r--r-- 1 root root      8192  3. Feb 21:26 Triggername
# 
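
A quick sanity check after such a rebuild, to confirm nothing was lost; a minimal sketch (db_verify is part of the Berkeley DB utilities and may not be installed):

rpm -qa | wc -l                   # package count should match what it was before the rebuild
db_verify /var/lib/rpm/Packages   # low-level consistency check of the Berkeley DB file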

After upgrading to openSUSE Leap 15.5, the issue is still there (kernel 5.14.21-150500.55.12-default, rpm-4.14.3-150300.55.1.x86_64):

> # directly after boot and login, so most likely no data cached
> time rpm -qa >/tmp/rpmlist-poweroff

real    0m58,971s
user    0m3,016s
sys 0m0,472s
> time rpm -qa 'base*'

real    0m2,644s
user    0m2,603s
sys 0m0,040s
> # when the data is in cache, it's mostly CPU-bound

So the issue seems to be excessive (useless?) I/O.
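
One way to quantify that I/O is a per-syscall summary; a minimal sketch, again run as root with a cold cache:

sync; echo 3 > /proc/sys/vm/drop_caches   # drop the page cache first
strace -c rpm -qa > /dev/null             # -c prints per-syscall counts and time (to stderr) on exit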

  • Please try running sudo rpm --rebuilddb and check whether it helps. Commented Jan 17, 2022 at 21:54
  • Neither man rpm nor rpm --help lists that option for rpm-4.13.3.
    – U. Windl
    Commented Jan 18, 2022 at 6:58
  • Well, it's there, it works, and you decided not to run it. The official documentation also doesn't count, I presume: rpm.org/user_doc/db_recovery.html Cheers! Commented Jan 18, 2022 at 8:46
  • Did you ever find any reason for the slowness or figure out a way to improve it?
    – bdrx
    Commented Jul 10, 2023 at 14:54
  • @bdrx Honestly, if I knew the answer, I would have added it here.
    – U. Windl
    Commented Jul 12, 2023 at 13:28

1 Answer


I recently discovered, by using the verbose flag (rpm -qa -vvv), that rpm on the OS I am using (RHEL 8) does signature and digest checking when querying all the packages. Adding the --nodigest and --nosignature flags significantly improved the time for me.

rpm -qa --nodigest --nosignature 'base*'
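
Those checks show up in the debug output; a sketch to see them yourself (the exact wording of the messages varies between rpm versions):

rpm -qa -vvv 2>&1 | grep -iE 'digest|signature' | head   # debug lines for per-header digest/signature verification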

Example timings from my system:

$ free && sync && echo 3 > /proc/sys/vm/drop_caches && free
$ time rpm -qa > /dev/null

real: 0m2.944s
user: 0m1.689s
sys:  0m0.163s
$ time rpm -qa > /dev/null

real: 0m1.740s
user: 0m1.644s
sys:  0m0.079s
$ free && sync && echo 3 > /proc/sys/vm/drop_caches && free
$ time rpm -qa --nodigest --nosignature > /dev/null

real: 0m1.319s
user: 0m0.259s
sys:  0m0.147s
$ time rpm -qa --nodigest --nosignature > /dev/null

real: 0m0.366s
user: 0m0.235s
sys:  0m0.099s

Now I just need to find out whether there is a way to safely configure the system (maybe by editing or overriding the macros) to make this the default behavior for -qa.
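
Until a clean macro-based default turns up, a shell-level workaround is simple; a minimal sketch (rpmq/rpmqa are just made-up alias names, and this only affects interactive use, not scripts):

# e.g. in ~/.bashrc: skip digest/signature checks for interactive queries
alias rpmq='rpm -q --nodigest --nosignature'
alias rpmqa='rpm -qa --nodigest --nosignature'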

  • Would you like to improve your answer by adding these: (1) the actual timing without --nodigest --nosignature, (2) after emptying any caches, the timing with those options applied, and, just to make sure, (3) after emptying any caches, the actual timing without --nodigest --nosignature? This is just to "prove" your findings...
    – U. Windl
    Commented Sep 7, 2023 at 6:09
