Heating up Evergreen search

Posted on Mon 25 August 2008 in Libraries

So, after loading 3.7 million records into the Project Conifer test server, we have found that search can be slow. Not really a big surprise, because I've spent very little time tuning the database beyond running a VACUUM FULL and tweaking just a few parameters. But one of the extremely useful hints that Mike Rylander gave me about PostgreSQL a long time back is that it relies primarily on the file system cache for fast access to its data, from the reasonable perspective that your file system already knows which files are being accessed most often. PostgreSQL's data is stored in files that map back to individual tables and indexes; unlike some other database systems that I've worked with, you don't dedicate system memory specifically to caching those database files (hello, DB2 buffers!); instead, you just trust the file system to know what's best.

That caching approach works great on a system that's in production and getting a steady stream of queries reflecting what users actually search for on a day-to-day basis. However, if you've just loaded a test system, the file system hasn't had much opportunity to build up its cache, and the first dozen (or hundred, or thousand!) queries will be slow as your database goes out and loads files from disk. Even worse, if you have a system like ours where backups have temporarily been set up as "tar czf /backups/backup.tar.gz /", then on a nightly basis your file system cache is going to be filled with all kinds of irrelevant data.

So what are we to do? Well, actually, another extremely useful hint that Mike Rylander gave me was to just cat the pertinent data files to /dev/null to load up the file system cache. On the surface, it seems like a dirty hack, but it's a smart one, and we can even make it elegant. Let's walk through the process:

  1. You need to know where your data files are. You (or your system installer) will have created a PostgreSQL cluster. In my case (on Debian Etch), I can find it at /var/lib/postgresql/main/base. Then, by running "du -hs /var/lib/postgresql/main/base/*" I can see that one of our databases (represented by a directory name that's just an integer - "16385") weighs in at 60GB. That's our 3.7 million record baby. If you run an "ls" command on that directory, you'll see that it's filled with hundreds of files of differing sizes, most of them with plain integers for their names. This is where the data is stored.

  2. You need to know the base filenames that you want to use to warm up the file system cache. For my first stab at this, I decided to warm up the cache with the full-text search indexes, as I know those are frequently used by Evergreen's search. To figure out the base filenames for these indexes, we can query PostgreSQL's catalog of its own objects:

    evergreen=# SELECT relfilenode, relname, relpages
    evergreen-# FROM pg_class WHERE relname LIKE '%vector%';
     relfilenode |                   relname                    | relpages
    -------------+----------------------------------------------+----------
          648864 | authority_full_rec_index_vector_idx          |    59282
          649137 | metabib_title_field_entry_index_vector_idx   |    29766
          649149 | metabib_author_field_entry_index_vector_idx  |    20125
          649161 | metabib_subject_field_entry_index_vector_idx |    23481
          649173 | metabib_keyword_field_entry_index_vector_idx |    90709
          649185 | metabib_series_field_entry_index_vector_idx  |     8682
          649210 | metabib_full_rec_index_vector_idx            |   452980
    (7 rows)
    

    relfilenode is the basename of the files that we want to load into the file system cache. (relpages, which is measured in 8 kB blocks by default, also gives you a rough sense of how much disk each index occupies.)

  3. Your file system cache can never be larger than the physical RAM installed on your system, so you'll want to tally up the size of the index data files to ensure that their total is less than your total physical RAM (there's a du one-liner just after this list that does the tally for you). Note that in the example from our system, below, I'm using "*" because database objects with lots of data will be split across multiple files with extensions like ".1" and ".2", in sequential order:

    # cd /var/lib/postgresql/main/base/16385
    # du -hs 649185* 649210* 1065608*
    68M     649185
    1.1G    649210
    1.1G    649210.1
    1.1G    649210.2
    1.1G    1065608
    1.1G    1065608.1
    1.1G    1065608.2
    842M    1065608.3
    # du -hs 649207*
    1.1G    649207
    1.1G    649207.1
    1.1G    649207.2
    467M    649207.3
    

    Adding all of this up, we're getting close to the 16GB of RAM installed on our database server. If we want to warm up much more data than this, we'll need to add more RAM to the system.

  4. Now we warm up the cache by outputting the contents of each file into /dev/null.

    # cd /var/lib/postgresql/main/base/16385
    # cat 648864* > /dev/null
    # cat 649137* > /dev/null
    # cat 649149* > /dev/null
    # cat 649161* > /dev/null
    # cat 649161* > /dev/null
    # cat 649173* > /dev/null
    # cat 649185* > /dev/null
    # cat 649210* > /dev/null
    
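If you'd rather not do the arithmetic from step 3 by hand, du can produce the grand total for you: the -c flag appends a "total" line, which tail can then pluck out. A quick sketch using the relfilenode values from step 2 (substitute whatever files you actually plan to cache):

    # cd /var/lib/postgresql/main/base/16385
    # du -chs 648864* 649137* 649149* 649161* 649173* 649185* 649210* | tail -n 1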

After I ran through this relatively simple exercise, searches were definitely much snappier on our test system. I plan to automate the process so that it runs after every one of those cache-killing backups. If there is interest, I could package it into a simple Perl script that other sites could use to assist with their testing - or to help warm up the file system cache after a large data load, for example.
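
In the meantime, here's a rough sketch of what that automation might look like as a plain shell script rather than Perl. The cluster path, database name, and LIKE pattern below are the ones from this post - other sites would need to adjust them - and it assumes it's being run as the postgres user so that both psql and the data files are readable:

    #!/bin/sh
    # Warm the file system cache for Evergreen's full-text search indexes
    # by reading their data files into /dev/null.
    # The directory and database name are the ones used in this post;
    # adjust them for your own cluster.

    DBDIR=/var/lib/postgresql/main/base/16385
    DBNAME=evergreen

    # Ask PostgreSQL for the file basenames (relfilenode) of the vector indexes.
    NODES=$(psql -t -A -d "$DBNAME" \
      -c "SELECT relfilenode FROM pg_class WHERE relname LIKE '%vector%'")

    cd "$DBDIR" || exit 1

    for node in $NODES; do
        # Objects with lots of data are split across <node>, <node>.1, <node>.2, ...
        cat "$node"* > /dev/null
    done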