After tuning PostgreSQL, pgbench results are worse

I am testing PostgreSQL on an 8 GB RAM / 4 CPU / 80 GB SSD cloud server from DigitalOcean. I first ran pgbench with the default settings in postgresql.conf, then altered some common settings (shared_buffers, work_mem, maintenance_work_mem, effective_cache_size) to reflect the 8 GB of RAM. After running the second set of tests, I noticed that some of my results were actually worse. Any suggestions as to why that might be? I am rather new to pgbench and to tuning PostgreSQL in general.
shared_buffers = 2048MB
work_mem = 68MB
maintenance_work_mem = 1024MB
effective_cache_size = 4096MB
pgbench -i -s 100
pgbench -c 16 -j 2 -T 60 -U postgres postgres
pgbench -S -c 16 -j 2 -T 60 -U postgres postgres
pgbench -c 16 -j 4 -T 60 -U postgres postgres
pgbench -S -c 16 -j 4 -T 60 -U postgres postgres
pgbench -c 16 -j 8 -T 60 -U postgres postgres
pgbench -S -c 16 -j 8 -T 60 -U postgres postgres
How meaningful are these tests? Is this a sensible way to use pgbench? How should I customize the tests to better reflect my own data and server instance?
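When comparing runs before and after tuning, it helps to extract the tps figures programmatically rather than eyeballing them. A minimal sketch in Python, assuming the usual `tps = …` summary lines that pgbench prints at the end of a run (the `sample` text below is a hypothetical captured output):

```python
import re

def parse_tps(pgbench_output: str) -> list:
    """Extract every 'tps = ...' figure from a pgbench summary."""
    return [float(m) for m in re.findall(r"tps = ([0-9.]+)", pgbench_output)]

# Hypothetical captured output from one run, in pgbench's usual format.
sample = """\
transaction type: TPC-B (sort of)
scaling factor: 100
number of clients: 16
number of threads: 2
duration: 60 s
tps = 1203.451786 (including connections establishing)
tps = 1205.782314 (excluding connections establishing)
"""

print(parse_tps(sample))
```

With the figures in hand you can average several runs per configuration; a single 60-second run is noisy, so differences of a few percent between configs may not be meaningful.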


  • The configuration variables you mention are the basic ones, and you are unlikely to go far wrong with them (the server must never actively swap, and these variables help ensure that).
    A formula that I use:
    -- Dedicated server 8GB RAM
    shared_buffers = 1/3 .. 1/4 dedicated RAM
    effective_cache_size = 2/3 dedicated RAM

    maintenance_work_mem > size of the biggest table (if possible),
    else 1/10 of RAM,
    else max_connections * 1/4 * work_mem

    work_mem is a precious setting; base it on analysis of your slow queries
    (start with about 100MB)

    -- must hold:
    max_connections * work_mem * 2 + shared_buffers
    + 1GB (OS) + 1GB (filesystem cache) <= RAM size
    The default values for the WAL buffers (wal_buffers) and checkpoint segments (checkpoint_segments) are usually too low as well, and you can increase them.
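The sanity check above is just arithmetic, so it is easy to run against the poster's numbers. A quick sketch in Python, assuming the 8 GB instance and a hypothetical max_connections = 100 (PostgreSQL's default):

```python
GB = 1024  # work in MB throughout

ram_mb = 8 * GB
max_connections = 100          # PostgreSQL's default; substitute your own setting
shared_buffers = 2048          # 1/4 of RAM, within the 1/3..1/4 guideline
work_mem = 68                  # the poster's setting

# Sanity check from the answer:
# max_connections * work_mem * 2 + shared_buffers
# + 1GB (OS) + 1GB (filesystem cache) <= RAM size
budget = max_connections * work_mem * 2 + shared_buffers + GB + GB

print(budget, "MB needed vs", ram_mb, "MB available")
print("fits" if budget <= ram_mb else "over budget")
```

With these numbers the inequality fails (17696 MB needed against 8192 MB available), which suggests that, under this rule of thumb, 68MB of work_mem is too high for 100 connections on an 8 GB box.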
