I mentioned earlier that the CFQ IO scheduler, the default in RedHat / CentOS 5.x, may not be the best choice for MySQL. And yesterday a customer reported that simply changing cfq to noop solved their InnoDB IO problems. To compare cfq, deadline, noop and anticipatory, I ran tpcc scripts against XtraDB on our Dell PowerEdge R900 server (16 cores, 8 disks in RAID10, PERC 6/i controller with BBU). I included anticipatory just to get a number; I did not expect much from it.
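For completeness, here is a minimal sketch of checking and switching the scheduler at runtime. I am assuming the data volume is sda; adjust the device name for your system:

```shell
# The active scheduler is shown in brackets,
# e.g. "noop anticipatory deadline [cfq]"
cat /sys/block/sda/queue/scheduler

# Extract just the active scheduler name
sed 's/.*\[\(.*\)\].*/\1/' /sys/block/sda/queue/scheduler

# Switch to noop at runtime (as root; takes effect immediately,
# no MySQL restart needed)
echo noop > /sys/block/sda/queue/scheduler
```

To make the change survive a reboot, add elevator=noop to the kernel line in grub.conf.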



Here are the results (in transactions per minute; higher is better):

cfq           2793.5
noop          6586.4
deadline      6513.7
anticipatory  1465.0
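To put the numbers in perspective, here is a quick one-off awk calculation of throughput relative to the cfq baseline, using the TPM values above:

```shell
awk 'BEGIN {
    base = 2793.5                        # cfq baseline, TPM
    printf "noop         %.2fx\n", 6586.4 / base
    printf "deadline     %.2fx\n", 6513.7 / base
    printf "anticipatory %.2fx\n", 1465.0 / base
}'
```

That is, noop and deadline give roughly 2.3x the throughput of cfq on this workload, while anticipatory cuts it almost in half.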

Here is a graph of disk writes (the bo column in vmstat) during the benchmark:



As you can see, noop and deadline utilize the disks much better.
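If you want to collect the same data yourself, disk write throughput can be logged straight from vmstat; bo is the 10th column in the default output (a sketch, assuming a 1-second sampling interval):

```shell
# Log blocks written out (bo, column 10) once per second;
# NR > 2 skips the two vmstat header lines, fflush keeps the log streaming
vmstat 1 | awk 'NR > 2 { print $10; fflush() }' > bo.log
```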

For reference: I used the tpcc scripts from https://launchpad.net/perconatools, generated 100 warehouses (about 9.5GB of data on disk), and used the following XtraDB parameters:

[mysqld]
# mysqld options in alphabetical order
user=root
default_table_type=MYISAM
innodb_buffer_pool_size=3G
innodb_data_file_path=ibdata1:10M:autoextend
innodb_file_per_table=1
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=8M
innodb_log_files_in_group=2
innodb_log_file_size=128M
innodb_thread_concurrency=0
innodb_flush_method=O_DIRECT
innodb_write_io_threads=4
innodb_read_io_threads=4
innodb_io_capacity=800
innodb_adaptive_checkpoint=1
max_connections=3000
query_cache_size=0
skip-name-resolve
table_cache=2048