We keep getting paged for datazilla swap. datazilla has 12G RAM:

21310 mysql  20  0 46.6g 7.7g 4896 S 27.4 66.5 18261:15 mysqld
20737 root   20  0 11332  344  340 S  0.0  0.0  0:00.02 mysqld_safe
datazilla2 has 24G RAM (why the mismatch?) and is using:

23207 root   20  0 11332 1144 1140 S  0.0  0.0  0:00.03 mysqld_safe
23780 mysql  20  0 33.8g  16g 4920 S  0.0 69.5  6461:57 mysqld
I thought we got them both with 24GB. We can get DCOps to up them both to 32GB if that'll help. Thoughts?
Well, we should first get datazilla1 up to 24G RAM to match datazilla2. But I think it's really a matter of tuning TokuDB not to USE ALL THE RAM.
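To see how much RAM each engine is allowed to grab before touching anything, the standard checks look like this (a sketch; these are the stock TokuDB/InnoDB variable names and information_schema columns, run against your own server):

```sql
-- Current cache/buffer allocations, in bytes
SHOW GLOBAL VARIABLES LIKE 'tokudb_cache_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';

-- Actual InnoDB data + index footprint, to compare against the buffer pool
SELECT SUM(data_length + index_length) AS innodb_bytes
  FROM information_schema.tables
 WHERE engine = 'InnoDB';
```

If the buffer pool is far larger than the InnoDB footprint, that memory is better given to TokuDB's cache (or back to the OS).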
Paged again today. Is there a timescale for memory upgrades and/or tuning?
Spoke with the Toku folks. They advised me to check tokudb_cache_size and innodb_buffer_pool_size, and to compare the latter with the actual data/index size used by InnoDB. The actual data/index size used was <20M, yet innodb_buffer_pool_size was set to 8G, and tokudb_cache_size was at its default of 1/2 of memory; memory is 12G, so tokudb_cache_size was 6G. I updated /etc/my.cnf on the datazilla boxes to use 512M for innodb_buffer_pool_size and 8G for tokudb_cache_size, so things should be OK now: swap usage should recover and the swapping shouldn't recur.

Here's the status right now, with 120G in use:

[email@example.com ~]$ top -b | head -8
top - 09:16:47 up 127 days, 8:39, 1 user, load average: 0.28, 0.60, 0.72
Tasks: 312 total, 1 running, 310 sleeping, 0 stopped, 1 zombie
Cpu(s): 1.4%us, 0.3%sy, 0.0%ni, 97.5%id, 0.8%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 12181188k total, 5638584k used, 6542604k free, 126640k buffers
Swap: 2097144k total, 31764k used, 2065380k free, 3401160k cached

  PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
25542 mysql 20  0 5101m 1.6g 8412 S 25.5 14.1 1:53.42 mysqld
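For reference, the my.cnf change described above would look roughly like this (a sketch; the section layout and any other settings depend on the existing file on the datazilla boxes):

```ini
# /etc/my.cnf
[mysqld]
# InnoDB holds <20M of data/indexes here, so the old 8G pool was far oversized
innodb_buffer_pool_size = 512M
# Cap TokuDB's cache explicitly instead of taking the default (half of RAM)
tokudb_cache_size = 8G
```

Note tokudb_cache_size is not a dynamic variable on these versions, so mysqld needs a restart for the new value to take effect.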