My MediaWiki site is very slow

Does anyone have some ideas what I should look at?

http://telarapedia.com is the site, and there is a server-status page off the main domain if you are curious.

I've installed iotop to see if it could help but get this error:
> Could not run iotop as some of the requirements are not met:

  • Python >= 2.5 for AF_NETLINK support: Found

  • Linux >= 2.6.20 with I/O accounting support: Not found

I am on a 512 MB VPS with an extra 90 MB purchased short-term as well. And here is what free shows:

             total       used       free     shared    buffers     cached
Mem:        616672     595208      21464          0      11332     182236
-/+ buffers/cache:     401640     215032
Swap:       262136        136     262000

Edit: Here is some MySQLTuner script output (I reloaded MySQL about 10 minutes before running it, though):

perl mysqltuner.pl 

 >>  MySQLTuner 1.0.1 - Major Hayden <major@mhtx.net>
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering
Please enter your MySQL administrative login: root
Please enter your MySQL administrative password: 

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.0.75-0ubuntu10.5-log
[OK] Operating on 32-bit architecture with less than 2GB RAM

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster 
[--] Data in MyISAM tables: 15M (Tables: 216)
[--] Data in InnoDB tables: 64M (Tables: 167)
[--] Data in MEMORY tables: 0B (Tables: 3)
[!!] Total fragmented tables: 22

-------- Performance Metrics -------------------------------------------------
[--] Up for: 3m 53s (11K q [47.798 qps], 644 conn, TX: 12M, RX: 1M)
[--] Reads / Writes: 100% / 0%
[--] Total buffers: 66.0M global + 2.6M per thread (100 max threads)
[OK] Maximum possible memory usage: 328.5M (54% of installed RAM)
[OK] Slow queries: 0% (0/11K)
[OK] Highest usage of available connections: 4% (4/100)
[OK] Key buffer size / total MyISAM indexes: 16.0M/5.4M
[OK] Key buffer hit rate: 97.3% (620 cached / 17 reads)
[OK] Query cache efficiency: 35.0% (3K cached / 8K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 2 sorts)
[OK] Temporary tables created on disk: 4% (21 on disk / 501 total)
[OK] Thread cache hit rate: 99% (4 created / 644 connections)
[!!] Table cache hit rate: 2% (64 open / 3K opened)
[OK] Open file limit used: 1% (15/1K)
[OK] Table locks acquired immediately: 100% (6K immediate / 6K locks)
[!!] InnoDB data size / buffer pool: 64.9M/8.0M

-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Enable the slow query log to troubleshoot bad queries
    Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
    table_cache (> 64)
    innodb_buffer_pool_size (>= 64M)

The site has basically gone to being totally unusable. :(

16 Replies

MediaWiki is pretty heavy on the RAM and CPU.

you can, however, enable some sort of cache - MediaWiki's official manual covers the options.

you can deploy the file cache (easier and faster, though it doesn't help logged-in users), memcached (for the busiest sites), or both, but in any case remember to install APC: if you're on Ubuntu or Debian, it's enough to do "apt-get install php-apc" and it works automagically.
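
for illustration (a sketch, not gospel), the LocalSettings.php side of the APC part is a single line - CACHE_ACCEL tells MediaWiki to use whatever opcode-cache shared memory it finds, APC in this case:

$wgMainCacheType = CACHE_ACCEL;  # object cache in APC shared memory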

you can also try giving nginx+php-fpm a shot - or at least try php-fpm with the apache worker MPM.

for the mysql part, you should raise innodb_buffer_pool_size to at least the size of your InnoDB tables (which is 64.9 MB, so set it to 70M or something). increase table_cache too, as suggested by mysqltuner.
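
something like this under [mysqld] in my.cnf, for example - the numbers are guesses sized for this VPS, tune to taste:

innodb_buffer_pool_size = 80M   # comfortably above the ~65M of InnoDB data
table_cache             = 256   # up from the default 64; mind the open file limit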

to decrease the size of your tables, run cron jobs that optimize them at least once a week (or even every day, late at night), plus MediaWiki's maintenance script compressOld.php (you can find docs on MediaWiki's wiki :D).
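
a sketch of the crontab entries (paths, times, and credentials are placeholders; mysqlcheck --optimize is just a scriptable way of running OPTIMIZE TABLE on everything):

# weekly: defragment all tables, then compress old page revisions
0 4 * * 0   mysqlcheck --optimize --all-databases -u root -pYOURPASS
30 4 * * 0  php /path/to/mediawiki/maintenance/storage/compressOld.php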

also, install htop to keep an eye on your server.

that should do the trick ;)

I would take a look at your extensions. It's possible one or more of them is slow. http://telarapedia.com/wiki/Special:Version

Thanks for the help and sorry for the belated reply!

I already had APC installed, and the problem seemed to go away when I upgraded to a larger VPS. However, now I have another issue: when people hit 'save page' it is sometimes instantaneous and sometimes takes up to 30 seconds (which is obviously insane). I'd imagine this is MySQL-related?

I also have NGINX installed (use it for some sites anyway), but too bad I can't find any good recommendations on tuning it for a busy MediaWiki site… :(

MediaWiki.org's performance tuning page and the associated guides (Aaron Schulz's and Ilmari Karonen's are the most useful) have a whole bunch of things you can do.

BarkerJr is probably on to something. In particular, I think the Semantic MediaWiki extension tends to require a lot of things to be regenerated when edits are saved. Set up the job queue so that jobs get run by a separate process, rather than on every edit.

You should also take a look at whether your server's various components are getting the resources they need. As mejicat suggested, you should adjust MySQL's settings as mysqltuner suggests. You should also check whether APC has enough memory. There's an APC visualizer extension for MediaWiki that makes it easy to check.

If nothing else works, you can enable profiling for slow page loads to see where the bottleneck is. I don't think that'll be necessary, though.

yeah, by default MediaWiki runs a "job" each time a page is requested, which is not really the best thing ever - especially if you have a lot of heavily used templates. you should set a cron job for "php /path/to/mediawiki/maintenance/runJobs.php" every six hours or something, so users loading pages don't have to wait while jobs are executing. I personally set $wgJobRunRate to 0.01 (one job every 100 requests, reasonable enough) and run runJobs.php once every night to clear what's left in the queue.
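
roughly like this, as a sketch - the path is a placeholder, and the LocalSettings.php line is the one i mentioned:

# crontab: drain the job queue every six hours
0 */6 * * *  php /path/to/mediawiki/maintenance/runJobs.php > /dev/null 2>&1

# LocalSettings.php: one queued job per ~100 page requests
$wgJobRunRate = 0.01;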

as for nginx, there's really not much to set up except worker_processes to 4 in nginx.conf (you have 4 CPUs). since nginx handles just the static stuff, i'd look into your php configuration instead - for example, set php.ini's memory_limit to 64M or even 32M instead of the default 128M. if you're using php-fpm, check that the maximum number of spawned children is reasonable (i'd say 20 or so).
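
the relevant knobs, sketched out (paths are the Ubuntu ones; each stanza goes in the file named in its comment, and the numbers are starting points, not gospel):

# /etc/nginx/nginx.conf - one worker per CPU core
worker_processes  4;

; php.ini - the 128M default is more than MediaWiki usually needs
memory_limit = 64M

; php-fpm pool config (ini-style on bundled php-fpm; the old patched builds
; use an XML file with the same knob) - worst-case RAM is roughly
; pm.max_children * memory_limit, so keep the product sane
pm.max_children = 20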

if you haven't done so already, give MediaWiki's file cache a try. it really helps on the busiest sites, since logged-out users are served static, pre-generated HTML files. if you can't (or don't want to) use the file cache, memcached is a good option and it works for registered users too (I personally use both).
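
for reference, the LocalSettings.php side of both looks something like this - the directory is a placeholder, and note the file cache silently stays off unless $wgShowIPinHeader is false:

$wgUseFileCache = true;
$wgFileCacheDirectory = '/path/outside/webroot/cache';  # keep it out of the public tree
$wgShowIPinHeader = false;                              # required, or the file cache won't run

# and/or memcached for the object cache (instead of CACHE_ACCEL, if you go that route):
$wgMainCacheType = CACHE_MEMCACHED;
$wgMemCachedServers = array( '127.0.0.1:11211' );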

Thanks, guys.

I've changed the job queue thing and it's still slow most of the time on save. Sometimes so long that things time out, but often it's like 20 seconds. Also, I've done most of the tweaks recommended, but honestly the wiki is lightning fast for everything except saving. :(

Would memcache help with that? I'd need to go up from my current 1088MB VPS for that, obviously, and the wiki is fast in all other conditions so it seems a bit overkill?

Edit: It's so bad on saves now that people get 504 timeout errors from NGINX quite often.

@Cio:

Would memcache help with that? I'd need to go up from my current 1088MB VPS for that
No, you wouldn't.

@Cio:

Also, I've done most of the tweaks recommended
Really? You tried using php-fpm? You checked that APC wasn't getting full? You bumped up the MySQL settings as recommended? (You enabled profiling? :))

@Cio:

Thanks, guys.

I've changed the job queue thing and it's still slow most of the time on save. Sometimes so long that things time out, but often it's like 20 seconds. Also, I've done most of the tweaks recommended, but honestly the wiki is lightning fast for everything except saving. :(

Would memcache help with that? I'd need to go up from my current 1088MB VPS for that, obviously, and the wiki is fast in all other conditions so it seems a bit overkill?

Edit: It's so bad on saves now that people get 504 timeout errors from NGINX quite often.

Then you need to enable profiling and figure out where all the time is being spent.

Thanks, all. I'll try profiling tonight and see if I can wade through the cryptic page on setting it up. :)

APC is totally full, yes. I can bump that up (it has two 30 MB segments at the moment). I have made the MySQL changes, though.
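
For reference, the knobs I'd be bumping live in APC's ini file (Ubuntu path; the sizes are guesses, and as I understand it one big segment beats several small ones):

; /etc/php5/conf.d/apc.ini
apc.shm_segments = 1
apc.shm_size     = 64   ; plain megabytes on APC of this vintage; newer builds want "64M"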

Using php5-cgi - I guess I should look at compiling and using php-fpm? (I'm on Ubuntu 9.04 and there is no packaged version.)

As a heads up, your Ubuntu version is no longer supported as of October of last year. This means that you haven't received any security updates in about three months.

I would strongly advise you to either upgrade to 9.10 then to 10.04 LTS, or build a new instance on 10.04 LTS, before doing any substantial work on your system.

@hoopycat:

As a heads up, your Ubuntu version is no longer supported as of October of last year. This means that you haven't received any security updates in about three months.

I would strongly advise you to either upgrade to 9.10 then to 10.04 LTS, or build a new instance on 10.04 LTS, before doing any substantial work on your system.

Yes, we are doing what you described. :) Going to get another Linode on 10.04 LTS, set up php-fpm and all the other goodies properly from the start, and then move the wiki over to the new Linode and turn off the old one.

Has anyone used the file cache, too? I had it enabled but never set up the directories. Just to confirm: every time someone changes a page, MediaWiki will rebuild the file cache for that page? I have a feeling it hates NGINX, as the cache isn't used even when I enable it and specify the directory (and give the webserver rights to it and 0777 it).

@Cio:

Just to confirm: every time someone changes a page, MediaWiki will rebuild the file cache for that page?
An edit to a page will invalidate the cache file for that page, but I don't think it will actually be recreated until a logged-out user views it. (So it's not going to make edit saves take longer, if you were afraid of that.)

@Cio:

I have a feeling it hates NGINX, as the cache isn't used even when I enable it and specify the directory (and give the webserver rights to it and 0777 it).
Are you visiting it while logged out (it won't generate cache files until someone does)? What happens when you run rebuildFileCache.php?
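
(For reference, it's run from the shell like any other maintenance script; the path is whatever your wiki root is:)

php /path/to/mediawiki/maintenance/rebuildFileCache.php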

yeah, it doesn't rebuild the cache right away - it waits for the next visitor to view the page.

also, in 1.16 (i think; it didn't do that before) it invalidates the cache when a template used by the page is edited, and so on. so it's really the best option CPU-wise, and it guarantees your pages are almost always up to date - the only exception i can think of is editing the skin template, and then it's enough to truncate the cache table and/or delete all the files in the cache directory - which you should put outside the public, world-viewable directory, by the way.

@mejicat:

yeah, it doesn't rebuild the cache right away - it waits for the next visitor to view the page.

also, in 1.16 (i think; it didn't do that before) it invalidates the cache when a template used by the page is edited, and so on. so it's really the best option CPU-wise, and it guarantees your pages are almost always up to date - the only exception i can think of is editing the skin template, and then it's enough to truncate the cache table and/or delete all the files in the cache directory - which you should put outside the public, world-viewable directory, by the way.
Yeah, but I get like 20k uniques a day, most of whom are not logged in, so it must be a different problem.

I'll try the rebuildFileCache command, but my VPS is smashed at the moment with like 200% CPU usage, so I want to wait till it isn't going to burn up first. :D

I can't run rebuildFileCache - this is the error:

Nothing to do – $wgUseFileCache is disabled.

However, it is turned on!

$wgUseFileCache = true;
$wgFileCacheDirectory = "/var/www/w/cache";
$wgShowIPinHeader = false;

The directory is empty, though, and the error is odd given the settings above.

I'll say it again.

Enable profiling
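
For a 1.16-era MediaWiki that means creating StartProfiler.php in the wiki root; this is a sketch from memory, so check StartProfiler.sample in your install for the exact class names in your version:

<?php
# StartProfiler.php - appends per-function timings to the page source as an HTML comment
require_once( dirname( __FILE__ ) . '/includes/ProfilerSimpleText.php' );
$wgProfiler = new ProfilerSimpleText;

# and in LocalSettings.php, only record the slow requests:
$wgProfileLimit = 2.0;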
