

eAccelerator (as a Zend extension) High Load Averages Issue

Filed under: *NIX — Sezgin Bayrak @ 12:51

In one of our previous articles here, we wrote about eAccelerator and mentioned that it should significantly reduce the server load while speeding it up by 1 to 10 times. But while evaluating eAccelerator under 1000+ HTTP req/sec conditions, we noticed that Apache repeatedly launches child processes until it reaches the MaxClients server limit, causing abnormal load averages that bring multi-core, multi-CPU servers to their knees in seconds. When we debugged the issue, we saw that the complaints in the gdb backtraces were, one way or another, about nothing but Zend.

We know eAccelerator has been reported to work with Zend many times, but we realised that the two do not actually get along well even when requests only go over 50 to 60 per second. Just imagine the picture when you get thousands of them. In our related article, while discussing the initial configuration of eAccelerator, we noted that you might load it either as a Zend extension or alternatively as a plain PHP extension in your php.ini file. Now we explicitly recommend that you do not load it as a Zend extension. Instead, prefer the plain PHP extension way:
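A minimal php.ini sketch of what we mean, assuming a default build (the extension filename and the tuning values here are assumptions; adjust them to your installation):

```ini
; load eAccelerator as a plain PHP extension, NOT via zend_extension
extension = "eaccelerator.so"
eaccelerator.enable = "1"
; assumed sizes/paths -- tune for your server
eaccelerator.shm_size = "64"
eaccelerator.cache_dir = "/tmp/eaccelerator"
```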


Do not install Zend Optimizer alongside eAccelerator unless you use scripts encoded with Zend Encoder. If it is already installed, also disable Zend optimization right after the eAccelerator configuration lines:
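One hedged possibility for php.ini, assuming Zend Optimizer is loaded there (the loader path below is an assumption; check where your port installed it):

```ini
; comment out the loader entirely if you have no Zend-encoded scripts
; zend_extension = "/usr/local/lib/php/ZendOptimizer.so"
zend_optimizer.optimization_level = "0"
zend_optimizer.enable_loader = "0"
```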


and, just in case, put all of your eAccelerator configuration lines at the top of your php.ini file, since Zend Optimizer must be loaded after eAccelerator inside php.ini. Incidentally, for those running an older version of eAccelerator, it is vital to upgrade to the latest one, as old versions have a spinlock bug that leads to deadlocks under heavy load. Then restart Apache and observe the new behavior. Although you will gain some increase in the number of requests served without a fatal crash, there are still two important things to take care of: stat() system calls and compression.

stat() is a Unix system call that returns the size of a file and the parameters associated with it. On every single hit, eAccelerator will check the modification time of a script to see if it has changed and needs to be recompiled. Each of these checks is done through stat() calls, which take time and add serious overhead to the system, getting in the way of responding to mass requests. You should skip these expensive calls, which are enabled by default:

eaccelerator.check_mtime = "0"

When you disable this check, remember that you have to manually clean the eAccelerator cache when you update a file.

Compression also requires additional CPU activity. While it is essential for the HTTP layer itself, it is not meaningful for code that is going to be cached. And if we are counting every last cycle here, then disable this feature as well:

eaccelerator.compress = "0"

According to our observations, after all these configuration changes the system can handle 100 HTTP req/sec, effectively reducing the overall load by nearly 80%. But if your request rate grows, these brilliant savings will probably turn into a crash. In our opinion, eAccelerator is well suited for web servers that serve at most 100 HTTP req/sec, which roughly corresponds to ~1000 concurrent users (depending on the type of your web application), but not for more.

Those who take a chance on XCache as an alternative to eAccelerator will not gain a victory above 100 requests per second either, because your web server will most probably become unresponsive, with a lot of processes stuck in the lockf state. We have tested XCache under the same pressures as well.

But this is not a this-cache-versus-that-cache situation; it clearly seems to be a matter of the locking mechanism used. Our impression is that we need to enable semaphore locks instead of the default fcntl() under certain loads. As FreeBSD does not support pthread mutex locking, our next resort, if semaphores do not really help, shall be a marginal one: spinlocks, which are still considered experimental within APC (Alternative PHP Cache). Install APC from ports with IPC SHM (shared memory) and spinlocking enabled, try the configuration below, and see what happens if locking is an issue for you too. But first, make sure that your kern.ipc.shmmax value has been set large enough in /etc/sysctl.conf to handle the shm_size below.
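A sketch of such an APC setup in php.ini; the directive names are APC's own, but the sizes are assumptions you should tune (shm_size must fit under kern.ipc.shmmax):

```ini
; APC built from ports with IPC SHM and spinlocks enabled
extension = "apc.so"
apc.enabled = "1"
apc.shm_segments = "1"
; segment size in MB for classic APC; must fit under kern.ipc.shmmax
apc.shm_size = "128"
; skip per-request stat() checks, as with eAccelerator above
apc.stat = "0"
```

And in /etc/sysctl.conf, something comfortably larger than the segment, e.g. `kern.ipc.shmmax=268435456` (256 MB).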


Reader Comments

Rafael Hasson, 16/07/2011 17:56

I have a similar issue on FreeBSD with APC while handling more than 100 req/sec.
I’m testing some kernel memory configs as well; I don’t have a final result yet.
What was your result with spinlocking?
Congratulations on this article. Very good.

Sezgin Bayrak, 20/07/2011 23:14

Hi Rafael,

I don’t know if you have enabled spinlocking in APC yet, but once you start using spinlocks, ~150 req/sec shouldn’t be a problem from then on, given an appropriate “kern.ipc.shmmax” kernel setting alongside the spinlocking. Without spinlocking, none of the caching solutions, APC included, helps properly above 100 req/sec. At least we could not get our application stabilized with the traditional locking methods. But we did confirm that spinlocking does a better job from around 100 up to ~300 req/sec.

In our case, as we have more than ~1000 req/sec per server at peak times, we needed to expand our web server farm (with both physical and virtualized server systems) and rely more on load balancing and multiplexing, using redundant application switches at the gateway.

I strongly recommend that you put Apache into single-process mode with httpd -X , find the relevant process ID, and check your syscall counts with # truss -c -p PID

Then investigate deeper, especially your “stat” calls, with # truss -D -p PID to find out whether anything else affects your situation additionally, in terms of coding and handling.

You may also collect gdb backtraces.

In most situations, at least in a well-tuned setup, coding quality affects the outcome more than FreeBSD’s kernel, Apache/PHP, and caching configuration settings do.

We would also very much appreciate it if you shared your further findings.
Best regards, and thank you very much.
