A single HTTPD process taking up huge amounts of Memory

Post by Big A » Fri, 09 Sep 2005 12:01:53


I have a Dual Xeon server with 2 GB of RAM running Red Hat Linux 9
with Apache/1.3.33. I am the only one using the server, and all of my
scripting is done in PHP 4.3. There is a 30-second time limit on all
PHP scripts.
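
(For reference, the relevant php.ini knobs look roughly like this -- a
sketch with illustrative values, not the actual config from this server;
note that in PHP 4 the memory_limit directive only applies if PHP was
compiled with --enable-memory-limit:)

    ; php.ini (illustrative values)
    max_execution_time = 30   ; the 30-second script limit mentioned above
    memory_limit = 32M        ; per-request memory cap, if compiled in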

There is an httpd process taking up huge amounts of RAM (usually about
1.6 GB). It causes the server load to increase greatly. I switched to
the Dual Xeon to handle the load because the problem was crashing my P4
system every day or two. Now the Dual Xeon can ride it out, and the
process usually comes back to normal within about 5 hours.

During this period one of the processors in top is 100% occupied,
although nothing in the process list shows any substantial usage. Even
the httpd process consuming all of the memory shows low processor
usage. Maybe the processor is just busy swapping memory or something?
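
(Next time it happens, a simple check for real swap activity -- assuming
the stock procps vmstat on Red Hat 9 -- would be:)

    vmstat 5    # watch the si/so columns: pages swapped in/out per second;
                # sustained non-zero values mean the box really is swapping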

I have combed through my error_log carefully and fixed every last
problem. I ran "lsof -p PID" on the process and saw a number of files
open, but none of them have anything in common with one particular
script. That makes sense, since this httpd process may be serving
hundreds of different requests. I have also run "lsof -n -T -w -i
tcp:80", thinking maybe I could find an IP address associated with the
httpd process and correlate it with the access log. However, the
process ID running out of control never has an IP address associated
with it.
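
(Another avenue, assuming strace is installed, would be to attach to
the runaway process and watch what it is actually doing:)

    strace -p <PID>    # live trace of the system calls the process makes
                       # (Ctrl-C detaches; the process keeps running)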

Does anyone have any recommendations on how to troubleshoot this
problem further? I might need to wait a few days for the problem to
come back.

Thanks,
Brian
 
 
 

A single HTTPD process taking up huge amounts of Memory

Post by Juha Laiho » Sun, 11 Sep 2005 02:14:40

"Big Al" < XXXX@XXXXX.COM > said:
...

Hmm... switch on the Apache server-status view (especially with
ExtendedStatus). That will show the last request served by each child
process, which might bring some new data.

Also, if the memory grows slowly (though apparently not in this case?),
you could consider lowering MaxRequestsPerChild to force more frequent
recycling of the child processes.
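
Roughly like this for Apache 1.3 (a sketch -- it requires mod_status,
and you will want to adjust the access restrictions to taste):

    # httpd.conf
    ExtendedStatus On                # per-request detail on the status page

    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1         # only viewable locally
    </Location>

    # Optional: recycle children more often if memory creeps up slowly
    MaxRequestsPerChild 500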

As for the long-lasting 100% CPU usage you mentioned: is it user-space
or kernel (system) CPU usage? (top shows this as well.)
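
(For a record you can compare later, assuming the procps top shipped
with Red Hat 9, batch mode captures that breakdown non-interactively:)

    top -b -n 1 | head -8    # prints the per-CPU user/nice/system/iowait/idle lines
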
--
Wolf a.k.a. Juha Laiho Espoo, Finland
(GC 3.0) GIT d- s+: a C++ ULSH++++$ P++@ L+++ E- W+$@ N++ !K w !O !M V
PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
"...cancel my subscription to the resurrection!" (Jim Morrison)

 
 
 

A single HTTPD process taking up huge amounts of Memory

Post by Big A » Thu, 15 Sep 2005 11:18:44

Brilliant!
That was an excellent tip about turning the extended server-status on.
I was able to find the URL associated with the runaway process ID. The
exact cause was a corrupt artwork configuration file that caused the
system to try to resize an image to a negative width.
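
(For anyone hitting something similar: a minimal guard, assuming a
GD-based resize in PHP -- the function and variable names here are
illustrative, not from the actual script:)

    <?php
    // Hypothetical resize helper; $width/$height come from the artwork
    // configuration file, which in this case had been corrupted.
    function resize_artwork($src, $width, $height)
    {
        if ($width <= 0 || $height <= 0) {
            // A corrupt config produced a negative width here;
            // refuse nonsensical target sizes and bail out early.
            return false;
        }
        $dst = imagecreatetruecolor($width, $height);
        imagecopyresampled($dst, $src, 0, 0, 0, 0,
                           $width, $height, imagesx($src), imagesy($src));
        return $dst;
    }
    ?>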

I hired a group of "Expert Linux Consultants" to figure out why the
server was crashing, and they recommended watching the access logs in
real time while keeping a second window open with top... and staring at
it waiting for the problem to return... not a good option.

One thing I noticed while keeping my eye on top (unrelated to the image
problem) is that the server shows a fairly high load for no apparent
reason. The processors look calm. The memory is all used up, but later
the load comes back down while the memory usage stays the same. I
looked through the entire process list and didn't see anything running
high either. Any idea why this is happening?

17:24:12 up 31 days, 10:28, 2 users, load average: 1.57, 1.31, 1.17
130 processes: 129 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%    1.3%    0.3%   0.0%     0.0%    2.8%   95.3%
           cpu00    0.0%    0.5%    0.0%   0.0%     0.1%    0.0%   99.2%
           cpu01    0.0%    0.8%    0.0%   0.0%     0.0%    0.0%   99.2%
           cpu02    0.2%    2.2%    0.6%   0.0%     0.0%    5.6%   91.4%
           cpu03    0.0%    2.0%    0.8%   0.0%     0.0%    5.8%   91.4%
Mem:  2055268k av, 2033136k used,   22132k free,      0k shrd,   55596k buff
                   1366196k actv,  258988k in_d,  30572k in_c
Swap: 2048276k av,  133740k used, 1914536k free                 1397812k cached


A few hours later the load comes back down again...


20:28:43 up 31 days, 13:32, 2 users, load average: 0.03, 0.12, 0.09
122 processes: 121 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle
           total    0.0%    0.6%    0.1%   0.0%     0.0%    1.6%   97.4%
           cpu00    0.0%    0.4%    0.0%   0.0%     0.0%    3.4%   96.2%
           cpu01    0.0%    0.0%    0.1%   0.0%     0.0%    3.3%   96.4%
           cpu02    0.0%    0.5%    0.0%   0.0%     0.1%    0.0%   99.2%
           cpu03    0.0%    1.5%    0.3%   0.0%     0.0%    0.0%   98.0%
Mem:  2055268k av, 1322900k used,  732368k free,      0k shrd,   37172k buff
                    768460k actv,  295280k in_d,  29772k in_c
Swap: 2048276k av,  144992k used, 1903284k free                  892592k cached
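
(A diagnostic worth running the next time the load climbs without
visible CPU use: the Linux load average counts not only runnable
processes but also processes stuck in uninterruptible sleep (state D,
usually waiting on disk I/O), and those never show up as CPU hogs.
Assuming the procps ps on Red Hat 9:)

    ps -eo stat,pid,user,comm | awk '$1 ~ /^D/'    # processes in uninterruptible sleep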
 
 
 

A single HTTPD process taking up huge amounts of Memory

Post by Juha Laiho » Fri, 16 Sep 2005 04:28:14

"Big Al" < XXXX@XXXXX.COM > said:

Good that it was of help.


No idea why it would do that, but I wouldn't consider a load average of
1 high. 10 is high - perhaps even 5 - but not 1. It looks like there is
a process that consistently reports as runnable but, when run, always
goes back to sleep for some reason. I don't think you can get a
per-process system call rate in Linux with any diagnostic tool yet --
that would help in tracking this down. How to track it down with
current tools, I don't know. But as I said, with a load of 1, I
wouldn't worry (not too much, at least).
--
Wolf a.k.a. Juha Laiho Espoo, Finland
(GC 3.0) GIT d- s+: a C++ ULSH++++$ P++@ L+++ E- W+$@ N++ !K w !O !M V
PS(+) PE Y+ PGP(+) t- 5 !X R !tv b+ !DI D G e+ h---- r+++ y++++
"...cancel my subscription to the resurrection!" (Jim Morrison)