[nycbug-talk] Memory sizing

Bjorn Nelson o_sleep at belovedarctos.com
Sat Apr 22 20:14:00 EDT 2006


Francisco,

On Apr 22, 2006, at 3:33 PM, Francisco Reyes wrote:

> Pete Wright writes:
>> the best place i'd look for info on memory utilization in freebsd  
>> would be the
>> McKusick/Neville-Neil design and implementation book.
>
> Although that is a good long-term plan, I would settle for a good
> explanation of top's memory display. :-)
> I have found some, but not to the level I need.
> In short I am looking to answer "does this machine have enough  
> memory for the
> amount of programs we are running on it.. and to leave a decent  
> amount of
> memory for the OS to cache files".

Can you show a snapshot of what top is showing you now during a  
typical load (or a load that can be extrapolated from)?  Analyzing my  
host might help you analyze yours.  I have a memory-strapped machine;  
the total process RSS and VM sizes are:
bjorn at host=>SUM=0; for i in `ps -axo rss=`; do SUM=$(($SUM+$i)); done; echo $SUM
893352
bjorn at host=>SUM=0; for i in `ps -axo vsz=`; do SUM=$(($SUM+$i)); done; echo $SUM
1166716
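The same sums can be computed in one pass with awk; the `rss=`/`vsz=` forms print an empty header, so no "RSS"/"VSZ" header line ever enters the arithmetic, and there is no shell variable to reset between runs:

```shell
# Sum resident set size and virtual size across all processes (in KB)
ps -axo rss= | awk '{ sum += $1 } END { print sum }'
ps -axo vsz= | awk '{ sum += $1 } END { print sum }'
```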

Unfortunately, shared memory makes these numbers hard to interpret  
(shared pages are counted once per process), but if I had this much  
memory, I wouldn't have a problem.

top shows currently:
Mem: 92M Active, 111M Inact, 56M Wired, 13M Cache, 38M Buf, 804K Free
Swap: 512M Total, 684K Used, 511M Free

bjorn at host=>vmstat 1
procs      memory      page                   disk   faults      cpu
r b w     avm    fre  flt  re  pi  po  fr  sr ad0   in   sy  cs us sy id
1 8 1  191568  16864   28   0   0   0 266 240   0 1341 1867 623 22  5 73
1 8 0  193012  15392   92   0   4   0 414   0  65 1400 1944 742 18  9 73

So far I see that my host has 56M wired, mostly for the kernel, 92M  
for active processes, and 111M inactive for sleeping processes.  Maybe  
someone can chime in about the difference between Cache and Buf: man 1  
top says that one is VM-based and the other BIO-based, but McKusick's  
book says that FreeBSD does its buffering through the VM system, so I  
am assuming BIO is external to the VM system, though I'm not sure how.  
Is this the hard drive's cache?
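As a sanity check on the snapshot above: Active + Inact + Wired + Cache + Free should roughly add up to physical RAM, while Buf (if I understand top's accounting correctly) overlaps the other pools rather than adding to them. Plugging in the numbers from my Mem line (804K is about 0.8M):

```shell
# Mem fields from the top snapshot above, in MB: Active Inact Wired Cache Free;
# Buf is left out on the assumption that it overlaps the other pools
echo "92 111 56 13 0.8" |
awk '{ for (i = 1; i <= NF; i++) s += $i; printf "%.0f MB\n", s }'
```

which lands near a plausible physical memory size for this box, so the categories do seem to tile RAM.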

Eight processes are blocked, probably contending for VM resources to  
be freed up.  I am paging in and freeing a bunch of resources during  
this cycle.

Also, when I look at iostat I see a significant amount of disk  
activity (a lot for a 2.5" IDE drive):
root at tabasco=>iostat 1
       tty             ad0             cpu
tin tout  KB/t tps  MB/s  us ni sy in id
    0    1 18.27  52  0.93  16  5  5  1 73
    0  129 12.89  70  0.88   0 18  9  2 71
    0   43 23.55  67  1.54   1 25  6  0 68
    0   43 59.08  50  2.88   0 39  7  0 54
    0   43 57.38  52  2.91   1 34  8  0 57
    0   43 48.09  64  3.00   0 38  7  1 54

This is pretty much due to VM swapping:
bjorn at host=>ps -axo inblock,oublock,comm | sort -n -k 2 | tail -3
   325  1152 ntpd
     0  8695 bufdaemon
69360 5017287 syncer

Wait a sec, these are filesystem daemons, not necessarily an  
indication of a VM problem.  Is my host slow because I don't have  
fast enough access to files?

bjorn at host=>ps -axo inblock,oublock,comm | sort -n -k 1 | tail -5
5895     0 imapd
6326     0 imapd
21420    77 syslogd
69378 5017849 syncer
204353     0 qmgr

Regardless, major faults show that my webserver is paging in the most.
bjorn at host=>ps -axo majflt,minflt,comm | sort -n -k 1 | tail -5
     18    394 sshd
     19   1746 named
     24  18078 python
     41    220 httpd
    128  22273 httpd
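Since ps reports per process, the two httpd lines can be folded together with a small awk aggregation; here it runs on the sample figures above (fed in via printf in place of live `ps -axo majflt,comm` output) and sums major faults per command name:

```shell
# Aggregate major faults per command name; printf stands in for
# live `ps -axo majflt,comm` output using the figures quoted above
printf '41 httpd\n128 httpd\n24 python\n19 named\n18 sshd\n' |
awk '{ faults[$2] += $1 } END { for (c in faults) print faults[c], c }' |
sort -n
```

With live output you would pipe `ps -axo majflt,comm | tail +2` into the same awk instead.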

Minor faults show that my mail and IMAP servers are reclaiming pages  
from the inactive memory pool.  These processes are probably the most  
active, since they have a high number of minor faults but not major  
faults.
bjorn at host=>ps -axo majflt,minflt,comm | sort -n -k 2 | tail -5
      0  68678 authdaemond.plain
      0  89604 cron
      0  96640 imapd
      0 181791 imapd
      0 299139 master

The result of this is that I would probably be fine with a gig of  
RAM.  As far as factoring in shared memory and figuring out whether I  
would do well with less RAM, does anyone have any ideas?  Francisco,  
can you apply this to what you are contending with?

-Bjorn


