load testing and tuning a 4GB RAM server
cswiger at mac.com
Sun Apr 6 15:12:42 PDT 2003
[ ... ]
> The drives are hot-swappable in themselves, but I can't add a Raid array
> on the fly, nor can I add to the capacity of a Raid array. I am sure that
> even if the Raid card allowed me to do that, FreeBSD wouldn't support a
> new Raid array that just popped up without a reboot. I am not even sure if
> Windows can do that. Now you know the hardware, please correct me if I am
> wrong about this.
"man aac" seems to have more info, which suggests that a Linux-based
management app might do exactly what you're looking for. Also,
camcontrol rescan _target_
...lets you rescan the SCSI bus, which at least allows you to attach
additional drives if you needed to. Not a long-term solution, sure, but
in an emergency, you do what you need to.
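In practice that can look like the following (the bus number is illustrative --
check what "camcontrol devlist" reports on your own box, and run as root):

```shell
# List the devices CAM currently sees on the SCSI buses
camcontrol devlist
# Rescan bus 0 so a newly attached drive shows up without a reboot
camcontrol rescan 0
# Or rescan every bus at once
camcontrol rescan all
```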
I don't know for certain about adding RAID volumes dynamically; that
probably depends on the BIOS and other things that you could ask Dell
about. Anyway, it's at least possible that you can convince the system
to rebuild a RAID-5 or RAID-1 mirror if you replace a failed drive with
a working one, without rebooting.
[ ... ]
> Sorry, let me rephrase this a bit. I mean network connections to the
> server that are marked "ESTABLISHED" in netstat -an output, if you grep
OK re: the definition; that's the number of active children.
[ ... ]
>> How much of your traffic is going to be over SSL? You might want to
>> look into getting a HI/FN crypto-accelerator card, particularly if you
>> have lots of small/short SSL sessions rather than few longer ones.
> Not sure as to the exact break-down, but estimate 5-15%. Is that large in
> the context of total connections?
The connection between those factors isn't very direct. SSL session
startup involves expensive 1024-bit RSA key operations, which take a long
time, and even the normal 40/56/128-bit session encryption will eat up
half of your CPU power if you're pushing more than 10 MB/s of encrypted data.
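If you want to see what that costs on your own hardware, OpenSSL ships a
built-in benchmark mode; a quick sketch (the algorithm choices here are
just examples):

```shell
# Time 1024-bit RSA sign/verify -- roughly the per-handshake cost
openssl speed rsa1024
# Time bulk symmetric encryption -- the per-byte cost of an established session
openssl speed aes-128-cbc
```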
>> You really want to run only one type of production database per machine;
>> you're risking VM thrashing otherwise.
> Even if the load on the second one is _much_ less?
> Please explain why this is so, if possible.
If you're running multiple schemas (or databases under a DB server,
depending on which parlance you like) within one vendor's product, the
multiple DBs will cooperate and work from the same pool of memory.
If you run two different products, they'll each want to have their own
pool of memory and may fight over who gets what. Basically, if the
database's caching mechanism for its DB files and the VM sub-system
disagree on whether a page should be swapped in or swapped out,
performance is crippled...and this effect gets worse when you have more
than one DB potentially fighting for the memory. If the databases are
small enough to fit entirely in RAM without swapping, you'll probably be
okay. Otherwise, you're going to have to tune the SGA size (# of DB
buffers, database memory cache, whatever your DB calls it) & the SysV
shmem settings to match.
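On FreeBSD the SysV shared memory limits are sysctls; a sketch of what
that tuning might look like in /etc/sysctl.conf (the values below are
placeholders, not recommendations -- size them to your actual SGA):

```shell
# /etc/sysctl.conf -- SysV shared memory limits (example values only)
kern.ipc.shmmax=536870912   # largest single shared memory segment, in bytes
kern.ipc.shmall=131072      # total shared memory, in 4K pages (here 512MB)
kern.ipc.shmseg=128         # max segments a single process may attach
```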
And if you want to get really detailed, you could look up something like
Belady's anomaly, or why things like databases with sequential and
striding memory access patterns tend to be challenging to the VM sub-system.
[ ... ]
> Right. I suppose I _could_ get another 146GB drive, and reconfigure the
> 4-drive Raid 5 array to be a 2-drive Raid 1 and 3-drive Raid 5.
Why not reconfigure the four-drive RAID-5 to be a four-drive RAID-10?
[ ... ]
> As far as my testing, it's been very raw and no, I haven't done any DB
> testing yet. I was mostly testing for stability under heavy heavy memory
> usage, i.e. a bahzillion lynx'es and -j500 makeworld. How would you do
> this? :)
Apache comes with something called "ab". No doubt you could use that to
hit a bunch of test pages from PHP examples, or whatever, which will
generate DB accesses. Then look in /usr/ports/benchmarks for something
like iozone or bonnie for I/O benchmarks to run at the same time.
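A rough sketch of running the two together (the URL, request counts, and
file sizes are placeholders for your setup):

```shell
# Hammer a DB-backed test page: 10000 requests, 50 concurrent connections
ab -n 10000 -c 50 http://localhost/test.php
# Meanwhile, in another terminal, run iozone (from /usr/ports/benchmarks/iozone)
# in auto mode with test files up to 8GB -- roughly 2x RAM, to defeat caching
iozone -a -g 8g
```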