freebsd server limits question
rwboyer at mac.com
Tue Jan 3 02:23:40 UTC 2012
Just realized that the MongoDB site now has some recipes up for what you really need to do to make sure you can handle a lot of incoming new documents concurrently….
Boy you had to figure this stuff out yourself just last year - I guess the mongo community has come a very long way….
Splitting Shard Chunks - MongoDB
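The recipe linked above boils down to pre-splitting the key space before the load arrives. A minimal sketch via the 2.x-era mongo shell (the hostname, database, collection, shard key and split points here are all illustrative placeholders, not from the thread):

```shell
# Pre-split an empty sharded collection so inserts land on all shards
# from the start, instead of triggering chunk migrations mid-flood.
mongo --host mongos.example.com <<'EOF'
sh.enableSharding("msgdb")
sh.shardCollection("msgdb.messages", { userId: 1 })
// Carve the key space into chunks up front...
for (var i = 1; i < 10; i++) {
    sh.splitAt("msgdb.messages", { userId: i * 1000 })
}
// ...then check how the chunks ended up distributed.
sh.status()
EOF
```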
On Jan 2, 2012, at 5:38 PM, Robert Boyer wrote:
> Sorry one more thought and a clarification….
> I have found that it is best to run mongos with each app server instance most of the mongo interface libraries aren't intelligent about the way that they distribute requests to available mongos processes. mongos processes are also relatively lightweight and need no coordination or synchronization with each other - simplifies things a lot and makes any potential bugs/complexity with app server/mongo db connection logic just go away.
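Running a local mongos next to each app server, as described above, is just one extra process. A sketch (config server hostnames and paths are made-up placeholders):

```shell
# Start a mongos on each app server, pointed at the shared config
# servers; the app then connects to localhost:27017 and never needs
# to know the cluster topology.
mongos --configdb cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019 \
       --port 27017 \
       --logpath /var/log/mongodb/mongos.log \
       --fork
```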
> It's pretty important when configuring shards to take on the write volume that you do your best to pre-allocate chunks and avoid chunk migrations during your traffic floods - not hard to do at all. There are also about a million different ways to deal with atomicity, and a very mongo-specific way of ensuring writes actually "made it to disk" somewhere. From your brief description of the app in question, it does not sound too critical to ensure "every single solitary piece of data persists no matter what", as I am assuming most of it is irrelevant and becomes completely irrelevant after the show, or some time thereafter. Most of the programming and config examples make the opposite assumption, namely that each transaction MUST be completely durable - if you forgo that you can get screaming TPS out of a mongo shard.
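The durability knob mentioned above was, in the 2.x era, the getLastError command: with no options it only confirms the server received the write, while extra options make it wait for the journal or replicas. A sketch in the mongo shell (hostname and collection are illustrative):

```shell
mongo --host mongos.example.com <<'EOF'
// Insert, then ask how far the write got. With no options,
// getLastError only confirms the server accepted the write...
db.votes.insert({ contestant: "A", ts: new Date() })
db.runCommand({ getLastError: 1 })

// ...while j:true blocks until it is in the on-disk journal, and
// w:2 additionally waits for one replica. Slower, but durable.
db.votes.insert({ contestant: "B", ts: new Date() })
db.runCommand({ getLastError: 1, j: true, w: 2 })
EOF
```

Drivers of that era defaulted to unacknowledged ("fire-and-forget") writes, which is where the screaming TPS comes from; turning on per-write durability is what costs you throughput.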
> Also if you do not find what you are looking for via a ruby support group - the JS and node JS community also may be of assistance but they tend to have a very narrow view of the world…. ;-)
> On Jan 2, 2012, at 4:21 PM, Robert Boyer wrote:
>> To deal with this kind of traffic you will most likely need to set up a mongo db cluster of more than a few instances… much better. There should be A LOT of info on how to scale mongo to the level you are looking for but most likely you will find that on ruby forums NOT on *NIX boards….
>> The OS boards/focus will help you with fine tuning but all the fine tuning in the world will not solve an app architecture issue…
>> I have set up MASSIVE mongo/ruby installs for testing that can do this sort of volume with ease… the stack looks something like this….
>> A single Nginx instance can feed an almost arbitrary number of Unicorn/Sinatra/MongoMapper instances, which can in turn feed a properly configured MongoDB cluster with pre-allocated key distribution so that the incoming inserts are spread evenly across the cluster instances…
>> Even if you do not use ruby that community will have scads of info on scaling MongoDB.
>> One more comment related to L's advice - true you DO NOT want more transactions queued up if your back-end resources cannot handle the TPS - this will just make the issue harder to isolate and potentially make the recovery more difficult. Better to reject the connection at the front-end than take it and blow up the app/system.
>> The beauty of the Nginx/Unicorn solution (Unicorn is ruby specific) is that there is no queue that is fed to the workers - when there are no workers, the request is rejected. The unicorn worker model can be reproduced for any other implementation environment (PHP/Perl/C/etc) outside of ruby in about 30 minutes. It's simple, and Nginx is very well suited to low-overhead reverse proxying to this kind of setup.
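A minimal sketch of that Nginx-in-front-of-Unicorn arrangement (socket path and server name are made up). Unicorn workers pull requests straight off the UNIX socket; when they are all busy the socket backlog fills and new connections are refused rather than piling up in a queue:

```
# Reverse-proxy onto Unicorn's UNIX socket. fail_timeout=0 tells
# Nginx not to blacklist the backend after a failure, so each request
# is tried immediately and errors out fast instead of queueing.
# Paths and names are illustrative.
upstream app {
    server unix:/var/run/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://app;
    }
}
```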
>> Wishing you the best - if I can be of more help let me know…
>> On Jan 2, 2012, at 3:38 PM, Eduardo Morras wrote:
>>> At 20:12 02/01/2012, Muhammet S. AYDIN wrote:
>>>> Hello everyone.
>>>> My first post here and I'd like to thank everyone who's involved within the
>>>> FreeBSD project. We are using FreeBSD on our web servers and we are very
>>>> happy with it.
>>>> We have an online messaging application that uses mongodb. Our members
>>>> send messages to contestants on "The Voice" (the Turkish version). Our two
>>>> mongodb instances ended up on two CentOS 6 servers. We have failed. So hard.
>>>> There were announcements and calls made live on tv. We had +30K/sec
>>>> visitors to the app.
>>>> When I looked at the mongodb errors, I had thousands of these:
>>>> http://pastie.org/private/nd681sndos0bednzjea0g. You may be wondering why
>>>> I'm telling you about centos. Well, we are making the switch from CentOS to
>>>> FreeBSD. I would like to know: what are our limits? How can we set things
>>>> up so our FreeBSD servers can handle at least 20K connections (mongodb's
>>>> connection limit)?
>>>> Our two servers have 24 core CPUs and 32 GBs of RAM. We are also very open
>>>> to suggestions. Please help me out here so we don't fail deadly, again.
>>>> P.S. This question was asked in the forums as well; as someone there
>>>> suggested, I am posting it here too.
>>> Is your app limited by CPU or by I/O? What do vmstat/iostat say about your hd usage? Perhaps mongodb fails to read/write fast enough, and making the process thread pool bigger will only make the problem worse: there will be more threads trying to read/write.
>>> Have you already tuned mongodb?
>>> Please post more info; several lines (not the first one) of iostat and vmstat output would be a start. Your hd configuration, RAID, etc... too.
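For the measurements asked for here, and for the connection limits from the original question, something like the following on FreeBSD. The sysctl values are illustrative starting points only, to be sized against your RAM and workload, not recommendations:

```shell
# Sample CPU vs. disk behaviour; the first line of each report is the
# average since boot, so look at the later samples.
vmstat 1 5
iostat -x -w 1 -c 5

# Raise the usual connection-related limits (values are examples).
sysctl kern.maxfiles=200000
sysctl kern.maxfilesperproc=100000
sysctl kern.ipc.somaxconn=4096

# Also check the open-file limit of the shell/user running mongod.
ulimit -n
```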
>>> freebsd-questions at freebsd.org mailing list
>>> To unsubscribe, send any mail to "freebsd-questions-unsubscribe at freebsd.org"