FreeBSD 9.1 vs CentOS 6.3

Paul Pathiakis pathiaki2 at
Sun Mar 24 15:43:55 UTC 2013

I haven't worked with CentOS since 6.0.  I work on many other variants at this time.  I'm more than happy to take a look after I get my company off the ground....  (a couple more months, or until I land my next contracting gig)

Anyhow, unlike with jails, it seems no matter what type of VM I use, there's always 'overhead' in using virtual machine software.  Performance may be 99.99% of a 'real' installation, but whether it's VMware, VirtualBox, or Xen, there are always issues that cause a minor 'hurt'.  It's also annoying that I have to dictate how much memory, how many cores, and how much disk space each virtual machine gets.  Running the virtualization software takes resources from the OS, and each VM under its control has to be 'given' bounds on how many resources it can use.  (I'm told that an ESX server is better at this; however, that server must have a core OS that consumes resources as well.)

So, lately, I started working with jails....  for everything.  There seem to be no measurable issues with their use.  Does anyone have a comparison of jails versus various VM software?  I'm not just talking about the VM software running an OS in real time with negligible loss of performance.  I'm talking about what's being taken from the host machine running the software.  That's overhead.  The software consumes resources (memory, CPU cycles, etc.) and creates a certain amount of overhead for each VM created.

Jails seem to be highly maintainable and easy to use, and the resource management of CPU, memory, and the rest is handled by the OS rather than by an additional layer of software running on the host that becomes responsible for all this juggling.  So, from my perspective, jails remove a layer of indirection compared to VM software.  (Of course, arguably, jails are lightweight VMs.)  I'm just starting to become knowledgeable about jails and a 'fan' of them.
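To make that concrete, here's a minimal sketch of the kind of setup I mean, using jail.conf (supported by jail(8) as of 9.1) plus optional rctl(8) rules.  The jail name "www", the paths, and the addresses are made-up examples, and rctl needs a kernel built with the RACCT/RCTL options, so treat this as an illustration rather than a recipe:

```shell
# /etc/jail.conf -- one jail, no resource carve-up required
www {
    path = "/jails/www";              # hypothetical jail root
    host.hostname = "www.example.org";
    ip4.addr = "192.0.2.10";
    exec.start = "/bin/sh /etc/rc";
    exec.stop  = "/bin/sh /etc/rc.shutdown";
}

# With no rctl rules the jail simply shares all host resources.
# Limits are opt-in and can be added or removed at runtime:
#   rctl -a jail:www:memoryuse:deny=2g   # cap memory at 2 GB
#   rctl -a jail:www:pcpu:deny=150       # cap at 1.5 cores' worth of CPU
```

The point being: the bounds are optional and adjustable after the fact, instead of being a hard allocation decided at VM-creation time.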

I'm also a little 'aged', and I never understood the need for VM software, as UNIX has always been capable of juggling (time-slicing) tasks courtesy of the job scheduler and the like.  I can see that a mainframe, a mini, or a Windows OS that was not designed for time-sharing would need it, but not UNIX.

If I wanted a 'rough' analogy: VM software is to jails as UFS is to a ZFS pool.  I think of it this way: with VM software, I have to understand the resources I will need and create boundaries according to a 'best guess' scenario.  With jails, I create the environment, all the jails get access to all the resources available to the machine, and a robust UNIX-like OS (I really hate writing that given FBSD's roots :-) ) handles something it has always been capable of handling by design.  This is akin to setting up UFS versus ZFS.  With UFS you have to have an idea of how big the partitions are and choose bounds (and it's "not fun" when you have to re-partition); with ZFS, every partition grows within the bounds of the pool until it is exhausted, at which point you add storage to the pool.  (With a jail, likewise, if the exhausted resource is anything but CPU cores, just add resources to the machine; if it is CPU, it's time for a new CPU, or maybe a second machine.)
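The ZFS side of that analogy looks something like this in practice.  The pool name "tank" and the disk device names are hypothetical, and this is a sketch of the grow-on-demand idea, not a tuned storage layout:

```shell
# Create a mirrored pool -- no up-front partition sizing decisions.
zpool create tank mirror da1 da2

# Datasets are cheap and grow freely within the pool until space runs out...
zfs create tank/jails
zfs create tank/jails/www

# ...unless you opt in to a bound, which can be changed at any time:
zfs set quota=10G tank/jails/www

# When the pool is exhausted, add another vdev and every dataset benefits.
zpool add tank mirror da3 da4
```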

I hate to say this, but I'm finding jails 'highly superior' to VM software, and now that I hear we can run Linux in a jail, I'd be very curious to try that, too.


One last thing: I see VMs almost as a 'development tool' that people just recklessly took to the next level.  It's a lot of fun to create VMs on the desktop machine you develop on, to see what changes occur before putting software into production.  But, like most things of the last 25 years (high-capacity disk drives, plummeting memory prices, and the ongoing speed increases of CPUs), they have let people become lazy about doing things the right way.  When things were tight, people thought at the 'assembler' level to program lean, mean, and fast.  C was a boon, as it's kind of a level-2.5 von Neumann language: you can access the low level, but it's structured like a level-3 language.  Now, people don't really think about machine resources.  They just hack things together and hope the compiler catches their mistakes.    *shrug*


PS - (I'd post my credentials, but, basically, I'm a Systems Architect with a past of employment or consulting at many major corporations.  My job is creating systems of systems that are highly scalable, modular, and nimble in moving from one technology to another.  I've been exposed to almost every *NIX type of OS, Windows, and other OS variants....  I'm still impressed with FBSD: as a CS guy, I have watched it always try to be cutting edge, always implement the correct technologies, and refuse to compromise by releasing a <Version>.0 that 'kinda works'.  That's how I've lived my career.  Kudos to all the people working on it!)

PPS - I have to make my living using all variants of *nix, including Debian, CentOS, RH, SuSE, etc.  Also, I believe I was the first one to create a SAN, in 1993 while at EMC.  I've worked with many Linux variants.  However, when I look at who is really using BSD....  Cisco, Juniper, NetApp, and many major manufacturers base their *NIX products on it, while the Linux vendors kind of give Linux away for free but are really happy to get consulting hours at $200-$400/hr to make it work....  So, I'm not a 'fan boy'; I'm somebody who respects the mindset of 'the best tech to solve the problem'.

 From: Bill Totman <bill.totman at>
To: davide.damico at 
Cc: freebsd-performance at 
Sent: Sunday, March 24, 2013 10:11 AM
Subject: Re: FreeBSD 9.1 vs CentOS 6.3
On 3/23/13 3:44 AM, Davide D'Amico wrote:
> Il 23.03.2013 01:34 Paul Pathiakis ha scritto:
>> Hi,
>> There are several things about this that are highly suspect.
>> First, wipe out the hardware RAID. The processor doing RAID
>> computation is, probably, MUCH slower than a core on the CPU. Even if
>> it's RAID-1 (Simple Mirror), this RAID card is performing tasks that it
>> does not need to do, including replicating writes to two targets from
>> the controller or checking its cache, battery, etc. If it's possible
>> to disable the onboard cache, do it.
> Hi Paul,
> thanks for your suggestions (some of them I applied before starting any testing, like disabling all on-disk caches and controller buffers); I'll try the rest next Monday.
> Anyway, the fact is that using the same hardware configuration (RAID1 + RAID10), I saw that CentOS 6.x outperformed FreeBSD 9.1.
> Another test I made yesterday: on the same hardware I installed VMware ESX 5.x and created a VM with CentOS inside it. The result was really impressive: the CentOS VM outperformed the 'real' FreeBSD 9.1 too, and checking the VMware performance graphs I didn't see any huge need for massive throughput (I saw values from KBps up to 10 MBps); instead I saw heavy CPU use (using OLTP tests with a concurrency of 32 threads, its performance began to slow down).
So, what happened when you installed FreeBSD 9.1 in the VM? How did the 'fake' FreeBSD 9.1 compare 1) to the 'real', and 2) to either of the CentOS installations?

> I don't know if, by using some magic value for HZ or some trick with the scheduler, I could gain something: I hope so, because I don't want to "pinguinate" my farm :)
> Thanks,
> d.
> _______________________________________________
> freebsd-performance at mailing list
> To unsubscribe, send any mail to "freebsd-performance-unsubscribe at"
